Could Children’s Use of Social Media be Banned in the UK?

Some argue that the primary goal of social media is no longer genuine connection, but the maximisation of user engagement for commercial gain. Platforms generate vast revenues by delivering highly targeted, personalised advertising, incentivising designs that keep users scrolling for longer. With the rise of AI, this content stream has become even more relentless, often amplified by manipulative or overly flattering language that encourages continuous interaction. 

Unsurprisingly, many parents are concerned about their children’s use of social media. Endless scrolling and exposure to videos featuring mindless pranks or viral challenges can have negative effects on both mental and physical health. Increasingly, attention is turning to the platforms themselves: critics suggest that their design may not only encourage excessive use, but also contribute to addiction, anxiety and other forms of harm. 

The US Court Case  

On 25th March 2026, a jury in Los Angeles delivered a damning verdict against two of the world’s most popular social media platforms. It ruled that Instagram and YouTube were deliberately designed to be addictive and that, consequently, their parent companies had been negligent in failing to safeguard their child users. Meta and Google, owners of Instagram and YouTube respectively, must now pay $6m (£4.5m) in damages to “Kaley”, the young woman who was the plaintiff (claimant) in the case. Her lawyers argued that the design of Instagram and YouTube caused her to become addicted to the platforms. This addiction harmed her mental health during childhood, leaving her with body dysmorphia, depression and suicidal thoughts. 

The judgement has sent shockwaves through tech companies worldwide, not just in Silicon Valley. One tech company insider, who asked not to be identified, told the BBC, “we’re having a moment”. Even the Royal Family chimed in. In a statement, the Duke and Duchess of Sussex said: “This verdict is a reckoning. For too long, families have paid the price for platforms built with total disregard for the children they reach.”   

Both companies vigorously defended the claim and intend to appeal the judgement. Meta maintains that a single platform cannot be solely responsible for a user’s mental health crisis. Google, meanwhile, argues that YouTube is not a social network. 

English Law 

Could such a claim succeed in this country? The tort of negligence offers the best hope for claimants who allege harm from social media use, provided the elements of the tort (duty of care, breach, causation and foreseeability) are satisfied. There is growing recognition in UK law that online platforms may owe a duty of care to users, particularly where those users are children, and the harms of overuse of social media are well documented. However, causation is likely to be the most difficult hurdle for claimants in the UK. To succeed, a claimant must prove that a platform’s design caused, or materially contributed to, the harm they suffered through their use of social media. Psychological harm rarely has a single identifiable cause. Social media companies are likely to argue that their platforms are only one of many factors that can affect an individual’s mental health, alongside family environment, school experiences, pre-existing vulnerabilities and offline relationships, to name a few. 

Could social media platforms be treated as “defective products” under the Consumer Protection Act 1987 (CPA), which carries strict liability for harm? Under the CPA, products are traditionally understood as tangible goods, not the likes of YouTube and Instagram. It is arguable, though, that social media platforms are not mere intermediaries but “manufacturers” of digital environments, making them liable for defects in algorithms or addictive design. The Law Commission is currently reviewing the CPA to determine whether it is fit for the digital age, with a focus on artificial intelligence, software and online platforms. The review, which began in September 2025, may lead to expanded liability for online platforms and software providers. 

It is worth noting that the US case was decided by a jury. In the UK, civil cases, particularly those involving negligence, are decided by judges. Juries may be influenced by emotional arguments, whereas judges are trained to apply the law strictly and are less susceptible to being swayed by emotion at the expense of legal principles. 

Despite the issues around causation, a legal action in negligence is probably the best option for aggrieved social media users in the UK, although the lack of Legal Aid and the UK courts’ restrictive approach to class actions mean a test case would require significant upfront funding. Perhaps insurers, emboldened by the US judgment, may now be more willing to cover the costs of such a test case. 

Regulating Social Media 

Unlike the US, the UK has moved toward statutory regulation rather than litigation as the primary means of controlling social media harms. 

Since the passage of the Online Safety Act 2023 (OSA), social media companies and search engines have a duty to ensure their services are not used for illegal activity or to promote illegal content, with particular protections for children. The communications regulator, Ofcom, has been tasked with implementing the OSA and can fine infringing companies up to £18 million or 10% of their global revenue (whichever is greater). Last month, it published guidance on how platforms must protect children. Furthermore, since platforms process users’ personal data, they must comply with the UK GDPR. The Data (Use and Access) Act 2025, which mainly came into force in February, explicitly requires those who provide an online service that is likely to be accessed by children to take children’s needs into account when deciding how to use their personal data. 

Even before the US judgment, many countries had been considering whether to regulate social media further and/or ban children from using it altogether. Australia has banned it, and others, like France and Denmark, have introduced or are planning to introduce tighter rules. 

The UK government is currently carrying out a consultation to consider whether additional measures are required to keep children safe in the online world. The consultation covers: 

  • setting a minimum age for children to access social media; 
  • restricting risky functionalities and design features that encourage excessive use, such as infinite scrolling and autoplay; 
  • whether the digital age of consent should be raised; 
  • whether the guidance on the use of mobile phones in schools should be put on a statutory footing; and 
  • better support for parents, including clearer guidance and simpler parental controls. 

The consultation ends on 26th May, and the government will respond before the end of July. Alongside the consultation, the government is running a pilot scheme in which 300 teenagers will have their social media apps disabled entirely, blocked overnight or capped at one hour’s use, with a control group seeing no changes at all, in order to compare their experiences. Children and parents involved in the pilot will be interviewed before and after to assess its impact. 

Meanwhile, on 27th March 2026, the government published national guidance that urges parents to strictly limit screen exposure in early years over health and development risks. The new recommendations advise that there should be no screen exposure for children under two except for shared activities. For those aged two to five, usage should be capped at one hour per day, with additional guidance to avoid screens at mealtimes and before bed. 

Parliament is also debating the use of social media platforms by children but remains divided on what action to take. In March, during a debate on the Children’s Wellbeing and Schools Bill, the House of Lords supported a proposal to ban under-16s in the UK from social media platforms. It is the second time peers have defeated the government over the proposal. There is now a standoff between the Commons and the Lords. Whatever happens, the verdict in the California court has signalled a rising public expectation of more aggressive regulation of social media platforms. 

Listen to the Guardians of Data Podcast for the latest news and views on data protection, cyber security, AI and freedom of information.   

This and other developments relating to children’s data will be covered in our forthcoming workshop, Working with Children’s Data.

New Podcast: Filming the Public for Social Media

Act Now is pleased to bring you episode 6 of the Guardians of Data podcast.  

Think about the last time you walked down a busy street, sat in a pub, or queued for a train. Now imagine that moment, completely ordinary to you, being filmed by a stranger, uploaded to TikTok or YouTube and watched by millions. 
Maybe it’s monetised; maybe it’s mocked. One thing is for sure though, it never disappears. 

Filming people in public has now become second nature for some. But what happens when those images are shared, edited and turned into social media content? Can you stop someone filming you in public? What rights do you have when the footage is published? 

In this episode, we are joined by Naomi Mathews, a lawyer who specialises in Data Protection, Freedom of Information and Surveillance Law. Naomi helps us explore what the law actually says about filming people in public; where it falls short and how that affects real people who find themselves turned into content without consent. We’ll also ask the harder questions about ethics, power and whether the UK needs a new law to better protect the public. 

Download and listen here, or on your preferred podcast app. Available on Apple Podcasts, Spotify, and all major podcast platforms. 

Previous episodes of the Guardians of Data podcast have featured Jon Baines, reflecting on his career as a Data Protection specialist and the hot issues in information governance; Lynn Wyeth, discussing the recent controversy around Grok AI; Maurice Frenkel, looking back at 20 years of the Freedom of Information Act; Olu Odeniyi, analysing recent cyber breaches and the lessons to be learned; and Raz Edwards, talking about how to succeed as an IG leader.

Scope of the GDPR: ICO Wins Clearview Appeal  

The Information Commissioner has won his appeal (to the Upper Tribunal) against the First-tier Tribunal (FTT) decision involving Clearview AI Inc.  

Clearview is a US based company which describes itself as the “World’s Largest Facial Network”. Its online database contains 20 billion images of people’s faces and data scraped from the internet and social media platforms all over the world. It allows customers to upload an image of a person to its app; the person is then identified by the app checking against all the images in the Clearview database. The appeal raised the issue of the extent to which processing of the personal data of UK data subjects by a private company based outside the UK is excluded from the scope of the GDPR, including where such processing is carried out in the context of its foreign clients’ national security or criminal law enforcement activities. 

Background 

In May 2022 the ICO issued a Monetary Penalty Notice of £7,552,800 to Clearview for breaches of the UK GDPR including failing to use the information of people in the UK in a way that is fair and transparent. Although Clearview is a US company, the ICO ruled that the UK GDPR applied because of Article 3(2)(b) (territorial scope). It concluded that Clearview’s processing activities “are related to…the monitoring of [UK resident’s] behaviour as far as their behaviour takes place within the United Kingdom.” The ICO also issued an Enforcement Notice ordering Clearview to stop obtaining and using the personal data of UK residents that is publicly available on the internet, and to delete the data of UK residents from its systems.  

In October 2023, the FTT overturned the ICO’s enforcement and penalty notices against Clearview. It concluded that although Clearview did carry out data processing related to monitoring the behaviour of people in the UK (Article 3(2)(b) of the UK GDPR), the ICO did not have jurisdiction to take enforcement action or issue a fine. Both the GDPR and UK GDPR provide that acts of foreign governments fall outside their scope; it is not for one government to seek to bind or control the activities of another sovereign state. However, the Tribunal noted that the ICO could have taken action under the Law Enforcement Directive (Part 3 of the DPA 2018 in the UK), which specifically regulates the processing of personal data in relation to law enforcement. 

The Upper Tribunal Judgement  

The Upper Tribunal allowed the appeal, set aside the decision of the FTT and remitted the matter to the FTT to decide the substantive appeal on the basis that the Information Commissioner had jurisdiction to issue the notices. It also decided that the FTT was right to find that Clearview’s processing fell within the territorial scope of the GDPR and UK GDPR, albeit for different reasons. 

In its judgment, the Upper Tribunal ruled that: 

(1) The words “in the course of an activity which falls outside the scope of Union law” in Article 2(2)(a) of the GDPR (which provides for an exclusion from the material scope of the GDPR) refer only to those activities in respect of which Member States have reserved control to themselves and not conferred powers on the Union to act, and not to all matters outside the competence of the Union (as the ICO argued) or to the activities of third parties whose processing “intersects” with their clients’ processing in the course of “quintessentially state functions”, which would offend against comity principles (as Clearview argued); 

(2) The words “behavioural monitoring” in Article 3(2)(b) are to be interpreted broadly, as a response to the challenges posed by ‘Big Data’ in the digital age, and they can encompass passive collection, sorting, classification and storing of data by automated means with a view to potential subsequent use, including use by another controller, of personal data processing techniques which consist of profiling a natural person. “Behavioural monitoring” does not require an element of active “watchfulness” in the sense of human involvement;  

(3) The words “related to” in Article 3(2)(b) of the GDPR have an expansive meaning, and apply not only to controllers who themselves conduct behavioural monitoring, but also to controllers whose data processing is related to behavioural monitoring carried out by another controller. 

Data protection practitioners should read the Upper Tribunal’s judgment, as it clarifies the material and territorial scope provisions of the UK GDPR. This and other GDPR developments will be discussed in our forthcoming GDPR Update workshop. 

Clearview AI Wins Appeal Against GDPR Fine 

Last week a Tribunal overturned a GDPR Enforcement Notice and a Monetary Penalty Notice issued to Clearview AI, an American facial recognition company. In Clearview AI Inc v The Information Commissioner [2023] UKFTT 00819 (GRC), the First-Tier Tribunal (Information Rights) ruled that the Information Commissioner had no jurisdiction to issue either notice, on the basis that the GDPR/UK GDPR did not apply to the personal data processing in issue.  

Background 

Clearview is a US based company which describes itself as the “World’s Largest Facial Network”. Its online database contains 20 billion images of people’s faces and data scraped from publicly available information on the internet and social media platforms all over the world. It allows customers to upload an image of a person to its app; the person is then identified by the app checking against all the images in the Clearview database.  

In May 2022 the ICO issued a Monetary Penalty Notice of £7,552,800 to Clearview for breaches of the GDPR including failing to use the information of people in the UK in a way that is fair and transparent. Although Clearview is a US company, the ICO ruled that the UK GDPR applied because of Article 3(2)(b) (territorial scope). It concluded that Clearview’s processing activities “are related to… the monitoring of [UK resident’s] behaviour as far as their behaviour takes place within the United Kingdom.” 

The ICO also issued an Enforcement Notice ordering Clearview to stop obtaining and using the personal data of UK residents that is publicly available on the internet, and to delete the data of UK residents from its systems. (see our earlier blog for more detail on these notices.) 

The Judgement  

The First-Tier Tribunal (Information Rights) has now overturned the ICO’s enforcement and penalty notices against Clearview. It concluded that although Clearview did carry out data processing related to monitoring the behaviour of people in the UK (Article 3(2)(b) of the UK GDPR), the ICO did not have jurisdiction to take enforcement action or issue a fine. Both the GDPR and UK GDPR provide that acts of foreign governments fall outside their scope; it is not for one government to seek to bind or control the activities of another sovereign state. However, the Tribunal noted that the ICO could have taken action under the Law Enforcement Directive (Part 3 of the DPA 2018 in the UK), which specifically regulates the processing of personal data in relation to law enforcement. 

Learning Points 

While the Tribunal’s judgement in this case reflects the specific circumstances, some of its findings are of wider application: 

  • The term “behaviour” (in Article 3(2)(b)) means something about what a person does (e.g., location, relationship status, occupation, use of social media, habits) rather than something that merely identifies or describes them (e.g., name, date of birth, height, hair colour).  

  • The term “monitoring” not only comes up in Article 3(2)(b) but also in Article 35(3)(c) (when a DPIA is required). The Tribunal ruled that monitoring includes tracking a person at a fixed point in time as well as on a continuous or repeated basis.

  • In this case, Clearview was not monitoring UK residents directly as its processing was limited to creating and maintaining a database of facial images and biometric vectors. However, Clearview’s clients were using its services for monitoring purposes and therefore Clearview’s processing “related to” monitoring under Article 3(2)(b). 

  • A provider of services like Clearview may be considered a joint controller with its clients where both determine the purposes and means of processing. In this case, Clearview was a joint controller with its clients because it imposed restrictions on how clients could use the services (i.e., only for law enforcement and national security purposes) and determined the means of processing when matching query images against its facial recognition database.  

Data Scraping 

The ruling is not a green light for data scraping, where publicly available data, usually from the internet, is collected and processed by companies, often without the data subject’s knowledge. The Tribunal ruled that this is an activity to which the UK GDPR can apply. In its press release reacting to the ruling, the ICO said: 

“The ICO will take stock of today’s judgment and carefully consider next steps.
It is important to note that this judgment does not remove the ICO’s ability to act against companies based internationally who process data of people in the UK, particularly businesses scraping data of people in the UK, and instead covers a specific exemption around foreign law enforcement.” 

This is a significant ruling from the First-Tier Tribunal, with implications for the extra-territorial effect of the UK GDPR and the ICO’s powers to enforce it. It merits an appeal by the ICO to the Upper Tribunal. Whether this happens depends very much on the ICO’s appetite for a legal battle with a tech company with deep pockets.  

This and other GDPR developments will be discussed by Robert Bateman in our forthcoming GDPR Update workshop.  

Act Now Launches New RIPA E-Learning Course


The Investigatory Powers Commissioner’s Office (IPCO), like its predecessor the Office of Surveillance Commissioners (OSC), undertakes inspections of public authorities to ensure their compliance with Part 2 of the Regulation of Investigatory Powers Act 2000 (RIPA).
A common feature of an IPCO report into a council is the highlighting of the lack of regular refresher training for those who undertake covert surveillance, including when using social media.  

The coronavirus pandemic, as well as decreasing council budgets, means that training staff is difficult to say the least. Social distancing and home working make face-to-face training impossible, and live online training may not always be cost-effective for those who need a quick refresher.  

Act Now Training is pleased to announce the launch of RIPA Essentials. This is a new e-learning course, consisting of an animated video followed by an online quiz, designed to update local authority employees’ knowledge of Part 2 of RIPA, which covers Directed Surveillance, Intrusive Surveillance and CHIS. Designed by our RIPA experts, Ibrahim Hasan and Steve Morris, it uses simple, clear language and animation to make the complex simple. 

In just 30 minutes your employees can learn about the main provisions of Part 2 of RIPA, including the different types of covert surveillance, the serious crime test and the authorisation process. It also covers how RIPA applies to social media monitoring and how to handle the product of surveillance having regard to data protection. All this at a time and in a place of your employees’ choosing. (See the full contents here.) 

Steve Morris said: 

“Ibrahim and I have over 40 years of experience in training and advising local authorities on covert surveillance and RIPA. We have used this experience, as well as the latest guidance from the Home Office and IPCO, to produce an online training course which is engaging, interactive and fun.” 

With full admin controls, RIPA Essentials will help you to build a RIPA compliance culture in your organisation and develop a workforce that is able to identify and address privacy risks when conducting surveillance. The course is specifically designed for local authority investigators including trading standards officers, environmental health officers, licensing officers, auditors and legal advisers.  

You can watch a demo of RIPA Essentials here. Prices start from as little as £69 plus VAT per user. For a bespoke quote, please get in touch.

RIPA Essentials follows the successful launch of GDPR Essentials which has been used by our clients to train thousands of staff in the public and private sector.

Lloyd v Google: Representative Action for Damages Fails Under the DPA 1998

 

As more individuals become aware of the way in which organisations such as Facebook and Google have used their personal data unlawfully, the prospect of litigation, and of class actions, seems increasingly likely. However, the recent case of Lloyd v Google [2018] EWHC 2599 (QB) demonstrates that it does not necessarily follow that a clear breach of data protection legislation will result in a successful claim for damages. The case shows that even if claimants can prove that there has been a breach of data protection legislation (now the GDPR and DPA 2018), they need to identify what harm the breach has caused and how that harm was caused by the breach. This will inevitably be a fact-specific exercise.

The background: the Safari Workaround and the DoubleClick Ad cookie

The case concerned the use, by Google, of a cookie known as the “DoubleClick Ad cookie” between 2011 and 2012. Google allegedly used the cookie to secretly track the internet activity of iPhone users in the US and the UK. Ordinarily, the Safari browser (developed by Apple) had a default setting that blocked the use of third-party cookies, such as the DoubleClick Ad cookie. However, Google was able to exploit certain exceptions to this default blockage and implement the so-called “Safari Workaround”, which enabled Google to set the cookie on an iPhone when the user used the Safari browser. This gave Google access to a huge amount of browser-generated personal information, including the address or URL of the website which the browser was displaying to the user. It was claimed that this information enabled Google to obtain or deduce other sensitive information about individuals, such as their interests and habits, race, ethnicity, class, political or religious views, health, age, sexuality and financial position. Google was also alleged to have aggregated this information to create lists of different types of people, such as “football lovers”, and offered these lists to subscribing advertisers.

Regulatory action was taken against Google in the USA, with Google agreeing to pay a US$22.5 million civil penalty to settle charges brought by the US Federal Trade Commission, and a further US$17 million to settle state consumer-based actions. No such regulatory action was taken by the Information Commissioner, even though the breach clearly affected UK iPhone users.

 The representative claim

The action against Google was brought by Mr Lloyd, the only named claimant. However, he brought the action as a representative of a much larger class of people. This is a novel type of litigation that allows a representative to sue in a representative capacity on behalf of a class of people who have “the same interest” in the claim. It was not entirely clear how big the class was, but estimates ranged between 4.4 and 5.4 million people. Google, not surprisingly, was keen that permission be denied, bearing in mind its estimated potential liability (if the case succeeded) of between £1 billion and £3 billion.

Mr Lloyd argued that he and each member of the group/class he represented had a right to be compensated “for the infringement of their data protection rights”. Specifically, it was alleged that Google had carried out the secret tracking and collation of personal data without the data subject’s consent or knowledge; that this was a breach of Google’s duty under s 4(4) of the DPA 1998 and that the data subjects were entitled to compensation under s 13 DPA 1998.

In other words, the fact of the contravention gave them a right to be compensated. Neither Mr Lloyd nor any member of the group alleged, or gave evidence about, any financial loss, distress or anxiety. There were no individual allegations of harm. In fact, Mr Lloyd asserted that the claim was generic and claimed an equal, standard “tariff” award for each member of the class (£750 per person). This turned out to be fatal to the claim.

Litigation against a US based company

Any litigant, or group of litigants, considering an action against Apple, Google or any other company based outside the UK first needs the permission of the High Court in order to serve a claim on a defendant outside the jurisdiction of the domestic courts. Before the court will grant permission, the claimant must prove three things: first, that the case falls within one of the listed “jurisdictional gateways”; second, that the case has a reasonable prospect of success; and finally, that England is the appropriate place to deal with the case. The High Court had no difficulty deciding that England was the natural jurisdiction for the case, since the claimants were all in the UK and the alleged damage had been incurred in the UK. However, the High Court judge found that Mr Lloyd’s case failed on the remaining two issues and denied permission for the case to proceed.

The Court identified that the relevant gateway in this case required the claimant to prove a good arguable claim in tort and that the damage was sustained in England and Wales. The Judge was clear that a claim for damages under the DPA 1998 is a claim in tort. He was also satisfied that each member of the class was (for at least some of the relevant period) within the jurisdiction when they connected to the internet using the Safari browser.

However, the real and substantial issue in this case was whether the Safari Workaround had caused “damage” within the meaning of the DPA 1998. The Court engaged in a lengthy analysis of the case law on DPA damages and concluded that the claimants had not sustained damage in this case. On this basis, the Court decided that Mr Lloyd did not have a good arguable case or a reasonable prospect of success.

Damages under the DPA 1998

Section 13 of the DPA 1998 provided that an individual who suffers damage by reason of any contravention by a data controller of any of the requirements of the DPA 1998 is entitled to compensation from the data controller for that damage.

The High Court decided that giving the words their natural meaning, this statutory right to compensation arises where

(a) there has been a breach of the DPA; and

(b) as a result, the claimant suffers damage.

These are two separate events connected by a causal link: in short, the breach must cause the damage. Based on this logic, it necessarily follows that some breaches will not give rise to damages. The High Court judge suggested some examples where a data controller processes personal data in breach of the DPA but where the breach may not warrant an award of compensation, such as:

  • recording inaccurate data, but not using or disclosing it;
  • holding, but not disclosing, using or consulting, personal data that are irrelevant;
  • holding data for too long;
  • failing, without consequences, to take adequate security measures.

Of course, this is not to say that these types of breaches could never give rise to a successful claim for damages, as much will depend on the context and facts of the case. However, the Court did suggest that data subjects had alternative remedies such as rectification, erasure and objection.

One of the key arguments presented by Mr Lloyd was that the claimants had incurred damage because they had lost control of their data. According to the Court, there will be circumstances where loss of control may have significantly harmful consequences, as in Vidal-Hall (Google Inc v Vidal-Hall and others & The Information Commissioner [2015] EWCA Civ 311). The focus in that case was on the significant distress caused to the claimants by the delivery to their screens of unwanted advertising material. However, that decision was very fact-specific: the type of information that had secretly been tracked and used to send targeted advertising was of a particularly private and sensitive nature, such that it would have caused harm to the claimants had anyone else seen their computer screens.

The High Court in Lloyd v Google also accepted that the delivery of unwanted commercial advertising can be upsetting in other ways, for example where repeated or bulk unwanted communication:

  • is so distressing that it constitutes harassment, even if the content is inherently innocuous;
  • infringes a person’s right to respect for their autonomy; or
  • represents a material interference with their freedom of choice over how they lead their life.

However, on the facts of the case, the Court concluded that the claimants had not provided any particulars of damage suffered. Rather, they appeared to rely on the breach alone as entitling them to compensation. The judge rejected this as a possibility.

A court cannot award compensation just because the data protection rules have been breached. The Court also rejected the idea that the claimants should be compensated in order to “censure” the defendant’s behaviour, as well as any argument that damages under the DPA should be awarded on a sort of “restitutionary” basis, that is ‘calculated by reference to the market value of the data which has been refused’.

Representative action cases – what lessons can be learnt?

This was novel litigation: it involved one named claimant bringing an action on behalf of a large group. The action faced difficulties right from the start, not least in trying to identify the group. The Judge identified three real difficulties with this type of action:

  1. The representative (Lloyd) and the members of the class do not all have the “same interest” which is an essential requirement for any representative action. Some people may have suffered no damage and others different types of damage. Consequently, they did not all have the same interest in the action.
  2. Even if it was possible to define the class of people represented, it would be practically impossible to identify all members of the class.
  3. The court would not exercise its discretion to allow this case to go forward, particularly given the costs of the litigation, the fact that the damages payable to each individual (were the case to succeed) would be modest, and the fact that none of the class had shown any interest in, or appeared to care about, the claim.

Anyone contemplating pursuing this type of claim in future would be well advised to consider carefully and take on board the judge’s criticisms, and to seek to address them before pursuing an action.


Susan Wolf will be delivering the forthcoming GDPR workshop in Birmingham on the 19th November. Book your place now! 

 

Facebook Fan page administrators need to be GDPR compliant

 


By Susan Wolf

In our previous blog we considered the recent, and much awaited, decision of the Court of Justice of the European Union (CJEU) on the status of Facebook fan page users [1]. After protracted litigation in the German Courts, the CJEU ruled on 5th June 2018 that the concept of data controller was wide enough to include a user of a fan page hosted on a social network (in this case Facebook).

Wirtschaftsakademie Schleswig-Holstein GmbH (a private training academy) operated a Facebook fan page, which it used to promote its activities. Facebook provided Wirtschaftsakademie with anonymised statistical data about people who visited the fan pages. The German Data Protection Authority for Schleswig-Holstein ordered Wirtschaftsakademie to deactivate the page or risk a fine, because visitors to the fan page were not warned that their personal data was being collected by Facebook, by means of cookies placed on each visitor’s hard disk. The purpose of that data collection was to compile viewing statistics for Wirtschaftsakademie and to enable Facebook to publish targeted advertisements.

Technically the Court’s jurisdiction is limited to providing authoritative rulings on the interpretation of EU law, not determining the outcome of a case. However, in this case the Court made it very clear that Wirtschaftsakademie was a data controller responsible for processing personal data, jointly with Facebook Ireland. The ruling has much wider implications and could affect all organisations that use Facebook fan pages, or other similar online social media.

Joint Data Controllers Must have an Agreement that sets out respective responsibilities under the GDPR

 

The fact that an administrator of a fan page uses the platform provided by Facebook in order to benefit from the associated services does not mean it escapes any of the obligations concerning the protection of personal data. In short, as a joint data controller, the fan page user must comply with the GDPR. Similarly, the fact that the fan page user acts as a joint controller, in that it decides to use Facebook as its platform, does not relieve Facebook of its obligations as controller either. They are joint data controllers, a concept specifically acknowledged by Article 26 of the GDPR, which states:

“Where two or more controllers jointly determine the purposes and means of processing, they shall be joint controllers. They shall, in a transparent manner determine their respective responsibilities for compliance with the obligations under [the GDPR] in particular as regards the exercising of the rights of the data subject and their respective duties to provide the information referred to in Articles 13 and 14, by means of an arrangement between them unless….The arrangement may designate a contact point for data subjects.”

Joint controllers must enter into a specific agreement, or contract, that sets out their respective responsibilities under the GDPR.

Joint Controller does not necessarily mean ‘equal controller’

 

The fact that two entities are joint controllers does not mean that they are ‘equals’. The CJEU acknowledged that the existence of joint responsibility with an online social network, such as Facebook, does not necessarily imply equal responsibility.

Depending on the circumstances, different operators may be involved at different stages of that processing, and to different degrees. So, for example, it is not necessary for a data controller to have complete control over all aspects of data processing. Indeed, data processing today is becoming much more complex and may involve several distinct processes involving numerous parties, each exercising different degrees of control. With such complexity it is even more important that roles and responsibilities are clearly defined and easily allocated. Article 26 GDPR also requires that the ‘allocation’ of responsibilities must be transparent. The Article 29 Working Party (now the European Data Protection Board), in its 2010 Opinion on data controllers [2], emphasised that the complexities of joint control arrangements must not result in an unworkable distribution of responsibilities that makes it more difficult for data subjects to enforce their rights.

On 15th June Facebook issued a statement for users of Facebook fan pages, which also acknowledged that ‘it does not make sense to impose an equal footing on page operators for the data processing carried out by Facebook’. Accordingly, Facebook has indicated that it will update its own terms and conditions to clarify the respective data protection responsibilities of Facebook and fan page users. (The statement does not expressly refer to the GDPR.) However, at the time of writing nothing further has been issued.

A note of caution: Liabilities

The terms of any joint controller agreement will be very important because of the provisions of Article 82(4). This states that where more than one controller is involved in the ‘same processing’, and where they are responsible for any damage caused by that processing, each controller shall be held liable for the entire damage. This is to ensure the effective compensation of data subjects who suffer any ‘material or non-material’ damage as a result of any breach of the GDPR. However, Recital 146 states that where both controllers are joined in the same legal proceedings, compensation may be apportioned according to the responsibility of each controller, subject to the caveat that the data subject who has suffered damage is compensated in full. An agreement that specifically allocates responsibilities, and liabilities, should therefore be regarded as essential.
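The mechanics of Article 82(4) can be illustrated with a small, hypothetical calculation: the data subject may recover the entire damage from either joint controller, and the controllers then settle between themselves according to their agreed shares of responsibility. The figures and responsibility shares below are invented purely for illustration; they are not drawn from any real case or agreement.

```python
# Hypothetical illustration of Article 82(4) GDPR joint liability.
# All figures and responsibility shares are invented for this sketch.

def recoverable_from_each(total_damage: float, controllers: list[str]) -> dict[str, float]:
    """Each joint controller is liable to the data subject for the ENTIRE damage."""
    return {c: total_damage for c in controllers}

def internal_apportionment(total_damage: float, shares: dict[str, float]) -> dict[str, float]:
    """Between themselves, controllers may apportion compensation according to
    their responsibility, subject to the data subject being paid in full (Recital 146)."""
    assert abs(sum(shares.values()) - 1.0) < 1e-9, "shares must cover the full damage"
    return {c: total_damage * s for c, s in shares.items()}

damage = 10_000.0
exposure = recoverable_from_each(damage, ["fan page operator", "Facebook"])
split = internal_apportionment(damage, {"fan page operator": 0.2, "Facebook": 0.8})

print(exposure)  # each controller can be pursued for the full 10,000
print(split)     # internal settlement: 2,000 / 8,000
```

The point of the sketch is the asymmetry: the data subject's claim is against either controller for the whole sum, while the 20/80 split only matters in the subsequent contribution proceedings between the controllers, which is exactly why the agreement should address it.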

What steps should Fan Page users be taking now?

Until Facebook clarifies its position on a joint controller agreement, it might be prudent for anyone thinking of opening a Facebook fan page to defer doing so.

However, existing fan page users do need to take steps to become GDPR compliant.

The Information Commissioner’s Office has not, as yet, issued any guidance to fan page users. However, the German Data Protection Authorities have issued a statement advising Facebook fan page users/operators that they must comply with the applicable provisions of the GDPR and specifically the following obligations:

  • The operator must provide information on processing activities by Facebook and by the operator itself transparently and in an understandable form.
  • The operator must ensure that Facebook provides the relevant information to enable the operator to fulfil its information obligations.
  • The operator must obtain opt-in consent for tracking visitors to a fan page (e.g., by using cookies or similar technologies).
  • The operator must enter into a co-controller agreement with Facebook.
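The German regulators’ opt-in requirement boils down to a simple rule: no tracking cookie may be set before the visitor has actively consented. A minimal sketch of that gate follows; the class and function names are our own illustration, not part of Facebook’s or any real consent tool’s API.

```python
# Minimal sketch of consent-gated tracking: no cookie before an explicit opt-in.
# All names here are illustrative; real consent management is far more involved.
import uuid

class Visitor:
    def __init__(self):
        self.consented = False          # GDPR-style default: no consent
        self.cookies: dict[str, str] = {}

def record_opt_in(visitor: Visitor) -> None:
    """Only an affirmative act by the visitor flips this flag."""
    visitor.consented = True

def set_tracking_cookie(visitor: Visitor) -> bool:
    """Set the tracking cookie only if the visitor has opted in."""
    if not visitor.consented:
        return False                    # no consent, no tracking
    visitor.cookies["user_code"] = uuid.uuid4().hex
    return True

v = Visitor()
assert set_tracking_cookie(v) is False  # default state: nothing is set
record_opt_in(v)
assert set_tracking_cookie(v) is True   # cookie set only after opt-in
```

The design choice worth noting is that the default is refusal: consent is a state the visitor must actively create, which is what distinguishes opt-in from the pre-ticked-box approach the GDPR rules out.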

Perhaps a more pragmatic solution is for fan page users to consider what steps an organisation would need to take, as data controller, if it had created its own website (other than via Facebook), embedded cookies and implemented a tool similar to the Facebook Insights tool in order to compile viewing statistics.

[1] Case C-210/16 Unabhängiges Landeszentrum für Datenschutz Schleswig-Holstein v Wirtschaftsakademie Schleswig-Holstein GmbH

[2] Article 29 Data Protection Working Party, Opinion 1/2010 on the concepts of “controller” and “processor”

 

Act Now provides a full GDPR Course programme including one day workshops, elearning, Healthchecks and our GDPR Practitioner Certificate. 

Book now to avoid disappointment! 

 

Decision: Facebook Fan Page Administrators are Data Controllers


By Susan Wolf

On 5th June 2018 the Court of Justice of the European Union (CJEU) delivered its long-awaited Facebook fan page decision. The case concerned the definition of data controller under the now repealed Data Protection Directive 95/46/EC [1] and, in particular, whether the administrator of a Facebook fan page was a data controller.

The fact that the Data Protection Directive has been replaced by the GDPR 2016 should not diminish the importance of this ruling, particularly for organisations that use Facebook or other social media platforms to promote their business or organisation.

We explain some of the issues raised in the case and consider the implications of the ruling for administrators of Facebook fan pages under the GDPR.

The case

The case involved Wirtschaftsakademie Schleswig-Holstein GmbH, a private training academy in Germany. The company provided business training for commerce and industry (including GDPR training).  It operated a Facebook fan page to make people aware of its range of services and activities.

Fan pages are user accounts that can be set up on Facebook by individuals or businesses. According to Facebook, a fan page is a place where businesses can create a space on Facebook to connect with people and tell them about their business. Fan pages are not the same as Facebook profiles, which are limited purely to individuals’ personal use. Unlike a personal Facebook profile, a fan page is accessible to anyone using the Internet.

Authors of fan pages must register with Facebook in order to use the online platform to post any kind of communication. At that time, fan page administrators could obtain, from Facebook, anonymous statistical information on visitors to the fan page, via a function called ‘Facebook Insights’. That information was collected by means of ‘cookies’, each containing a unique user code, which remained active for two years and were stored by Facebook on the hard disk of the computer or on other media of visitors to fan pages. The user code, which could be matched with the connection data of users registered on Facebook, was collected and processed when the fan pages were opened.
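As a rough technical sketch, a tracking cookie of the kind described (a unique user code with a two-year lifetime, stored on the visitor’s machine) could be expressed as an HTTP Set-Cookie header along the following lines. The cookie name and attributes here are our own illustration of the mechanism, not Facebook’s actual implementation.

```python
# Illustrative construction of a long-lived tracking cookie header.
# The cookie name and two-year lifetime mirror the description in the case;
# this is not any real Facebook implementation.
import uuid

TWO_YEARS_SECONDS = 2 * 365 * 24 * 60 * 60  # 63,072,000 seconds

def build_tracking_cookie() -> str:
    """Return a Set-Cookie header value carrying a unique user code."""
    user_code = uuid.uuid4().hex  # unique code, matchable to a registered user
    return f"user_code={user_code}; Max-Age={TWO_YEARS_SECONDS}; Path=/"

header = build_tracking_cookie()
print(header)
```

Because the code is unique and long-lived, every subsequent visit to any fan page can be tied back to the same browser, which is precisely what allowed the viewing statistics (and the matching against registered users) described above.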

The service, which was provided free of charge under non-negotiable terms, was no doubt very useful to the German training academy. Unfortunately, neither Wirtschaftsakademie nor Facebook Ireland notified anybody ‘visiting’ the fan page about the use of the cookies or the subsequent processing of the personal data. The German Data Protection Supervisory Authority for the Schleswig-Holstein Land (region) took the view that by setting up its fan page, Wirtschaftsakademie had made an active and deliberate contribution to the collection by Facebook of personal data relating to visitors to the fan page, from which it profited by means of the statistics provided to it by Facebook. The regulator concluded (in November 2011) that Wirtschaftsakademie was a data controller and consequently ordered it to deactivate its fan page, threatening a penalty payment if the page was not removed.

Wirtschaftsakademie challenged that order before the German Administrative Court. Its main argument was that it was not responsible under data protection law for the processing of the data by Facebook or for the cookies that Facebook installed, and that it had not commissioned Facebook to process personal data on its behalf. This argument was successful before the administrative court, but the regulator appealed and what followed was lengthy, protracted litigation in the German courts. By 2016 the case had reached the Federal Administrative Court, which also agreed that Wirtschaftsakademie was not responsible for the data processing as defined by Article 2(d) of the Data Protection Directive:

  • (d) ‘controller’ shall mean the natural or legal person, public authority, agency or any other body which alone or jointly with others determines the purposes and means of the processing of personal data.

(Article 4 of the GDPR defines data controller in identical terms.)

However, the Federal Court also decided that it was necessary to refer the question to the CJEU under the preliminary rulings, particularly since the CJEU had previously ruled [2] that the concept of data controller should be given a broad interpretation in the interests of the effective protection of the right of privacy.

The CJEU Ruling

The CJEU had no difficulty in concluding that Facebook Inc. and Facebook Ireland were data controllers because they determined the purposes and means of processing the personal data of Facebook users and anyone visiting fan pages hosted on Facebook. However, the Court recalled that the definition includes entities that ‘alone or jointly with others’ determine the purposes and means of data processing. In other words, the purposes may be determined by more than one controller, and may be determined by ‘several actors taking part in the processing’, with each being subject to the provisions of the Directive.

On the facts, the Court considered that the administrator of a Facebook fan page:

  • Enters into a contract with Facebook Ireland and subscribes to the conditions of use, including the use of cookies.
  • Is able to define the parameters of the fan page, which has an influence on the processing of personal data for the purposes of producing statistics based on visits to the fan page.
  • Could, with the help of filters made available by Facebook, define the criteria for statistical analysis of data.
  • Could designate the categories of persons whose personal data is to be made use of by Facebook.
  • Can ask Facebook for demographic data relating to its target audience, including age, sex, relationship and occupation, lifestyle and purchasing habits.

These factors pointed to the fact that the administrator of a fan page hosted on Facebook takes part in the determination of the purposes and means of processing the personal data of visitors to the fan page. Consequently the administrator of the fan page is to be regarded as a data controller, jointly with Facebook Ireland.

The Court rejected arguments that the Wirtschaftsakademie only received the statistical data in anonymised form because the fact remained that the statistics were based on the collection, by cookies, of the personal data of visitors to the fan page.

The fact that the fan page administrator uses the platform provided by Facebook does not exempt it from compliance with the Directive. The Court also added that non-Facebook users may visit a fan page; the administrator’s responsibility for the processing of their personal data appears even greater, as the mere consultation of the home page automatically starts the processing of personal data.

[1] Case C-210/16 Unabhängiges Landeszentrum für Datenschutz Schleswig-Holstein v Wirtschaftsakademie Schleswig-Holstein GmbH

[2] Case C-212/13 František Ryneš v Úřad pro ochranu osobních údajů

 

We are running GDPR and DPA 2018 workshops throughout the UK. Head over to our website to book your place now. New Dates added for London!

Need to train frontline staff quickly? Try our extremely popular GDPR e-learning course.

Don’t forget about our GDPR Helpline; it’s a great tool to use for some advice when you really need it.

Monitoring Staff Use of Social Networks: The Human Rights Implications


According to a recent FOI request made by BBC Radio 5 live, last year there was a rise in the number of UK council staff suspended after being accused of breaking social media rules. Many employers, both in the public and the private sector, now monitor staff use of social media within the office environment. The possibilities are endless but care must be taken not to overstep the legal limits.

All employers have to respect their employees’ right to privacy under Article 8 of the European Convention on Human Rights (ECHR). This means that any surveillance or monitoring must be carried out in a manner that is in accordance with the law and is necessary and proportionate (see Copland v UK, 3rd April 2007, ECHR).

A January 2016 judgment of the European Court of Human Rights shows that a careful balancing exercise needs to be undertaken when applying the law (Barbulescu v Romania, application no. 61496/08). In this case, the employer had asked employees such as the applicant to set up Yahoo! Messenger accounts for work purposes. Its policies clearly prohibited the use of such work accounts for personal matters. The employer suspected the applicant of misusing his account, so it monitored his messages for a period during July 2007 without his knowledge.

The employer accused the applicant of using his messenger account for personal purposes; he denied this until he was presented with a 45-page printout of his messages with various people, some of which were of an intimate nature. The employer had also accessed his private messenger account (though it did not make use of the contents).

The applicant was sacked for breach of company policy. When he challenged his dismissal before the courts, his employer relied on the print out of his messages as evidence. He argued that, in accessing and using those personal messages, the employer had breached his right to privacy under Article 8 ECHR.

The Court accepted that the applicant’s privacy rights were engaged in this case. However, the employer’s monitoring was limited in scope and proportionate; it is reasonable for an employer to verify that employees are completing their professional tasks during working hours. Key considerations were:

  • The emails at the centre of the debate had been sent via a Yahoo Messenger account that was created, at the employer’s request, for the specific purpose of responding to client enquiries.
  • The employee’s personal communications came to light only as a result of the employer accessing communications that were expected to contain only business related materials and had therefore been accessed legitimately.
  • The employer operated a clear internal policy prohibiting employees from using the internet for personal and non-business related reasons.

The case highlights the need for companies to have a clear internet and electronic communications policy and the importance of such a policy being communicated to employees.

When monitoring employees, the employer will inevitably be gathering personal data about them, so consideration also has to be given to the provisions of the Data Protection Act 1998 (DPA). The Information Commissioner’s Office’s (ICO) Employment Practices Code includes a section on surveillance of employees at work. In December 2014, Caerphilly County Borough Council signed an undertaking after an ICO investigation found that the Council’s surveillance of an employee, suspected of fraudulently claiming to be sick, had breached the DPA.

Compliance with the DPA will also help demonstrate that the surveillance is human rights compliant, since protection of individuals’ privacy is a cornerstone of the DPA. Of course, the data protection angle will bite harder when the new EU Data Protection Regulation comes into force in 2018. Failure to comply could lead to a fine of up to 20 million Euros or, if higher, 4% of global annual turnover.
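It is worth spelling out that this penalty ceiling is the greater of the two figures, not a choice between them. The turnover figures below are made up for illustration:

```python
# The GDPR maximum fine is the GREATER of EUR 20m and 4% of global annual turnover.
# Turnover figures below are invented for illustration.
def max_gdpr_fine(global_annual_turnover_eur: float) -> float:
    return max(20_000_000.0, 0.04 * global_annual_turnover_eur)

print(max_gdpr_fine(100_000_000))     # smaller company: the EUR 20m figure applies
print(max_gdpr_fine(50_000_000_000))  # large platform: 4% of turnover dominates
```

For any organisation with global turnover above 500 million Euros, the 4% limb exceeds the fixed 20 million figure, which is why the turnover-based ceiling is the one that concerns large employers.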

Act Now has a range of workshops relating to surveillance and monitoring both within and outside the workplace. Our products include a RIPA polices and procedures toolkit and e-learning modules.

Facebook, Social Networks and the Need for RIPA Authorisations

By Ibrahim Hasan

Increasingly local authorities are turning to the online world, especially social media, when conducting investigations. There is some confusion as to whether the viewing of suspects’ Facebook accounts and other social networks requires an authorisation under Part 2 of the Regulation of Investigatory Powers Act 2000 (RIPA). In his latest annual report the Chief Surveillance Commissioner states (paragraph 5.42):

“Perhaps more than ever, public authorities now make use of the wide availability of details about individuals, groups or locations that are provided on social networking sites and a myriad of other means of open communication between people using the Internet and their mobile communication devices. I repeat my view that just because this material is out in the open, does not render it fair game. The Surveillance Commissioners have provided guidance that certain activities will require authorisation under RIPA or RIP(S)A and this includes repetitive viewing of what are deemed to be “open source” sites for the purpose of intelligence gathering and data collation.”

Careful analysis of the legislation suggests that whilst such activity may be surveillance within the meaning of RIPA (see S.48(2)), not all of it will require a RIPA authorisation. Of course RIPA geeks will know that RIPA is permissive legislation anyway, so the failure to obtain authorisation does not render surveillance automatically unlawful (see Section 80).

There are two types of surveillance which may be involved when examining a suspect’s Facebook or other social network pages: Directed Surveillance and the deployment of a Covert Human Intelligence Source (CHIS). Section 26 of the Act states that surveillance has to be covert for it to be directed:

“surveillance is covert if, and only if, it is carried out in a manner that is calculated to ensure that persons who are subject to the surveillance are unaware that it is or may be taking place” (my emphasis)

If an investigator decides to browse a suspect’s public blog, website or “open” Facebook page (i.e. where access is not restricted to “friends”, subscribers or followers), how can that be said to be covert? It does not matter how often the site is accessed, as long as the investigator is not taking steps to hide his/her activity from the suspect. The fact that the suspect is not told about the “surveillance” does not make it covert. Note the words in the definition of covert: “unaware that it is or may be taking place.” If a suspect chooses to publish information online, they can expect the whole world to read it, including law enforcement and council investigators. If they want or expect privacy, it is open to them to use the available privacy settings on their blog or social network.

The Commissioner stated in last year’s annual report:

“5.31 In cash-strapped public authorities, it might be tempting to conduct on line investigations from a desktop, as this saves time and money, and often provides far more detail about someone’s personal lifestyle, employment, associates, etc. But just because one can, does not mean one should. The same considerations of privacy, and especially collateral intrusion against innocent parties, must be applied regardless of the technological advances.” (my emphasis)

I agree with the last part of this statement. The gathering and use of online personal information by public authorities will still engage Human Rights particularly the right to privacy under Article 8 of the European Convention on Human Rights. To ensure such rights are respected the Data Protection Act 1998 must be complied with. A case in point is the monitoring last year of Sara Ryan’s blog by Southern Health NHS Trust. Our data protection expert Tim Turner wrote recently about the data protection implications of this kind of monitoring.

Where online surveillance involves employees, the Information Commissioner’s Office’s (ICO) Employment Practices Code (part 3) will apply. This requires an impact assessment to be done before the surveillance is undertaken, to consider, amongst other things, necessity, proportionality and collateral intrusion. Whilst the code is not law, it will be taken into account by the ICO and the courts when deciding whether the DPA has been complied with. In December 2014, Caerphilly County Borough Council signed an undertaking after an ICO investigation found that the Council’s surveillance of an employee, suspected of fraudulently claiming to be sick, had breached the DPA.

Facebook Friends – A Friend Indeed

Of course the situation will be different if an investigator needs to become a “friend” of a person on Facebook in order to communicate with them and get access to their profile and activity pages. For example, local authority trading standards officers often use fake profiles when investigating the sale of counterfeit goods on social networks. In order to see what is on sale they have to have permission from the suspect. This, in my view, does engage RIPA, as it involves the deployment of a CHIS, defined in section 26(8):

“For the purposes of this Part a person is a covert human intelligence source if—

(a) he establishes or maintains a personal or other relationship with a person for the covert purpose of facilitating the doing of anything falling within paragraph (b) or (c);

(b) he covertly uses such a relationship to obtain information or to provide access to any information to another person; or

(c) he covertly discloses information obtained by the use of such a relationship, or as a consequence of the existence of such a relationship”  (my emphasis)

Here we have a situation where a relationship (albeit not personal) is formed using a fake online profile to covertly obtain information for a covert purpose. In the case of a local authority, this CHIS will not only have to be internally authorised but also, since 1st November 2012, approved by a Magistrate.

This is a complex area and staff who do not work with RIPA on a daily basis can be forgiven for failing to see the RIPA implications of their investigations. From the Chief Surveillance Commissioner’s comments (below) in his annual report, it seems advisable for all public authorities to have in place a corporate policy and training programme on the use of social media in investigations:

“5.44 Many local authorities have not kept pace with these developments. My inspections have continued to find instances where social networking sites have been accessed, albeit with the right intentions for an investigative approach, without any corporate direction, oversight or regulation. This is a matter that every Senior Responsible Officer should ensure is addressed, lest activity is being undertaken that ought to be authorised, to ensure that the right to privacy and matters of collateral intrusion have been adequately considered and staff are not placed at risk by their actions and to ensure that ensuing prosecutions are based upon admissible evidence.”

We have a workshop on investigating E-Crime and Social Networking Sites, which considers all the RIPA implications of such activities. It can also be delivered in house.

In conclusion, my view is that RIPA does not apply to the mere viewing of “open” websites and social network profiles. However in all cases the privacy implications have to be considered carefully and compliance with the Data Protection Act is essential.

Ibrahim will be looking at this issue in depth in our forthcoming webinars.

Looking to update or refresh your colleagues’ RIPA knowledge? Try our RIPA E-Learning Course. Module 1 is free.

We also have a full programme of RIPA Courses, and our RIPA Policy and Procedures Toolkit contains standard policies as well as forms (with detailed notes to assist completion).