Some argue that the primary goal of social media is no longer genuine connection, but the maximisation of user engagement for commercial gain. Platforms generate vast revenues by delivering highly targeted, personalised advertising, incentivising designs that keep users scrolling for longer. With the rise of AI, this content stream has become even more relentless, often amplified by manipulative or overly flattering language that encourages continuous interaction.
Unsurprisingly, many parents are concerned about their children’s use of social media. Endless scrolling and exposure to videos featuring mindless pranks or viral challenges can have negative effects on both mental and physical health. Increasingly, attention is turning to the platforms themselves: critics suggest that their design may not only encourage excessive use, but also contribute to addiction, anxiety and other forms of harm.
The US Court Case
On 25th March 2026, a jury in Los Angeles delivered a damning verdict on two of the world’s most popular social media platforms. It ruled that Instagram and YouTube were deliberately designed to be addictive and that, consequently, their parent companies had been negligent in failing to safeguard their child users. Meta and Google, owners of Instagram and YouTube respectively, must now pay $6m (£4.5m) in damages to “Kaley”, the young woman who was the plaintiff (claimant) in the case. Her lawyers argued that the design of Instagram and YouTube caused her to become addicted to the platforms. This addiction affected her mental health during childhood, leaving her with body dysmorphia, depression and suicidal thoughts.
The judgement has sent shockwaves through tech companies worldwide, not just in Silicon Valley. One tech company insider, who asked not to be identified, told the BBC, “we’re having a moment”. Even the Royal Family chimed in. In a statement, the Duke and Duchess of Sussex said: “This verdict is a reckoning. For too long, families have paid the price for platforms built with total disregard for the children they reach.”
Both companies vigorously defended the claim and intend to appeal the judgement. Meta maintains that a single platform cannot be solely responsible for a user’s mental health crisis. Google, meanwhile, argues that YouTube is not a social network.
English Law
Could such a claim succeed in this country? The tort of negligence provides the best hope for claimants who allege harm from social media use, provided the elements of the tort (duty of care, breach, causation and foreseeability) are satisfied. There is growing recognition in UK law that online platforms may owe a duty of care to users, particularly child users, and the harms of overuse of social media are well documented. However, causation is likely to be the most difficult hurdle for claimants in the UK. To succeed, a claimant must prove that a platform’s design caused, or materially contributed to, the harm they suffered through their use of social media. That is no easy task: psychological harm rarely has a single identifiable cause. Social media companies are likely to argue that their platforms are only one of many factors which can contribute to an individual’s mental health difficulties, alongside family environment, school experiences, pre-existing vulnerabilities and offline relationships, to name a few.
Could social media platforms be treated as “defective products” under the Consumer Protection Act 1987 (CPA), which imposes strict liability for harm? Products under the CPA are traditionally understood as tangible goods, not services like YouTube and Instagram. It is arguable, though, that social media platforms are not just intermediaries but “manufacturers” of digital environments, making them liable for defects in their algorithms or addictive design. The Law Commission is currently reviewing the CPA to determine whether it is fit for the digital age, with a focus on artificial intelligence, software and online platforms. The review, which began in September 2025, may lead to expanded liability for online platforms and software providers.
It is worth noting that the US case was decided by a jury. In the UK, civil cases, particularly those involving negligence, are decided by judges. Juries may be influenced by emotional arguments, whereas judges are trained to apply the law strictly and are less susceptible to being swayed by emotion at the expense of legal principles.
Despite the issues around causation, a legal action in negligence is probably the best option for aggrieved social media users in the UK, although the lack of Legal Aid and the UK courts’ restrictive approach to class actions mean a test case would require significant upfront funding. Perhaps insurers, emboldened by the US judgement, may now be more willing to cover the costs of such a test case.
Regulating Social Media
Unlike the US, the UK has moved toward statutory regulation rather than litigation as the primary means of controlling social media harms.
Since the passage of the Online Safety Act 2023 (OSA), social media companies and search engines have had a duty to ensure their services are not used for illegal activity or to promote illegal content, with particular protections for children. The communications regulator, Ofcom, has been tasked with implementing the OSA and can fine infringing companies up to £18 million or 10% of their global revenue (whichever is greater). Last month, it published guidance on how platforms must protect children. Furthermore, since platforms process users’ personal data, they must comply with the UK GDPR. The Data (Use and Access) Act 2025, which mainly came into force in February, explicitly requires those who provide an online service that is likely to be used by children to take children’s needs into account when deciding how to use their personal data.
Even before the US judgement, many countries had been considering whether to regulate social media further and/or ban children from using it. Australia has banned it, and others, like France and Denmark, have introduced or are planning to introduce tighter rules.
The UK government is currently carrying out a consultation to consider whether additional measures are required to keep children safe in the online world. This includes whether a minimum age should be set for children to access social media; whether risky functionalities and design features that encourage excessive use, such as infinite scrolling and autoplay, should be restricted; whether the digital age of consent should be raised; whether guidance on the use of mobile phones in schools should be put on a statutory footing; and how parents could be better supported, including through clearer guidance and simpler parental controls. The consultation ends on 26th May, and the government will respond before the end of July. Alongside the consultation, the government is running a pilot scheme in which 300 teenagers will have their social media apps disabled entirely, blocked overnight or capped at one hour’s use per day, while a comparison group will see no changes at all. Children and parents involved in the pilot will be interviewed before and after to assess its impact.
Meanwhile, on 27th March 2026, the government published national guidance urging parents to strictly limit screen exposure in the early years because of risks to health and development. The new recommendations advise that there should be no screen exposure for children under two, except for shared activities. For those aged two to five, usage should be capped at one hour per day, with additional guidance to avoid screens at mealtimes and before bed.
Parliament is also debating the use of social media platforms by children but remains divided on what action to take. In March, during a debate on the Children’s Wellbeing and Schools Bill, the House of Lords supported a proposal to ban under-16s in the UK from social media platforms. It is the second time peers have defeated the government over the proposal, and there is now a standoff between the Commons and the Lords. Whatever happens, the verdict in the California court has signalled a rising public expectation of more aggressive regulation of social media platforms.
Listen to the Guardians of Data Podcast for the latest news and views on data protection, cyber security, AI and freedom of information.
This and other developments relating to children’s data will be covered in our forthcoming workshop, Working with Children’s Data.