New Podcast: Building Trustworthy and Responsible AI Systems

“Information governance professionals are the bedrock for deploying good governance of AI. We need to be there at the start of the actual thinking process.” 

Tahir Latif, Global Practice Lead for Data Privacy & Responsible AI at Cognizant 

The last two years have seen a massive increase in AI deployment. Previously the domain of science fiction, AI is now everywhere – in our workplaces, our personal lives, and in the systems that shape society, from healthcare to security and law enforcement. But alongside the opportunities come some big risks, including a lack of accuracy and transparency as well as bias and discrimination.

In this episode, we dive into one of the biggest questions of our time: How do we build trustworthy and responsible AI systems? 

To help us answer this question, we are joined by someone who is right at the heart of the conversation. Tahir Latif is a distinguished expert on building responsible and transparent AI systems. He is the Global Practice Lead for Data Privacy & Responsible AI at Cognizant, one of the largest global professional services companies. Tahir has led complex privacy and AI programmes across multiple industry sectors both in the UK and globally. He is also the Chief AI and Governance Officer and a board member at the Ethical AI Alliance, a not-for-profit body which promotes ethical standards in AI development. Tahir is the co-author of Data Privacy – A Practical Handbook on Governance and Operation.

In this conversation, we explore how to cut through the complexity of ethical AI, what the future holds, and most importantly, what practical steps IG professionals can take to succeed in this new landscape. 

Listen on your preferred platform via our podcast page, or download the episode directly.

This podcast is sponsored by Phaselaw – a purpose-built solution for document disclosures, such as subject access requests and FOI requests. Instead of redacting PDFs one by one, or forcing litigation software to do a job it wasn’t designed for, with Phaselaw you get collection, review, and redaction in one workflow. Teams across the world are using it to cut response times from weeks to days.

For Guardians of Data listeners, Phaselaw is offering a two-month free trial; run it on live requests, see what it does to your backlog, decide from there. No card, no commitment. 

Head to https://www.phase.law/guardians to claim your free trial.  

Previous episodes of the Guardians of Data podcast have featured Naomi Mathews and Ibrahim Hasan explaining the law on filming people in public for social media, Maurice Frenkel looking back at 20 years of the Freedom of Information Act, Olu Odeniyi analysing recent cyber breaches and discussing the lessons to learn, and Raz Edwards talking about how to succeed as an IG leader.

Could Children’s Use of Social Media be Banned in the UK?

Some argue that the primary goal of social media is no longer genuine connection, but the maximisation of user engagement for commercial gain. Platforms generate vast revenues by delivering highly targeted, personalised advertising, incentivising designs that keep users scrolling for longer. With the rise of AI, this content stream has become even more relentless, often amplified by manipulative or overly flattering language that encourages continuous interaction. 

Unsurprisingly, many parents are concerned about their children’s use of social media. Endless scrolling and exposure to videos featuring mindless pranks or viral challenges can have negative effects on both mental and physical health. Increasingly, attention is turning to the platforms themselves: critics suggest that their design may not only encourage excessive use, but also contribute to addiction, anxiety and other forms of harm. 

The US Court Case  

On 25th March 2026, a jury in Los Angeles delivered a damning verdict on two of the world’s most popular social media platforms. It ruled that Instagram and YouTube were deliberately designed to be addictive and that, consequently, their parent companies had been negligent in failing to safeguard their child users. Meta and Google, owners of Instagram and YouTube, must now pay $6m (£4.5m) in damages to “Kaley”, the young woman who was the plaintiff (claimant) in the case. Her lawyers argued that the design of Instagram and YouTube caused her to become addicted to the platforms. This addiction affected her mental health during childhood, leaving her with body dysmorphia, depression and suicidal thoughts.

The judgement has sent shockwaves through tech companies worldwide, not just in Silicon Valley. One tech company insider, who asked not to be identified, told the BBC, “we’re having a moment”. Even the Royal Family chimed in. In a statement, the Duke and Duchess of Sussex said: “This verdict is a reckoning. For too long, families have paid the price for platforms built with total disregard for the children they reach.”   

Both companies vigorously defended the claim and intend to appeal the judgement. Meta maintains that a single platform cannot be solely responsible for a user’s mental health crisis. Google, meanwhile, argues that YouTube is not a social network. 

English Law 

Could such a claim succeed in this country? The tort of negligence provides the best hope for claimants who allege harm from social media use, subject to the elements of the tort (duty of care, breach, causation and foreseeability) being satisfied. There is growing recognition in UK law that online platforms may owe a duty of care to users, particularly where those users are children, and the harms of overuse of social media are well documented. However, causation is likely to be the most difficult hurdle for claimants in the UK. To succeed, a claimant must prove that a platform’s design caused or materially contributed to the harm they suffered through their use of social media. Psychological harm rarely has a single identifiable cause. Social media companies are likely to argue that their platforms are only one of many factors which can affect an individual’s mental health, alongside family environment, school experiences, pre-existing vulnerabilities and offline relationships, to name a few.

Could social media platforms be treated as “defective products” under the Consumer Protection Act 1987 (CPA), which carries strict liability for harm? Products, under the CPA, are traditionally understood as tangible goods, not the likes of YouTube and Instagram. It is arguable, though, that social media platforms are not just intermediaries but “manufacturers” of digital environments, making them liable for defects in algorithms or addictive design. The Law Commission is currently reviewing the CPA to determine if it is fit for the digital age, with a focus on artificial intelligence, software and online platforms. The review, which began in September 2025, may lead to expanded liability for online platforms and software providers.

It is worth noting that the US case was decided by a jury. In the UK, civil cases, particularly those involving negligence, are decided by judges. Juries may be influenced by emotional arguments, whereas judges are trained to apply the law strictly and are less susceptible to being swayed by emotion at the expense of legal principle.

Despite the issues around causation, a legal action in negligence is probably the best option for aggrieved social media users in the UK; although the lack of Legal Aid and the UK courts’ restrictive approach to class actions mean a test case would require significant upfront funding. Perhaps insurers, emboldened by the US judgement, may now be more willing to cover the costs of such a test case.

Regulating Social Media 

Unlike the US, the UK has moved toward statutory regulation rather than litigation as the primary means of controlling social media harms. 

Since the passage of the Online Safety Act 2023 (OSA), social media companies and search engines have a duty to ensure their services are not used for illegal activity or to promote illegal content, with particular protections for children. The communications regulator, Ofcom, has been tasked with implementing the OSA and can fine infringing companies up to £18 million or 10% of their global revenue (whichever is greater). Last month, it published guidance on how platforms must protect children. Furthermore, since platforms are processing users’ personal data, they have to comply with the UK GDPR. The Data (Use and Access) Act 2025, which mainly came into force in February, explicitly requires those who provide an online service that is likely to be used by children to take their needs into account when deciding how to use their personal data.

Even before the US judgement, many countries had been considering whether to regulate social media further and/or ban children from using it. Australia has banned it, and others, like France and Denmark, have introduced or are planning to introduce tighter rules.

The UK government is currently carrying out a consultation to consider whether additional measures are required to keep children safe in the online world. This includes whether a minimum age should be set for children to access social media; whether risky functionalities and design features that encourage excessive use, such as infinite scrolling and autoplay, should be restricted; whether the digital age of consent should be raised; whether the guidance on the use of mobile phones in schools should be put on a statutory footing; and how parents can be better supported, including through clearer guidance and simpler parental controls. The consultation ends on 26th May, and the government will respond before the end of July. Alongside the consultation, the government is running a pilot scheme which will see 300 teenagers have their social media apps disabled entirely, blocked overnight or capped to one hour’s use – with some seeing no changes at all – in order to compare their experiences. Children and parents involved in the pilot will be interviewed before and after to assess its impact.

Meanwhile, on 27th March 2026, the government published national guidance that urges parents to strictly limit screen exposure in early years over health and development risks. The new recommendations advise that there should be no screen exposure for children under two except for shared activities. For those aged two to five, usage should be capped at one hour per day, with additional guidance to avoid screens at mealtimes and before bed. 

Parliament is also debating the use of social media platforms by children but remains divided on what action to take. In March, during a debate on the Children’s Wellbeing and Schools Bill, the House of Lords supported a proposal to ban under-16s in the UK from social media platforms. It is the second time peers have defeated the government over the proposal. There is now a standoff between the Commons and the Lords. Whatever happens, the verdict in the California court has signalled a rising public expectation for more aggressive regulation of social media platforms.

Listen to the Guardians of Data Podcast for the latest news and views on data protection, cyber security, AI and freedom of information.   

This and other developments relating to children’s data will be covered in our forthcoming workshop, Working with Children’s Data.

New Podcast: Filming the Public for Social Media

Act Now is pleased to bring you episode 6 of the Guardians of Data podcast.  

Think about the last time you walked down a busy street, sat in a pub, or queued for a train. Now imagine that moment, completely ordinary to you, being filmed by a stranger, uploaded to TikTok or YouTube and watched by millions. 
Maybe it’s monetised; maybe it’s mocked. One thing is for sure, though: it never disappears.

Filming people in public has now become second nature for some. But what happens when those images are shared, edited and turned into social media content? Can you stop someone filming you in public? What rights do you have when the footage is published? 

In this episode, we are joined by Naomi Mathews, a lawyer who specialises in Data Protection, Freedom of Information and Surveillance Law. Naomi helps us explore what the law actually says about filming people in public; where it falls short and how that affects real people who find themselves turned into content without consent. We’ll also ask the harder questions about ethics, power and whether the UK needs a new law to better protect the public. 

Download and listen here, or on your preferred podcast app. Available on Apple Podcasts, Spotify, and all major podcast platforms. 

Previous episodes of the Guardians of Data podcast have featured Jon Baines, reflecting on his career as a Data Protection Specialist and the hot issues in information governance, Lynn Wyeth discussing the recent controversy around Grok AI, Maurice Frenkel looking back at 20 years of the Freedom of Information Act, Olu Odeniyi analysing recent cyber breaches and discussing the lessons to learn, and Raz Edwards talking about how to succeed as an IG leader.

ICO Focus on Children’s Data Processing 

In February we wrote about the Information Commissioner’s Office (ICO) issuing fines under the UK GDPR to two social media companies. Reddit was fined £14.47 million and MediaLab (owner of Imgur) was fined £247,590 for failing to implement age‑assurance measures and for processing children’s personal data in a way that potentially exposed them to harmful content. 

Safeguarding children’s privacy is a key enforcement priority for the ICO. The ICO’s investigation into TikTok (opened in March 2025) is still ongoing. It is considering how the platform uses personal data of 13-17 year-olds in the UK to make recommendations to them and deliver suggested content to their feeds. This is in the light of growing concerns about social media and video sharing platforms using data generated by children’s online activity in their recommender systems, which could lead to them being served inappropriate or harmful content. The ICO is also investigating 17 other platforms including Discord, Pinterest, and X, and has been in discussions with Meta and Snapchat over how they use children’s location data in their user map features.  

Safeguarding children’s privacy is also a duty of the ICO under the Online Safety Act, alongside Ofcom. Last week the ICO published an open letter to social media and video-sharing platforms operating in the UK, calling on them to strengthen age assurance measures so young children cannot access services that are not designed for them. The letter sets out the ICO’s expectations about measures that platforms with a minimum age must implement, beyond relying on children to self-declare their ages (which they can easily bypass). Instead, platforms should make use of the viable technology that is now readily available to enforce their own minimum ages and prevent underage children from accessing their services. The ICO has also written directly to platforms, starting with TikTok, Snapchat, Facebook, Instagram, YouTube and X, to ask them to demonstrate how their age assurance measures meet the ICO’s expectations.

The Data (Use and Access) Act 2025, most of which came into force earlier this month, explicitly requires those who provide an online service that is likely to be used by children to take their needs into account when deciding how to use their personal data.

Listen to the Guardians of Data Podcast for the latest news and views on data protection, cyber security, AI and freedom of information.  

This and other developments relating to children’s data will be covered in our forthcoming workshop, Working with Children’s Data.

AI Transcription Tools in Social Work Under Scrutiny 

Anyone remember Dragon Dictate? The first versions of this voice transcription software required users to spend hours training it (usually wearing a headset) by repeating stock phrases many times over. Even after full training, the transcription output was far from accurate. How technology has moved on, especially in the last few years, with the proliferation of AI. 

AI-powered transcription software has been rapidly adopted by public sector organisations, especially in local authority social work departments. Tools like Magic Notes and Microsoft Copilot are used by social workers to record conversations with children and families (e.g. interviews or assessments), transcribe spoken audio into text and generate summaries automatically. These “ambient scribes” listen in real time or process recordings, reducing the need for manual note-taking and allowing professionals to focus on interactions rather than documentation. However, the use of such tools, especially in sensitive contexts like social work, is not without risks, as a recent report has highlighted.
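To make the “ambient scribe” workflow concrete, below is a minimal, illustrative Python sketch of the pipeline such tools follow: capture audio, transcribe it, generate a draft summary, and hold the draft for human sign-off before anything is filed. The function names and the CaseNoteDraft structure are hypothetical placeholders, not the actual APIs of Magic Notes, Copilot or any other product.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class CaseNoteDraft:
    """A draft case note produced by a (hypothetical) AI transcription tool."""
    transcript: str
    summary: str
    created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    approved_by: Optional[str] = None  # stays None until a social worker signs it off

def transcribe_audio(audio_path: str) -> str:
    """Placeholder for a speech-to-text call (vendor API or local model)."""
    # A real deployment would send the recording to the transcription service
    # named in the processor contract; here we simply return a stub string.
    return f"[transcript of {audio_path}]"

def summarise_transcript(transcript: str) -> str:
    """Placeholder for an AI summarisation call."""
    return f"[draft summary of: {transcript[:40]}...]"

def draft_case_note(audio_path: str) -> CaseNoteDraft:
    """Run the ambient-scribe pipeline, stopping short of the official record."""
    transcript = transcribe_audio(audio_path)
    summary = summarise_transcript(transcript)
    return CaseNoteDraft(transcript=transcript, summary=summary)

def approve_case_note(draft: CaseNoteDraft, reviewer: str) -> CaseNoteDraft:
    """The 'human in the loop' gate: nothing is filed until a named worker approves it."""
    draft.approved_by = reviewer
    return draft

if __name__ == "__main__":
    draft = draft_case_note("home_visit_2026-02-11.wav")
    # The social worker must read and correct the draft before this step.
    note = approve_case_note(draft, reviewer="j.smith")
    print(note.summary, "approved by", note.approved_by)
```

The key design point in this sketch is the separation between the AI draft and the official record: nothing is filed until a named professional has read and, where necessary, corrected it.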

Ada Lovelace Institute Report 

On 11th February 2026, the Ada Lovelace Institute published a report titled “Scribe and prejudice? Exploring the use of AI transcription tools in social care.” The report explored the dynamics of adoption and the impacts of AI transcription tools in adult and children’s social care across 17 local authorities in England and Scotland. Based on interviews with frontline social workers and managers, it highlighted serious risks that should be addressed by users.  

These include, amongst others: 

AI “Hallucinations”: The AI sometimes generates false information that wasn’t said in the recorded conversation. A prominent example involved an AI-generated summary incorrectly stating that a child had expressed suicidal ideation. This kind of error is especially dangerous in child protection or mental health contexts, where it could trigger unnecessary interventions or lead to flawed decisions about care. 

Gibberish, misrepresentations, and other errors: AI-generated transcripts have included nonsense phrases, misspelled names, incorrect speaker attributions (especially in multi-person conversations), fabricated statements, irrelevant or foul language insertions, and overly formal or academic wording that doesn’t reflect normal social work language.

Bias and Harmful Stereotyping: Some outputs have reportedly promoted stereotypes or biased perceptions of individuals that weren’t present in the original recording. 

These issues echo broader AI concerns but of course are more serious in the context of social work records. Inaccuracies entering official care records could lead to incorrect decisions about a child’s safety, family support, or adult care; potentially resulting in harm to vulnerable people, professional consequences for social workers or even legal liability. 

Social workers generally bear full responsibility for reviewing and approving these AI outputs (the “human in the loop” safeguard), but practices vary widely according to the report. Some social workers spend minutes checking AI output whilst others spend hours. The report questions how effective this is in high-pressure frontline environments. There is also concern that over-reliance on summarisation features could erode professional judgment and the nuanced, interpretive nature of social work documentation. 

The report notes that in early 2025, one AI transcription tool was already in active use by 85 local authorities for social care. But the Ada Lovelace Institute criticises the “limited and light-touch” approaches to ethics, evaluation, testing, regulation, and risk mitigation so far. It has called for more robust safeguards, better guidance and thorough evaluation before wider use. 

Recommendations 

To ensure the safe and responsible use of AI transcription tools, the Institute urged the government to require local authorities to document their use of such tools through the ‘Algorithmic Transparency Reporting Standard.’ 

It also recommended that social care regulators and local authorities collaborate with relevant sector bodies to develop guidance on using AI transcription tools in statutory processes and formal proceedings, supported by clear accountability structures. 

The Institute added that: ‘To enable end-to-end accountability, regulators and professional bodies should review and revise rules and guidance on professional ethics for social workers and support social workers to collaborate with legal and advisory bodies around procedures for AI use in formal proceedings. An advisory board comprised of people with lived experience of drawing on care should be established to inform these actions.’ 

Further recommendations include: 

  • The UK government should extend its pilots of AI transcription tools to include various locations and public sector contexts. 
  • The UK government should set up a What Works Centre for AI in Public Services to generate and synthesise learnings from pilots and evaluations. 
  • A coalition of researchers, policymakers, civil society and community groups should collaborate on research on the systemic impacts of AI transcription tools. 
  • Local authorities should specify their outcomes and expected impact when procuring AI transcription tools to ensure a shared understanding among staff and users. 

The UK GDPR Angle 

The use of AI-powered transcription software will involve processing highly sensitive personal data, including audio recordings and derived transcripts/summaries of conversations involving vulnerable individuals. This triggers UK GDPR obligations, with heightened risks due to the sensitive nature of the data and the potential for harm if errors occur.

Local authorities and social care providers should integrate UK GDPR compliance into procurement, deployment, and ongoing use of AI transcription software. Key practical steps include: 

  • Conduct a DPIA:  Before rollout or expansion, complete a Data Protection Impact Assessment to assess all the risks (e.g., hallucinations affecting accuracy, bias in diverse accents/dialects, unauthorised access). Update DPIAs for new tools or features. Involve the organisation’s Data Protection Officer from the outset. 
  • Choose compliant tools and vendors: Prioritise tools with strong data protection (e.g. UK-hosted data, no unnecessary retention, robust security). Review vendor DPIAs, processor agreements, and compliance certifications.  
  • Establish clear consent and transparency processes: Inform service users upfront about recording, AI involvement, and data use (via privacy notices or verbal explanation). Document decisions and allow opt-outs where appropriate. 
  • Implement strong human oversight and review: Mandate thorough checks of all AI outputs before approving records. Train staff to detect inaccuracies, bias, or inappropriate content. Flag AI-generated sections (e.g. via watermarks or metadata) for transparency and future audits – see the sketch after this list. 
  • Secure data handling and contracts: Use encrypted recording/uploading, limit data shared with tools and delete audio promptly after transcription. Ensure processor contracts (Article 28) specify UK GDPR compliance, audit rights and breach notification. 
  • Monitor, audit and train: Regularly audit tool use and outputs for compliance. Provide targeted training on UK GDPR risks (e.g. accuracy, breaches, bias). Track incidents (e.g. hallucinations) and report serious ones as breaches if required. 
  • Define boundaries for use: Establish consensus on when AI transcription is appropriate (or unacceptable).  
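As a purely illustrative follow-up to the “flagging” step above, the sketch below shows one way a case-record entry could carry provenance metadata recording that a passage was AI-generated, which tool produced it and who reviewed it. The field names, the RecordEntry structure and the tool identifier are assumptions for the purpose of the example, not a prescribed schema or any vendor’s format.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class RecordEntry:
    """A case-record entry carrying provenance metadata for AI-generated text."""
    text: str
    ai_generated: bool  # True if the text originated from a transcription/summary tool
    tool_name: str      # hypothetical tool and version identifier
    reviewed_by: str    # the social worker who checked and approved the text
    reviewed_at: str    # ISO 8601 timestamp of the human review

def mark_ai_generated(text: str, tool_name: str, reviewer: str) -> RecordEntry:
    """Attach provenance metadata so audits can distinguish AI output from the worker's own words."""
    return RecordEntry(
        text=text,
        ai_generated=True,
        tool_name=tool_name,
        reviewed_by=reviewer,
        reviewed_at=datetime.now(timezone.utc).isoformat(),
    )

if __name__ == "__main__":
    entry = mark_ai_generated(
        text="Summary of home visit on 11 February (AI draft, corrected on review).",
        tool_name="example-scribe-tool v1.2",  # hypothetical identifier
        reviewer="j.smith",
    )
    # Stored as structured data, the flag survives export and supports later audits.
    print(json.dumps(asdict(entry), indent=2))
```

Keeping the flag as structured data, rather than only as a note in free text, makes it straightforward to audit how much of a record was AI-generated and whether every AI passage has a named reviewer.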

AI transcription offers clear benefits in reducing paperwork and freeing up social workers’ time for direct care. However, strong governance measures are needed to prevent dangerous inaccuracies from slipping into official records and to guard against biased or harmful decisions.

Listen to the Guardians of Data Podcast for the latest news and views on data protection, cyber security, AI and freedom of information. 

If you need to train your staff on responsible use of AI please get in touch to discuss our customised in house training. The following public courses may also interest you: 

AI and Information Governance: A one-day workshop examining the key data protection and IG issues when deploying AI solutions.

AI Governance Practitioner Certificate training programme: A four-day course providing a practical overview of how AI systems are developed, deployed, and regulated, with particular attention to risk, bias, and accountability.

New Podcast: The Grok AI Controversy 

Act Now is pleased to bring you episode 2 of a new podcast: Guardians of Data. This is a show where we explore the world of information law and information governance – from privacy and AI to cybersecurity and freedom of information. In each episode we will be speaking with experts and practitioners to unpack the big issues shaping the IG profession.

In the first episode, we were joined by Jon Baines, a Senior Data Protection Specialist at Mishcon de Reya LLP and the long-standing chair of NADPO. In a wide-ranging conversation, Jon shared his journey into IG, his advice for both new starters and seasoned professionals and his perspective on the future of the profession.

In Episode 2 we discuss the recent controversy around Grok AI. 

Grok, the AI chatbot developed by xAI and integrated into the social media platform X, has caught the attention of governments and regulators across the world after it was used to edit pictures of real women to show them in revealing clothes and suggestive poses. In the UK, Ofcom and the Information Commissioner’s Office have opened formal investigations – a significant step that signals how seriously AI-related risks are now being taken.

This controversy raises fundamental questions about how AI systems are designed and overseen and about whether existing laws and board-level oversight are keeping pace. In episode 2, we unpack these issues with the help of Lynn Wyeth, an expert in AI, data protection and responsible technology.  

Listen via this link or on your preferred podcast app. 
Available on Apple Podcasts, Spotify, and all major podcast platforms.

New Guardians of Data Podcast: In Conversation with Jon Baines 

Act Now is pleased to bring you the first episode of a new podcast: Guardians of Data. This is a show where we explore the world of information law and information governance – from privacy and AI to cybersecurity and freedom of information. In each episode we will be speaking with experts and practitioners to unpack the big issues shaping the IG profession.

In information governance, there’s no substitute for learning from those who have walked the path before us. Experienced IG leaders bring a wealth of knowledge from years at the frontline of data protection and information rights – navigating challenges, overcoming obstacles, and shaping best practice along the way. By listening to their stories, we can all grow in confidence and prepare for the IG challenges of tomorrow. 

In the first episode, we are joined by one such IG leader. Jon Baines is a Senior Data Protection Specialist at Mishcon de Reya LLP where he advises on complex data protection and FOI matters. Jon isn’t a lawyer in the traditional sense yet is listed in Legal 500 as a “Rising Star” in the Data Protection, Privacy and Cybersecurity category. Jon is the long-standing chair of the National Association of Data Protection and Freedom of Information Officers (NADPO). He is regularly sought for comment by specialist and national media and writes extensively on data protection matters.

In our conversation, Jon shares his journey into IG, his advice for both new starters and seasoned professionals and his perspective on the future of the profession. 

Listen via the player below, or on your preferred podcast app.
Available on Apple Podcasts, Spotify, and all major podcast platforms.

Do Tennis Players Have a Right to Privacy?

John McEnroe is remembered for his on-court outbursts almost as much as for his exquisite shot-making. “You cannot be serious!” is an instantly recognisable sporting catchphrase. When McEnroe was at the height of his career in the 1980s, tennis players’ behaviour was scrutinised almost exclusively through on-court broadcast cameras. What happened off court largely remained unseen. 

Today, tennis, alongside other elite sports, is an environment of continuous monitoring; players are filmed arriving, warming up, competing and exiting. Visibility is a structural feature of the modern sports industry, justified for enhancing fan engagement and serving security, integrity and officiating purposes. But where should the balance lie when such footage reveals players’ emotional states – be it anger, distress or vulnerability? 

This question came up this week when a tennis player, Coco Gauff, called for greater privacy after footage emerged of her smashing her racquet following her Australian Open quarter-final defeat. Crucially, the incident did not occur on court. Gauff was filmed in the players’ area by behind-the-scenes cameras, with the footage later broadcast on television and circulated widely on social media. Gauff said she had made a conscious effort to suppress her emotions until she believed she was away from public view, referencing a similar incident at the 2023 US Open when Aryna Sabalenka was filmed smashing her racquet after losing the final. Since 2019, the Australian Open has shown footage from the players’ zone beneath the Rod Laver Arena, including the gym, warm-up areas and corridors leading from locker rooms. Camera access in these spaces is more restricted at the other Grand Slams.  

Gauff is not alone in raising concerns about behind-the-scenes cameras. Six-time major champion Iga Świątek said this week players are being watched “like animals in the zoo” in Melbourne. Semi-finalist Jessica Pegula described the constant filming as an “invasion of privacy”, adding that players feel “under a microscope constantly”. Tournament organisers, Tennis Australia, responded by emphasising fan engagement, saying the cameras help create a “deeper connection” between players and audiences while insisting that player comfort and privacy remain a priority. 

From a legal perspective, this issue is not merely a matter of optics. Under modern data-protection regimes such as the GDPR and the Australian Privacy Act, video footage of identifiable athletes constitutes personal data. Where that footage reveals emotional states it becomes particularly sensitive. Organisers must therefore be able to justify not only collecting such footage, but retaining, broadcasting and amplifying it. That justification is relatively straightforward during live play, where filming is integral to the sport itself. It becomes much harder once the match has ended. Filming in player tunnels, medical areas or immediately after defeat may be defensible for security or safety reasons. But the retention and circulation of emotionally charged moments for entertainment value sits on far shakier legal ground.  

Players may agree to extensive filming as a condition of participation, but that agreement does not extinguish their broader privacy rights, particularly where footage is used in a way that is disproportionate, stigmatising or disconnected from its original purpose. This tension is becoming harder to ignore as governing bodies simultaneously emphasise mental health and player welfare while permitting practices that expose athletes’ most vulnerable moments to global audiences. 


This and other data protection developments will be discussed in detail in our forthcoming GDPR Update workshop.

Filming People in Public for Social Media: Is it time for a new law?

In the content creator world, filming people without their consent has become everyday behaviour. From TikTok nightlife clips to YouTube street pranks, millions of people capture others in public places and post the footage online. Whether it is for likes, shares or monetisation, this behaviour is not without consequences for the creators as well as the subjects. Over the weekend the BBC ran a story about two women whose interactions with ‘friendly strangers’ were uploaded to social media causing the women much alarm and distress. 

Dilara was secretly filmed in a London store where she works, by a man wearing smart glasses. The footage was then posted to TikTok, where it received 1.3 million views. Dilara then faced a wave of unwanted messages and calls. It later turned out that the man who filmed her had posted dozens of similar videos, giving men tips on how to approach women. Another woman, Kim, was filmed last summer on a beach in West Sussex by a different man wearing smart sunglasses. Kim, who was unaware she was being filmed, chatted with him about her employer and family. Later, the man posted two videos online, under the guise of dating advice, which received 6.9 million views on TikTok and more than 100,000 likes on Instagram.

The Law 

UK law does not expressly prohibit filming or photographing people in public places, unlike other jurisdictions such as the UAE, Greece and South Korea (see the recent case of the jailed American YouTuber).
However, a number of legal issues arise once such footage is uploaded, particularly where it is intrusive, monetised or causes harm.

Although being in public generally reduces people’s privacy expectations, the UK courts have recognised that privacy rights can still arise in public places. Filming may become unlawful where it captures people in sensitive or intimate situations, such as medical emergencies, emotional distress or vulnerability.
The manner of filming, the focus on the individual, and the purpose of publication are all relevant factors in deciding whether the subject’s privacy has been violated.

Back in 2003, in a landmark decision, the European Court of Human Rights ruled that a British man’s right to respect for his private life (Article 8 of the European Convention on Human Rights) was violated when CCTV footage of him attempting suicide was released to the media. The case was brought by Geoffrey Peck, who, on the evening of 20th August 1995 and while suffering from depression, walked down Brentwood High Street in Essex with a kitchen knife and attempted suicide by cutting his wrists. He was unaware that he had been filmed by a CCTV camera installed by Brentwood Borough Council. The court awarded Mr Peck damages of £7,800. In recent years, media coverage has highlighted situations where women were filmed on nights out and the footage uploaded online. While the filming occurred in public, the intrusive nature of the footage and the harm caused can give rise to privacy claims.

Victims of secret filming have a direct cause of action in the tort of misuse of private information, developed by the courts in Campbell v MGN Ltd [2004] UKHL 22. This case was about the supermodel Naomi Campbell who successfully sued the Daily Mirror for publishing photos of her attending a Narcotics Anonymous meeting on The King’s Road in London. The court said that in such cases the test is whether the individual had a reasonable expectation of privacy in the circumstances, and if so, whether that expectation is outweighed by the publisher’s right to freedom of expression under Article 10 of the ECHR.  

Data Protection 

When a person is identifiable in a video, that footage constitutes personal data within the meaning of the UK General Data Protection Regulation (UK GDPR). Publishing such footage online involves ‘processing’ personal data and brings the UK GDPR’s obligations into play. The ‘controller’ has a wide range of obligations, including having a lawful basis for processing, complying with the principles of fairness and transparency, and respecting data subjects’ (the victims’) rights, which include the rights to object and to erasure.

Content creators and influencers sometimes assume they come under the ‘domestic purposes exemption’ in Article 2(2)(c) UK GDPR. However, this exemption is narrow and does not usually apply where content is shared publicly, monetised, or used to build an online following.  

Failure to comply with the UK GDPR could (at least in theory) lead to enforcement action by the Information Commissioner which could include a hefty fine. Article 82 of the UK GDPR gives a data subject a right to compensation for material or non-material damage for any breach of the UK GDPR. Section 168 of the Data Protection Act 2018 confirms that ‘non-material damage’ includes distress. 

Harassment  

Even where filming in public is lawful in isolation, repeated or targeted filming can amount to harassment or stalking. Section 1 of the Protection from Harassment Act 1997 prohibits a course of conduct that amounts to harassment and which the defendant knows or ought to know causes alarm or distress. Filming someone repeatedly, following them, or persistently targeting them for online content may satisfy this test. In 2024 a man was arrested by Greater Manchester Police on suspicion of stalking and harassment after filming women on nights out and uploading the videos online. The arrest was based not on public filming alone, but on the cumulative effect of the conduct and the harm caused. 

Individuals who discover that a video of them has been published online without consent can make a direct request to the creator to remove the footage, particularly where it causes distress or raises privacy concerns. If this is unsuccessful, most social media platforms offer reporting mechanisms for privacy violations, harassment, or non-consensual content. Videos are often removed by the platforms following complaints. Other civil remedies may also be available including defamation where footage creates a false and damaging impression.  

A New Law?

Despite the growing prevalence of filming strangers in public for social media content, there remains no single, specific piece of legislation in the UK to govern this area. Instead, there is a patchwork of laws including privacy law, the UK GDPR and harassment legislation; to name a few. While these laws can sometimes provide protection, they were not designed with the modern social media ecosystem in mind and often struggle to respond effectively to the scale, speed, and commercial incentives of online content creation.

Furthermore, civil actions are expensive and it is difficult to get Legal Aid for such claims. Victims are left to navigate for themselves complex legal doctrines such as ‘reasonable expectation of privacy’ or ‘lawful basis for processing’. While police involvement may be appropriate in extreme cases, many videos fall short of criminal thresholds yet still cause significant distress and reputational damage.

Is it time for a new, specific statutory framework addressing non-consensual filming (and publication) in public spaces? Such a law could provide clearer boundaries, simpler remedies and more accessible enforcement mechanisms, while balancing legitimate freedoms of expression and journalism. Let us know your thoughts in the comments section.

This topic was discussed in detail in Episode 6 of the Guardians of Data Podcast (see below)

A Pinch of GDPR: Gregg Wallace Serves Up a Data Rights Claim 

Gregg Wallace, the former MasterChef presenter, has issued proceedings against the BBC and BBC Studios for failing to respond to his subject access requests (SARs) in accordance with the UK GDPR. Wallace was sacked by the BBC in July following an inquiry into alleged misconduct. As the saying goes, “Revenge is a dish best served cold!”

Background 

According to court documents, seen by the PA news agency, in March 2025 Wallace made SARs to the BBC and its subsidiary BBC Studios for all personal data held about him. Both requests related to his “work, contractual relations and conduct” spanning 21 years. 

The BBC acknowledged the request and deemed it “complex”. They probably invoked Article 12(3) of the UK GDPR, which allows a Data Controller to extend the one-month SAR time limit by a further two months where necessary “taking into account the complexity and number of the requests.” By August, the BBC had apologised for the delay and said it was taking “reasonable steps” to process the request, but still no data had been provided. BBC Studios, meanwhile, said it would withhold parts of the data because of “freedom of expression.”

The court documents assert that the defendants had “wrongly redacted” information and had “unlawfully failed to supply all of the claimant’s personal data”. Wallace seeks “up to £10,000” for distress and harassment and an order compelling both entities to comply with his SARs.   

Freedom of Expression Exemption 

BBC Studios’ reliance on “freedom of expression” invites scrutiny. The exemption in Schedule 2 Part 5 of Data Protection Act 2018 (DPA 2018) applies only to personal data processing carried out for the special purposes (journalistic, artistic, academic, or literary)  and only so far as compliance would be incompatible with those purposes. 

The special purposes exemption is interpreted quite narrowly by the courts. If the withheld data consists of production notes, editorial discussions, or source material for broadcast, BBC Studios’ argument has force. But if the data relates to HR investigations, conduct complaints, or contractual matters, the processing is unlikely to be “journalistic”.  

Distress and Damages 

Article 82 UK GDPR gives a data subject a right to compensation for material or non-material damage for any breach of the UK GDPR. Section 168 of the DPA 2018 confirms that “non-material damage” includes distress. However, the relevant case law shows (1) the courts distinguishing trivial upset from genuine distress and (2) modest damages being awarded. A long delay in responding to a SAR, especially in the midst of reputational damage, is not trivial. However, if Wallace is successful in his claim he is unlikely to be awarded anything close to £10,000: typical awards for emotional harm in data-rights breaches sit between £500 and £2,500. (The excellent Panopticon blog is a must-read for anyone needing help in navigating causation and quantum in such cases.) Furthermore, by limiting his claim to £10,000, Wallace’s case will probably be allocated to the Small Claims track, where minimal costs are recoverable.

ICO Action 

This court action by Gregg Wallace may also draw the attention of the Information Commissioner’s Office (ICO). In March 2025, the ICO issued reprimands to two Scottish councils for repeatedly failing to respond to SARs within the statutory timeframe. There is also the theoretical possibility of a criminal prosecution if the ICO, upon investigation, finds that the BBC has deliberately frustrated the requests.
 
Section 173 of the DPA 2018 makes it a criminal offence, where a person has made a SAR, to “alter, deface, block, erase, destroy or conceal information with the intention of preventing disclosure of all or part of the information that the person making the request would have been entitled to receive.” In September, Jason Blake, the director of a care home in Bridlington, was found guilty of an offence under S.173. The court ordered him to pay a fine of £1,100 and additional costs of £5,440.

Other Celebrity SARs 
 
This is not the first time a primetime BBC show has crossed paths with the GDPR. A few years ago, some celebrity contestants on Strictly Come Dancing alleged mistreatment by professional dancers and production staff. Lawyers acting on behalf of one of the dancers at the centre of the allegations made a GDPR subject access request for, amongst other things, “all internal BBC correspondence related to the issue, including emails and text messages”.

In July 2023, Dame Alison Rose, the then CEO of NatWest, resigned after Nigel Farage made a SAR which disclosed information that contradicted the bank’s justification for downgrading his account. There is potentially more SAR court drama to come. In March, the campaign group Good Law Project (GLP) “filed a trailblazing new group action” against Farage’s Reform UK at the High Court. GLP claims that Reform failed to comply with a number of SARs and is seeking damages on behalf of the data subjects.

Whilst Gregg Wallace’s case is unlikely to result in a groundbreaking legal judgment or a headline-making damages award, high-profile celebrities pursuing data protection claims are always a welcome development. They help raise awareness of data rights and, conveniently, give information governance professionals a perfect excuse to indulge in a reality TV binge, just in case any other interesting data protection issues arise!

Our How to Handle a Subject Access Request workshop will help you navigate complex Subject Access Requests.