Children’s Privacy Failures Result in a £14.47m Fine for Reddit

Safeguarding children’s privacy is a key enforcement priority for the Information Commissioner’s Office (ICO). It is also one of the ICO’s duties under the Online Safety Act, alongside Ofcom.

In March 2025, the ICO announced three investigations looking into how TikTok, Reddit and Imgur (an image sharing and hosting platform) protect the privacy of their child users in the UK. The investigations into Imgur and Reddit specifically focussed on how the platforms use UK children’s personal data and their use of age assurance measures. 

Article 8(1) of the UK GDPR sets out the general rule that when a Data Controller is offering an “information society service” (e.g. a social media app or gaming site) directly to a child, and is relying on consent as its lawful basis for processing, only a child aged 13 or over can provide their own consent. For a child under 13, the Data Controller must seek consent from whoever holds parental responsibility. Article 8(2) further states: 

“The controller shall make reasonable efforts to verify in such cases that consent is given or authorised by the holder of parental responsibility over the child, taking into consideration available technology.” 
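For illustration only, the Article 8 rule reads like a simple conditional. The sketch below is a hypothetical helper (the function name and structure are our own, not anything in the legislation), using the UK threshold of 13:

```python
UK_DIGITAL_CONSENT_AGE = 13  # Article 8(1) UK GDPR threshold

def who_must_consent(age: int) -> str:
    """Return who must give consent where an information society service
    is offered directly to a child and consent is the lawful basis."""
    if age >= UK_DIGITAL_CONSENT_AGE:
        # A child aged 13 or over can give their own consent
        return "the child"
    # Under 13: consent must come from (and the controller must make
    # reasonable efforts to verify) a holder of parental responsibility
    return "a holder of parental responsibility"
```

In practice, the difficulty is rarely the rule itself but the age assurance needed to know which branch applies to a given user.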

Earlier this month MediaLab.AI, Inc. (MediaLab), owner of Imgur, was fined £247,590 for processing children’s personal data in ways that breached the UK GDPR. Imgur’s terms of use stated that children under 13 could only use the platform with parental supervision. However, the ICO investigation found that MediaLab did not implement any form of age assurance to determine the age of Imgur users and had no measures in place to obtain parental consent where children under 13 used the platform. 

Yesterday the ICO announced that Reddit has now been fined £14.47m under the UK GDPR. The circumstances of the fine are very similar to MediaLab’s. In summary: 

  • Reddit’s terms of service prohibited children under 13 years of age from using its platform, but despite that it had no measures in place to check the age of users accessing the platform until July 2025. 
  • The ICO’s estimates indicated that there were a large number of children under 13 on the platform and Reddit did not have a lawful basis for processing their personal data. 
  • Reddit had not completed a Data Protection Impact Assessment focusing on the risks of using children’s personal data before January 2025, even though children between 13 and 18 were allowed to use the platform. 
  • By using under-13s’ personal data without a lawful basis, and without properly considering the risks to children more generally, Reddit put children at risk of exposure to inappropriate and harmful content on its platform. 

We are waiting for the ICO to publish the Monetary Penalty Notices in relation to Reddit and MediaLab. In the case of the latter, the ICO said at the time that it was still considering the redaction of personal and commercially confidential or sensitive information. 

The ICO’s investigation into TikTok is still ongoing. It is considering how the platform uses personal data of 13–17-year-olds in the UK to make recommendations to them and deliver suggested content to their feeds. This is in the light of growing concerns about social media and video sharing platforms using data generated by children’s online activity in their recommender systems, which could lead to them being served inappropriate or harmful content.  

The ICO is also investigating 17 other platforms, including Discord, Pinterest, and X, and has been in discussions with Meta and Snapchat over how they use children’s location data in their user map features. Watch this space! 

The Data (Use and Access) Act 2025, most of which came into force earlier this month, explicitly requires those who provide an online service that is likely to be used by children to take children’s needs into account when deciding how to use their personal data. 

Listen to the Guardians of Data Podcast for the latest news and views on data protection, cyber security, AI and freedom of information.  

This and other developments relating to children’s data will be covered in our forthcoming workshop, Working with Children’s Data.

AI Transcription Tools in Social Work Under Scrutiny 

Anyone remember Dragon Dictate? The first versions of this voice transcription software required users to spend hours training it (usually wearing a headset) by repeating stock phrases many times over. Even after full training, the transcription output was far from accurate. How technology has moved on, especially in the last few years, with the proliferation of AI. 

AI-powered transcription software has been rapidly adopted by public sector organisations, especially in local authority social work departments. Tools like Magic Notes and Microsoft Copilot are used by social workers to record conversations with children and families (e.g. interviews or assessments), transcribe spoken audio into text and generate summaries automatically. These “ambient scribes” listen in real time or process recordings, reducing the need for manual note-taking and allowing professionals to focus on interactions rather than documentation. However, the use of such tools, especially in sensitive contexts like social work, is not without risks, as was highlighted by a recent report. 

Ada Lovelace Institute Report 

On 11th February 2026, the Ada Lovelace Institute published a report titled “Scribe and prejudice? Exploring the use of AI transcription tools in social care.” The report explored the dynamics of adoption and the impacts of AI transcription tools in adult and children’s social care across 17 local authorities in England and Scotland. Based on interviews with frontline social workers and managers, it highlighted serious risks that should be addressed by users.  

These include, amongst others: 

AI “Hallucinations”: The AI sometimes generates false information that wasn’t said in the recorded conversation. A prominent example involved an AI-generated summary incorrectly stating that a child had expressed suicidal ideation. This kind of error is especially dangerous in child protection or mental health contexts, where it could trigger unnecessary interventions or lead to flawed decisions about care. 

Gibberish, misrepresentations, and other errors: AI generated transcripts have included nonsense phrases, misspelled names, incorrect speaker attributions (especially in multi-person conversations), fabricated statements, irrelevant or foul language insertions and overly formal or academic wording that doesn’t reflect normal social work language. 

Bias and Harmful Stereotyping: Some outputs have reportedly promoted stereotypes or biased perceptions of individuals that weren’t present in the original recording. 

These issues echo broader AI concerns but of course are more serious in the context of social work records. Inaccuracies entering official care records could lead to incorrect decisions about a child’s safety, family support, or adult care; potentially resulting in harm to vulnerable people, professional consequences for social workers or even legal liability. 

Social workers generally bear full responsibility for reviewing and approving these AI outputs (the “human in the loop” safeguard), but practices vary widely according to the report. Some social workers spend minutes checking AI output whilst others spend hours. The report questions how effective this is in high-pressure frontline environments. There is also concern that over-reliance on summarisation features could erode professional judgment and the nuanced, interpretive nature of social work documentation. 

The report notes that in early 2025, one AI transcription tool was already in active use by 85 local authorities for social care. But the Ada Lovelace Institute criticises the “limited and light-touch” approaches to ethics, evaluation, testing, regulation, and risk mitigation so far. It has called for more robust safeguards, better guidance and thorough evaluation before wider use. 

Recommendations 

To ensure the safe and responsible use of AI transcription tools, the Institute urged the government to require local authorities to document their use of such tools through the ‘Algorithmic Transparency Reporting Standard.’ 

It also recommended that social care regulators and local authorities collaborate with relevant sector bodies to develop guidance on using AI transcription tools in statutory processes and formal proceedings, supported by clear accountability structures. 

The Institute added that: ‘To enable end-to-end accountability, regulators and professional bodies should review and revise rules and guidance on professional ethics for social workers and support social workers to collaborate with legal and advisory bodies around procedures for AI use in formal proceedings. An advisory board comprised of people with lived experience of drawing on care should be established to inform these actions.’ 

Further recommendations include: 

  • The UK government should extend its pilots of AI transcription tools to include various locations and public sector contexts. 
  • The UK government should set up a What Works Centre for AI in Public Services to generate and synthesise learnings from pilots and evaluations. 
  • A coalition of researchers, policymakers, civil society and community groups should collaborate on research on the systemic impacts of AI transcription tools. 
  • Local authorities should specify their outcomes and expected impact when procuring AI transcription tools to ensure a shared understanding among staff and users. 

The UK GDPR Angle 

The use of AI-powered transcription software will involve processing highly sensitive personal data, including audio recordings and derived transcripts/summaries of conversations involving vulnerable individuals. This triggers UK GDPR obligations, with heightened risks due to the sensitive nature of the data and the potential for harm if errors occur. 

Local authorities and social care providers should integrate UK GDPR compliance into procurement, deployment, and ongoing use of AI transcription software. Key practical steps include: 

  • Conduct a DPIA:  Before rollout or expansion, complete a Data Protection Impact Assessment to assess all the risks (e.g., hallucinations affecting accuracy, bias in diverse accents/dialects, unauthorised access). Update DPIAs for new tools or features. Involve the organisation’s Data Protection Officer from the outset. 
  • Choose compliant tools and vendors: Prioritise tools with strong data protection (e.g. UK-hosted data, no unnecessary retention, robust security). Review vendor DPIAs, processor agreements, and compliance certifications.  
  • Establish clear consent and transparency processes: Inform service users upfront about recording, AI involvement, and data use (via privacy notices or verbal explanation). Document decisions and allow opt-outs where appropriate. 
  • Implement strong human oversight and review: Mandate thorough checks of all AI outputs before approving records. Train staff to detect inaccuracies, bias, or inappropriate content. Flag AI-generated sections (e.g. via watermarks or metadata) for transparency and future audits. 
  • Secure data handling and contracts: Use encrypted recording/uploading, limit data shared with tools and delete audio promptly after transcription. Ensure processor contracts (Article 28) specify UK GDPR compliance, audit rights and breach notification. 
  • Monitor, audit and train: Regularly audit tool use and outputs for compliance. Provide targeted training on UK GDPR risks (e.g. accuracy, breaches, bias). Track incidents (e.g. hallucinations) and report serious ones as breaches if required. 
  • Define boundaries for use: Establish consensus on when AI transcription is appropriate (or unacceptable).  

AI transcription offers clear benefits for reducing paperwork and freeing up social workers’ time for direct care. However, strong governance measures must be taken to avoid dangerous inaccuracies slipping into official records, and the potential for biased or harmful decisions. 


If you need to train your staff on responsible use of AI please get in touch to discuss our customised in house training. The following public courses may also interest you: 

AI and Information Governance:  A one day workshop examining the key data protection and IG issues when deploying AI solutions.  

AI Governance Practitioner Certificate training programme: A four day course providing a practical overview of how AI systems are developed, deployed, and regulated, with particular attention to risk, bias, and accountability. 

Act Now Nominated for IRMS Supplier of the Year Award 

Act Now Training is pleased to announce that it has been nominated for the 2026 Information and Records Management Society (IRMS) awards. 

Each year the IRMS recognises excellence in the field of information management with their prestigious industry awards. These highly sought-after awards are presented at a glittering ceremony at the annual Conference following the Gala Dinner.  

Act Now has been nominated for the Supplier of the Year award which it previously won in 2021, 2022 and 2024. 

Voting is open to IRMS members until Wednesday 18th March 2026. 

If you are an IRMS member, you can log in to your account and vote for Act Now here.

Thank you for your support!


New Podcast: The Grok AI Controversy 

Act Now is pleased to bring you episode 2 of a new podcast, Guardians of Data. This is a show where we explore the world of information law and information governance – from privacy and AI to cybersecurity and freedom of information. In each episode we speak with experts and practitioners to unpack the big issues shaping the IG profession. 

In the first episode, we were joined by Jon Baines, a Senior Data Protection Specialist at Mishcon de Reya LLP and the long-standing chair of NADPO. In a wide ranging conversation, Jon shared his journey into IG, his advice for both new starters and seasoned professionals and his perspective on the future of the profession. 

In Episode 2 we discuss the recent controversy around Grok AI. 

Grok, the AI chatbot developed by xAI and integrated into the social media platform X, has caught the attention of governments and regulators across the world after it was used to edit pictures of real women to show them in revealing clothes and suggestive poses. In the UK, Ofcom and the Information Commissioner’s Office have opened formal investigations, a significant step that signals how seriously AI-related risks are now being taken. 

This controversy raises fundamental questions about how AI systems are designed and overseen and about whether existing laws and board-level oversight are keeping pace. In episode 2, we unpack these issues with the help of Lynn Wyeth, an expert in AI, data protection and responsible technology.  

Listen via this link or on your preferred podcast app. 
Available on Apple Podcasts, Spotify, and all major podcast platforms.

Data Protection Complaints Procedure: New ICO Guidance 

The main changes to the UK data protection regime made by the Data (Use and Access) Act 2025 (DUA Act) came into force on Thursday 5th February 2026. One key provision, though, is due to commence on 19th June 2026: the requirement for Data Controllers to have a procedure for handling data protection complaints. 

A new section 164A, inserted into the Data Protection Act 2018, requires Data Controllers to: 

  • give Data Subjects a way of making data protection complaints; 
  • acknowledge receipt of complaints within 30 days of receiving them; 
  • without undue delay, take appropriate steps to respond to complaints, including making appropriate enquiries, and keep Data Subjects informed; and 
  • without undue delay, tell Data Subjects the outcome of their complaints. 
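The only hard deadline in the list above is the 30-day acknowledgement window. The sketch below is a hypothetical helper (our own naming, and it assumes calendar days) that computes the latest acknowledgement date from the date a complaint is received:

```python
from datetime import date, timedelta

ACK_WINDOW_DAYS = 30  # section 164A: acknowledge within 30 days of receipt

def acknowledgement_due(received: date) -> date:
    """Latest date by which a data protection complaint must be acknowledged."""
    return received + timedelta(days=ACK_WINDOW_DAYS)

# e.g. a complaint received on 19 June 2026 must be acknowledged
# by 19 July 2026 at the latest
```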

Following a consultation, which closed in October last year, the ICO has published its guidance explaining the new requirements and informing Data Controllers of what they must, should and could do to comply.  

Data protection expert, and guest on the first Guardians of Data podcast, Jon Baines writes on his personal blog that in declining to suggest how long controllers should normally take to respond to data subject complaints, the ICO has missed an opportunity to provide regulatory clarity.  


If you are looking to implement the changes made by the DUA Act to the UK data protection regime, consider our very popular half day workshop.  

The newly updated UK GDPR Handbook (2nd edition) includes all amendments introduced by the DUA Act, with colour-coded changes for easy navigation and links to relevant recitals, ICO guidance, and caselaw that help make sense of the reforms in context. We have included relevant provisions of the amended DPA 2018 to support a deeper understanding of how the laws interact.

Cyber Security and Resilience Bill in Parliament 

On 12th November 2025, the Government introduced the Cyber Security and Resilience (Network and Information Systems) Bill in the House of Commons. This is an important development in the evolution of the UK’s cyber security regulation. The Bill is currently at the Committee stage.

The Bill was trailed in the King’s Speech of July 2024, and was followed by the Government publishing its Cyber security and resilience policy statement. The Bill is designed to update the existing Network and Information Systems Regulations 2018 to raise cyber resilience across key parts of the economy, and to give government and regulators more agile powers to respond to evolving threats. Amongst other things, it will expand cyber security regulation to cover more digital services and supply chains, and mandate increased incident reporting to improve the government’s response to cyber-attacks including where a company has been held to ransom. 

The Bill imposes new maximum penalties similar to GDPR levels. For more serious breaches, the maximum penalty is up to £17 million, or 4% of a regulated entity’s worldwide turnover, whichever is higher. For other breaches, the maximum penalty is up to £10 million, or 2% of a regulated entity’s worldwide turnover, whichever is higher. 
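The “whichever is higher” formulation means the effective cap scales with turnover for larger entities. A minimal sketch (hypothetical helper name; the fixed caps and percentages are as stated in the Bill):

```python
def max_penalty_gbp(worldwide_turnover_gbp: int, serious: bool) -> int:
    """Maximum penalty under the Bill: a fixed cap or a percentage of
    worldwide turnover, whichever is higher."""
    fixed_cap, percent = (17_000_000, 4) if serious else (10_000_000, 2)
    return max(fixed_cap, worldwide_turnover_gbp * percent // 100)

# For a serious breach by an entity with £1bn worldwide turnover,
# the cap is 4% of turnover (£40m) rather than the £17m fixed figure.
```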

Key Provisions 

Expanded Regulatory Scope: The Bill will broaden the range of organisations and sectors under regulatory oversight, extending beyond essential services and digital providers to include a wider array of entities integral to national infrastructure.

Enhanced Regulatory Powers: Regulators will receive increased authority to ensure compliance with cybersecurity standards, including proactive investigation capabilities and mechanisms for cost recovery to support their activities.

Mandatory Incident Reporting: The Bill mandates comprehensive reporting of cyber incidents, notably ransomware attacks, to improve national threat assessment and response strategies.

Supply Chain Security: The Bill introduces measures to strengthen supply chain security, granting regulators the power to designate ‘Critical Suppliers’ whose services are integral to public sector operations.

Regulatory Oversight: The Information Commissioner’s Office will gain greater authority to investigate and enforce compliance among digital service providers, including those that supply technology to the public sector. The ICO recently published its response to the Bill. 

For a detailed analysis of the Bill, read this article by law firm Clifford Chance. 

We have two workshops coming up (How to Increase Cyber Security in your Organisation and Cyber Security for DPOs) which are ideal for organisations who wish to upskill their employees about cyber security. See also our Managing Personal Data Breaches Workshop.

New Guardians of Data Podcast: In Conversation with Jon Baines 

Act Now is pleased to bring you the first episode of a new podcast, Guardians of Data. This is a show where we explore the world of information law and information governance – from privacy and AI to cybersecurity and freedom of information. In each episode we speak with experts and practitioners to unpack the big issues shaping the IG profession.

In information governance, there’s no substitute for learning from those who have walked the path before us. Experienced IG leaders bring a wealth of knowledge from years at the frontline of data protection and information rights – navigating challenges, overcoming obstacles, and shaping best practice along the way. By listening to their stories, we can all grow in confidence and prepare for the IG challenges of tomorrow. 

In the first episode, we are joined by one such IG leader: Jon Baines is a Senior Data Protection Specialist at Mishcon de Reya LLP, where he advises on complex data protection and FOI matters. Jon isn’t a lawyer in the traditional sense, yet is listed in Legal 500 as a “Rising Star” in the Data Protection, Privacy and Cybersecurity category. Jon is the long-standing chair of the National Association of Data Protection and Freedom of Information Officers (NADPO). He is regularly sought for comment by specialist and national media and writes extensively on data protection matters. 

In our conversation, Jon shares his journey into IG, his advice for both new starters and seasoned professionals and his perspective on the future of the profession. 

Listen via the player below, or on your preferred podcast app.
Available on Apple Podcasts, Spotify, and all major podcast platforms.

Do Tennis Players Have a Right to Privacy?

John McEnroe is remembered for his on-court outbursts almost as much as for his exquisite shot-making. “You cannot be serious!” is an instantly recognisable sporting catchphrase. When McEnroe was at the height of his career in the 1980s, tennis players’ behaviour was scrutinised almost exclusively through on-court broadcast cameras. What happened off court largely remained unseen. 

Today, tennis, alongside other elite sports, is an environment of continuous monitoring; players are filmed arriving, warming up, competing and exiting. Visibility is a structural feature of the modern sports industry, justified for enhancing fan engagement and serving security, integrity and officiating purposes. But where should the balance lie when such footage reveals players’ emotional states – be it anger, distress or vulnerability? 

This question came up this week when a tennis player, Coco Gauff, called for greater privacy after footage emerged of her smashing her racquet following her Australian Open quarter-final defeat. Crucially, the incident did not occur on court. Gauff was filmed in the players’ area by behind-the-scenes cameras, with the footage later broadcast on television and circulated widely on social media. Gauff said she had made a conscious effort to suppress her emotions until she believed she was away from public view, referencing a similar incident at the 2023 US Open when Aryna Sabalenka was filmed smashing her racquet after losing the final. Since 2019, the Australian Open has shown footage from the players’ zone beneath the Rod Laver Arena, including the gym, warm-up areas and corridors leading from locker rooms. Camera access in these spaces is more restricted at the other Grand Slams.  

Gauff is not alone in raising concerns about behind-the-scenes cameras. Six-time major champion Iga Świątek said this week players are being watched “like animals in the zoo” in Melbourne. Semi-finalist Jessica Pegula described the constant filming as an “invasion of privacy”, adding that players feel “under a microscope constantly”. Tournament organisers, Tennis Australia, responded by emphasising fan engagement, saying the cameras help create a “deeper connection” between players and audiences while insisting that player comfort and privacy remain a priority. 

From a legal perspective, this issue is not merely a matter of optics. Under modern data-protection regimes such as the GDPR and the Australian Privacy Act, video footage of identifiable athletes constitutes personal data. Where that footage reveals emotional states it becomes particularly sensitive. Organisers must therefore be able to justify not only collecting such footage, but retaining, broadcasting and amplifying it. That justification is relatively straightforward during live play, where filming is integral to the sport itself. It becomes much harder once the match has ended. Filming in player tunnels, medical areas or immediately after defeat may be defensible for security or safety reasons. But the retention and circulation of emotionally charged moments for entertainment value sits on far shakier legal ground.  

Players may agree to extensive filming as a condition of participation, but that agreement does not extinguish their broader privacy rights, particularly where footage is used in a way that is disproportionate, stigmatising or disconnected from its original purpose. This tension is becoming harder to ignore as governing bodies simultaneously emphasise mental health and player welfare while permitting practices that expose athletes’ most vulnerable moments to global audiences. 


This and other data protection developments will be discussed in detail on our forthcoming GDPR Update workshop. 

Who Guards Our Data? Responsibility, Trust, and the Reality of Data Protection 

Data protection is often framed as a question of compliance. Regulations, policies, and frameworks dominate much of the discussion. 

In practice, however, the most important questions are about responsibility, trust, and judgement. 

Every organisation that collects or uses personal data is, in effect, a custodian of that information. With that role comes an expectation: that personal data will be handled carefully, used appropriately, and respected as something that belongs to people, not systems. Meeting those expectations is rarely straightforward. 

Day-to-day data protection decisions are often made under pressure. They involve trade-offs, uncertainty, and situations where the law does not provide a simple or immediate answer. Legislation defines the boundaries, but it does not resolve every ethical or operational question organisations face. 

This is where many of the real challenges of data protection sit, in the grey areas between what is permitted and what is appropriate. 

Guardians of Data was created to explore this space. The podcast brings together people working in privacy and information governance to talk openly about the realities of responsible data use. Rather than focusing on theory or compliance checklists, the conversations centre on how decisions are made in real organisations, and how trust is maintained when handling personal data. 

Each episode is short and focused, examining judgement calls, ethical considerations, and the expectations placed on organisations entrusted with personal data. The aim is not to provide definitive answers, but to encourage thoughtful discussion about what good data stewardship looks like in practice. 

Guardians of Data is intended as a space for reflection and conversation for anyone navigating the responsibilities that come with using personal data in today’s digital environment.

Click below to listen to the podcasts.


Post Office Reprimand Following Horizon Data Breach 

You would think that the Post Office has learnt its lessons from the Horizon IT scandal, and of course would have taken extra care to ensure that the victims of the UK’s most widespread miscarriage of justice were not further harmed by its actions in dealing with the aftermath. Not so, judging by the Information Commissioner’s Office (ICO) announcement on Tuesday. 

The ICO has issued a reprimand to Post Office Limited following an ‘entirely preventable’ data breach which resulted in the unauthorised disclosure of personal data belonging to hundreds of postmasters who were the victims of the Horizon IT scandal.  The breach occurred when the Post Office’s communications team mistakenly published an unredacted version of a legal settlement document on its corporate website. The document contained the names, home addresses and postmaster status of 502 people who were part of group litigation against the organisation. The document remained publicly accessible for almost two months in 2024, before being removed following notification from an external law firm. 

During its investigation, the ICO found that the Post Office failed to implement appropriate technical and organisational measures to protect people’s personal data. There was a lack of documented policies or quality assurance processes for publishing documents on the Post Office website, as well as insufficient staff training, with no specific guidance on information sensitivity or publishing practices.  

In the ‘good old days’ such a data breach would have attracted a substantial fine, especially considering the impact on the victims described by their lawyers (‘the shock and anxiety of this incident cannot help but compound all of the adverse harms suffered by our clients as a result of the wider Horizon scandal’). Remember when the ICO fined the Cabinet Office £500,000 for disclosing the postal addresses of the 2020 New Year Honours recipients online? 

But we are in a new age of GDPR ‘enforcement’! The ICO says it had initially considered imposing a fine of up to £1.094 million on Post Office Limited. However, it did not consider that the data protection infringements identified reached the threshold of ‘egregious’ under its public sector approach, and a reprimand has been issued instead. This approach, which was extended recently after a two-year trial, ‘prioritises early engagement and other enforcement tools such as warnings, reprimands, and enforcement notices, while issuing fines for only the most egregious breaches in the public sector’, so says the ICO. Not everyone agrees. The law firm Handley Gill has just published an analysis of the ICO’s public sector approach trial and the new version of it, essentially concluding that reprimands unaccompanied by enforcement notices won’t achieve the stated objective of driving up data protection standards in the public sector. 

The ICO highlights the following key lessons from this reprimand: 

  • Establish clear publication protocols: Sensitive documents should go through a formal review and approval process before being published online. A multi-step sign-off process can help prevent errors. 
  • Understand the data you handle: Every team, especially those handling public-facing content, must be trained to recognise personal information and assess its sensitivity in context. This includes understanding the reputational and emotional impact of disclosure. 
  • Centralise and classify documents: Use secure, shared repositories with clear access controls and classification labels. Avoid reliance on personal storage systems such as OneDrive and Google Drive. 
  • Define roles and responsibilities: Ensure that everyone involved in publishing content understands their role and the checks required before publication. 
  • Tailor training to the task: General data protection training is not enough. Teams need specific guidance on publishing protocols, data classification, and risk awareness.  

This and other data protection developments will be discussed in detail on our forthcoming GDPR Update workshop. The new (2nd) edition of the UK GDPR Handbook has been published. It contains all the changes made by the Data (Use and Access) Act 2025.