20 Years of FOI: An Interview with Maurice Frankel  

It is more than 20 years since the Freedom of Information Act came into force. Now, more than ever, transparency is an important aspect of public life and, indeed, a democratic necessity.  

In Episode 3 of the Guardians of Data podcast we discussed these issues with our guest, Maurice Frankel OBE, director of the Campaign for Freedom of Information. 

The following is an abridged version of the podcast focusing on Maurice's reflections on 20 years of FOI.   

Question: What was life like before the Freedom of Information Act? How easy was it to obtain information from the public sector? 

Answer: It was extremely difficult in most cases, unless the information you were asking for was helpful to the public authority's position, in which case the authority would be prepared to release it. But if you asked for information which might question its position, then it was very difficult to get. Officials, council leaders and ministers would treat the information as if it was their own personal information, and they'd sometimes be affronted that you would even ask and expect that information to be disclosed. 

What were the other challenges in terms of getting the FOI Act onto the statute books? 

Well, the fact is, the government realised, and Tony Blair realised, once the legislation was going through Parliament, that this was something that would cause them problems. And it came to the point at which the government privately threatened to pull the FOI Bill from Parliament if further improvements to the bill were made during its parliamentary progress.  

Jack Straw, who was the Home Secretary and later the Justice Minister, confirmed this in his memoirs: the government actively considered dropping the FOI Bill for fear that it had gone too far, that it was providing too much openness. That explains why they put it off for so long. 

You mentioned the cost limit. There was a story recently about an author who had a number of FOI requests about Andrew Mountbatten Windsor refused on costs grounds. Do you think there’s a case here for the cost limit rules to be changed so FOI requests cannot be refused on the grounds of costs if there’s a strong public interest in disclosing the information? 

Well, I think there’s a good case for that. We argued for that when the FOI Bill was going through Parliament because it was obvious that you had an absolute limit on what could be disclosed, based essentially on the time needed to find it. And there was no way through that. And that limit applied in the same way to a request about the purchase of government stationery and to information the government held about a life threatening disease or potential pandemic. The case for treating those differently, and recognising the public interest in serious cases, I think is very strong. Now the government will argue that everybody will make a public interest case for disclosure. But everybody does make a public interest case for disclosure of information about commercial interests, law enforcement matters and so on. And the exemption does not collapse in every case simply because somebody makes that argument. It gives way when there is genuine evidence which justifies a disclosure of otherwise exempt information. I think the same could take place if there was a public interest test applying to the cost limit. 

You mentioned previously the power of inquiries to seek information from government. The Covid inquiries are ongoing. We’ve heard about the use of unofficial communications such as WhatsApp, Signal and Google Chat by ministers and advisers and, in some cases, their use of disappearing messages. What does that say to you about attitudes to transparency when it comes to the major decisions, particularly around Covid? 

Well, a chunk of the history will have been lost forever. It may be that enough has been recorded to make up for that in the main areas. But I think the use of auto-deletion in messaging software is a very unhealthy development. Even where officials need to use messaging software for efficiency purposes, they should not be able to use software which automatically deletes messages once they’ve been read. I think that is inimical to proper record keeping practices, to accountability and to the operation of the Freedom of Information Act. 

Do you think that the fallout from the Epstein Scandal and the Covid Inquiry so far, is going to lead to improvements in government transparency, or is it going to lead to more unrecorded decisions? 

Well, I think the surprising thing is that very embarrassing material has come out of the Post Office Inquiry; for example, about the real reasons for continuing with various practices, despite the fact that it was well known that the Post Office was subject to the Freedom of Information Act and was receiving Freedom of Information requests. So what is perhaps more surprising is how much of that information has survived, despite the existence of FOI. When the Act was being discussed in the early days, the government would argue that people would use post-it notes to record sensitive information, so that these could be pulled off the documents when an FOI request was received. They believed that the threat of disclosure would prevent anything significant, which could be embarrassing, being recorded in a permanent form at all, and that’s not proved to be the case. I think that is probably because officials recognise that they’re dealing with vast volumes of documents, and very few of those are ever requested under FOI. That means the ordinary incentive to carry on recording information in the ordinary way, and sending recorded information to colleagues in the ordinary way, carries on, despite what is in practice a hypothetical possibility of an FOI request being received at some later stage. So the information is not that vulnerable to pre-emptive destruction to prevent disclosure. I think that is perhaps a reassuring result of these inquiries. 

I agree with you, Maurice, that having had over twenty years of FOI, we are seeing the government disclosing more information, sometimes embarrassing as well and certainly the inquiry system is disclosing more information perhaps, than the Freedom of Information Act would have allowed. So together, I think I agree we have made progress. But do you think there is still room for improvement? Do you think certain public authorities need to improve more than others? 

Well, I think there’s room for improvement across the board. I think there’s a number of things. I think the first thing is, authorities are sometimes too keen to impute bad motive to a requester, just as requesters are sometimes too keen to impute bad motive to a public authority for withholding information.  

I think a second problem is that public authorities are not making proper use of Boolean searches. That is, they’re not searching for search term A combined with search term B, but excluding search term C. They are simply looking for hits under particular search terms and not intelligently using the ability that their systems, in many cases, must have to narrow the request by proper use of search language. So I think that needs to be looked at.  
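The combined search Maurice describes (term A AND term B, but NOT term C) can be sketched in a few lines of Python. The document titles and search terms below are purely illustrative; real records systems would use their own query syntax, but the Boolean logic is the same:

```python
def matches(text: str, required: list[str], excluded: list[str]) -> bool:
    """True if the text contains every required term and no excluded term.
    This is the Boolean pattern: A AND B AND NOT C."""
    t = text.lower()
    return (all(term in t for term in required)
            and not any(term in t for term in excluded))

# Hypothetical document titles, for illustration only
documents = [
    "Minutes: hospital funding review",
    "Email: hospital car park funding",
]

# Documents about hospital funding, excluding car-park material
hits = [d for d in documents if matches(d, ["hospital", "funding"], ["car park"])]
print(hits)  # ['Minutes: hospital funding review']
```

Searching only for individual hits on "hospital" or "funding" would return both documents; the exclusion term is what narrows the result set, which is exactly the capability Maurice suggests authorities are underusing.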

And I also think that the Act itself needs to be amended to address some of its shortcomings. Chief of those is the "reasonable" extension to consider the public interest test: the twenty working days is extendable by an unspecified reasonable period to consider the public interest test. I think that extension should be got rid of, just as the Environmental Information Regulations have got rid of it (and Scotland’s Freedom of Information Act has never adopted that approach). 

Where do you think FOI is going? If we get a change of government, do you think you’ll be back on the campaign trail trying to save FOI? 

Well, we are always aware of the fact that the Act could come under threat at any time. The number of times we have had to come in and defend the Act against attacks by, initially, the Blair Administration, then the Coalition Government and the Conservative Government, is remarkable.  

I mean, we had attempts to remove Parliament itself from the scope of the Act in the early days. There was an attempt to expand the cost limit so that the limit of effectively 18 or 24 hours of time spent looking for information would apply not to a single request, or to all similar requests within a sixty working day period, but to all requests by a requester to the same public authority, whether they were related or not. And not just from the same individual requester, but from the same organisation. So it would mean that major news organisations would be limited to one or two requests to the Home Office in a three month period, spread amongst all of their journalists. This was seriously put forward by the Blair Administration in the early days. And so I don’t underestimate the threat to FOI.  

The most recent serious threat we had was the government setting up the Independent Commission on Freedom of Information, in the mid-2010s, where the unspoken aim was to remove information about policy making from the scope of FOI altogether. We did a very detailed analysis of all Tribunal decisions relying on section 35 over, I think, a sixteen month period, and showed that in very many cases the exemption worked as the government had intended it to work. That is, it protected sensitive discussions from disclosure even after the decision had been taken. But in a number of cases where the public interest justified it, that information was disclosed, and the Tribunal accepted that that was the exemption and the public interest test working as they were supposed to, and that there should be no change to that position. I think that was a very important milestone for the Act, because it resulted in the government, before the final report was published, announcing that it hoped the Independent Commission would not require any weakening of the Freedom of Information Act, whereas a weakening of the Act had been the whole purpose of setting up the Commission. 

And just finally, some words of inspiration for our new professionals please Maurice. 

Try and understand what the rationale for bringing FOI in actually was: that openness serves the public interest. It serves the interest of accountability, it deters bad practice and it exposes unacceptable conduct. Those are all things which authorities should be endorsing, and FOI officers in particular should see that as the benefit of freedom of information. In my own experience, where I’ve been provided with information in the right spirit, it does change your view of the authority you’re dealing with. It makes you more willing to accept what they tell you, and more willing to have confidence in their decisions. It increases public trust in the organisation, which can only be a good thing.  

You can listen to the full Episode 3 podcast with Maurice here. 

The Information Commissioner Steps Aside (Temporarily)  

Five days ago, the Information Commissioner, John Edwards, posted on LinkedIn: 

“Colleagues and friends!👋🏻 I wanted to let you know that for the last few weeks I have voluntarily stepped aside from my duties at the ICO while an independent investigation into HR matters is undertaken. I am fully cooperating and engaged with the investigation and will report progress in due course.” 

Paul Arnold, CEO of the new (but not yet functioning) Information Commission, has assumed the role of Acting Information Commissioner.   

Edwards’ announcement has come as a surprise to ICO watchers. It was only issued after a POLITICO journalist made enquiries to the ICO regarding Edwards’ absence from work. Until then there was silence; not what you would expect from a statutory regulator in the area of, amongst other things, openness and transparency.  

Listen to the Guardians of Data Podcast for the latest news and views on data protection, cyber security, AI and freedom of information. 

New Podcast: Building Trustworthy and Responsible AI Systems

“Information governance professionals are the bedrock for deploying good governance of AI. We need to be there at the start of the actual thinking process.” 

Tahir Latif, Global Practice Lead for Data Privacy & Responsible AI at Cognizant 

The last two years have seen a massive increase in AI deployment. Previously the domain of science fiction, AI is now everywhere – in our workplaces, our personal lives, and in the systems that shape society, from healthcare to security and law enforcement. But alongside the opportunities, there are some big risks, including lack of accuracy and transparency as well as bias and discrimination. 

In this episode, we dive into one of the biggest questions of our time: How do we build trustworthy and responsible AI systems? 

To help us answer this question, we are joined by someone who is right at the heart of the conversation. Tahir Latif is a distinguished expert on building responsible and transparent AI systems. He is the Global Practice Lead for Data Privacy & Responsible AI at Cognizant, one of the largest global professional services companies. Tahir has led complex privacy and AI programmes across multiple industry sectors both in the UK and globally. He is also the Chief AI and Governance Officer and board member at the Ethical AI Alliance, a not for profit body which promotes ethical standards in AI development. Tahir is the co-author of Data Privacy – A Practical Handbook on Governance and Operation.

In this conversation, we explore how to cut through the complexity of ethical AI, what the future holds, and most importantly, what practical steps IG professionals can take to succeed in this new landscape. 

Listen on your preferred platform via our podcast page, or download the episode directly.

This podcast is sponsored by Phaselaw – a purpose-built solution for document disclosures, like subject access requests and FOI requests. Instead of redacting PDFs one by one, or forcing litigation software to do a job it wasn’t designed for, with Phaselaw you get collection, review, and redaction in one workflow. Teams across the World are using it to cut response times from weeks to days. 

For Guardians of Data listeners, Phaselaw is offering a two-month free trial; run it on live requests, see what it does to your backlog, decide from there. No card, no commitment. 

Head to https://www.phase.law/guardians to claim your free trial.  

Previous episodes of the Guardians of Data podcast have featured Naomi Mathews and Ibrahim Hasan explaining the law on filming people in public for social media, Maurice Frankel looking back at 20 years of the Freedom of Information Act, Olu Odeniyi analysing recent cyber breaches and discussing the lessons to learn and Raz Edwards talking about how to succeed as an IG leader. 

How to Succeed in Information Governance

Seasoned IG professionals offer invaluable advice, having tackled data protection hurdles and shaped best practices over years in the field. By listening to their journeys, new IG professionals can better prepare themselves to face tomorrow’s IG challenges with confidence. 

In Episode 1 of the Guardians of Data podcast our guest was Jon Baines, a senior data protection specialist at Mishcon de Reya LLP, a law firm where he advises on complex data protection and freedom of information matters. Jon isn’t a lawyer in the traditional sense, yet he has been listed in Legal 500 as a rising star in the data protection, privacy and cybersecurity category. Jon is also the long-standing chair of the National Association of Data Protection and Freedom of Information Officers.  

In the podcast, our conversation ranges widely: Jon’s route to the law, what sort of work a non-lawyer like him gets involved in at a law firm, whether young professionals need to or should qualify as solicitors in order to develop a career in information law, some of the specialisms and the history of Mishcon de Reya LLP, and the development of data protection in the age of AI. 

The following is an abridged version of the podcast focusing on Jon’s advice to IG professionals.  

Question: You’ve proved that you don’t need to be a lawyer to work at the cutting edge of information law. What skills or perspectives can non-lawyers bring that make them particularly valuable in this field? 

Answer: Critical thinking. I’m a big advocate for seeing both sides. I nearly always, when I approach a task or an instruction, think “if I were advising the other side, what would I be doing?” Because I think it’s really important that you don’t just see the positives on your side; that ability to see across the issue and be able to challenge yourself is important. And that’s part of critical thinking.  

In a lot of data protection matters, it’s important to remember that a data subject is all of us effectively; we are all data subjects. Data protection is about a fundamental right, let’s call it the right to respect for our personal information and a limited right to control that information. So a certain amount of empathy is important.  

It’s also important to understand how commerce works; data protection law doesn’t exist in a vacuum. As I say, it’s about us; it’s about our information. It’s also about how that information operates and can be used within a commercial world, a business world, a public service world. We don’t have a complete right to privacy, let alone privacy of our information. It’s a qualified right. So I think an understanding of business, and an understanding that business needs data in order to operate, is important. 

What is your advice for those who are new to the IG profession? 

I think one of the biggest skills you need is being able to be across the whole organisation that you work for. So don’t work in a silo. Your role might be part of Legal etc. but make sure that you get out and learn about your organisation. Make sure that people know who you are. It’s old fashioned internal networking, I guess. 

How should IG professionals position themselves to add value to AI projects? 

Well, it kind of makes me think of the old Data Protection Impact Assessment or prior to GDPR, when we called them privacy impact assessments. It’s not much use being part of that sort of project if you’re only brought in at the last moment. The whole idea of risk assessment is to assess in advance. So it’s important for IG professionals to remind those setting up AI projects that their input is needed from the start; indeed, even before a decision is taken to initiate a project. There are going to be few AI projects that will not involve data protection, in some way or another, or that don’t have the potential to do so in the future. So I think it’s as simple as that really. Try and make sure you’ve got your foot in the door at the start, because it’s going to be very difficult to do your job if you’re brought in at the last moment. 

If you could go back and give your younger self one piece of career advice, what would it be? 

I would probably tell myself that, just in the years after graduation, time goes quite quickly. And whilst I wouldn’t ever want to put pressure on my younger self, I think I would want to tell my younger self to “pull your socks up” a bit and start doing this sort of thing earlier. I think I drifted for a number of years and, as I get older, I increasingly find myself in this role of elder sage and telling young people, don’t waste time; it goes so quickly. 

How useful is NADPO in terms of professional development? 

NADPO is a venerable institution. It’s been going since 1993. We’re an association of information law professionals, and by that I mean there are DPOs, there are FOI officers, there are lawyers, there are some journalist members, academics etc. So everyone is welcome. We exist to support the profession by providing an opportunity to learn from experts (whilst we don’t do direct training). So for payment of what is a rather eccentric membership fee of £130 for two years, you get to attend our in-person events, including our annual conference, where we have seven or eight expert speakers talking on various areas of information law. We also have monthly webinars and a range of other member benefits. I’m very keen that NADPO is for its members. So I love it when members come to me with ideas for speakers or offers. Like I say, it’s open to anyone who’s working in or really interested in the area of data protection, FOI and IG.  

You can listen to the full Episode 1 podcast with Jon here.  

More valuable careers advice in Episode 5 where our guest is Raz Edwards, Head of Data Security and Protection at Wolverhampton NHS Trust. In our conversation, Raz shares her journey into Information Governance, the challenges she’s faced and overcome as an IG leader, her advice for both new starters and seasoned professionals and her perspective on the future of the profession.  She also reflects on what she’s learned through her tribunal role and what it takes to succeed as an IG leader. 

Could Children’s Use of Social Media be Banned in the UK?

Some argue that the primary goal of social media is no longer genuine connection, but the maximisation of user engagement for commercial gain. Platforms generate vast revenues by delivering highly targeted, personalised advertising, incentivising designs that keep users scrolling for longer. With the rise of AI, this content stream has become even more relentless, often amplified by manipulative or overly flattering language that encourages continuous interaction. 

Unsurprisingly, many parents are concerned about their children’s use of social media. Endless scrolling and exposure to videos featuring mindless pranks or viral challenges can have negative effects on both mental and physical health. Increasingly, attention is turning to the platforms themselves: critics suggest that their design may not only encourage excessive use, but also contribute to addiction, anxiety and other forms of harm. 

The US Court Case  

On 25th March 2026, a jury in Los Angeles delivered a damning verdict on two of the world’s most popular social media platforms. It ruled that Instagram and YouTube were deliberately designed to be addictive and that, consequently, their parent companies had been negligent in failing to safeguard their child users. Meta and Google, owners of Instagram and YouTube, must now pay $6m (£4.5m) in damages to “Kaley”, the young woman who was the plaintiff (claimant) in the case. Her lawyers argued that the design of Instagram and YouTube caused her to become addicted to the platforms. This addiction impacted her mental health during childhood, leaving her with body dysmorphia, depression and suicidal thoughts.  

The judgement has sent shockwaves through tech companies worldwide, not just in Silicon Valley. One tech company insider, who asked not to be identified, told the BBC, “we’re having a moment”. Even the Royal Family chimed in. In a statement, the Duke and Duchess of Sussex said: “This verdict is a reckoning. For too long, families have paid the price for platforms built with total disregard for the children they reach.”   

Both companies vigorously defended the claim and intend to appeal the judgement. Meta maintains that a single platform cannot be solely responsible for a user’s mental health crisis. Google, meanwhile, argues that YouTube is not a social network. 

English Law 

Could such a claim succeed in this country? The tort of negligence provides the best hope for claimants who allege harm from social media use, subject to the elements of the tort (duty of care, breach, causation and foreseeability) being satisfied. There is growing recognition in UK law that online platforms may owe a duty of care to users, particularly where those users are children, and the harms of overuse of social media are well documented. However, causation is likely to be the most difficult hurdle for claimants in the UK. To succeed, a claimant must prove that a platform’s design caused or materially contributed to the harm they suffered through their use of social media. Psychological harm rarely has a single identifiable cause. Social media companies are likely to argue that their platforms are only one of many factors which can affect an individual’s mental health, alongside family environment, school experiences, pre-existing vulnerabilities and offline relationships, to name a few.  

Could social media platforms be treated as “defective products” under the Consumer Protection Act 1987 (CPA), which carries strict liability for harm? Products, under the CPA, are traditionally understood as tangible goods, not the likes of YouTube and Instagram. It is arguable, though, that social media platforms are not just intermediaries but “manufacturers” of digital environments, making them liable for defects in algorithms or addictive design. The Law Commission is currently reviewing the CPA to determine if it is fit for the digital age, with a focus on artificial intelligence, software and online platforms. The review, which began in September 2025, may lead to expanded liability for online platforms and software providers. 

It is worth noting that the US case was decided by a jury. In the UK, civil cases, particularly those involving negligence, are decided by judges. Juries may be influenced by emotional arguments, whereas judges are trained to apply the law strictly and are less susceptible to being swayed by emotion at the expense of legal principles. 

Despite the issues around causation, a legal action in negligence is probably the best option for aggrieved social media users in the UK, although the lack of Legal Aid and the UK courts’ restrictive approach to class actions mean a test case would require significant upfront funding. Perhaps insurers, emboldened by the US judgement, may now be more willing to cover the costs of such a test case.  

Regulating Social Media 

Unlike the US, the UK has moved toward statutory regulation rather than litigation as the primary means of controlling social media harms. 

Since the passage of the Online Safety Act in 2023 (OSA), social media companies and search engines have a duty to ensure their services aren’t used for illegal activity or to promote illegal content, with particular protections for children. The communications regulator, Ofcom, has been tasked with implementing the OSA and can fine infringing companies up to £18 million, or 10% of their global revenue (whichever is greater). Last month, it published guidance on how platforms must protect children. Furthermore, since platforms are processing users’ personal data, they have to comply with the UK GDPR. The Data (Use and Access) Act 2025, which mainly came into force in February, explicitly requires those who provide an online service that is likely to be used by children to take children’s needs into account when deciding how to use their personal data.   
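The “whichever is greater” formula means the £18 million figure acts as a floor, not a cap, for large platforms. A minimal sketch (the function name and revenue figures are illustrative, not drawn from any real enforcement case):

```python
def max_osa_fine(global_revenue_gbp: float) -> float:
    """Maximum Ofcom fine under the Online Safety Act:
    the greater of £18 million or 10% of qualifying global revenue."""
    return max(18_000_000, 0.10 * global_revenue_gbp)

# A platform with £500m global revenue: 10% (£50m) exceeds the £18m floor
print(max_osa_fine(500_000_000))   # 50000000.0

# A smaller platform with £100m revenue: the £18m floor applies
print(max_osa_fine(100_000_000))   # 18000000
```

In other words, only companies with global revenue below £180 million would face the flat £18 million ceiling; above that, the 10% figure governs.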

Even before the US judgement, many countries had been considering whether to regulate social media further and/or ban children from using it. Australia has banned it, and others, like France and Denmark, have introduced or are planning to introduce tighter rules. 

The UK government is currently carrying out a consultation to consider whether additional measures are required to keep children safe in the online world. This includes: 

  • setting a minimum age for children to access social media; 
  • restricting risky functionalities and design features that encourage excessive use, such as infinite scrolling and autoplay; 
  • whether the digital age of consent should be raised; 
  • whether the guidance on the use of mobile phones in schools should be put on a statutory footing; and 
  • better support for parents, including clearer guidance and simpler parental controls. 

The consultation ends on 26th May, and the government will respond before the end of July. Alongside the consultation, the government is running a pilot scheme which will see 300 teenagers have their social media apps disabled entirely, blocked overnight or capped to one hour’s use – with some seeing no such changes at all – in order to compare their experiences. Children and parents involved in the pilot will be interviewed before and after to assess its impact. 

Meanwhile, on 27th March 2026, the government published national guidance that urges parents to strictly limit screen exposure in early years over health and development risks. The new recommendations advise that there should be no screen exposure for children under two except for shared activities. For those aged two to five, usage should be capped at one hour per day, with additional guidance to avoid screens at mealtimes and before bed. 

Parliament is also debating the use of social media platforms by children but remains divided on what action to take. In March, during a debate on the Children’s Wellbeing and Schools Bill, the House of Lords supported a proposal to ban under-16s in the UK from social media platforms. It is the second time peers have defeated the government over the proposal. There is now a standoff between the Commons and the Lords. Whatever happens, the verdict in the California court has signalled a rising public expectation for more aggressive regulation of social media platforms. 

Listen to the Guardians of Data Podcast for the latest news and views on data protection, cyber security, AI and freedom of information.   

This and other developments relating to children’s data will be covered in our forthcoming workshop, Working with Children’s Data.

New Podcast: Filming the Public for Social Media

Act Now is pleased to bring you episode 6 of the Guardians of Data podcast.  

Think about the last time you walked down a busy street, sat in a pub, or queued for a train. Now imagine that moment, completely ordinary to you, being filmed by a stranger, uploaded to TikTok or YouTube and watched by millions. 
Maybe it’s monetised; maybe it’s mocked. One thing is for sure though, it never disappears. 

Filming people in public has now become second nature for some. But what happens when those images are shared, edited and turned into social media content? Can you stop someone filming you in public? What rights do you have when the footage is published? 

In this episode, we are joined by Naomi Mathews, a lawyer who specialises in Data Protection, Freedom of Information and Surveillance Law. Naomi helps us explore what the law actually says about filming people in public; where it falls short and how that affects real people who find themselves turned into content without consent. We’ll also ask the harder questions about ethics, power and whether the UK needs a new law to better protect the public. 

Download and listen here, or on your preferred podcast app. Available on Apple Podcasts, Spotify, and all major podcast platforms. 

Previous episodes of the Guardians of Data podcast have featured Jon Baines, reflecting on his career as a Data Protection Specialist and the hot issues in information governance, Lynn Wyeth discussing the recent controversy around Grok AI, Maurice Frankel looking back at 20 years of the Freedom of Information Act, Olu Odeniyi analysing recent cyber breaches and discussing the lessons to learn and Raz Edwards talking about how to succeed as an IG leader.

Data Protection Complaints Procedure Deadline Approaching

A new section 164A has been inserted into the Data Protection Act 2018 (DPA) by the Data (Use and Access) Act 2025 (DUA Act). 

From 19th June 2026, Data Controllers will be required to have a complaints procedure to handle data protection complaints. They must also: 

  • acknowledge receipt of complaints within 30 days of receiving them; 
  • without undue delay, take appropriate steps to respond to complaints, including making appropriate enquiries, and keep Data Subjects informed; and 
  • without undue delay, tell Data Subjects the outcome of their complaints. 
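
As an illustration of the first duty only, here is a minimal sketch (purely hypothetical, not an official implementation or legal advice) of computing the acknowledgment deadline, assuming the 30-day window runs in calendar days from the date the complaint is received:

```python
from datetime import date, timedelta

# Assumed reading: the 30-day acknowledgement window under the new
# section 164A DPA 2018 is counted in calendar days from receipt.
ACK_WINDOW_DAYS = 30

def acknowledgement_due(received: date) -> date:
    """Latest date by which receipt of a complaint should be acknowledged."""
    return received + timedelta(days=ACK_WINDOW_DAYS)

# On this reading, a complaint received on the commencement date,
# 19 June 2026, would need acknowledging by 19 July 2026.
print(acknowledgement_due(date(2026, 6, 19)))  # 2026-07-19
```

Organisations building this into a case-management system would of course need to confirm the correct method of counting against the legislation and ICO guidance.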

Under the DPA, individuals are entitled to raise complaints where they believe there has been a breach of the UK GDPR, e.g. a failure to respond to a subject access request. This extends to any alleged non-compliance involving an individual’s personal data. The key requirement is that the issue must relate to the individual bringing the complaint. In other words, there needs to be a direct connection between the person and the alleged infringement. For example, if a complaint concerns deficiencies in a privacy notice, the individual will need to demonstrate how those shortcomings affect their own personal data, rather than simply pointing to general non-compliance. 

There is no prescribed format for handling complaints and organisations have discretion in designing their processes. The essential requirement is that individuals must have a clear way to submit a complaint, and that complaints are acknowledged and responded to. Data Controllers may wish to build on existing complaint-handling frameworks that are already in place and functioning effectively; for example, your FOI complaints procedure. 

Notably, the legislation does not impose strict deadlines for issuing a final response. As long as responses are provided within a reasonable timeframe and individuals are kept informed of progress, there is no obligation to conclude an investigation within a fixed period. The ICO recently published its guidance explaining the new requirements. Data protection expert, and guest on the first Guardians of Data podcast, Jon Baines writes on his personal blog that in declining to suggest how long controllers should normally take to respond to data subject complaints, the ICO has missed an opportunity to provide regulatory clarity.  

If you are looking to implement the changes made by the DUA Act to the UK data protection regime, consider our very popular half day workshop.  

The newly updated UK GDPR Handbook (2nd edition) includes all amendments introduced by the DUA Act, with colour-coded changes for easy navigation and links to relevant recitals, ICO guidance, and caselaw that help make sense of the reforms in context. We have included relevant provisions of the amended DPA 2018 to support a deeper understanding of how the laws interact.

The Right to Erasure and Unfounded Malicious Allegations

The Victims and Prisoners Act 2024 (Commencement No. 10) and Data (Use and Access) Act 2025 (Commencement No. 8) Regulations 2026 bring into force an important change to Article 17 of the UK GDPR (the right to erasure). 

In 2023, Stella Creasy MP was subjected to a social services investigation after a man complained to Leicestershire Police that the MP’s children should be taken into care due to her “extreme views”. The Labour MP told Today on BBC Radio 4 that the complaint was made because the man disagreed with her campaign against misogyny. 

Waltham Forest Council launched an investigation, as it was legally required to do, following a referral from Leicestershire Police. But despite Ms Creasy being cleared, the council said it was legally prevented from removing the man’s complaint from its records. 

The MP then tabled an amendment to the Victims and Prisoners Bill which was going through Parliament. This was enacted as section 31 of the Victims and Prisoners Act 2024.  Section 31 inserts a new Article 17(1)(g) into the UK GDPR. It extends the grounds upon which a data subject has a right to erasure, to cases of unfounded malicious allegations where: 
 
“the personal data have been processed as a result of an allegation about the data subject- 

(i) which was made by a person who is a malicious person in relation to the data subject (whether they became such a person before or after the allegation was made),

(ii) which has been investigated by the controller, and 

(iii) in relation to which the controller has decided that no further action is to be taken” 
 
New Article 17(4) of the UK GDPR defines a “malicious person” as one who has been convicted of a specified offence or who is subject to a stalking protection order. 

At the same time, the 2026 Regulations also commence paragraph 32 of Schedule 11 to the Data (Use and Access) Act 2025, which extends the same provisions to Scotland and Northern Ireland. 

Listen to the Guardians of Data Podcast for the latest news and views on data protection, cyber security, AI and freedom of information.   

This and other data protection developments will be discussed in detail on our forthcoming GDPR Update workshop. 

New Podcast: How to Succeed as an IG Leader 

Act Now is pleased to bring you episode 5 of the Guardians of Data podcast.  

In information governance, there is no substitute for learning from those who have walked the path before us. Experienced IG leaders bring a wealth of knowledge from years at the frontline of data protection and information rights – navigating challenges, overcoming obstacles and shaping best practice along the way.
By sharing their stories, lessons learned and practical advice, they help both new starters and seasoned professionals grow in confidence, strengthen their practice and prepare for the challenges of tomorrow. 

In this episode we are joined by Raz Edwards, Head of Data Security and Protection at Wolverhampton NHS Trust. Raz has over 17 years of experience as a Data Protection Officer, including more than a decade in the NHS. She is also Chair of the National Strategic Information Governance Network and serves as a member of the Upper Tribunal and First-Tier Tribunal in the Information Rights Jurisdiction. 

In our conversation, Raz shares her journey into Information Governance, the challenges she’s faced and overcome as an IG leader, her advice for both new starters and seasoned professionals and her perspective on the future of the profession.
She also reflects on what she’s learned through her tribunal role and what it takes to succeed as an IG leader. 

 Download and listen here, or on your preferred podcast app. Available on Apple Podcasts, Spotify, and all major podcast platforms. 

Previous episodes of the Guardians of Data podcast have featured Jon Baines, reflecting on his career as a Data Protection Specialist and the hot issues in information governance, Lynn Wyeth discussing the recent controversy around Grok AI, Maurice Frankel looking back at 20 years of the Freedom of Information Act and Olu Odeniyi analysing recent cyber breaches and discussing the lessons to learn.

ICO Focus on Children’s Data Processing 

In February we wrote about the Information Commissioner’s Office (ICO) issuing fines under the UK GDPR to two social media companies. Reddit was fined £14.47 million and MediaLab (owner of Imgur) was fined £247,590 for failing to implement age‑assurance measures and for processing children’s personal data in a way that potentially exposed them to harmful content. 

Safeguarding children’s privacy is a key enforcement priority for the ICO. The ICO’s investigation into TikTok (opened in March 2025) is still ongoing. It is considering how the platform uses the personal data of 13 to 17-year-olds in the UK to make recommendations to them and deliver suggested content to their feeds. This is in the light of growing concerns about social media and video sharing platforms using data generated by children’s online activity in their recommender systems, which could lead to them being served inappropriate or harmful content. The ICO is also investigating 17 other platforms including Discord, Pinterest, and X, and has been in discussions with Meta and Snapchat over how they use children’s location data in their user map features.  

Safeguarding children’s privacy is also a duty of the ICO under the Online Safety Act, alongside Ofcom. Last week the ICO published an open letter to social media and video‑sharing platforms operating in the UK, calling on them to strengthen age assurance measures so young children cannot access services that are not designed for them. The letter sets out the ICO’s expectations about measures that platforms with a minimum age must implement, beyond relying on children to self-declare their ages (which they can easily bypass).  Instead, platforms should make use of the viable technology that is now readily available to enforce their own minimum ages and prevent these children from accessing their services. The ICO has also written directly to platforms, starting with TikTok, Snapchat, Facebook, Instagram, YouTube and X to ask them to demonstrate how their age assurance measures meet the ICO’s expectations.  

The Data (Use and Access) Act 2025, most of which came into force earlier this month, explicitly requires those who provide an online service likely to be used by children to take children’s needs into account when deciding how to use their personal data.  

Listen to the Guardians of Data Podcast for the latest news and views on data protection, cyber security, AI and freedom of information.  

This and other developments relating to children’s data will be covered in our forthcoming workshop, Working with Children’s Data.