The Information Commissioner Steps Aside (Temporarily)  

Five days ago, the Information Commissioner, John Edwards, posted on LinkedIn: 

“Colleagues and friends!👋🏻 I wanted to let you know that for the last few weeks I have voluntarily stepped aside from my duties at the ICO while an independent investigation into HR matters is undertaken. I am fully cooperating and engaged with the investigation and will report progress in due course.” 

Paul Arnold, CEO of the new (but not yet functioning) Information Commission, has assumed the role of Acting Information Commissioner.   

Edwards’ announcement has come as a surprise to ICO watchers. It was only issued after a POLITICO journalist made enquiries to the ICO regarding Edwards’ work absence. Until then there was silence: not what you would expect from a statutory regulator responsible for, amongst other things, openness and transparency.  

Listen to the Guardians of Data Podcast for the latest news and views on data protection, cyber security, AI and freedom of information. 

New Podcast: Building Trustworthy and Responsible AI Systems

“Information governance professionals are the bedrock for deploying good governance of AI. We need to be there at the start of the actual thinking process.” 

Tahir Latif, Global Practice Lead for Data Privacy & Responsible AI at Cognizant 

The last two years have seen a massive increase in AI deployment. Previously the domain of science fiction, AI is now everywhere – in our workplaces, our personal lives, and the systems that shape society, from healthcare to security and law enforcement. But alongside the opportunities come some big risks, including lack of accuracy and transparency as well as bias and discrimination. 

In this episode, we dive into one of the biggest questions of our time: How do we build trustworthy and responsible AI systems? 

To help us answer this question, we are joined by someone who is right at the heart of the conversation. Tahir Latif is a distinguished expert on building responsible and transparent AI systems. He is the Global Practice Lead for Data Privacy & Responsible AI at Cognizant, one of the largest global professional services companies, and has led complex privacy and AI programmes across multiple industry sectors, both in the UK and globally. He is also Chief AI and Governance Officer and a board member at the Ethical AI Alliance, a not-for-profit body which promotes ethical standards in AI development, and co-author of Data Privacy – A Practical Handbook on Governance and Operation.

In this conversation, we explore how to cut through the complexity of ethical AI, what the future holds, and most importantly, what practical steps IG professionals can take to succeed in this new landscape. 

Listen on your preferred platform via our podcast page, or download the episode directly.

This podcast is sponsored by Phaselaw – a purpose-built solution for document disclosures such as subject access requests and FOI requests. Instead of redacting PDFs one by one, or forcing litigation software to do a job it wasn’t designed for, Phaselaw gives you collection, review, and redaction in one workflow. Teams across the world are using it to cut response times from weeks to days. 

For Guardians of Data listeners, Phaselaw is offering a two-month free trial; run it on live requests, see what it does to your backlog, decide from there. No card, no commitment. 

Head to https://www.phase.law/guardians to claim your free trial.  

Previous episodes of the Guardians of Data podcast have featured Naomi Mathews and Ibrahim Hasan explaining the law on filming people in public for social media, Maurice Frenkel looking back at 20 years of the Freedom of Information Act, Olu Odeniyi analysing recent cyber breaches and discussing the lessons to learn, and Raz Edwards talking about how to succeed as an IG leader. 

How to Succeed in Information Governance

Seasoned IG professionals offer invaluable advice, having tackled data protection hurdles and shaped best practices over years in the field. By listening to their journeys, new IG professionals can better prepare themselves to face tomorrow’s IG challenges with confidence. 

In Episode 1 of the Guardians of Data podcast our guest was Jon Baines, a senior data protection specialist at Mishcon de Reya LLP, where he advises on complex data protection and freedom of information matters. Jon isn’t a lawyer in the traditional sense, yet he has been listed in the Legal 500 as a rising star in the data protection, privacy and cybersecurity category. He is also the long-standing chair of the National Association of Data Protection and Freedom of Information Officers (NADPO).  

In the podcast, our conversation ranges widely: Jon’s route into the law, the sort of work a non-lawyer like him gets involved in at a law firm, whether young professionals need to (or should) qualify as solicitors in order to build a career in information law, the specialisms and history of Mishcon de Reya LLP, and the development of data protection in the age of AI. 

The following is an abridged version of the podcast focusing on Jon’s advice to IG professionals.  

Question: You’ve proved that you don’t need to be a lawyer to work at the cutting edge of information law. What skills or perspectives can non-lawyers bring that make them particularly valuable in this field? 

Answer: Critical thinking. I’m a big advocate for seeing both sides. I nearly always, when I approach a task or an instruction, think “if I were advising the other side, what would I be doing?” Because I think it’s really important that you don’t just see the positives on your side; that ability to see across the issue and be able to challenge yourself is important. And that’s part of critical thinking.  

In a lot of data protection matters, it’s important to remember that a data subject is all of us effectively; we are all data subjects. Data protection is about a fundamental right, let’s call it the right to respect for our personal information and a limited right to control that information. So a certain amount of empathy is important.  

It’s also important to understand how commerce works; data protection law doesn’t exist in a vacuum. As I say, it’s about us; it’s about our information. It’s also about how that information operates and can be used within a commercial world, a business world, a public service world. We don’t have a complete right to privacy, let alone privacy of our information. It’s a qualified right. So I think an understanding of business, and an understanding that business needs data in order to operate, is important. 

What is your advice for those who are new to the IG profession? 

I think one of the biggest skills you need is being able to be across the whole organisation that you work for. So don’t work in a silo. Your role might be part of Legal etc. but make sure that you get out and learn about your organisation. Make sure that people know who you are. It’s old fashioned internal networking, I guess. 

How should IG professionals position themselves to add value to AI projects? 

Well, it kind of makes me think of the old Data Protection Impact Assessment or prior to GDPR, when we called them privacy impact assessments. It’s not much use being part of that sort of project if you’re only brought in at the last moment. The whole idea of risk assessment is to assess in advance. So it’s important for IG professionals to remind those setting up AI projects that their input is needed from the start; indeed, even before a decision is taken to initiate a project. There are going to be few AI projects that will not involve data protection, in some way or another, or that don’t have the potential to do so in the future. So I think it’s as simple as that really. Try and make sure you’ve got your foot in the door at the start, because it’s going to be very difficult to do your job if you’re brought in at the last moment. 

If you could go back and give your younger self one piece of career advice, what would it be? 

I would probably tell myself that, just in the years after graduation, time goes quite quickly. And whilst I wouldn’t ever want to put pressure on my younger self, I think I would want to tell my younger self to “pull your socks up” a bit and start doing this sort of thing earlier. I think I drifted for a number of years and, as I get older, I increasingly find myself in this role of elder sage and telling young people, don’t waste time; it goes so quickly. 

How useful is NADPO in terms of professional development? 

NADPO is a venerable institution. It’s been going since 1993. We’re an association of information law professionals and by that I mean there are DPOs, there are FOI officers, there are lawyers, there are some journalist members, academics etc. So everyone is welcome. We exist to support the profession by providing an opportunity to learn from experts (whilst we don’t do direct training). So for what is a rather eccentric membership fee of £130 for two years, you get to attend our in-person events, including our annual conference where we have seven or eight expert speakers talking on various areas of information law. We also have monthly webinars and a range of other member benefits. I’m very keen that NADPO is for its members, so I love it when members come to me with ideas for speakers or offers. Like I say, it’s open to anyone who’s working in, or really interested in, the area of data protection, FOI and IG.  

You can listen to the full Episode 1 podcast with Jon here.  

There is more valuable careers advice in Episode 5, where our guest is Raz Edwards, Head of Data Security and Protection at Wolverhampton NHS Trust. In our conversation, Raz shares her journey into Information Governance, the challenges she’s faced and overcome as an IG leader, her advice for both new starters and seasoned professionals, and her perspective on the future of the profession. She also reflects on what she’s learned through her tribunal role and what it takes to succeed as an IG leader. 

Could Children’s Use of Social Media be Banned in the UK?

Some argue that the primary goal of social media is no longer genuine connection, but the maximisation of user engagement for commercial gain. Platforms generate vast revenues by delivering highly targeted, personalised advertising, incentivising designs that keep users scrolling for longer. With the rise of AI, this content stream has become even more relentless, often amplified by manipulative or overly flattering language that encourages continuous interaction. 

Unsurprisingly, many parents are concerned about their children’s use of social media. Endless scrolling and exposure to videos featuring mindless pranks or viral challenges can have negative effects on both mental and physical health. Increasingly, attention is turning to the platforms themselves: critics suggest that their design may not only encourage excessive use, but also contribute to addiction, anxiety and other forms of harm. 

The US Court Case  

On 25th March 2026, a jury in Los Angeles delivered a damning verdict on two of the world’s most popular social media platforms. It ruled that Instagram and YouTube were deliberately designed to be addictive and that, consequently, their parent companies had been negligent in failing to safeguard their child users. Meta and Google, owners of Instagram and YouTube respectively, must now pay $6m (£4.5m) in damages to “Kaley”, the young woman who was the plaintiff (claimant) in the case. Her lawyers argued that the design of Instagram and YouTube caused her to become addicted to the platforms, and that this addiction damaged her mental health during childhood, leaving her with body dysmorphia, depression and suicidal thoughts.  

The judgement has sent shockwaves through tech companies worldwide, not just in Silicon Valley. One tech company insider, who asked not to be identified, told the BBC, “we’re having a moment”. Even the Royal Family chimed in. In a statement, the Duke and Duchess of Sussex said: “This verdict is a reckoning. For too long, families have paid the price for platforms built with total disregard for the children they reach.”   

Both companies vigorously defended the claim and intend to appeal the judgement. Meta maintains that a single platform cannot be solely responsible for a user’s mental health crisis. Google, meanwhile, argues that YouTube is not a social network. 

English Law 

Could such a claim succeed in this country? The tort of negligence offers the best hope for claimants who allege harm from social media use, provided the elements of the tort (duty of care, breach, causation and foreseeability) are satisfied. There is growing recognition in UK law that online platforms may owe a duty of care to users, particularly where the users are children, and the harms of overuse of social media are well documented. However, causation is likely to be the most difficult hurdle for claimants in the UK. To succeed, a claimant must prove that a platform’s design caused, or materially contributed to, the harm they suffered through their use of social media. This is particularly difficult in the social media context: psychological harm rarely has a single identifiable cause. Social media companies are likely to argue that their platforms are only one of many factors that can affect an individual’s mental health, alongside family environment, school experiences, pre-existing vulnerabilities and offline relationships, to name a few.  

Could social media platforms be treated as “defective products” under the Consumer Protection Act 1987 (CPA), which carries strict liability for harm? Products under the CPA are traditionally understood as tangible goods, not the likes of YouTube and Instagram. It is arguable, though, that social media platforms are not mere intermediaries but “manufacturers” of digital environments, making them liable for defects in their algorithms or addictive design. The Law Commission is currently reviewing the CPA to determine whether it is fit for the digital age, with a focus on artificial intelligence, software and online platforms. The review, which began in September 2025, may lead to expanded liability for online platforms and software providers. 

It is worth noting that the US case was decided by a jury. In the UK, civil cases, particularly those involving negligence, are decided by judges. Juries may be influenced by emotional arguments, whereas judges are trained to apply the law strictly and are less susceptible to being swayed by emotion at the expense of legal principle. 

Despite the issues around causation, an action in negligence is probably the best option for aggrieved social media users in the UK, although the lack of Legal Aid and the UK courts’ restrictive approach to class actions mean a test case would require significant upfront funding. Insurers, emboldened by the US judgement, may now be more willing to cover the costs of such a test case.  

Regulating Social Media 

Unlike the US, the UK has moved toward statutory regulation rather than litigation as the primary means of controlling social media harms. 

Since the passage of the Online Safety Act 2023 (OSA), social media companies and search engines have had a duty to ensure their services are not used for illegal activity or to promote illegal content, with particular protections for children. The communications regulator, Ofcom, is tasked with implementing the OSA and can fine infringing companies up to £18 million or 10% of their global revenue (whichever is greater). Last month, it published guidance on how platforms must protect children. Furthermore, since platforms are processing users’ personal data, they have to comply with the UK GDPR. The Data (Use and Access) Act 2025, which mainly came into force in February, explicitly requires those who provide an online service that is likely to be used by children to take children’s needs into account when deciding how to use their personal data.   

Even before the US judgement, many countries had been considering whether to regulate social media further and/or ban children from using it. Australia has introduced such a ban, and others, like France and Denmark, have introduced or are planning tighter rules. 

The UK government is currently consulting on whether additional measures are required to keep children safe in the online world. The consultation covers: 

  • setting a minimum age for children to access social media; 
  • restricting risky functionalities and design features that encourage excessive use, such as infinite scrolling and autoplay; 
  • whether the digital age of consent should be raised; 
  • whether the guidance on the use of mobile phones in schools should be put on a statutory footing; and 
  • better support for parents, including clearer guidance and simpler parental controls. 

The consultation ends on 26th May, and the government will respond before the end of July. Alongside the consultation, the government is running a pilot scheme in which 300 teenagers will have their social media apps disabled entirely, blocked overnight or capped at one hour’s use, with some seeing no changes at all, in order to compare their experiences. Children and parents involved in the pilot will be interviewed before and after to assess its impact. 

Meanwhile, on 27th March 2026, the government published national guidance that urges parents to strictly limit screen exposure in early years over health and development risks. The new recommendations advise that there should be no screen exposure for children under two except for shared activities. For those aged two to five, usage should be capped at one hour per day, with additional guidance to avoid screens at mealtimes and before bed. 

Parliament is also debating children’s use of social media platforms but remains divided on what action to take. In March, during a debate on the Children’s Wellbeing and Schools Bill, the House of Lords supported a proposal to ban under-16s in the UK from social media platforms. It is the second time peers have defeated the government over the proposal, and there is now a standoff between the Commons and the Lords. Whatever happens, the verdict in the California court has signalled a rising public expectation of more aggressive regulation of social media platforms. 

Listen to the Guardians of Data Podcast for the latest news and views on data protection, cyber security, AI and freedom of information.   

This and other developments relating to children’s data will be covered in our forthcoming workshop, Working with Children’s Data.

New Podcast: Filming the Public for Social Media

Act Now is pleased to bring you episode 6 of the Guardians of Data podcast.  

Think about the last time you walked down a busy street, sat in a pub, or queued for a train. Now imagine that moment, completely ordinary to you, being filmed by a stranger, uploaded to TikTok or YouTube and watched by millions. Maybe it’s monetised; maybe it’s mocked. One thing is for sure, though: it never disappears. 

Filming people in public has now become second nature for some. But what happens when those images are shared, edited and turned into social media content? Can you stop someone filming you in public? What rights do you have when the footage is published? 

In this episode, we are joined by Naomi Mathews, a lawyer who specialises in Data Protection, Freedom of Information and Surveillance Law. Naomi helps us explore what the law actually says about filming people in public; where it falls short and how that affects real people who find themselves turned into content without consent. We’ll also ask the harder questions about ethics, power and whether the UK needs a new law to better protect the public. 

Download and listen here, or on your preferred podcast app. Available on Apple Podcasts, Spotify, and all major podcast platforms. 

Previous episodes of the Guardians of Data podcast have featured Jon Baines, reflecting on his career as a Data Protection Specialist and the hot issues in information governance, Lynn Wyeth discussing the recent controversy around Grok AI, Maurice Frenkel looking back at 20 years of the Freedom of Information Act, Olu Odeniyi analysing recent cyber breaches and discussing the lessons to learn, and Raz Edwards talking about how to succeed as an IG leader.

Data Protection Complaints Procedure Deadline Approaching

A new section 164A has been inserted into the Data Protection Act 2018 (DPA) by the Data (Use and Access) Act 2025 (DUA Act). 

From 19th June 2026, Data Controllers will be required to have a complaints procedure to handle data protection complaints. They must also: 

  • acknowledge receipt of complaints within 30 days of receiving them; 
  • without undue delay, take appropriate steps to respond to complaints, including making appropriate enquiries, and keep Data Subjects informed; and 
  • without undue delay, tell Data Subjects the outcome of their complaints. 

Under the DPA, individuals are entitled to raise complaints where they believe there has been a breach of the UK GDPR e.g. not responding to a subject access request. This extends to any alleged non-compliance involving an individual’s personal data. The key requirement is that the issue must relate to the individual bringing the complaint. In other words, there needs to be a direct connection between the person and the alleged infringement. For example, if a complaint concerns deficiencies in a privacy notice, the individual will need to demonstrate how those shortcomings affect their own personal data, rather than simply pointing to general non-compliance. 

There is no prescribed format for handling complaints, and organisations have discretion in designing their processes. The essential requirement is that individuals must have a clear way to submit a complaint, and that complaints are acknowledged and responded to. Data Controllers may wish to build on existing complaint-handling frameworks that are already in place and functioning effectively, such as an FOI complaints procedure. 

Notably, the legislation does not impose strict deadlines for issuing a final response. As long as responses are provided within a reasonable timeframe and individuals are kept informed of progress, there is no obligation to conclude an investigation within a fixed period. The ICO recently published guidance explaining the new requirements. Data protection expert, and guest on the first Guardians of Data podcast, Jon Baines writes on his personal blog that, in declining to suggest how long controllers should normally take to respond to data subject complaints, the ICO has missed an opportunity to provide regulatory clarity.  

If you are looking to implement the changes made by the DUA Act to the UK data protection regime, consider our very popular half day workshop.  

The newly updated UK GDPR Handbook (2nd edition) includes all amendments introduced by the DUA Act, with colour-coded changes for easy navigation and links to relevant recitals, ICO guidance, and caselaw that help make sense of the reforms in context. We have included relevant provisions of the amended DPA 2018 to support a deeper understanding of how the laws interact.

The Right to Erasure and Unfounded Malicious Allegations

The Victims and Prisoners Act 2024 (Commencement No. 10) and Data (Use and Access) Act 2025 (Commencement No. 8) Regulations 2026 bring into force an important change to Article 17 of the UK GDPR (the right to erasure).    

In 2023, Stella Creasy MP was subjected to a social services investigation after a man complained to Leicestershire Police that the MP’s children should be taken into care due to her “extreme views”. The Labour MP told Today on BBC Radio 4 that the complaint was made because the man disagreed with her campaign against misogyny. 

Waltham Forest Council launched an investigation, as it was legally required to do, following a referral from Leicestershire Police. But despite Ms Creasy being cleared, the council said it was legally prevented from removing the man’s complaint from its records. 

The MP then tabled an amendment to the Victims and Prisoners Bill which was going through Parliament. This was enacted as section 31 of the Victims and Prisoners Act 2024.  Section 31 inserts a new Article 17(1)(g) into the UK GDPR. It extends the grounds upon which a data subject has a right to erasure, to cases of unfounded malicious allegations where: 
 
“the personal data have been processed as a result of an allegation about the data subject- 

(i) which was made by a person who is a malicious person in relation to the data subject (whether they became such a person before or after the allegation was made),

(ii) which has been investigated by the controller, and 

(iii) in relation to which the controller has decided that no further action is to be taken” 
 
New Article 17(4) of the UK GDPR defines a “malicious person” as one who has been convicted of a specified offence or who is subject to a stalking protection order. 

At the same time, the 2026 order also commenced paragraph 32 of Schedule 11 of the Data (Use and Access) Act 2025, which extends the same provisions to Scotland and Northern Ireland. 

Listen to the Guardians of Data Podcast for the latest news and views on data protection, cyber security, AI and freedom of information.   

This and other data protection developments will be discussed in detail in our forthcoming GDPR Update workshop. 

New Podcast: How to Succeed as an IG Leader 

Act Now is pleased to bring you episode 5 of the Guardians of Data podcast.  

In information governance, there is no substitute for learning from those who have walked the path before us. Experienced IG leaders bring a wealth of knowledge from years at the frontline of data protection and information rights – navigating challenges, overcoming obstacles and shaping best practice along the way.
By sharing their stories, lessons learned and practical advice, they help both new starters and seasoned professionals grow in confidence, strengthen their practice and prepare for the challenges of tomorrow. 

In this episode we are joined by Raz Edwards, Head of Data Security and Protection at Wolverhampton NHS Trust. Raz has over 17 years of experience as a Data Protection Officer, including more than a decade in the NHS. She is also Chair of the National Strategic Information Governance Network and serves as a member of the Upper Tribunal and First-Tier Tribunal in the Information Rights Jurisdiction. 

In our conversation, Raz shares her journey into Information Governance, the challenges she’s faced and overcome as an IG leader, her advice for both new starters and seasoned professionals and her perspective on the future of the profession.
She also reflects on what she’s learned through her tribunal role and what it takes to succeed as an IG leader. 

 Download and listen here, or on your preferred podcast app. Available on Apple Podcasts, Spotify, and all major podcast platforms. 

Previous episodes of the Guardians of Data podcast have featured Jon Baines, reflecting on his career as a Data Protection Specialist and the hot issues in information governance, Lynn Wyeth discussing the recent controversy around Grok AI, Maurice Frenkel looking back at 20 years of the Freedom of Information Act and Olu Odeniyi analysing recent cyber breaches and discussing the lessons to learn.

ICO Focus on Children’s Data Processing 

In February we wrote about the Information Commissioner’s Office (ICO) issuing fines under the UK GDPR to two social media companies. Reddit was fined £14.47 million and MediaLab (owner of Imgur) was fined £247,590 for failing to implement age assurance measures and for processing children’s personal data in a way that potentially exposed them to harmful content. 

Safeguarding children’s privacy is a key enforcement priority for the ICO. Its investigation into TikTok, opened in March 2025, is still ongoing. It is considering how the platform uses the personal data of 13-17 year olds in the UK to make recommendations and deliver suggested content to their feeds. This is in the light of growing concerns about social media and video sharing platforms using data generated by children’s online activity in their recommender systems, which could lead to children being served inappropriate or harmful content. The ICO is also investigating 17 other platforms, including Discord, Pinterest and X, and has been in discussions with Meta and Snapchat over how they use children’s location data in their user map features.  

Safeguarding children’s privacy is also a duty of the ICO under the Online Safety Act, alongside Ofcom. Last week the ICO published an open letter to social media and video-sharing platforms operating in the UK, calling on them to strengthen age assurance measures so that young children cannot access services that are not designed for them. The letter sets out the ICO’s expectations of the measures that platforms with a minimum age must implement, beyond relying on children to self-declare their ages (a check they can easily bypass). Instead, platforms should make use of the viable technology now readily available to enforce their own minimum ages and prevent underage children from accessing their services. The ICO has also written directly to platforms, starting with TikTok, Snapchat, Facebook, Instagram, YouTube and X, asking them to demonstrate how their age assurance measures meet its expectations.  

The Data (Use and Access) Act 2025, most of which came into force earlier this month, explicitly requires those who provide an online service that is likely to be used by children to take children’s needs into account when deciding how to use their personal data.  

Listen to the Guardians of Data Podcast for the latest news and views on data protection, cyber security, AI and freedom of information.  

This and other developments relating to children’s data will be covered in our forthcoming workshop, Working with Children’s Data.

Join Our Team – Become an Act Now Trainer! 


Are you an information governance expert with a passion for sharing your knowledge? Do you have experience delivering engaging training on GDPR, FOI, AI or Cyber Security? Do you want to help build a privacy-conscious world? If so, we want to hear from you! 

Why Act Now Training? 

Act Now Training is Europe’s leading provider of information governance training. For the past 24 years, we have been working with government organisations, multinational corporations, financial institutions, and corporate law firms. Our team of expert trainers delivers high-quality, practical courses that make complex topics easy to understand. 

With a comprehensive programme ranging from short webinars and one-day workshops to advanced practitioner certificate courses, we equip professionals with the knowledge and skills they need to navigate the evolving landscape of data protection and privacy. 

Our Mission 

At Act Now, we believe in building a privacy-conscious world. Our goal is to promote trust and respect for privacy, ensuring organisations embed data protection into their operations by default. We break down complex legal concepts, making education accessible and empowering professionals to become leaders in their field. By fostering a culture of responsible data usage, we help build public trust and drive positive change. 

Why Join Us? 

Due to increasing demand, we are expanding our team of expert trainers. With new courses launched in 2026, we are looking for talented trainers to deliver engaging, practical and jargon-free training. 

We offer opportunities for both full-time trainers and those looking to complement their existing roles. We are looking for passionate professionals who bring energy and innovation to their training sessions. 

What We Are Looking For: 

  • Experience of delivering GDPR, FOI, AI or Cyber Security training. 
  • A passion for teaching and the ability to simplify complex concepts. 
  • A commitment to delivering interactive, engaging training (no “death by PowerPoint”!). 
  • Availability for two to ten training days per month. 

We have opportunities to deliver a variety of courses, including our flagship GDPR Practitioner Certificate and our AI Governance Practitioner Certificate, as well as customised in-house training. 

Apply Now 

If you are ready to take the next step in your career and join a team dedicated to shaping the future of information governance, we’d love to hear from you! 

Email your CV to info@actnow.org.uk, detailing your training and consultancy experience in GDPR, FOI, or Cyber Security. The closing date for applications is 5th May 2026. A full privacy policy is available on our website.