New Podcast: Building Trustworthy and Responsible AI Systems

“Information governance professionals are the bedrock for deploying good governance of AI. We need to be there at the start of the actual thinking process.” 

Tahir Latif, Global Practice Lead for Data Privacy & Responsible AI at Cognizant 

The last two years have seen a massive increase in AI deployment. Previously the domain of science fiction, AI is now everywhere – in our workplaces, our personal lives, and in the systems that shape society, from healthcare to security and law enforcement. But alongside the opportunities come some big risks, including lack of accuracy and transparency as well as bias and discrimination. 

In this episode, we dive into one of the biggest questions of our time: How do we build trustworthy and responsible AI systems? 

To help us answer this question, we are joined by someone who is right at the heart of the conversation. Tahir Latif is a distinguished expert on building responsible and transparent AI systems. He is the Global Practice Lead for Data Privacy & Responsible AI at Cognizant, one of the largest global professional services companies. Tahir has led complex privacy and AI programmes across multiple industry sectors, both in the UK and globally. He is also Chief AI and Governance Officer and a board member at the Ethical AI Alliance, a not-for-profit body which promotes ethical standards in AI development. Tahir is the co-author of Data Privacy – A Practical Handbook on Governance and Operation.

In this conversation, we explore how to cut through the complexity of ethical AI, what the future holds, and most importantly, what practical steps IG professionals can take to succeed in this new landscape. 

Listen on your preferred platform via our podcast page, or download the episode directly.

This podcast is sponsored by Phaselaw – a purpose-built solution for document disclosures, like subject access requests and FOI requests. Instead of redacting PDFs one by one, or forcing litigation software to do a job it wasn’t designed for, Phaselaw gives you collection, review, and redaction in one workflow. Teams across the world are using it to cut response times from weeks to days. 

For Guardians of Data listeners, Phaselaw is offering a two-month free trial; run it on live requests, see what it does to your backlog, decide from there. No card, no commitment. 

Head to https://www.phase.law/guardians to claim your free trial.  

Previous episodes of the Guardians of Data podcast have featured Naomi Mathews and Ibrahim Hasan explaining the law on filming people in public for social media, Maurice Frenkel looking back at 20 years of the Freedom of Information Act, Olu Odeniyi analysing recent cyber breaches and the lessons to learn, and Raz Edwards talking about how to succeed as an IG leader. 

Data Protection Complaints Procedure Deadline Approaching

A new section 164A has been inserted into the Data Protection Act 2018 (DPA) by the Data (Use and Access) Act 2025 (DUA Act). 

From 19th June 2026, Data Controllers will be required to have a procedure for handling data protection complaints. They must also: 

  • acknowledge complaints within 30 days of receipt; 
  • without undue delay, take appropriate steps to respond to complaints, including making appropriate enquiries, and keep Data Subjects informed; and 
  • without undue delay, tell Data Subjects the outcome of their complaints. 

Under the DPA, individuals are entitled to raise complaints where they believe there has been a breach of the UK GDPR, e.g. a failure to respond to a subject access request. This extends to any alleged non-compliance involving an individual’s personal data. The key requirement is that the issue must relate to the individual bringing the complaint. In other words, there needs to be a direct connection between the person and the alleged infringement. For example, if a complaint concerns deficiencies in a privacy notice, the individual will need to demonstrate how those shortcomings affect their own personal data, rather than simply pointing to general non-compliance. 

There is no prescribed format for handling complaints, and organisations have discretion in designing their processes. The essential requirement is that individuals have a clear way to submit a complaint, and that complaints are acknowledged and responded to. Data Controllers may wish to build on complaint-handling frameworks that are already in place and functioning effectively, such as an existing FOI complaints procedure. 

Notably, the legislation does not impose strict deadlines for issuing a final response. As long as responses are provided within a reasonable timeframe and individuals are kept informed of progress, there is no obligation to conclude an investigation within a fixed period. The ICO recently published its guidance explaining the new requirements. Data protection expert Jon Baines, a guest on the first Guardians of Data podcast, writes on his personal blog that, in declining to suggest how long controllers should normally take to respond to data subject complaints, the ICO has missed an opportunity to provide regulatory clarity. 

If you are looking to implement the changes made by the DUA Act to the UK data protection regime, consider our very popular half-day workshop. 

The newly updated UK GDPR Handbook (2nd edition) includes all amendments introduced by the DUA Act, with colour-coded changes for easy navigation and links to relevant recitals, ICO guidance, and caselaw that help make sense of the reforms in context. We have included relevant provisions of the amended DPA 2018 to support a deeper understanding of how the laws interact.

The Right to Erasure and Unfounded Malicious Allegations

The Victims and Prisoners Act 2024 (Commencement No. 10) and Data (Use and Access) Act 2025 (Commencement No. 8) Regulations 2026 bring into force an important change to Article 17 of the UK GDPR (the right to erasure). 

In 2023, Stella Creasy MP was subjected to a social services investigation after a man complained to Leicestershire Police that the MP’s children should be taken into care due to her “extreme views”. The Labour MP told Today on BBC Radio 4 that the complaint was made because the man disagreed with her campaign against misogyny. 

Waltham Forest Council launched an investigation, as it was legally required to do, following a referral from Leicestershire Police. But despite Ms Creasy being cleared, the council said it was legally prevented from removing the man’s complaint from its records. 

The MP then tabled an amendment to the Victims and Prisoners Bill, which was then going through Parliament. This was enacted as section 31 of the Victims and Prisoners Act 2024. Section 31 inserts a new Article 17(1)(g) into the UK GDPR, extending the grounds on which a data subject has a right to erasure to cases of unfounded malicious allegations where: 
 
“the personal data have been processed as a result of an allegation about the data subject- 

(i) which was made by a person who is a malicious person in relation to the data subject (whether they became such a person before or after the allegation was made),

(ii) which has been investigated by the controller, and 

(iii) in relation to which the controller has decided that no further action is to be taken” 
 
New Article 17(4) of the UK GDPR defines a “malicious person” as one who has been convicted of a specified offence or who is subject to a stalking protection order. 

At the same time, the 2026 Regulations also commenced paragraph 32 of Schedule 11 to the Data (Use and Access) Act 2025, which extends the same provisions to Scotland and Northern Ireland. 

Listen to the Guardians of Data Podcast for the latest news and views on data protection, cyber security, AI and freedom of information.   

This and other data protection developments will be discussed in detail in our forthcoming GDPR Update workshop. 

Iain Harrison

Act Now Training is deeply saddened to report the passing of our colleague and dear friend, Iain Harrison.

Over a career spanning more than 20 years, Iain provided training and consultancy on information law related issues to a wide range of public, private, and voluntary sector organisations. Iain’s final role was Senior Information Assurance Officer at Leicester University. He also worked for Wolverhampton City Council, Wright Hassall LLP, Leicester City Council and Birmingham City Council. 

Alongside his day job, Iain was a Senior Associate at Act Now Training, where he used his vast experience to deliver training on data protection and freedom of information. His courses were defined by his depth of knowledge, sound judgement and an unwavering commitment to supporting others. Colleagues and clients alike valued Iain for his clarity, calm guidance and understated humour. Delegates always commented on his rare ability to make complex legal frameworks accessible and manageable. He approached his work with patience, integrity and a genuine desire to help. 

Iain’s contribution to the field of information law, and to the many people and organisations he supported throughout his career, leaves a lasting legacy. He will be greatly missed by colleagues, clients and friends. 

Ibrahim Hasan, Director of Act Now Training, said: 

“I knew Iain for over 20 years, since his days at Coventry City Council. Behind the quiet, unassuming person was a true expert, always willing to listen and help. Every interaction I had with him was filled with kindness and good humour. My thoughts and prayers are with his loved ones.” 

AI Transcription Tools in Social Work Under Scrutiny 

Anyone remember Dragon Dictate? The first versions of this voice transcription software required users to spend hours training it (usually wearing a headset) by repeating stock phrases many times over. Even after full training, the transcription output was far from accurate. How technology has moved on, especially in the last few years, with the proliferation of AI. 

AI-powered transcription software has been rapidly adopted by public sector organisations, especially in local authority social work departments. Tools like Magic Notes and Microsoft Copilot are used by social workers to record conversations with children and families (e.g. interviews or assessments), transcribe spoken audio into text and generate summaries automatically. These “ambient scribes” listen in real time or process recordings, reducing the need for manual note-taking and allowing professionals to focus on interactions rather than documentation. However, the use of such tools, especially in sensitive contexts like social work, is not without risks, as a recent report highlighted. 
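For readers curious about the mechanics, the sketch below shows the basic shape such a pipeline tends to take. It is a minimal illustration only: the open-source Whisper model stands in for the proprietary speech-to-text inside commercial products (whose internals are not public), and the summarise() function, file names and sign-off fields are hypothetical placeholders rather than any vendor's actual API.

```python
# A minimal sketch of an "ambient scribe" pipeline: transcribe a recording,
# draft a summary, and require human sign-off before anything is filed.
# Whisper (pip install openai-whisper) stands in for the proprietary
# speech-to-text in commercial tools; summarise() is a hypothetical placeholder.
import whisper


def summarise(transcript: str) -> str:
    # Placeholder: commercial tools pass the transcript to a generative
    # model at this point. It is this generative step, not the transcription
    # itself, that is most prone to "hallucinating" content never said.
    return transcript[:500]


def draft_case_note(audio_path: str) -> dict:
    model = whisper.load_model("base")
    transcript = model.transcribe(audio_path)["text"]
    return {
        "transcript": transcript,
        "summary": summarise(transcript),
        "ai_generated": True,  # provenance flag, useful for later audit
        "approved_by": None,   # must be completed by a human reviewer
    }


note = draft_case_note("visit_recording.wav")
# The "human in the loop": nothing should enter the case record until a
# social worker has checked the output against the actual conversation.
note["approved_by"] = "J. Smith, reviewing social worker"
```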

Ada Lovelace Institute Report 

On 11th February 2026, the Ada Lovelace Institute published a report titled “Scribe and prejudice? Exploring the use of AI transcription tools in social care.” The report explored the dynamics of adoption and the impacts of AI transcription tools in adult and children’s social care across 17 local authorities in England and Scotland. Based on interviews with frontline social workers and managers, it highlighted serious risks that should be addressed by users.  

These include, amongst others: 

AI “Hallucinations”: The AI sometimes generates false information that wasn’t said in the recorded conversation. A prominent example involved an AI-generated summary incorrectly stating that a child had expressed suicidal ideation. This kind of error is especially dangerous in child protection or mental health contexts, where it could trigger unnecessary interventions or lead to flawed decisions about care. 

Gibberish, misrepresentations, and other errors: AI-generated transcripts have included nonsense phrases, misspelled names, incorrect speaker attributions (especially in multi-person conversations), fabricated statements, irrelevant or foul language insertions, and overly formal or academic wording that doesn’t reflect normal social work language. 

Bias and Harmful Stereotyping: Some outputs have reportedly promoted stereotypes or biased perceptions of individuals that weren’t present in the original recording. 

These issues echo broader AI concerns but of course are more serious in the context of social work records. Inaccuracies entering official care records could lead to incorrect decisions about a child’s safety, family support, or adult care; potentially resulting in harm to vulnerable people, professional consequences for social workers or even legal liability. 

Social workers generally bear full responsibility for reviewing and approving these AI outputs (the “human in the loop” safeguard), but practices vary widely according to the report. Some social workers spend minutes checking AI output whilst others spend hours. The report questions how effective this is in high-pressure frontline environments. There is also concern that over-reliance on summarisation features could erode professional judgment and the nuanced, interpretive nature of social work documentation. 

The report notes that in early 2025, one AI transcription tool was already in active use by 85 local authorities for social care. But the Ada Lovelace Institute criticises the “limited and light-touch” approaches to ethics, evaluation, testing, regulation, and risk mitigation so far. It has called for more robust safeguards, better guidance and thorough evaluation before wider use. 

Recommendations 

To ensure the safe and responsible use of AI transcription tools, the Institute urged the government to require local authorities to document their use of such tools through the ‘Algorithmic Transparency Reporting Standard.’ 

It also recommended that social care regulators and local authorities collaborate with relevant sector bodies to develop guidance on using AI transcription tools in statutory processes and formal proceedings, supported by clear accountability structures. 

The Institute added that: ‘To enable end-to-end accountability, regulators and professional bodies should review and revise rules and guidance on professional ethics for social workers and support social workers to collaborate with legal and advisory bodies around procedures for AI use in formal proceedings. An advisory board comprised of people with lived experience of drawing on care should be established to inform these actions.’ 

Further recommendations include: 

  • The UK government should extend its pilots of AI transcription tools to include various locations and public sector contexts. 
  • The UK government should set up a What Works Centre for AI in Public Services to generate and synthesise learnings from pilots and evaluations. 
  • A coalition of researchers, policymakers, civil society and community groups should collaborate on research on the systemic impacts of AI transcription tools. 
  • Local authorities should specify their outcomes and expected impact when procuring AI transcription tools to ensure a shared understanding among staff and users. 

The UK GDPR Angle 

The use of AI-powered transcription software will involve processing highly sensitive personal data, including audio recordings and derived transcripts/summaries of conversations involving vulnerable individuals. This triggers UK GDPR obligations, with heightened risks due to the sensitive nature of the data and the potential for harm if errors occur. 

Local authorities and social care providers should integrate UK GDPR compliance into procurement, deployment, and ongoing use of AI transcription software. Key practical steps include: 

  • Conduct a DPIA:  Before rollout or expansion, complete a Data Protection Impact Assessment to assess all the risks (e.g., hallucinations affecting accuracy, bias in diverse accents/dialects, unauthorised access). Update DPIAs for new tools or features. Involve the organisation’s Data Protection Officer from the outset. 
  • Choose compliant tools and vendors: Prioritise tools with strong data protection (e.g. UK-hosted data, no unnecessary retention, robust security). Review vendor DPIAs, processor agreements, and compliance certifications.  
  • Establish clear consent and transparency processes: Inform service users upfront about recording, AI involvement, and data use (via privacy notices or verbal explanation). Document decisions and allow opt-outs where appropriate. 
  • Implement strong human oversight and review: Mandate thorough checks of all AI outputs before approving records. Train staff to detect inaccuracies, bias, or inappropriate content. Flag AI-generated sections (e.g. via watermarks or metadata) for transparency and future audits. 
  • Secure data handling and contracts: Use encrypted recording/uploading, limit data shared with tools and delete audio promptly after transcription. Ensure processor contracts (Article 28) specify UK GDPR compliance, audit rights and breach notification. 
  • Monitor, audit and train: Regularly audit tool use and outputs for compliance. Provide targeted training on UK GDPR risks (e.g. accuracy, breaches, bias). Track incidents (e.g. hallucinations) and report serious ones as breaches if required. 
  • Define boundaries for use: Establish consensus on when AI transcription is appropriate (or unacceptable).  

AI transcription offers clear benefits, reducing paperwork and freeing up social workers’ time for direct care. However, strong governance measures are needed to prevent dangerous inaccuracies slipping into official records and to guard against biased or harmful decisions. 

Listen to the Guardians of Data Podcast for the latest news and views on data protection, cyber security, AI and freedom of information. 

If you need to train your staff on the responsible use of AI, please get in touch to discuss our customised in-house training. The following public courses may also interest you: 

AI and Information Governance: A one-day workshop examining the key data protection and IG issues when deploying AI solutions. 

AI Governance Practitioner Certificate training programme: A four-day course providing a practical overview of how AI systems are developed, deployed, and regulated, with particular attention to risk, bias, and accountability. 

New Podcast: The Grok AI Controversy 

Act Now is pleased to bring you episode 2 of our new podcast, Guardians of Data. This is a show where we explore the world of information law and information governance – from privacy and AI to cybersecurity and freedom of information. In each episode we will be speaking with experts and practitioners to unpack the big issues shaping the IG profession. 

In the first episode, we were joined by Jon Baines, a Senior Data Protection Specialist at Mishcon de Reya LLP and the long-standing chair of NADPO. In a wide ranging conversation, Jon shared his journey into IG, his advice for both new starters and seasoned professionals and his perspective on the future of the profession. 

In Episode 2 we discuss the recent controversy around Grok AI. 

Grok, the AI chatbot developed by xAI and integrated into the social media platform X, has caught the attention of governments and regulators across the world after it was used to edit pictures of real women to show them in revealing clothes and suggestive poses. In the UK, Ofcom and the Information Commissioner’s Office have opened formal investigations – a significant step that signals how seriously AI-related risks are now being taken. 

This controversy raises fundamental questions about how AI systems are designed and overseen and about whether existing laws and board-level oversight are keeping pace. In episode 2, we unpack these issues with the help of Lynn Wyeth, an expert in AI, data protection and responsible technology.  

Listen via this link or on your preferred podcast app. 
Available on Apple Podcasts, Spotify, and all major podcast platforms.

New Guardians of Data Podcast: In Conversation with Jon Baines 

Act Now is pleased to bring you the first episode of our new podcast, Guardians of Data. This is a show where we explore the world of information law and information governance – from privacy and AI to cybersecurity and freedom of information. In each episode we will be speaking with experts and practitioners to unpack the big issues shaping the IG profession.

In information governance, there’s no substitute for learning from those who have walked the path before us. Experienced IG leaders bring a wealth of knowledge from years at the frontline of data protection and information rights – navigating challenges, overcoming obstacles, and shaping best practice along the way. By listening to their stories, we can all grow in confidence and prepare for the IG challenges of tomorrow. 

In the first episode, we are joined by one such IG leader: Jon Baines is a Senior Data Protection Specialist at Mishcon de Reya LLP, where he advises on complex data protection and FOI matters. Jon isn’t a lawyer in the traditional sense, yet is listed in Legal 500 as a “Rising Star” in the Data Protection, Privacy and Cybersecurity category. Jon is the long-standing chair of the National Association of Data Protection and Freedom of Information Officers (NADPO). He is regularly sought for comment by specialist and national media and writes extensively on data protection matters. 

In our conversation, Jon shares his journey into IG, his advice for both new starters and seasoned professionals and his perspective on the future of the profession. 

Listen via the player below, or on your preferred podcast app.
Available on Apple Podcasts, Spotify, and all major podcast platforms.

Who Guards Our Data? Responsibility, Trust, and the Reality of Data Protection 

Data protection is often framed as a question of compliance. Regulations, policies, and frameworks dominate much of the discussion. 

In practice, however, the most important questions are about responsibility, trust, and judgement. 

Every organisation that collects or uses personal data is, in effect, a custodian of that information. With that role comes an expectation: that personal data will be handled carefully, used appropriately, and respected as something that belongs to people, not systems. Meeting those expectations is rarely straightforward. 

Day-to-day data protection decisions are often made under pressure. They involve trade-offs, uncertainty, and situations where the law does not provide a simple or immediate answer. Legislation defines the boundaries, but it does not resolve every ethical or operational question organisations face. 

This is where many of the real challenges of data protection sit, in the grey areas between what is permitted and what is appropriate. 

Guardians of Data was created to explore this space. The podcast brings together people working in privacy and information governance to talk openly about the realities of responsible data use. Rather than focusing on theory or compliance checklists, the conversations centre on how decisions are made in real organisations, and how trust is maintained when handling personal data. 

Each episode is short and focused, examining judgement calls, ethical considerations, and the expectations placed on organisations entrusted with personal data. The aim is not to provide definitive answers, but to encourage thoughtful discussion about what good data stewardship looks like in practice. 

Guardians of Data is intended as a space for reflection and conversation for anyone navigating the responsibilities that come with using personal data in today’s digital environment.

Click below to listen to the podcasts.


Home Office Acknowledges Racial and Gender Bias in UK Police Facial Recognition Technology

Facial recognition is often sold as a neutral, objective tool. But recent admissions from the UK government show just how fragile that claim really is.

New evidence has confirmed that facial recognition technology used by UK police is significantly more likely to misidentify people from certain demographic groups. The problem is not marginal, and it is not theoretical. It is already embedded in live policing.

A Systematic Pattern of Error

Independent testing commissioned by the Home Office found that false-positive rates increase dramatically depending on ethnicity, gender, and system settings.

At lower operating thresholds — where the software is configured to return more matches — the disparity becomes stark. White individuals were falsely matched at a rate of around 0.04%. For Asian individuals, the rate rose to approximately 4%. For Black individuals, it reached about 5.5%. The highest error rate was recorded among Black women, who were falsely matched close to 10% of the time.

The data highlights a striking imbalance: Asian and Black individuals were misidentified almost 100 times more frequently than white individuals, while women faced error rates roughly double those of men.
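Those ratios follow directly from the quoted rates. As a quick sanity check, the short sketch below reproduces the order-of-magnitude comparison; the rates are hard-coded from the figures reported above, not raw test data:

```python
# Disparity ratios computed from the false-positive rates quoted above
# (lower operating threshold). Figures are as reported, not raw test data.
rates = {
    "White individuals": 0.0004,  # ~0.04%
    "Asian individuals": 0.04,    # ~4%
    "Black individuals": 0.055,   # ~5.5%
    "Black women": 0.10,          # ~10%
}

baseline = rates["White individuals"]
for group, rate in rates.items():
    print(f"{group}: {rate:.2%} false-positive rate, "
          f"~{rate / baseline:.0f}x the white baseline")
```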

Why This Is Not an Abstract Risk

This technology is already in widespread use. Police forces rely on facial recognition to analyse CCTV footage, conduct retrospective searches across custody databases, and, in some cases, deploy live systems in public spaces.

The scale matters. Thousands of retrospective facial recognition searches are conducted each month. Even a low error rate, when multiplied across that volume, results in a significant number of people being wrongly flagged.
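A back-of-the-envelope calculation makes the point. The sketch below is illustrative only: it treats each search as an independent trial at the quoted false-positive rate, a deliberate simplification, since the published reporting gives only "thousands" of searches per month rather than exact volumes.

```python
# Illustrative only: expected false matches per 1,000 searches involving a
# given group, using the rates quoted above and treating each search as an
# independent trial at that rate (a simplification).
searches = 1_000

for group, rate in [("White individuals", 0.0004), ("Black women", 0.10)]:
    print(f"{group}: ~{searches * rate:.1f} false matches per 1,000 searches")
```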

A false match can lead to questioning, surveillance, or police intervention. Even if officers ultimately decide not to act, the encounter itself can be intrusive, distressing, and damaging. These effects do not disappear simply because a human later overrides the system.

Bias, Thresholds, and Operational Reality

For years, facial recognition vendors and public authorities argued that bias could be controlled through careful configuration. In controlled conditions, stricter thresholds reduce error rates. But operational pressures often incentivise looser settings that generate more matches, even at the cost of accuracy.

The government’s own findings now confirm what critics have long warned: fairness is conditional. Bias does not vanish; it shifts depending on how the system is used.

The data also shows that demographic impacts overlap. Women, older people, and ethnic minorities are all more likely to be misidentified, with compounded effects for those who sit at multiple intersections.

Expansion Amid Fragile Trust

Despite these findings, the government is consulting on proposals to expand national facial recognition capability, including systems that could draw on large biometric datasets such as passport and driving licence records.

Ministers have pointed to plans to procure newer algorithms and to subject them to independent evaluation. While improved testing and oversight are essential, they do not answer the underlying question: should surveillance infrastructure be expanded while known structural risks remain unresolved?

Civil liberties groups and oversight bodies have described the findings as deeply concerning, warning that transparency, accountability, and public confidence are being strained by the rapid adoption of opaque technologies.

This Is a Governance Issue, Not Just a Technical One

Facial recognition is not simply a question of software performance. It is a question of how power is exercised and how risk is distributed.

When automated systems systematically misidentify certain groups, the consequences fall unevenly. Decisions about who is stopped, questioned, or monitored start to reflect the limitations of technology rather than evidence or behaviour.

Once such systems become normalised, rolling them back becomes difficult. That is why scrutiny matters now, not after expansion.

If technology is allowed to shape policing, the justice system, and public space, it must be subject to the highest standards of accountability, fairness, and democratic oversight.

These and other developments in the use of artificial intelligence, surveillance, and automated decision-making will be examined in detail in our AI Governance Practitioner Certificate training programme, which provides a practical and accessible overview of how AI systems are developed, deployed, and regulated, with particular attention to risk, bias, and accountability.

Staying Up to Date: The UK GDPR Handbook (2nd Edition) 

The data protection landscape continues to evolve. With the Data (Use and Access) Act 2025 now in force, practitioners need to ensure their materials reflect the latest changes to the UK GDPR, Data Protection Act 2018, and PECR.

The newly updated UK GDPR Handbook (2nd edition) brings these developments together in one practical reference. It includes all amendments introduced by the DUA Act, with colour-coded changes for easy navigation and links to relevant recitals, ICO guidance, and caselaw that help make sense of the reforms in context.

This edition also covers the amendments made to Article 17 (right to erasure) under the Victims and Prisoners Act 2024, ensuring readers have a complete view of the current regime.

Act Now has included relevant provisions of the amended DPA 2018 to support a deeper understanding of how the laws interact. As before, the aim is clarity and usability, helping practitioners work confidently within a complex framework.

And for each handbook sold, £1 is donated to the Rainfall Foundation, supporting the reintegration of prison leavers into society, a reminder that compliance and community impact can go hand in hand.

If you’re revisiting your data protection resources this year, this updated edition is a good place to start. Order your copy here.