New Podcast: Building Trustworthy and Responsible AI Systems

“Information governance professionals are the bedrock for deploying good governance of AI. We need to be there at the start of the actual thinking process.” 

Tahir Latif, Global Practice Lead for Data Privacy & Responsible AI at Cognizant 

The last two years have seen a massive increase in AI deployment. Previously the domain of science fiction, AI is now everywhere – in our workplaces, our personal lives, and in the systems that shape society, from healthcare to security and law enforcement. But alongside the opportunities come some big risks, including a lack of accuracy and transparency as well as bias and discrimination. 

In this episode, we dive into one of the biggest questions of our time: How do we build trustworthy and responsible AI systems? 

To help us answer this question, we are joined by someone who is right at the heart of the conversation. Tahir Latif is a distinguished expert on building responsible and transparent AI systems. He is the Global Practice Lead for Data Privacy & Responsible AI at Cognizant, one of the largest global professional services companies. Tahir has led complex privacy and AI programmes across multiple industry sectors, both in the UK and globally. He is also the Chief AI and Governance Officer and a board member at the Ethical AI Alliance, a not-for-profit body which promotes ethical standards in AI development. Tahir is the co-author of Data Privacy – A Practical Handbook on Governance and Operation.

In this conversation, we explore how to cut through the complexity of ethical AI, what the future holds, and most importantly, what practical steps IG professionals can take to succeed in this new landscape. 

Listen on your preferred platform via our podcast page, or download the episode directly.

This podcast is sponsored by Phaselaw – a purpose-built solution for document disclosures, like subject access requests and FOI requests. Instead of redacting PDFs one by one, or forcing litigation software to do a job it wasn’t designed for, with Phaselaw you get collection, review, and redaction in one workflow. Teams across the world are using it to cut response times from weeks to days. 

For Guardians of Data listeners, Phaselaw is offering a two-month free trial; run it on live requests, see what it does to your backlog, decide from there. No card, no commitment. 

Head to https://www.phase.law/guardians to claim your free trial.  

Previous episodes of the Guardians of Data podcast have featured Naomi Mathews and Ibrahim Hasan explaining the law on filming people in public for social media, Maurice Frenkel looking back at 20 years of the Freedom of Information Act, Olu Odeniyi analysing recent cyber breaches and discussing the lessons to learn, and Raz Edwards talking about how to succeed as an IG leader. 

How to Succeed in Information Governance

Seasoned IG professionals offer invaluable advice, having tackled data protection hurdles and shaped best practices over years in the field. By listening to their journeys, new IG professionals can better prepare themselves to face tomorrow’s IG challenges with confidence. 

In Episode 1 of the Guardians of Data podcast our guest was Jon Baines, a senior data protection specialist at Mishcon de Reya LLP, where he advises on complex data protection and freedom of information matters. Jon isn’t a lawyer in the traditional sense, yet he has been listed in Legal 500 as a rising star in the data protection, privacy and cybersecurity category. Jon is also the long-standing chair of the National Association of Data Protection and Freedom of Information Officers.  

In the podcast, our conversation ranges widely: Jon’s route into the law, the sort of work a non-lawyer like him gets involved in at a law firm, whether young professionals need to (or should) qualify as solicitors in order to develop a career in information law, some of the specialisms and history of Mishcon de Reya LLP, and developments in data protection in the age of AI. 

The following is an abridged version of the podcast focusing on Jon’s advice to IG professionals.  

Question: You’ve proved that you don’t need to be a lawyer to work at the cutting edge of information law. What skills or perspectives can non-lawyers bring that make them particularly valuable in this field? 

Answer: Critical thinking. I’m a big advocate for seeing both sides. I nearly always, when I approach a task or an instruction, think “if I were advising the other side, what would I be doing?” Because I think it’s really important that you don’t just see the positives on your side; that ability to see across the issue and be able to challenge yourself is important. And that’s part of critical thinking.  

In a lot of data protection matters, it’s important to remember that a data subject is all of us effectively; we are all data subjects. Data protection is about a fundamental right, let’s call it the right to respect for our personal information and a limited right to control that information. So a certain amount of empathy is important.  

It’s also important to understand how commerce works; data protection law doesn’t exist in a vacuum. As I say, it’s about us; it’s about our information. It’s also about how that information operates and can be used within a commercial world, a business world, a public service world. We don’t have a complete right to privacy, let alone privacy of our information. It’s a qualified right. So I think an understanding of business, and an understanding that business needs data in order to operate, is important. 

What is your advice for those who are new to the IG profession? 

I think one of the biggest skills you need is being able to be across the whole organisation that you work for. So don’t work in a silo. Your role might be part of Legal etc. but make sure that you get out and learn about your organisation. Make sure that people know who you are. It’s old fashioned internal networking, I guess. 

How should IG professionals position themselves to add value to AI projects? 

Well, it kind of makes me think of the old Data Protection Impact Assessment – or, prior to GDPR, what we called privacy impact assessments. It’s not much use being part of that sort of project if you’re only brought in at the last moment. The whole idea of risk assessment is to assess in advance. So it’s important for IG professionals to remind those setting up AI projects that their input is needed from the start; indeed, even before a decision is taken to initiate a project. There are going to be few AI projects that will not involve data protection, in some way or another, or that don’t have the potential to do so in the future. So I think it’s as simple as that really. Try and make sure you’ve got your foot in the door at the start, because it’s going to be very difficult to do your job if you’re brought in at the last moment. 

If you could go back and give your younger self one piece of career advice, what would it be? 

I would probably tell myself that, just in the years after graduation, time goes quite quickly. And whilst I wouldn’t ever want to put pressure on my younger self, I think I would want to tell my younger self to “pull your socks up” a bit and start doing this sort of thing earlier. I think I drifted for a number of years and, as I get older, I increasingly find myself in this role of elder sage and telling young people, don’t waste time; it goes so quickly. 

How useful is NADPO in terms of professional development? 

NADPO is a venerable institution. It’s been going since 1993. We’re an association of information law professionals and by that I mean there are DPOs, there are FOI officers, there are lawyers, there are some journalist members, academics etc. So everyone is welcome. We exist to support the profession by providing an opportunity to learn from experts (whilst we don’t do direct training). So for what is a rather eccentric membership fee of £130 for two years, you get to attend our in-person events, including our annual conference where we have seven or eight expert speakers talking on various areas of information law. We also have monthly webinars and a range of other member benefits. I’m very keen that NADPO is for its members. So I love it when members come to me with ideas for speakers or offers. Like I say, it’s open to anyone who’s working in or really interested in the area of data protection, FOI and IG.  

You can listen to the full Episode 1 podcast with Jon here.  

There is more valuable careers advice in Episode 5, where our guest is Raz Edwards, Head of Data Security and Protection at Wolverhampton NHS Trust. In our conversation, Raz shares her journey into Information Governance, the challenges she’s faced and overcome as an IG leader, her advice for both new starters and seasoned professionals and her perspective on the future of the profession. She also reflects on what she’s learned through her tribunal role and what it takes to succeed as an IG leader. 

Could Children’s Use of Social Media be Banned in the UK?

Some argue that the primary goal of social media is no longer genuine connection, but the maximisation of user engagement for commercial gain. Platforms generate vast revenues by delivering highly targeted, personalised advertising, incentivising designs that keep users scrolling for longer. With the rise of AI, this content stream has become even more relentless, often amplified by manipulative or overly flattering language that encourages continuous interaction. 

Unsurprisingly, many parents are concerned about their children’s use of social media. Endless scrolling and exposure to videos featuring mindless pranks or viral challenges can have negative effects on both mental and physical health. Increasingly, attention is turning to the platforms themselves: critics suggest that their design may not only encourage excessive use, but also contribute to addiction, anxiety and other forms of harm. 

The US Court Case  

On 25th March 2026, a jury in Los Angeles delivered a damning verdict on two of the world’s most popular social media platforms. It ruled that Instagram and YouTube were deliberately designed to be addictive and that, consequently, their parent companies had been negligent in failing to safeguard their child users. Meta and Google, the owners of Instagram and YouTube respectively, must now pay $6m (£4.5m) in damages to “Kaley”, the young woman who was the plaintiff (claimant) in the case. Her lawyers argued that the design of Instagram and YouTube caused her to become addicted to the platforms. This addiction affected her mental health during childhood, leaving her with body dysmorphia, depression and suicidal thoughts.  

The judgement has sent shockwaves through tech companies worldwide, not just in Silicon Valley. One tech company insider, who asked not to be identified, told the BBC, “we’re having a moment”. Even the Royal Family chimed in. In a statement, the Duke and Duchess of Sussex said: “This verdict is a reckoning. For too long, families have paid the price for platforms built with total disregard for the children they reach.”   

Both companies vigorously defended the claim and intend to appeal the judgement. Meta maintains that a single platform cannot be solely responsible for a user’s mental health crisis. Google, meanwhile, argues that YouTube is not a social network. 

English Law 

Could such a claim succeed in this country? The tort of negligence provides the best hope for claimants who allege harm from social media use, provided the elements of the tort (duty of care, breach, causation and damage) are satisfied. There is growing recognition in UK law that online platforms may owe a duty of care to users, particularly where those users are children, and the harms of overuse of social media are well documented. However, causation is likely to be the most difficult hurdle for claimants in the UK. To succeed, a claimant must prove that a platform’s design caused or materially contributed to the harm they suffered through their use of social media. This is difficult because psychological harm rarely has a single identifiable cause. Social media companies are likely to argue that their platforms are only one of many factors that can affect an individual’s mental health, alongside family environment, school experiences, pre-existing vulnerabilities and offline relationships, to name a few.  

Could social media platforms be treated as “defective products” under the Consumer Protection Act 1987 (CPA), which imposes strict liability for harm? Products under the CPA are traditionally understood as tangible goods, not the likes of YouTube and Instagram. It is arguable, though, that social media platforms are not just intermediaries but “manufacturers” of digital environments, making them liable for defects in algorithms or addictive design. The Law Commission is currently reviewing the CPA to determine whether it is fit for the digital age, with a focus on artificial intelligence, software and online platforms. The review, which began in September 2025, may lead to expanded liability for online platforms and software providers. 

It is worth noting that the US case was decided by a jury. In the UK, civil cases, particularly those involving negligence, are decided by judges. Juries may be influenced by emotional arguments, whereas judges are trained to apply the law strictly and are less susceptible to being swayed by emotion at the expense of legal principles. 

Despite the issues around causation, a legal action in negligence is probably the best option for aggrieved social media users in the UK, although the lack of Legal Aid and the UK courts’ restrictive approach to class actions mean a test case would require significant upfront funding. Perhaps insurers, emboldened by the US judgement, may now be more willing to cover the costs of such a test case.  

Regulating Social Media 

Unlike the US, the UK has moved toward statutory regulation rather than litigation as the primary means of controlling social media harms. 

Since the passage of the Online Safety Act 2023 (OSA), social media companies and search engines have a duty to ensure their services aren’t used for illegal activity or to promote illegal content, with particular protections for children. The communications regulator, Ofcom, has been tasked with implementing the OSA and can fine infringing companies up to £18 million or 10% of their global revenue (whichever is greater). Last month, it published guidance on how platforms must protect children. Furthermore, since platforms are processing users’ personal data, they have to comply with the UK GDPR. The Data (Use and Access) Act 2025, which mainly came into force in February, explicitly requires those who provide an online service that is likely to be used by children to take children’s needs into account when deciding how to use their personal data.   

Even before the US judgement, many countries had been considering whether to regulate social media further and/or ban children from using it. Australia has banned children from social media, and others, like France and Denmark, have introduced or are planning to introduce tighter rules. 

The UK government is currently carrying out a consultation to consider whether additional measures are required to keep children safe in the online world. This includes whether to set a minimum age for children to access social media; whether to restrict risky functionalities and design features that encourage excessive use, such as infinite scrolling and autoplay; whether the digital age of consent should be raised; whether the guidance on the use of mobile phones in schools should be put on a statutory footing; and how better to support parents, including clearer guidance and simpler parental controls. The consultation ends on 26th May, and the government will respond before the end of July. Alongside the consultation, the government is running a pilot scheme in which 300 teenagers will have their social media apps disabled entirely, blocked overnight or capped at one hour’s use – with some seeing no changes at all – in order to compare their experiences. Children and parents involved in the pilot will be interviewed before and after to assess its impact. 

Meanwhile, on 27th March 2026, the government published national guidance that urges parents to strictly limit screen exposure in early years over health and development risks. The new recommendations advise that there should be no screen exposure for children under two except for shared activities. For those aged two to five, usage should be capped at one hour per day, with additional guidance to avoid screens at mealtimes and before bed. 

Parliament is also debating the use of social media platforms by children but remains divided on what action to take. In March, during a debate on the Children’s Wellbeing and Schools Bill, the House of Lords supported a proposal to ban under-16s in the UK from social media platforms. It is the second time peers have defeated the government over the proposal. There is now a standoff between the Commons and the Lords. Whatever happens, the verdict in the California court has signalled a rising public expectation of more aggressive regulation of social media platforms. 

Listen to the Guardians of Data Podcast for the latest news and views on data protection, cyber security, AI and freedom of information.   

This and other developments relating to children’s data will be covered in our forthcoming workshop, Working with Children’s Data.

New Podcast: How to Succeed as an IG Leader 

Act Now is pleased to bring you episode 5 of the Guardians of Data podcast.  

In information governance, there is no substitute for learning from those who have walked the path before us. Experienced IG leaders bring a wealth of knowledge from years at the frontline of data protection and information rights – navigating challenges, overcoming obstacles and shaping best practice along the way.
By sharing their stories, lessons learned and practical advice, they help both new starters and seasoned professionals grow in confidence, strengthen their practice and prepare for the challenges of tomorrow. 

In this episode we are joined by Raz Edwards, Head of Data Security and Protection at Wolverhampton NHS Trust. Raz has over 17 years of experience as a Data Protection Officer, including more than a decade in the NHS. She is also Chair of the National Strategic Information Governance Network and serves as a member of the Upper Tribunal and First-Tier Tribunal in the Information Rights Jurisdiction. 

In our conversation, Raz shares her journey into Information Governance, the challenges she’s faced and overcome as an IG leader, her advice for both new starters and seasoned professionals and her perspective on the future of the profession.
She also reflects on what she’s learned through her tribunal role and what it takes to succeed as an IG leader. 

 Download and listen here, or on your preferred podcast app. Available on Apple Podcasts, Spotify, and all major podcast platforms. 

Previous episodes of the Guardians of Data podcast have featured Jon Baines, reflecting on his career as a Data Protection Specialist and the hot issues in information governance, Lynn Wyeth discussing the recent controversy around Grok AI, Maurice Frenkel looking back at 20 years of the Freedom of Information Act and Olu Odeniyi analysing recent cyber breaches and discussing the lessons to learn.

AI Transcription Tools in Social Work Under Scrutiny 

Anyone remember Dragon Dictate? The first versions of this voice transcription software required users to spend hours training it (usually wearing a headset) by repeating stock phrases many times over. Even after full training, the transcription output was far from accurate. How technology has moved on, especially in the last few years, with the proliferation of AI. 

AI-powered transcription software has been rapidly adopted by public sector organisations, especially local authority social work departments. Tools like Magic Notes and Microsoft Copilot are used by social workers to record conversations with children and families (e.g. interviews or assessments), transcribe spoken audio into text and generate summaries automatically. These “ambient scribes” listen in real time or process recordings, reducing the need for manual notetaking and allowing professionals to focus on interactions rather than documentation. However, the use of such tools, especially in sensitive contexts like social work, is not without risks, as a recent report highlighted.  

Ada Lovelace Institute Report 

On 11th February 2026, the Ada Lovelace Institute published a report titled “Scribe and prejudice? Exploring the use of AI transcription tools in social care.” The report explored the dynamics of adoption and the impacts of AI transcription tools in adult and children’s social care across 17 local authorities in England and Scotland. Based on interviews with frontline social workers and managers, it highlighted serious risks that should be addressed by users.  

These include, amongst others: 

AI “Hallucinations”: The AI sometimes generates false information that wasn’t said in the recorded conversation. A prominent example involved an AI-generated summary incorrectly stating that a child had expressed suicidal ideation. This kind of error is especially dangerous in child protection or mental health contexts, where it could trigger unnecessary interventions or lead to flawed decisions about care. 

Gibberish, misrepresentations, and other errors: AI-generated transcripts have included nonsense phrases, misspelled names, incorrect speaker attributions (especially in multi-person conversations), fabricated statements, insertions of irrelevant or foul language, and overly formal or academic wording that doesn’t reflect normal social work language. 

Bias and Harmful Stereotyping: Some outputs have reportedly promoted stereotypes or biased perceptions of individuals that weren’t present in the original recording. 

These issues echo broader AI concerns but of course are more serious in the context of social work records. Inaccuracies entering official care records could lead to incorrect decisions about a child’s safety, family support, or adult care; potentially resulting in harm to vulnerable people, professional consequences for social workers or even legal liability. 

Social workers generally bear full responsibility for reviewing and approving these AI outputs (the “human in the loop” safeguard), but practices vary widely according to the report. Some social workers spend minutes checking AI output whilst others spend hours. The report questions how effective this is in high-pressure frontline environments. There is also concern that over-reliance on summarisation features could erode professional judgment and the nuanced, interpretive nature of social work documentation. 

The report notes that in early 2025, one AI transcription tool was already in active use by 85 local authorities for social care. But the Ada Lovelace Institute criticises the “limited and light-touch” approaches to ethics, evaluation, testing, regulation, and risk mitigation so far. It has called for more robust safeguards, better guidance and thorough evaluation before wider use. 

Recommendations 

To ensure the safe and responsible use of AI transcription tools, the Institute urged the government to require local authorities to document their use of such tools through the ‘Algorithmic Transparency Reporting Standard.’ 

It also recommended that social care regulators and local authorities collaborate with relevant sector bodies to develop guidance on using AI transcription tools in statutory processes and formal proceedings, supported by clear accountability structures. 

The Institute added that: ‘To enable end-to-end accountability, regulators and professional bodies should review and revise rules and guidance on professional ethics for social workers and support social workers to collaborate with legal and advisory bodies around procedures for AI use in formal proceedings. An advisory board comprised of people with lived experience of drawing on care should be established to inform these actions.’ 

Further recommendations include: 

  • The UK government should extend its pilots of AI transcription tools to include various locations and public sector contexts. 
  • The UK government should set up a What Works Centre for AI in Public Services to generate and synthesise learnings from pilots and evaluations. 
  • A coalition of researchers, policymakers, civil society and community groups should collaborate on research on the systemic impacts of AI transcription tools. 
  • Local authorities should specify their outcomes and expected impact when procuring AI transcription tools to ensure a shared understanding among staff and users. 

The UK GDPR Angle 

The use of AI-powered transcription software will involve processing highly sensitive personal data, including audio recordings and derived transcripts/summaries of conversations involving vulnerable individuals. This triggers UK GDPR obligations, with heightened risks due to the sensitive nature of the data and the potential for harm if errors occur. 

Local authorities and social care providers should integrate UK GDPR compliance into procurement, deployment, and ongoing use of AI transcription software. Key practical steps include: 

  • Conduct a DPIA:  Before rollout or expansion, complete a Data Protection Impact Assessment to assess all the risks (e.g., hallucinations affecting accuracy, bias in diverse accents/dialects, unauthorised access). Update DPIAs for new tools or features. Involve the organisation’s Data Protection Officer from the outset. 
  • Choose compliant tools and vendors: Prioritise tools with strong data protection (e.g. UK-hosted data, no unnecessary retention, robust security). Review vendor DPIAs, processor agreements, and compliance certifications.  
  • Establish clear consent and transparency processes: Inform service users upfront about recording, AI involvement, and data use (via privacy notices or verbal explanation). Document decisions and allow opt-outs where appropriate. 
  • Implement strong human oversight and review: Mandate thorough checks of all AI outputs before approving records. Train staff to detect inaccuracies, bias, or inappropriate content. Flag AI-generated sections (e.g. via watermarks or metadata) for transparency and future audits (see the sketch after this list). 
  • Secure data handling and contracts: Use encrypted recording/uploading, limit data shared with tools and delete audio promptly after transcription. Ensure processor contracts (Article 28) specify UK GDPR compliance, audit rights and breach notification. 
  • Monitor, audit and train: Regularly audit tool use and outputs for compliance. Provide targeted training on UK GDPR risks (e.g. accuracy, breaches, bias). Track incidents (e.g. hallucinations) and report serious ones as breaches if required. 
  • Define boundaries for use: Establish consensus on when AI transcription is appropriate (or unacceptable).  
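
To illustrate the flagging step above, here is a minimal sketch of how an AI-generated section of a case record might be labelled with provenance metadata so it can be identified in future audits. The record structure, the field names and the tool name “ExampleScribe” are hypothetical assumptions for illustration, not features of any particular product.

```python
# Minimal illustrative sketch: attaching provenance metadata to an AI-generated
# section of a case record so it can be identified in audits. All field names
# and the tool name ("ExampleScribe") are hypothetical.
from datetime import datetime, timezone

def flag_ai_section(text: str, tool: str, reviewer: str) -> dict:
    """Wrap an AI-generated section of a record with audit metadata."""
    return {
        "text": text,
        "provenance": {
            "generated_by": tool,     # which AI tool produced the draft
            "reviewed_by": reviewer,  # human-in-the-loop sign-off
            "reviewed_at": datetime.now(timezone.utc).isoformat(),
            "ai_generated": True,     # explicit flag for transparency and audits
        },
    }

section = flag_ai_section(
    text="Summary of home visit, drafted by AI and checked by the social worker.",
    tool="ExampleScribe v1.2",  # hypothetical tool name
    reviewer="j.smith",
)
print(section["provenance"]["ai_generated"])  # True
```

However simple, an explicit flag of this kind makes the auditing, monitoring and training steps in the list above much easier to carry out.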

AI transcription offers clear benefits for reducing paperwork and freeing up social workers’ time for direct care. However, strong governance measures must be taken to avoid dangerous inaccuracies slipping into official records, and the potential for biased or harmful decisions. 

Listen to the Guardians of Data Podcast for the latest news and views on data protection, cyber security, AI and freedom of information. 

If you need to train your staff on responsible use of AI please get in touch to discuss our customised in house training. The following public courses may also interest you: 

AI and Information Governance: A one-day workshop examining the key data protection and IG issues when deploying AI solutions.  

AI Governance Practitioner Certificate training programme: A four-day course providing a practical overview of how AI systems are developed, deployed, and regulated, with particular attention to risk, bias, and accountability. 

New Podcast: The Grok AI Controversy 

Act Now is pleased to bring you episode 2 of our new podcast, Guardians of Data. This is a show where we explore the world of information law and information governance – from privacy and AI to cybersecurity and freedom of information. In each episode we will be speaking with experts and practitioners to unpack the big issues shaping the IG profession. 

In the first episode, we were joined by Jon Baines, a Senior Data Protection Specialist at Mishcon de Reya LLP and the long-standing chair of NADPO. In a wide ranging conversation, Jon shared his journey into IG, his advice for both new starters and seasoned professionals and his perspective on the future of the profession. 

In Episode 2 we discuss the recent controversy around Grok AI. 

Grok, the AI chatbot developed by xAI and integrated into the social media platform X, has caught the attention of governments and regulators across the world after it was used to edit pictures of real women to show them in revealing clothes and suggestive poses. In the UK, Ofcom and the Information Commissioner’s Office have opened formal investigations – a significant step that signals how seriously AI-related risks are now being taken.  

This controversy raises fundamental questions about how AI systems are designed and overseen and about whether existing laws and board-level oversight are keeping pace. In episode 2, we unpack these issues with the help of Lynn Wyeth, an expert in AI, data protection and responsible technology.  

Listen via this link or on your preferred podcast app. 
Available on Apple Podcasts, Spotify, and all major podcast platforms.

Home Office Acknowledges Racial and Gender Bias in UK Police Facial Recognition Technology

Facial recognition is often sold as a neutral, objective tool. But recent admissions from the UK government show just how fragile that claim really is.

New evidence has confirmed that facial recognition technology used by UK police is significantly more likely to misidentify people from certain demographic groups. The problem is not marginal, and it is not theoretical. It is already embedded in live policing.

A Systematic Pattern of Error

Independent testing commissioned by the Home Office found that false-positive rates increase dramatically depending on ethnicity, gender, and system settings.

At lower operating thresholds — where the software is configured to return more matches — the disparity becomes stark. White individuals were falsely matched at a rate of around 0.04%. For Asian individuals, the rate rose to approximately 4%. For Black individuals, it reached about 5.5%. The highest error rate was recorded among Black women, who were falsely matched close to 10% of the time.

The data highlights a striking imbalance: Asian and Black individuals were misidentified at roughly 100 times or more the rate for white individuals, while women faced error rates roughly double those of men.

Why This Is Not an Abstract Risk

This technology is already in widespread use. Police forces rely on facial recognition to analyse CCTV footage, conduct retrospective searches across custody databases, and, in some cases, deploy live systems in public spaces.

The scale matters. Thousands of retrospective facial recognition searches are conducted each month. Even a low error rate, when multiplied across that volume, results in a significant number of people being wrongly flagged.
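
To make the scale concrete, here is a minimal illustrative calculation using the false-positive rates quoted above. The volume of 10,000 monthly searches per group is an assumption for illustration only; the article reports only that searches number in the thousands each month.

```python
# Illustrative only: expected false matches at an assumed search volume.
# The per-group false-positive rates are those quoted above; the figure of
# 10,000 monthly searches per group is a hypothetical assumption.

false_positive_rates = {
    "White individuals": 0.0004,  # 0.04%
    "Asian individuals": 0.04,    # ~4%
    "Black individuals": 0.055,   # ~5.5%
    "Black women": 0.10,          # ~10%
}

monthly_searches = 10_000  # assumed volume, not a reported figure

baseline = false_positive_rates["White individuals"]
for group, rate in false_positive_rates.items():
    expected = rate * monthly_searches
    print(f"{group}: ~{expected:,.0f} false matches "
          f"({rate / baseline:.0f}x the white rate)")
```

Even at this modest assumed volume, the same settings that produce a handful of false matches for white individuals produce hundreds for Asian and Black individuals.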

A false match can lead to questioning, surveillance, or police intervention. Even if officers ultimately decide not to act, the encounter itself can be intrusive, distressing, and damaging. These effects do not disappear simply because a human later overrides the system.

Bias, Thresholds, and Operational Reality

For years, facial recognition vendors and public authorities argued that bias could be controlled through careful configuration. In controlled conditions, stricter thresholds reduce error rates. But operational pressures often incentivise looser settings that generate more matches, even at the cost of accuracy.

The government’s own findings now confirm what critics have long warned: fairness is conditional. Bias does not vanish; it shifts depending on how the system is used.

The data also shows that demographic impacts overlap. Women, older people, and ethnic minorities are all more likely to be misidentified, with compounded effects for those who sit at multiple intersections.

Expansion Amid Fragile Trust

Despite these findings, the government is consulting on proposals to expand national facial recognition capability, including systems that could draw on large biometric datasets such as passport and driving licence records.

Ministers have pointed to plans to procure newer algorithms and to subject them to independent evaluation. While improved testing and oversight are essential, they do not answer the underlying question: should surveillance infrastructure be expanded while known structural risks remain unresolved?

Civil liberties groups and oversight bodies have described the findings as deeply concerning, warning that transparency, accountability, and public confidence are being strained by the rapid adoption of opaque technologies.

This Is a Governance Issue, Not Just a Technical One

Facial recognition is not simply a question of software performance. It is a question of how power is exercised and how risk is distributed.

When automated systems systematically misidentify certain groups, the consequences fall unevenly. Decisions about who is stopped, questioned, or monitored start to reflect the limitations of technology rather than evidence or behaviour.

Once such systems become normalised, rolling them back becomes difficult. That is why scrutiny matters now, not after expansion.

If technology is allowed to shape policing, the justice system, and public space, it must be subject to the highest standards of accountability, fairness, and democratic oversight.

These and other developments in the use of artificial intelligence, surveillance, and automated decision-making will be examined in detail in our AI Governance Practitioner Certificate training programme, which provides a practical and accessible overview of how AI systems are developed, deployed, and regulated, with particular attention to risk, bias, and accountability.

New Guidance on AI Risk Management

The development, procurement and deployment of AI systems involving the processing of personal data raise significant risks to data subjects’ fundamental rights and freedoms, including but not limited to privacy and data protection. The principle of accountability enshrined in the UK GDPR and the EU GDPR requires Data Controllers to identify and mitigate these risks, and to demonstrate how they did so. This is especially important for AI systems that are the product of intricate supply chains, often involving multiple actors processing personal data in different capacities.

The European Data Protection Supervisor (EDPS) has just released an important new guidance document to help organisations conduct data protection risk assessments when developing, procuring, or deploying AI systems.  It focuses on the risk of non-compliance with certain data protection principles for which the mitigation strategies that controllers must implement can be technical in nature – namely fairness, accuracy, data minimisation, security and data subjects’ rights. 

Key sections of the document address:

  • the risk management methodology according to ISO 31000:2018
  • the typical development lifecycle of AI systems as well as the different steps involved in their procurement 
  • the notions of interpretability and explainability 
  • an analytical framework for identifying and treating risks that may arise in AI systems, structured according to the data protection principles potentially affected (see the sketch after this list). 
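
By way of illustration, the sketch below shows one way a risk register entry might be organised by the data protection principle affected, loosely following the ISO 31000 cycle of identifying, analysing, evaluating and treating risks. The structure, scoring and example entries are our own assumptions, not the EDPS’s template.

```python
# Rough illustration of a risk register entry organised by the data protection
# principle affected. Structure, scoring and examples are assumptions, not the
# EDPS template; loosely follows ISO 31000 (identify, analyse, evaluate, treat).
from dataclasses import dataclass

@dataclass
class AIRisk:
    principle: str    # e.g. fairness, accuracy, data minimisation, security
    description: str  # the identified risk
    likelihood: int   # 1 (rare) to 5 (almost certain)
    severity: int     # 1 (negligible) to 5 (severe)
    treatment: str    # chosen mitigation

    @property
    def score(self) -> int:
        """Simple likelihood x severity score used to prioritise treatment."""
        return self.likelihood * self.severity

register = [
    AIRisk("accuracy", "Model errors entering official records", 4, 5,
           "Mandatory human review before records are approved"),
    AIRisk("data minimisation", "Source data retained longer than needed", 3, 3,
           "Automatic deletion once processing is complete"),
]

# Treat the highest-scoring risks first.
for risk in sorted(register, key=lambda r: r.score, reverse=True):
    print(f"[{risk.score}] {risk.principle}: {risk.description} -> {risk.treatment}")
```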

The EDPS has issued this guidance in his role as a data protection supervisory authority for EU institutions. However, it is a very useful document for any organisation that is deploying AI and requires guidance on how to systematically assess the risks from a data protection perspective. 

Our AI Governance Practitioner Certificate course is designed to equip Information Governance professionals with the essential knowledge and skills to manage the risks of AI deployment within their organisations. This year, 50 delegates from a variety of backgrounds have successfully completed the course, giving great feedback.

The first course of 2026 starts on 8th January. Places are limited, so book early to avoid disappointment. If you require an introduction to AI and information governance, please consider booking onto our one-day workshop.

Our 23rd Birthday! Celebrate with Us and Save on Training  

This month marks 23 years of Act Now Training. We delivered our first course in 2003 (on the Data Protection Act 1998!) at the National Railway Museum in York. Fast forward to today, and we deliver over 300 training days a year on AI, GDPR, records management, surveillance law and cyber security; supporting delegates across multiple jurisdictions including the Middle East.  

Our success comes from more than just longevity; we are trusted by clients across every sector, giving us a unique insight into the real-world challenges of information governance. That’s why our education-first approach focuses on practical skills, measurable impact, and lasting value for your organisation. 

Anniversary Offer: To celebrate, we are giving you a £50 discount on any one-day workshop if you book by 30th September 2025. Choose from our most popular sessions, like GDPR and FOI A to Z, or explore new topics like AI and Information Governance and Risk Management in IG.

Simply quote “23rd Anniversary” on your booking form to claim your discount.

Data (Use and Access) Act 2025: ICO Consultation 

Last month the ICO launched public consultations on its guidance in response to the Data (Use and Access) Act 2025 (DUA Act) coming into force.  

The DUA Act received Royal Assent on 19th June 2025. It amends, rather than replaces, the UK GDPR as well as the Privacy and Electronic Communications (EC Directive) Regulations 2003 (PECR) and the Data Protection Act 2018. (You can read a summary of the Act here.)  

The Act is not fully in force yet. The only substantive amendment (Section 78) to the UK GDPR that came into force on 19th June inserted a new Article 15(1A), relating to subject access requests: 

“…the data subject is only entitled to such confirmation, personal data and other information as the controller is able to provide based on a reasonable and proportionate search for the personal data and other information described in that paragraph.” 

Other provisions of the Act will commence in stages, 2 to 12 months after Royal Assent. The first commencement order, The Data (Use and Access) Act 2025 (Commencement No. 1) Regulations 2025, came into force on 20th August.  

Recognised Legitimate Interests 

The DUA Act amends Article 6 of the UK GDPR to introduce ‘Recognised legitimate interest’ as a new lawful basis for processing personal data. This covers activities such as crime prevention, public security, safeguarding, emergencies and sharing personal data to help other organisations perform their public tasks. The proposed ICO guidance aims to make it easier for organisations to successfully use recognised legitimate interest by explaining how it works, along with giving practical examples. Further details on the 10-week consultation, which closes on 30 October 2025, can be found here.  

Data Protection Complaints 

By June 2026, Data Controllers must have a process in place to handle data protection complaints. A complaint can come from anyone who is unhappy with how an organisation has handled their personal data. The proposed ICO guidance sets out the new requirements and informs organisations of what they must, should and could do to comply. Further details on the eight-week consultation, which closes on 19 October 2025, can be found here.  

Data protection professionals need to assess the changes to the UK data protection regime set out in the DUA Act. Our half-day workshop will explore the new Act in detail, giving you an action plan for compliance. A revised UK GDPR Handbook is now available, incorporating the changes made by the DUA Act.