New Podcast: How to Succeed as an IG Leader 

Act Now is pleased to bring you episode 5 of the Guardians of Data podcast.  

In information governance, there is no substitute for learning from those who have walked the path before us. Experienced IG leaders bring a wealth of knowledge from years at the frontline of data protection and information rights – navigating challenges, overcoming obstacles and shaping best practice along the way.
By sharing their stories, lessons learned and practical advice, they help both new starters and seasoned professionals grow in confidence, strengthen their practice and prepare for the challenges of tomorrow. 

In this episode we are joined by Raz Edwards, Head of Data Security and Protection at Wolverhampton NHS Trust. Raz has over 17 years of experience as a Data Protection Officer, including more than a decade in the NHS. She is also Chair of the National Strategic Information Governance Network and serves as a member of the Upper Tribunal and First-Tier Tribunal in the Information Rights Jurisdiction. 

In our conversation, Raz shares her journey into Information Governance, the challenges she’s faced and overcome as an IG leader, her advice for both new starters and seasoned professionals and her perspective on the future of the profession.
She also reflects on what she’s learned through her tribunal role and what it takes to succeed as an IG leader. 

 Download and listen here, or on your preferred podcast app. Available on Apple Podcasts, Spotify, and all major podcast platforms. 

Previous episodes of the Guardians of Data podcast have featured Jon Baines, reflecting on his career as a Data Protection Specialist and the hot issues in information governance; Lynn Wyeth, discussing the recent controversy around Grok AI; Maurice Frenkel, looking back at 20 years of the Freedom of Information Act; and Olu Odeniyi, analysing recent cyber breaches and the lessons to learn.

AI Transcription Tools in Social Work Under Scrutiny 

Anyone remember Dragon Dictate? The first versions of this voice transcription software required users to spend hours training it (usually wearing a headset) by repeating stock phrases many times over. Even after full training, the transcription output was far from accurate. How technology has moved on, especially in the last few years, with the proliferation of AI. 

AI-powered transcription software has been rapidly adopted by public sector organisations, especially in local authority social work departments. Tools like Magic Notes and Microsoft Copilot are used by social workers to record conversations with children and families (e.g. interviews or assessments), transcribe spoken audio into text and generate summaries automatically. These “ambient scribes” listen in real time or process recordings, reducing the need for manual note-taking and allowing professionals to focus on interactions rather than documentation. However, the use of such tools, especially in sensitive contexts like social work, is not without risks, as a recent report highlighted. 

Ada Lovelace Institute Report 

On 11th February 2026, the Ada Lovelace Institute published a report titled “Scribe and prejudice? Exploring the use of AI transcription tools in social care.” The report explored the dynamics of adoption and the impacts of AI transcription tools in adult and children’s social care across 17 local authorities in England and Scotland. Based on interviews with frontline social workers and managers, it highlighted serious risks that should be addressed by users.  

These include, amongst others: 

AI “Hallucinations”: The AI sometimes generates false information that wasn’t said in the recorded conversation. A prominent example involved an AI-generated summary incorrectly stating that a child had expressed suicidal ideation. This kind of error is especially dangerous in child protection or mental health contexts, where it could trigger unnecessary interventions or lead to flawed decisions about care. 

Gibberish, misrepresentations, and other errors: AI generated transcripts have included nonsense phrases, misspelled names, incorrect speaker attributions (especially in multi-person conversations), fabricated statements, irrelevant or foul language insertions and overly formal or academic wording that doesn’t reflect normal social work language. 

Bias and Harmful Stereotyping: Some outputs have reportedly promoted stereotypes or biased perceptions of individuals that weren’t present in the original recording. 

These issues echo broader concerns about AI but are, of course, more serious in the context of social work records. Inaccuracies entering official care records could lead to incorrect decisions about a child’s safety, family support or adult care, potentially resulting in harm to vulnerable people, professional consequences for social workers or even legal liability. 

Social workers generally bear full responsibility for reviewing and approving these AI outputs (the “human in the loop” safeguard), but practices vary widely according to the report. Some social workers spend minutes checking AI output whilst others spend hours. The report questions how effective this is in high-pressure frontline environments. There is also concern that over-reliance on summarisation features could erode professional judgment and the nuanced, interpretive nature of social work documentation. 

The report notes that in early 2025, one AI transcription tool was already in active use by 85 local authorities for social care. But the Ada Lovelace Institute criticises the “limited and light-touch” approaches to ethics, evaluation, testing, regulation, and risk mitigation so far. It has called for more robust safeguards, better guidance and thorough evaluation before wider use. 

Recommendations 

To ensure the safe and responsible use of AI transcription tools, the Institute urged the government to require local authorities to document their use of such tools through the ‘Algorithmic Transparency Reporting Standard.’ 

It also recommended that social care regulators and local authorities collaborate with relevant sector bodies to develop guidance on using AI transcription tools in statutory processes and formal proceedings, supported by clear accountability structures. 

The Institute added that: ‘To enable end-to-end accountability, regulators and professional bodies should review and revise rules and guidance on professional ethics for social workers and support social workers to collaborate with legal and advisory bodies around procedures for AI use in formal proceedings. An advisory board comprised of people with lived experience of drawing on care should be established to inform these actions.’ 

Further recommendations include: 

  • The UK government should extend its pilots of AI transcription tools to include various locations and public sector contexts. 
  • The UK government should set up a What Works Centre for AI in Public Services to generate and synthesise learnings from pilots and evaluations. 
  • A coalition of researchers, policymakers, civil society and community groups should collaborate on research on the systemic impacts of AI transcription tools. 
  • Local authorities should specify their outcomes and expected impact when procuring AI transcription tools to ensure a shared understanding among staff and users. 

The UK GDPR Angle 

The use of AI-powered transcription software will involve processing highly sensitive personal data, including audio recordings and derived transcripts/summaries of conversations involving vulnerable individuals. This triggers UK GDPR obligations, with heightened risks due to the sensitive nature of the data and potential for harm if errors occur. 

Local authorities and social care providers should integrate UK GDPR compliance into procurement, deployment, and ongoing use of AI transcription software. Key practical steps include: 

  • Conduct a DPIA: Before rollout or expansion, complete a Data Protection Impact Assessment to assess all the risks (e.g. hallucinations affecting accuracy, bias in diverse accents/dialects, unauthorised access). Update DPIAs for new tools or features. Involve the organisation’s Data Protection Officer from the outset. 
  • Choose compliant tools and vendors: Prioritise tools with strong data protection (e.g. UK-hosted data, no unnecessary retention, robust security). Review vendor DPIAs, processor agreements, and compliance certifications.  
  • Establish clear consent and transparency processes: Inform service users upfront about recording, AI involvement, and data use (via privacy notices or verbal explanation). Document decisions and allow opt-outs where appropriate. 
  • Implement strong human oversight and review: Mandate thorough checks of all AI outputs before approving records. Train staff to detect inaccuracies, bias, or inappropriate content. Flag AI-generated sections (e.g. via watermarks or metadata) for transparency and future audits. 
  • Secure data handling and contracts: Use encrypted recording/uploading, limit data shared with tools and delete audio promptly after transcription. Ensure processor contracts (Article 28) specify UK GDPR compliance, audit rights and breach notification. 
  • Monitor, audit and train: Regularly audit tool use and outputs for compliance. Provide targeted training on UK GDPR risks (e.g. accuracy, breaches, bias). Track incidents (e.g. hallucinations) and report serious ones as breaches if required. 
  • Define boundaries for use: Establish consensus on when AI transcription is appropriate (or unacceptable).  
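
The human-oversight step above hinges on knowing which parts of a record came from the AI. A minimal sketch of how provenance flagging and mandatory sign-off might look in practice (this is an illustrative assumption, not a feature of any named tool; the class and function names are hypothetical):

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class RecordSection:
    """One section of a care record, carrying provenance metadata."""
    text: str
    source: str                        # "human" or "ai_generated"
    reviewed_by: Optional[str] = None
    reviewed_at: Optional[datetime] = None

    @property
    def approved(self) -> bool:
        # AI-generated text only enters the final record once a named
        # worker has reviewed it; human-written notes stand as written.
        return self.source == "human" or self.reviewed_by is not None

def approve(section: RecordSection, reviewer: str) -> None:
    """Record who checked an AI-generated section, and when."""
    section.reviewed_by = reviewer
    section.reviewed_at = datetime.now(timezone.utc)

draft = RecordSection(text="Summary of home visit...", source="ai_generated")
assert not draft.approved              # blocked until a human signs it off
approve(draft, reviewer="J. Smith")
assert draft.approved
```

Retaining the reviewer’s name and timestamp alongside the AI flag also supports the audit trail that the monitoring and training steps call for.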

AI transcription offers clear benefits for reducing paperwork and freeing up social workers’ time for direct care. However, strong governance measures must be taken to avoid dangerous inaccuracies slipping into official records, and the potential for biased or harmful decisions. 

Listen to the Guardians of Data Podcast for the latest news and views on data protection, cyber security, AI and freedom of information. 

If you need to train your staff on the responsible use of AI, please get in touch to discuss our customised in-house training. The following public courses may also interest you: 

AI and Information Governance: A one-day workshop examining the key data protection and IG issues when deploying AI solutions. 

AI Governance Practitioner Certificate training programme: A four-day course providing a practical overview of how AI systems are developed, deployed, and regulated, with particular attention to risk, bias, and accountability. 

New Podcast: The Grok AI Controversy 

Act Now is pleased to bring you episode 2 of a new podcast: Guardians of Data. This is a show where we explore the world of information law and information governance – from privacy and AI to cybersecurity and freedom of information. In each episode we will be speaking with experts and practitioners to unpack the big issues shaping the IG profession. 

In the first episode, we were joined by Jon Baines, a Senior Data Protection Specialist at Mishcon de Reya LLP and the long-standing chair of NADPO. In a wide-ranging conversation, Jon shared his journey into IG, his advice for both new starters and seasoned professionals and his perspective on the future of the profession. 

In Episode 2 we discuss the recent controversy around Grok AI. 

Grok, the AI chatbot developed by xAI and integrated into the social media platform X, has caught the attention of governments and regulators across the world after it was used to edit pictures of real women to show them in revealing clothes and suggestive poses. In the UK, Ofcom and the Information Commissioner’s Office have opened formal investigations, a significant step that signals how seriously AI-related risks are now being taken. 

This controversy raises fundamental questions about how AI systems are designed and overseen and about whether existing laws and board-level oversight are keeping pace. In episode 2, we unpack these issues with the help of Lynn Wyeth, an expert in AI, data protection and responsible technology.  

Listen via this link or on your preferred podcast app. 
Available on Apple Podcasts, Spotify, and all major podcast platforms.

Home Office Acknowledges Racial and Gender Bias in UK Police Facial Recognition Technology

Facial recognition is often sold as a neutral, objective tool. But recent admissions from the UK government show just how fragile that claim really is.

New evidence has confirmed that facial recognition technology used by UK police is significantly more likely to misidentify people from certain demographic groups. The problem is not marginal, and it is not theoretical. It is already embedded in live policing.

A Systematic Pattern of Error

Independent testing commissioned by the Home Office found that false-positive rates increase dramatically depending on ethnicity, gender, and system settings.

At lower operating thresholds — where the software is configured to return more matches — the disparity becomes stark. White individuals were falsely matched at a rate of around 0.04%. For Asian individuals, the rate rose to approximately 4%. For Black individuals, it reached about 5.5%. The highest error rate was recorded among Black women, who were falsely matched close to 10% of the time.

The data highlights a striking imbalance: Asian and Black individuals were misidentified almost 100 times more frequently than white individuals, while women faced error rates roughly double those of men.

Why This Is Not an Abstract Risk

This technology is already in widespread use. Police forces rely on facial recognition to analyse CCTV footage, conduct retrospective searches across custody databases, and, in some cases, deploy live systems in public spaces.

The scale matters. Thousands of retrospective facial recognition searches are conducted each month. Even a low error rate, when multiplied across that volume, results in a significant number of people being wrongly flagged.
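
The scale point can be made concrete with a rough back-of-the-envelope calculation. The rates below are those reported above, but the monthly search volume is an assumed round number, not an official figure, and each rate is applied as if every search involved a member of that group:

```python
# Illustrative only: false-positive rates from the Home Office testing
# reported above; the monthly search volume is an assumption for scale.
monthly_searches = 10_000            # hypothetical retrospective searches/month

false_positive_rates = {
    "White individuals": 0.0004,     # ~0.04%
    "Asian individuals": 0.04,       # ~4%
    "Black individuals": 0.055,      # ~5.5%
    "Black women":       0.10,       # ~10%
}

for group, rate in false_positive_rates.items():
    expected = monthly_searches * rate
    print(f"{group}: roughly {expected:,.0f} false matches per month")
```

Even at these crude assumptions, the gap between a handful of false matches and hundreds or thousands per month is the difference the demographic disparity makes in practice.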

A false match can lead to questioning, surveillance, or police intervention. Even if officers ultimately decide not to act, the encounter itself can be intrusive, distressing, and damaging. These effects do not disappear simply because a human later overrides the system.

Bias, Thresholds, and Operational Reality

For years, facial recognition vendors and public authorities argued that bias could be controlled through careful configuration. In controlled conditions, stricter thresholds reduce error rates. But operational pressures often incentivise looser settings that generate more matches, even at the cost of accuracy.

The government’s own findings now confirm what critics have long warned: fairness is conditional. Bias does not vanish; it shifts depending on how the system is used.

The data also shows that demographic impacts overlap. Women, older people, and ethnic minorities are all more likely to be misidentified, with compounded effects for those who sit at multiple intersections.

Expansion Amid Fragile Trust

Despite these findings, the government is consulting on proposals to expand national facial recognition capability, including systems that could draw on large biometric datasets such as passport and driving licence records.

Ministers have pointed to plans to procure newer algorithms and to subject them to independent evaluation. While improved testing and oversight are essential, they do not answer the underlying question: should surveillance infrastructure be expanded while known structural risks remain unresolved?

Civil liberties groups and oversight bodies have described the findings as deeply concerning, warning that transparency, accountability, and public confidence are being strained by the rapid adoption of opaque technologies.

This Is a Governance Issue, Not Just a Technical One

Facial recognition is not simply a question of software performance. It is a question of how power is exercised and how risk is distributed.

When automated systems systematically misidentify certain groups, the consequences fall unevenly. Decisions about who is stopped, questioned, or monitored start to reflect the limitations of technology rather than evidence or behaviour.

Once such systems become normalised, rolling them back becomes difficult. That is why scrutiny matters now, not after expansion.

If technology is allowed to shape policing, the justice system, and public space, it must be subject to the highest standards of accountability, fairness, and democratic oversight.

These and other developments in the use of artificial intelligence, surveillance, and automated decision-making will be examined in detail in our AI Governance Practitioner Certificate training programme, which provides a practical and accessible overview of how AI systems are developed, deployed, and regulated, with particular attention to risk, bias, and accountability.

Proposed Changes to the EU GDPR: Could we see more changes to the UK GDPR?

Yesterday the European Commission published its long-awaited Digital Omnibus Regulation Proposal and Digital Omnibus on AI Regulation Proposal. If approved, these proposals will mean significant changes to the EU GDPR and other EU legislation and may even encourage the UK to further amend the UK GDPR. 

The aim of the “Digital Omnibus” package is to ease administrative burdens for businesses across areas like privacy, cybersecurity and artificial intelligence. Although the EU GDPR is considered balanced and fit for purpose, “targeted changes” are proposed to address concerns, particularly from smaller companies. These include:

  • Clarification of Definitions: The definition of “personal data” is clarified. Information is not considered personal to a company if it does not possess means “reasonably likely” to be used to identify an individual.
  • Processing for AI Training: It is clarified that the processing of personal data for the development and training of AI systems can constitute a “legitimate interest” under certain conditions.
  • Simplified Reporting of Data Breaches: The reporting obligation to supervisory authorities is aligned with the threshold for notifying data subjects. A report is only required if there is a “high risk” to the rights and freedoms of natural persons. The deadline for reporting is extended to 96 hours.
  • Harmonization of Data Protection Impact Assessments (DPIA): National lists of processing operations requiring a DPIA (or not) are to be replaced by unified EU-wide lists to promote harmonisation.
  • Scientific Research: The conditions for data processing for scientific research purposes are clarified by defining “scientific research” and clarifying that this constitutes a legitimate interest.

The EU AI Act also faces a number of amendments, including simplifications for small and medium-sized enterprises and small mid-cap companies in the form of pared-back technical documentation requirements. Other measures involve sandboxes for real-world testing and moves to “reinforce the AI Office’s powers and centralise oversight of AI systems built on general-purpose AI models, reducing governance fragmentation”.

Both omnibus packages now have a long road ahead as they enter trilogue negotiations with the European Parliament and the Council of the European Union. It is expected to take at least several months until negotiations are finalised. 

Impact on the UK

The UK has already enacted its own package of amendments to the UK GDPR in the form of the Data (Use and Access) Act 2025 which received Royal Assent on 19th June 2025. The amendments are quite modest even before comparing them to the EU proposals above. 

A bolder list of amendments was contained in the Data Protection and Digital Information Bill, published in 2022 by the Conservative Government. This included proposals to amend the definition of personal data and to replace Data Protection Officers with Senior Responsible Individuals. The bill was later replaced by a diluted bill of the same name (the No. 2 Bill), only for that to be dropped in the Parliamentary “wash up” stage before the last General Election.

Could the EU reforms (if enacted) lead to the UK making more fundamental changes to the UK GDPR? We doubt it. The Labour Government has more pressing priorities and, with the passing of the DUA Act, it can say it has “done GDPR reform”. If we get a change in Government, then Reform and the Conservatives might target the UK GDPR as a way of reining in “pesky human rights laws”. 

Data protection professionals need to assess the changes to the UK data protection regime made by the DUA Act. Our half day workshop will explore the Act in detail giving you an action plan for compliance. A revised UK GDPR Handbook is now available incorporating the changes made by the DUA Act. 

New Guidance on AI Risk Management

The development, procurement and deployment of AI systems involving the processing of personal data raises significant risks to data subjects’ fundamental rights and freedoms, including but not limited to privacy and data protection. The principle of accountability enshrined in the UK GDPR and the EU GDPR requires Data Controllers to identify and mitigate these risks, as well as to demonstrate how they did so. This is especially important for AI systems that are the product of intricate supply chains often involving multiple actors processing personal data in different capacities.

The European Data Protection Supervisor (EDPS) has just released an important new guidance document to help organisations conduct data protection risk assessments when developing, procuring, or deploying AI systems.  It focuses on the risk of non-compliance with certain data protection principles for which the mitigation strategies that controllers must implement can be technical in nature – namely fairness, accuracy, data minimisation, security and data subjects’ rights. 

Key sections of the document address:

  • the risk management methodology according to ISO 31000:2018
  • the typical development lifecycle of AI systems as well as the different steps involved in their procurement 
  • the notions of interpretability and explainability 
  • an analytical framework for identifying and treating risks that may arise in AI systems, structured according to the data protection principles potentially affected. 
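
The EDPS document frames risk management around the ISO 31000:2018 cycle (identify, analyse, evaluate, treat). A minimal, illustrative risk-register sketch in that spirit; the scoring scales, thresholds and example entries below are assumptions for illustration, not content taken from the EDPS guidance:

```python
# Simple likelihood x severity scoring in the spirit of ISO 31000.
# Scales and thresholds are illustrative assumptions only.
LIKELIHOOD = {"rare": 1, "possible": 2, "likely": 3}
SEVERITY = {"limited": 1, "significant": 2, "severe": 3}

def risk_level(likelihood: str, severity: str) -> str:
    """Combine likelihood and severity into a traffic-light rating."""
    score = LIKELIHOOD[likelihood] * SEVERITY[severity]
    if score >= 6:
        return "high"      # treat before deployment
    if score >= 3:
        return "medium"    # mitigate and monitor
    return "low"           # accept with periodic review

register = [
    # (data protection principle, risk description, likelihood, severity)
    ("accuracy", "model output misattributes statements", "likely", "severe"),
    ("data minimisation", "audio retained longer than needed", "possible", "significant"),
]

for principle, description, likelihood, severity in register:
    print(f"{principle}: {description} -> {risk_level(likelihood, severity)}")
```

Structuring the register by data protection principle mirrors the analytical framework the guidance describes, making it easier to show which principle each mitigation serves.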

The EDPS has issued this guidance in his role as the data protection supervisory authority for EU institutions. However, it is a very useful document for any organisation deploying AI that requires guidance on how to systematically assess the risks from a data protection perspective. 

Our AI Governance Practitioner Certificate course is designed to equip Information Governance professionals with the essential knowledge and skills to manage the risks of AI deployment within their organisations. This year 50 delegates, from a variety of backgrounds, have successfully completed the course, giving great feedback.

The first course of 2026 starts on 8th January. Places are limited so book early to avoid disappointment. If you require an introduction to AI and information governance, please consider booking on our one-day workshop.

Scope of the GDPR: ICO Wins Clearview Appeal  

The Information Commissioner has won his appeal (to the Upper Tribunal) against the First-tier Tribunal (FTT) decision involving Clearview AI Inc.  

Clearview is a US based company which describes itself as the “World’s Largest Facial Network”. Its online database contains 20 billion images of people’s faces and data scraped from the internet and social media platforms all over the world. It allows customers to upload an image of a person to its app; the person is then identified by the app checking against all the images in the Clearview database. The appeal raised the issue of the extent to which processing of the personal data of UK data subjects by a private company based outside the UK is excluded from the scope of the GDPR, including where such processing is carried out in the context of its foreign clients’ national security or criminal law enforcement activities. 

Background 

In May 2022 the ICO issued a Monetary Penalty Notice of £7,552,800 to Clearview for breaches of the UK GDPR including failing to use the information of people in the UK in a way that is fair and transparent. Although Clearview is a US company, the ICO ruled that the UK GDPR applied because of Article 3(2)(b) (territorial scope). It concluded that Clearview’s processing activities “are related to…the monitoring of [UK residents’] behaviour as far as their behaviour takes place within the United Kingdom.” The ICO also issued an Enforcement Notice ordering Clearview to stop obtaining and using the personal data of UK residents that is publicly available on the internet, and to delete the data of UK residents from its systems. 

In October 2023, the FTT overturned the ICO’s enforcement and penalty notices against Clearview. It concluded that, although Clearview did carry out data processing related to monitoring the behaviour of people in the UK (Article 3(2)(b) of the UK GDPR), the ICO did not have jurisdiction to take enforcement action or issue a fine. Both the GDPR and UK GDPR provide that acts of foreign governments fall outside their scope; it is not for one government to seek to bind or control the activities of another sovereign state. However, the Tribunal noted that the ICO could have taken action under the Law Enforcement Directive (Part 3 of the DPA 2018 in the UK), which specifically regulates the processing of personal data in relation to law enforcement. 

The Upper Tribunal Judgement  

The Upper Tribunal allowed the appeal, set aside the decision of the FTT and remitted the matter to the FTT to decide the substantive appeal on the basis that the Information Commissioner had jurisdiction to issue the notices. It also decided that the FTT was right to find that Clearview’s processing fell within the territorial scope of the GDPRs, albeit that it differed in its reasoning. 

In its judgment, the Upper Tribunal ruled that: 

(1) The words “in the course of an activity which falls outside the scope of Union law” in Article 2(2)(a) of the GDPR (which provides for an exclusion from the material scope of the GDPR) refer only to those activities in respect of which Member States have reserved control to themselves and not conferred powers on the Union to act, and not to all matters outside the competence of the Union (as the ICO argued) or to the activities of third parties whose processing “intersects” with their clients’ processing in the course of “quintessentially state functions”, which would offend against comity principles (as Clearview argued); 

(2) The words “behavioural monitoring” in Article 3(2)(b) are to be interpreted broadly, as a response to the challenges posed by ‘Big Data’ in the digital age, and they can encompass passive collection, sorting, classification and storing of data by automated means with a view to potential subsequent use, including use by another controller, of personal data processing techniques which consist of profiling a natural person. “Behavioural monitoring” does not require an element of active “watchfulness” in the sense of human involvement;  

(3) The words “related to” in Article 3(2)(b) of the GDPR have an expansive meaning, and apply not only to controllers who themselves conduct behavioural monitoring, but also to controllers whose data processing is related to behavioural monitoring carried out by another controller. 

Data protection practitioners should read the judgement of the Upper Tribunal as it clarifies the material and territorial scope provisions of the UK GDPR. This and other GDPR developments will be discussed in our forthcoming GDPR Update workshop. 

Our 23rd Birthday! Celebrate with Us and Save on Training  

This month marks 23 years of Act Now Training. We delivered our first course in 2003 (on the Data Protection Act 1998!) at the National Railway Museum in York. Fast forward to today, and we deliver over 300 training days a year on AI, GDPR, records management, surveillance law and cyber security; supporting delegates across multiple jurisdictions including the Middle East.  

Our success comes from more than just longevity; we are trusted by clients across every sector, giving us a unique insight into the real-world challenges of information governance. That’s why our education-first approach focuses on practical skills, measurable impact, and lasting value for your organisation. 

Anniversary Offer: To celebrate, we are giving you a £50 discount on any one-day workshop if you book by 30th September 2025. Choose from our most popular sessions like GDPR and FOI A to Z, or explore new topics like AI and Information Governance and Risk Management in IG.

Simply quote “23rd Anniversary” on your booking form to claim your discount.

Data (Use and Access) Act 2025: ICO Consultation 

Last month the ICO launched public consultations on its guidance in response to the Data (Use and Access) Act 2025 (DUA Act) coming into force. 

The DUA Act received Royal Assent on 19th June 2025. It amends, rather than replaces, the UK GDPR as well as the Privacy and Electronic Communications (EC Directive) Regulations 2003 (PECR) and the Data Protection Act 2018. (You can read a summary of the Act here.)  

The Act is not fully in force yet. The only substantive amendment (Section 78) to the UK GDPR that came into force on 19th June inserted a new Article 15(1A), relating to subject access requests: 

“…the data subject is only entitled to such confirmation, personal data and other information as the controller is able to provide based on a reasonable and proportionate search for the personal data and other information described in that paragraph.” 

Other provisions of the Act will commence in stages, 2 to 12 months after Royal Assent. The first commencement order, The Data (Use and Access) Act 2025 (Commencement No. 1) Regulations 2025, came into force on 20th August.  

Recognised Legitimate Interests 

The DUA Act amends Article 6 of the UK GDPR to introduce ‘Recognised legitimate interest’ as a new lawful basis for processing personal data. This covers activities such as crime prevention, public security, safeguarding, emergencies and sharing personal data to help other organisations perform their public tasks. The proposed ICO guidance aims to make it easier for organisations to successfully use recognised legitimate interest by explaining how it works, along with giving practical examples. Further details on the 10-week consultation, which closes on 30 October 2025, can be found here.  

Data Protection Complaints 

By June 2026, Data Controllers must have a process in place to handle data protection complaints. A complaint can come from anyone who is unhappy with how an organisation has handled their personal data. The proposed ICO guidance sets out the new requirements and informs organisations of what they must, should and could do to comply. Further details on the eight-week consultation, which closes on 19 October 2025, can be found here.  

Data protection professionals need to assess the changes to the UK data protection regime set out in the DUA Act. Our half day workshop will explore the new Act in detail giving you an action plan for compliance. A revised UK GDPR Handbook is now available incorporating the changes made by the DUA Act.

AI Governance Practitioner Certificate: Final Course for 2025 

Act Now is pleased to report that the next AI Governance Practitioner Certificate course, starting in September, is fully booked. There are still a few places available on the next course, starting in October, which is the final one in 2025. 

The AI Governance Practitioner Certificate is designed to equip Information Governance professionals with the essential knowledge and skills to navigate AI deployment within their organisations. As we detailed in our previous blog “What is the role of IG Professionals in AI Governance?”, IG professionals should be aware of how this technology works so that they can help to ensure that there is responsible deployment from an IG perspective, just as would be the case with any new technology.   

So far thirty delegates, from a variety of backgrounds, have successfully completed the course, giving great feedback. Delegates have complimented us on the scope of the syllabus and the delivery style. Cora Suckley, Information Governance Service Manager, Digital Health and Care Wales said: 

“The AI Governance Practitioner Certificate exceeded my expectations. The content was comprehensive and well-structured, successfully bridging the gap between technical AI concepts and essential governance frameworks. The course delved into responsible AI principles, risk management, compliance, policy and ethical considerations, equipping me with practical tools to navigate the evolving regulatory landscape. 

The instructor was excellent and made the sessions interactive, highly engaging and applicable, providing real-world examples. This course provides a solid foundation for implementing AI governance in a meaningful and effective way.” 

The final course for 2025 starts in October. Places are limited so book early to avoid disappointment.