AI Governance Practitioner Certificate: Final Course for 2025 

Act Now is pleased to report that the next AI Governance Practitioner Certificate course, starting in September, is fully booked. There are still a few places available on the next course, starting in October, which is the final one in 2025. 

The AI Governance Practitioner Certificate is designed to equip Information Governance professionals with the essential knowledge and skills to navigate AI deployment within their organisations. As we detailed in our previous blog “What is the role of IG Professionals in AI Governance?”, IG professionals should be aware of how this technology works so that they can help to ensure that there is responsible deployment from an IG perspective, just as would be the case with any new technology.   

So far, thirty delegates from a variety of backgrounds have successfully completed the course and given excellent feedback, complimenting us on the scope of the syllabus and the delivery style. Cora Suckley, Information Governance Service Manager, Digital Health and Care Wales, said: 

“The AI Governance Practitioner Certificate exceeded my expectations. The content was comprehensive and well-structured, successfully bridging the gap between technical AI concepts and essential governance frameworks. The course delved into responsible AI principles, risk management, compliance, policy and ethical considerations, equipping me with practical tools to navigate the evolving regulatory landscape. 

The instructor was excellent and made the sessions interactive, highly engaging and applicable, providing real-world examples. This course provides a solid foundation for implementing AI governance in a meaningful and effective way.” 

The final course for 2025 starts in October. Places are limited so book early to avoid disappointment.  

Dr. Malkiat Thiarai Joins the Act Now Team 

Act Now is delighted to welcome Dr. Malkiat Thiarai to our team of associates. 

Our associates play a vital role in delivering our mission: helping to create a more privacy-conscious world by educating IG professionals. At Act Now, we pride ourselves on providing training that is clear, practical, and jargon-free — making complex topics accessible and engaging. 

Every one of our associates brings extensive real-world experience from the information governance sector. This expertise enriches our courses and ensures they remain relevant, insightful, and highly rated by our delegates. We are excited to have Dr. Thiarai join us in continuing this tradition of excellence. 

Dr. Malkiat Thiarai has worked for Birmingham City Council for over 30 years and has led the information governance function for over 20 years. He is currently the Head of Practice – Corporate Information Management and part of the council’s Digital and Technology Services multi-disciplinary leadership team. His role encompasses the duties of the Data Protection Officer as well as other aspects of information governance. He helps to improve the management of council data assets and provides strategic and operational oversight of information management.   

In 2021, Dr. Malkiat successfully completed a PhD in Urban Science at the University of Warwick. His research focussed on understanding the challenges and capability of using personal data held within public sector organisations for research purposes, and on using that analysis to develop new models of service delivery focused on social care data whilst balancing the individual’s rights to privacy and a personal life. He has previously completed the LLM in Information Rights and Law as well as an MBA in Public Service. 

Dr. Malkiat will be developing new courses around his area of expertise and sitting on our curriculum and exam board. He will also be assisting our team to deliver everything from one-day workshops to advanced practitioner certificate courses. 

Ibrahim Hasan, Director of Act Now Training, said:  

“I am very pleased that Dr. Malkiat has joined our team. I have known Malkiat for over 25 years. I am confident that, with his strong academic background coupled with many years of experience working in IG, he will be a great contribution to our team, developing innovative curricula to help foster a culture of responsible data usage, build public trust and drive positive change.” 

AI Governance Practitioner Certificate: First Cohort Successfully Completes Course 

Act Now is pleased to report that the first cohort of its new AI Governance Practitioner Certificate has successfully completed the course. 

This course is designed to equip Information Governance professionals with the essential knowledge and skills to navigate AI deployment within their organisations. As we detailed in our previous blog “What is the role of IG Professionals in AI Governance?”, IG professionals should be aware of how this technology works so that they can help to ensure that there is responsible deployment from an IG perspective, just as would be the case with any new technology.   

The first course ran over a four-week period in May and June, with ten delegates from the health sector in Wales. They all successfully completed the course assessment in July. 

The course was extremely well received by the delegates who complimented us on the scope of the syllabus and the delivery style: 

“I took a huge amount from the course which will help shape the development of processes for us internally in the coming months.” Dave Parsons , WASPI Code Manager (Wales Accord on the Sharing of Personal Information)  

“This was a superb course with a lot of information delivered at a carefully managed rate that encouraged discussion and reflection.  Literacy in AI and its application is vital – without it we cannot comprehend the ever changing level of IG threat and risk.” MA, Digital Health and Care Wales

“The training was very good. The instructor was also very knowledgeable about the subject.” HP, Digital Health and Care Wales

Cora Suckley, Information Governance Service Manager, Digital Health and Care Wales said: 

“The AI Governance Practitioner Certificate exceeded my expectations. The content was comprehensive and well-structured, successfully bridging the gap between technical AI concepts and essential governance frameworks. The course delved into responsible AI principles, risk management, compliance, policy and ethical considerations, equipping me with practical tools to navigate the evolving regulatory landscape. 

The instructor was excellent and made the sessions interactive, highly engaging and applicable, providing real-world examples. This course provides a solid foundation for implementing AI governance in a meaningful and effective way.” 

Two more cohorts are currently completing the course. The next course starts in September and has a few places left.  

Charity Receives £18,000 GDPR Fine

On Monday, a Scottish Charity (Birthlink) received a GDPR Monetary Penalty Notice of £18,000 after it destroyed approximately 4,800 personal records, up to ten percent of which may be irreplaceable. 

Birthlink is a charity specialising in post-adoption support and advice for people who have been affected by adoption with a Scottish connection.
Since 1984 it has owned and maintained the Adoption Contact Register for Scotland. The Register allows adopted people, birth parents, birth relatives and relatives of an adopted person to register their details with the aim of being linked to and potentially reunited with family members. 

Key findings from the Information Commissioner’s Office (ICO) investigation include: 

  • Handwritten letters and photographs from birth parents amongst items destroyed 
  • Some people’s access to part of their family histories and identities may have been permanently erased due to systematic data protection failures 
  • Poor records management means the true extent of the loss will never be fully known 
  • The charity had limited knowledge of data protection obligations and lacked cost-effective and easy-to-implement policies and procedures, which would likely have prevented the destruction. 

Background 

In January 2021, Birthlink reviewed whether they could destroy ‘Linked Records’ as space was running out in the charity’s filing cabinets. ‘Linked Records’ are files of cases where people had already been linked with the person they sought and can include handwritten letters from birth parents, photographs, and copies of birth certificates.  

Following a February 2021 Board meeting, it was agreed that there were no barriers to the destruction of records, but that retention periods should apply to certain files and only replaceable records could be destroyed. Due to poor record keeping, it is estimated that some records were destroyed on 15 April 2021, with a further 40 bags destroyed on 27 May 2021.  

In August 2023, following an inspection by the Care Inspectorate, the Birthlink Board became aware that irreplaceable items had in fact been destroyed as part of the overall record destruction. It reported the incident to the ICO. 

ICO Findings 

The ICO investigation found the following infringements of the UK GDPR: 

  1. Birthlink’s destruction of manual records containing personal data of approximately 4,800 of its service users without authorisation or lawful basis (“Relevant Processing”) occurred as a result of its failure to implement appropriate organisational measures ensuring the security of the personal data contained in the records. In this regard, the ICO found that Birthlink contravened Articles 5(1)(f) and 32(1)-(2) of the UK GDPR (security). 
  2. A significant contributing factor leading to the Relevant Processing was Birthlink’s failure to demonstrate compliance with the data protection principles in accordance with Article 5(2) of the UK GDPR. Birthlink has accepted that there was limited understanding of the UK GDPR at the time of the Relevant Processing, until around March 2023 when it introduced data protection training for its staff. 
  3. Despite acknowledging the high risk to affected service users arising from the Relevant Processing, Birthlink did not notify the ICO of the personal data breach until 8 September 2023. A delay of two years and five months represents a marked departure from the obligation to notify the ICO within 72 hours of becoming aware of a personal data breach, in accordance with Article 33(1) UK GDPR. 
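The 72-hour clock in Article 33(1) is easy to illustrate. The short sketch below uses illustrative dates based on the timeline above (the exact date the Board became aware is an assumption); it simply computes the statutory notification deadline and checks whether a notification made on a given date was late:

```python
from datetime import datetime, timedelta

# Article 33(1) UK GDPR: notify the ICO within 72 hours of becoming
# aware of a personal data breach. The awareness date below is an
# illustrative assumption ("August 2023" per the case timeline).
aware = datetime(2023, 8, 1)       # assumed date the controller became aware
notified = datetime(2023, 9, 8)    # actual notification date reported

# Statutory deadline: 72 hours after awareness.
deadline = aware + timedelta(hours=72)

late = notified > deadline
days_over = (notified - deadline).days

print(f"Deadline: {deadline:%d %B %Y}")       # Deadline: 04 August 2023
print(f"Late: {late}, by {days_over} days")   # Late: True, by 35 days
```

The same arithmetic applies to any breach: the clock runs from awareness, not from the breach itself, which is why the ICO measured Birthlink's delay against when it should have known, not merely when it notified.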

Why a fine now? 

This fine comes two weeks after the catastrophic data breach involving the Ministry of Defence (MoD) was reported, following the High Court lifting a superinjunction. In February 2022, an MoD official mistakenly emailed a spreadsheet containing personal details of over 18,000 Afghan nationals who had applied to move to the UK under the Afghan Relocations and Assistance Policy (ARAP). The data breach also contained personal details of more than 100 British officials including those whose identities are most closely guarded; special forces and spies.  

Despite the scale and sensitivity of the MoD data breach, the ICO decided not to take any regulatory action; not even a reprimand! In its press release, the ICO praised the MoD’s internal investigation and mitigation efforts, stating that “no further regulatory action is required at this time”.  

The ICO has been heavily criticised for its inaction. The Commons Defence Committee said it would launch its own inquiry, and Dame Chi Onwurah, chair of the Commons Committee for Science, Innovation and Technology, said that her committee is writing to the Information Commissioner to push for an investigation. Following this, the Information Commissioner issued a further statement explaining the ICO approach.  

Of course, no one is suggesting that the Birthlink fine is an attempt by the ICO to move on from the MoD non-enforcement, but readers may at least be wondering why a relatively small Scottish charity is fined whilst a large government department (which has been fined previously in similar circumstances) has faced no action at all.  

This case shows the importance of good records management in ensuring GDPR compliance. Our forthcoming workshop will help you implement records management best practice and understand how it can help manage the personal data lifecycle. 

Retail Under Siege Through AI Enabled Cyber Attacks 

The UK retail sector has come under siege in 2025, with an unprecedented wave of cyber attacks. After the 2024 Ticketmaster breach, in which millions of users were affected, one would assume retailers had taken note. However, from Marks & Spencer to Louis Vuitton, companies large and small are grappling with relentless, tech-enhanced intrusions that threaten customer trust and digital resilience. Receiving an email from a company apologising for a data breach is almost a daily occurrence these days, and no retailer seems safe, regardless of its size or stature. Sometimes it is a retailer you may not have shopped with for years, at which point you must be wondering, ‘What’s their data retention policy?’ 
 
Below we take a look at some of the major breaches and attacks of 2025 and what you can do to protect your information online. 

High-Profile Retail Cyberattacks of 2025 

Here’s a snapshot of the most disruptive recent cyber incidents: 

Company | Date | Attack Type | Impact & Highlights 
Louis Vuitton UK | July 2025 | Data breach | Customer contact details & purchase history stolen; phishing scams followed 
Marks & Spencer | April 2025 | Ransomware | £3.8M/day in lost revenue; £700M market value wiped; credential theft via vendor 
Harrods | May 2025 | Attempted breach | Real-time containment; no confirmed data loss but serious operational disruption 
Co-op UK | May 2025 | Ransomware | Customer data compromised; back-office systems disabled 
Peter Green Chilled | May 2025 | Ransomware | Disrupted cold-chain deliveries to Tesco, Aldi, Waitrose 
Victoria’s Secret | Spring 2025 | Web attack | E-commerce platform outage during peak shopping period 

These incidents underscore one clear truth: cybercrime is evolving, and no retailer, no matter its size or prestige, is immune. What is worrying is that even companies with vast resources remain extremely vulnerable. 

The Role of AI  

In many of these data breaches, AI was used by hackers to accelerate and deepen the damage. Their tactics included: 

  • Hyper-Personalised Phishing: AI-generated messages mimicked trusted communications, referencing recent purchases to trick recipients. Louis Vuitton customers received convincing fake discount offers. 
  • Credential Cracking and MFA Bypass: AI automated brute-force login attacks, while adversary-in-the-middle techniques stole session tokens to sidestep multi-factor authentication. 
  • Network Reconnaissance: Malicious bots used AI to scan retail systems, identify vulnerabilities, and map out supply chains for deeper impact. 
  • Autonomous Ransomware: Sophisticated strains like DragonForce adapted in real time to avoid detection and self-propagate through connected systems. 
  • Voice Phishing (Vishing): AI-generated voices impersonated IT staff to deceive employees into disclosing access credentials; a tactic especially potent in luxury retail. 

AI has supercharged cybercrime, making attacks faster, more targeted, and far harder to detect. With the emergence of ransomware-as-a-service (RaaS) and dedicated leak sites (DLS), there is now a far more accessible marketplace for our data. 

How Consumers Can Protect Their Data 

While companies bear the financial burden of breaches, consumers often suffer the most: through stolen data, financial fraud, and disrupted services. Lessons for consumers include: 

  • Even luxury brands are vulnerable – don’t assume prestige equals protection. 
  • Cyberattacks are increasingly tailored based on what you buy, how often you shop, and where you live. 
  • Supply chains and vendor access are weak points; your data might be exposed even if the retailer itself isn’t directly breached. 

Whether you shop in-store or online, these simple steps can dramatically improve the security of your personal data: 

Digital Defence 

  • Use Strong, Unique Passwords: A password manager can help you avoid reuse and weak combinations. 
  • Enable Multi-Factor Authentication: Critical for accounts tied to payments or personal information. 
  • Monitor Your Financial Activity: Check bank statements and credit reports for irregularities. Set up alerts where possible. 
  • Be Phishing-Aware: Always verify communications by visiting the retailer’s official website. Don’t click suspicious links or download unexpected attachments. 
  • Don’t Save Your Payment Data: Avoid saving your payment and address details with retailers online wherever possible.  

Data Discipline 

  • Limit the Personal Data You Share: Don’t offer extra details to loyalty schemes or retailers unless absolutely necessary. 
  • Freeze Your Credit (If Breached): Prevent identity thieves from opening new accounts using your stolen details. 

Payment Hygiene 

  • Use Credit Cards Online: They offer better fraud protection and don’t expose your actual bank balance. In addition, you have certain buyer protections when buying on a credit card. 
  • Avoid Public Wi-Fi for Shopping: Use a VPN or shop from secure, private networks. 

The digital age has made shopping easier, but also riskier. Cybersecurity now requires a partnership between retailers and consumers. Companies must implement zero-trust architectures, AI-powered threat detection and employee cyber-awareness training. Meanwhile, consumers should stay informed, cautious, and quick to respond when their personal data is at risk. 

According to Stanford University’s recent study, human error accounted for 88% of data breaches and a recent Accenture study found that there has been a 97% increase in cyber threats since the start of the Russia/Ukraine war.  
 
We have two workshops coming up (How to Increase Cyber Security in your Organisation and Cyber Security for DPOs) which are ideal for organisations who wish to upskill their employees about cyber security. 

The MoD Afghan Data Breach: Could the Information Commissioner have done more? 

On Tuesday, the High Court lifted a superinjunction that prevented scrutiny of one of the most serious personal data breaches involving a UK Government department. In February 2022, a Ministry of Defence (MoD) official mistakenly emailed a spreadsheet containing personal details of over 18,000 Afghan nationals who had applied to move to the UK under the Afghan Relocations and Assistance Policy (ARAP).  

The breach was only discovered in August 2023, when excerpts of the data appeared on Facebook. By then, the damage was done. A new resettlement scheme for those on the leaked list was set up and has seen 4,500 Afghans arrive in the UK so far. The Afghan Relocation Route has cost £400m to date, and the Government has said it is expected to cost a further £450m. Interestingly, in May 2024 the High Court heard it could cost “several billions”. 

Shockingly, people whose details were leaked were only informed on Tuesday. A review of the incident carried out on behalf of the MoD found it was “highly unlikely” an individual would have been targeted solely because of the leaked data, which “may not have spread nearly as widely as initially feared”. On Wednesday though, the Defence Secretary said he was “unable to say for sure” whether anyone had been killed as a result of the data breach. The daughter of an Afghan translator whose details were leaked told the BBC that her whole family “panicked”.  

“No one knows where the data has been sent to – it could be sent to the Taliban, they could have their hands on it,” she said. Her grandmother, who is still in Afghanistan, is “completely vulnerable”, she added. 

This is not the first time the MoD has mishandled Afghan data. In December 2023, it was fined £350,000 for disclosing details of people seeking relocation to the UK shortly after the Taliban took control of Afghanistan in 2021. The MoD sent an email to a distribution list of Afghan nationals eligible for evacuation using the ‘To’ field, with personal information relating to 245 people being inadvertently disclosed. The email addresses could be seen by all recipients, with 55 people having thumbnail pictures on their email profiles.  
Two people ‘replied all’ to the entire list of recipients, with one of them providing their location.  

ICO’s Response 

Despite the scale and sensitivity of the latest MoD data breach, the Information Commissioner’s Office (ICO) has decided not to take any regulatory action; no, not even a reprimand! In its press release, the ICO praised the MoD’s internal investigation and mitigation efforts, stating that “no further regulatory action is required at this time”. 

Compare this case to the data breach involving the Police Service of Northern Ireland (PSNI). Last year, the ICO fined the PSNI £750,000 after staff mistakenly divulged the surnames of 9,483 PSNI officers and staff, their initials and other data in response to a Freedom of Information (FoI) request. The request, made via the WhatDoTheyKnow.com website, had asked the PSNI for a breakdown of all staff ranks and grades. But as well as publishing a table containing the number of people holding positions such as constable, a spreadsheet was included. The information was published on the WDTK website for more than two hours, leaving many fearing for their safety. 

In September last year, it was announced that a mediation process involving the PSNI would take place to attempt to agree the amount of damages to be paid to up to 7,000 staff impacted by the data breach. The final bill could be as much as £240m, according to previous reports. Compare that with the impact and cost of the latest MoD data breach. 

Other ICO enforcement actions in the past few years for security failures include: 

  • Cabinet Office (2020): Fined £500,000 for publishing New Year Honours list online. Cause? Spreadsheet error. 
  • HIV Scotland (2021): Fined £10,000 when it sent an email to 105 people living with HIV. All the email addresses were visible to all recipients, and 65 of the addresses identified people by name. From the personal data disclosed, an assumption could be made about individuals’ HIV status or risk.   
  • Mermaids (2021): Fined £25,000 for failing to implement an appropriate level of security to its internal email systems, which resulted in documents or emails containing personal data being searchable and viewable online by third parties through internet search engine results.  

In the MoD case, the ICO claims it considered the “critical need to share data urgently” and the MoD’s “steps to protect those most affected”. But urgency wasn’t the issue; it was negligence. The breach occurred during routine verification, not a crisis. Even more concerning, the ICO’s own guidance states that breaches involving unauthorised disclosure of sensitive data, especially where lives are at risk, should trigger enforcement action. 

This lack of action raises serious questions about the ICO’s independence and willingness to challenge government departments. Even if it felt a fine was not appropriate, a report to Parliament (under Section 139(3) of the Data Protection Act 2018) would have highlighted the seriousness of the issues raised and consequently allowed MPs to scrutinise the MoD’s actions.  

This breach is a national scandal; not just for its scale, but for the lack of transparency, accountability, and regulatory action. If the UK is serious about data protection, it must demand more from its regulator. Otherwise, the next breach may be even worse and just as quietly buried. 

Yesterday, the Commons Defence Committee confirmed it would launch its own inquiry, and Dame Chi Onwurah, chair of the Commons Committee for Science, Innovation and Technology, said that her committee is writing to the Information Commissioner to push for an investigation. Watch this space! 

STOP PRESS: This afternoon the BBC reports that the data breach was much worse than previously thought: it contained personal details of more than 100 British officials including those whose identities are most closely guarded – special forces and spies. Is an ICO u-turn incoming?

We have two workshops coming up (How to Increase Cyber Security in your Organisation and Cyber Security for DPOs) which are ideal for organisations who wish to upskill their employees about cyber security.

When AI Misses the Line: What Wimbledon 2025 Teaches Us About Deploying AI in the Workplace 

This year’s Wimbledon Tennis Championships are not just a showcase for elite athleticism but also a high-profile test of Artificial Intelligence. For the first time in the tournament’s 148-year history, all line calls across its 18 courts are made entirely by Hawk-Eye Live, an AI-assisted system that has replaced human line judges. This follows, amongst others, the semi-automated offside technology deployed in last year’s football Champions League after its success in the Qatar World Cup.  

The promise? Faster decisions, greater consistency, and reduced human error. 
The reality? Multiple malfunctions, public apologies, and growing mistrust among players and fans (not to mention losing the ‘best dressed officials’ in sport). 

What Went Wrong? 

  • System Failure Mid-Match: During a high-stakes women’s singles match between Anastasia Pavlyuchenkova and Sonay Kartal, the line-calling system was accidentally switched off for several points. No alerts were raised, and the match proceeded without accurate line calls. Wimbledon officials later admitted human error was to blame, not the AI. 
  • Misclassification Errors: In the men’s quarter-final between Taylor Fritz and Karen Khachanov, Hawk-Eye incorrectly called a rally forehand a “fault,” apparently confusing it with a serve. Play was halted and the point was replayed, leaving fans and players confused and frustrated. 
  • User Experience Failures: Multiple players, including Emma Raducanu and Jack Draper, complained that some calls were “clearly wrong” and that the system’s announcements were too quiet to hear amid crowd noise. Some players called for the return of human line judges, citing a lack of trust in the technology.  

Lessons for AI and IG Professionals 

Wimbledon’s AI hiccup offers more than a headline; it surfaces deep issues around trust, oversight, and operational design that are relevant to any AI deployment in the workplace. Here are the key lessons: 

1. Automation ≠ Autonomy 

The Wimbledon system is not truly autonomous; it relies on human operators to activate it before each match. When staff forgot to do so, the AI didn’t intervene or alert anyone. This exposes a major pitfall: automated systems are only as reliable as their orchestration layers. 

Governance Principle: Ensure clear workflows and audit trails around when and how AI systems are initiated, paused, or overridden. Build in fail-safe triggers and status checks to prevent silent failures. 
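The fail-safe idea can be sketched in a few lines. The class and method names below are hypothetical, not Hawk-Eye’s actual API; the point is simply that an automated system should log its state and refuse to proceed silently when it is inactive or unresponsive, rather than letting play continue as happened at Wimbledon:

```python
from datetime import datetime, timedelta

class LineCallingService:
    """Illustrative wrapper around an automated decision system
    (hypothetical API). Demonstrates a fail-safe status check: if the
    system is inactive or its heartbeat is stale, escalate to humans
    instead of silently proceeding without it."""

    def __init__(self, heartbeat_timeout=timedelta(seconds=5)):
        self.active = False
        self.last_heartbeat = None
        self.heartbeat_timeout = heartbeat_timeout
        self.audit_log = []  # audit trail: who did what, and when

    def activate(self, operator: str):
        # Activation is logged against a named operator.
        self.active = True
        self.last_heartbeat = datetime.now()
        self.audit_log.append((datetime.now(), operator, "activated"))

    def heartbeat(self):
        # Called periodically by the underlying system to prove liveness.
        self.last_heartbeat = datetime.now()

    def call_point(self):
        # Fail-safe: never return a silent default when the system is
        # off or unresponsive -- raise so human officials are alerted.
        if not self.active:
            self.audit_log.append((datetime.now(), "system", "inactive at call time"))
            raise RuntimeError("AI system inactive: defer to human officials")
        if datetime.now() - self.last_heartbeat > self.heartbeat_timeout:
            raise RuntimeError("AI heartbeat stale: defer to human officials")
        return "IN"  # placeholder for the real automated decision
```

Used this way, forgetting to activate the system produces a loud, logged failure at the first point rather than an unnoticed gap in coverage.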

2. Build in Redundancy and Exception Handling 

AI systems excel at pattern recognition in controlled environments but can fail spectacularly at edge cases. Wimbledon’s AI was likely trained on thousands of hours of ball trajectories – but it still confused a forehand rally shot with a serve under unusual conditions. 

Governance Principle: Plan for edge case management. When the AI encounters uncertainty, it should either defer to human review or trigger a fallback protocol.  
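A minimal sketch of this fallback pattern, with hypothetical names and an assumed confidence threshold, shows how an uncertain automated decision can be routed to human review instead of guessed:

```python
def decide_with_fallback(model_output, confidence, threshold=0.90):
    """Illustrative edge-case handler (names and threshold are
    assumptions, not any vendor's API). The automated decision is
    accepted only above a confidence threshold; below it, the case
    is routed to human review rather than guessed."""
    if confidence >= threshold:
        return {"decision": model_output, "route": "automated"}
    # Low confidence: no decision is emitted; a human takes over.
    return {"decision": None, "route": "human_review"}
```

For example, a clean line call at 0.97 confidence would be returned automatically, while an ambiguous forehand-versus-serve classification at 0.55 would go to a human official.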

3. Usability is a Core Component of Accuracy 

Even when the AI was functioning correctly, players couldn’t always hear the line calls due to low audio volume. What good is a precise call if the user can’t perceive it? 

Governance Principle: Don’t separate accuracy from usability. A technically correct output must be understandable, accessible, and actionable to its end users. Invest in UI/UX design early in the AI lifecycle. 

4. Transparency Builds Trust 

Wimbledon’s initial response (vague statements and slow clarifications) only fuelled player frustration. Trust was eroded not just because of the error, but because of how it was handled. 

Governance Principle: When deploying AI, especially in high-stakes environments, build a culture of transparent accountability. Log decisions, explain anomalies, and communicate clearly when things go wrong. 

5. Hybrid Systems Are Often More Effective Than Pure AI 

While Wimbledon has fully replaced line judges with AI, there’s a strong case for a hybrid model. A combination of automated systems with empowered human oversight could preserve both accuracy and human judgment. 

Governance Principle: Consider augmented intelligence models, where AI supports rather than replaces human decision-makers. This ensures operational continuity and enables learning from both machine and human feedback. 

6. Respect Context and Culture 

Wimbledon isn’t just any tournament; it’s steeped in tradition, where human line judges are part of the spectacle. Removing them altered the tournament’s character, sparking emotional backlash from players and spectators alike. 

Governance Principle: Understand the organisational and cultural context where AI is deployed. Technology doesn’t operate in a vacuum. Change management, stakeholder engagement, and empathy are as important as algorithms. 

The problems with Wimbledon’s AI line-calling system are symptoms of incomplete design thinking. Whether you’re deploying AI in HR analytics, document classification, or customer service, the Wimbledon experience shows that trust isn’t just built on data; it’s built on reliability, clarity, and human-centred design. 

In a world increasingly mediated by automation, we must remember: AI doesn’t replace the need for governance. It raises the stakes for getting it right. And we just wish it had been around for the “Hand of God” goal.

Are you looking to enhance your career with an AI governance qualification? Our AI Governance Practitioner Certificate is designed to equip compliance professionals with the essential knowledge and skills to navigate this transformative technology while upholding the highest standards of data protection and information governance. The first course was fully booked, and we have added more dates.

The New Data (Use and Access) Act 2025 

The Data (Use and Access) Act 2025 received Royal Assent on 19th June 2025. It is important to note that the new Act will not replace current UK data protection legislation. Rather it will amend the UK GDPR as well as the Privacy and Electronic Communications (EC Directive) Regulations 2003 (PECR) and the Data Protection Act 2018. Most of these amendments will commence in stages, 2 to 12 months after Royal Assent. Exact dates for each measure will be set out in commencement regulations. 

The Bill was introduced into Parliament in October last year. It was trailed in the King’s Speech in July (under its old name of the “Digital Information and Smart Data Bill”) with His Majesty announcing that there would be “targeted reforms to some data laws that will maintain high standards of protection but where there is currently a lack of clarity impeding the safe development and deployment of some new technologies.” However, this statement of intent does not match the reality; many of the core provisions are a “cut and paste” of the Data Protection and Digital Information (No. 2) Bill (“DP Bill”), which was dropped by the Conservative Government in the Parliamentary “wash up” stage before last year’s snap General Election. 

Key Provisions 

Let’s examine the key provisions of the new Act.  

Smart Data: The Act retains the provisions from the DP Bill that will enable the creation of a legal framework for Smart Data. This involves companies securely sharing customer data, upon the customer’s (business or consumer) request, with authorised third-party providers (ATPs) who can enhance the customer data with broader, contextual ‘business’ data. These ATPs will provide the customer with innovative services to improve decision making and engagement in a market. Open Banking is the only current example of a regime that is comparable to a ‘Smart Data scheme’. The Act will give such schemes a statutory footing, from which they can grow and expand.  

Digital Identity Products: Just like its predecessor, the Act contains provisions aimed at establishing digital verification services, including digital identity products to help people quickly and securely identify themselves when they use online services e.g. to help with moving house, pre-employment checks and buying age-restricted goods and services. It is important to note that this is not the same as compulsory digital ID cards, as some media outlets have reported. 

Research Provisions: The Act keeps the DP Bill’s provisions that clarify that companies can use personal data for research and development projects, as long as they follow data protection safeguards.  

Legitimate Interests: The Act retains the DP Bill’s concept of ‘recognised legitimate interests’ under Article 6 of the UK GDPR: specific purposes for personal data processing, such as national security, emergency response, and safeguarding, for which Data Controllers will be exempt from conducting a full “Legitimate Interests Assessment”.  

Subject Access Requests: The Act makes it clear that Data Controllers only have to make reasonable and proportionate searches when someone asks for access to their personal data. 

Automated Decision Making: Like the DP Bill, the Act seeks to limit the right, under Article 22 of the UK GDPR, for a data subject not to be subject to automated decision making or profiling to only cases where Special Category Data is used. Under new Article 22A, a decision would qualify as being “based solely on automated processing” if there was “no meaningful human involvement in the taking of the decision”. This could give the green light to companies to use AI techniques on personal data scraped from the internet for the purposes of pre-employment background checks. 

International Transfers: The Act maintains most of the DP Bill’s international transfer provisions. There will be a new approach to the test for adequacy, applied by the UK Government to countries (and international organisations) and by Data Controllers when carrying out a Transfer Impact Assessment (TIA). The threshold for this new “data protection test” will be whether a jurisdiction offers protection that is “not materially lower” than under the UK GDPR. 

Health and Social Care Information: The Act maintains, without any changes, the provisions that establish consistent information standards for health and adult social care IT systems in England, enabling the creation of unified medical records accessible across all related services. 

PECR Changes: One of the most significant changes, copied from the DP Bill, is the increase in maximum fines for breaches of PECR from £500,000 to UK GDPR levels, meaning organisations could face fines of up to £17.5m or 4% of global annual turnover (whichever is higher) for the most serious infringements. Other changes include allowing cookies to be used without consent for the purposes of web analytics and to install automatic software updates, and extending the “soft opt-in” for electronic marketing to charities.  
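To make the new maximum concrete, here is a minimal sketch of the “higher of” calculation. The function name and turnover figures are invented for illustration; this simplifies a penalty regime that in practice depends on the nature and seriousness of the infringement.

```python
def max_pecr_fine(global_annual_turnover_gbp: float) -> float:
    """Illustrative only: the higher of the £17.5m fixed cap
    or 4% of global annual turnover."""
    FIXED_CAP_GBP = 17_500_000
    return max(FIXED_CAP_GBP, 0.04 * global_annual_turnover_gbp)

# A smaller organisation: 4% of £50m is £2m, so the £17.5m cap applies
print(max_pecr_fine(50_000_000))       # 17500000
# A multinational turning over £1bn: 4% (£40m) exceeds the cap
print(max_pecr_fine(1_000_000_000))    # 40000000.0
```

In other words, the fixed cap acts as a floor on the theoretical maximum: only organisations with global turnover above £437.5m see the 4% limb bite.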

A full list of the changes to the UK data protection regime can be read on the ICO website.  

What is not in the new Act? 

Most of the controversial parts of the DP Bill have not made it into the Act. These include: 

  • Replacing the terms “manifestly unfounded” or “excessive” requests, in Article 12 of the UK GDPR, with “vexatious” or “excessive” requests. Explanation and examples of such requests would also have been included.  
  • Exempting all controllers and processors from the duty to maintain a ROPA, under Article 30, unless they are carrying out high-risk processing activities.  
  • The “strategic priorities” mechanism, which would have allowed the Secretary of State to set binding priorities for the Information Commissioner. 
  • The requirements for the Information Commissioner to submit codes of practice to the Secretary of State for review and recommendations.  

The UK’s adequacy status under the EU GDPR now expires on 27th December following the recent announcement of a six-month extension. Whilst the EU will now commence a formal review of adequacy, nothing in the new Act should jeopardise the free flow of personal data between the EU and the UK. The situation would perhaps have been different had the DP Bill made it onto the statute books.  

AI and Copyright 

Much of the delay to the Bill’s passage was caused by an issue which was not originally intended to be addressed in it: the use of copyright works to train AI. Like the monster plant in Little Shop of Horrors, AI has an insatiable appetite, though for data rather than food. AI applications need a constant supply of data to train (and improve) their algorithms. This obviously concerns copyright holders such as musicians and writers, whose work may be used to train AI models to produce similar output without the former receiving any financial compensation. A number of copyright infringement lawsuits are set to hit the courts soon. Amongst them, Getty Images is suing Stability AI, accusing it of using Getty’s images to train its Stable Diffusion system, which can generate images from text inputs. Similar lawsuits have been launched in the US by novelists and news outlets. 

During the passage of the Bill through Parliament, there was strong disagreement between the Lords and the Commons over an amendment introduced by the crossbench peer and former film director Beeban Kidron. The amendment would have required AI developers to be transparent with copyright owners about using their material to train AI models. 400 British musicians, writers and artists, including Sir Paul McCartney, signed a letter urging the Government to adopt the amendment. They argued that failing to do so would mean them “giving away” their work to tech firms.  

In the end, Baroness Kidron dropped her amendment following its repeated rejection in the Commons. I expect this issue to raise its head again soon. The Government’s consultation on AI and copyright ended in February. Amongst other options, it proposed to give copyright holders the right to opt out of their works being used for training AI. However, the music industry believes that such a measure would offer insufficient protection for copyright holders. In an interview with the BBC, Sir Elton John described the government as “absolute losers” and said he feels “incredibly betrayed” over the Government’s plans. 

Once the Government publishes its response to the copyright consultation, it will have to consider how to take the matter forward. Whether this comes in the form of a new copyright bill or an AI regulation bill, expect more parliamentary wrangling as well as celebrity interviews.  

Data protection professionals need to assess the changes to the UK data protection regime. Our half day workshop will explore the new Act in detail giving you an action plan for compliance. A revised UK GDPR Handbook is now available incorporating the changes made by the DUA Act.

£2.31 Million GDPR Fine for Genetic Testing Company. But will the fine be paid? 

The Information Commissioner’s Office (ICO) has fined a US genetic testing company £2.31 million under the UK GDPR following a 2023 cyber-attack. 

23andMe provides genetic testing for, amongst other things, health purposes and ancestry tracing. In 2023 a hacker carried out a credential stuffing attack on the company’s platform, exploiting reused login credentials that had been stolen in previous, unrelated data breaches. This resulted in unauthorised access to 155,592 UK residents’ personal data, potentially revealing sensitive data such as profile images, race, ethnicity, family trees and health reports. The type and amount of personal data accessed varied depending on the information included in a customer’s account. 
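For readers unfamiliar with the mechanics, here is a toy sketch (using invented credentials) of why credential stuffing works: the attacker simply replays username/password pairs leaked from unrelated breaches, and only accounts whose owners reused a leaked password fall. Unique passwords, and the mandatory multi-factor authentication the ICO found lacking, stop the replay.

```python
# Hypothetical data, for illustration only.
# Pairs leaked from breaches of *other* services:
leaked_elsewhere = [
    ("alice@example.com", "hunter2"),
    ("bob@example.com", "pa55w0rd"),
]

# Passwords held by the target site (in reality these would be hashed):
site_passwords = {
    "alice@example.com": "hunter2",               # reused -> vulnerable
    "bob@example.com": "unique-long-passphrase",  # unique -> safe
}

# The attacker replays each leaked pair; only reused passwords succeed.
compromised = [email for email, pw in leaked_elsewhere
               if site_passwords.get(email) == pw]
print(compromised)  # ['alice@example.com']
```

No vulnerability in the target site itself is needed, which is why regulators treat defences such as MFA and breached-password screening as baseline security measures.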

The investigation into 23andMe revealed serious security failings at the time of the 2023 data breach. The company failed to implement appropriate authentication and verification measures, such as mandatory multi-factor authentication, secure password protocols, or unpredictable usernames. It also failed to implement appropriate controls over access to raw genetic data and did not have effective systems in place to monitor, detect, or respond to cyber threats targeting its customers’ sensitive information. 

The ICO also found that 23andMe’s response to the unfolding incident was inadequate. The hacker began their credential stuffing attack in April 2023, before carrying out their first period of intense credential stuffing activity in May 2023.
In August 2023, a claim of data theft affecting over 10 million users was dismissed as a hoax, despite 23andMe having conducted isolated investigations into unauthorised activity on its platform in July 2023. Another wave of credential stuffing followed in September 2023, but the company did not start a full investigation until October 2023, when a 23andMe employee discovered that the stolen data had been advertised for sale on Reddit. Only then did 23andMe confirm that a breach had occurred.  

What happens now? 

The ICO has made much of this penalty and the joint investigation conducted with the Office of the Privacy Commissioner of Canada. John Edwards, the Information Commissioner, said: 

“We carried out this investigation in collaboration with our Canadian counterparts, and it highlights the power of international cooperation in holding global companies to account. Data protection doesn’t stop at borders, and neither do we when it comes to protecting the rights of UK residents.” 

The fine comes after an ICO statement in March announcing that a Notice of Intent to fine £4.59 million had been issued. That is a reduction of almost 50% but, whatever the amount of the fine, the ICO is unlikely to see a penny.  

In April 23andMe filed for bankruptcy in the US courts. On Friday it said that it had agreed to the sale of its assets to a non-profit biotech organisation led by its co-founder and former chief executive. It said the purchase of the company would come with binding commitments to uphold existing policies and consumer protections, such as letting customers delete their accounts and genetic data and opt out of research. A bankruptcy court is scheduled to hear the case for its approval on Wednesday. 

This case is also a good example of the extra-territorial reach of the UK GDPR under Article 3(2)(a): although 23andMe is not established within the UK, it processes the personal data of the affected UK Data Subjects for the purposes of offering goods or services to those individuals. 

This is the third fine issued by the ICO in 2025. In April a £60,000 fine was issued to a law firm and in March an NHS IT supplier was fined £3 million. Both also followed cyber-attacks.   

We have two workshops coming up (How to Increase Cyber Security in your Organisation and Cyber Security for DPOs) which are ideal for organisations that wish to upskill their employees in cyber security. See also our Managing Personal Data Breaches Workshop. 

The Data (Use and Access) Bill Ready for the Statute Books 

The Data (Use and Access) Bill has cleared the final hurdle in Parliament and will soon become the Data (Use and Access) Act 2025 following Royal Assent.  

The new Act will amend the UK GDPR as well as PECR and the Data Protection Act 2018. The key changes are summarised in our blog post here. Most of these are not particularly controversial and were in the Data Protection and Digital Information Bill, which failed to make it through the Parliamentary “wash up” stage when the General Election was announced last year. 

Much of the delay to the passing of the Bill was caused by amendments proposed by Baroness Kidron in the House of Lords. She wanted more protection for artists whose work is often used to train AI models, especially Generative AI. Her amendment would have required developers to be transparent with copyright owners about using their material to train AI models. 400 British musicians, writers and artists signed a letter saying that the Government’s failure to adopt the amendment would mean them “giving away” their work to tech firms. In the end, following repeated rejections of her amendment in the House of Commons during the “ping pong” stage, Baroness Kidron decided to withdraw gracefully. Expect this issue to come up again when the Government eventually brings forward AI legislation, as mentioned in the King’s Speech. 

We expect most of the substantive provisions to come into force a few months after Royal Assent. Plenty of time for us to update the UK GDPR Handbook. 

Data protection professionals need to assess the changes to the UK data protection regime. A revised UK GDPR Handbook is now available incorporating the changes made by the DUA Act.