Home Office Acknowledges Racial and Gender Bias in UK Police Facial Recognition Technology

Facial recognition is often sold as a neutral, objective tool. But recent admissions from the UK government show just how fragile that claim really is.

New evidence has confirmed that facial recognition technology used by UK police is significantly more likely to misidentify people from certain demographic groups. The problem is not marginal, and it is not theoretical. It is already embedded in live policing.

A Systematic Pattern of Error

Independent testing commissioned by the Home Office found that false-positive rates increase dramatically depending on ethnicity, gender, and system settings.

At lower operating thresholds — where the software is configured to return more matches — the disparity becomes stark. White individuals were falsely matched at a rate of around 0.04%. For Asian individuals, the rate rose to approximately 4%. For Black individuals, it reached about 5.5%. The highest error rate was recorded among Black women, who were falsely matched close to 10% of the time.

The data highlights a striking imbalance: Asian and Black individuals were misidentified roughly 100 times or more frequently than white individuals, while women faced error rates roughly double those of men.

Why This Is Not an Abstract Risk

This technology is already in widespread use. Police forces rely on facial recognition to analyse CCTV footage, conduct retrospective searches across custody databases, and, in some cases, deploy live systems in public spaces.

The scale matters. Thousands of retrospective facial recognition searches are conducted each month. Even a low error rate, when multiplied across that volume, results in a significant number of people being wrongly flagged.
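
To make the arithmetic concrete, here is a back-of-the-envelope calculation in Python using the rates reported above. The monthly search volume is an assumed round number for illustration only, not an official figure:

```python
# Back-of-the-envelope only: the rates are those reported above; the
# monthly search volume is a hypothetical round number, not an official figure.
rates = {                   # false-positive rate at the lower operating threshold
    "White": 0.0004,        # ~0.04%
    "Asian": 0.04,          # ~4%
    "Black": 0.055,         # ~5.5%
    "Black women": 0.10,    # ~10%
}

searches_per_month = 10_000  # assumed volume of retrospective searches

for group, rate in rates.items():
    ratio = rate / rates["White"]           # disparity relative to white individuals
    expected = rate * searches_per_month    # expected false matches at this volume
    print(f"{group}: {ratio:,.0f}x the white error rate, "
          f"~{expected:,.0f} false matches per {searches_per_month:,} searches")
```

On these assumptions, the same monthly volume that wrongly flags a handful of white individuals wrongly flags hundreds of Black individuals.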

A false match can lead to questioning, surveillance, or police intervention. Even if officers ultimately decide not to act, the encounter itself can be intrusive, distressing, and damaging. These effects do not disappear simply because a human later overrides the system.

Bias, Thresholds, and Operational Reality

For years, facial recognition vendors and public authorities argued that bias could be controlled through careful configuration. In controlled conditions, stricter thresholds reduce error rates. But operational pressures often incentivise looser settings that generate more matches, even at the cost of accuracy.

The government’s own findings now confirm what critics have long warned: fairness is conditional. Bias does not vanish; it shifts depending on how the system is used.

The data also shows that demographic impacts overlap. Women, older people, and ethnic minorities are all more likely to be misidentified, with compounded effects for those who sit at multiple intersections.

Expansion Amid Fragile Trust

Despite these findings, the government is consulting on proposals to expand national facial recognition capability, including systems that could draw on large biometric datasets such as passport and driving licence records.

Ministers have pointed to plans to procure newer algorithms and to subject them to independent evaluation. While improved testing and oversight are essential, they do not answer the underlying question: should surveillance infrastructure be expanded while known structural risks remain unresolved?

Civil liberties groups and oversight bodies have described the findings as deeply concerning, warning that transparency, accountability, and public confidence are being strained by the rapid adoption of opaque technologies.

This Is a Governance Issue, Not Just a Technical One

Facial recognition is not simply a question of software performance. It is a question of how power is exercised and how risk is distributed.

When automated systems systematically misidentify certain groups, the consequences fall unevenly. Decisions about who is stopped, questioned, or monitored start to reflect the limitations of technology rather than evidence or behaviour.

Once such systems become normalised, rolling them back becomes difficult. That is why scrutiny matters now, not after expansion.

If technology is allowed to shape policing, the justice system, and public space, it must be subject to the highest standards of accountability, fairness, and democratic oversight.

These and other developments in the use of artificial intelligence, surveillance, and automated decision-making will be examined in detail in our AI Governance Practitioner Certificate training programme, which provides a practical and accessible overview of how AI systems are developed, deployed, and regulated, with particular attention to risk, bias, and accountability.

Proposed Changes to the EU GDPR: Could we see more changes to the UK GDPR?

Yesterday the European Commission published its long-awaited Digital Omnibus Regulation Proposal and Digital Omnibus on AI Regulation Proposal. If approved, these proposals will mean significant changes to the EU GDPR and other EU legislation, and may even encourage the UK to further amend the UK GDPR.

The aim of the “Digital Omnibus” package is to ease administrative burdens for businesses across areas like privacy, cybersecurity and artificial intelligence. Although the EU GDPR is considered balanced and fit for purpose, “targeted changes” are proposed to address concerns, particularly from smaller companies. These include:

  • Clarification of Definitions: The definition of “personal data” is clarified: information is not personal data for a given company if that company does not possess means “reasonably likely” to be used to identify an individual.
  • Processing for AI Training: It is clarified that the processing of personal data for the development and training of AI systems can constitute a “legitimate interest” under certain conditions.
  • Simplified Reporting of Data Breaches: The reporting obligation to supervisory authorities is aligned with the threshold for notifying data subjects. A report is only required if there is a “high risk” to the rights and freedoms of natural persons. The deadline for reporting is extended to 96 hours.
  • Harmonization of Data Protection Impact Assessments (DPIA): National lists of processing operations requiring a DPIA (or not) are to be replaced by unified EU-wide lists to promote harmonisation.
  • Scientific Research: The conditions for data processing for scientific research purposes are clarified by defining “scientific research” and clarifying that this constitutes a legitimate interest.

The EU AI Act also faces a number of amendments, including simplifications for small and medium-sized enterprises and small mid-cap companies in the form of pared-back technical documentation requirements. Other measures include sandboxes for real-world testing and steps to “reinforce the AI Office’s powers and centralise oversight of AI systems built on general-purpose AI models, reducing governance fragmentation”.

Both omnibus packages now have a long road ahead as they enter trilogue negotiations with the European Parliament and the Council of the European Union. Negotiations are expected to take at least several months to finalise.

Impact on the UK

The UK has already enacted its own package of amendments to the UK GDPR in the form of the Data (Use and Access) Act 2025, which received Royal Assent on 19th June 2025. The amendments are quite modest, even before they are compared to the EU proposals above.

A bolder list of amendments was contained in the Data Protection and Digital Information Bill, published in 2022 by the Conservative Government. This included proposals to amend the definition of personal data and to replace Data Protection Officers with Senior Responsible Individuals. The bill was later replaced by a diluted bill of the same name (the No. 2 Bill), only for that to be dropped in the Parliamentary “wash up” stage before the last General Election.

Could the EU reforms (if enacted) lead to the UK making more fundamental changes to the UK GDPR? We doubt it. The Labour Government has more pressing priorities and, with the passing of the DUA Act, they can say they have “done GDPR reform”. If we get a change in Government, then Reform and the Conservatives might target the UK GDPR as a way of reining in “pesky human rights laws”.

Data protection professionals need to assess the changes to the UK data protection regime made by the DUA Act. Our half day workshop will explore the Act in detail giving you an action plan for compliance. A revised UK GDPR Handbook is now available incorporating the changes made by the DUA Act. 

New Guidance on AI Risk Management

The development, procurement and deployment of AI systems involving the processing of personal data raises significant risks to data subjects’ fundamental rights and freedoms, including but not limited to privacy and data protection. The principle of accountability enshrined in the UK GDPR and the EU GDPR requires Data Controllers to identify and mitigate these risks, and to demonstrate how they did so. This is especially important for AI systems that are the product of intricate supply chains, often involving multiple actors processing personal data in different capacities.

The European Data Protection Supervisor (EDPS) has just released an important new guidance document to help organisations conduct data protection risk assessments when developing, procuring, or deploying AI systems.  It focuses on the risk of non-compliance with certain data protection principles for which the mitigation strategies that controllers must implement can be technical in nature – namely fairness, accuracy, data minimisation, security and data subjects’ rights. 

Key sections of the document address:

  • the risk management methodology according to ISO 31000:2018
  • the typical development lifecycle of AI systems as well as the different steps involved in their procurement 
  • the notions of interpretability and explainability 
  • an analytical framework for identifying and treating risks that may arise in AI systems, structured according to the data protection principles potentially affected (a simple illustration follows this list).
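
The guidance itself is methodological rather than prescriptive, but a minimal sketch may help show what an ISO 31000-style risk register, structured by data protection principle, could look like in practice. All entries, scores and the treatment threshold below are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Risk:
    principle: str    # data protection principle potentially affected
    description: str  # what could go wrong
    likelihood: int   # 1 (rare) .. 5 (almost certain)
    severity: int     # 1 (negligible) .. 5 (severe)

    @property
    def level(self) -> int:
        # Simple likelihood x severity scoring, a common ISO 31000-style approach
        return self.likelihood * self.severity

# Hypothetical entries for an AI deployment; a real assessment would derive
# these from the system's lifecycle stage and its supply chain context.
register = [
    Risk("fairness", "model performs worse for some demographic groups", 4, 5),
    Risk("accuracy", "training data contains outdated personal data", 3, 4),
    Risk("data minimisation", "features collected beyond the stated purpose", 3, 3),
]

TREATMENT_THRESHOLD = 12  # assumed risk-appetite cut-off, for illustration only

for risk in sorted(register, key=lambda r: r.level, reverse=True):
    action = ("treat (mitigate, avoid or transfer)"
              if risk.level >= TREATMENT_THRESHOLD else "accept and monitor")
    print(f"[{risk.level:2}] {risk.principle}: {risk.description} -> {action}")
```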

The EDPS has issued this guidance in his role as the data protection supervisory authority for EU institutions. However, it is a very useful document for any organisation deploying AI that needs guidance on how to systematically assess the risks from a data protection perspective.

Our AI Governance Practitioner Certificate course is designed to equip Information Governance professionals with the essential knowledge and skills to manage the risks of AI deployment within their organisations. This year 50 delegates, from a variety of backgrounds, have successfully completed the course and given great feedback.

The first course of 2026 starts on 8th January. Places are limited so book early to avoid disappointment. If you require an introduction to AI and information governance, please consider booking on our one day workshop.

Scope of the GDPR: ICO Wins Clearview Appeal  

The Information Commissioner has won his appeal (to the Upper Tribunal) against the First-tier Tribunal (FTT) decision involving Clearview AI Inc.  

Clearview is a US-based company which describes itself as the “World’s Largest Facial Network”. Its online database contains 20 billion images of people’s faces and data scraped from the internet and social media platforms all over the world. It allows customers to upload an image of a person to its app; the person is then identified by the app checking against all the images in the Clearview database. The appeal raised the issue of the extent to which processing of the personal data of UK data subjects by a private company based outside the UK is excluded from the scope of the GDPR, including where such processing is carried out in the context of its foreign clients’ national security or criminal law enforcement activities.

Background 

In May 2022 the ICO issued a Monetary Penalty Notice of £7,552,800 to Clearview for breaches of the UK GDPR, including failing to use the information of people in the UK in a way that is fair and transparent. Although Clearview is a US company, the ICO ruled that the UK GDPR applied because of Article 3(2)(b) (territorial scope). It concluded that Clearview’s processing activities “are related to…the monitoring of [UK residents’] behaviour as far as their behaviour takes place within the United Kingdom.” The ICO also issued an Enforcement Notice ordering Clearview to stop obtaining and using the personal data of UK residents that is publicly available on the internet, and to delete the data of UK residents from its systems.

In October 2023, the FTT overturned the ICO’s enforcement and penalty notices against Clearview. It concluded that although Clearview did carry out data processing related to monitoring the behaviour of people in the UK (Article 3(2)(b) of the UK GDPR), the ICO did not have jurisdiction to take enforcement action or issue a fine. Both the GDPR and UK GDPR provide that acts of foreign governments fall outside their scope; it is not for one government to seek to bind or control the activities of another sovereign state. However, the Tribunal noted that the ICO could have taken action under the Law Enforcement Directive (Part 3 of the DPA 2018 in the UK), which specifically regulates the processing of personal data in relation to law enforcement.

The Upper Tribunal Judgment

The Upper Tribunal allowed the appeal, set aside the decision of the FTT and remitted the matter to the FTT to decide the substantive appeal on the basis that the Information Commissioner had jurisdiction to issue the notices. It also decided that the FTT was right to find that Clearview’s processing fell within the territorial scope of the GDPRs, albeit that it differed in its reasoning. 

In its judgment, the Upper Tribunal ruled  that: 

(1) The words “in the course of an activity which falls outside the scope of Union law” in Article 2(2)(a) of the GDPR (which provides for an exclusion from the material scope of the GDPR) refer only to those activities in respect of which Member States have reserved control to themselves and not conferred powers on the Union to act, and not to all matters outside the competence of the Union (as the ICO argued) or to the activities of third parties whose processing “intersects” with their clients’ processing in the course of “quintessentially state functions” which would offend against comity principles (as Clearview argued);

(2) The words “behavioural monitoring” in Article 3(2)(b) are to be interpreted broadly, as a response to the challenges posed by ‘Big Data’ in the digital age, and they can encompass passive collection, sorting, classification and storing of data by automated means with a view to potential subsequent use, including use by another controller, of personal data processing techniques which consist of profiling a natural person. “Behavioural monitoring” does not require an element of active “watchfulness” in the sense of human involvement;  

(3) The words “related to” in Article 3(2)(b) of the GDPR have an expansive meaning, and apply not only to controllers who themselves conduct behavioural monitoring, but also to controllers whose data processing is related to behavioural monitoring carried out by another controller.

Data protection practitioners should read the judgment of the Upper Tribunal as it clarifies the material and territorial scope provisions of the UK GDPR. This and other GDPR developments will be discussed in our forthcoming GDPR Update workshop.

Our 23rd Birthday! Celebrate with Us and Save on Training  

This month marks 23 years of Act Now Training. We delivered our first course in 2003 (on the Data Protection Act 1998!) at the National Railway Museum in York. Fast forward to today, and we deliver over 300 training days a year on AI, GDPR, records management, surveillance law and cyber security; supporting delegates across multiple jurisdictions including the Middle East.  

Our success comes from more than just longevity; we are trusted by clients across every sector, giving us a unique insight into the real-world challenges of information governance. That’s why our education-first approach focuses on practical skills, measurable impact, and lasting value for your organisation. 

Anniversary Offer: To celebrate, we are giving you a £50 discount on any one-day workshop if you book by 30th September 2025. Choose from our most popular sessions like GDPR and FOI A to Z, or explore new topics like AI and Information Governance and Risk Management in IG.

Simply quote “23rd Anniversary” on your booking form to claim your discount.

Data (Use and Access) Act 2025: ICO Consultation 

Last month the ICO launched public consultations on its guidance in response to the Data (Use and Access) Act 2025 (DUA Act) coming into force.

The DUA Act received Royal Assent on 19th June 2025. It amends, rather than replaces, the UK GDPR as well as the Privacy and Electronic Communications (EC Directive) Regulations 2003 (PECR) and the Data Protection Act 2018. (You can read a summary of the Act here.)  

The Act is not fully in force yet. The only substantive amendment (Section 78) to the UK GDPR that came into force on 19th June inserted a new Article 15(1A), relating to subject access requests: 

“…the data subject is only entitled to such confirmation, personal data and other information as the controller is able to provide based on a reasonable and proportionate search for the personal data and other information described in that paragraph.” 

Other provisions of the Act will commence in stages, 2 to 12 months after Royal Assent. The first commencement order, The Data (Use and Access) Act 2025 (Commencement No. 1) Regulations 2025, came into force on 20th August.  

Recognised Legitimate Interests 

The DUA Act amends Article 6 of the UK GDPR to introduce ‘Recognised legitimate interest’ as a new lawful basis for processing personal data. This covers activities such as crime prevention, public security, safeguarding, emergencies and sharing personal data to help other organisations perform their public tasks. The proposed ICO guidance aims to make it easier for organisations to successfully use recognised legitimate interest by explaining how it works, along with giving practical examples. Further details on the 10-week consultation, which closes on 30 October 2025, can be found here.  

Data Protection Complaints 

By June 2026, Data Controllers must have a process in place to handle data protection complaints. A complaint can come from anyone who is unhappy with how an organisation has handled their personal data. The proposed ICO guidance sets out the new requirements and informs organisations of what they must, should and could do to comply. Further details on the eight-week consultation, which closes on 19 October 2025, can be found here.  

Data protection professionals need to assess the changes to the UK data protection regime set out in the DUA Act. Our half day workshop will explore the new Act in detail giving you an action plan for compliance. A revised UK GDPR Handbook is now available incorporating the changes made by the DUA Act.

AI Governance Practitioner Certificate: Final Course for 2025 

Act Now is pleased to report that the next AI Governance Practitioner Certificate course, starting in September, is fully booked. There are still a few places available on the next course, starting in October, which is the final one in 2025. 

The AI Governance Practitioner Certificate is designed to equip Information Governance professionals with the essential knowledge and skills to navigate AI deployment within their organisations. As we detailed in our previous blog “What is the role of IG Professionals in AI Governance?”, IG professionals should be aware of how this technology works so that they can help to ensure that there is responsible deployment from an IG perspective, just as would be the case with any new technology.   

So far thirty delegates, from a variety of backgrounds, have successfully completed the course, giving great feedback. Delegates have complimented us on the scope of the syllabus and the delivery style. Cora Suckley, Information Governance Service Manager, Digital Health and Care Wales said: 

“The AI Governance Practitioner Certificate exceeded my expectations. The content was comprehensive and well-structured, successfully bridging the gap between technical AI concepts and essential governance frameworks. The course delved into responsible AI principles, risk management, compliance, policy and ethical considerations, equipping me with practical tools to navigate the evolving regulatory landscape. 

The instructor was excellent and made the sessions interactive, highly engaging and applicable, providing real-world examples. This course provides a solid foundation for implementing AI governance in a meaningful and effective way.” 

The final course for 2025 starts in October. Places are limited so book early to avoid disappointment.  

AI Governance Practitioner Certificate: First Cohort Successfully Completes Course 

Act Now is pleased to report that the first cohort of its new AI Governance Practitioner Certificate has successfully completed the course. 

This course is designed to equip Information Governance professionals with the essential knowledge and skills to navigate AI deployment within their organisations. As we detailed in our previous blog “What is the role of IG Professionals in AI Governance?”, IG professionals should be aware of how this technology works so that they can help to ensure that there is responsible deployment from an IG perspective, just as would be the case with any new technology.   

The first course ran over a four week period in May and June. It consisted of ten delegates from the health sector in Wales. They all successfully completed the course assessment in July. 

The course was extremely well received by the delegates who complimented us on the scope of the syllabus and the delivery style: 

“I took a huge amount from the course which will help shape the development of processes for us internally in the coming months.” Dave Parsons, WASPI Code Manager (Wales Accord on the Sharing of Personal Information)

“This was a superb course with a lot of information delivered at a carefully managed rate that encouraged discussion and reflection. Literacy in AI and its application is vital – without it we cannot comprehend the ever-changing level of IG threat and risk.” MA, Digital Health and Care Wales

“The training was very good. The instructor was also very knowledgeable about the subject.” HP, Digital Health and Care Wales

Cora Suckley, Information Governance Service Manager, Digital Health and Care Wales said: 

“The AI Governance Practitioner Certificate exceeded my expectations. The content was comprehensive and well-structured, successfully bridging the gap between technical AI concepts and essential governance frameworks. The course delved into responsible AI principles, risk management, compliance, policy and ethical considerations, equipping me with practical tools to navigate the evolving regulatory landscape. 

The instructor was excellent and made the sessions interactive, highly engaging and applicable, providing real-world examples. This course provides a solid foundation for implementing AI governance in a meaningful and effective way.” 

Two more cohorts are currently completing the course. The next course starts in September and has a few places left.  

When AI Misses the Line: What Wimbledon 2025 Teaches Us About Deploying AI in the Workplace 

This year’s Wimbledon Tennis Championships are not just a showcase for elite athleticism but also a high-profile test of Artificial Intelligence. For the first time in the tournament’s 148-year history, all line calls across its 18 courts are made entirely by Hawk-Eye Live, an AI-assisted system that has replaced human line judges. This follows, amongst others, the semi-automated offside technology deployed in last year’s football Champions League after its success in the Qatar World Cup.

The promise? Faster decisions, greater consistency, and reduced human error. 
The reality? Multiple malfunctions, public apologies, and growing mistrust among players and fans (not to mention losing the ‘best dressed officials’ in sport). 

What Went Wrong? 

  • System Failure Mid-Match: During a high-stakes women’s singles match between Anastasia Pavlyuchenkova and Sonay Kartal, the line-calling system was accidentally switched off for several points. No alerts were raised, and play continued without accurate line calls. Wimbledon officials later admitted human error was to blame, not the AI.
  • Misclassification Errors: In the men’s quarter-final between Taylor Fritz and Karen Khachanov, Hawk-Eye incorrectly called a rally forehand a “fault,” apparently confusing it with a serve. Play was halted and the point was replayed, leaving fans and players confused and frustrated. 
  • User Experience Failures: Multiple players, including Emma Raducanu and Jack Draper, complained that some calls were “clearly wrong” and that the system’s announcements were too quiet to hear amid crowd noise. Some players called for the return of human line judges, citing a lack of trust in the technology.  

Lessons for AI and IG Professionals 

Wimbledon’s AI hiccup offers more than a headline; it surfaces deep issues around trust, oversight, and operational design that are relevant to any AI deployment in the workplace. Here are the key lessons: 

1. Automation ≠ Autonomy 

The Wimbledon system is not truly autonomous; it relies on human operators to activate it before each match. When staff forgot to do so, the AI didn’t intervene or alert anyone. This exposes a major pitfall: automated systems are only as reliable as their orchestration layers. 

Governance Principle: Ensure clear workflows and audit trails around when and how AI systems are initiated, paused, or overridden. Build in fail-safe triggers and status checks to prevent silent failures. 
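
As a minimal sketch of what such a fail-safe might look like (the class, names and threshold here are hypothetical, not Wimbledon's actual architecture), a wrapper can refuse to proceed unless the automated system has recently proven it is live:

```python
import time

class SilentFailureError(RuntimeError):
    """Raised when the automated system cannot prove it is live."""

class MonitoredSystem:
    """Illustrative wrapper: every decision first verifies a recent heartbeat."""

    def __init__(self, max_heartbeat_age_s: float = 2.0):
        self.max_heartbeat_age_s = max_heartbeat_age_s
        self._last_heartbeat: float | None = None

    def heartbeat(self) -> None:
        # The underlying system calls this on every frame or event it processes.
        self._last_heartbeat = time.monotonic()

    def assert_live(self) -> None:
        # Fail loudly (and alert a human) instead of silently making no calls.
        if self._last_heartbeat is None:
            raise SilentFailureError("system was never activated")
        age = time.monotonic() - self._last_heartbeat
        if age > self.max_heartbeat_age_s:
            raise SilentFailureError(f"no heartbeat for {age:.1f}s")

system = MonitoredSystem()
system.heartbeat()    # normally emitted continuously by the live pipeline
system.assert_live()  # checked before each point; raises if the feed is dead
```

A check of this kind, run before every point, would surface an inactive system immediately rather than letting play continue.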

2. Build in Redundancy and Exception Handling 

AI systems excel at pattern recognition in controlled environments but can fail spectacularly at edge cases. Wimbledon’s AI was likely trained on thousands of hours of ball trajectories – but it still confused a forehand rally shot with a serve under unusual conditions. 

Governance Principle: Plan for edge case management. When the AI encounters uncertainty, it should either defer to human review or trigger a fallback protocol.  
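
A minimal sketch of this kind of fallback, assuming the system exposes a confidence score for each classification (the threshold and function names are illustrative):

```python
# Illustrative only: the threshold and the escalation path are invented.
CONFIDENCE_THRESHOLD = 0.90

def escalate_to_human(classification: str, confidence: float) -> str:
    # Fallback protocol: flag the call for human review rather than guessing.
    print(f"Low confidence ({confidence:.2f}) on '{classification}': "
          f"deferring to human review")
    return "pending human review"

def decide(classification: str, confidence: float) -> str:
    """Act on high-confidence outputs; defer everything else to a human."""
    if confidence >= CONFIDENCE_THRESHOLD:
        return classification
    return escalate_to_human(classification, confidence)

print(decide("fault", 0.97))  # clear case: acted on automatically
print(decide("fault", 0.55))  # edge case: escalated instead of guessed
```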

3. Usability is a Core Component of Accuracy 

Even when the AI was functioning correctly, players couldn’t always hear the line calls due to low audio volume. What good is a precise call if the user can’t perceive it? 

Governance Principle: Don’t separate accuracy from usability. A technically correct output must be understandable, accessible, and actionable to its end users. Invest in UI/UX design early in the AI lifecycle. 

4. Transparency Builds Trust 

Wimbledon’s initial response (vague statements and slow clarifications) only fuelled player frustration. Trust was eroded not just because of the error, but because of how it was handled. 

Governance Principle: When deploying AI, especially in high-stakes environments, build a culture of transparent accountability. Log decisions, explain anomalies, and communicate clearly when things go wrong. 
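
A minimal sketch of structured decision logging that would support this kind of accountability (the field names and log destination are illustrative):

```python
import datetime
import json

def log_decision(system: str, decision: str, confidence: float, context: dict) -> None:
    """Append a timestamped, auditable record of every automated decision."""
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "system": system,
        "decision": decision,
        "confidence": confidence,
        "context": context,  # enough detail to explain the call after the fact
    }
    with open("decision_audit.log", "a") as f:
        f.write(json.dumps(record) + "\n")

log_decision("line-calling", "out", 0.93, {"court": 1, "point": "30-15"})
```

Logs like this make it possible to explain an anomaly hours later, rather than relying on the vague statements that eroded trust at Wimbledon.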

5. Hybrid Systems Are Often More Effective Than Pure AI 

While Wimbledon has fully replaced line judges with AI, there’s a strong case for a hybrid model. A combination of automated systems with empowered human oversight could preserve both accuracy and human judgment. 

Governance Principle: Consider augmented intelligence models, where AI supports rather than replaces human decision-makers. This ensures operational continuity and enables learning from both machine and human feedback. 

6. Respect Context and Culture 

Wimbledon isn’t just any tournament; it’s steeped in tradition, where human line judges are part of the spectacle. Removing them altered the tournament’s character, sparking emotional backlash from players and spectators alike. 

Governance Principle: Understand the organisational and cultural context where AI is deployed. Technology doesn’t operate in a vacuum. Change management, stakeholder engagement, and empathy are as important as algorithms. 

The problems with Wimbledon’s AI line-calling system are symptoms of incomplete design thinking. Whether you’re deploying AI in HR analytics, document classification, or customer service, the Wimbledon experience shows that trust isn’t just built on data; it’s built on reliability, clarity, and human-centred design. 

In a world increasingly mediated by automation, we must remember: AI doesn’t replace the need for governance. It raises the stakes for getting it right. And we just wish it had been around for the “Hand of God” goal.

Are you looking to enhance your career with an AI governance qualification? Our AI Governance Practitioner Certificate is designed to equip compliance professionals with the essential knowledge and skills to navigate this transformative technology while upholding the highest standards of data protection and information governance. The first course was fully booked, and we have added more dates.

The New Data (Use and Access) Act 2025 

The Data (Use and Access) Act 2025 received Royal Assent on 19th June 2025. It is important to note that the new Act will not replace current UK data protection legislation. Rather it will amend the UK GDPR as well as the Privacy and Electronic Communications (EC Directive) Regulations 2003 (PECR) and the Data Protection Act 2018. Most of these amendments will commence in stages, 2 to 12 months after Royal Assent. Exact dates for each measure will be set out in commencement regulations. 

The Bill was introduced into Parliament in October last year. It was trailed in the King’s Speech in July (under its old name of the “Digital Information and Smart Data Bill”) with His Majesty announcing that there would be “targeted reforms to some data laws that will maintain high standards of protection but where there is currently a lack of clarity impeding the safe development and deployment of some new technologies.” However, this statement of intent does not match the reality; many of the core provisions are a “cut and paste” of the Data Protection and Digital Information (No. 2) Bill (“DP Bill”), which was dropped by the Conservative Government in the Parliamentary “wash up” stage before last year’s snap General Election.

Key Provisions 

Let’s examine the key provisions of the new Act.  

Smart Data: The Act retains the provisions from the DP Bill that will enable the creation of a legal framework for Smart Data. This involves companies securely sharing customer data, upon the customer’s (business or consumer) request, with authorised third-party providers (ATPs) who can enhance the customer data with broader, contextual ‘business’ data. These ATPs will provide the customer with innovative services to improve decision making and engagement in a market. Open Banking is the only current example of a regime that is comparable to a ‘Smart Data scheme’. The Act will give such schemes a statutory footing, from which they can grow and expand.  

Digital Identity Products: Just like its predecessor, the Act contains provisions aimed at establishing digital verification services including digital identity products to help people quickly and securely identify themselves when they use online services e.g. to help with moving house, pre-employment checks and buying age restricted goods and services. It is important to note that this is not the same as compulsory digital ID cards as some media outlets have reported. 

Research Provisions: The Act keeps the DP Bill’s provisions that clarify that companies can use personal data for research and development projects, as long as they follow data protection safeguards.  

Legitimate Interests: The Act retains the concept of ‘recognised legitimate interests’ under Article 6 of the UK GDPR – specific purposes for personal data processing, such as national security, emergency response and safeguarding, for which Data Controllers will be exempt from conducting a full “Legitimate Interests Assessment” when processing personal data.

Subject Access Requests: The Act makes it clear that Data Controllers only have to make reasonable and proportionate searches when someone asks for access to their personal data.

Automated Decision Making: Like the DP Bill, the Act seeks to limit the right, under Article 22 of the UK GDPR, for a data subject not to be subject to automated decision making or profiling to only those cases where Special Category Data is used. Under new Article 22A, a decision would qualify as being “based solely on automated processing” if there was “no meaningful human involvement in the taking of the decision”. This could give the green light to companies to use AI techniques on personal data scraped from the internet for the purposes of pre-employment background checks.

International Transfers: The Act maintains most of the DP Bill’s international transfer provisions. There will be a new approach to the test for adequacy applied by the UK Government to countries (and international organisations) and when Data Controllers are carrying out a Transfer Impact Assessment or TIA. The threshold for this new “data protection test” will be whether a jurisdiction offers protection that is “not materially lower” than under the UK GDPR.

Health and Social Care Information: The Act maintains, without any changes, the provisions that establish consistent information standards for health and adult social care IT systems in England, enabling the creation of unified medical records accessible across all related services. 

PECR Changes: One of the most significant changes, copied from the DP Bill, is the increase in fines for breaches of PECR from £500,000 to UK GDPR levels, meaning organisations could face fines of up to £17.5m or 4% of global annual turnover (whichever is higher) for the most serious infringements. Other changes include allowing cookies to be used without consent for the purposes of web analytics and to install automatic software updates, and extending the “soft opt-in” for electronic marketing to charities.

A full list of the changes to the UK data protection regime can be read on the ICO website.  

What is not in the new Act? 

Most of the controversial parts of the DP Bill have not made it into the Act. These include:

  • Replacing the terms “manifestly unfounded” or “excessive” requests, in Article 12 of the UK GDPR, with “vexatious” or “excessive” requests. Explanation and examples of such requests would also have been included.  
  • Exempting all controllers and processors from the duty to maintain a ROPA, under Article 30, unless they are carrying out high risk processing activities.  
  • The “strategic priorities” mechanism, which would have allowed the Secretary of State to set binding priorities for the Information Commissioner. 
  • The requirements for the Information Commissioner to submit codes of practice to the Secretary of State for review and recommendations.  

The UK’s adequacy status under the EU GDPR now expires on 27th December following the recent announcement of a six-month extension. Whilst the EU will commence a formal review of adequacy now that the Act has received Royal Assent, nothing in the Act will jeopardise the free flow of personal data between the EU and the UK. The situation would perhaps have been different had the DP Bill made it onto the statute books.

AI and Copyright 

Much of the delay to the Bill’s passage was caused by an issue which was not originally intended to be addressed in the Bill: the use of copyright works to train AI. Like the monster plant in Little Shop of Horrors, AI has an insatiable appetite – for data, though, rather than food. AI applications need a constant supply of data to train (and improve) their output algorithms. This obviously concerns copyright holders such as musicians and writers, whose work may be used to train AI models to produce similar output without the former receiving any financial compensation. A number of copyright infringement lawsuits are set to hit the courts soon. Amongst them, Getty Images is suing Stability AI, accusing it of using Getty’s images to train its Stable Diffusion system, which can generate images from text inputs. Similar lawsuits have been launched in the US by novelists and news outlets.

During the passage of the Bill through Parliament, there was strong disagreement between the Lords and the Commons over an amendment introduced by the crossbench peer and former film director Beeban Kidron. The amendment would have required AI developers to be transparent with copyright owners about using their material to train AI models. Some 400 British musicians, writers and artists, including Sir Paul McCartney, signed a letter urging the Government to adopt the amendment. They argued that failing to do so would mean them “giving away” their work to tech firms.

In the end, Baroness Kidron dropped her amendment following its repeated rejection in the Commons. We expect this issue to raise its head again soon. The Government’s consultation on AI and copyright ended in February. Amongst other options, it proposes to give copyright holders the right to opt out of their works being used for training AI. However, the music industry believes that such a measure would offer insufficient protection for copyright holders. In an interview with the BBC, Sir Elton John described the government as “absolute losers” and said he feels “incredibly betrayed” over the Government’s plans.

Once the Government publishes its response to the copyright consultation, it will have to consider how to take the matter forward. Whether this comes in the form of a new copyright bill or an AI regulation bill, expect more parliamentary wrangling as well as celebrity interviews.

Data protection professionals need to assess the changes to the UK data protection regime. Our half day workshop will explore the new Act in detail giving you an action plan for compliance. A revised UK GDPR Handbook is now available incorporating the changes made by the DUA Act.