Home Office Acknowledges Racial and Gender Bias in UK Police Facial Recognition Technology

Facial recognition is often sold as a neutral, objective tool. But recent admissions from the UK government show just how fragile that claim really is.

New evidence has confirmed that facial recognition technology used by UK police is significantly more likely to misidentify people from certain demographic groups. The problem is not marginal, and it is not theoretical. It is already embedded in live policing.

A Systematic Pattern of Error

Independent testing commissioned by the Home Office found that false-positive rates increase dramatically depending on ethnicity, gender, and system settings.

At lower operating thresholds — where the software is configured to return more matches — the disparity becomes stark. White individuals were falsely matched at a rate of around 0.04%. For Asian individuals, the rate rose to approximately 4%. For Black individuals, it reached about 5.5%. The highest error rate was recorded among Black women, who were falsely matched close to 10% of the time.

The data highlights a striking imbalance: at these settings, Asian and Black individuals were misidentified around 100 times more frequently than white individuals, while women faced error rates roughly double those of men.
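The arithmetic behind that comparison can be sketched directly from the reported figures. The rates below simply restate the approximate percentages quoted above; they are illustrative, not the underlying test data:

```python
# Approximate false-match rates quoted from the Home Office-commissioned
# testing at the lower operating threshold (illustrative restatement).
false_match_rate = {
    "White": 0.0004,       # ~0.04%
    "Asian": 0.04,         # ~4%
    "Black": 0.055,        # ~5.5%
    "Black women": 0.098,  # close to 10%
}

baseline = false_match_rate["White"]
for group, rate in false_match_rate.items():
    # Express each group's rate as a multiple of the white false-match rate.
    print(f"{group}: {rate:.2%} ({rate / baseline:.0f}x the white rate)")
```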

Why This Is Not an Abstract Risk

This technology is already in widespread use. Police forces rely on facial recognition to analyse CCTV footage, conduct retrospective searches across custody databases, and, in some cases, deploy live systems in public spaces.

The scale matters. Thousands of retrospective facial recognition searches are conducted each month. Even a low error rate, when multiplied across that volume, results in a significant number of people being wrongly flagged.
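As a back-of-the-envelope sketch of that multiplication: the monthly search volume below is a hypothetical round number, not an official figure, and the rates echo those reported above:

```python
# Hypothetical volume of retrospective searches per month (illustrative).
searches_per_month = 10_000

# False-positive rates echoing the figures reported above.
for rate in (0.0004, 0.04, 0.055):
    expected_false_flags = searches_per_month * rate
    print(f"at a {rate:.2%} false-positive rate: "
          f"~{expected_false_flags:.0f} people wrongly flagged per month")
```

Even the lowest of these rates produces a steady stream of wrongly flagged people once the volume is real.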

A false match can lead to questioning, surveillance, or police intervention. Even if officers ultimately decide not to act, the encounter itself can be intrusive, distressing, and damaging. These effects do not disappear simply because a human later overrides the system.

Bias, Thresholds, and Operational Reality

For years, facial recognition vendors and public authorities argued that bias could be controlled through careful configuration. In controlled conditions, stricter thresholds reduce error rates. But operational pressures often incentivise looser settings that generate more matches, even at the cost of accuracy.
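The threshold trade-off can be illustrated with a toy simulation. The score distributions below are invented for illustration and do not model any real deployed system; they simply show why relaxing the threshold raises both the number of matches returned and the false-alarm rate:

```python
import random

random.seed(0)

# Simulated similarity scores: genuine (same-person) pairs score high on
# average, impostor (different-person) pairs score lower but overlap the tail.
genuine  = [random.gauss(0.80, 0.08) for _ in range(1_000)]
impostor = [random.gauss(0.55, 0.10) for _ in range(100_000)]

def rates(threshold):
    # Fraction of true matches found, and fraction of impostors wrongly matched.
    hits = sum(s >= threshold for s in genuine) / len(genuine)
    false_alarms = sum(s >= threshold for s in impostor) / len(impostor)
    return hits, false_alarms

for threshold in (0.75, 0.65):  # stricter vs looser operating setting
    hits, fas = rates(threshold)
    print(f"threshold {threshold}: {hits:.1%} of true matches found, "
          f"{fas:.2%} false-alarm rate")
```

Dropping the threshold finds more genuine matches, which is exactly the operational incentive, but the false-alarm rate rises with it.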

The government’s own findings now confirm what critics have long warned: fairness is conditional. Bias does not vanish; it shifts depending on how the system is used.

The data also shows that demographic impacts overlap. Women, older people, and ethnic minorities are all more likely to be misidentified, with compounded effects for those who sit at multiple intersections.

Expansion Amid Fragile Trust

Despite these findings, the government is consulting on proposals to expand national facial recognition capability, including systems that could draw on large biometric datasets such as passport and driving licence records.

Ministers have pointed to plans to procure newer algorithms and to subject them to independent evaluation. While improved testing and oversight are essential, they do not answer the underlying question: should surveillance infrastructure be expanded while known structural risks remain unresolved?

Civil liberties groups and oversight bodies have described the findings as deeply concerning, warning that transparency, accountability, and public confidence are being strained by the rapid adoption of opaque technologies.

This Is a Governance Issue, Not Just a Technical One

Facial recognition is not simply a question of software performance. It is a question of how power is exercised and how risk is distributed.

When automated systems systematically misidentify certain groups, the consequences fall unevenly. Decisions about who is stopped, questioned, or monitored start to reflect the limitations of technology rather than evidence or behaviour.

Once such systems become normalised, rolling them back becomes difficult. That is why scrutiny matters now, not after expansion.

If technology is allowed to shape policing, the justice system, and public space, it must be subject to the highest standards of accountability, fairness, and democratic oversight.

These and other developments in the use of artificial intelligence, surveillance, and automated decision-making will be examined in detail in our AI Governance Practitioner Certificate training programme, which provides a practical and accessible overview of how AI systems are developed, deployed, and regulated, with particular attention to risk, bias, and accountability.

Facial Recognition in Schools: ICO Reprimand

For a number of years, schools have used biometrics, particularly fingerprint scanning, to streamline various processes such as class registration, library book borrowing and cashless catering. Big Brother Watch (BBW) raised privacy concerns about this way back in 2014. Recently, some schools have started to implement facial recognition technology (FRT).

FRT is even more problematic. In May, BBW launched a fundraiser to support two members of the public to bring legal challenges after FRT wrongly flagged them as criminals. And in January 2023, the ICO issued a letter to North Ayrshire Council (NAC) following their use of FRT in school canteens. The Financial Times reported that, “nine schools in North Ayrshire will start taking payments for school lunches by scanning the faces of pupils, claiming that the new system speeds up queues and is more Covid-secure than the card payments and fingerprint scanners they used previously.”

Last week the ICO issued a reprimand to Chelmer Valley High School, in Chelmsford, after it started using FRT to take cashless canteen payments from students. The ICO said that the school failed to complete a Data Protection Impact Assessment (DPIA), as required by Article 35(1) of the UK GDPR, prior to introducing the system.

As readers will know, when processing any form of biometric data, a Data Controller requires a lawful basis under Article 6 of the UK GDPR as well as Article 9, due to the processing of Special Category Data. In most cases, the only lawful basis for FRT usage is express consent (see the GDPR Enforcement Notices issued to public service provider Serco Leisure, Serco Jersey and seven associated community leisure trusts, requiring them to stop using FRT and fingerprint scanning to monitor employee attendance).

In March 2023, Chelmer Valley High School sent a letter to parents with a slip for them to return if they did not want their child to participate in the FRT. Positive opt-in consent (express consent) was not sought, meaning that until November 2023 the school was wrongly relying on assumed (opt-out) consent. The ICO noted that most students were old enough to provide their own consent and that the parental opt-out approach therefore deprived students of the ability to exercise their rights and freedoms.

The ICO also noted that the school had failed to consult its Data Protection Officer, or the parents and students, before implementing the technology. The reprimand included a set of recommendations:

  1. Prior to new processing operations, or upon changes to the nature, scope, context or purposes of processing for activities that pose a high risk to the rights and freedoms of data subjects, complete a DPIA and integrate outcomes back into the project plans (see our DPIA workshop).
  2. Amend the DPIA to give thorough consideration to the necessity and proportionality of cashless catering, and to mitigating specific, additional risks such as bias and discrimination.
  3. Review and follow all ICO guidance for schools considering whether to use facial recognition for cashless catering.
  4. Amend privacy information given to students so that it provides for their information rights under the UK GDPR in an appropriate way (see our Children’s Data workshop).
  5. Engage more closely and in a timely fashion with their DPO when considering new projects or operations processing personal data, and document their advice and any changes to the processing that are made as a result.

All the recent GDPR developments will be discussed in detail on our forthcoming GDPR Update workshop. We have a few places left on our Advanced Certificate in GDPR Practice course starting in September.

ICO Fines “World’s Largest Facial Network”

The Information Commissioner’s Office has issued a Monetary Penalty Notice of £7,552,800 to Clearview AI Inc for breaches of the UK GDPR. 

Clearview is a US based company which describes itself as the “World’s Largest Facial Network”. It allows customers, including the police, to upload an image of a person to its app, which is then checked against all the images in the Clearview database. The app then provides a list of matching images with links to the websites they came from.

Clearview’s online database contains 20 billion images of people’s faces, with data scraped from publicly available information on the internet and social media platforms all over the world. This service was used on a free trial basis by a number of UK law enforcement agencies. The trial was discontinued and the service is no longer being offered in the UK. However, Clearview has customers in other countries, so the ICO ruled that it is still processing the personal data of UK residents.

The ICO was of the view that, given the high number of UK internet and social media users, Clearview’s database is likely to include a substantial amount of data from UK residents, which has been gathered without their knowledge. It found the company had breached the UK GDPR by:

  • failing to use the information of people in the UK in a way that is fair and transparent, given that individuals are not made aware or would not reasonably expect their personal data to be used in this way;
  • failing to have a lawful reason for collecting people’s information;
  • failing to have a process in place to stop the data being retained indefinitely;
  • failing to meet the higher data protection standards required for biometric data (Special Category Data);
  • asking for additional personal information, including photos, when asked by members of the public if they are on their database. This may have acted as a disincentive to individuals who wish to object to their data being collected and used.

The ICO has also issued an enforcement notice ordering Clearview to stop obtaining and using the personal data of UK residents that is publicly available on the internet, and to delete the data of UK residents from its systems.

The precise legal basis for the ICO’s fine will only be known when (hopefully not if) it decides to publish the Monetary Penalty Notice. The information we have so far suggests that it considered breaches of Article 5 (1st and 5th Principles – lawfulness, transparency and data retention), Article 9 (Special Category Data) and Article 14 (privacy notice), amongst others. (UPDATE – the notice has now been published here.)

Whilst substantially lower than the £17 million Notice of Intent, issued in November 2021, this fine shows that the new Information Commissioner, John Edwards, is willing to take on at least some of the big tech companies. 

The ICO enforcement action comes after a joint investigation with the Office of the Australian Information Commissioner (OAIC). The latter also ordered the company to stop processing citizens’ data and delete any information it held. France and Italy have also sanctioned the company under the EU GDPR, and Canada under its own privacy laws.

So what next for Clearview? The ICO has very limited means to enforce a fine against foreign entities. Clearview has no operations or offices in the UK, so it could simply refuse to pay. That may be problematic from a public relations perspective: many of Clearview’s customers are law enforcement agencies in Europe, who may not be willing to associate themselves with a company found to have breached EU privacy laws.

When the Italian DP regulator fined Clearview €20m (£16.9m) earlier this year, it responded by saying it did not operate in any way that brought it under the jurisdiction of the EU GDPR. Could it argue the same in the UK, where it also has no operations, customers or headquarters? Students of our UK GDPR Practitioner Certificate course will know that the answer lies in Article 3(2), which sets out the extra-territorial effect of the UK GDPR:

This Regulation applies to the relevant processing of personal data of data subjects who are in the United Kingdom by a controller or processor not established in the United Kingdom where the processing activities are related to:

  1. the offering of goods or services, irrespective of whether a payment of the data subject is required, to such data subjects in the United Kingdom; or
  2. the monitoring of their behaviour as far as their behaviour takes place within the United Kingdom. [our emphasis]

Whilst clearly Clearview (no pun intended) is not established in the UK, the ICO is of the view it is covered by the UK GDPR due to Article 3(2). See the statement of the Commissioner, John Edwards:

“Clearview AI Inc has collected multiple images of people all over the world, including in the UK, from a variety of websites and social media platforms, creating a database with more than 20 billion images. The company not only enables identification of those people, but effectively monitors their behaviour and offers it as a commercial service. That is unacceptable. That is why we have acted to protect people in the UK by both fining the company and issuing an enforcement notice.”

If Clearview does appeal, we will hopefully receive judicial guidance on the territorial scope of the UK GDPR.

UPDATE (19/10/22): Clearview’s appeal against the ICO’s £7.5 million fine is scheduled for 21–23 November in the First-tier Tribunal (Information Rights).

This and other GDPR developments will be discussed in detail on our forthcoming GDPR Update workshop. We also have a few places left on our Advanced Certificate in GDPR Practice course starting in September.


Facial Recognition in Schools: Please, sir, I want some more.

Yesterday the Financial Times reported that, “nine schools in North Ayrshire will start taking payments for school lunches by scanning the faces of pupils, claiming that the new system speeds up queues and is more Covid-secure than the card payments and fingerprint scanners they used previously.”

For a few years now, schools have used biometrics including automated fingerprint identification systems for registration, library book borrowing and cashless catering. Big Brother Watch reported privacy concerns about this way back in 2014. Now a company, called CRB Cunninghams, has introduced facial recognition technology to allow schools to offer children the ability to collect and pay for lunches without the need for physical contact. In addition to the nine schools in Scotland, four English schools are reported to be introducing the technology. Silkie Carlo, the head of Big Brother Watch, said: 

“It’s normalising biometric identity check for something that is mundane. You don’t need to resort to airport-style [technology] for children getting their lunch.”

The law on the use of such technology is clear. Back in 2012, the Protection of Freedoms Act (POFA) created an explicit legal framework for the use of all biometric technologies (including facial recognition) in schools for the first time. It states that schools (and colleges) must seek the written consent of at least one parent of a child (anyone under the age of 18) before that child’s biometric data can be processed. Even if a parent consents, the child can still object or refuse to participate in the processing of their biometric data. In such a case schools must provide a reasonable alternative means of accessing the service i.e. paying for school meals in the present case. 

POFA only applies to schools and colleges in England and Wales. However, all organisations processing personal data must comply with the UK GDPR. Facial recognition data, being biometric, is classed as Special Category Data, and there is a legal prohibition on processing it unless one of the conditions in paragraph 2 of Article 9 is satisfied. Express consent of the Data Subjects (i.e. the children, subject to their capacity) seems to be the only way to justify such processing.

In 2019, the Swedish Data Protection Authority fined an education authority SEK 200,000 (approximately €20,000) after the latter instructed schools to use facial recognition to track pupil attendance. The schools had sought to base the processing on consent. However, the Swedish DPA considered that consent was not a valid legal basis given the imbalance between the Data Subject and the Data Controller. It ruled that there had been breaches of Article 5, by processing students’ personal data in a manner that was more intrusive as regards personal integrity, and encompassed more personal data, than was necessary for the specified purpose (monitoring attendance); of Article 9; and of Articles 35 and 36, by failing to fulfil the requirements for an impact assessment and failing to carry out prior consultation with the Swedish DPA.

The French regulator (CNIL) has also raised concerns about a facial recognition trial commissioned by the Provence-Alpes-Côte d’Azur Regional Council, and which took place in two schools to control access by pupils and visitors. The CNIL concluded that “free and informed consent of students had not been obtained and the controller had failed to demonstrate that its objectives could not have been achieved by other, less intrusive means.” CNIL also said that facial recognition devices are particularly intrusive and present major risks of harming the privacy and individual freedoms of the persons concerned. They are also likely to create a sense of enhanced surveillance. These risks are increased when facial recognition devices are applied to minors, who are subject to special protection in national and European laws.

Facial recognition has also caused controversy in other parts of the world recently. In India, the government has been criticised for its decision to install it in some government-funded schools in Delhi. As more UK schools opt for this technology, it will be interesting to see how many objections they receive, not just from parents but also from children. This and other recent privacy related stories highlight the importance of a Data Protection Officer’s role.

BONUS QUESTION: The title of this post contains a nod to which classic novel? Answers in the comments section below.

All the recent GDPR developments will be discussed in detail on our forthcoming GDPR Update workshop. We have a few places left on our Advanced Certificate in GDPR Practice course starting in November.