Facial recognition is often sold as a neutral, objective tool. But recent admissions from the UK government show just how fragile that claim really is.
New evidence has confirmed that facial recognition technology used by UK police is significantly more likely to misidentify people from certain demographic groups. The problem is not marginal, and it is not theoretical. It is already embedded in live policing.
A Systematic Pattern of Error
Independent testing commissioned by the Home Office found that false-positive rates vary dramatically with ethnicity, gender, and system settings.
At lower operating thresholds — where the software is configured to return more matches — the disparity becomes stark. White individuals were falsely matched at a rate of around 0.04%. For Asian individuals, the rate rose to approximately 4%. For Black individuals, it reached about 5.5%. The highest error rate was recorded among Black women, who were falsely matched close to 10% of the time.
The data highlights a striking imbalance: at that threshold, Asian and Black individuals were misidentified around 100 times or more as often as white individuals, while women faced error rates roughly double those of men.
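For readers who want to check the arithmetic, the short Python sketch below simply divides each group's rate by the white baseline. It uses the approximate percentages quoted above, not the underlying report data, so the ratios are indicative rather than exact.

```python
# Illustrative only: relative false-match rates implied by the figures quoted above.
rates = {
    "White individuals": 0.04,  # approximate % falsely matched at the lower threshold
    "Asian individuals": 4.0,
    "Black individuals": 5.5,
    "Black women": 10.0,
}

baseline = rates["White individuals"]
for group, rate in rates.items():
    print(f"{group}: {rate}% false-match rate, "
          f"about {rate / baseline:.0f}x the white baseline")
```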
Why This Is Not an Abstract Risk
This technology is already in widespread use. Police forces rely on facial recognition to analyse CCTV footage, conduct retrospective searches across custody databases, and, in some cases, deploy live systems in public spaces.
The scale matters. Thousands of retrospective facial recognition searches are conducted each month. Even a low error rate, when multiplied across that volume, results in a significant number of people being wrongly flagged.
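A rough, purely hypothetical calculation illustrates the point; the monthly volume and error rate below are assumptions chosen to make the arithmetic concrete, not figures from the Home Office testing.

```python
# Hypothetical illustration of how a small error rate scales with search volume.
monthly_searches = 10_000   # assumed: "thousands" of retrospective searches per month
false_match_rate = 0.01     # assumed: a 1% false-match rate at the operating threshold

wrongly_flagged_per_month = monthly_searches * false_match_rate
print(f"~{wrongly_flagged_per_month:.0f} people wrongly flagged per month")
print(f"~{wrongly_flagged_per_month * 12:.0f} per year")
```

Even with these deliberately modest assumptions, the system produces a steady stream of people who come to police attention through no action of their own.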
A false match can lead to questioning, surveillance, or police intervention. Even if officers ultimately decide not to act, the encounter itself can be intrusive, distressing, and damaging. These effects do not disappear simply because a human later overrides the system.
Bias, Thresholds, and Operational Reality
For years, facial recognition vendors and public authorities argued that bias could be controlled through careful configuration. In controlled conditions, stricter thresholds reduce error rates. But operational pressures often incentivise looser settings that generate more matches, even at the cost of accuracy.
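The trade-off can be sketched with invented similarity scores (none of the numbers below come from real systems): as the match threshold is lowered, the system returns more genuine matches, but the count of false matches grows far faster.

```python
# Illustrative sketch of the threshold trade-off, using invented score distributions.
import random

random.seed(0)
# Assume genuine matches tend to score high and non-matches lower, with some overlap.
genuine_scores = [random.gauss(0.80, 0.08) for _ in range(1_000)]
impostor_scores = [random.gauss(0.55, 0.10) for _ in range(1_000)]

for threshold in (0.75, 0.65, 0.55):
    true_matches = sum(s >= threshold for s in genuine_scores)
    false_matches = sum(s >= threshold for s in impostor_scores)
    print(f"threshold {threshold:.2f}: {true_matches} genuine matches returned, "
          f"{false_matches} false matches returned")
```

A configuration that looks acceptable in a controlled benchmark can therefore behave very differently once it is tuned to return more candidates in day-to-day use.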
The government’s own findings now confirm what critics have long warned: fairness is conditional. Bias does not vanish; it shifts depending on how the system is used.
The data also shows that demographic impacts overlap. Women, older people, and ethnic minorities are all more likely to be misidentified, with compounded effects for those who sit at multiple intersections.
Expansion Amid Fragile Trust
Despite these findings, the government is consulting on proposals to expand national facial recognition capability, including systems that could draw on large biometric datasets such as passport and driving licence records.
Ministers have pointed to plans to procure newer algorithms and to subject them to independent evaluation. While improved testing and oversight are essential, they do not answer the underlying question: should surveillance infrastructure be expanded while known structural risks remain unresolved?
Civil liberties groups and oversight bodies have described the findings as deeply concerning, warning that transparency, accountability, and public confidence are being strained by the rapid adoption of opaque technologies.
This Is a Governance Issue, Not Just a Technical One
Facial recognition is not simply a question of software performance. It is a question of how power is exercised and how risk is distributed.
When automated systems systematically misidentify certain groups, the consequences fall unevenly. Decisions about who is stopped, questioned, or monitored start to reflect the limitations of technology rather than evidence or behaviour.
Once such systems become normalised, rolling them back becomes difficult. That is why scrutiny matters now, not after expansion.
If technology is allowed to shape policing, the justice system, and public space, it must be subject to the highest standards of accountability, fairness, and democratic oversight.
These and other developments in the use of artificial intelligence, surveillance, and automated decision-making will be examined in detail in our AI Governance Practitioner Certificate training programme, which provides a practical and accessible overview of how AI systems are developed, deployed, and regulated, with particular attention to risk, bias, and accountability.







