2023 is going to be the year of AI. In January, Microsoft announced a multibillion-dollar investment in OpenAI, the company behind the image generation tool DALL-E and the chatbot ChatGPT.
The term “artificial intelligence” or “AI” refers to the use of machines and computer systems that can perform tasks normally requiring human intelligence. The past few years have seen rapid progress in this field. Advancements in deep learning algorithms, cloud computing, and data storage have made it possible for machines to process and analyse large amounts of data quickly and accurately. AI’s ability to interpret human language means that virtual assistants, such as Siri and Alexa, can now understand and respond to complex spoken commands at lightning speed.
The public sector is increasingly leveraging the power of AI to undertake administrative tasks and create more tailored services that meet users’ needs. Local government is using AI to simplify staff scheduling, predict demand for services and even estimate the risk of an individual committing fraud. Healthcare providers are now able to provide automated diagnoses based on medical imaging data from patients, thereby reducing wait times.
With any major technological advance there are potential risks and downsides. On Monday, ElevenLabs, an AI speech software company, said it had found an “increasing number of voice cloning misuse cases”. According to reports, hackers used the ElevenLabs software to create deepfake voices of famous people (including Emma Watson and Joe Rogan) making racist, transphobic and violent comments.
There are concerns about the impact of AI on employment and the future of work. In April 2021, the Court of Amsterdam ordered that Uber reinstate taxi drivers in the UK and Portugal who had been dismissed by “robo firing”: the use of an algorithm to make a decision about dismissal with no human involvement. The Court concluded that Uber had made the decisions “based solely on automated processing” within the meaning of Article 22(1) of the GDPR. It was ordered to reinstate the drivers’ accounts and pay them compensation.
As well as ethical questions about the use of AI in decision-making processes that affect people’s lives, AI-driven algorithms may lead to unintended biases or inaccurate decisions if not properly monitored and regulated. In 2021 the privacy pressure group, NOYB, filed a GDPR complaint against Amazon, claiming that Amazon’s algorithm discriminates against some customers by denying them the opportunity to pay for items by monthly invoice.
There is also a risk that AI is deployed without consideration of the privacy implications. In May 2022, the UK Information Commissioner’s Office fined Clearview AI Inc more than £7.5 million for breaches of GDPR. Clearview’s online database contains 20 billion images of people’s faces and data scraped from publicly available information on the internet and social media platforms all over the world. The company, which describes itself as the “World’s Largest Facial Network”, allows customers, including the police, to upload an image of a person to its app, which then uses AI to check it against all the images in the Clearview database. The app then provides a list of matching images, with links to the websites they came from.
Recently, the ICO conducted an inquiry after concerns were raised about the use of algorithms in decision-making in the welfare system by local authorities and the DWP. In this instance, the ICO did not find any evidence to suggest that benefit claimants are subjected to any harms or financial detriment as a result of the use of algorithms. It did, however, emphasise a number of practical steps that local authorities and central government can take when using algorithms or AI:
1. Take a data protection by design and default approach
Data processed using algorithms, data analytics or similar systems should be reviewed, both proactively and reactively, to ensure it is accurate and up to date. If a local authority decides to engage a third party to process personal data using algorithms, data analytics or AI, the authority is responsible for assessing that the third party is competent to process personal data in line with the UK GDPR.
2. Be transparent with people about how you are using their data
Local authorities should regularly review their privacy policies, to ensure they comply with Articles 13 and 14 of the UK GDPR, and identify areas for improvement. They should also bring any new uses of individuals’ personal data to their attention.
3. Identify the potential risks to people’s privacy
Local authorities should consider conducting a Data Protection Impact Assessment (DPIA) to help identify and minimise the data protection risks of using algorithms, AI or data analytics. A DPIA should consider compliance risks, but also broader risks to the rights and freedoms of people, including the potential for any significant social or economic disadvantage.
In April 2021, the European Commission presented its proposal for a Regulation to harmonise rules for AI, also known as the EU “AI Act”. Whilst there is still a long way to go before this proposal becomes legislation, it could create an impetus for the UK to further regulate AI.
Use of AI has enormous benefits. It does, though, have the potential to adversely impact people’s lives and deny them their fundamental rights. As such, understanding the implications of AI technology, and how to use it in a fair and lawful manner, is critical for data protection and information governance officers.
Want to know more about this rapidly developing area? Our forthcoming AI and Machine Learning workshop will explore the common challenges that this subject presents, focussing on GDPR as well as other information governance and records management issues.
Are you an experienced GDPR Practitioner wanting to take your skills to the next level? See our Advanced Certificate in GDPR Practice.