EU Leads Global AI Regulation with Landmark Legislation

European representatives in Strasbourg recently concluded a marathon 37-hour negotiation, resulting in the world’s first comprehensive framework for regulating artificial intelligence. This ground-breaking agreement, facilitated by European Commissioner Thierry Breton and Spain’s AI Secretary of State, Carme Artigas, is set to shape how social media and search engines operate, impacting major companies. 

The deal, achieved after lengthy negotiations and hailed as a significant milestone, puts the EU at the forefront of AI regulation globally, surpassing the US, China, and the UK. The new legislation, expected to be enacted by 2025, involves comprehensive rules for AI applications, including a
risk-based system to address potential threats to health, safety, and human rights. 

Key components of the agreement include strict controls on AI-driven surveillance and real-time biometric technologies, with specific exceptions for law enforcement under certain circumstances. The European Parliament secured a ban on such technologies, except in cases of terrorist threats, the search for victims, or serious criminal investigations. 

MEPs Brando Benifei and Dragoș Tudorache, who led the negotiations, emphasised the aim of developing an AI ecosystem in Europe that prioritises human rights and values. The agreement also includes provisions for independent authorities to oversee predictive policing and uphold the presumption of innocence. 

Tudorache highlighted the balance struck between equipping law enforcement with necessary tools and banning AI technologies that could pre-emptively identify potential criminals. (Minority Report anyone?)
The highest-risk AI systems will now be regulated based on the computational power required for training, with GPT-4 being a notable example and currently the only technology meeting this threshold. 

Some Key Aspects 
 
The new EU AI Act delineates distinct regulations for AI systems based on their perceived level of risk, effectively categorizing them into “Unacceptable Risk,” “High Risk,” “Generative AI,” and “Limited Risk” groups, each with specific obligations for providers and users. 

Unacceptable Risk 

AI systems deemed a threat to people’s safety or rights will be prohibited. This includes: 

  • AI-driven cognitive behavioural manipulation, particularly targeting vulnerable groups, like voice-activated toys promoting hazardous behaviours in children. 
  • Social scoring systems that classify individuals based on behaviour,
    socio-economic status, or personal characteristics. 
  • Real-time and remote biometric identification systems, like facial recognition. 
  • Exceptions exist, such as “post” remote biometric identification for serious crime investigations, subject to court approval. 

High Risk 

AI systems impacting safety or fundamental rights fall under the high-risk category, subdivided into: 

  • AI in EU-regulated product safety categories, like toys, aviation, cars, medical devices, and lifts. 
  • Specific areas requiring EU database registration, including biometric identification, critical infrastructure management, education, employment, public services access, law enforcement, migration control, and legal assistance. 
  • High-risk AI systems must undergo pre-market and lifecycle assessments. 

Generative AI 

AI like ChatGPT must adhere to transparency protocols: 

  • Disclosing AI-generated content. 
  • Preventing generation of illegal content. 
  • Publishing summaries of copyrighted data used in training. 

Limited Risk 

These AI systems require basic transparency for informed user decisions, particularly for AI that generates or manipulates visual and audio content, like deepfakes. Users should be aware when interacting with AI. 
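The four-tier structure above can be sketched as a toy lookup, purely for illustration. The tier names and obligation summaries below are paraphrased from the framework as described in this post, not statutory text, and the function names are our own:

```python
# Illustrative sketch only: a toy mapping of the EU AI Act's four risk
# tiers to the headline obligations described above. Not legal advice.
RISK_TIERS = {
    "unacceptable": "prohibited (e.g. social scoring, manipulative AI)",
    "high": "pre-market conformity assessment and lifecycle monitoring",
    "generative": "transparency: disclose AI content, prevent illegal output, summarise training data",
    "limited": "basic transparency so users know they are interacting with AI",
}

def obligation_for(tier: str) -> str:
    """Return the headline obligation for a given risk tier."""
    try:
        return RISK_TIERS[tier.lower()]
    except KeyError:
        raise ValueError(f"Unknown risk tier: {tier!r}")

print(obligation_for("high"))
```

The point of the tiering is exactly this kind of proportionality: the obligation attached to a system follows from its classification, not from the technology itself.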

The legislation sets a precedent for future digital regulation. As we saw with the GDPR, governments outside the EU used the legislation as a foundation for their own laws, and many corporations adopted the same European privacy standards for their businesses worldwide for efficiency. This could easily happen with the EU AI Act, with governments using it as a ‘starter for ten’. It will be interesting to see how the legislation caters for the algorithmic biases found in current iterations of the technology, from facial recognition to other automated decision-making algorithms. 

The UK published its AI White Paper in March of this year, which it says follows a “Pro-Innovation” approach. However, it seems to have decided to go ‘face first’ before any legislation is passed, with facial recognition software recently used at the Beyoncé gig, King Charles’ coronation and the Formula One Grand Prix. For many, it is the impact of the decision making that the software performs through the power of AI which is worrying. The ICO has useful guides on the use of AI which can be found here. 

As artificial intelligence technology rapidly advances, exemplified by Google’s impressive Gemini demo, the urgency for comprehensive regulation has become increasingly apparent. The EU has signalled its intent to avoid the past oversights seen in the unchecked expansion of tech giants and to be at the forefront of regulating this fascinating technology, ensuring its ethical and responsible utilisation. 

Join our Artificial Intelligence and Machine Learning: How to Implement Good Information Governance workshop for hands-on insights, key resource awareness, and best practices, ensuring you’re ready to navigate AI complexities fairly and lawfully.

Clearview AI Wins Appeal Against GDPR Fine 

Last week a Tribunal overturned a GDPR Enforcement Notice and a Monetary Penalty Notice issued to Clearview AI, an American facial recognition company. In Clearview AI Inc v The Information Commissioner [2023] UKFTT 00819 (GRC), the First-Tier Tribunal (Information Rights) ruled that the Information Commissioner had no jurisdiction to issue either notice, on the basis that the GDPR/UK GDPR did not apply to the personal data processing in issue.  

Background 

Clearview is a US-based company which describes itself as the “World’s Largest Facial Network”. Its online database contains 20 billion images of people’s faces, together with data scraped from publicly available information on the internet and social media platforms all over the world. Customers can upload an image of a person to its app; the app then identifies the person by checking against all the images in the Clearview database.  
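The matching step described above, comparing a query face against a database of biometric vectors, can be sketched as a simple nearest-neighbour search over face embeddings. This is a hypothetical illustration only: Clearview’s actual architecture is not public, and the random vectors below are stand-ins for the embeddings a real face-recognition model would produce.

```python
# Hypothetical sketch of matching a query image's embedding against a
# database of biometric vectors, using cosine similarity. Names,
# dimensions and data are illustrative assumptions, not Clearview's system.
import numpy as np

rng = np.random.default_rng(0)
EMBED_DIM = 128

# Pretend database: one embedding vector per scraped face image.
database = {f"person_{i}": rng.normal(size=EMBED_DIM) for i in range(1000)}

def best_match(query: np.ndarray, db: dict) -> tuple:
    """Return the identity whose embedding has the highest cosine
    similarity with the query embedding, plus that similarity score."""
    q = query / np.linalg.norm(query)
    best_id, best_score = None, -1.0
    for identity, vec in db.items():
        score = float(q @ (vec / np.linalg.norm(vec)))
        if score > best_score:
            best_id, best_score = identity, score
    return best_id, best_score

# A query that is a noisy copy of one stored vector should match it,
# because small perturbations barely change the cosine similarity.
query = database["person_42"] + rng.normal(scale=0.1, size=EMBED_DIM)
identity, score = best_match(query, database)
print(identity, round(score, 3))
```

The legal significance is that each lookup processes the biometric data of everyone in the database, not just the person in the query image, which is why the scale of the scraping matters so much in the analysis that follows.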

In May 2022 the ICO issued a Monetary Penalty Notice of £7,552,800 to Clearview for breaches of the GDPR, including failing to use the information of people in the UK in a way that is fair and transparent. Although Clearview is a US company, the ICO ruled that the UK GDPR applied because of Article 3(2)(b) (territorial scope). It concluded that Clearview’s processing activities “are related to… the monitoring of [UK residents’] behaviour as far as their behaviour takes place within the United Kingdom.” 

The ICO also issued an Enforcement Notice ordering Clearview to stop obtaining and using the personal data of UK residents that is publicly available on the internet, and to delete the data of UK residents from its systems. (see our earlier blog for more detail on these notices.) 

The Judgment  

The First-Tier Tribunal (Information Rights) has now overturned the ICO’s enforcement and penalty notices against Clearview. It concluded that although Clearview did carry out data processing related to monitoring the behaviour of people in the UK (Article 3(2)(b) of the UK GDPR), the ICO did not have jurisdiction to take enforcement action or issue a fine. Both the GDPR and UK GDPR provide that acts of foreign governments fall outside their scope; it is not for one government to seek to bind or control the activities of another sovereign state. However, the Tribunal noted that the ICO could have taken action under the Law Enforcement Directive (Part 3 of the DPA 2018 in the UK), which specifically regulates the processing of personal data in relation to law enforcement. 

Learning Points 

While the Tribunal’s judgment in this case reflects its specific circumstances, some of its findings are of wider application: 

  • The term “behaviour” (in Article 3(2)(b)) means something about what a person does (e.g., location, relationship status, occupation, use of social media, habits) rather than something that merely identifies or describes them (e.g., name, date of birth, height, hair colour).  

  • The term “monitoring” not only comes up in Article 3(2)(b) but also in Article 35(3)(c) (when a DPIA is required). The Tribunal ruled that monitoring includes tracking a person at a fixed point in time as well as on a continuous or repeated basis.

  • In this case, Clearview was not monitoring UK residents directly as its processing was limited to creating and maintaining a database of facial images and biometric vectors. However, Clearview’s clients were using its services for monitoring purposes and therefore Clearview’s processing “related to” monitoring under Article 3(2)(b). 

  • A provider of services like Clearview may be considered a joint controller with its clients where both determine the purposes and means of processing. In this case, Clearview was a joint controller with its clients because it imposed restrictions on how clients could use the services (i.e., only for law enforcement and national security purposes) and determined the means of processing when matching query images against its facial recognition database.  

Data Scraping 

The ruling is not a green light for data scraping, the practice whereby publicly available data, usually from the internet, is collected and processed by companies, often without the Data Subject’s knowledge. The Tribunal ruled that this is an activity to which the UK GDPR can apply. In its press release reacting to the ruling, the ICO said: 

“The ICO will take stock of today’s judgment and carefully consider next steps.
It is important to note that this judgment does not remove the ICO’s ability to act against companies based internationally who process data of people in the UK, particularly businesses scraping data of people in the UK, and instead covers a specific exemption around foreign law enforcement.” 

This is a significant ruling from the First Tier Tribunal which has implications for the extra territorial effect of the UK GDPR and the ICO powers to enforce it. It merits an appeal by the ICO to the Upper Tribunal. Whether this happens depends very much on the ICO’s appetite for a legal battle with a tech company with deep pockets.  

This and other GDPR developments will be discussed by Robert Bateman in our forthcoming GDPR Update workshop.  

Exploring the Legal and Regulatory Challenges of AI and ChatGPT 

In our recent blog post, entitled “GDPR and AI: The Rise of the Machines”, we said that 2023 is going to be the year of Artificial Intelligence (AI). Events so far seem to suggest that advances in the technology, as well as legal and regulatory challenges, are on the horizon.   

Generative AI, particularly large language models like ChatGPT, has captured the world’s imagination. ChatGPT reached 100 million monthly users in January, having only launched in November, making it the fastest-growing consumer platform on record; TikTok, the previous record holder, took nine months to hit the same usage level. In March 2023 it recorded 1.6 billion user visits, mind-boggling numbers that show how significant a technological advancement it is becoming. There have already been some remarkable medical uses of generative AI, including the ability to match drugs to patients, numerous reported cancer research breakthroughs, and robots performing major surgery. 
 
However, it is important to take a step back and reflect on the risks of a technology that has made its own CEO “a bit scared” and which has caused the “Godfather of AI” to quit his job at Google. The regulatory and legal backlash against AI has already started. Recently, Italy became the first Western country to block ChatGPT. The Italian DPA highlighted privacy concerns relating to the model. Other European regulators are reported to be looking into the issue too. In April the European Data Protection Board launched a dedicated task force on ChatGPT. It said the goal is to “foster cooperation and to exchange information on possible enforcement actions conducted by data protection authorities.” Elsewhere, Canada has opened an investigation into OpenAI following a complaint alleging the collection, use and disclosure of personal information without consent. 

The UK Information Commissioner’s Office (ICO) has expressed its own concerns. Stephen Almond, Director of Technology and Innovation at the ICO, said in a blog post: 

“Data protection law still applies when the personal information that you’re processing comes from publicly accessible sources…We will act where organisations are not following the law and considering the impact on individuals.”  

Wider Concerns 

ChatGPT suffered its first major personal data breach in March.
According to a blog post by OpenAI, the breach exposed payment-related and other personal information of 1.2% of ChatGPT Plus subscribers. But the concerns around AI and ChatGPT don’t stop at privacy law.   

An Australian mayor is considering a defamation suit against OpenAI after ChatGPT told users that he was jailed for bribery; in reality, he was the whistleblower in the bribery case. Similarly, it falsely accused a US law professor of sexual assault. The Guardian reported recently that ChatGPT is making up fake Guardian articles. There are concerns about copyright law too; a number of songs using AI to clone the voices of artists including Drake and The Weeknd have since been removed from streaming services after criticism from music publishers. There have also been fully AI-generated Joe Rogan podcast episodes featuring the OpenAI CEO and Donald Trump. These podcasts are definitely worth a sample; it is frankly scary how realistic they are. 

AI also poses a significant threat to jobs. A report by investment bank Goldman Sachs says it could replace the equivalent of 300 million full-time jobs. Our director, Ibrahim Hasan, recently gave his thoughts on this topic to BBC News Arabic. (You can watch him here. If you just want to hear Ibrahim “speak in Arabic” skip the video to 2min 48 secs!) 
 

EU Regulation 

With increasing concern about the future risks AI could pose to people’s privacy, their human rights or their safety, many experts and policy makers believe AI needs to be regulated. The European Union’s proposed legislation, the Artificial Intelligence (AI) Act, focuses primarily on strengthening rules around data quality, transparency, human oversight and accountability. It also aims to address ethical questions and implementation challenges in various sectors ranging from healthcare and education to finance and energy. 

The Act also envisages grading AI products according to how potentially harmful they might be and staggering regulation accordingly. So, for example, an email spam filter would be more lightly regulated than a system designed to diagnose a medical condition, and some AI uses, such as social scoring by governments, would be prohibited altogether. 

UK White Paper 

On 29th March 2023, the UK government published a white paper entitled “A pro-innovation approach to AI regulation.” The paper sets out a new “flexible” approach to regulating AI which is intended to build public trust and make it easier for businesses to grow and create jobs. Unlike the EU, the UK will not introduce new legislation to regulate AI. In its press release, the UK government says: 

“The government will avoid heavy-handed legislation which could stifle innovation and take an adaptable approach to regulating AI. Instead of giving responsibility for AI governance to a new single regulator, the government will empower existing regulators – such as the Health and Safety Executive, Equality and Human Rights Commission and Competition and Markets Authority – to come up with tailored, context-specific approaches that suit the way AI is actually being used in their sectors.” 

The white paper outlines the following five principles that regulators should consider in order to facilitate the safe and innovative use of AI in their industries: 

  • Safety, Security and Robustness: applications of AI should function in a secure, safe and robust way where risks are carefully managed; 

  • Transparency and Explainability: organisations developing and deploying AI should be able to communicate when and how it is used and explain a system’s decision-making process in an appropriate level of detail that matches the risks posed by the use of the AI; 

  • Fairness: AI should be used in a way which complies with the UK’s existing laws (e.g., the UK General Data Protection Regulation), and must not discriminate against individuals or create unfair commercial outcomes; 

  • Accountability and Governance: measures are needed to ensure there is appropriate oversight of the way AI is being used and clear accountability for the outcomes; and 

  • Contestability and Redress: people need to have clear routes to dispute harmful outcomes or decisions generated by AI. 

Over the next 12 months, regulators will be tasked with issuing practical guidance to organisations, as well as other tools and resources such as risk assessment templates, that set out how the above five principles should be implemented in their sectors. The government has said this could be accompanied by legislation, when parliamentary time allows, to ensure consistency among the regulators. 

Michelle Donelan MP, Secretary of State for Science, Innovation and Technology, considers that this light-touch, principles-based approach “will enable . . . [the UK] to adapt as needed while providing industry with the clarity needed to innovate.” However, this approach makes the UK an outlier in comparison to global trends. Many other countries are developing or passing specific laws to address alleged AI dangers, such as the algorithmic rules imposed in China or the United States. Consumer groups and privacy advocates will also be concerned about the risks to society in the absence of detailed and unified statutory AI regulation.  

Want to know more about this rapidly developing area? Our forthcoming AI and Machine Learning workshop will explore the common challenges that this subject presents focussing on GDPR as well as other information governance and records management issues.  

GDPR News Roundup

So much has happened in the world of data protection recently. Where to start?

International Transfers

In April, the European Data Protection Board’s (EDPB) opinions (GDPR and Law Enforcement Directive (LED)) on UK adequacy were adopted. The EDPB has looked at the draft EU adequacy decisions. It acknowledged that there is alignment between the EU and UK laws but also expressed some concerns, and has issued a non-binding opinion recommending their acceptance. If accepted, the two adequacy decisions will run for an initial period of four years. More here.

Last month saw the ICO’s annual data protection conference go online due to the pandemic. Whilst not the same as a face-to-face conference, it was still a good event with lots of nuggets for data protection professionals, including the news that the ICO is working on bespoke UK standard contractual clauses (SCCs) for international data transfers. Deputy Commissioner Steve Wood said: 

“I think we recognise that standard contractual clauses are one of the most heavily used transfer tools in the UK GDPR. We’ve always sought to help organisations use them effectively with our guidance. The ICO is working on bespoke UK standard clauses for international transfers, and we intend to go out for consultation on those in the summer. We’re also considering the value to the UK for us to recognise transfer tools from other countries, so standard data transfer agreements, so that would include the EU’s standard contractual clauses as well.”

Lloyd v Google 

The much-anticipated Supreme Court hearing in the case of Lloyd v Google LLC took place at the end of April. The case concerns the legality of Google’s collection and use of browser-generated data from more than 4 million iPhone users during 2011-12 without their consent. Following the two-day hearing, the Supreme Court will now decide, amongst other things, whether, under the DPA 1998, damages are recoverable for ‘loss of control’ of data without needing to identify any specific financial loss, and whether a claimant can bring a representative action on behalf of a group on the basis that the group have the ‘same interest’ in the claim and are identifiable. The decision is likely to have wide-ranging implications for representative actions, what damages can be awarded for and the level of damages in data protection cases. Watch this space!

Ticketmaster Appeal

In November 2020, the ICO fined Ticketmaster £1.25m for a breach of Articles 5(1)(f) and 32 GDPR (security). Ticketmaster appealed the penalty notice on the basis that there had been no breach of the GDPR; alternatively that it was inappropriate to impose a penalty, and that in any event the sum was excessive. The appeal has now been stayed by the First-Tier Tribunal until 28 days after the pending judgment in a damages claim brought against Ticketmaster by 795 customers: Collins & Others v Ticketmaster UK Ltd (BL-2019-LIV-000007). 

Age Appropriate Design Code

This code came into force on 2 September 2020, with a 12 month transition period. The Code sets out 15 standards organisations must meet to ensure that children’s data is protected online. It applies to all the major online services used by children in the UK and includes measures such as providing default settings which ensure that children have the best possible access to online services whilst minimising data collection and use.

With less than four months to go (2 September 2021) the ICO is urging organisations and businesses to make the necessary changes to their online services and products. We are planning a webinar on the code. Get in touch if interested.

AI and Automated Decision Making

Article 22 of the GDPR provides protection for individuals against purely automated decisions with a legal or significant impact. In February, the Court of Amsterdam ordered Uber, the ride-hailing app, to reinstate six drivers who it was claimed were unfairly dismissed “by algorithmic means.” The court also ordered Uber to pay compensation to the sacked drivers.

In April, the EU Commission published a proposal for a harmonised framework on AI. The framework seeks to impose obligations on both providers and users of AI. Like the GDPR, the proposal includes fine levels and an extra-territorial effect. (Readers may be interested in our new webinar on AI and Machine Learning.)

Publicly Available Information

Just because information is publicly available does not mean companies have a free pass to use it without consequences. Data protection laws still have to be complied with. In November 2020, the ICO ordered the credit reference agency Experian Limited to make fundamental changes to how it handles personal data within its direct marketing services. The ICO found that significant ‘invisible’ processing took place, likely affecting millions of adults in the UK. It is ‘invisible’ because the individual is not aware that the organisation is collecting and using their personal data. Experian has lodged an appeal against the Enforcement Notice.

Interestingly, the Spanish regulator has recently fined another credit reference agency, Equifax, €1m for several failures under the GDPR. Individuals complained about Equifax’s use of their personal data which was publicly available. Equifax had also failed to provide the individuals with a privacy notice. 

Data Protection by Design

The Irish data protection regulator issued its largest domestic fine recently. Irish Credit Bureau (ICB) was fined €90,000 after a change to the ICB’s computer code in 2018 resulted in 15,000 accounts having incorrect details recorded about their loans before the mistake was noticed. Amongst other things, the decision found that the ICB infringed Article 25(1) of the GDPR by failing to implement appropriate technical and organisational measures designed to implement the principle of accuracy in an effective manner and to integrate the necessary safeguards into the processing in order to meet the requirements of the GDPR and protect the rights of data subjects (aka DP by design and by default). 

Data Sharing 

The ICO’s Data Sharing Code of Practice provides organisations with a practical guide on how to share personal data in line with data protection law. Building on the code, the ICO recently outlined its plans to update its guidance on anonymisation and pseudonymisation, and to explore the role that privacy enhancing technologies might play in enabling safe and lawful data sharing.

UK GDPR Handbook

The UK GDPR Handbook is proving very popular among data protection professionals.

It sets out the full text of the UK GDPR laid out in a clear and easy to read format. It cross references the EU GDPR recitals, which also now form part of the UK GDPR, allowing for a more logical reading. The handbook uses a unique colour coding system that allows users to easily identify amendments, insertions and deletions from the EU GDPR. Relevant provisions of the amended DPA 2018 have been included where they supplement the UK GDPR. To assist users in interpreting the legislation, guidance from the Information Commissioner’s Office, Article 29 Working Party and the European Data Protection Board is also signposted. Read what others have said:

“A very useful, timely, and professional handbook. Highly recommended.”

“What I’m liking so far is that this is “just” the text (beautifully collated together and cross-referenced Articles / Recital etc.), rather than a pundits interpretation of it (useful as those interpretations are on many occasions in other books).”

“Great resource, love the tabs. Logical and easy to follow.”

Order your copy here.

These and other GDPR developments will also be discussed in detail in our online GDPR update workshop next week.
