EU AI Act Approved by European Parliament  

On Wednesday 13th March 2024, the European Parliament approved the text of the harmonised rules on artificial intelligence, the so-called “Artificial Intelligence Act” (AI Act). Agreed upon in negotiations with member states in December 2023, the Act was endorsed by MEPs with 523 votes in favour, 46 against and 49 abstentions. It aims to “protect fundamental rights, democracy, the rule of law and environmental sustainability from high-risk AI, while boosting innovation and establishing Europe as a leader in the field.” Despite Brexit, UK businesses and entities engaged in AI-related activities will still be affected by the Act if they intend to operate within the EU market. The Act will have an extraterritorial reach, just like the EU GDPR.

The main provisions of the Act can be read here. In summary, the Act sets out comprehensive rules for AI applications, including a risk-based system to address potential threats to health, safety and human rights. The Act will ban some AI applications which pose an “unacceptable risk”, such as real-time and remote biometric identification systems like facial recognition, and impose strict obligations on others considered “high risk”, such as AI used in EU-regulated product safety categories like cars and medical devices. These obligations include adherence to data governance standards, transparency rules, and the incorporation of human oversight mechanisms.

Next steps 

The Act is still subject to a final lawyer-linguist check and is expected to be finally adopted before the end of the legislature (through the so-called corrigendum procedure). It also needs to be formally endorsed by the Council of the European Union.

The Act will enter into force twenty days after its publication in the Official Journal, and be fully applicable 24 months after its entry into force, except for:

  • bans on prohibited practices, which will apply six months after the entry into force date;
  • codes of practice (nine months after entry into force);
  • general-purpose AI rules including governance (12 months after entry into force); and
  • obligations for high-risk systems (36 months after entry into force).
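As a rough illustration of how these deadlines stack up, the sketch below computes the staged applicability dates from an assumed publication date. The actual Official Journal publication date was not known at the time of writing, so `publication` is a placeholder, not a fact from the Act.

```python
from datetime import date, timedelta

# Placeholder assumption: the actual Official Journal publication
# date was not known at the time of writing.
publication = date(2024, 6, 1)

# The Act enters into force 20 days after publication.
entry_into_force = publication + timedelta(days=20)

def add_months(d: date, months: int) -> date:
    """Add whole calendar months to a date (day clamped for simplicity)."""
    total = d.month - 1 + months
    return date(d.year + total // 12, total % 12 + 1, min(d.day, 28))

# Staged applicability, in months after entry into force.
stages = [
    (6, "Bans on prohibited practices apply"),
    (9, "Codes of practice apply"),
    (12, "General-purpose AI rules, including governance, apply"),
    (24, "Act fully applicable (general rule)"),
    (36, "Obligations for high-risk systems apply"),
]

print(f"Entry into force: {entry_into_force}")
for months, label in stages:
    print(f"{add_months(entry_into_force, months)}: {label}")
```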

Influence on UK AI Regulation 

The EU’s regulatory approach will impact the UK Government’s decisions on AI governance. An AI White Paper was published in March 2023 entitled “A pro-innovation approach to AI regulation”. The paper sets out the UK’s preference not to place AI regulation on a statutory footing but to make use of “regulators’ domain-specific expertise to tailor the implementation of the principles to the specific context in which AI is used.” In January 2024, the ICO launched a consultation series on Generative AI, examining how aspects of data protection law should apply to the development and use of the technology. It is expected to issue more AI guidance later in 2024.

By attending our new AI Act workshop, you will understand the new law in detail and its interaction with the UK’s objectives and strategy for AI regulation.  

AI Regulation and the EU AI Act  

2024 is going to be the year of AI regulation. As AI’s impact on our daily lives increases, governments and regulatory bodies globally are grappling with the need to establish clear guidelines and standards for its responsible use.

ChatGPT 

Ask people about AI and many will talk about AI-powered chatbots like ChatGPT and Gemini, Google’s replacement for Bard. The former currently has around 180.5 million users, who generated 1.6 billion visits in December 2023. However, with great popularity comes increased scrutiny as well as privacy and regulatory challenges.

In March 2023, Italy became the first Western country to block ChatGPT when its data protection regulator (the Garante per la Protezione dei Dati Personali) cited privacy concerns. The Garante’s communication to OpenAI, owner of ChatGPT, highlighted the lack of a suitable legal basis for the collection and processing of personal data for the purpose of training the algorithms underlying ChatGPT, the potential to produce inaccurate information about individuals, and child safety concerns. In total, the Garante said that it suspected ChatGPT to be in breach of Articles 5, 6, 8, 13 and 25 of the EU GDPR.

ChatGPT was made accessible in Italy four weeks after the above decision, but the Garante launched a “fact-finding activity” at the time. This culminated in a statement on 31st January 2024, in which it said it “concluded that the available evidence pointed to the existence of breaches of the provisions contained in the EU GDPR [General Data Protection Regulation]”. The cited breaches are essentially the same as the provisional findings discussed above, focussing on the mass collection of users’ data for training purposes and the risk of younger users being exposed to inappropriate content. ChatGPT has 30 days to respond with a defence.

EU AI Act 

Of course, there is more to AI than ChatGPT, and some would say far more beneficial use cases. Examples include the ability to match drugs to patients, numerous major cancer research breakthroughs, and robots performing major surgery. But there are downsides too, including bias, lack of transparency, and failure to take account of the ethical implications.

On 2nd February 2024, EU member states unanimously reached an agreement on the text of the harmonised rules on artificial intelligence, the so-called “Artificial Intelligence Act” (AI Act). The final draft of the Act will be adopted by the European Parliament in a plenary vote in April and will come into force in 2025 with a two-year transition period.

The main provisions of the Act can be read here. They do not differ much from the previous draft, which is discussed on our previous blog here. In summary, the AI Act sets out comprehensive rules for AI applications, including a risk-based system to address potential threats to health, safety and human rights. The Act will ban some AI applications which pose an “unacceptable risk” (e.g. real-time and remote biometric identification systems, like facial recognition) and impose strict obligations on others considered “high risk” (e.g. AI used in EU-regulated product safety categories such as cars and medical devices). These obligations include adherence to data governance standards, transparency rules, and the incorporation of human oversight mechanisms.

Despite Brexit, UK businesses and entities engaged in AI-related activities will still be affected by the Act if they intend to operate within the EU market. The Act will have an extraterritorial reach, just like the EU GDPR.

UK Response 

The UK Government’s own decisions on how to regulate AI will be influenced by the EU’s approach. An AI White Paper was published in March 2023 entitled “A pro-innovation approach to AI regulation”. The paper sets out the UK’s preference not to place AI regulation on a statutory footing but to make use of “regulators’ domain-specific expertise to tailor the implementation of the principles to the specific context in which AI is used.”

The government’s long-awaited follow-up to the AI White Paper was published last week. 

Key takeaways are: 

  • The government’s proposals for regulating AI still revolve around empowering existing regulators to create tailored, context-specific rules that suit the ways the technology is being used in the sectors they scrutinise, i.e. no legislation yet (regulators have been given until 30th April 2024 to publish their AI plans). 
     
  • The government generally reaffirmed its commitment to the white paper’s proposals, claiming this approach to regulation will ensure the UK remains more agile than “competitor nations” while also putting it on course to be a leader in safe, responsible AI innovation. 
     
  • It will, though, consider creating “targeted binding requirements” for select companies developing highly capable AI systems. 
     
  • It also committed to reviewing potential regulatory gaps on an ongoing basis: “We remain committed to the iterative approach set out in the whitepaper, anticipating that our framework will need to evolve as new risks or regulatory gaps emerge.” 
     

According to Michelle Donelan, Secretary of State for Science, Innovation and Technology, the UK’s approach to AI regulation has already made the country a world leader in both AI safety and AI development.
 

“AI is moving fast, but we have shown that humans can move just as fast,” she said. “By taking an agile, sector-specific approach, we have begun to grip the risks immediately, which in turn is paving the way for the UK to become one of the first countries in the world to reap the benefits of AI safely.” 

Practical Steps 

Last year, the ICO conducted an inquiry after concerns were raised about the use of algorithms in decision-making in the welfare system by local authorities and the DWP. In this instance, the ICO did not find any evidence to suggest that benefit claimants were subjected to any harms or financial detriment as a result of the use of algorithms. It did, though, emphasise a number of practical steps that local authorities and central government can take when using AI: 

  • Take a data protection by design and default approach 
  • Be transparent with people about how you are using their data by regularly reviewing privacy policies
  • Identify the potential risks to people’s privacy by conducting a Data Protection Impact Assessment

In January 2024, the ICO launched a consultation series on Generative AI, examining how aspects of data protection law should apply to the development and use of the technology. It is expected to issue more AI guidance later in 2024.

Join our Artificial Intelligence and Machine Learning: How to Implement Good Information Governance workshop for hands-on insights, key resource awareness, and best practices, ensuring you’re ready to navigate AI complexities fairly and lawfully.

The Hidden Reach of the Prevent Strategy: Beyond Counter-Terrorism Units

The UK government’s anti-radicalisation program, Prevent, is reportedly sharing the personal details of thousands of individuals more extensively than previously known. This sharing includes not just counter-terrorism units, but also airports, ports, immigration services, and officials at the Home Office and the Foreign, Commonwealth and Development Office (FCDO). Critics argue that such widespread data sharing could be illegal, as it involves moving sensitive personal data between databases without the consent of the individuals. 

A Metropolitan police document titled “Prevent case management guidance” indicates that Prevent details are also shared with the ports authority watchlist. This raises concerns that individuals may face increased scrutiny at airports or be subjected to counter-terrorism powers without reasonable suspicion. The document also mentions that foreign nationals may have their backgrounds checked by the FCDO and immigration services for any overseas convictions or intelligence. 

Furthermore, the Acro Criminal Records Office, which manages UK criminal records, is notified about individuals referred to Prevent, despite the program dealing with individuals who haven’t necessarily engaged in criminal behaviour. Counter-terror police emphasise their careful approach to data sharing, which aims to protect vulnerable individuals.

Prevent’s goal is to divert people from terrorism before they offend, and most people are unaware of their referral to the program. 95% of referrals result in no further action. A secret database, the National Police Prevent Case Management database, was previously disclosed in 2019, revealing the storage of details of those referred to Prevent. 

Newly disclosed information, obtained through a freedom of information request by the Open Rights Group (ORG), reveals that Prevent data is shared across various police databases, including the Police National Computer, specialised counter-terrorism and local intelligence systems, and the National Crime Agency. 

The sharing of this data was accidentally revealed due to a redaction error in a heavily edited Met document. Despite its sensitive nature, the ORG decided to make the document public. Sophia Akram of the ORG expressed concerns over the extent of the data sharing and potential harms, suggesting that it could be unfair and possibly unlawful. 

The guidance also indicates that data is retained and used even in cases where no further action is taken. There are concerns about the impact on young people’s educational opportunities, as Prevent requires public bodies like schools and the police to identify individuals at risk of extremism. 

Recent figures show thousands of referrals to Prevent, predominantly from educational institutions. From April 2022 to March 2023, a total of 6,817 individuals were directed to the Prevent program. Within this group, educational institutions were responsible for 2,684 referrals. Breaking down the referrals by age, there were 2,203 adolescents between the ages of 15 and 20, and 2,119 referrals involved children aged 14 or younger.

There are worries about the long-term consequences for children and young people referred to the program. Several cases have highlighted the intrusive nature of this data sharing and its potential impact on individuals’ lives, including cases in which students have missed out on a place at a sixth form college and others involving children as young as four years old.

Prevent Watch, an organisation monitoring the program, has raised alarms about the data sharing, particularly its effect on young children. The FoI disclosures challenge the notion that Prevent is non-criminalising, as data on individuals, even those marked as ‘no further action’, can be stored on criminal databases and flagged on watchlists. 

Counter-terrorism policing spokespeople defend the program, emphasising its multi-agency nature and focus on protecting people from harm. They assert that data sharing is carefully managed and legally compliant, aiming to safeguard vulnerable individuals from joining terror groups or entering conflict zones.

Learn more about data sharing with our UK GDPR Practitioner Certificate. Dive into the issues discussed in this blog and secure your spot now.

EU Leads Global AI Regulation with Landmark Legislation

European representatives in Strasbourg recently concluded a marathon 37-hour discussion, resulting in the world’s first comprehensive framework for regulating artificial intelligence. This ground-breaking agreement, facilitated by European Commissioner Thierry Breton and Spain’s AI Secretary of State, Carme Artigas, is set to shape how social media and search engines operate, impacting major companies.

The deal, achieved after lengthy negotiations and hailed as a significant milestone, puts the EU at the forefront of AI regulation globally, surpassing the US, China, and the UK. The new legislation, expected to be enacted by 2025, involves comprehensive rules for AI applications, including a risk-based system to address potential threats to health, safety, and human rights.

Key components of the agreement include strict controls on AI-driven surveillance and real-time biometric technologies, with specific exceptions for law enforcement under certain circumstances. The European Parliament ensured a ban on such technologies, except in cases of terrorist threats, searches for victims, or serious criminal investigations.

MEPs Brando Benifei and Dragoș Tudorache, who led the negotiations, emphasised the aim of developing an AI ecosystem in Europe that prioritises human rights and values. The agreement also includes provisions for independent authorities to oversee predictive policing and uphold the presumption of innocence.

Tudorache highlighted the balance struck between equipping law enforcement with necessary tools and banning AI technologies that could pre-emptively identify potential criminals. (Minority Report anyone?)

The highest-risk AI systems will now be regulated based on the computational power required for training, with GPT-4 being a notable example and the only technology fulfilling this criterion.

Some Key Aspects 
 
The new EU AI Act delineates distinct regulations for AI systems based on their perceived level of risk, effectively categorising them into “Unacceptable Risk”, “High Risk”, “Generative AI” and “Limited Risk” groups, each with specific obligations for providers and users.

Unacceptable Risk 

AI systems deemed a threat to people’s safety or rights will be prohibited. This includes: 

  • AI-driven cognitive behavioural manipulation, particularly targeting vulnerable groups, like voice-activated toys promoting hazardous behaviours in children. 
  • Social scoring systems that classify individuals based on behaviour,
    socio-economic status, or personal characteristics. 
  • Real-time and remote biometric identification systems, like facial recognition. 
  • Exceptions exist, such as “post” remote biometric identification for serious crime investigations, subject to court approval. 

High Risk 

AI systems impacting safety or fundamental rights fall under the high-risk category, subdivided into: 

  • AI in EU-regulated product safety categories, like toys, aviation, cars, medical devices, and lifts. 
  • Specific areas requiring EU database registration, including biometric identification, critical infrastructure management, education, employment, public services access, law enforcement, migration control, and legal assistance. 
  • High-risk AI systems must undergo pre-market and lifecycle assessments. 

Generative AI 

AI like ChatGPT must adhere to transparency protocols: 

  • Disclosing AI-generated content. 
  • Preventing generation of illegal content. 
  • Publishing summaries of copyrighted data used in training. 

Limited Risk

These AI systems require basic transparency for informed user decisions, particularly for AI that generates or manipulates visual and audio content, like deepfakes. Users should be aware when interacting with AI.
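For those who think in code, the tiers can be sketched as a simple mapping from risk level to headline obligation. This is only an illustrative paraphrase of the categories summarised above, not the Act’s own drafting or structure:

```python
# Illustrative paraphrase of the AI Act's risk tiers as summarised
# above; not the legislation's own wording or structure.
RISK_TIERS = {
    "unacceptable": {
        "examples": ["cognitive behavioural manipulation",
                     "social scoring",
                     "real-time remote biometric identification"],
        "obligation": "prohibited, subject to narrow law-enforcement exceptions",
    },
    "high": {
        "examples": ["cars", "medical devices", "critical infrastructure",
                     "education and employment systems"],
        "obligation": "pre-market and lifecycle assessment; EU database registration",
    },
    "generative": {
        "examples": ["ChatGPT-style models"],
        "obligation": ("disclose AI-generated content; prevent illegal content; "
                       "publish summaries of copyrighted training data"),
    },
    "limited": {
        "examples": ["deepfakes and other manipulated media"],
        "obligation": "basic transparency so users know they are interacting with AI",
    },
}

def obligation_for(tier: str) -> str:
    """Look up the headline obligation for a given risk tier."""
    return RISK_TIERS[tier]["obligation"]

print(obligation_for("high"))
```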

The legislation sets a precedent for future digital regulation. As we saw with the GDPR, governments outside the EU used the legislation as a foundation for their own laws, and many corporations adopted the same European privacy standards for their businesses worldwide for efficiency. This could easily happen with the EU AI Act, with governments using it as a ‘starter for ten’. It will be interesting to see how the legislation will cater for the algorithmic biases found within current iterations of the technology, from facial recognition to other automated decision-making algorithms.

The UK did publish its AI White Paper in March of this year and says it follows a “pro-innovation” approach. However, it seems to have decided to go ‘face first’ before any legislation is passed, with facial recognition software recently used at a Beyoncé gig, King Charles’ coronation and the Formula One Grand Prix. For many, it is the impact of the decisions the software makes, through the power of AI, which is worrying. The ICO has useful guides on the use of AI, which can be found here.

As artificial intelligence technology rapidly advances, exemplified by Google’s impressive Gemini demo, the urgency of comprehensive regulation is becoming increasingly apparent. The EU has signalled its intent to avoid the past oversights seen in the unchecked expansion of the tech giants and to be at the forefront of regulating this fascinating technology to ensure its ethical and responsible utilisation.

Join our Artificial Intelligence and Machine Learning: How to Implement Good Information Governance workshop for hands-on insights, key resource awareness, and best practices, ensuring you’re ready to navigate AI complexities fairly and lawfully.

Clearview AI Wins Appeal Against GDPR Fine 

Last week a Tribunal overturned a GDPR Enforcement Notice and a Monetary Penalty Notice issued to Clearview AI, an American facial recognition company. In Clearview AI Inc v The Information Commissioner [2023] UKFTT 00819 (GRC), the First-tier Tribunal (Information Rights) ruled that the Information Commissioner had no jurisdiction to issue either notice, on the basis that the GDPR/UK GDPR did not apply to the personal data processing in issue.

Background 

Clearview is a US-based company which describes itself as the “World’s Largest Facial Network”. Its online database contains 20 billion images of people’s faces, together with data scraped from publicly available information on the internet and social media platforms all over the world. Customers can upload an image of a person to its app; the app then identifies the person by checking the image against all the images in the Clearview database.
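Clearview’s actual pipeline is proprietary, but the general technique behind this kind of search is well known: each face is reduced to a numeric embedding (a ‘biometric vector’), and a query image is matched by finding the most similar stored vectors. A minimal sketch, using random vectors as stand-ins for the embeddings a face recognition model would produce:

```python
import numpy as np

# Minimal sketch of embedding-based face search. The random vectors
# below are stand-ins for embeddings a face recognition model would
# produce; Clearview's actual system is proprietary.
rng = np.random.default_rng(0)

database = rng.normal(size=(10_000, 128))      # stored face vectors
database /= np.linalg.norm(database, axis=1, keepdims=True)

query = rng.normal(size=128)                   # vector for the uploaded image
query /= np.linalg.norm(query)

# Cosine similarity against every stored vector; the highest-scoring
# entries are the most likely identity matches.
scores = database @ query
top5 = np.argsort(scores)[::-1][:5]
for idx in top5:
    print(f"record {idx}: similarity {scores[idx]:.3f}")
```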

In May 2022 the ICO issued a Monetary Penalty Notice of £7,552,800 to Clearview for breaches of the GDPR, including failing to use the information of people in the UK in a way that is fair and transparent. Although Clearview is a US company, the ICO ruled that the UK GDPR applied because of Article 3(2)(b) (territorial scope). It concluded that Clearview’s processing activities “are related to… the monitoring of [UK residents’] behaviour as far as their behaviour takes place within the United Kingdom.”

The ICO also issued an Enforcement Notice ordering Clearview to stop obtaining and using the personal data of UK residents that is publicly available on the internet, and to delete the data of UK residents from its systems (see our earlier blog for more detail on these notices).

The Judgement  

The First-tier Tribunal (Information Rights) has now overturned the ICO’s enforcement and penalty notices against Clearview. It concluded that although Clearview did carry out data processing related to monitoring the behaviour of people in the UK (Article 3(2)(b) of the UK GDPR), the ICO did not have jurisdiction to take enforcement action or issue a fine. Both the GDPR and UK GDPR provide that acts of foreign governments fall outside their scope; it is not for one government to seek to bind or control the activities of another sovereign state. However, the Tribunal noted that the ICO could have taken action under the Law Enforcement Directive (Part 3 of the DPA 2018 in the UK), which specifically regulates the processing of personal data in relation to law enforcement.

Learning Points 

While the Tribunal’s judgement in this case reflects the specific circumstances, some of its findings are of wider application: 

  • The term “behaviour” (in Article 3(2)(b)) means something about what a person does (e.g., location, relationship status, occupation, use of social media, habits) rather than just identifying or describing them (e.g., name, date of birth, height, hair colour).

  • The term “monitoring” comes up not only in Article 3(2)(b) but also in Article 35(3)(c) (when a DPIA is required). The Tribunal ruled that monitoring includes tracking a person at a fixed point in time as well as on a continuous or repeated basis.

  • In this case, Clearview was not monitoring UK residents directly as its processing was limited to creating and maintaining a database of facial images and biometric vectors. However, Clearview’s clients were using its services for monitoring purposes and therefore Clearview’s processing “related to” monitoring under Article 3(2)(b). 

  • A provider of services like Clearview may be considered a joint controller with its clients where both determine the purposes and means of processing. In this case, Clearview was a joint controller with its clients because it imposed restrictions on how clients could use the services (i.e., only for law enforcement and national security purposes) and determined the means of processing when matching query images against its facial recognition database.

Data Scraping 

The ruling is not a green light for data scraping, where publicly available data, usually from the internet, is collected and processed by companies, often without the data subject’s knowledge. The Tribunal ruled that this was an activity to which the UK GDPR could apply. In its press release reacting to the ruling, the ICO said:

“The ICO will take stock of today’s judgment and carefully consider next steps. It is important to note that this judgment does not remove the ICO’s ability to act against companies based internationally who process data of people in the UK, particularly businesses scraping data of people in the UK, and instead covers a specific exemption around foreign law enforcement.”

This is a significant ruling from the First-tier Tribunal which has implications for the extraterritorial effect of the UK GDPR and the ICO’s powers to enforce it. It merits an appeal by the ICO to the Upper Tribunal. Whether this happens depends very much on the ICO’s appetite for a legal battle with a tech company with deep pockets.

This and other GDPR developments will be discussed by Robert Bateman in our forthcoming GDPR Update workshop.

Exploring the Legal and Regulatory Challenges of AI and ChatGPT

In our recent blog post, entitled “GDPR and AI: The Rise of the Machines”, we said that 2023 was going to be the year of Artificial Intelligence (AI). Events so far seem to suggest that advances in the technology, as well as legal and regulatory challenges, are on the horizon.

Generative AI, particularly large language models like ChatGPT, has captured the world’s imagination. ChatGPT registered 100 million monthly users in January alone, having only been launched in November, setting the record for the fastest-growing platform; TikTok, the previous record holder, took nine months to hit the same usage level. In March 2023, it recorded 1.6 billion user visits; mind-boggling numbers that show the scale of this technological advancement. There have already been some amazing medical uses of generative AI, including the ability to match drugs to patients, numerous major cancer research breakthroughs, and robots performing major surgery.
 
However, it is important to take a step back and reflect on the risks of a technology that has made its own CEO “a bit scared” and which has caused the “Godfather of AI” to quit his job at Google. The regulatory and legal backlash against AI has already started. Recently, Italy became the first Western country to block ChatGPT, with the Italian DPA highlighting privacy concerns relating to the model. Other European regulators are reported to be looking into the issue too. In April the European Data Protection Board launched a dedicated task force on ChatGPT, saying the goal is to “foster cooperation and to exchange information on possible enforcement actions conducted by data protection authorities.” Elsewhere, Canada has opened an investigation into OpenAI following a complaint alleging that personal information was collected, used and disclosed without consent.

The UK Information Commissioner’s Office (ICO) has expressed its own concerns. Stephen Almond, Director of Technology and Innovation at the ICO, said in a blog post:

“Data protection law still applies when the personal information that you’re processing comes from publicly accessible sources…We will act where organisations are not following the law and considering the impact on individuals.”  

Wider Concerns 

ChatGPT suffered its first major personal data breach in March. According to a blog post by OpenAI, the breach exposed payment-related and other personal information of 1.2% of ChatGPT Plus subscribers. But the concerns around AI and ChatGPT don’t stop at privacy law.

An Australian mayor is considering a defamation suit against ChatGPT after it told users that he was jailed for bribery; in reality he was the whistleblower in the bribery case. Similarly, it falsely accused a US law professor of sexual assault. The Guardian reported recently that ChatGPT is making up fake Guardian articles. There are concerns about copyright law too; a number of songs have used AI to clone the voices of artists including Drake and The Weeknd, and have since been removed from streaming services after criticism from music publishers. There have also been full AI-generated Joe Rogan episodes featuring the OpenAI CEO as well as Donald Trump. These podcasts are definitely worth a sample; it is frankly scary how realistic they are.

AI also poses a significant threat to jobs. A report by investment bank Goldman Sachs says it could replace the equivalent of 300 million full-time jobs. Our director, Ibrahim Hasan, recently gave his thoughts on this topic to BBC News Arabic. (You can watch him here. If you just want to hear Ibrahim “speak in Arabic”, skip the video to 2 min 48 secs!)
 

EU Regulation 

With increasing concern about the future risks AI could pose to people’s privacy, their human rights or their safety, many experts and policy makers believe AI needs to be regulated. The European Union’s proposed legislation, the Artificial Intelligence (AI) Act, focuses primarily on strengthening rules around data quality, transparency, human oversight and accountability. It also aims to address ethical questions and implementation challenges in various sectors ranging from healthcare and education to finance and energy. 

The Act also envisages grading AI products according to how potentially harmful they might be and staggering regulation accordingly. So, for example, an email spam filter would be more lightly regulated than something designed to diagnose a medical condition, and some AI uses, such as social scoring by governments, would be prohibited altogether.

UK White Paper 

On 29th March 2023, the UK government published a white paper entitled “A pro-innovation approach to AI regulation.” The paper sets out a new “flexible” approach to regulating AI which is intended to build public trust and make it easier for businesses to grow and create jobs. Unlike the EU, the UK will introduce no new legislation to regulate AI. In its press release, the UK government says:

“The government will avoid heavy-handed legislation which could stifle innovation and take an adaptable approach to regulating AI. Instead of giving responsibility for AI governance to a new single regulator, the government will empower existing regulators – such as the Health and Safety Executive, Equality and Human Rights Commission and Competition and Markets Authority – to come up with tailored, context-specific approaches that suit the way AI is actually being used in their sectors.” 

The white paper outlines the following five principles that regulators are to consider in order to facilitate the safe and innovative use of AI in their industries:

  • Safety, Security and Robustness: applications of AI should function in a secure, safe and robust way where risks are carefully managed; 

  • Transparency and Explainability: organisations developing and deploying AI should be able to communicate when and how it is used and explain a system’s decision-making process in an appropriate level of detail that matches the risks posed by the use of the AI;

  • Fairness: AI should be used in a way which complies with the UK’s existing laws (e.g., the UK General Data Protection Regulation), and must not discriminate against individuals or create unfair commercial outcomes; 

  • Accountability and Governance: measures are needed to ensure there is appropriate oversight of the way AI is being used and clear accountability for the outcomes; and 

  • Contestability and Redress: people need to have clear routes to dispute harmful outcomes or decisions generated by AI.

Over the next 12 months, regulators will be tasked with issuing practical guidance to organisations, as well as other tools and resources such as risk assessment templates, that set out how the above five principles should be implemented in their sectors. The government has said this could be accompanied by legislation, when parliamentary time allows, to ensure consistency among the regulators. 

Michelle Donelan MP, Secretary of State for Science, Innovation and Technology, considers that this light-touch, principles-based approach “will enable . . . [the UK] to adapt as needed while providing industry with the clarity needed to innovate.” However, this approach does make the UK an outlier in comparison to global trends. Many other countries are developing or passing special laws to address alleged AI dangers, such as the algorithmic rules imposed in China or the United States. Consumer groups and privacy advocates will also be concerned about the risks to society in the absence of detailed and unified statutory AI regulation.

Want to know more about this rapidly developing area? Our forthcoming AI and Machine Learning workshop will explore the common challenges that this subject presents, focussing on GDPR as well as other information governance and records management issues.

GDPR News Roundup

So much has happened in the world of data protection recently. Where to start?

International Transfers

In April, the European Data Protection Board’s (EDPB) opinions (GDPR and Law Enforcement Directive (LED)) on UK adequacy were adopted. The EDPB has looked at the draft EU adequacy decisions. It acknowledged that there is alignment between the EU and UK laws but also expressed some concerns. It has, though, issued a non-binding opinion recommending their acceptance. If accepted, the two adequacy decisions will run for an initial period of four years. More here.

Last month saw the ICO’s annual data protection conference go online due to the pandemic. Whilst not the same as a face-to-face conference, it was still a good event with lots of nuggets for data protection professionals, including the news that the ICO is working on bespoke UK standard contractual clauses (SCCs) for international data transfers. Deputy Commissioner Steve Wood said:

“I think we recognise that standard contractual clauses are one of the most heavily used transfer tools in the UK GDPR. We’ve always sought to help organisations use them effectively with our guidance. The ICO is working on bespoke UK standard clauses for international transfers, and we intend to go out for consultation on those in the summer. We’re also considering the value to the UK for us to recognise transfer tools from other countries, so standard data transfer agreements, so that would include the EU’s standard contractual clauses as well.”

Lloyd v Google 

The much-anticipated Supreme Court hearing in the case of Lloyd v Google LLC took place at the end of April. The case concerns the legality of Google’s collection and use of browser-generated data from more than 4 million iPhone users during 2011-12 without their consent. Following the two-day hearing, the Supreme Court will now decide, amongst other things, whether, under the DPA 1998, damages are recoverable for ‘loss of control’ of data without needing to identify any specific financial loss, and whether a claimant can bring a representative action on behalf of a group on the basis that the group have the ‘same interest’ in the claim and are identifiable. The decision is likely to have wide-ranging implications for representative actions, what damages can be awarded for, and the level of damages in data protection cases. Watch this space!

Ticketmaster Appeal

In November 2020, the ICO fined Ticketmaster £1.25m for a breach of Articles 5(1)(f) and 32 GDPR (security). Ticketmaster appealed the penalty notice on the basis that there had been no breach of the GDPR; alternatively, that it was inappropriate to impose a penalty, and that in any event the sum was excessive. The appeal has now been stayed by the First-tier Tribunal until 28 days after the pending judgment in a damages claim brought against Ticketmaster by 795 customers: Collins & Others v Ticketmaster UK Ltd (BL-2019-LIV-000007).

Age Appropriate Design Code

This code came into force on 2 September 2020, with a 12 month transition period. The Code sets out 15 standards organisations must meet to ensure that children’s data is protected online. It applies to all the major online services used by children in the UK and includes measures such as providing default settings which ensure that children have the best possible access to online services whilst minimising data collection and use.

With less than four months to go (until 2 September 2021), the ICO is urging organisations and businesses to make the necessary changes to their online services and products. We are planning a webinar on the code. Get in touch if interested.

AI and Automated Decision Making

Article 22 of the GDPR provides protection for individuals against purely automated decisions with a legal or significant impact. In February, the Court of Amsterdam ordered Uber, the ride-hailing app, to reinstate six drivers who it was claimed were unfairly dismissed “by algorithmic means.” The court also ordered Uber to pay compensation to the sacked drivers.

In April, the EU Commission published a proposal for a harmonised framework on AI. The framework seeks to impose obligations on both providers and users of AI. Like the GDPR, the proposal includes fine levels and an extraterritorial effect. (Readers may be interested in our new webinar on AI and Machine Learning.)

Publicly Available Information

Just because information is publicly available does not mean companies have a free pass to use it without consequences; data protection laws still have to be complied with. In November 2020, the ICO ordered the credit reference agency Experian Limited to make fundamental changes to how it handles personal data within its direct marketing services. The ICO found that significant ‘invisible’ processing took place, likely affecting millions of adults in the UK. It is ‘invisible’ because the individual is not aware that the organisation is collecting and using their personal data. Experian has lodged an appeal against the Enforcement Notice.

Interestingly, the Spanish regulator recently fined another credit reference agency, Equifax, €1m for several failures under the GDPR. Individuals complained about Equifax’s use of their personal data, which was publicly available. Equifax had also failed to provide the individuals with a privacy notice.

Data Protection by Design

The Irish data protection regulator issued its largest domestic fine recently. The Irish Credit Bureau (ICB) was fined €90,000 after a change to the ICB’s computer code in 2018 resulted in 15,000 accounts having incorrect details recorded about their loans before the mistake was noticed. Amongst other things, the decision found that the ICB infringed Article 25(1) of the GDPR by failing to implement appropriate technical and organisational measures designed to implement the principle of accuracy in an effective manner and to integrate the necessary safeguards into the processing in order to meet the requirements of the GDPR and protect the rights of data subjects (aka DP by design and by default).

Data Sharing 

The ICO’s Data Sharing Code of Practice provides organisations with a practical guide on how to share personal data in line with data protection law. Building on the code, the ICO recently outlined its plans to update its guidance on anonymisation and pseudonymisation, and to explore the role that privacy enhancing technologies might play in enabling safe and lawful data sharing.

UK GDPR Handbook

The UK GDPR Handbook is proving very popular among data protection professionals.

It sets out the full text of the UK GDPR in a clear and easy-to-read format. It cross-references the EU GDPR recitals, which now also form part of the UK GDPR, allowing for a more logical reading. The handbook uses a unique colour-coding system that allows users to easily identify amendments, insertions and deletions from the EU GDPR. Relevant provisions of the amended DPA 2018 have been included where they supplement the UK GDPR. To assist users in interpreting the legislation, guidance from the Information Commissioner’s Office, Article 29 Working Party and the European Data Protection Board is also signposted. Read what others have said:

“A very useful, timely, and professional handbook. Highly recommended.”

“What I’m liking so far is that this is “just” the text (beautifully collated together and cross-referenced Articles / Recital etc.), rather than a pundits interpretation of it (useful as those interpretations are on many occasions in other books).”

“Great resource, love the tabs. Logical and easy to follow.”

Order your copy here.

These and other GDPR developments will also be discussed in detail in our online GDPR update workshop next week.
