GDPR and AI: The Rise of the Machines

2023 is going to be the year of AI. In January, Microsoft announced a multibillion-dollar investment in OpenAI, the company behind the image generation tool DALL-E and the chatbot ChatGPT.

The term “artificial intelligence” or “AI” refers to the use of machines and computer systems that can perform tasks normally requiring human intelligence. The past few years have seen rapid progress in this field. Advancements in deep learning algorithms, cloud computing, and data storage have made it possible for machines to process and analyse large amounts of data quickly and accurately. AI’s ability to interpret human language means that virtual assistants, such as Siri and Alexa, can now understand and respond to complex spoken commands at lightning speed.

The public sector is increasingly leveraging the power of AI to undertake administrative tasks and create more tailored services that meet users’ needs. Local government is using AI to simplify staff scheduling, predict demand for services and even estimate the risk of an individual committing fraud. Healthcare providers are now able to provide automated diagnoses based on medical imaging data from patients, thereby reducing wait times.

The Risks

With any major technological advance there are potential risks and downsides. In January 2023, ElevenLabs, an AI speech software company, said it had found an “increasing number of voice cloning misuse cases”. According to reports, hackers used the ElevenLabs software to create deepfake voices of famous people (including Emma Watson and Joe Rogan) making racist, transphobic and violent comments.

There are concerns about the impact of AI on employment and the future of work. In April 2021, the Court of Amsterdam ordered Uber to reinstate taxi drivers in the UK and Portugal who had been dismissed by “robo-firing”: the use of an algorithm to make a dismissal decision with no human involvement. The Court concluded that Uber had made the decisions “based solely on automated processing” within the meaning of Article 22(1) of the GDPR. It was ordered to reinstate the drivers’ accounts and pay them compensation.
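To illustrate the distinction Article 22 draws, here is a minimal sketch in Python. It is entirely hypothetical: the scoring model, threshold and helper functions are invented for the example and do not describe Uber’s actual system. The point is the structural difference between a dismissal decided by an algorithm alone and one where a human with real authority makes the final call.

```python
from dataclasses import dataclass

@dataclass
class FraudAssessment:
    driver_id: str
    fraud_score: float  # output of a hypothetical scoring model

def solely_automated_dismissal(a: FraudAssessment) -> bool:
    # The account is deactivated on the model score alone: the kind of
    # decision "based solely on automated processing" that Article 22(1) restricts.
    return a.fraud_score > 0.9

def human_review(a: FraudAssessment) -> bool:
    # Stand-in for a case worker who examines the evidence and has
    # genuine authority to overturn the algorithm's suggestion.
    print(f"Case {a.driver_id} referred to a human reviewer")
    return False  # the reviewer decides; here they decline to dismiss

def reviewed_dismissal(a: FraudAssessment) -> bool:
    # The algorithm only flags the case; a person makes the final
    # decision, so it is no longer solely automated.
    return human_review(a) if a.fraud_score > 0.9 else False

case = FraudAssessment(driver_id="D-1001", fraud_score=0.95)
print("Solely automated:", solely_automated_dismissal(case))  # True
print("With human review:", reviewed_dismissal(case))         # False
```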

As well as raising ethical questions about the use of AI in decision-making processes that affect people’s lives, AI-driven algorithms may produce unintended biases or inaccurate decisions if not properly monitored and regulated. In 2021 the privacy pressure group noyb filed a GDPR complaint against Amazon, claiming that Amazon’s algorithm discriminates against some customers by denying them the opportunity to pay for items by monthly invoice.

There is also a risk that AI is deployed without consideration of the privacy implications. In May 2022, the UK Information Commissioner’s Office fined Clearview AI Inc more than £7.5 million for breaches of the UK GDPR. Clearview’s online database contains 20 billion images of people’s faces, together with data scraped from publicly available information on the internet and social media platforms all over the world. The company, which describes itself as the “World’s Largest Facial Network”, allows customers, including the police, to upload an image of a person to its app, which then uses AI to check it against all the images in the Clearview database. The app then provides a list of matching images with links to the websites they came from.
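At a technical level, a facial recognition search of this kind typically converts each face image into a numerical “embedding” and compares embeddings for similarity. The sketch below is a minimal illustration of that general technique using Python and NumPy; the vectors, threshold and database entries are invented for the example and do not describe Clearview’s actual system.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Similarity of two face embeddings: values near 1.0 suggest the same face.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical database: face embeddings keyed by the page they were scraped from.
# In a real system the vectors would come from a trained face recognition model.
database = {
    "https://example.com/profile/1": np.array([0.9, 0.1, 0.3]),
    "https://example.com/profile/2": np.array([0.2, 0.8, 0.5]),
}

def search(probe: np.ndarray, threshold: float = 0.95) -> list[str]:
    # Return the source URLs whose stored face is similar enough to the probe image.
    return [url for url, emb in database.items()
            if cosine_similarity(probe, emb) >= threshold]

print(search(np.array([0.88, 0.12, 0.31])))  # matches .../profile/1 only
```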

Practical Steps

Recently, the ICO conducted an inquiry after concerns were raised about the use of algorithms in decision-making in the welfare system by local authorities and the DWP. In this instance, the ICO did not find any evidence to suggest that benefit claimants were subjected to any harms or financial detriment as a result of the use of algorithms. It did, however, emphasise a number of practical steps that local authorities and central government can take when using algorithms or AI:

1. Take a data protection by design and default approach

Data processed using algorithms, data analytics or similar systems should be reactively and proactively reviewed to ensure it is accurate and up to date. If a local authority decides to engage a third party to process personal data using algorithms, data analytics or AI, it is responsible for assessing that the third party is competent to process personal data in line with the UK GDPR.

2. Be transparent with people about how you are using their data

Local authorities should regularly review their privacy policies, to ensure they comply with Articles 13 and 14 of the UK GDPR, and identify areas for improvement. They should also bring any new uses of personal data to individuals’ attention.

3. Identify the potential risks to people’s privacy

Local authorities should consider conducting a Data Protection Impact Assessment (DPIA) to help identify and minimise the data protection risks of using algorithms, AI or data analytics. A DPIA should consider compliance risks, but also broader risks to the rights and freedoms of people, including the potential for any significant social or economic disadvantage. 

In April 2021, the European Commission presented its proposal for a Regulation to harmonise rules for AI, also known as the “EU AI Act”. Whilst there is still a long way to go before this proposal becomes legislation, it could create an impetus for the UK to further regulate AI.

Use of AI has enormous benefits. It does, though, have the potential to adversely impact people’s lives and deny their fundamental rights. As such, it is critical that data protection and information governance officers understand the implications of AI technology and how to use it in a fair and lawful manner.

Want to know more about this rapidly developing area? Our forthcoming AI and Machine Learning workshop will explore the common challenges that this subject presents, focusing on GDPR as well as other information governance and records management issues.

Are you an experienced GDPR Practitioner wanting to take your skills to the next level? See our Advanced Certificate in GDPR Practice.

Mega GDPR Fines for Meta

On 4th January 2023, Ireland’s Data Protection Commission (DPC) announced the conclusion of two inquiries into the data processing operations of Meta Platforms Ireland Limited (“Meta Ireland”) in connection with the delivery of its Facebook and Instagram services. Not only does this decision significantly limit Meta’s ability to gather information from its users to tailor and sell advertising, it also provides useful insight into EU regulators’ view of how to comply with Principle 1 of GDPR, i.e. the need to ensure personal data is “processed lawfully, fairly and in a transparent manner in relation to the data subject” (Article 5).

In decisions dated 31st December 2022, the DPC fined Meta Ireland €210 million and €180 million, relating to its Facebook and Instagram services respectively. The fines were imposed in connection with the company’s practice of monetising users’ personal data by running personalised adverts on their social media accounts. Information about a social media user’s digital footprint, such as what videos prompt them to stop scrolling or what types of links they click on, is used by marketers to get personalised adverts in front of people who are the most likely to buy their products. This practice helped Meta generate $118 billion in revenue in 2021.
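To make the mechanics concrete, the toy sketch below shows the general shape of behavioural ad targeting: engagement signals from a user’s digital footprint are weighted to score how likely they are to respond to a given advert. The signals, weights and scoring rule are our own invented illustration, not a description of Meta’s actual ranking systems.

```python
# Hypothetical engagement signals gathered from a user's digital footprint.
user_signals = {
    "watched_cookery_videos": 12,    # videos the user stopped scrolling to watch
    "clicked_kitchenware_links": 3,  # outbound links the user followed
}

# Invented per-signal weights for one advert (e.g. a kitchenware brand).
ad_weights = {
    "watched_cookery_videos": 0.5,
    "clicked_kitchenware_links": 2.0,
}

def relevance_score(signals: dict[str, int], weights: dict[str, float]) -> float:
    # Higher score = user judged more likely to engage with this advert.
    return sum(weights.get(name, 0.0) * count for name, count in signals.items())

print(relevance_score(user_signals, ad_weights))  # 12*0.5 + 3*2.0 = 12.0
```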

The DPC’s decision was the result of two complaints from Facebook and Instagram users, supported by the privacy campaign group noyb, both of which raised the same basic issue: how Meta obtains legal permission from users to collect and use their personal data for personalised advertising. Article 6(1) of GDPR states that:

“Processing shall be lawful only if and to the extent that at least one of the following applies:

  (a) the data subject has given consent to the processing of his or her personal data for one or more specific purposes;
  (b) processing is necessary for the performance of a contract to which the data subject is party or in order to take steps at the request of the data subject prior to entering into a contract;”

In advance of the GDPR coming into force on 25th May 2018, Meta Ireland changed the Terms of Service for its Facebook and Instagram services. It also flagged the fact that it was changing the legal basis upon which it relied to process users’ personal data under Article 6 in the context of the delivery of its Facebook and Instagram services (including behavioural advertising). Having previously relied on the consent of users to the processing of their personal data, the company now sought to rely on the “contract” legal basis for most (but not all) of its processing operations. Existing and new users were required to click “I accept” to indicate their acceptance of the updated Terms of Service in order to continue using Facebook and Instagram. The services would not be accessible if users declined to do so.

Meta Ireland considered that, on accepting the updated Terms of Service, a contract was concluded between itself and the user. Consequently, the processing of the user’s personal data in connection with the delivery of its Facebook and Instagram services was necessary for the performance of this “contract”, which included the provision of personalised services and behavioural advertising. This, it claimed, provided a lawful basis by reference to Article 6(1)(b) of the GDPR.

The complainants contended that Meta Ireland was in fact still looking to rely on consent to provide a lawful basis for its processing of users’ data. They argued that, by making the accessibility of its services conditional on users accepting the updated Terms of Service, Meta Ireland was in fact “forcing” them to consent to the processing of their personal data for behavioural advertising and other personalised services. This was not real consent as defined in Article 4 of GDPR:

“any freely given, specific, informed and unambiguous indication of the data subject’s wishes by which he or she, by a statement or by a clear affirmative action, signifies agreement to the processing of personal data relating to him or her;” (our emphasis)

Following comprehensive investigations, consultation with other EU DP regulators (a process required by GDPR in such cases) and final rulings by the European Data Protection Board (EDPB), the DPC made a number of findings, notably:

1. Meta Ireland did not provide clear information about its processing of users’ personal data, resulting in users having insufficient clarity as to what processing operations were being carried out on their personal data, for what purpose(s), and by reference to which of the six legal bases identified in Article 6. The DPC said this violated Articles 12 (transparency) and 13(1)(c) (information to be provided to the data subject) of GDPR. It also considered it to be a violation of Article 5(1)(a), which states that personal data must be processed lawfully, fairly and transparently.

2. Meta Ireland cannot rely on the contract legal basis for justifying its processing. The delivery of personalised advertising (as part of the broader suite of personalised services offered as part of the Facebook and Instagram services) could not be said to be necessary to perform the core elements of what was said to be a much more limited form of contract. The DPC adopted this position following a ruling by the EDPB, which agreed with other EU regulators’ representations to the DPC.

In addition to the fines, Meta Ireland has been directed to bring its data processing operations into compliance with GDPR within three months. It has said it will appeal, which is not surprising considering the decision has the potential to require it to make costly changes to its personalised advertising-based business in the European Union, one of its largest markets.

It is important to note that this decision still allows Meta to use non-personal data (such as the content of a story) to personalise adverts, or to ask users to give their consent to targeted adverts. However, under GDPR users must be able to withdraw their consent at any time. If a large number do so, it would impact one of the most valuable parts of Meta’s business.

The forthcoming appeals by Meta will provide much-needed judicial guidance on the GDPR, in particular Principle 1. Given the social media giant’s deep pockets, expect this one to run and run.

This and other GDPR developments will be discussed in detail on our forthcoming GDPR Update workshop. 

Are you an experienced GDPR Practitioner wanting to take your skills to the next level? See our Advanced Certificate in GDPR Practice.

US Data Transfers and Privacy Shield 2.0 

On 14th December 2022, the European Commission published a draft ‘adequacy decision’, under Article 45 of the GDPR, endorsing a new legal framework for transferring personal data from the EU to the USA. Subject to approval by other EU institutions, the decision paves the way for “Privacy Shield 2.0” to be in effect by Spring 2023.

The Background

In July 2020, the European Court of Justice (ECJ), in “Schrems II”, ruled that organisations transferring personal data to the USA could no longer rely on the Privacy Shield Framework as a legal transfer tool, as it failed to protect the rights of EU data subjects when their data was accessed by US public authorities. First, the ECJ found that US surveillance programmes are not limited to what is strictly necessary and proportionate as required by EU law and hence do not meet the requirements of Article 52 of the EU Charter of Fundamental Rights. Secondly, with regard to US surveillance, EU data subjects lack actionable judicial redress and, therefore, do not have a right to an effective remedy in the USA, as required by Article 47 of the EU Charter.

The ECJ stated that organisations transferring personal data to the USA can still use the Article 49 GDPR derogations or standard contractual clauses (SCCs). If using the latter, whether for transfers to the USA or other countries, the ECJ placed the onus on the data exporter to make a complex assessment of the recipient country’s data protection legislation (a Transfer Impact Assessment or TIA), and to put in place “additional measures” to supplement those included in the SCCs.

Despite the Schrems II judgment, many organisations have continued to transfer personal data to the USA, hoping that regulators would wait for a new transatlantic data deal before enforcing the judgment. Whilst the UK Information Commissioner’s Office (ICO) seems to have adopted a “wait and see” approach, other regulators have now started to take action. In February 2022, the French data protection regulator, CNIL, ruled that the use of Google Analytics was a breach of GDPR due to the data being transferred to the USA without appropriate safeguards. This followed a similar decision by the Austrian Data Protection Authority in January.

The Road to Adequacy

Since the Schrems II ruling, replacing the Privacy Shield has been a priority for EU and US officials. In March 2022, it was announced that a new Trans-Atlantic Data Privacy Framework had been agreed in principle. In October, the US President signed an executive order giving effect to the US commitments in the framework. These include commitments to limit US authorities’ access to data exported from the EU to what is necessary and proportionate under surveillance legislation, to provide data subjects with rights of redress relating to how their data is handled under the framework regardless of their nationality, and to establish a Data Protection Review Court for determining the outcome of complaints.

Schrems III?

The privacy campaign group noyb, of which Max Schrems is Honorary Chairman, is not impressed by the draft adequacy decision. It said in a statement:

“…the changes in US law seem rather minimal. Certain amendments, such as the introduction of the proportionality principle or the establishment of a Court, sound promising – but on closer examination, it becomes obvious that the Executive Order oversells and underperforms when it comes to the protection of non-US persons. It seems obvious that any EU “adequacy decision” that is based on Executive Order 14086 will likely not satisfy the CJEU. This would mean that the third deal between the US Government and the European Commission may fail.”

Max Schrems said: 

“… As the draft decision is based on the known Executive Order, I can’t see how this would survive a challenge before the Court of Justice. It seems that the European Commission just issues similar decisions over and over again – in flagrant breach of our fundamental rights.”

The draft adequacy decision will now be reviewed by the European Data Protection Board (EDPB) and the European Member States. From the above statements it seems that if Privacy Shield 2.0 is finalised, a legal challenge against it is inevitable.

UK to US Data Transfers 

Personal data transfers are also a live issue for most UK data controllers, including public authorities. Whether they are using an online meeting app, a cloud storage solution or a simple text messaging service, these services often involve a transfer of personal data to the US. At present, use of such services usually requires a complicated transfer risk assessment (TRA) and the execution of standard contractual clauses. A new UK international data transfer agreement (IDTA) came into force on 21st March 2022, but it still requires a TRA as well as supplementary measures where privacy risks are identified.

Good news may be round the corner for UK data exporters. The UK Government is also in the process of making an adequacy decision for the US. We suspect it will strike a similar deal once the EU/US one is finalised.

This and other GDPR developments will be discussed in detail on our forthcoming GDPR Update workshop. 

Our next online GDPR Practitioner Certificate course, starting on 10th January, is fully booked. We have places on the course starting on 19th January. 
