Exploring the Legal and Regulatory Challenges of AI and ChatGPT 

In our recent blog post, entitled “GDPR and AI: The Rise of the Machines”, we said that 2023 is going to be the year of Artificial Intelligence (AI). Events so far suggest that both advances in the technology and legal and regulatory challenges are on the horizon.   

Generative AI, particularly large language models like ChatGPT, has captured the world’s imagination. ChatGPT reached 100 million monthly users in January, despite having only launched in November 2022, setting the record for the fastest-growing consumer platform; the previous record holder, TikTok, took nine months to hit the same usage level. In March 2023 it recorded 1.6 billion user visits, a mind-boggling figure that shows the scale the technology has already reached. There have already been some remarkable medical uses of generative AI, including matching drugs to patients, numerous reported cancer research breakthroughs, and robots performing major surgery. 
 
However, it is important to take a step back and reflect on the risks of a technology that has made its own CEO “a bit scared” and which has caused the “Godfather of AI” to quit his job at Google. The regulatory and legal backlash against AI has already started. Recently, Italy became the first Western country to block ChatGPT, with the Italian DPA highlighting privacy concerns relating to the model. Other European regulators are reported to be looking into the issue too. In April the European Data Protection Board launched a dedicated task force on ChatGPT, saying the goal is to “foster cooperation and to exchange information on possible enforcement actions conducted by data protection authorities.” Elsewhere, Canada has opened an investigation into OpenAI following a complaint alleging that personal information was collected, used and disclosed without consent. 

The UK Information Commissioner’s Office (ICO) has expressed its own concerns. Stephen Almond, Director of Technology and Innovation at the ICO, said in a blog post: 

“Data protection law still applies when the personal information that you’re processing comes from publicly accessible sources…We will act where organisations are not following the law and considering the impact on individuals.”  

Wider Concerns 

ChatGPT suffered its first major personal data breach in March. According to a blog post by OpenAI, the breach exposed payment-related and other personal information of 1.2% of ChatGPT Plus subscribers. But the concerns around AI and ChatGPT don’t stop at privacy law.   

An Australian mayor is considering a defamation suit against OpenAI after ChatGPT told users that he was jailed for bribery; in reality he was the whistleblower in the bribery case. Similarly, it falsely accused a US law professor of sexual assault. The Guardian reported recently that ChatGPT is making up fake Guardian articles. There are concerns about copyright law too: a number of songs have used AI to clone the voices of artists including Drake and The Weeknd, and have since been removed from streaming services after criticism from music publishers. There have also been fully AI-generated Joe Rogan podcast episodes featuring the OpenAI CEO and Donald Trump. These podcasts are definitely worth a sample; it is frankly scary how realistic they are. 

AI also poses a significant threat to jobs. A report by investment bank Goldman Sachs says it could replace the equivalent of 300 million full-time jobs. Our director, Ibrahim Hasan, recently gave his thoughts on this topic to BBC News Arabic. (You can watch him here. If you just want to hear Ibrahim “speak in Arabic” skip the video to 2min 48 secs!) 
 

EU Regulation 

With increasing concern about the future risks AI could pose to people’s privacy, their human rights or their safety, many experts and policy makers believe AI needs to be regulated. The European Union’s proposed legislation, the Artificial Intelligence (AI) Act, focuses primarily on strengthening rules around data quality, transparency, human oversight and accountability. It also aims to address ethical questions and implementation challenges in various sectors ranging from healthcare and education to finance and energy. 

The Act also envisages grading AI products according to how potentially harmful they might be and staggering regulation accordingly. So, for example, an email spam filter would be more lightly regulated than a system designed to diagnose a medical condition, and some AI uses, such as social scoring by governments, would be prohibited altogether. 

UK White Paper 

On 29th March 2023, the UK government published a white paper entitled “A pro-innovation approach to AI regulation.” The paper sets out a new “flexible” approach to regulating AI, intended to build public trust and make it easier for businesses to grow and create jobs. Unlike the EU, the UK will introduce no new legislation to regulate AI. In its press release, the UK government says: 

“The government will avoid heavy-handed legislation which could stifle innovation and take an adaptable approach to regulating AI. Instead of giving responsibility for AI governance to a new single regulator, the government will empower existing regulators – such as the Health and Safety Executive, Equality and Human Rights Commission and Competition and Markets Authority – to come up with tailored, context-specific approaches that suit the way AI is actually being used in their sectors.” 

The white paper outlines the following five principles that regulators are to consider in order to facilitate the safe and innovative use of AI in their industries: 

  • Safety, Security and Robustness: applications of AI should function in a secure, safe and robust way where risks are carefully managed; 

  • Transparency and Explainability: organisations developing and deploying AI should be able to communicate when and how it is used and explain a system’s decision-making process in a level of detail that matches the risks posed by the use of the AI; 

  • Fairness: AI should be used in a way which complies with the UK’s existing laws (e.g., the UK General Data Protection Regulation), and must not discriminate against individuals or create unfair commercial outcomes; 

  • Accountability and Governance: measures are needed to ensure there is appropriate oversight of the way AI is being used and clear accountability for the outcomes; and 

  • Contestability and Redress: people need to have clear routes to dispute harmful outcomes or decisions generated by AI. 

Over the next 12 months, regulators will be tasked with issuing practical guidance to organisations, as well as other tools and resources such as risk assessment templates, that set out how the above five principles should be implemented in their sectors. The government has said this could be accompanied by legislation, when parliamentary time allows, to ensure consistency among the regulators. 

Michelle Donelan MP, Secretary of State for Science, Innovation and Technology, considers that this light-touch, principles-based approach “will enable . . . [the UK] to adapt as needed while providing industry with the clarity needed to innovate.” However, this approach does make the UK an outlier in comparison to global trends. Many other countries are developing or passing specific laws to address alleged AI dangers, such as the algorithmic rules imposed in China or the United States. Consumer groups and privacy advocates will also be concerned about the risks to society in the absence of detailed and unified statutory AI regulation.  

Want to know more about this rapidly developing area? Our forthcoming AI and Machine Learning workshop will explore the common challenges that this subject presents, focusing on GDPR as well as other information governance and records management issues.