What’s the Problem with DeepSeek? 

DeepSeek, the Chinese equivalent of ChatGPT, is making big waves in the AI world. Since its launch, it has quickly become the top-rated free app on Apple’s App Store, challenging the notion that the US leads the world in AI development. 

DeepSeek’s Chinese developers released the latest version of its app on 20th January (the day of US President Trump’s inauguration), and it rapidly gained attention from AI experts and the tech industry. Powered by the open-source DeepSeek-V3 model, it was reportedly developed for less than $6 million, a fraction of the billions spent by its US rivals. Recently, OpenAI and other companies pledged to invest $500 billion in US AI infrastructure, which President Trump announced as “the largest AI infrastructure project in history”, intended to maintain US technological leadership. However, DeepSeek’s emergence has hit US tech stocks: on Monday the Nasdaq index dropped 3%, with chip-making giant Nvidia losing almost $600 billion in market value, the biggest one-day loss in US stock market history.  

Privacy Issues 

While the Chinese media and open-source AI proponents may be celebrating, DeepSeek’s rise warrants scrutiny of its privacy and security risks, including:  

  • Data Collected: DeepSeek gathers sensitive personal data through natural conversations. 
  • Potential for Influence and Manipulation: As an AI chatbot, DeepSeek can shape opinions and conduct influence campaigns. 
  • Data Storage and Accessibility: Data stored on servers in China is fully accessible to the Chinese government. 
  • Level of User Engagement: Users may unknowingly reveal personal or confidential information through interactive conversations. 

Many of these issues mirror those raised about TikTok, which was temporarily banned in the US last week. 

Organisations need to closely monitor the AI models employees use; the US Navy recently advised its members to avoid using DeepSeek due to potential security and ethical concerns. It is also important to establish clear policies, procedures, and guidance, especially regarding GDPR compliance.  

Yesterday, the Irish Data Protection Commission confirmed to TechCrunch that it has sent a note to DeepSeek requesting details of how the data of citizens in Ireland is processed by the company. The Italian data protection regulator has sent a similar note to the company, and the DeepSeek mobile app no longer appears in either the Google or Apple app store in Italy. 

Meanwhile (and with a straight face), OpenAI has accused DeepSeek of distilling knowledge from its models, breaching its terms of use, and infringing its intellectual property. OpenAI is itself facing numerous AI copyright lawsuits! 
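For context, “distillation” usually means training a smaller “student” model to reproduce the output distribution of a larger “teacher” model. The sketch below is a minimal, generic illustration of that idea only; the teacher_model and student_model objects are hypothetical placeholders, and nothing here reflects any disclosed DeepSeek or OpenAI code.

```python
# Minimal knowledge-distillation sketch (illustrative only).
# Assumes hypothetical teacher_model / student_model PyTorch modules
# that both map a batch of inputs to logits of the same shape.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """Soft-label loss: the student is trained to match the teacher's
    softened output distribution (Hinton-style distillation)."""
    student_log_probs = F.log_softmax(student_logits / temperature, dim=-1)
    teacher_probs = F.softmax(teacher_logits / temperature, dim=-1)
    # KL divergence, scaled by T^2 to keep gradient magnitudes comparable
    return F.kl_div(student_log_probs, teacher_probs,
                    reduction="batchmean") * (temperature ** 2)

# Usage sketch: inside a training loop the teacher is frozen and only
# the student's parameters are updated.
# with torch.no_grad():
#     teacher_logits = teacher_model(batch)
# student_logits = student_model(batch)
# loss = distillation_loss(student_logits, teacher_logits)
# loss.backward()
```

The legal dispute is not about the technique itself, but whether using another provider’s model outputs in this way breaches its terms of use. 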

2025 has just started and the AI news feed is already buzzing.  

Join our Artificial Intelligence and Machine Learning, How to Implement Good Information Governance workshop.   

Enjoy reading our blog? Help us reach 10,000 subscribers by subscribing today! 

ICO 5th Call for Evidence on Generative AI 

Recently we wrote about “How Generative AI’s Data Appetite is Fuelling Privacy Battles.” Last week the Information Commissioner’s Office (ICO) published its fifth call for evidence on Generative AI. This call focuses on the allocation of accountability for data protection compliance across the generative AI supply chain. It is part of the ICO’s consultation series on generative AI and data protection. 

The fifth call for evidence addresses the recommendation for ICO guidance on the allocation of accountability in AI as a Service (AIaaS) contexts made in Sir Patrick Vallance’s Pro-innovation Regulation of Technologies Review.  
 
The allocation of accountability is complicated not only by the different ways in which generative AI models, applications and services are developed, used and disseminated, but also by the different levels of control and accountability that participating organisations may have.  
 
The ICO is interested in additional evidence on how this works in practice. In the meantime, its call for evidence summarises its current analysis, the policy positions it wants to consult on and some examples showing how this analysis could be applied in practice.  
 
The deadline for submissions is 18th September 2024.  

Enjoy reading our blog? Help us reach 10,000 subscribers by subscribing today! 
 
Join our Artificial Intelligence and Machine Learning, How to Implement Good Information Governance workshop for hands-on insights, key resource awareness, and best practices, ensuring you’re ready to navigate AI complexities fairly and lawfully. 

EU AI Act Approved by European Parliament  

On Wednesday 13th March 2024, the European Parliament approved the text of the harmonised rules on artificial intelligence, the so-called “Artificial Intelligence Act” (AI Act). Agreed upon in negotiations with member states in December 2023, the Act was endorsed by MEPs with 523 votes in favour, 46 against and 49 abstentions. It aims to “protect fundamental rights, democracy, the rule of law and environmental sustainability from high-risk AI, while boosting innovation and establishing Europe as a leader in the field.” Despite Brexit, UK businesses and entities engaged in AI-related activities will still be affected by the Act if they intend to operate within the EU market; the Act will have extraterritorial reach, just like the EU GDPR. 

The main provisions of the Act can be read here. In summary, the Act sets out comprehensive rules for AI applications, including a risk-based system to address potential threats to health, safety and human rights. The Act will ban some AI applications which pose an “unacceptable risk”, such as real-time and remote biometric identification systems like facial recognition, and impose strict obligations on others considered “high risk”, such as AI used in EU-regulated product safety categories like cars and medical devices. These obligations include adherence to data governance standards, transparency rules, and the incorporation of human oversight mechanisms.  

Next steps 

The Act is still subject to a final lawyer-linguist check and is expected to be finally adopted before the end of the legislature (through the so-called corrigendum procedure). It also needs to be formally endorsed by the Council of the European Union. 

The Act will enter into force twenty days after its publication in the Official Journal, and be fully applicable 24 months after its entry into force, except for: bans on prohibited practices, which will apply six months after the entry into force date; codes of practice (nine months after entry into force); general-purpose AI rules including governance (12 months after entry into force); and obligations for high-risk systems (36 months after entry into force). 

Influence on UK AI Regulation 

The EU’s regulatory approach will impact the UK Government’s decisions on AI governance. An AI White Paper entitled “A pro-innovation approach to AI regulation” was published in March last year. The paper sets out the UK’s preference not to place AI regulation on a statutory footing but to make use of “regulators’ domain-specific expertise to tailor the implementation of the principles to the specific context in which AI is used.” In January 2024, the ICO launched a consultation series on Generative AI, examining how aspects of data protection law should apply to the development and use of the technology. It is expected to issue more AI guidance later in 2024. 

Our AI Act workshop will help you understand the new law in detail and its interaction with the UK’s objectives and strategy for AI regulation.