What’s the Problem with DeepSeek?

DeepSeek, the Chinese equivalent of ChatGPT, is making big waves in the AI world. Since its launch, it has quickly become the top-rated free app on Apple’s App Store, challenging the notion that the US leads the world in AI development. 

DeepSeek’s Chinese developers released the latest version of its app on 20th January (the day of US President Trump’s inauguration), and it rapidly gained attention from AI experts and the tech industry. Powered by the open-source DeepSeek-V3 model, it was reportedly developed for less than $6 million, a fraction of the billions spent by its US rivals. Recently, OpenAI and other companies pledged to invest $500 billion in US AI infrastructure, which President Trump announced as “the largest AI infrastructure project in history”, intended to maintain technological leadership in the US. However, DeepSeek’s emergence has hit US tech stocks. On Monday, the Nasdaq index dropped 3%, with chip-making giant Nvidia losing almost $600 billion in market value—the biggest one-day loss in US stock market history.

Privacy Issues 

While the Chinese media and open-source AI proponents may be celebrating, DeepSeek’s rise necessitates scrutiny regarding its privacy and security risks. Some of these are:  

  • Data Collected: DeepSeek gathers sensitive personal data through natural conversations. 
  • Potential for Influence and Manipulation: As an AI chatbot, DeepSeek can shape opinions and conduct influence campaigns. 
  • Data Storage and Accessibility: Data stored on servers in China is fully accessible to the Chinese government. 
  • Level of User Engagement: Users may unknowingly reveal personal or confidential information through interactive conversations. 

Many of these issues mirror those raised about TikTok, which was temporarily banned in the US last week.

Organisations need to closely monitor the AI models employees use; the US Navy recently advised its members to avoid using DeepSeek due to potential security and ethical concerns. It is also important to establish clear policies, procedures, and guidance, especially regarding GDPR compliance.  

Yesterday the Irish Data Protection Commission confirmed to TechCrunch that it has sent a note to DeepSeek requesting details of how the data of citizens in Ireland is processed by the company. The Italian data protection regulator has sent a similar note to the company, and the DeepSeek mobile app no longer appears in either the Google or Apple app store in Italy.

Meanwhile (and with a straight face) OpenAI has accused DeepSeek of distilling knowledge from its models, breaching its terms of use and infringing its intellectual property. OpenAI is itself facing numerous AI copyright lawsuits!

2025 has just started and the AI news feed is already buzzing.  

Join our Artificial Intelligence and Machine Learning: How to Implement Good Information Governance workshop.

Enjoy reading our blog? Help us reach 10,000 subscribers by subscribing today! 

The Data (Use and Access) Bill: All change or much of the same? 

On 23rd October 2024, the Labour Government introduced the Data (Use and Access) Bill into Parliament. The Bill was highlighted in the King’s Speech in July (under its old name, the “Digital Information and Smart Data Bill”), where His Majesty announced that there would be “targeted reforms to some data laws that will maintain high standards of protection but where there is currently a lack of clarity impeding the safe development and deployment of some new technologies.” However, this statement of intent does not match the reality; many of the Bill’s core provisions are a “cut and paste” of the Data Protection and Digital Information Bill (DP Bill), which failed to pass before last year’s snap General Election.

Key Provisions 

Let’s examine the key provisions of the new Bill against those in the DP Bill. 

Smart Data: The new Bill retains the provisions from the DP Bill that will enable the creation of a legal framework for Smart Data. This involves companies securely sharing customer data, upon the customer’s (business or consumer) request, with authorised third-party providers (ATPs) who can enhance the customer data with broader, contextual ‘business’ data. These ATPs will provide the customer with innovative services to improve decision making and engagement in a market. Open Banking is the only current example of a regime that is comparable to a ‘Smart Data scheme’.
The new Bill will give such schemes a statutory footing, from which they can grow and expand.  

Digital Identity Products: Just like its predecessor, the new Bill contains provisions aimed at establishing digital verification services, including digital identity products to help people quickly and securely identify themselves when using online services, e.g. when moving house, undergoing pre-employment checks or buying age-restricted goods and services. It is important to note that this is not the same as compulsory digital ID cards, despite what some media outlets have reported.

Research Provisions: The new Bill keeps the DP Bill’s provisions that clarify that companies can use personal data for research and development projects, as long as they follow data protection safeguards.  

Legitimate Interests: The new Bill retains the concept of ‘recognised legitimate interests’ under Article 6 of the UK GDPR: specific purposes for processing personal data, such as national security, emergency response and safeguarding, for which Data Controllers will be exempt from conducting a full Legitimate Interests Assessment.

Automated Decision Making: Like the DP Bill, the new Bill seeks to limit the right, under Article 22 of the UK GDPR, for a data subject not to be subject to automated decision making or profiling to only those cases where Special Category Data is used. Under new Article 22A, a decision would qualify as being “based solely on automated processing” if there was “no meaningful human involvement in the taking of the decision”. This could give companies the green light to use AI techniques on personal data scraped from the internet for the purposes of pre-employment background checks.

International Transfers: The new Bill maintains most of the DP Bill’s international transfer provisions. There will be a new approach to the test for adequacy applied by the UK Government to countries (and international organisations), and to the test applied by Data Controllers when carrying out a Transfer Impact Assessment (TIA). The threshold for this new “data protection test” will be whether a jurisdiction offers protection that is “not materially lower” than under the UK GDPR.

Health and Social Care Information: The new Bill maintains, without any changes, the provisions that establish consistent information standards for health and adult social care IT systems in England, enabling the creation of unified medical records accessible across all related services. 

PECR Changes: One of the most significant changes, copied from the DP Bill, is the increase in fines for breaches of PECR from £500,000 to UK GDPR levels, meaning organisations could face fines of up to £17.5m or 4% of global annual turnover (whichever is higher) for the most serious infringements. Other changes include allowing cookies to be used without consent for the purposes of web analytics and to install automatic software updates.
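The proposed new cap is a simple “higher of” formula. A minimal sketch in Python (the function name and turnover figures are illustrative, not from the Bill):

```python
def max_pecr_fine(global_annual_turnover_gbp: float) -> float:
    """Illustrative calculation of the proposed PECR maximum fine:
    the higher of £17.5m or 4% of global annual turnover."""
    STATUTORY_CAP = 17_500_000.0  # £17.5m fixed cap
    turnover_based = 0.04 * global_annual_turnover_gbp  # 4% of turnover
    return max(STATUTORY_CAP, turnover_based)

# A firm turning over £1bn: 4% (£40m) exceeds the £17.5m cap.
print(max_pecr_fine(1_000_000_000))  # 40000000.0
# A firm turning over £100m: 4% (£4m) is below it, so £17.5m applies.
print(max_pecr_fine(100_000_000))    # 17500000.0
```

In other words, the £17.5m figure acts as a floor on the maximum: only organisations with global turnover above £437.5m would face a higher turnover-based cap.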

What is not in the new Bill? 

Most of the controversial parts of the DP Bill have not made it into the new Bill. These include:

  • Replacing the terms “manifestly unfounded” or “excessive” requests, in Article 12 of the UK GDPR, with “vexatious” or “excessive” requests. Explanation and examples of such requests would also have been included.  
  • Exempting all controllers and processors from the duty to maintain a ROPA, under Article 30, unless they are carrying out high risk processing activities.  
  • The “strategic priorities” mechanism, which would have allowed the Secretary of State to set binding priorities for the Information Commissioner. 
  • The requirements for the Information Commissioner to submit codes of practice to the Secretary of State for review and recommendations.  

The Data Use and Access Bill, in its current form, will not fundamentally change UK data protection laws. This is unlikely to change during its passage through Parliament as most of its provisions are copied from the DP Bill introduced by those who are now the official Opposition.  


Want more detail about the Bill and how it will affect your organisation? See our forthcoming DUA Bill workshop.

Are you a privacy professional wishing to advance your career in 2025? The Advanced Certificate in GDPR Practice is designed for experienced DPOs seeking to refine and expand their skills and expertise. The course comprises a rigorous set of engaging masterclasses that teach you to dissect complex data protection scenarios and give practical compliance advice. This immersive experience will empower you with the skills and confidence needed to tackle the most challenging data protection projects within your organisation.

RAC Employees Sentenced for Selling Personal Data 

On 8th October 2024, two former RAC employees were sentenced for unlawfully copying and selling over 29,500 lines of personal information.  

The two former employees worked as customer service specialists at the RAC’s call centre in Stretford. Their unlawful conduct was discovered by the RAC after it installed new security monitoring software. The software showed that employee one had unlawfully accessed and copied personal information relating to people involved in road traffic accidents. A subsequent search of employee one’s mobile phone showed the information was shared in a WhatsApp chat with employee two. Messages indicated that a third party was paying for the information.

At a hearing at Minshull Street Crown Court on 8th October 2024, both former employees received six-month prison sentences, suspended for 18 months, and each was ordered to complete 150 hours of unpaid work. Both defendants had previously pleaded guilty to offences under the Computer Misuse Act 1990 and the Data Protection Act 2018. Prosecution costs will be considered at a Proceeds of Crime hearing listed for 5th March 2025.

Section 55 of the old Data Protection Act 1998 can still be used to bring a prosecution where an offence pre-dates the current Section 170 of the Data Protection Act 2018, as in the above case. It is interesting to note that the ICO also cited section 1 of the Computer Misuse Act 1990 which carries a maximum of 2 years imprisonment on indictment.   

In June 2023, the Information Commissioner’s Office (ICO) disclosed that, since 1st June 2018, its Criminal Investigations Team had investigated 92 cases involving Section 170 offences. The most recent prosecution was in September 2024, when an employee pleaded guilty to retaining and selling 3,600 customer records obtained from the car leasing company he worked for. He was ordered to pay a fine of £1,200 and £300 costs.

It is important to note that, if a disgruntled or rogue employee commits a data protection offence, the employer may also be liable for the consequences. Read more in our recent blog on this subject.


When Oasis met GDPR

To celebrate the Gallagher brothers’ new tour, we asked ChatGPT to compose a poem about privacy using Oasis song titles. How many can you spot? Answers in the comments.

In the wonderwall of data, we stand tall, Guarding our privacy, one and all. 

With champagne supernova dreams, we strive, To keep our personal info alive.

Don’t look back in anger, they say, As we navigate the GDPR way. 

Our supersonic rights, clear and bright, In the digital world, we fight the good fight.

Live forever in a world that’s free, From breaches and leaks, let it be. 

With some might say, we take a stand, For privacy laws across the land.

In this morning glory, we find our way, To protect our data, come what may. 

So let’s embrace the GDPR light, And keep our privacy shining bright.


ICO 5th Call for Evidence on Generative AI 

Recently we wrote about “How Generative AI’s Data Appetite is Fuelling Privacy Battles.” Last week the Information Commissioner’s Office (ICO) published its fifth call for evidence on Generative AI. This call focuses on the allocation of accountability for data protection compliance across the generative AI supply chain. It is part of the ICO’s consultation series on generative AI and data protection.

The fifth call for evidence addresses the recommendation for ICO guidance on the allocation of accountability in AI as a Service (AIaaS) contexts made in Sir Patrick Vallance’s Pro-innovation Regulation of Technologies Review.  
 
The allocation of accountability is complicated not only because of the different ways in which generative AI models, applications and services are developed, used and disseminated, but also because of the different levels of control and accountability that participating organisations may have.
 
The ICO is interested in additional evidence on how this works in practice. In the meantime, it has provided a summary of its current analysis, the policy positions it wants to consult on, and some examples which show how this analysis could be applied in practice.
 
The deadline for submissions is 18th September 2024.

 
Join our Artificial Intelligence and Machine Learning, How to Implement Good Information Governance workshop for hands-on insights, key resource awareness, and best practices, ensuring you’re ready to navigate AI complexities fairly and lawfully. 

How Generative AI’s Data Appetite is Fuelling Privacy Battles

Like the monster plant in Little Shop of Horrors, Generative AI has an insatiable appetite, though for data rather than food. Generative AI applications, like ChatGPT and Midjourney, need a constant supply of data to train (and improve) their output algorithms. In the early days of AI development, this data came from public sources, especially the internet. However, this “data scraping” was not without legal obstacles.

Where personal data is used to train AI models, the GDPR of course applies. The transparency provisions and the requirement for a legal basis are of particular importance. In 2022, the Information Commissioner’s Office (ICO) issued a fine of more than £7.5 million to Clearview AI for GDPR breaches in the way it compiled its online database containing 20 billion images of people’s faces and data scraped from the internet. The company did manage to appeal the fine successfully, but the ICO, and other GDPR regulators in the EU, have issued clear warnings to AI companies to ensure they comply with GDPR.

To satisfy Generative AI’s demand for more data, AI developers have been striking deals with tech companies for access to the latter’s user data. This includes data generated by users whilst using popular websites and apps. In February it was reported that Tumblr and WordPress.com were preparing to sell user data to Midjourney and OpenAI. And (surprise, surprise) Meta and Amazon’s Alexa have, in the past, exploited user data to train their AI models.

Elon Musk’s X (formerly Twitter) came under fire recently after it started collecting and using its users’ data, including their posts, to train X’s Grok AI model. This was allegedly done without notifying X users or asking for their consent. In June, the Irish Data Protection Commission (DPC), X’s Lead Supervisory Authority, made an urgent application under Section 134 of the Irish Data Protection Act 2018. This allows the DPC, where it considers there is an urgent need to act to protect the rights and freedoms of data subjects, to apply to the High Court for an order requiring the data controller to suspend, restrict or prohibit the processing of personal data.

This was the first time that any Lead Supervisory Authority has taken such action, and the first time that the DPC has sought to utilise its powers under Section 134. The DPC said the application was made to protect the rights and freedoms of X’s EU/EEA users, and came after extensive engagement between the DPC and X regarding its AI model training.  Last week, the DPC announced that X had agreed to suspend its processing of the personal data contained in the public posts of X’s EU/EEA users which it processed between 7 May 2024 and 1 August 2024, for the purpose of training its AI model.   

But this agreement is not the end of X’s privacy woes. Noyb, a privacy advocacy group headed by Max Schrems, has filed nine more GDPR complaints with regulators across Europe alleging that X appears to have breached a number of other GDPR provisions including the GDPR principles and the transparency rules. Several other major tech firms have also faced regulatory setbacks in Europe over privacy issues raised by their AI plans. In June Meta announced that it was pausing its plan to process user posts and images on Facebook and Instagram to train its AI tools after a number of GDPR complaints. LinkedIn was also the subject of a similar complaint by consumer organisations.

AI is a priority for the ICO. Its existing guidance on AI explains how to apply the concepts of data protection law when developing or deploying AI, and its AI toolkit helps organisations identify and mitigate risks during the AI lifecycle. The ICO consultation series on generative AI and data protection closed in June.

The training of Generative AI does not just pose GDPR compliance issues. In December last year, the New York Times announced it was suing OpenAI and Microsoft for copyright infringement. The lawsuit claimed the “unlawful use” of the paper’s “copyrighted news articles, in-depth investigations, opinion pieces, reviews, how-to guides, and more” to create AI products “threatens The Times’s ability to provide that service”.

Please subscribe to this blog and help us to get to 10,000 subscribers.


The EU AI Act Comes into Force Today

The EU AI Act comes into force today, although not all of its provisions will become enforceable straight away.

The Act sets out comprehensive rules for AI applications, including a risk-based system to address potential threats to health and safety, and human rights. It will ban certain AI applications that pose an “unacceptable risk,” including real-time and remote biometric identification systems such as facial recognition. Additionally, it will impose strict obligations on those considered “high risk,” encompassing AI used in EU-regulated product safety categories, for example, cars and medical devices. These obligations include adherence to data governance standards, transparency rules, and the incorporation of human oversight mechanisms.

Detailed guides have been produced by law firms Stephenson Harwood and Bird & Bird.

Here are the key dates for your diary:

  • August 1st, 2024: The AI Act enters into force
  • February 2025: Chapters I (general provisions) & II (prohibited AI systems) will apply
  • August 2025: Chapter III Section 4 (notifying authorities), Chapter V (general purpose AI models), Chapter VII (governance), Chapter XII (penalties) and Article 78 (confidentiality) will apply, except for Article 101 (fines for General Purpose AI providers)
  • August 2026: The whole AI Act applies, except for Article 6(1) & corresponding obligations (one of the categories of high-risk AI systems)
  • August 2027: Article 6(1) & corresponding obligations apply.

In the UK, despite media reports, the King’s Speech did not include a bill to regulate AI. The King said that the government would “seek to establish the appropriate legislation to place requirements on those working to develop the most powerful artificial intelligence models”. Expect a government consultation to be announced soon.

Our AI Act workshop will help you understand the new law in detail and its interaction with the UK’s objectives and strategy for AI regulation.

AI Bill to be included in King’s Speech

A bill to regulate Artificial Intelligence (AI) will be one of 35 bills included in the King’s Speech tomorrow, according to the Financial Times. The Bill will seek to enhance the legal safeguards surrounding the most cutting-edge AI technologies, according to people briefed on the plans.

The 2024 Labour election manifesto contained pledges to support the development of AI. It stated Labour would ensure their “industrial strategy supports the development of the AI sector and removes planning barriers to new datacentres.”  The Bill seeks to follow through on the manifesto pledge to regulate AI but only in some cases:

“Labour will ensure the safe development and use of AI models by introducing binding regulation on the handful of companies developing the most powerful AI models and by banning the creation of sexually explicit deepfakes.”

The Bill is likely to focus on the production of large language models (LLMs), the general-purpose technology that underlies AI products such as OpenAI’s ChatGPT. This is a departure from the previous government’s approach, which was not to place AI regulation on a statutory footing but to make use of “regulators’ domain-specific expertise to tailor the implementation of the principles to the specific context in which AI is used.”

The new Bill follows the EU’s tougher approach. The EU AI Act was published in the Official Journal of the EU last Friday (July 12th 2024), firing the starting gun on the enforcement countdown. It will enter into force on 1st August 2024 and then become enforceable in stages.

The main provisions of the Act can be read here. In summary, the Act sets out comprehensive rules for AI applications, including a risk-based system to address potential threats to health and safety, and human rights. The Act will ban certain AI applications that pose an “unacceptable risk,” including real-time and remote biometric identification systems such as facial recognition. Additionally, it will impose strict obligations on those considered “high risk,” encompassing AI used in EU-regulated product safety categories, for example, cars and medical devices. These obligations include adherence to data governance standards, transparency rules, and the incorporation of human oversight mechanisms.

It will be interesting to read the text of the new Bill when it is published, especially how it overlaps with the provisions of the UK GDPR.

Our AI Act workshop will help you understand the new law in detail and its interaction with the UK’s objectives and strategy for AI regulation.

EU AI Act Published in the EU Official Journal

The EU AI Act was published in the Official Journal last Friday (July 12th 2024), firing the starting gun on the enforcement countdown.

In summary, the Act sets out comprehensive rules for AI applications, including a risk-based system to address potential threats to health and safety, and human rights. The Act will ban certain AI applications that pose an “unacceptable risk,” including real-time and remote biometric identification systems such as facial recognition. Additionally, it will impose strict obligations on those considered “high risk,” encompassing AI used in EU-regulated product safety categories, for example, cars and medical devices. These obligations include adherence to data governance standards, transparency rules, and the incorporation of human oversight mechanisms. 

For a more detailed guide, read the document produced by law firm Stephenson Harwood.

Here are the key dates for your calendar:

  • August 1st, 2024: The AI Act will enter into force

  • February 2025: Chapters I (general provisions) & II (prohibited AI systems) will apply

  • August 2025: Chapter III Section 4 (notifying authorities), Chapter V (general purpose AI models), Chapter VII (governance), Chapter XII (penalties) and Article 78 (confidentiality) will apply, except for Article 101 (fines for General Purpose AI providers)

  • August 2026: the whole AI Act applies, except for Article 6(1) & corresponding obligations (one of the categories of high-risk AI systems)

  • August 2027: Article 6(1) & corresponding obligations apply.



Information Governance: The Future

So now we have a Labour Government, what can we expect vis-à-vis information governance?

Data Protection

Before the snap election was announced, most information professionals were getting ready to implement the Data Protection and Digital Information Bill, which was making its way through the House of Lords and was set to be passed in July. The Bill would have amended the UK GDPR to make it, according to the Government, “a tailored, business-friendly British system of data protection.” The election put an end to the Bill, which failed to make it through the Parliamentary “wash-up” stage.

The Labour Party had nothing to say on this topic in its manifesto, apart from a pledge to “improve data sharing across services, with a single unique identifier, to better support children and families.” It also said it intends to create a “National Data Library” to bring together existing research programmes and “help deliver data-driven public services”.

It is still likely that some data protection law reform will be undertaken by the new Government. Some of the less controversial aspects of the Bill, such as making it easier to use personal data for research and the reorganisation of the ICO, could return. But we are not going to see wholesale reform in the first few years, especially as the Government will not want to jeopardise the UK’s EU adequacy status, which is due for renewal by June 2025. Thankfully, the introduction of digital ID cards has also been ruled out, after Tony Blair suggested they could help control immigration.

AI Regulation

The rapid advancements in Artificial Intelligence (AI), and their potential impact on people’s rights and freedoms, have led to calls for better regulation. The Labour manifesto contains pledges to support the development of AI. It says Labour will ensure their “industrial strategy supports the development of the AI sector and removes planning barriers to new datacentres.” There is also a pledge to regulate AI, but only in some cases:

“Labour will ensure the safe development and use of AI models by introducing binding regulation on the handful of companies developing the most powerful AI models and by banning the creation of sexually explicit deepfakes.”

But there is no real detail about what AI regulation will look like under Labour. Perhaps the party will take the lead from the TUC, which produced an AI Bill in April, or the EU, which recently passed the EU AI Act.

Online Safety

The Labour manifesto states that the party will “build on” the Online Safety Act, “bringing forward provisions as quickly as possible, and explore further measures to keep everyone safe online, particularly when using social media”. Labour also intends to give coroners “more powers to access information held by technology companies after a child’s death” and to create a “Regulatory Innovation Office” which will help existing regulators “update regulation, speed up approval timelines and co-ordinate issues that span existing boundaries”.

Freedom of Information

Freedom of Information laws are always popular with opposition parties who wish to critically assess government policies or discover uncomfortable truths (at least for the Government) about their implementation. But in government such laws are often seen as an inconvenience (just ask Tony Blair). None of the parties made any specific mention of FOI in their manifestos. This is surprising; the Labour Party has been arguing for many years that private contractors delivering public services should be subject to FOI laws. Perhaps they will look again at strengthening FOI. 

This and other data protection developments will be discussed in detail at our forthcoming GDPR Update workshop.