A Pinch of GDPR: Gregg Wallace Serves Up a Data Rights Claim 

Gregg Wallace, the former MasterChef presenter, has issued proceedings against the BBC and BBC Studios for failing to respond to his subject access requests (SARs) in accordance with the UK GDPR. Wallace was sacked by the BBC in July following an inquiry into alleged misconduct. As the saying goes, “Revenge is a dish best served cold!”

Background 

According to court documents, seen by the PA news agency, in March 2025 Wallace made SARs to the BBC and its subsidiary BBC Studios for all personal data held about him. Both requests related to his “work, contractual relations and conduct” spanning 21 years. 

The BBC acknowledged the request and deemed it “complex”. They probably invoked Article 12(3) of the UK GDPR, which allows a Data Controller to extend the one-month SAR time limit by a further two months where necessary “taking into account the complexity and number of the requests.” By August, the BBC had apologised for the delay and said it was taking “reasonable steps” to process the request, but still no data had been provided. BBC Studios, meanwhile, said it would withhold parts of the data because of “freedom of expression.”
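For those counting the days, the Article 12(3) deadline arithmetic is simple enough to sketch in a few lines of Python. This is a simplification of the ICO’s guidance on calculating calendar months, using the third-party dateutil library, and the dates are illustrative:

```python
from datetime import date
from dateutil.relativedelta import relativedelta  # pip install python-dateutil

def sar_deadline(received: date, complex_request: bool = False) -> date:
    """One month to respond, extendable to three months in total where
    the request is complex (Article 12(3) UK GDPR) - a simplification."""
    return received + relativedelta(months=3 if complex_request else 1)

# A complex SAR received on 10 March 2025 would fall due by 10 June 2025.
print(sar_deadline(date(2025, 3, 10), complex_request=True))  # 2025-06-10
```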

The court documents assert that the defendants had “wrongly redacted” information and had “unlawfully failed to supply all of the claimant’s personal data”. Wallace seeks “up to £10,000” for distress and harassment and an order compelling both entities to comply with his SARs.   

Freedom of Expression Exemption 

BBC Studios’ reliance on “freedom of expression” invites scrutiny. The exemption in Schedule 2, Part 5 of the Data Protection Act 2018 (DPA 2018) applies only to personal data processing carried out for the special purposes (journalistic, artistic, academic, or literary) and only so far as compliance would be incompatible with those purposes.

The special purposes exemption is interpreted quite narrowly by the courts. If the withheld data consists of production notes, editorial discussions, or source material for broadcast, BBC Studios’ argument has force. But if the data relates to HR investigations, conduct complaints, or contractual matters, the processing is unlikely to be “journalistic”.  

Distress and Damages 

Article 82 UK GDPR gives a data subject a right to compensation for material or non-material damage for any breach of the UK GDPR. Section 168 of the DPA 2018 confirms that “non-material damage” includes distress. However, the relevant case law shows (1) the courts distinguishing trivial upset from genuine distress and (2) modest damages being awarded. A long delay in responding to a SAR, especially in the midst of reputational damage, is not trivial. However, even if Wallace succeeds in his claim, he is unlikely to be awarded anything close to £10,000: typical awards for emotional harm in data-rights breaches sit between £500 and £2,500. (The excellent Panopticon blog is a must-read for anyone needing help in navigating causation and quantum in such cases.) Furthermore, having limited his claim to £10,000, Wallace will probably see his case allocated to the small claims track, where minimal costs are recoverable.

ICO Action 

This court action by Gregg Wallace may also draw the attention of the Information Commissioner’s Office (ICO). In March 2025, the ICO issued reprimands to two Scottish councils for repeatedly failing to respond to SARs within the statutory timeframe. There is also the theoretical possibility of a criminal prosecution if the ICO, upon investigation, finds that the BBC has deliberately frustrated the requests.
 
Section 173 of the DPA 2018 makes it a criminal offence, where a person has made a SAR, to “alter, deface, block, erase, destroy or conceal information with the intention of preventing disclosure of all or part of the information that the person making the request would have been entitled to receive.” In September, Jason Blake, the director of a care home in Bridlington, was found guilty of an offence under S.173. The court ordered him to pay a fine of £1,100 and additional costs of £5,440.

Other Celebrity SARs 
 
This is not the first time a primetime BBC show has crossed paths with GDPR. A few years ago, some celebrity contestants on Strictly Come Dancing alleged mistreatment by professional dancers and production staff. Lawyers acting on behalf of one of the dancers at the centre of the allegations made a GDPR subject access request for, amongst other things, “all internal BBC correspondence related to the issue, including emails and text messages”.

In July 2023, Dame Alison Rose, the then CEO of NatWest, resigned after Nigel Farage made a SAR which disclosed information that contradicted the bank’s justification for downgrading his account. There is potentially more SAR court drama to come. In March, the campaign group Good Law Project (GLP) “filed a trailblazing new group action” against Farage’s Reform UK at the High Court. GLP claims that Reform failed to comply with a number of SARs and is seeking damages on behalf of the data subjects.

Whilst Gregg Wallace’s case is unlikely to result in a groundbreaking legal judgment or a headline-making damages award, high-profile celebrities pursuing data protection claims are always a welcome development. They help raise awareness of data rights and, conveniently, give information governance professionals a perfect excuse to indulge in a reality TV binge, just in case any other interesting data protection issues arise!

Our How to Handle a Subject Access Request workshop will help you navigate complex Subject Access Requests.

When the RIPA Inspector Calls 

Every local authority using (or having the ability to use) covert surveillance, under the Regulation of Investigatory Powers Act 2000 (RIPA), should expect regular inspections by the Investigatory Powers Commissioner’s Office (IPCO). Typically, these are conducted every three years, though frequency may vary based on activity levels and past findings. These inspections are a key part of demonstrating lawful and proportionate use of surveillance powers.  

The Inspection Process 

IPCO inspections are now commonly conducted remotely, although on-site visits still occur when deemed necessary. You will usually be given advance notice and asked to submit key documents, including your RIPA policy, examples of authorisations (even if only historical), and training records. 

The inspection will generally follow this structure: 

  1. Document Review: The inspector will examine your authority’s policy and procedures to assess whether they reflect current law and Home Office Codes of Practice. 
  2. Case Sampling: Even if your authority hasn’t used RIPA powers in recent years, inspectors will want to see how you handle applications when they occur, or how you maintain readiness. If you have used powers, expect a thorough review of sample applications, authorisations, reviews, renewals and cancellations. 
  3. Interviews with Key Personnel: Typically, the inspector will speak with the Senior Responsible Officer (SRO), Authorising Officers and the RIPA Coordinator. They will be looking for a clear understanding of roles, responsibilities, and legal thresholds for authorisation. 
  4. Feedback and Report: The inspector will provide immediate feedback and later issue a formal report highlighting commendations, recommendations and any required actions.

Common Inspection Findings 

As part of delivering tailored in-house training, we regularly review IPCO inspection reports. The following is a list of common mistakes highlighted by IPCO. They are not attributable to any particular organisation.

RIPA Forms 

  • Use of out-of-date forms 
  • No Unique Reference Number (URN) 
  • Not amending forms so that they show only the grounds available to the public authority (e.g. for councils, preventing or detecting crime) 
  • Pre-completed forms 
  • Use of cut-and-paste in boxes/repetitive narrative

Authorisation Process  

  • Rubber stamping – no real thought given to authorisation  
  • Necessity, proportionality and collateral intrusion not fully understood/considered  
  • Likelihood of obtaining Confidential Information not fully considered 
  • Some ‘open source’ internet research is being conducted which may actually meet the criteria of Directed Surveillance and therefore require authorisation  
  • Confusion regarding reviews and renewals  
  • Lack of understanding of when a person is a CHIS 
  • Too many Authorising Officers 
  • Authorising Officers are not making adequate provision for destruction of product that is collateral intrusion or of no value to the operation  
  • Joint investigations without authorisation and/or record keeping 
  • Lack of robust management and quality assurance procedures  

Social Media 

  • Failing to consider the application of RIPA to social media monitoring 
  • Lack of understanding of when the Directed Surveillance and CHIS definitions are met 

Record Keeping  

  • Central records not compliant with the Code of Practice  
  • Inadequate monitoring, recording and audit of surveillance equipment  
  • Inadequate handling and storage of surveillance product/evidence 
 

Policies and Procedure Documents 

  • Inadequate/no RIPA policy  
  • Inadequate/out of date guidance document  
  • No CCTV protocol/procedure  

Preparing for an IPCO Inspection 

The key to a smooth inspection lies in preparation. This starts long before the inspection is announced: 

  1. Review and Update Your Policy Regularly: Your RIPA policy should be reviewed at least annually and whenever guidance or legislation changes. Make sure it is accessible to relevant staff and reflects current best practice. 
  2. Keep Your RIPA Registers in Order: Whether your authority uses an electronic register or paper records, they must be accurate and up to date. This includes entries for authorisations that were refused, cancelled or not proceeded with. 
  3. Prioritise Training (see below). 
  4. Test Your Processes: Carry out internal audits or mock inspections. Review recent authorisations (if any), check register completeness, and ensure all relevant staff understand their responsibilities. 
  5. Engage Your SRO: The SRO isn’t just a figurehead; they should champion compliance, oversee training provision, ensure policy updates, and actively monitor RIPA use within the authority. 
  6. Learn from Past Reports: If your authority has had previous inspections, review past reports and ensure all recommendations have been addressed. Be ready to explain what improvements have been made. 
  7. Stay Connected: Keep up with Home Office guidance, IPCO publications and professional networks. Sharing good practice with other local authorities can help avoid common pitfalls.

Training and Awareness 

The last annual report (2023) published by IPCO states: 

“As a general rule, we encourage local authorities to ensure that authorising officers (AOs) and those members of staff engaged in investigative or enforcement roles, receive either classroom-based or online training from a trusted supplier on an annual or biennial basis.” 

When it comes to training, there is no one-size-fits-all solution. It should be tailored to the audience, their role and how frequently they use surveillance powers. Consider:

  • Initial Training for New Staff: Any officer designated as an Authorising Officer or investigator must receive formal RIPA training before undertaking the role. 
  • Refresher Training: Aim for annual refresher sessions. Even if you’ve had no activity, this keeps knowledge alive and demonstrates proactive governance. 
  • Wider Awareness Training: Consider regular briefings for investigative and enforcement teams so they understand when RIPA applies and how to seek authorisation. 

By embedding a culture of continual learning, maintaining robust policies and records, and keeping oversight active, you’ll not only pass your inspection with confidence but also ensure your authority upholds the highest standards of accountability and public trust. 

How We Can Help 

Act Now have a range of training solutions to help you raise RIPA awareness and prepare for IPCO inspections:

  • RIPA Essentials: An e-learning course consisting of an animated video followed by an online quiz. In just 30 minutes your employees can learn about the main provisions of Part 2 of RIPA including the different types of covert surveillance, the serious crime test and the authorisation process. The course also covers how RIPA applies to social media monitoring and how to handle the product of surveillance having regard to data protection. 
  • Online Workshops: Our RIPA workshops provide a thorough explanation of the RIPA requirements, processes and documentation to ensure compliance. Case studies and real-life examples help to embed the learning. 
  • In-House Training: We have RIPA experts who can deliver customised in-house training to your organisation, whether online or face-to-face. Our associates include Naomi Mathews, who is a Senior Solicitor and a co-ordinating officer for RIPA at a large local authority in the Midlands. She is also the authority’s Data Protection Officer and Senior Responsible Officer for CCTV.

Retail Under Siege Through AI-Enabled Cyber Attacks

The UK retail sector has come under siege in 2025, with an unprecedented wave of cyber attacks. After the Ticketmaster breach in 2024, where millions of users were affected, one would assume retailers had taken note. However, from Marks & Spencer to Louis Vuitton, companies large and small are grappling with relentless, tech-enhanced intrusions that threaten customer trust and digital resilience. These days it is almost a daily occurrence to receive an email from a company apologising for a data breach, and no retailer seems safe, regardless of size or stature. Sometimes it is a retailer you may not have shopped with for years, at which point you may well find yourself asking, ‘What’s their data retention policy?’
 
Below we take a look at some of the major breaches and attacks of 2025 and what you can do to protect your information online. 

High-Profile Retail Cyberattacks of 2025 

Here’s a snapshot of the most disruptive recent cyber incidents: 

| Company | Date | Attack Type | Impact & Highlights |
| --- | --- | --- | --- |
| Louis Vuitton UK | July 2025 | Data breach | Customer contact details & purchase history stolen; phishing scams followed |
| Marks & Spencer | April 2025 | Ransomware | £3.8M/day in lost revenue; £700M market value wiped; credential theft via vendor |
| Harrods | May 2025 | Attempted breach | Real-time containment; no confirmed data loss but serious operational disruption |
| Co-op UK | May 2025 | Ransomware | Customer data compromised; back-office systems disabled |
| Peter Green Chilled | May 2025 | Ransomware | Disrupted cold-chain deliveries to Tesco, Aldi, Waitrose |
| Victoria’s Secret | Spring 2025 | Web attack | E-commerce platform outage during peak shopping period |

These incidents underscore one clear truth: cybercrime is evolving, and no retailer, no matter its size or prestige, is immune. What is worrying is that even companies with vast resources remain extremely vulnerable.

The Role of AI  

In many of these data breaches, AI was used by hackers to accelerate and deepen the damage. Their tactics included: 

  • Hyper-Personalised Phishing: AI-generated messages mimicked trusted communications, referencing recent purchases to trick recipients. Louis Vuitton customers received convincing fake discount offers. 
  • Credential Cracking and MFA Bypass: AI automated brute-force login attacks, while adversary-in-the-middle techniques stole session tokens to sidestep multi-factor authentication. 
  • Network Reconnaissance: Malicious bots used AI to scan retail systems, identify vulnerabilities, and map out supply chains for deeper impact. 
  • Autonomous Ransomware: Sophisticated strains like DragonForce adapted in real time to avoid detection and self-propagate through connected systems. 
  • Voice Phishing (Vishing): AI-generated voices impersonated IT staff to deceive employees into disclosing access credentials; a tactic especially potent in luxury retail. 

AI has supercharged cybercrime, making attacks faster, more targeted, and far harder to detect. With the emergence of ransomware-as-a-service (RaaS) and dedicated leak sites (DLS), there is now a far more accessible marketplace for our data.

How Consumers Can Protect Their Data 

While companies bear the financial burden of breaches, consumers often suffer the most: through stolen data, financial fraud, and disrupted services. Lessons for consumers include:

  • Even luxury brands are vulnerable – don’t assume prestige equals protection. 
  • Cyberattacks are increasingly tailored based on what you buy, how often you shop, and where you live. 
  • Supply chains and vendor access are weak points; your data might be exposed even if the retailer itself isn’t directly breached. 

Whether you shop in-store or online, these simple steps can dramatically improve the security of your personal data: 

Digital Defence 

  • Use Strong, Unique Passwords: A password manager can help you avoid reuse and weak combinations (see the sketch after this list). 
  • Enable Multi-Factor Authentication: Critical for accounts tied to payments or personal information. 
  • Monitor Your Financial Activity: Check bank statements and credit reports for irregularities. Set up alerts where possible. 
  • Be Phishing-Aware: Always verify communications by visiting the retailer’s official website. Don’t click suspicious links or download unexpected attachments. 
  • Don’t Save Your Payment Data: Avoid saving your payment and address details with online retailers wherever possible. 
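As a footnote to the first tip, here is a minimal Python sketch of what “strong and unique” means in practice. In reality a password manager generates, and remembers, these for you:

```python
import secrets
import string

ALPHABET = string.ascii_letters + string.digits + string.punctuation

def generate_password(length: int = 20) -> str:
    """A cryptographically secure random password - one per account."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

print(generate_password())  # e.g. 'fK;2vW...' - never reuse it across sites
```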

Data Discipline 

  • Limit the Personal Data You Share: Don’t offer extra details to loyalty schemes or retailers unless absolutely necessary. 
  • Freeze Your Credit (If Breached): Prevent identity thieves from opening new accounts using your stolen details. 

Payment Hygiene 

  • Use Credit Cards Online: They offer better fraud protection and don’t expose your actual bank balance. In addition, you have certain buyer protections when paying by credit card. 
  • Avoid Public Wi-Fi for Shopping: Use a VPN or shop from secure, private networks. 

The digital age has made shopping easier, but also riskier. Cybersecurity now requires a partnership between retailers and consumers. Companies must implement zero-trust architectures, AI-powered threat detection and employee cyber-awareness training. Meanwhile, consumers should stay informed, cautious, and quick to respond when their personal data is at risk.

According to a recent Stanford University study, human error accounted for 88% of data breaches, and a recent Accenture study found a 97% increase in cyber threats since the start of the Russia/Ukraine war.
 
We have two workshops coming up (How to Increase Cyber Security in your Organisation and Cyber Security for DPOs) which are ideal for organisations who wish to upskill their employees about cyber security. 

When AI Misses the Line: What Wimbledon 2025 Teaches Us About Deploying AI in the Workplace 

This year’s Wimbledon Tennis Championships were not just a showcase for elite athleticism but also a high-profile test of Artificial Intelligence. For the first time in the tournament’s 148-year history, all line calls across its 18 courts were made entirely by Hawk-Eye Live, an AI-assisted system that has replaced human line judges. This follows, amongst others, the semi-automated offside technology deployed in last year’s football Champions League after its success at the Qatar World Cup.

The promise? Faster decisions, greater consistency, and reduced human error. 
The reality? Multiple malfunctions, public apologies, and growing mistrust among players and fans (not to mention losing the ‘best dressed officials’ in sport). 

What Went Wrong? 

  • System Failure Mid-Match: During a high-stakes women’s singles match between Anastasia Pavlyuchenkova and Sonay Kartal, the line-calling system was accidentally switched off for several points. No alerts were raised, and play continued without accurate line calls. Wimbledon officials later admitted human error was to blame, not the AI. 
  • Misclassification Errors: In the men’s quarter-final between Taylor Fritz and Karen Khachanov, Hawk-Eye incorrectly called a rally forehand a “fault,” apparently confusing it with a serve. Play was halted and the point was replayed, leaving fans and players confused and frustrated. 
  • User Experience Failures: Multiple players, including Emma Raducanu and Jack Draper, complained that some calls were “clearly wrong” and that the system’s announcements were too quiet to hear amid crowd noise. Some players called for the return of human line judges, citing a lack of trust in the technology.  

Lessons for AI and IG Professionals 

Wimbledon’s AI hiccup offers more than a headline; it surfaces deep issues around trust, oversight, and operational design that are relevant to any AI deployment in the workplace. Here are the key lessons: 

1. Automation ≠ Autonomy 

The Wimbledon system is not truly autonomous; it relies on human operators to activate it before each match. When staff forgot to do so, the AI didn’t intervene or alert anyone. This exposes a major pitfall: automated systems are only as reliable as their orchestration layers. 

Governance Principle: Ensure clear workflows and audit trails around when and how AI systems are initiated, paused, or overridden. Build in fail-safe triggers and status checks to prevent silent failures. 
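As a rough illustration of that principle, a pre-activation status check along the lines of the Python sketch below would have turned Wimbledon’s silent failure into a loud one. The is_active() interface and the audit log are hypothetical, not a real API:

```python
from datetime import datetime, timezone

class SilentFailureError(RuntimeError):
    """Raised when an automated system should be live but is not."""

def assert_system_active(system, audit_log: list) -> None:
    """Check the system is live before relying on it, and record the check."""
    entry = {"time": datetime.now(timezone.utc).isoformat()}
    if not system.is_active():  # is_active() is an assumed interface
        entry["event"] = "pre-match check FAILED: system inactive"
        audit_log.append(entry)
        raise SilentFailureError("Line-calling system inactive - do not proceed")
    entry["event"] = "pre-match check passed: system active"
    audit_log.append(entry)
```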

2. Build in Redundancy and Exception Handling 

AI systems excel at pattern recognition in controlled environments but can fail spectacularly at edge cases. Wimbledon’s AI was likely trained on thousands of hours of ball trajectories – but it still confused a forehand rally shot with a serve under unusual conditions. 

Governance Principle: Plan for edge case management. When the AI encounters uncertainty, it should either defer to human review or trigger a fallback protocol.  
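A minimal sketch of that fallback pattern, assuming the model reports its own confidence score (the threshold below is purely illustrative and would need to be set from testing, not guesswork):

```python
from dataclasses import dataclass

@dataclass
class Call:
    label: str         # e.g. "in" or "out"
    confidence: float  # model's self-reported certainty, 0.0 to 1.0

CONFIDENCE_THRESHOLD = 0.90  # illustrative value only

def resolve(call: Call, human_review) -> str:
    """Act on confident outputs; defer uncertain ones to a human."""
    if call.confidence >= CONFIDENCE_THRESHOLD:
        return call.label
    return human_review(call)  # fallback protocol: a human decides

# Usage: resolve(Call("out", 0.62), human_review=lambda c: "human ruling")
```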

3. Usability is a Core Component of Accuracy 

Even when the AI was functioning correctly, players couldn’t always hear the line calls due to low audio volume. What good is a precise call if the user can’t perceive it? 

Governance Principle: Don’t separate accuracy from usability. A technically correct output must be understandable, accessible, and actionable to its end users. Invest in UI/UX design early in the AI lifecycle. 

4. Transparency Builds Trust 

Wimbledon’s initial response (vague statements and slow clarifications) only fuelled player frustration. Trust was eroded not just because of the error, but because of how it was handled. 

Governance Principle: When deploying AI, especially in high-stakes environments, build a culture of transparent accountability. Log decisions, explain anomalies, and communicate clearly when things go wrong. 

5. Hybrid Systems Are Often More Effective Than Pure AI 

While Wimbledon has fully replaced line judges with AI, there’s a strong case for a hybrid model. A combination of automated systems with empowered human oversight could preserve both accuracy and human judgment. 

Governance Principle: Consider augmented intelligence models, where AI supports rather than replaces human decision-makers. This ensures operational continuity and enables learning from both machine and human feedback. 

6. Respect Context and Culture 

Wimbledon isn’t just any tournament; it’s steeped in tradition, where human line judges are part of the spectacle. Removing them altered the tournament’s character, sparking emotional backlash from players and spectators alike. 

Governance Principle: Understand the organisational and cultural context where AI is deployed. Technology doesn’t operate in a vacuum. Change management, stakeholder engagement, and empathy are as important as algorithms. 

The problems with Wimbledon’s AI line-calling system are symptoms of incomplete design thinking. Whether you’re deploying AI in HR analytics, document classification, or customer service, the Wimbledon experience shows that trust isn’t just built on data; it’s built on reliability, clarity, and human-centred design. 

In a world increasingly mediated by automation, we must remember: AI doesn’t replace the need for governance. It raises the stakes for getting it right. And we just wish it had been around for the “Hand of God” goal!

Are you looking to enhance your career with an AI governance qualification? Our AI Governance Practitioner Certificate is designed to equip compliance professionals with the essential knowledge and skills to navigate this transformative technology while upholding the highest standards of data protection and information governance. The first course was fully booked, and we have added more dates.

The New Data (Use and Access) Act 2025 

The Data (Use and Access) Act 2025 received Royal Assent on 19th June 2025. It is important to note that the new Act will not replace current UK data protection legislation. Rather it will amend the UK GDPR as well as the Privacy and Electronic Communications (EC Directive) Regulations 2003 (PECR) and the Data Protection Act 2018. Most of these amendments will commence in stages, 2 to 12 months after Royal Assent. Exact dates for each measure will be set out in commencement regulations. 

The Bill was introduced into Parliament in October last year. It was trailed in the King’s Speech in July (under its old name of the “Digital Information and Smart Data Bill”) with His Majesty announcing that there would be “targeted reforms to some data laws that will maintain high standards of protection but where there is currently a lack of clarity impeding the safe development and deployment of some new technologies.” However, this statement of intent does not match the reality; many of the core provisions are a “cut and paste” of the Data Protection and Digital Information (No. 2) Bill (“DP Bill”), which was dropped by the Conservative Government in the Parliamentary “wash up” stage before last year’s snap General Election.

Key Provisions 

Let’s examine the key provisions of the new Act.  

Smart Data: The Act retains the provisions from the DP Bill that will enable the creation of a legal framework for Smart Data. This involves companies securely sharing customer data, upon the customer’s (business or consumer) request, with authorised third-party providers (ATPs) who can enhance the customer data with broader, contextual ‘business’ data. These ATPs will provide the customer with innovative services to improve decision making and engagement in a market. Open Banking is the only current example of a regime that is comparable to a ‘Smart Data scheme’. The Act will give such schemes a statutory footing, from which they can grow and expand.  

Digital Identity Products: Just like its predecessor, the Act contains provisions aimed at establishing digital verification services including digital identity products to help people quickly and securely identify themselves when they use online services e.g. to help with moving house, pre-employment checks and buying age restricted goods and services. It is important to note that this is not the same as compulsory digital ID cards as some media outlets have reported. 

Research Provisions: The Act keeps the DP Bill’s provisions that clarify that companies can use personal data for research and development projects, as long as they follow data protection safeguards.  

Legitimate Interests: The Act retains the concept of ‘recognised legitimate interests’ under Article 6 of the UK GDPR: specific purposes for personal data processing, such as national security, emergency response, and safeguarding, for which Data Controllers will be exempt from conducting a full “Legitimate Interests Assessment” when processing personal data.

Subject Access Requests: The Act makes it clear that Data Controllers only have to make reasonable and proportionate searches when someone asks for access to their personal data.

Automated Decision Making: Like the DP Bill, the Act seeks to limit the right, under Article 22 of the UK GDPR, for a data subject not to be subject to automated decision making or profiling to only cases where Special Category Data is used. Under new Article 22A, a decision would qualify as being “based solely on automated processing” if there was “no meaningful human involvement in the taking of the decision”. This could give the green light to companies to use AI techniques on personal data scraped from the internet for the purposes of pre-employment background checks.

International Transfers: The Act maintains most of the DP Bill’s international transfer provisions. There will be a new approach to the test for adequacy applied by the UK Government to countries (and international organisations) and when Data Controllers are carrying out a Transfer Impact Assessment (TIA). The threshold for this new “data protection test” will be whether a jurisdiction offers protection that is “not materially lower” than under the UK GDPR.

Health and Social Care Information: The Act maintains, without any changes, the provisions that establish consistent information standards for health and adult social care IT systems in England, enabling the creation of unified medical records accessible across all related services. 

PECR Changes: One of the most significant changes, copied from the DP Bill, is the increase in fines for breaches of PECR from £500,000 to UK GDPR levels, meaning organisations could face fines of up to £17.5m or 4% of global annual turnover (whichever is higher) for the most serious infringements. Other changes include allowing cookies to be used without consent for the purposes of web analytics and to install automatic software updates, and extending the “soft opt-in” for electronic marketing to charities.
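The “whichever is higher” test is simple arithmetic, as this short sketch (with an assumed turnover figure) shows:

```python
STATUTORY_CAP = 17_500_000  # £17.5m

def max_pecr_fine(global_annual_turnover: float) -> float:
    """Higher of the fixed cap and 4% of global annual turnover."""
    return max(STATUTORY_CAP, 0.04 * global_annual_turnover)

# A firm turning over £1bn could face up to £40m, since 4% exceeds £17.5m.
print(f"£{max_pecr_fine(1_000_000_000):,.0f}")  # £40,000,000
```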

A full list of the changes to the UK data protection regime can be read on the ICO website.  

What is not in the new Act? 

Most of the controversial parts of the DP Bill have not made it into the Act. These include:

  • Replacing the terms “manifestly unfounded” or “excessive” requests, in Article 12 of the UK GDPR, with “vexatious” or “excessive” requests. Explanation and examples of such requests would also have been included.  
  • Exempting all controllers and processors from the duty to maintain a ROPA, under Article 30, unless they are carrying out high risk processing activities.  
  • The “strategic priorities” mechanism, which would have allowed the Secretary of State to set binding priorities for the Information Commissioner. 
  • The requirements for the Information Commissioner to submit codes of practice to the Secretary of State for review and recommendations.  

The UK’s adequacy status under the EU GDPR now expires on 27th December following the recent announcement of a six-month extension. Whilst the EU will commence a formal review of adequacy now that the Act has received Royal Assent, nothing in the Act will jeopardise the free flow of personal data between the EU and the UK. The situation would perhaps have been different had the DP Bill made it onto the statute books.

AI and Copyright 

Much of the delay to the Bill’s passage was caused by an issue which was not originally intended to be addressed in the Bill: the use of copyright works to train AI. Like the monster plant in Little Shop of Horrors, AI has an insatiable appetite; for data though rather than food. AI applications need a constant supply of data to train (and improve) their output algorithms. This obviously concerns copyright holders such as musicians and writers whose work may be used to train AI models to produce similar output, without the former receiving any financial compensation. A number of copyright infringement lawsuits are set to hit the courts soon. Amongst them, Getty Images is suing Stability AI, accusing it of using Getty images to train its Stable Diffusion system, which can generate images from text inputs. Similar lawsuits have been launched in the US by novelists and news outlets.

During the passage of the Bill through Parliament, there was strong disagreement between the Lords and the Commons over an amendment introduced by the crossbench peer and former film director Beeban Kidron. The amendment would have required AI developers to be transparent with copyright owners about using their material to train AI models. 400 British musicians, writers and artists, including Sir Paul McCartney, signed a letter urging the Government to adopt the amendment. They argued that failing to do so would mean them “giving away” their work to tech firms.

In the end, Baroness Kidron dropped her amendment following its repeated rejection in the Commons. I expect this issue to raise its head again soon. The Government’s consultation on AI and copyright ended in February. Amongst other options, it proposed giving copyright holders the right to opt out of their works being used for training AI. However, the music industry believes that such a measure would offer insufficient protection for copyright holders. In an interview with the BBC, Sir Elton John described the government as “absolute losers” and said he feels “incredibly betrayed” over the Government’s plans.

Once the Government publishes its response to the copyright consultation, it will have to consider how to take the matter forward. Whether this comes in the form of a new copyright bill or an AI regulation bill, expect more parliamentary wrangling as well as celebrity interviews.

Data protection professionals need to assess the changes to the UK data protection regime. Our half day workshop will explore the new Act in detail giving you an action plan for compliance. A revised UK GDPR Handbook is now available incorporating the changes made by the DUA Act.

The Data (Use and Access) Bill Ready for the Statute Books 

The Data (Use and Access) Bill has cleared the final hurdle in Parliament and will soon become the Data (Use and Access) Act 2025 following Royal Assent.  

The new Act will amend the UK GDPR as well as PECR and the Data Protection Act 2018. The key changes are summarised in our blog post here. Most of these are not particularly controversial and were in the Data Protection and Digital Information Bill, which failed to make it through the Parliamentary “wash up” stage when the General Election was announced last year.

Much of the delay to the passing of the Bill was caused by amendments proposed by Baroness Kidron in the House of Lords. She wanted more protection for artists whose work is often used to train AI models, especially Generative AI. Her amendment would have required developers to be transparent with copyright owners about using their material to train AI models. 400 British musicians, writers and artists signed a letter saying that the Government’s failure to adopt the amendment would mean them “giving away” their work to tech firms. In the end, following repeated rejections of her amendment in the House of Commons during the “ping pong” stage, Baroness Kidron decided to withdraw gracefully. Expect this issue to come up again when the Government eventually brings forth the AI legislation mentioned in the King’s Speech.

We expect most of the substantive provisions to come into force a few months after Royal Assent. Plenty of time for us to update the UK GDPR Handbook.

Data protection professionals need to assess the changes to the UK data protection regime. A revised UK GDPR Handbook is now available incorporating the changes made by the DUA Act.

Why Risk Management is Essential for IG Professionals 

GDPR compliance is very much about risk management. Throughout the UK and EU GDPR, Data Controllers are required to implement protective measures corresponding to the level of risk of their personal data processing activities. Consequently, risk management is a foundational skill which all data protection and information governance professionals need to develop.  

Risk in the UK GDPR 

Key provisions of the UK GDPR which mandate a risk-based approach include: 

Article 24 Responsibility of the Controller 

“Taking into account the nature, scope, context and purposes of processing as well as the risks of varying likelihood and severity for the rights and freedoms of natural persons, the controller shall implement appropriate technical and organisational measures to ensure and to be able to demonstrate that processing is performed in accordance with this Regulation. Those measures shall be reviewed and updated where necessary.” 

Article 25 Data Protection by Design and by Default 

“Taking into account the state of the art, the cost of implementation and the nature, scope, context and purposes of processing as well as the risks of varying likelihood and severity for rights and freedoms of natural persons posed by the processing, the controller shall, both at the time of the determination of the means for processing and at the time of the processing itself, implement appropriate technical and organisational measures, such as pseudonymisation, which are designed to implement data-protection principles, such as data minimisation, in an effective manner and to integrate the necessary safeguards into the processing in order to meet the requirements of this Regulation and protect the rights of data subjects.” 

Article 32 Security of Processing 

“Taking into account the state of the art, the costs of implementation and the nature, scope, context and purposes of processing as well as the risk of varying likelihood and severity for the rights and freedoms of natural persons, the controller and the processor shall implement appropriate technical and organisational measures to ensure a level of security appropriate to the risk,…” 

Article 33 Notification of a Personal Data Breach to the Commissioner 

“In the case of a personal data breach, the controller shall without undue delay and, where feasible, not later than 72 hours after having become aware of it, notify the personal data breach to the Commissioner, unless the personal data breach is unlikely to result in a risk to the rights and freedoms of natural persons. Where the notification under this paragraph is not made within 72 hours, it shall be accompanied by reasons for the delay.”

Article 34 Communication of a Personal Data Breach to the Data Subject

“When the personal data breach is likely to result in a high risk to the rights and freedoms of natural persons, the controller shall communicate the personal data breach to the data subject without undue delay.” 

Article 35 Data Protection Impact Assessments (DPIAs) 

“Where a type of processing in particular using new technologies, and taking into account the nature, scope, context and purposes of the processing, is likely to result in a high risk to the rights and freedoms of natural persons, the controller shall, prior to the processing, carry out an assessment of the impact of the envisaged processing operations on the protection of personal data.” 

Even where the word ‘risk’ is not explicitly used, the concept underpins a number of data protection principles in the UK (and EU) GDPR. For example: 

Accountability Principle  
Data Controllers must be able to demonstrate compliance. This involves documenting risk assessments, decisions, and mitigations; all of which are key components of risk management. 

Lawfulness, Fairness, and Transparency  
Fair and transparent processing demands that Data Controllers consider the potential impacts on data subjects; essentially, assessing and managing risks to data subjects’ rights. 

Data Minimisation and Purpose Limitation 
Ensuring that only necessary data is collected and processed inherently involves evaluating what is proportionate and appropriate, which are concepts rooted in risk assessment. 

Practical Skills DPOs and IG Officers Need 

Given the prominence of risk in the GDPR, DPOs and IG professionals should cultivate the following competencies: 

  • Risk Identification: Being able to recognise threats to data confidentiality, integrity, and availability; whether technical (e.g. cyberattacks) or organisational (e.g. poor access controls). 
  • Risk Analysis: Assessing the likelihood and potential impact of risks and understanding their relevance to the rights and freedoms of individuals. 
  • Risk Evaluation and Prioritisation: Comparing estimated risks against risk tolerance and legal thresholds (e.g. what constitutes ‘high risk’ under Article 35); see the scoring sketch after this list. 
  • Mitigation Planning: Developing and implementing controls to reduce risk to an acceptable level; whether through encryption, training, anonymisation, or policy development. 
  • Ongoing Monitoring: Risk is not static. DPOs must continuously monitor changes in technology, regulation, and business practices that may affect data risk profiles. 
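As a worked example of the analysis and prioritisation steps, here is a minimal likelihood-times-impact scoring sketch in Python. The five-point scale and the thresholds are illustrative only; any real scheme should follow your organisation’s own risk methodology and appetite:

```python
RATINGS = {"very low": 1, "low": 2, "medium": 3, "high": 4, "very high": 5}

def score(likelihood: str, impact: str) -> int:
    """Classic likelihood x impact risk score on a 5x5 matrix."""
    return RATINGS[likelihood] * RATINGS[impact]

def priority(risk_score: int) -> str:
    """Map a raw score to an (illustrative) treatment priority."""
    if risk_score >= 15:
        return "high risk: mitigate before processing (consider a DPIA)"
    if risk_score >= 8:
        return "medium risk: mitigate and monitor"
    return "low risk: accept and review periodically"

# An unlikely but severe event, e.g. a breach of special category data:
print(priority(score("low", "very high")))  # medium risk: mitigate and monitor
```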

For data protection and IG professionals, risk management is not a ‘nice-to-have’; it is a foundational skill.  

Interested in developing your risk management skills further? Consider enrolling on our new Risk Management in IG workshop.

Article 15 GDPR and “Meaningful Information” about Automated Decision-Making: What does this mean for AI? 

Article 15 of the EU and UK GDPR not only gives Data Subjects the right to obtain their personal data from the Data Controller but also the right to receive additional information about the processing. This includes: 

 “the existence of automated decision-making, including profiling, referred to in Article 22(1) and (4) and, at least in those cases, meaningful information about the logic involved, as well as the significance and the envisaged consequences of such processing for the data subject.” 

A recent ruling by the European Court of Justice (ECJ) sheds light on the concept of “meaningful information” and will have implications for those deploying AI systems. The case in question, C-203/22 Dun & Bradstreet Austria GmbH, concerns an Austrian mobile telecom operator. The company refused to enter into a contract with a customer due to their poor credit score. This decision was based on an automated credit evaluation provided by a third-party credit agency. 

The customer requested access to the information held by the credit agency so that they could understand the decision. The customer was dissatisfied with the disclosed information and so took legal action to demand further clarification on the logic behind the automated decision-making process. The core issue was whether the credit agency was obligated to provide more detailed information about the automated process under Article 15(1)(h) GDPR (as quoted above). The agency argued that doing so would expose trade secrets. However, the court ruled that it must provide “meaningful information about the logic involved” as required by GDPR. 

The Enforcement Court in Austria, tasked with enforcing the ruling, referred the following questions to the ECJ: 

  1. Does “meaningful information about the logic involved” require the controller to provide a comprehensive explanation of the procedures and principles used to come to a specific decision? 
  2. In cases where the controller argues that the requested information involves third-party data protected by the GDPR or trade secrets, is the controller obliged to submit the potentially protected information to supervisory authorities or courts for review?

Meaningful Information 

In response to the first question, the ECJ confirmed that the phrase “meaningful information about the logic involved” fundamentally refers to all relevant details concerning the automated decision-making process. This includes an explanation of the procedures and principles used to arrive at the decision. 

While the ECJ made it clear that “meaningful information” does not require the disclosure of complex algorithms, it does require a sufficiently detailed explanation of the decision-making process. It emphasised that, in line with Articles 13(2)(f) and 14(2)(g) of the GDPR, which establish transparency requirements, the information must be clear, concise, and easily understandable. Data Subjects should be able to comprehend how their personal data is being processed. The right of access enshrined in Article 15 of the GDPR allows individuals to verify the accuracy and lawfulness of the processing of their personal data, which is a crucial safeguard under Article 22(3) that governs automated decision-making and profiling. 
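What might “meaningful information” look like in practice? One hypothetical approach is for the deployer to keep a plain-language record alongside each automated decision, ready to be disclosed on request. The structure and field names below are illustrative, not a legal template:

```python
from dataclasses import dataclass

@dataclass
class DecisionRecord:
    outcome: str            # e.g. "contract refused due to credit score"
    data_categories: list   # categories of personal data considered
    main_factors: list      # plain-language reasons, most significant first
    how_to_contest: str     # route to human review (Article 22(3))

record = DecisionRecord(
    outcome="contract refused due to low credit score",
    data_categories=["payment history", "address history", "open credit lines"],
    main_factors=["two missed payments in the last 12 months"],
    how_to_contest="contact the DPO to request human review of the decision",
)
```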

Trade Secrets  

On the second question, the ECJ struck a delicate balance between Data Subjects’ right to access their data and the protection of third-party rights, such as trade secrets. It reiterated that while data protection is a fundamental right, it must be weighed against intellectual property protections as outlined in Recital 63 of the GDPR. 

The ECJ said that if providing access to personal data could violate the rights of third parties, such as revealing trade secrets, the controller must assess whether it is possible to disclose the information without infringing on third party rights. In cases of conflict, the issue must be referred to the relevant supervisory authority or court to decide on an appropriate solution. 

Importantly, the ECJ ruled that no Member State can impose a blanket ban on disclosing business or trade secrets, as doing so would undermine the GDPR’s requirement for a balanced approach to competing rights. In situations where access requests are contested, controllers are required to provide relevant information to supervisory authorities or courts, enabling an informed decision based on the principle of proportionality. 

So what are the implications of this ECJ ruling for AI systems?

While the ruling specifically focusses on the EU GDPR, it underscores the growing importance of transparency in data processing practices, especially when implementing automated decision-making processes. Organisations using AI for automated decision-making must ensure transparency by providing data subjects with clear, understandable explanations of how decisions are made, even if complex algorithms are involved. Developers must design systems that can deliver “meaningful information” about the logic behind automated outcomes, while deployers must ensure this information is communicated effectively to individuals. Transparency is also a key theme of the recently enacted EU AI Act.

Act Now recently launched the AI Governance Practitioner Certificate. This course is designed to equip compliance professionals with the essential knowledge and skills to navigate this transformative technology being implemented within their organisations while upholding the highest standards of data protection and information governance. 

What is the Role of IG Professionals in AI Governance? 

The rapid rise of AI deployment in the workplace brings a host of legal and ethical challenges. AI governance is essential to address these challenges and ensure AI systems are transparent, accountable, and aligned with organisational values.

AI governance requires a multidisciplinary approach involving, amongst others, IT, legal, compliance and industry specialists. IG professionals also possess a unique skill set that makes them key stakeholders in the governance process. Here’s why they should actively position themselves to play a key role in AI governance within their organisations. 

AI Governance is Fundamentally a Data Governance Issue 

At its core, AI is a data-driven technology. The fairness and reliability of AI models depend on the quality, accuracy, and management of data. If AI systems are trained on poor-quality or biased data, they can produce flawed and discriminatory outcomes. (See Amnesty International’s report into police data and algorithms.)  

IG professionals specialise in ensuring that data is accurate, well-structured, and fit for purpose. Without strong data governance, organisations risk deploying AI systems that amplify biases, make inaccurate predictions, or fail to comply with regulatory requirements. 

Regulatory and Compliance Expertise is Critical 

AI governance is increasingly being shaped by regulatory frameworks around the world. The EU AI Act and regulations and guidance from other jurisdictions highlight the growing emphasis on AI accountability, transparency, and risk management. 

IG professionals have expertise in interpreting legislation (such as GDPR, PECR and DPA amongst others) which positions them to help organisations navigate the complex legal landscape surrounding AI. They can ensure that AI governance frameworks comply with data protection principles, consumer rights, and ethical AI standards, reducing the risk of legal penalties and reputational damage. 

Managing AI Risks and Ensuring Ethical AI Practices 

AI introduces new risks, including algorithmic bias, privacy violations, security vulnerabilities, and explainability challenges. Left unchecked, these risks can undermine trust in AI and expose organisations to significant operational and reputational harm. 

IG professionals excel in risk management (after all, that is what DPIAs are about). They are trained to assess and mitigate risks related to data security, data integrity, and compliance, which directly translates to AI governance. By working alongside IT and ethics teams, they can help establish clear policies, accountability structures, and risk assessment frameworks to ensure AI is deployed responsibly.

Bridging the Gap Between IT, Legal, and Business Functions 

One of the biggest challenges in AI governance is the lack of alignment between different business functions. AI development is often led by technical teams, while compliance and risk management sit with legal and governance teams. Without effective collaboration, governance efforts can become fragmented or ineffective. 

IG professionals act as natural bridges between these groups. Their work already involves coordinating across departments to align data policies, privacy standards, and regulatory requirements. By taking an active role in AI governance, they can ensure cross-functional collaboration, helping organisations balance innovation with compliance. 

Addressing Data Privacy and Security Concerns 

AI often processes vast amounts of sensitive personal data, making privacy and security critical concerns. Organisations must ensure that AI systems comply with data protection laws, implement robust security measures, and uphold individuals’ rights over their data. 

IG and Data Governance professionals are well-versed in data privacy principles, data minimisation, encryption, and access controls. Their expertise is essential in ensuring that AI systems are designed and deployed with privacy-by-design principles, reducing the risk of data breaches and regulatory violations. 

AI Governance Should Fit Within Existing Frameworks 

Organisations already have established governance structures for data management, records retention, compliance, and security. Instead of treating AI governance as an entirely new function, it should be integrated into existing governance models. 

IG and Data Governance professionals are skilled at implementing governance frameworks, policies, and best practices. Their experience can help ensure that AI governance is scalable, sustainable, and aligned with the organisation’s broader data governance strategy. 

Proactive Involvement Prevents Being Left Behind 

If IG professionals do not step up, AI governance may be driven solely by IT, data science, or business teams. While these functions bring valuable expertise, they may overlook regulatory, ethical, and risk considerations. Fundamentally, as IG professionals, our goal is to ensure organisations are using data and any new technology responsibly. 

So we are not saying that IG and DP professionals should become the new AI overlords. But by proactively positioning themselves as key stakeholders in AI governance, IG and Data Governance professionals ensure that organisations take a holistic approach – one that balances innovation, compliance, and risk management. Waiting to be invited to the AI governance conversation risks being sidelined in decisions that will have long-term implications for data governance and organisational risk. 

Final Thoughts 

To reiterate, AI governance should not be the sole responsibility of IG and Data Governance professionals – it requires a collaborative, cross-functional approach. However, their expertise in data integrity, privacy, compliance, and risk management makes them essential players in the AI governance ecosystem. 

As organisations increasingly rely on AI-driven decision-making, IG and Data Governance professionals must ensure that these systems are accountable, transparent, and legally compliant. By stepping up now, they can shape the future of AI governance within their organisations and safeguard them from regulatory, ethical, and operational pitfalls. 

Our new six module AI Governance Practitioner Certificate will empower you to understand AI’s potential, address its challenges, and harness its power responsibly for the public benefit.  

AI in Local Government: Navigating the Legal Issues 

Artificial Intelligence is revolutionising many sectors, and local government is no exception. Councils are increasingly integrating AI to enhance service delivery, optimise resource management, and engage with citizens. AI Use cases include: 

  • Infrastructure Maintenance and Management: Blackpool Council uses AI for road maintenance through Project Amber, employing AI-powered satellite imagery to detect road damage and potholes. 
  • Public Engagement: Newham Council uses Chatbot Max, a multilingual chatbot, to assist residents with parking permits and penalty charge queries. The council says that in six months, the chatbot handled over 10,000 questions, saved 84 hours in call time, and generated £40,000 in savings.  
  • Crime Prevention and Detection: Wolverhampton Council has installed AI-powered CCTV cameras to crack down on fly-tippers. The cameras have 360-degree vision and can recognise when someone is fly-tipping, sending an immediate report to the Council. 
  • Predictive Analytics for Social Services: In 2018 Hackney Council trialled the Early Help Predictive System. By analysing data on debt, housing, unemployment, school attendance, and domestic violence, the AI system profiled families to determine their need for intervention. Although this pilot programme was dropped a year later, there are many other AI tools which aim to help cash-strapped councils speed up social work. One such tool is Magic Notes, which records social work meetings and emails the social worker a transcript, summary and suggested actions for inclusion in case notes. 

Expect many more AI use cases soon, as the public sector moves to make good on the Prime Minister’s recent speech, in which he pledged that the Government will use AI’s power to “turbocharge” the economy and improve public services.

Legal Considerations  

While AI offers numerous benefits, several legal issues have to be navigated to ensure responsible and lawful use. These include: 

Data Protection and Privacy: Where personal data is involved in training or deploying AI models, the GDPR of course applies. The transparency provisions and the requirement for a legal basis are of particular importance. In 2022, the Information Commissioner’s Office (ICO) issued a fine of more than £7.5 million to Clearview AI for GDPR breaches. This related to the way the company compiled its online database containing 20 billion images of people’s faces and data scraped from the internet. The company did manage to successfully appeal the fine, but the ICO, and other GDPR regulators in the EU, have issued clear warnings to AI companies to ensure they comply with GDPR.

Transparency and Explainability: The decision-making processes of AI systems can be opaque. Clear information about how AI systems operate and make decisions should be provided. The London Borough of Camden has co-created a Data Charter with residents to ensure clarity and accessibility regarding data use, including AI applications. They produced accessible communications and animated explainers to demystify AI processes for the public.  

Bias and Discrimination: AI systems trained on biased data can perpetuate existing inequalities. Last year, a black Uber Eats driver received a payout after “racially discriminatory” facial-recognition checks prevented him accessing the app to secure work. Councils must be vigilant in auditing AI algorithms to detect and mitigate biases. This involves regular assessments and adjustments to ensure AI applications promote fairness and equality. 

Intellectual Property and Copyright: The use of AI, especially Generative AI applications like ChatGPT, may involve the use of copyrighted materials, raising intellectual property concerns. In December, the Government launched a consultation on Copyright and Artificial Intelligence.  

Accountability and Liability: Determining liability when AI systems cause harm is a complex legal issue. Clear accountability frameworks must be established ensuring that there is always human oversight of AI decisions. This includes defining who is responsible for AI-driven actions and implementing mechanisms for redress in cases of error. 

Regulatory Compliance: There is still no sign of the AI Bill which was mentioned in the King’s Speech. However, there is plenty of AI guidance for the public sector. The recently published AI Playbook for the UK Government updates and expands on the Generative AI Framework for HMG. It aims to “help government departments and public sector organisations harness the power of a wider range of AI technologies safely, effectively, and responsibly.”

The adoption of AI in local government presents a unique challenge especially for compliance professionals. By developing a deeper understanding of AI, they can play a leading role in addressing the legal and ethical dilemmas posed by emerging AI technologies as well as position themselves as forward-thinking leaders who can bridge the gap between law, ethics, and technology.  

Act Now recently launched the AI Governance Practitioner Certificate. This course is designed to equip compliance professionals with the essential knowledge and skills to navigate this transformative technology while upholding the highest standards of data protection and information governance.   

We are registering interest in this course which, subject to demand, will run in July, October and November. Register your interest now (no obligation).