EU Leads Global AI Regulation with Landmark Legislation

European representatives in Strasbourg recently concluded a marathon 37-hour negotiation, resulting in the world’s first comprehensive framework for regulating artificial intelligence. This ground-breaking agreement, facilitated by European Commissioner Thierry Breton and Spain’s AI Secretary of State, Carme Artigas, is set to shape how technologies from social media to search engines operate, with major implications for the companies behind them. 

The deal, achieved after lengthy negotiations and hailed as a significant milestone, puts the EU at the forefront of AI regulation globally, ahead of the US, China, and the UK. The new legislation, expected to be enacted by 2025, sets out comprehensive rules for AI applications, including a risk-based system to address potential threats to health, safety, and human rights. 

Key components of the agreement include strict controls on AI-driven surveillance and real-time biometric technologies, with specific exceptions for law enforcement under certain circumstances. The European Parliament secured a ban on such technologies, except in cases of terrorist threats, searches for victims, or serious criminal investigations. 

MEPs Brando Benifei and Dragoș Tudorache, who led the negotiations, emphasised the aim of developing an AI ecosystem in Europe that prioritises human rights and values. The agreement also includes provisions for independent authorities to oversee predictive policing and uphold the presumption of innocence. 

Tudorache highlighted the balance struck between equipping law enforcement with necessary tools and banning AI technologies that could pre-emptively identify potential criminals. (Minority Report anyone?)

The highest-risk AI systems will now be regulated based on the computational power required for training, with GPT-4 a notable example and currently the only model meeting this threshold. 

Some Key Aspects 
 
The new EU AI Act delineates distinct regulations for AI systems based on their perceived level of risk, effectively categorising them into “Unacceptable Risk,” “High Risk,” “Generative AI,” and “Limited Risk” groups, each with specific obligations for providers and users. 

Unacceptable Risk 

AI systems deemed a threat to people’s safety or rights will be prohibited. This includes: 

  • AI-driven cognitive behavioural manipulation, particularly of vulnerable groups; for example, voice-activated toys promoting hazardous behaviours in children. 
  • Social scoring systems that classify individuals based on behaviour,
    socio-economic status, or personal characteristics. 
  • Real-time and remote biometric identification systems, like facial recognition. 
  • Exceptions exist, such as “post” remote biometric identification for serious crime investigations, subject to court approval. 

High Risk 

AI systems impacting safety or fundamental rights fall into the high-risk category, subdivided into: 

  • AI in EU-regulated product safety categories, like toys, aviation, cars, medical devices, and lifts. 
  • Specific areas requiring EU database registration, including biometric identification, critical infrastructure management, education, employment, public services access, law enforcement, migration control, and legal assistance. 
  • High-risk AI systems must undergo pre-market and lifecycle assessments. 

Generative AI 

Generative AI systems, like ChatGPT, must adhere to transparency requirements: 

  • Disclosing AI-generated content. 
  • Preventing generation of illegal content. 
  • Publishing summaries of copyrighted data used in training. 
Limited Risk 

These AI systems require basic transparency for informed user decisions, particularly for AI that generates or manipulates visual and audio content, like deepfakes. Users should be aware when interacting with AI. 
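
To make the four-tier structure concrete, here is a purely illustrative Python sketch of how the Act’s categories might be modelled. It is a simplification for illustration only: the field names, the list of high-risk domains and the mapping logic are assumptions for this example, not a statement of the legal tests in the Act.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited outright
    HIGH = "high"                  # pre-market and lifecycle assessments
    GENERATIVE = "generative"      # transparency obligations
    LIMITED = "limited"            # basic transparency only

# Hypothetical, simplified set of high-risk domains for this sketch.
HIGH_RISK_DOMAINS = {"medical_devices", "critical_infrastructure",
                     "education", "employment", "law_enforcement"}

def classify(system: dict) -> RiskTier:
    """Map a system's (hypothetical) characteristics to a risk tier."""
    if system.get("social_scoring") or system.get("realtime_biometric_id"):
        return RiskTier.UNACCEPTABLE
    if system.get("domain") in HIGH_RISK_DOMAINS:
        return RiskTier.HIGH
    if system.get("generates_content"):
        return RiskTier.GENERATIVE
    return RiskTier.LIMITED

# A content-generating chatbot lands in the generative tier.
print(classify({"generates_content": True}))  # RiskTier.GENERATIVE
```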

The legislation sets a precedent for future digital regulation. As we saw with the GDPR, governments outside the EU used the legislation as a foundation for their own laws, and many corporations adopted European privacy standards for their businesses worldwide for the sake of efficiency. This could easily happen with the EU AI Act, with governments using it as a ‘starter for ten’. It will be interesting to see how the legislation addresses the algorithmic biases found in current iterations of the technology, from facial recognition to other automated decision-making systems.

The UK did publish its AI White Paper in March of this year and says it follows a “Pro-Innovation” approach. However, it seems to have decided to go ‘face first’ before any legislation is passed, with facial recognition software recently deployed at the Beyoncé concert, King Charles’s coronation and the Formula One Grand Prix. For many, it is the impact of the decisions this AI-powered software makes that is worrying. The ICO does have useful guides on the use of AI, which can be found here. 

As artificial intelligence technology rapidly advances, exemplified by Google’s impressive Gemini demo, the urgency of comprehensive regulation has become increasingly apparent. The EU has signalled its intent to avoid the oversights seen in the unchecked expansion of the tech giants and to be at the forefront of regulating this fascinating technology, ensuring its ethical and responsible utilisation. 

Join our Artificial Intelligence and Machine Learning: How to Implement Good Information Governance workshop for hands-on insights, key resource awareness, and best practices, ensuring you’re ready to navigate AI complexities fairly and lawfully.

ICO Reprimand for NHS Patient Data Breach

In a concerning revelation of data security lapses, NHS Fife has been formally reprimanded by the Information Commissioner’s Office (ICO) following an incident where an unauthorised individual accessed sensitive patient information. The breach occurred in a hospital ward and highlights key learnings for all organisations regarding security protocols for personal data.

Incident Overview

The case came to light after the ICO discovered that the personal information of 14 patients had been compromised. The incident, which took place in February 2023, involved an individual who was able to access secure documents and participate in administering care to a patient, highlighting a lack of identity verification checks at the hospital.

ICO Investigation Findings

The ICO’s investigation unveiled several deficiencies in NHS Fife’s approach to data protection. Notably, staff training on safeguarding personal information was found to be inadequate: training completion rates across the hospital stood at only 42%, although on the ward itself the rate was 82%. This low rate was attributed to the Covid-19 pandemic and a three-year training cycle. Additionally, the ICO pointed out that the hospital’s CCTV system had been mistakenly turned off by a staff member before the incident, as part of wider energy-saving measures being implemented across the hospital. Although this would not have prevented the incident, it further complicated the recovery of the missing documents, as the individual could not be identified.

Natasha Longson, ICO Head of Investigations, stressed the importance of stringent data security in healthcare. “Patient data is highly sensitive and needs the highest level of security. Trust in data security is pivotal when accessing healthcare services,” she remarked. 

Echoes of NHS Lanarkshire Incident

This is not the first instance of such a breach within the NHS system. Months earlier, NHS Lanarkshire faced a similar reprimand for unauthorised staff use of WhatsApp to share patient data over the course of two years, leading to data access by a non-staff member.

In the Lanarkshire incident, between April 2020 and April 2022, 26 staff at NHS Lanarkshire had access to a WhatsApp group where patient data was entered on more than 500 occasions, including names, phone numbers and addresses. Images, videos and screenshots, which included clinical information, were also shared. While WhatsApp was made available for communicating basic information only at the start of the pandemic, it was not approved by NHS Lanarkshire for processing patient data and was adopted by these staff without the organisation’s knowledge. A non-staff member was also added to the WhatsApp group in error, resulting in the inappropriate disclosure of personal information to an unauthorised individual. Additionally, it is worth bearing in mind that public sector organisations face the added risk of WhatsApp communications being disclosed in court proceedings, following the High Court ruling in July of this year; the consequences of that ruling are playing out now.

Corrective Measures and Recommendations

In response to this incident, NHS Fife has introduced new procedures, including stringent sign-in and sign-out systems for documents containing patient data and updated ID verification processes. The ICO has also recommended that NHS Fife enhance its data protection strategies by conducting more frequent staff training, providing clear written security guidelines, and updating policies and procedures while clearly marking archived policies. The ICO also asked to be updated on these measures in a six-month follow-up. 

Organisations can use these findings to check that the recommendations above are being implemented in their own settings. The ICO added:

“Every healthcare organisation should look at this case as a lesson learned and consider their own policies when it comes to security checks and authorised access. We are pleased to see that NHS Fife has introduced new measures to prevent similar incidents from occurring in the future.”

Learn more about data breaches with our UK GDPR Practitioner Certificate. Dive into the issues discussed in this blog and secure your spot before spaces run out.

UK Hospital Trust Reprimanded for GDPR Infringements 

The University Hospitals of Derby and Burton NHS Foundation Trust (UHDB) was recently issued a reprimand (30/10/23) by the Information Commissioner for multiple infringements of the UK General Data Protection Regulation (UK GDPR). This decision highlights significant concerns regarding the management and security of patient data. 

Background of the Case 

UHDB, formed by the merger of Derby Teaching Hospitals NHS Foundation Trust and Burton Hospitals NHS Foundation Trust in July 2018, operates five hospitals across various locations. The infringement was initially detected at The Florence Nightingale Community Hospital in Derby. 

The issue revolved around UHDB’s handling of patient referrals for outpatient appointments. These referrals, containing sensitive health data, were processed via an electronic referral system (e-RS). The system, however, was plagued with a critical flaw where referrals would disappear from the worklist after a certain period, resulting in significant delays and data loss. 

Key Findings of the Investigation 

The investigation into UHDB’s practices uncovered several alarming facts: 

Data Subjects Affected: Approximately 4,768 individuals were directly impacted, over 4,199 of whom experienced delayed medical referrals. The delays potentially caused distress and inconvenience to patients, some of whom waited over two years for treatment. 

Organisational Failings: UHDB was found lacking in implementing adequate organisational measures to prevent accidental data loss, especially concerning special category data. 

Inadequate Processes: The reliance on manual processes and email communications for managing referral drop-offs was deemed ineffective and insecure. 

Lack of Formal Oversight: There was no formal oversight ensuring the effective management and reinstatement of referrals onto the worklist. 

Absence of Risk Assessments: No risk assessment was conducted in relation to handling referral drop-offs, a measure that could have identified and minimised data protection risks. 

Remedial Actions and Recommendations 

In response to the reprimand, UHDB has taken several remedial steps, including conducting full internal and external reviews, contacting affected patients, creating a new Standard Operating Procedure (SOP), and introducing robotic process automation to reduce human error. 

The Commissioner recommended further actions for UHDB, emphasising the need for continuous support to affected data subjects, assessment and monitoring of new processes, and sharing lessons learned across the organisation to prevent future incidents. 

Implications and Conclusions 

This case serves as a stark reminder of the critical importance of data protection in the healthcare sector. It underscores the need for robust systems and processes to safeguard sensitive patient information, and the potential consequences of failing to comply with the UK GDPR. 

UHDB’s commitment to rectifying these issues is commendable, yet the incident raises broader questions about data management practices in the NHS and the healthcare sector at large.

The British Library Hack: A Chapter in Ransomware Resilience

In a stark reminder of the persistent threat of cybercrime, the British Library has confirmed a data breach incident that has led to the exposure of sensitive personal data, with materials purportedly up for auction online. An October intrusion by a notorious cybercrime group targeted the library, which is home to an extensive collection, including over 14 million books.

Recently, the ransomware group Rhysida claimed responsibility, publicly displaying snippets of sensitive data, and announcing the sale of this information for a significant sum of around £600k to be paid in cryptocurrency.

While the group boasts about the data’s exclusivity and sets a firm bidding deadline (today 27th November 2023), the library has only acknowledged a leak of what seems to be internal human resources documents. It has not verified the identity of the attackers nor the authenticity of the sale items. The cyber attack has significantly disrupted the library’s operations, leading to service interruptions expected to span several months.

In response, the library has strengthened its digital defences, sought expert cybersecurity assistance, and urged its patrons to update their login credentials as a protective measure. The library is working closely with the National Cyber Security Centre and law enforcement to investigate, but details remain confidential due to the ongoing inquiry.

The consequences of the attack have necessitated a temporary shutdown of the library’s online presence. Physical locations, however, remain accessible. Updates can be found on the British Library’s X (formerly Twitter) feed. The risk posed by Rhysida has drawn attention from international agencies, with recent advisories from the FBI and US cybersecurity authorities. The group has been active globally, with attacks on various sectors and institutions.

The British Library’s leadership has expressed appreciation for the support and patience from its community as it navigates the aftermath of the cyber attack.

What is a Ransomware Attack?

A ransomware attack is a type of malicious cyber operation where hackers infiltrate a computer system to encrypt data, effectively locking out the rightful users. The attackers then demand payment, often in cryptocurrency, for the decryption key. These attacks can paralyse organisations, leading to significant data loss and disruption of operations.

Who is Rhysida?

The Rhysida ransomware group first came to the fore in May 2023, following the emergence of its victim support chat portal hosted on the Tor network. The group presents itself as a “cybersecurity team” that highlights security flaws by targeting victims’ systems and publicising the supposed ramifications of the security issues involved.

How Do You Prevent a Ransomware Attack?

Hackers are becoming ever more sophisticated in the ways they target our personal data, as recent banking scams have shown. However, there are some measures we can implement, personally and within our organisations, to guard against a ransomware attack.

  1. Avoid Unverified Links: Refrain from clicking on links in spam emails or unfamiliar websites. Hackers frequently disseminate ransomware via such links, which, when clicked, can initiate the download of malware. This malware can then encrypt your data and hold it for ransom.

  2. Safeguard Personal Information: It’s crucial to never disclose personal information such as addresses, NI numbers, login details, or banking information online, especially in response to unsolicited communications.

  3. Educate Employees: Increasing awareness among employees can be a strong defence. Training should focus on identifying and handling suspicious emails, attachments, and links. Additionally, having a contingency plan in the event of a ransomware infection is important.

  4. Implement a Firewall: A robust firewall can act as a first line of defence, monitoring incoming and outgoing traffic for threats and signs of malicious activity. This should be complemented with proactive measures such as threat hunting and active tagging of workloads.

  5. Regular Backups: Maintain up-to-date backups of all critical data. In the event of a ransomware attack, having these backups means you can restore your systems to a previous, unencrypted state without having to consider ransom demands.

  6. Create Inventories of Assets and Data: Having inventories of the data and assets you hold allows you to have an immediate knowledge of what has been compromised in the event of an attack whilst also allowing you to update security protocols for sensitive data over time.

  7. Multi-Factor Authentication: Identifying legitimate users in more than one way ensures that you are only granting access to those intended. A minimal illustrative sketch follows this list. 
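
By way of illustration, here is a minimal sketch of time-based one-time password (TOTP) verification, one common second factor, using the third-party pyotp library. The account and issuer names are hypothetical, and a real deployment would store the secret securely and take the code from user input rather than generating it in place.

```python
# Minimal TOTP sketch using the third-party pyotp library (pip install pyotp).
import pyotp

# At enrolment: generate a per-user secret and store it securely.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

# This URI can be rendered as a QR code for the user's authenticator app.
print(totp.provisioning_uri(name="alice@example.org", issuer_name="ExampleOrg"))

# At login: the user types the 6-digit code from their app. Here we
# generate it ourselves purely so the sketch runs end to end.
submitted_code = totp.now()
print("Access granted" if totp.verify(submitted_code) else "Access denied")
```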

These are some of the strategies organisations can use as part of a more comprehensive cybersecurity protocol to significantly reduce the risk of falling victim to a ransomware attack. 

Join us on our workshops “How to increase Cyber Security in your Organisation” and Cyber Security for DPOs, where we discuss all of the above and more, helping you create the right foundations for cyber resilience within your organisation. 

The NHS-Palantir Deal: A Pandora’s Box for Patient Privacy? 

The National Health Service (NHS) of England’s recent move to sign a £330 million deal with Palantir Technologies Inc. has set off alarm bells in the realm of patient privacy and data protection. Palantir, a data analytics company with roots in the US intelligence and military sectors, is now at the helm of creating a mammoth NHS data platform. This raises a critical question: is patient privacy the price of progress? 

The Controversial Contractor 

Palantir’s pedigree of working closely with entities like the CIA, and its contribution to the UK Ministry of Defence, has put the NHS’s decision under intense scrutiny. This association, coupled with its founder’s contentious remarks about the NHS, casts a long shadow over the appointment. Critics highlight Palantir’s controversial history, notably its involvement in supporting the US immigration enforcement’s stringent policies under the Trump administration. The ethical ramifications of such affiliations are profound, given the sensitive nature of health data. Accenture, PwC, NECS and Carnall Farrar will all support Palantir, NHS England said on Tuesday. 

Data Security vs. Data Exploitation 

NHS England assures that the new “federated data platform” (FDP) will be a secure, privacy-enhancing technology that will revolutionise care delivery. The promise is a streamlined, efficient service with live data at clinicians’ fingertips. However, concern about the potential for data exploitation looms large. Can a firm with a not-so-distant history of aiding surveillance be trusted with the most intimate details of our lives, our health records? 

The Right to Opt-Out: A Right Denied? 

The debate intensifies around the right, or the apparent lack thereof, for patients to opt out of this data sharing. With the NHS stating that all data will be anonymised and used solely for “direct patient care,” they argue that an opt-out is not necessary. Yet this has not quelled the concerns of privacy advocates and civil liberty groups, who foresee a slippery slope towards panopticon-style oversight of personal health information. 

Scepticism is further fuelled by the NHS’s troubled history with data projects, where previous attempts to centralise patient data have collapsed under public opposition. The fear that history might repeat itself is palpable, and the NHS’s ability to sway public opinion in favour of the platform remains a significant hurdle. 

Conclusion 

As we venture further into an age where data is king, the NHS-Palantir partnership is a litmus test for the delicate balance between innovation and privacy. The NHS’s venture is indeed ambitious, but it must not be deaf to the cacophony of concerns surrounding patient privacy. Transparency, robust data governance, and the right to opt out must not be side-lined in the pursuit of technological advancement. After all, when it comes to our personal health data, should we not have the final say in who holds the keys to our digital lives? 

Take a look at our highly popular Data Ethics Course. Places fill up fast, so if you would like to learn more in this fascinating area, book your place now. 

UK Biobank’s Data Sharing Raises Alarm Bells

An investigation by The Observer has uncovered that the UK Biobank, a repository of health data from half a million UK citizens, has been sharing information with insurance companies. This development contravenes the Biobank’s initial pledge to keep this sensitive data out of the hands of insurers, a promise that was instrumental in garnering public trust at the outset. UK Biobank has since responded to the article, calling it “disingenuous” and “extremely misleading”. 

A Promise Made, Then Modified 

The UK Biobank was set up in 2006 as a goldmine for scientific discovery, offering researchers access to a treasure trove of biological samples and associated health data. Access costs between £3,000 and £9,000, and the research derived from this data has been nothing short of revolutionary. However, the foundations of this scientific jewel are now being questioned. 

When the project was first announced, clear assurances were given that data would not be made available to insurance companies, mitigating fears that genetic predispositions could be used discriminatorily in insurance assessments. These assurances appeared in the Biobank’s FAQs and were echoed in parliamentary discussions. 

Changing Terms Amidst Grey Areas 

The Biobank contends that while it does strictly regulate data access, allowing only verified researchers to delve into its database, this includes commercial entities such as insurance firms if the research is deemed to be in the public interest. The boundaries of what constitutes “health-related” and “public interest” are now under scrutiny.   

However, according to the Observer investigation, evidence suggests that this nuance (commercial entities conducting health-related research) was not clearly communicated to participants, especially given the categorical assurances made previously. UK Biobank categorically denies this and has shared its consent form and information leaflet. 

Data Sharing: The Ethical Quandary 

This breach of the original promise has raised the ire of experts in genetics and data privacy, with Prof Yves Moreau highlighting the severity of the breach of trust. The concern is not just about the sharing of data but about the integrity of consent given by participants. The Biobank’s response indicates that the commitments made were outdated and that the current policy, which includes sharing anonymised data for health-related research, was made clear to participants upon enrolment. 

The Ripple Effect of Biobank’s Data Policies 

Further complicating matters is the nature of the companies granted access. Among them are ReMark International, a global insurance consultancy; Lydia.ai, a Canadian “insurtech” firm that wants to give people “personalised and predictive health scores”; and Club Vita, a longevity data analytics company. These companies have utilised Biobank data for projects ranging from disease prediction algorithms to assessing longevity risk factors. The question this raises is how one can ensure such research is in fact in the public interest; do we simply take a commercial entity’s word for it? UK Biobank says all research conducted is “consistent with being health-related and in the public interest” and that an expert data access committee decides on any complex issues. But who checks the ethics of the ethics committee? The issues with this kind of self-regulation are self-evident. 

The Fallout and the Future 

This situation has led to a broader conversation about the ethical use of volunteered health data and the responsibility of custodians like the UK Biobank to uphold public trust. As technology evolves and the appetite for data grows across industries, the mechanisms of consent and transparency may need to be revisited.  The Information Commissioner’s Office is now considering the case, spotlighting the crucial need for clarity and accuracy in how organisations manage and utilise sensitive personal information. 

As the UK Biobank navigates these turbulent waters, the focus shifts to how institutions like it can maintain the delicate balance between facilitating scientific progress and safeguarding the privacy rights of individuals who contribute their personal data for the greater good. For the UK Biobank, regaining the trust of its participants and the public is now an urgent task, one that will require not just a careful review of policies but a reaffirmation of its commitment to ethical stewardship of the data entrusted to it. 

Take a look at our highly popular Data Ethics Course. Places fill up fast, so if you would like to learn more in this fascinating area, book your place now. 

CJEU’s FT v. DW Ruling: Navigating Data Subject Access Requests 

In the landmark case FT v DW (Case C-307/22), the Court of Justice of the European Union (CJEU) delivered a ruling that sheds light on the intricacies of data subject access requests under the EU General Data Protection Regulation (GDPR). The dispute began when DW, a patient, sought an initial complimentary copy of their dental medical records from FT, a dentist, citing concerns about possible malpractice. FT, however, declined the request based on German law, which requires patients to pay for copies of their medical records. The ensuing legal tussle ascended through the German courts, eventually reaching the CJEU, which had to consider three pivotal questions, detailed below. 

Question 1: The Right to a Free Copy of Personal Data 

The first deliberation was whether the GDPR requires healthcare providers to supply patients with a cost-free copy of their personal data, irrespective of the request’s motive, which in DW’s case appeared to be potential litigation. The CJEU, examining Articles 12(5) and 15(3) of the GDPR, and indeed Recital 63, concluded that the regulation does stipulate that the first copy of personal data should be free and that individuals need not disclose their reasons for such requests, reflecting the GDPR’s overarching principle of transparency. 

Question 2: Economic Considerations Versus Rights under the GDPR 

The second matter concerned the intersection of the GDPR with pre-existing national laws that might impinge upon the economic interests of data controllers, such as healthcare providers. The CJEU assessed whether Article 23(1)(i) of the GDPR could uphold a national rule that imposes a fee for the first copy of personal data. The court found that while Article 23(1)(i) can apply to laws pre-dating the GDPR, it does not justify charging for the first copy of personal data, thus prioritising the rights of individuals over the economic interests of data controllers. 

Question 3: Extent of Access to Medical Records 

The final issue addressed the extent of access to personal data, particularly whether it encompasses the entire medical record or merely a summary. The CJEU clarified that according to Article 15(3) of the GDPR, a “copy” entails a complete and accurate representation of the personal data, not merely a physical document or an abridged version. This means that a patient is entitled to access the full spectrum of their personal data within their medical records, ensuring they can fully verify and understand their information. 

Conclusion 

The CJEU’s decision in FT v DW reaffirms the GDPR’s dedication to data subject rights and offers a helpful interpretation of the legislation. It confirms the right of individuals to a free first copy of their personal data for any purpose, rejects the imposition of fees by national law for such access, and establishes the right to a comprehensive reproduction of personal data contained within medical records. The judgment goes on to say that, even where only the term ‘copy’ is used, the data provided must be complete, as well as contextual and intelligible, as required by Article 12(1) of the GDPR. 

We will be examining the impact of this on our upcoming Handling SARs course as well as looking at the ruling in our GDPR Update course. Places are limited so book early to avoid disappointment.

Clearview AI Wins Appeal Against GDPR Fine 

Last week a Tribunal overturned a GDPR Enforcement Notice and a Monetary Penalty Notice issued to Clearview AI, an American facial recognition company. In Clearview AI Inc v The Information Commissioner [2023] UKFTT 00819 (GRC), the First-Tier Tribunal (Information Rights) ruled that the Information Commissioner had no jurisdiction to issue either notice, on the basis that the GDPR/UK GDPR did not apply to the personal data processing in issue.  

Background 

Clearview is a US based company which describes itself as the “World’s Largest Facial Network”. Its online database contains 20 billion images of people’s faces and data scraped from publicly available information on the internet and social media platforms all over the world. It allows customers to upload an image of a person to its app; the person is then identified by the app checking against all the images in the Clearview database.  

In May 2022 the ICO issued a Monetary Penalty Notice of £7,552,800 to Clearview for breaches of the GDPR, including failing to use the information of people in the UK in a way that is fair and transparent. Although Clearview is a US company, the ICO ruled that the UK GDPR applied because of Article 3(2)(b) (territorial scope). It concluded that Clearview’s processing activities “are related to… the monitoring of [UK residents’] behaviour as far as their behaviour takes place within the United Kingdom.” 

The ICO also issued an Enforcement Notice ordering Clearview to stop obtaining and using the personal data of UK residents that is publicly available on the internet, and to delete the data of UK residents from its systems (see our earlier blog for more detail on these notices). 

The Judgement  

The First-Tier Tribunal (Information Rights) has now overturned the ICO’s Enforcement Notice and Monetary Penalty Notice against Clearview. It concluded that although Clearview did carry out data processing related to monitoring the behaviour of people in the UK (Article 3(2)(b) of the UK GDPR), the ICO did not have jurisdiction to take enforcement action or issue a fine. Both the GDPR and UK GDPR provide that acts of foreign governments fall outside their scope; it is not for one government to seek to bind or control the activities of another sovereign state. However, the Tribunal noted that the ICO could have taken action under the Law Enforcement Directive (Part 3 of the DPA 2018 in the UK), which specifically regulates the processing of personal data in relation to law enforcement. 

Learning Points 

While the Tribunal’s judgement in this case reflects the specific circumstances, some of its findings are of wider application: 

  • The term “behaviour” (in Article 3(2)(b)) means something about what a person does (e.g., location, relationship status, occupation, use of social media, habits) rather than something that merely identifies or describes them (e.g., name, date of birth, height, hair colour). 

  • The term “monitoring” not only comes up in Article 3(2)(b) but also in Article 35(3)(c) (when a DPIA is required). The Tribunal ruled that monitoring includes tracking a person at a fixed point in time as well as on a continuous or repeated basis.

  • In this case, Clearview was not monitoring UK residents directly as its processing was limited to creating and maintaining a database of facial images and biometric vectors. However, Clearview’s clients were using its services for monitoring purposes and therefore Clearview’s processing “related to” monitoring under Article 3(2)(b). 

  • A provider of services like Clearview may be considered a joint controller with its clients where both determine the purposes and means of processing. In this case, Clearview was a joint controller with its clients because it imposed restrictions on how clients could use the services (i.e., only for law enforcement and national security purposes) and determined the means of processing when matching query images against its facial recognition database. 

Data Scraping 

The ruling is not a green light for data scraping, the practice whereby publicly available data, usually from the internet, is collected and processed by companies, often without the data subject’s knowledge. The Tribunal ruled that this is an activity to which the UK GDPR can apply. In its press release reacting to the ruling, the ICO said: 

“The ICO will take stock of today’s judgment and carefully consider next steps.
It is important to note that this judgment does not remove the ICO’s ability to act against companies based internationally who process data of people in the UK, particularly businesses scraping data of people in the UK, and instead covers a specific exemption around foreign law enforcement.” 

This is a significant ruling from the First-tier Tribunal, with implications for the extra-territorial effect of the UK GDPR and the ICO’s powers to enforce it. It merits an appeal by the ICO to the Upper Tribunal. Whether this happens depends very much on the ICO’s appetite for a legal battle with a tech company with deep pockets. 

This and other GDPR developments will be discussed by Robert Bateman in our forthcoming GDPR Update workshop. 

Act Now in Dubai: Season 2 

On the 2nd and 3rd October 2023, the UAE held the first ever privacy and data protection law conference; a unique event organised by the Dubai International Financial Centre (DIFC) and data protection practitioners in the Middle East. The conference brought together data protection and security compliance professionals from across the world to discuss the latest developments in the Middle East data protection framework. 

Data Protection law in the Middle East has seen some rapid developments recently. The UAE has enacted its first federal law to comprehensively regulate the processing of personal data in all seven emirates. Once in force (expected to be early next year), this will sit alongside current data protection laws regulating businesses in the various UAE financial districts, such as the Dubai International Financial Centre (DIFC) Data Protection Law No. 5 of 2020 and the Abu Dhabi Global Market (ADGM) Data Protection Regulations 2021. Jordan, Oman, Bahrain and Qatar also have comprehensive data protection laws. Currently, what is causing most excitement in the Middle East data protection community is Saudi Arabia’s Personal Data Protection Law (PDPL), which came into force on 14th September 2022. 

The conference agenda covered various topics including the interoperability of data protection laws in the GCC, unlocking data flows in the region, smart cities, the use of facial recognition and data localisation. The focus of day 2 was on AI and machine learning. There were some great panels on this topic discussing AI standards, transparency and the need for regulation.   

Speakers included the UAE Minister for AI, His Excellency Omar Sultan Al Olama, as well as leading data protection lawyers and practitioners from around the world. Elizabeth Denham, former UK Information Commissioner, also addressed the delegates alongside data protection regulators from across the region. Act Now’s director, Ibrahim Hasan, was invited to take part in a panel discussion to share his experience of GDPR litigation and enforcement action in the UK and EU and what lessons can be drawn for the Middle East. 

Alongside Ibrahim, the Act Now team were at the conference to answer delegates’ questions about our UAE and KSA training programmes. Act Now has delivered training extensively in the Middle East to a wide range of delegates, including representatives of the telecommunications, legal and technology sectors. We were pleased to see that there was a lot of interest in our courses, especially our DPO certificates. 

Following the conference, Ibrahim was invited to deliver a guest lecture to law students at Middlesex University Dubai. This is the biggest university in Dubai with over 4500 students from over 118 countries. Ibrahim talked about the importance of Data Protection law and job opportunities in the information governance profession. He was pleasantly surprised by the students’ interest in the subject and their willingness to consider IG as an alternative career path. A fantastic end to a successful trip. Our thanks to the conference organisers, particularly Lori Baker at the DIFC Commissioner’s Office, and our friends at Middlesex University Dubai for inviting us to address the students.  

Now is the time to train your staff in the new data protection laws in the Middle East. We can deliver online as well as face to face training. All of our training starts with a free analysis call to ensure you have the right level and most appropriate content for your organisation’s needs. Please get in touch to discuss your training or consultancy needs.  

All Go for UK to US Data Transfers 

On 10th July 2023, the European Commission adopted its adequacy decision under Article 45 of the GDPR for the EU-U.S. Data Privacy Framework (DPF). It concluded that the United States ensures an adequate level of protection, comparable to that of the European Union, for personal data transferred from the EU to US companies under the new framework. This means that personal data can flow safely from the EU to US companies participating in the Framework, without the need to put in place additional data protection safeguards under the GDPR. 

The question then is, “What about transfers from the UK to the US which were not covered by the above?” The Data Protection (Adequacy) (United States of America) Regulations 2023 (SI 2023/1028) will come into force on 12th October 2023. The effect of the Regulations will be that, as of 12th October 2023, a transfer of personal data from the UK to an entity in the USA which has self-certified to the Trans-Atlantic EU-US Data Privacy Framework and its UK extension and which will abide by the EU-US Data Privacy Framework Principles, will be deemed to offer an adequate level of protection for personal data and shall be lawful in accordance with Article 45(1) UK GDPR.  

Currently, data transfers from the UK to the US under the UK GDPR must either be based on a safeguard, such as standard contractual clauses or binding corporate rules, or fall within the scope of a derogation under Article 49 UK GDPR. 

UK Data Controllers need to update privacy policies and document their own processing activities as necessary to reflect any changes in how they transfer personal data to the US. 

The new EU-US Data Privacy Framework will be discussed in detail on our forthcoming International Transfers workshop. 
