Season's Greetings

As we end another year, the Act Now team would like to wish everyone Season's Greetings and best wishes for the new year. Thank you to all our delegates and colleagues for their continued support and dedication.

Our office will be closed for the holiday season from Thursday, 21st December, and we will return on Thursday, 4th January 2024.

Data Protection Bill Faces Scrutiny:
Commissioner Calls for Tighter Safeguards 

In a recent development, the Information Commissioner has weighed in on the debate surrounding the Data Protection and Digital Information Bill (DPDI Bill), legislation aimed at modernising data protection in the UK. While acknowledging the government's efforts to strengthen the independence of the Information Commissioner's Office (ICO) and update data protection practices, the Commissioner's response highlights significant concerns, particularly around the use of personal data in social security contexts. We wrote a detailed breakdown on our blog here.

The response, detailed and thorough, applauds the government's amendments to the bill, recognising their potential to enhance the ICO's autonomy and bring data protection practices up to date with the digital age. However, the Commissioner expresses reservations about the adequacy of safeguards in the current draft of the bill, especially in terms of personal data handling for social security purposes. 

The Commissioner’s concern primarily revolves around the need for more precise language in the bill. This is to ensure that its provisions are fully aligned with established data protection principles, thereby safeguarding individual rights.
The response suggests that the current wording might be too broad or vague, potentially leading to misuse or overreach in the handling of personal data. 

Importantly, the Commissioner has provided detailed technical feedback for further improvements to the bill, indicating a need for scrutiny and adjustments to ensure that it not only meets its intended purpose but also robustly protects the rights of individuals. 

While the Commissioner supports the bill's overarching aim to enhance the UK's data protection regime, the emphasis is clearly on refining the bill to ensure it strikes the right balance between enabling data use for public and economic benefits and protecting individual privacy rights. 

The response from the Information Commissioner is a significant moment in the ongoing development of the DPDI Bill. It underscores the complexity and importance of legislating in the digital age, where data plays a crucial role in both the economy and personal privacy. 

As the bill progresses, the government and legislators should consider the Commissioner’s input. The balance they strike in the final version of the bill will be a key indicator of the UK’s approach to data protection in a rapidly evolving digital landscape. 

Learn more about the updated bill with our Data Protection and Digital Information Bill: Preparing for GDPR and PECR Reforms workshop. Dive into the issues discussed in this blog and secure your spot now.

The Hidden Reach of the Prevent Strategy:
Beyond Counter-Terrorism Units

The UK government’s anti-radicalisation program, Prevent, is reportedly sharing the personal details of thousands of individuals more extensively than previously known. This sharing includes not just counter-terrorism units, but also airports, ports, immigration services, and officials at the Home Office and the Foreign, Commonwealth and Development Office (FCDO). Critics argue that such widespread data sharing could be illegal, as it involves moving sensitive personal data between databases without the consent of the individuals. 

A Metropolitan police document titled “Prevent case management guidance” indicates that Prevent details are also shared with the ports authority watchlist. This raises concerns that individuals may face increased scrutiny at airports or be subjected to counter-terrorism powers without reasonable suspicion. The document also mentions that foreign nationals may have their backgrounds checked by the FCDO and immigration services for any overseas convictions or intelligence. 

Furthermore, the Acro Criminal Records Office, which manages UK criminal records, is notified about individuals referred to Prevent, despite the program dealing with individuals who haven’t necessarily engaged in criminal behaviour.
Counter-terror police emphasise their careful approach to data sharing, which aims to protect vulnerable individuals. 

Prevent's goal is to divert people from terrorism before they offend, and most people are unaware of their referral to the programme; 95% of referrals result in no further action. A secret database, the National Police Prevent Case Management database, was previously disclosed in 2019, revealing the storage of details of those referred to Prevent. 

Newly disclosed information, obtained through a freedom of information request by the Open Rights Group (ORG), reveals that Prevent data is shared across various police databases, including the Police National Computer, specialised counter-terrorism and local intelligence systems, and the National Crime Agency. 

The sharing of this data was accidentally revealed due to a redaction error in a heavily edited Met document. Despite its sensitive nature, the ORG decided to make the document public. Sophia Akram of the ORG expressed concerns over the extent of the data sharing and potential harms, suggesting that it could be unfair and possibly unlawful. 

The guidance also indicates that data is retained and used even in cases where no further action is taken. There are concerns about the impact on young people’s educational opportunities, as Prevent requires public bodies like schools and the police to identify individuals at risk of extremism. 

Recent figures show thousands of referrals to Prevent, predominantly from educational institutions. From April 2022 to March 2023, a total of 6,817 individuals were directed to the Prevent program. Within this group, educational institutions were responsible for 2,684 referrals. Breaking down the referrals by age, there were 2,203 adolescents between the ages of 15 and 20, and 2,119 referrals involved children aged 14 or younger.

There are worries about the long-term consequences for children and young people referred to the programme. Several cases have highlighted the intrusive nature of this data sharing and its potential impact on individuals' lives, including cases in which students have missed out on a place at a sixth form college, and others involving children as young as four years old.

Prevent Watch, an organisation monitoring the program, has raised alarms about the data sharing, particularly its effect on young children. The FoI disclosures challenge the notion that Prevent is non-criminalising, as data on individuals, even those marked as ‘no further action’, can be stored on criminal databases and flagged on watchlists. 

Counter-terrorism policing spokespeople defend the program, emphasising its
multi-agency nature and focus on protecting people from harm. They assert that data sharing is carefully managed and legally compliant, aiming to safeguard vulnerable individuals from joining terror groups or entering conflict zones. 

Learn more about data sharing with our UK GDPR Practitioner Certificate. Dive into the issues discussed in this blog and secure your spot now.

Act Now Partners with Middlesex
University Dubai for UAE’s first
Executive Certificate in DP Law

Act Now Training, in collaboration with Middlesex University Dubai, is excited to announce the launch of the UAE’s first Data Protection Executive training programme. This qualification is ideal as a foundation for businesses and organisations aiming to comply with the UAE Federal Data Protection Law.

This practical course focuses on developing a data protection framework and ensuring compliance with the UAE Data Protection Law's strict requirements. This is particularly relevant given the recent advancements in Data Protection law in the Middle East, including the UAE's first comprehensive national data protection law, Federal Decree Law No. 45/2021. 

This law regulates personal data processing, emphasising transparency, accountability, and data subject rights. It applies to all organisations processing personal data within the UAE and abroad for UAE residents.

The importance of understanding this law is paramount for every business and organisation, as it necessitates a thorough reassessment of personal data handling practices. Non-compliance can lead to severe penalties and reputational damage.

The Executive Certificate in UAE DP Law is a practical qualification delivered over five weeks, in two half-day sessions per week, and offers numerous benefits:

  1. Expertise in Cutting-Edge Legislation: Gain in-depth knowledge of the UAE’s data protection law, essential for professionals at the forefront of data protection practices.

  2. Professional Development: This knowledge enhances your resume, especially for roles in compliance, legal, and IT sectors, showing a commitment to legal reforms.

  3. Practical Application: The course’s structured format allows gradual learning and practical application of complex legal concepts, ensuring a deep understanding of the law.

  4. Risk Mitigation: Understanding the law aids in helping organisations avoid penalties and reputational harm due to non-compliance.

  5. Networking Opportunities: The course provides valuable connections in the field of data protection and law.

  6. Empowerment of Data Subjects: Delegates gain insights into their rights as data subjects, empowering them to protect their personal data effectively.

Delegates will receive extensive support, including expert instruction, comprehensive materials, interactive sessions, practical exercises, group collaboration, ongoing assessment, and additional resources for further learning. Personal tutor support is also provided throughout the course.

This program is highly recommended for officers in organisations both inside and outside the UAE that conduct business in the region or have customers, agents, and employees there. 

Act Now has designed the curriculum and will be delivering the course. Act Now Training is the UK's premier provider of information governance training and consultancy, serving government organisations, multinational corporations, financial institutions, and corporate law firms.   

With a history of delivering practical, high-quality training since 2002, Act Now's skills-based training approach has earned numerous awards, most recently the Supplier of the Year Award 2022-23 from the Information and Records Management Society in the UK. 

Our associates have decades of hands-on global Information Governance experience and are able to break down this complex area with real-world examples, making it easy to understand, apply and even enjoy!

Middlesex University Dubai is a 5-star KHDA-rated university and one of the institution's three global campuses, alongside London and Mauritius. It is the largest UK university in the UAE, with over 5,000 students from more than 120 nationalities.

For more information and to register your interest, visit Middlesex University Dubai's website. Alternatively, you can click here.

EU Leads Global AI Regulation with Landmark Legislation

European representatives in Strasbourg recently concluded an extensive 37-hour discussion, resulting in the world’s first extensive framework for regulating artificial intelligence. This ground-breaking agreement, facilitated by European Commissioner Thierry Breton and Spain’s AI Secretary of State, Carme Artigas, is set to shape how social media and search engines operate, impacting major companies. 

The deal, achieved after lengthy negotiations and hailed as a significant milestone, puts the EU at the forefront of AI regulation globally, surpassing the US, China, and the UK. The new legislation, expected to be enacted by 2025, involves comprehensive rules for AI applications, including a
risk-based system to address potential threats to health, safety, and human rights. 

Key components of the agreement include strict controls on AI-driven surveillance and real-time biometric technologies, with specific exceptions for law enforcement under certain circumstances. The European Parliament ensured a ban on such technologies, except in cases of terrorist threats, search for victims, or serious criminal investigations. 

MEPs Brando Benifei and Dragoș Tudorache, who led the negotiations, emphasised the aim of developing an AI ecosystem in Europe that prioritises human rights and values. The agreement also includes provisions for independent authorities to oversee predictive policing and uphold the presumption of innocence. 

Tudorache highlighted the balance struck between equipping law enforcement with necessary tools and banning AI technologies that could pre-emptively identify potential criminals. (Minority Report anyone?)
The highest-risk AI systems will now be regulated based on the computational power required for training, with GPT-4 being a notable example and currently the only technology meeting this criterion. 

Some Key Aspects 
 
The new EU AI Act delineates distinct regulations for AI systems based on their perceived level of risk, effectively categorising them into "Unacceptable Risk," "High Risk," "Generative AI," and "Limited Risk" groups, each with specific obligations for providers and users. 

Unacceptable Risk 

AI systems deemed a threat to people’s safety or rights will be prohibited. This includes: 

  • AI-driven cognitive behavioural manipulation, particularly targeting vulnerable groups, like voice-activated toys promoting hazardous behaviours in children. 
  • Social scoring systems that classify individuals based on behaviour,
    socio-economic status, or personal characteristics. 
  • Real-time and remote biometric identification systems, like facial recognition. 
  • Exceptions exist, such as “post” remote biometric identification for serious crime investigations, subject to court approval. 

High Risk 

AI systems impacting safety or fundamental rights fall under high-risk, subdivided into: 

  • AI in EU-regulated product safety categories, like toys, aviation, cars, medical devices, and lifts. 
  • Specific areas requiring EU database registration, including biometric identification, critical infrastructure management, education, employment, public services access, law enforcement, migration control, and legal assistance. 
  • High-risk AI systems must undergo pre-market and lifecycle assessments. 

Generative AI 

AI like ChatGPT must adhere to transparency protocols: 

  • Disclosing AI-generated content. 
  • Preventing generation of illegal content. 
  • Publishing summaries of copyrighted data used in training. 
Limited Risk 

These AI systems require basic transparency for informed user decisions, particularly for AI that generates or manipulates visual and audio content, like deepfakes. Users should be aware when interacting with AI. 
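The four tiers above map naturally onto a simple lookup. The sketch below is illustrative only: the example use cases and one-line obligation summaries are our own shorthand, not an authoritative reading of the Act.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited"
    HIGH = "pre-market and lifecycle assessment"
    GENERATIVE = "transparency obligations"
    LIMITED = "basic transparency"

# Illustrative examples only; where a real system lands needs legal analysis.
EXAMPLES = {
    "social scoring": RiskTier.UNACCEPTABLE,
    "real-time biometric identification": RiskTier.UNACCEPTABLE,
    "AI in medical devices": RiskTier.HIGH,
    "CV screening for employment": RiskTier.HIGH,
    "general-purpose chatbot": RiskTier.GENERATIVE,
    "deepfake image generator": RiskTier.LIMITED,
}

def obligations(use_case: str) -> str:
    """Return the headline obligation for an example use case."""
    return EXAMPLES[use_case].value
```

The value of thinking in tiers is practical: an organisation's AI inventory can be triaged once against the categories, rather than each system being debated from first principles.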

The legislation sets a precedent for future digital regulation. As we saw with the GDPR, governments outside the EU used that legislation as a foundation for their own laws, and many corporations adopted Europe's privacy standards for their businesses worldwide for efficiency. This could easily happen with the EU AI Act, with governments using it as a 'starter for ten'. It will be interesting to see how the legislation caters for the algorithmic biases found in current iterations of the technology, from facial recognition to other automated decision-making algorithms.

The UK published its AI White Paper in March of this year, saying it follows a "Pro-Innovation" approach. However, it seems to have decided to go 'face first' before any legislation is passed, with facial recognition software recently used at the Beyoncé gig, King Charles' coronation and the Formula One Grand Prix. For many, it is the impact of the automated decision making this software performs that is worrying. The ICO has useful guides on the use of AI, which can be found here. 

As artificial intelligence technology rapidly advances, exemplified by Google’s impressive Gemini demo, the urgency for comprehensive regulation was becoming increasingly apparent. The EU has signalled its intent to avoid past oversights seen in the unchecked expansion of tech giants and be at the forefront of regulating this fascinating technology to ensure its ethical and responsible utilisation. 

Join our Artificial Intelligence and Machine Learning, How to Implement Good Information Governance workshop for hands-on insights, key resource awareness, and best practices, ensuring you’re ready to navigate AI complexities fairly and lawfully.

ICO Reprimand for NHS Patient Data Breach

In a concerning revelation of data security lapses, NHS Fife has been formally reprimanded by the Information Commissioner’s Office (ICO) following an incident where an unauthorised individual accessed sensitive patient information. The breach occurred in a hospital ward and highlights key learnings for all organisations regarding security protocols for personal data.

Incident Overview

The case came to light after the ICO discovered that the personal information of 14 patients was compromised. The incident, which took place in February 2023, involved an individual who was able to access secure documents and participate in administering care to a patient, highlighting a lack of identity verification checks at the hospital.

ICO Investigation Findings

The ICO's investigation unveiled several deficiencies in NHS Fife's approach to data protection. Notably, staff training on safeguarding personal information was found to be inadequate: training rates across the hospital were at only 42%, although on the ward concerned they were at 82%. This low rate was attributed to the Covid-19 pandemic and a three-year training cycle. Additionally, the ICO pointed out that the hospital's CCTV system had been mistakenly turned off by a staff member before the incident, as part of wider energy-saving measures being implemented across the hospital. Although this would not have prevented the incident, it complicated the recovery of the missing documents, as the individual could not be identified.

Natasha Longson, ICO Head of Investigations, stressed the importance of stringent data security in healthcare. “Patient data is highly sensitive and needs the highest level of security. Trust in data security is pivotal when accessing healthcare services,” she remarked. 

Echoes of NHS Lanarkshire Incident

This is not the first instance of such a breach within the NHS system. Months earlier, NHS Lanarkshire faced a similar reprimand for unauthorised staff use of WhatsApp to share patient data over the course of two years, leading to data access by a non-staff member.

In the Lanarkshire incident, between April 2020 and April 2022, 26 staff at NHS Lanarkshire had access to a WhatsApp group where patient data was entered on more than 500 occasions, including names, phone numbers and addresses. Images, videos and screenshots, which included clinical information, were also shared. While it was made available for communicating basic information only at the start of the pandemic, WhatsApp was not approved by NHS Lanarkshire for processing patient data and was adopted by these staff without the organisation's knowledge. A non-staff member was also added to the WhatsApp group in error, resulting in the inappropriate disclosure of personal information to an unauthorised individual. It is also worth bearing in mind that public sector organisations face the added risk of WhatsApp communications being disclosed in court proceedings, following the High Court ruling in July of this year; the consequences of that ruling are still playing out.

Corrective Measures and Recommendations

In response to this incident, NHS Fife has introduced new procedures, including stringent sign-in and out systems for documents containing patient data and updated ID verification processes. The ICO has also recommended that NHS Fife enhance its data protection strategies by conducting more frequent training for staff and providing clear written security guidelines as well as updating policies and procedures whilst clearly highlighting archived policies. The ICO also requested to be updated on these measures in a six-month follow up. 

Organisations can use these findings to ensure that all the recommendations mentioned above are being implemented within their organisations. The ICO added:

“Every healthcare organisation should look at this case as a lesson learned and consider their own policies when it comes to security checks and authorised access. We are pleased to see that NHS Fife has introduced new measures to prevent similar incidents from occurring in the future.”

Learn more about data breaches with our UK GDPR Practitioner Certificate. Dive into the issues discussed in this blog and secure your spot before spaces run out.

UK Hospital Trust Reprimanded for GDPR Infringements 

The University Hospitals of Derby and Burton NHS Foundation Trust (UHDB) was recently issued a reprimand (30/10/23) by the Information Commissioner for multiple infringements of the UK General Data Protection Regulation (UK GDPR). This decision highlights significant concerns regarding the management and security of patient data. 

Background of the Case 

UHDB, formed by the merger of the Derby Teaching Hospital NHS Foundation Trust and Burton Hospitals NHS Foundation Trusts in July 2018, operates five hospitals across various locations.
The infringement was initially detected at The Florence Nightingale Community Hospital in Derby. 

The issue revolved around UHDB’s handling of patient referrals for outpatient appointments. These referrals, containing sensitive health data, were processed via an electronic referral system (e-RS). The system, however, was plagued with a critical flaw where referrals would disappear from the worklist after a certain period, resulting in significant delays and data loss. 

Key Findings of the Investigation 

The investigation into UHDB’s practices uncovered several alarming facts: 

Data Subjects Affected: Approximately 4,768 individuals were directly impacted, with over 4,199 experiencing delayed medical referrals. The delayed response potentially caused distress and inconvenience to patients, some of whom waited over two years for treatment. 

Organisational Failings: UHDB was found lacking in implementing adequate organisational measures to prevent accidental data loss, especially concerning special category data. 

Inadequate Processes: The reliance on manual processes and email communications for managing referral drop-offs was deemed ineffective and insecure. 

Lack of Formal Oversight: There was no formal oversight ensuring the effective management and reinstatement of referrals onto the worklist. 

Absence of Risk Assessments: No risk assessment was conducted in relation to handling referral drop-offs, a measure that could have identified and minimised data protection risks. 
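The failings listed above (manual checks, no formal oversight, no risk assessment) are exactly the kind of gap a simple automated reconciliation can surface. Below is a hypothetical Python sketch, assuming a periodic export of referrals with a received date and a worklist flag; the record format, example IDs and 18-week threshold are illustrative assumptions, not details of UHDB's e-RS system.

```python
from datetime import date

# Hypothetical referral export: (referral_id, date_received, still_on_worklist)
REFERRALS = [
    ("R-001", date(2023, 1, 10), True),
    ("R-002", date(2022, 11, 3), False),  # silently dropped off the worklist
    ("R-003", date(2023, 2, 1), True),
]

def find_dropped_referrals(referrals):
    """Flag referrals that have left the worklist without being actioned."""
    return [ref_id for ref_id, _, on_worklist in referrals if not on_worklist]

def find_stale_referrals(referrals, today, max_wait_days=126):
    """Flag open referrals past an 18-week (126-day) wait threshold."""
    return [ref_id for ref_id, received, on_worklist in referrals
            if on_worklist and (today - received).days > max_wait_days]
```

Run daily, checks like these would flag a vanished referral the day it disappeared, rather than relying on ad hoc email chases.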

Remedial Actions and Recommendations 

In response to the reprimand, UHDB has taken several remedial steps, including conducting full internal and external reviews, contacting affected patients, creating a new Standard Operating Procedure (SOP), and introducing robotic process automation to reduce human error. 

The Commissioner recommended further actions for UHDB, emphasising the need for continuous support to affected data subjects, assessment and monitoring of new processes, and sharing lessons learned across the organisation to prevent future incidents. 

Implications and Conclusions 

This case serves as a stark reminder of the critical importance of data protection in the healthcare sector. It underscores the need for robust systems and processes to safeguard sensitive patient information, and the potential consequences of failing to comply with the UK GDPR. 

UHDB’s commitment to rectifying these issues is commendable, yet the incident raises broader questions about data management practices in the NHS and the healthcare sector at large.

The British Library Hack: A Chapter in Ransomware Resilience

In a stark reminder of the persistent threat of cybercrime, the British Library has confirmed a data breach incident that has led to the exposure of sensitive personal data, with materials purportedly up for auction online. An October intrusion by a notorious cybercrime group targeted the library, which is home to an extensive collection, including over 14 million books.

Recently, the ransomware group Rhysida claimed responsibility, publicly displaying snippets of sensitive data, and announcing the sale of this information for a significant sum of around £600k to be paid in cryptocurrency.

While the group boasts about the data’s exclusivity and sets a firm bidding deadline (today 27th November 2023), the library has only acknowledged a leak of what seems to be internal human resources documents. It has not verified the identity of the attackers nor the authenticity of the sale items. The cyber attack has significantly disrupted the library’s operations, leading to service interruptions expected to span several months.

In response, the library has strengthened its digital defences, sought expert cybersecurity assistance, and urged its patrons to update their login credentials as a protective measure. The library is working closely with the National Cyber Security Centre and law enforcement to investigate, but details remain confidential due to the ongoing inquiry.

The consequences of the attack have necessitated a temporary shutdown of the library's online presence. Physical locations, however, remain accessible. Updates can be found on the British Library's X (née Twitter) feed. The risk posed by Rhysida has drawn attention from international agencies, with recent advisories from the FBI and US cybersecurity authorities. The group has been active globally, with attacks on various sectors and institutions.

The British Library’s leadership has expressed appreciation for the support and patience from its community as it navigates the aftermath of the cyber attack.

What is a Ransomware Attack?

A ransomware attack is a type of malicious cyber operation where hackers infiltrate a computer system to encrypt data, effectively locking out the rightful users. The attackers then demand payment, often in cryptocurrency, for the decryption key. These attacks can paralyse organisations, leading to significant data loss and disruption of operations.

Who is Rhysida?

The Rhysida ransomware group first came to the fore in May of 2023, following the emergence of their victim support chat portal hosted via the TOR browser. The group identifies as a “cybersecurity team” who highlight security flaws by targeting victims’ systems and spotlighting the supposed potential ramifications of the involved security issues.

How to prevent a Ransomware Attack?

Hackers are becoming ever more sophisticated in the ways they target our personal data, as we have seen with recent banking scams. However, there are some measures we can implement, personally and within our organisations, to prevent a ransomware attack.

  1. Avoid Unverified Links: Refrain from clicking on links in spam emails or unfamiliar websites. Hackers frequently disseminate ransomware via such links, which, when clicked, can initiate the download of malware. This malware can then encrypt your data and hold it for ransom​​.

  2. Safeguard Personal Information: It’s crucial to never disclose personal information such as addresses, NI numbers, login details, or banking information online, especially in response to unsolicited communications​​.

  3. Educate Employees: Increasing awareness among employees can be a strong defence. Training should focus on identifying and handling suspicious emails, attachments, and links. Additionally, having a contingency plan in the event of a ransomware infection is important​​.

  4. Implement a Firewall: A robust firewall can act as a first line of defence, monitoring incoming and outgoing traffic for threats and signs of malicious activity. This should be complemented with proactive measures such as threat hunting and active tagging of workloads​​.

  5. Regular Backups: Maintain up-to-date backups of all critical data. In the event of a ransomware attack, having these backups means you can restore your systems to a previous, unencrypted state without having to consider ransom demands.

  6. Create Inventories of Assets and Data: Having inventories of the data and assets you hold allows you to have an immediate knowledge of what has been compromised in the event of an attack whilst also allowing you to update security protocols for sensitive data over time.

  7. Multi-Factor Authentication: Identifying legitimate users in more than one way ensures that you are only granting access to those intended. 

These strategies, used as part of a more comprehensive cybersecurity protocol, will significantly reduce the risk of falling victim to a ransomware attack. 
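To make item 7 concrete, here is a minimal sketch of time-based one-time passwords (TOTP, RFC 6238), the mechanism behind most authenticator apps. It uses only the Python standard library; in production you would use a maintained library and proper secret storage rather than rolling your own.

```python
import base64
import hmac
import struct
import time

def totp(secret_b32, for_time=None, step=30, digits=6):
    """Compute an RFC 6238 TOTP code from a base32-encoded shared secret."""
    key = base64.b32decode(secret_b32)
    counter = int((time.time() if for_time is None else for_time) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    offset = digest[-1] & 0x0F  # dynamic truncation, per RFC 4226
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def verify(secret_b32, submitted, window=1, step=30):
    """Accept codes within +/- `window` time steps to allow for clock drift."""
    now = time.time()
    return any(hmac.compare_digest(totp(secret_b32, now + d * step), submitted)
               for d in range(-window, window + 1))
```

The second factor only helps if it is independent of the first, so one-time codes should never travel over the same channel as the password they protect.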

Join us on our workshops "How to increase Cyber Security in your Organisation" and Cyber Security for DPOs, where we discuss all of the above and more, helping you create the right foundations for cyber resilience within your organisation. 

The NHS-Palantir Deal: A Pandora’s Box for Patient Privacy? 

The National Health Service (NHS) of England’s recent move to sign a £330 million deal with Palantir Technologies Inc. has set off alarm bells in the realm of patient privacy and data protection. Palantir, a data analytics company with roots in the U.S. intelligence and military sectors, is now at the helm of creating a mammoth NHS data platform. This raises critical questions: Is patient privacy the price of progress? 

The Controversial Contractor 

Palantir's pedigree of working closely with entities like the CIA, and its contribution to the UK Ministry of Defence, has drawn intense scrutiny to the NHS's decision. This association, coupled with its founder's contentious remarks about the NHS, casts a long shadow over the appointment. Critics highlight Palantir's controversial history, notably its involvement in supporting the US immigration enforcement's stringent policies under the Trump administration. The ethical ramifications of such affiliations are profound, given the sensitive nature of health data. Accenture, PwC, NECS and Carnall Farrar will all support Palantir, NHS England said on Tuesday. 

Data Security vs. Data Exploitation 

NHS England assures that the new "federated data platform" (FDP) will be a secure, privacy-enhancing technology that will revolutionise care delivery. The promise is a streamlined, efficient service with live data at clinicians' fingertips. However, concern about the potential for data exploitation looms large. Can a firm with a not-so-distant history of aiding in surveillance be trusted with the most intimate details of our lives—our health records? 
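It is worth being precise about what "anonymised" usually means in practice. Most platforms pseudonymise: direct identifiers are replaced with tokens that the data controller can still re-link. A toy sketch follows; the NHS-number values are made up, and this is an illustration of the general technique, not a description of how the FDP works.

```python
import hashlib
import hmac
import secrets

# Key held only by the data controller; whoever holds it can re-link records.
PEPPER = secrets.token_bytes(32)

def pseudonymise(nhs_number):
    """Replace a direct identifier with a keyed hash (HMAC-SHA256).
    Because re-identification remains possible with the key, this is
    pseudonymisation under UK GDPR, not true anonymisation."""
    return hmac.new(PEPPER, nhs_number.encode(), hashlib.sha256).hexdigest()[:16]
```

The distinction matters legally: pseudonymised data is still personal data under the UK GDPR, so the data protection obligations discussed above continue to apply in full.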

The Right to Opt-Out: A Right Denied? 

The debate intensifies around the right—or the apparent lack thereof—for patients to opt out of this data sharing. With the NHS stating that all data will be anonymised and used solely for “direct patient care,” they argue that an opt-out is not necessary. Yet, this has not quelled the concerns of privacy advocates and civil liberty groups who foresee a slippery slope towards a panopticon oversight of personal health information. 

Skepticism is further fuelled by the NHS’s troubled history with data projects, where previous attempts to centralise patient data have collapsed under public opposition. The fear that history might repeat itself is palpable, and the NHS’s ability to sway public opinion in favour of the platform remains a significant hurdle. 

Conclusion 

As we venture further into an age where data is king, the NHS-Palantir partnership is a litmus test for the delicate balance between innovation and privacy. The NHS’s venture is indeed ambitious, but it must not be deaf to the cacophony of concerns surrounding patient privacy. Transparency, robust data governance, and the right to opt out must not be side-lined in the pursuit of technological advancement. After all, when it comes to our personal health data, should we not have the final say in who holds the keys to our digital lives? 

Take a look at our highly popular Data Ethics Course. Places fill up fast so if you would like to learn more about this fascinating area, book your place now. 

UK Biobank’s Data Sharing Raises Alarm Bells

An investigation by The Observer has uncovered that the UK Biobank, a repository of health data from half a million UK citizens, has been sharing information with insurance companies. This development contravenes the Biobank’s initial pledge to keep this sensitive data out of the hands of insurers, a promise that was instrumental in garnering public trust at the outset. UK Biobank has since responded to the article, calling it “disingenuous” and “extremely misleading”. 

A Promise Made, Then Modified 

The UK Biobank was set up in 2006 as a goldmine for scientific discovery, offering researchers access to a treasure trove of biological samples and associated health data. With costs for access set between £3,000 and £9,000, the research derived from this data has been nothing short of revolutionary. However, the foundations of this scientific jewel are now being questioned. 

When the project was first announced, clear assurances were given that data would not be made available to insurance companies, mitigating fears that genetic predispositions could be used discriminatorily in insurance assessments. These assurances appeared in the Biobank’s FAQs and were echoed in parliamentary discussions. 

Changing Terms Amidst Grey Areas 

The Biobank contends that while it does strictly regulate data access, allowing only verified researchers to delve into its database, this includes commercial entities such as insurance firms if the research is deemed to be in the public interest. The boundaries of what constitutes “health-related” and “public interest” are now under scrutiny.   

However, according to the Observer investigation, evidence suggests that this nuance—commercial entities conducting health-related research—was not clearly communicated to participants, especially given the categorical assurances given previously. The UK Biobank categorically denies this, and has shared its consent form and information leaflet in response. 

Data Sharing: The Ethical Quandary 

This breach of the original promise has raised the ire of experts in genetics and data privacy, with Prof Yves Moreau highlighting the severity of the breach of trust. The concern is not just about the sharing of data but about the integrity of consent given by participants. The Biobank’s response indicates that the commitments made were outdated and that the current policy, which includes sharing anonymised data for health-related research, was made clear to participants upon enrolment. 

The Ripple Effect of Biobank’s Data Policies 

Further complicating matters is the nature of the companies granted access. Among them are ReMark International, a global insurance consultancy, Lydia.ai, a Canadian “insurtech” firm that wants to give people “personalised and predictive health scores”, and Club Vita, a longevity data analytics company. These companies have utilised Biobank data for projects ranging from disease prediction algorithms to assessing longevity risk factors. This raises the question of how one can ensure that such research is in fact in the public interest. Do we simply take a commercial entity’s word for it? UK Biobank says all research conducted is “consistent with being health-related and in the public interest” and that it has an expert data access committee to decide on any complex issues. But who checks the ethics of the ethics committee? The problems with this kind of self-regulation are self-evident. 

The Fallout and the Future 

This situation has led to a broader conversation about the ethical use of volunteered health data and the responsibility of custodians like the UK Biobank to uphold public trust. As technology evolves and the appetite for data grows across industries, the mechanisms of consent and transparency may need to be revisited.  The Information Commissioner’s Office is now considering the case, spotlighting the crucial need for clarity and accuracy in how organisations manage and utilise sensitive personal information. 

As the UK Biobank navigates these turbulent waters, the focus shifts to how institutions like it can maintain the delicate balance between facilitating scientific progress and safeguarding the privacy rights of individuals who contribute their personal data for the greater good. For the UK Biobank, regaining the trust of its participants and the public is now an urgent task, one that will require more than a careful review of policies: it will demand a reaffirmation of its commitment to ethical stewardship of the data entrusted to it. 

Take a look at our highly popular Data Ethics Course. Places fill up fast so if you would like to learn more about this fascinating area, book your place now.