Nathan Bent Joins Act Now Training Team 

Act Now Training is pleased to announce that Nathan Bent has joined its team of associates.

Nathan is a data protection specialist and a Certified Data Ethics Practitioner with the Open Data Institute (ODI). He has delivered over 40 courses in the past six months, covering Data Governance, Data Protection, Data Privacy, Data Security and Data Ethics. 

Throughout his career, Nathan has been recognised as a caring, joyful, and
people-oriented leader who is passionate about developing others and sharing knowledge. Nathan brings this passion and experience into every learning session, using practical, easy-to-understand examples, case studies, and real-life experiences to help each participant succeed. 

Nathan has worked as a Data Protection Officer, Data and Information Governance Manager, Chief Information Security Officer and Head of Data Governance and Technology in a variety of sectors, from engineering and energy to social housing and MedTech. Over the past 25 years, he has led, coached, and educated teams that deliver data management, complex data insights, forecasting, statistical analysis, big data, data visualisation, data ethics, and legal and MedTech systems. 

Nathan said: 

“I am so pleased to be joining the Act Now team whose values and ethos are so closely aligned to mine. I have a lifelong passion for learning and knowledge. My training sessions are known for being dynamic, full of enthusiasm and often filled with laughter. I have made it my mission over the years to make what are often dry or (dare I say) boring subjects, fun and engaging.” 

Nathan is our second recent appointment, alongside Dr. Cedric Krummes, and will help us continue to serve our clients and deliver training courses. He will deliver one-day workshops as well as our new AI Governance Practitioner Certificate. We warmly welcome Nathan to our team of dedicated and passionate trainers. 

Supporting Careers in Data Protection Through Apprenticeships 

In today’s digital landscape, data protection and information governance have become critical risk areas for organisations across all sectors. With increasing regulatory demands and evolving threats, the need for skilled professionals in this field has never been greater. Recognising this growing skills gap, Damar Training, with the support of Act Now Training, launched its innovative Data Protection and Information Governance Apprenticeship programme in late 2022, quickly establishing itself as the leading provider in England.

The programme was developed through extensive consultation with employers, including members of the apprenticeship Trailblazer Group, to ensure it would be commercially attractive, impactful, and of the highest quality. This collaborative approach has led to excellent engagement from employers and individuals, with 243 apprentices starting the programme to date, making Damar the largest provider of this apprenticeship standard in England.

A Flexible, Comprehensive Learning Journey

What sets Damar’s apprenticeship apart is its thoughtfully designed modular structure, with carefully sequenced six-week blocks of learning that cater to diverse learning styles and organisational needs. The gradual layering of technical content and learning activity, designed with the assistance of Act Now Training, ensures that apprentices from both public and private sectors receive an outstanding foundation in the knowledge, skills, and behaviours required for success in data protection roles.

The delivery model combines self-directed learning through engaging online resources with regular one-to-one coaching visits and group coaching sessions.
Extended technical workshops (underpinned by Act Now’s expertise) and quarterly review meetings provide additional support, while dedicated forums allow apprentices to stay updated with the latest developments, engage with peers, and consult with coaches.

This comprehensive approach has yielded impressive results. With a retention rate of 68%, an achievement rate of 65%, and an EPA pass rate of 95% – all above national averages – the programme demonstrates exceptional quality, particularly remarkable for a relatively new offering.

Industry-Leading Expertise

A key strength of Damar’s apprenticeship is its partnership with Act Now, an
award-winning data protection consultancy. This collaboration ensures that the programme’s content remains at the cutting edge of industry developments, including emerging areas such as Artificial Intelligence regulation.

Sarah Murray, Data Protection Officer at ClearData, highlights this benefit: 

“One of the particular stand-outs for me is the workshops. With the content supported by
Act Now, who have such a good reputation in this field, the workshops really put all of the theory into real-life practice.”

Real-World Impact for Employers and Apprentices

The programme serves some of the UK’s major employers, including Heathrow Airport, National Express, the BBC, Auto Trader, Betfred, and Dunelm, alongside various NHS Trusts, universities, government departments, and local councils.

For apprentices, the transformation goes beyond technical knowledge. Many begin with only basic data protection skills and limited confidence. Through the programme, they develop not only technical expertise but also a deeper understanding of the “why” behind data protection practices and the confidence to advise others with authority.

This growth translates into tangible career progression, with 99% of apprentices experiencing positive outcomes – 53% remaining in their current roles with enhanced skills, 18% securing permanent positions, and 28% gaining promotions or additional responsibilities. Some have even become data protection officers with overall responsibility for their organisation’s data protection function.

Employers benefit from immediate practical impacts. Apprentices have improved information assurance audits at Lincoln University, created artificial intelligence policies for Norfolk and Waveney Integrated Care Board, and developed triage request processes for data protection requirements at The Christie NHS Foundation Trust.

Stacey Lawrence, Data Protection Manager at Manchester Airport, emphasises this value: 

“The impact that both apprentices have brought to Manchester Airport has been huge. They work on the front line, to manage all enquiries, data protection breaches, and individual rights requests, and without them we simply wouldn’t be able to do the really sterling work that we do every day.”

A Future-Focused Approach

Damar continues to evolve the programme based on feedback from coaches, apprentices, and employers. Recent improvements include enhanced EPA preparation sessions, now embedded into group coaching. The company maintains close ties with the trailblazer group and leverages Act Now’s expertise to stay ahead of legislative developments.

With another 22 apprentices due to commence in April, the programme’s growth trajectory remains strong. Many employers, including Manchester Airport Group and Nottingham University Hospitals, are returning for their second or third data protection apprentice – perhaps the strongest testament to the programme’s value.

For organisations seeking to strengthen their data protection capabilities and individuals looking to build rewarding careers in this critical field, Damar Training’s Data Protection and Information Governance Apprenticeship offers a proven pathway to success.

If you would like to learn more about the DP and IG Apprenticeship, please get in touch.

AI in Local Government: Navigating the Legal Issues 

Artificial Intelligence is revolutionising many sectors, and local government is no exception. Councils are increasingly integrating AI to enhance service delivery, optimise resource management, and engage with citizens. AI use cases include: 

  • Infrastructure Maintenance and Management: Blackpool Council uses AI for road maintenance through Project Amber, employing AI-powered satellite imagery to detect road damage and potholes.  
  • Public Engagement: Newham Council uses Chatbot Max, a multilingual chatbot, to assist residents with parking permits and penalty charge queries. The council says that in six months, the chatbot handled over 10,000 questions, saved 84 hours in call time, and generated £40,000 in savings.  
  • Crime Prevention and Detection: Wolverhampton Council has installed AI-powered CCTV cameras to crack down on fly-tippers. The cameras have 360-degree vision and can recognise when someone is fly-tipping, sending an immediate report to the Council. 
  • Predictive Analytics for Social Services: In 2018 Hackney Council trialled the Early Help Predictive System. By analysing data on debt, housing, unemployment, school attendance, and domestic violence, the AI system profiled families to determine their need for intervention. Although this pilot programme was dropped a year later, there are many other AI tools which aim to help cash-strapped councils speed up social work. One such tool is Magic Notes, which records social work meetings and emails the social worker a transcript, summary and suggested actions for inclusion in case notes. 

Expect many more AI use cases soon, as the public sector responds to the Prime Minister’s recent speech, in which he pledged that the Government will use AI’s power to “turbocharge” the economy and improve public services. 

Legal Considerations  

While AI offers numerous benefits, several legal issues have to be navigated to ensure responsible and lawful use. These include: 

Data Protection and Privacy: Where personal data is used to train or deploy AI models, the GDPR applies. The transparency provisions and the requirement for a legal basis are of particular importance. In 2022, the Information Commissioner’s Office (ICO) issued a fine of more than £7.5 million to Clearview AI for GDPR breaches. This related to the way the company compiled its online database containing 20 billion images of people’s faces and data scraped from the internet. The company successfully appealed the fine, but the ICO and other GDPR regulators in the EU have issued clear warnings to AI companies to ensure they comply with the GDPR. 

Transparency and Explainability: The decision-making processes of AI systems can be opaque. Clear information about how AI systems operate and make decisions should be provided. The London Borough of Camden has co-created a Data Charter with residents to ensure clarity and accessibility regarding data use, including AI applications. They produced accessible communications and animated explainers to demystify AI processes for the public.  

Bias and Discrimination: AI systems trained on biased data can perpetuate existing inequalities. Last year, a black Uber Eats driver received a payout after “racially discriminatory” facial-recognition checks prevented him accessing the app to secure work. Councils must be vigilant in auditing AI algorithms to detect and mitigate biases. This involves regular assessments and adjustments to ensure AI applications promote fairness and equality. 

Intellectual Property and Copyright: The use of AI, especially Generative AI applications like ChatGPT, may involve the use of copyrighted materials, raising intellectual property concerns. In December, the Government launched a consultation on Copyright and Artificial Intelligence.  

Accountability and Liability: Determining liability when AI systems cause harm is a complex legal issue. Clear accountability frameworks must be established, ensuring that there is always human oversight of AI decisions. This includes defining who is responsible for AI-driven actions and implementing mechanisms for redress in cases of error. 

Regulatory Compliance: There is still no sign of the AI Bill which was mentioned in the King’s Speech. However, there is plenty of AI guidance for the public sector. The recently published AI Playbook for the UK Government updates and expands on the Generative AI Framework for HMG. It aims to “help government departments and public sector organisations harness the power of a wider range of AI technologies safely, effectively, and responsibly.”  

The adoption of AI in local government presents unique challenges, especially for compliance professionals. By developing a deeper understanding of AI, they can play a leading role in addressing the legal and ethical dilemmas posed by emerging AI technologies, as well as position themselves as forward-thinking leaders who can bridge the gap between law, ethics, and technology.  

Act Now recently launched the AI Governance Practitioner Certificate. This course is designed to equip compliance professionals with the essential knowledge and skills to navigate this transformative technology while upholding the highest standards of data protection and information governance.   

We are registering interest in this course which, subject to demand, will run in July, October and November. Register your interest now (no obligation).  

New AI Governance Practitioner Certificate

Artificial intelligence (AI) has seen huge advances in the last two years. A few years ago it was just a subject of geeky tech discussions; now it is playing a role in every aspect of our lives. Last month, the Prime Minister set out the Government’s plans to ‘unleash AI’ across the UK with the aim of boosting growth and delivering services more efficiently. 

For data protection professionals, the significance of understanding AI cannot be overstated. AI systems are data-hungry beasts; from predictive algorithms in healthcare to recommendation engines in e-commerce, these technologies process vast amounts of personal and non-personal data to deliver insights and functionality. However, the power of AI also brings challenges related to privacy, security, and ethical considerations. Key GDPR principles such as data minimisation, accountability, and transparency align closely with the ethical use of AI. However, the complexity of AI—with its opaque decision-making processes and reliance on intricate datasets—can pose unique challenges to compliance. 

By developing a deeper understanding of AI, data protection professionals can play a leading role in addressing the legal and ethical dilemmas posed by emerging AI technologies as well as position themselves as forward-thinking leaders who can bridge the gap between law, ethics, and technology. 

Building the AI Skillset 

Act Now is pleased to launch our new AI Governance Practitioner Certificate. 

This course is designed to equip data protection professionals with the essential knowledge and skills to navigate this transformative technology while upholding the highest standards of data protection and information governance.  

In just six modules, this immersive course will empower you to understand AI’s potential, address its challenges, and harness its power responsibly for public benefit. The course is tailored specifically for those working in information governance and data protection, offering practical insights and actionable strategies to integrate AI in the workplace and for the public benefit. From real-world case studies to the latest regulatory updates, you’ll gain a deep understanding of how to manage AI ethically, securely, and in compliance with emerging laws.   

By completing the course, you will gain the skills to: 

  1. Explain foundational AI concepts, including its technologies, applications, and key milestones in its evolution. 
  2. Identify real-world examples of AI risks and demonstrate an understanding of their legal and ethical dimensions. 
  3. Interpret the role of key legal and regulatory frameworks, such as GDPR and the EU AI Act, in governing AI systems. 
  4. Evaluate organisational strategies for ensuring transparency, accountability, and fairness in AI development. 
  5. Propose ethical and compliance-focused solutions to mitigate AI risks while balancing innovation and regulatory adherence. 
  6. Apply course concepts to analyse case studies and participate in informed discussions about AI’s role in society and industry. 

This course will also assist your organisation to comply with the EU AI Act’s AI literacy obligation, which came into force on 2nd February 2025. This requires providers and deployers of AI systems to ensure that their workforce has the skills and understanding needed to develop and use AI, and to appreciate its opportunities and risks. 

Register Your Interest 

We are registering interest in this course which, subject to demand, will run in July, October and November. Register your interest now (no obligation). 

UK Government Publishes AI Playbook

AI is in the news headlines almost on a daily basis. 

Last week, at a global summit in Paris, the UK and US refused to sign an international declaration on AI which pledges an “open”, “inclusive” and “ethical” approach to the technology’s development. The UK government said it had not been able to do so because of concerns about national security and “global governance” despite dozens of countries, including France, China and India, signing the declaration. The US Vice President, JD Vance, told world leaders that AI was “an opportunity that the Trump administration will not squander” and that “pro-growth AI policies” should be prioritised over safety. 

What a difference a few months make! In September 2024, the UK, US and EU signed the Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law. This was the world’s first AI treaty and included provisions to protect the public and their data, human rights, democracy and the rule of law.  

So where does this leave the UK’s plans for legislation? In January, the Prime Minister set out the Government’s plans to use AI across the UK, pledging to use AI’s power to ”turbocharge” the economy and improve public services. Last week it published the AI Playbook for the UK Government, which updates and expands on the Generative AI Framework for HMG. This updated guidance aims to “help government departments and public sector organisations harness the power of a wider range of AI technologies safely, effectively, and responsibly.” The foreword states:  

“The AI Playbook will support the public sector in better understanding what AI can and cannot do, and how to mitigate the risks it brings. It will help ensure that AI technologies are deployed in responsible and beneficial ways, safeguarding the security, wellbeing, and trust of the public we serve.”  

The playbook defines 10 common principles to guide the safe, responsible and effective use of AI in government and public sector organisations: 

Principle 1: You know what AI is and what its limitations are 

Principle 2: You use AI lawfully, ethically and responsibly 

Principle 3: You know how to use AI securely 

Principle 4: You have meaningful human control at the right stage 

Principle 5: You understand how to manage the AI life cycle 

Principle 6: You use the right tool for the job 

Principle 7: You are open and collaborative 

Principle 8: You work with commercial colleagues from the start 

Principle 9: You have the skills and expertise needed to implement and use AI 

Principle 10: You use these principles alongside your organisation’s policies and have the right assurance in place 

We know that the UK will at some point publish an AI Bill. The playbook gives us a clue about the key considerations and approach that could be adopted into UK AI legislation.  

By refusing to sign the Paris declaration and publishing the playbook, the UK is attempting to promote transparent and trustworthy AI whilst being careful not to upset the big US tech firms (who now have the ear of the Trump administration) or the administration itself, at a sensitive time for the UK/US trade relationship. Time will tell if this strategy is successful. 

Enjoy reading our blog? Help us reach 10,000 subscribers by subscribing today! 

Do you wish to keep abreast of AI developments? Do you need to sharpen your AI deployment skills? Join our forthcoming AI workshops, Artificial Intelligence: How to Implement Good Information Governance and The EU AI Act and UK Approach to Regulation. We can also help with your AI literacy training programme through our in-house customised training. Get in touch for a quote.   

Prohibited AI Systems under the EU AI Act 

This week we wrote about the first parts of the EU AI Act becoming effective on Sunday. One of these was a ban on prohibited AI systems. These are AI practices that are deemed unacceptable due to their potential risks to European values and fundamental rights. 

Yesterday, the European Commission published its Guidelines on Prohibited Artificial Intelligence (AI) Practices. They specifically address practices such as harmful manipulation, social scoring, and real-time remote biometric identification, amongst others.  
   
It is important to note that the guidelines are in draft and subject to formal approval. Nevertheless, they offer valuable insight and should be studied carefully by AI developers and users in the EU and beyond. This includes UK organisations, due to the extraterritorial nature of the EU AI Act.  

Breach of the Prohibited AI Systems provisions of the EU AI Act carries a maximum fine of €35 million or 7% of total worldwide annual turnover (whichever is higher). However, the fining provisions do not come into force until 2nd August 2025. 
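
To illustrate how that cap works, the higher of the two limbs applies; the sketch below is a minimal Python illustration, and the example turnover figures are purely hypothetical:

```python
def max_fine_eur(worldwide_annual_turnover_eur: float) -> float:
    """Maximum penalty for prohibited-practice breaches: the higher of €35m or 7% of turnover."""
    return max(35_000_000, 0.07 * worldwide_annual_turnover_eur)

# Hypothetical company with €400m turnover: 7% is €28m, so the €35m limb applies.
print(max_fine_eur(400_000_000))    # 35000000
# Hypothetical company with €2bn turnover: 7% is €140m, which exceeds €35m.
print(max_fine_eur(2_000_000_000))  # 140000000.0
```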

Do you wish to keep abreast of AI developments? Do you need to sharpen your AI deployment skills? Join our forthcoming AI workshops, Artificial Intelligence: How to Implement Good Information Governance and The EU AI Act and UK Approach to Regulation. We can also help with your AI literacy training programme through our in-house customised training. Get in touch for a quote.

What’s the Problem with DeepSeek? 

DeepSeek, the Chinese equivalent of ChatGPT, is making big waves in the AI world. Since its launch, it has quickly become the top-rated free app on Apple’s App Store, challenging the notion that the US leads the world in AI development. 

DeepSeek’s Chinese developers released the latest version of its app on 20th January (the day of US President Trump’s inauguration) rapidly gaining attention from AI experts and the tech industry. Powered by the open-source DeepSeek-V3 model, it was reportedly developed for less than $6 million, a fraction of the billions spent by its US rivals. Recently, OpenAI and other companies pledged to invest $500 billion in US AI infrastructure. President Trump announced this as “the largest AI infrastructure project in history” to maintain technological leadership in the US. However, DeepSeek’s emergence has impacted US tech stocks. On Monday the Nasdaq index dropped 3%, with chip-making giant Nvidia losing almost $600 billion in market value—the biggest one-day loss in US stock market history.  

Privacy Issues 

While the Chinese media and open-source AI proponents may be celebrating, DeepSeek’s rise necessitates scrutiny regarding its privacy and security risks. Some of these are:  

  • Data Collected: DeepSeek gathers sensitive personal data through natural conversations. 
  • Potential for Influence and Manipulation: As an AI chatbot, DeepSeek can shape opinions and conduct influence campaigns. 
  • Data Storage and Accessibility: Data stored on servers in China is fully accessible to the Chinese government. 
  • Level of User Engagement: Users may unknowingly reveal personal or confidential information through interactive conversations. 

Many of these issues are the same as those raised about TikTok, which was temporarily banned in the US last week. 

Organisations need to closely monitor the AI models employees use; the US Navy recently advised its members to avoid using DeepSeek due to potential security and ethical concerns. It is also important to establish clear policies, procedures, and guidance, especially regarding GDPR compliance.  

Yesterday, the Irish Data Protection Commission confirmed to TechCrunch that it has sent a note to DeepSeek requesting details concerning how the data of citizens in Ireland is processed by the company. The Italian data protection regulator has sent a similar note to the company, and the DeepSeek mobile app no longer appears in either the Google or Apple app store in Italy. 

Meanwhile (and with a straight face) OpenAI has accused DeepSeek of distilling knowledge from its models, breaching its terms of use, and infringing its intellectual property. OpenAI is itself facing numerous AI copyright lawsuits! 

2025 has just started and the AI news feed is already buzzing.  

Join our Artificial Intelligence and Machine Learning, How to Implement Good Information Governance workshop.   

Enjoy reading our blog? Help us reach 10,000 subscribers by subscribing today! 

Government Announces Plans to ‘Unleash AI’ Across the UK 

In a major policy speech on Monday, the Prime Minister set out the Government’s plans to use AI across the UK with the aim of boosting growth and delivering services more efficiently. The speech was part of a response to a report by Matt Clifford, a tech entrepreneur, who was commissioned to devise the AI Opportunities Action Plan.

The key Government proposals are: 

  • Adopting all 50 recommendations made by the Clifford report. 
  • A new approach to building the infrastructure required to develop AI, including building more data centres. 
  • The creation of a series of AI “growth zones”, where planning approvals for data centres will be accelerated and there will be improved access to the energy grid. 
  • A 20-fold increase in the UK’s compute capacity by 2030, including by building a new supercomputer. 
  • Promoting more AI use in the public sector to enable its workers to spend less time doing admin and more time delivering services. Some examples were given of how AI could be used; for example to inspect roads and spot potholes around the country, and in hospitals for tasks such as diagnosing cancer more quickly.  

The full list of proposals and timescales can be read here.

Alongside Monday’s announcement, the government revealed tech companies had committed a total of £14 billion of investment in AI infrastructure in the UK, which they expect to create 13,250 jobs. But there are serious challenges ahead in terms of the cost of the proposals, amid concerns over borrowing and the falling value of the pound, as well as the data security and privacy implications.  

There is also the challenge of regulatory uncertainty. The UK does not have any AI legislation. On this subject the Government’s response is not very specific, stating only that ensuring “we have the right regulatory regime that addresses risks and actively supports innovation will drive AI trust and adoption across the economy. The government will set out its approach on AI regulation and will act to ensure that we have a competitive copyright regime that supports both our AI sector and the creative industries.” 

When an AI Bill does finally appear, it is likely to focus on the production of large language models (LLMs), the general-purpose technology that underpins AI products such as OpenAI’s ChatGPT and Microsoft’s Copilot. This is the area where there is most controversy with copyright owners, such as authors, complaining that their work has been unfairly used to train AI models. In December, the Government launched a consultation on Copyright and Artificial Intelligence.  

AI Literacy 

In his speech, Keir Starmer pledged to use the power of AI to “turbocharge” the economy and improve public services. This requires a workforce with the skills and understanding to develop and use AI, and to appreciate its opportunities and risks, i.e. AI literacy. The EU AI Act includes specific AI literacy requirements, under Article 4, which come into force on 2nd February 2025: 

“Providers and deployers of AI systems shall take measures to ensure, to their best extent, a sufficient level of AI literacy of their staff and other persons dealing with the operation and use of AI systems on their behalf, taking into account their technical knowledge, experience, education and training and the context the AI systems are to be used in, and considering the persons or groups of persons on whom the AI systems are to be used.” 

It will be interesting to see if any proposed UK AI legislation contains the same requirement for AI literacy.  

With the Government’s AI plans going full steam ahead it is essential that Data Protection Officers and compliance professionals develop their understanding of AI so that they can shape the future conversation and ensure that new AI tech strikes the right balance between innovation and risk management.  

Enjoy reading our blog? Help us reach 10,000 subscribers by subscribing today!

Join our Artificial Intelligence and Machine Learning, How to Implement Good Information Governance workshop.  

Records Management and AI 

In 2025, Artificial Intelligence (AI) will continue to redefine the way we live, work, and interact. From improving healthcare outcomes to optimising supply chains, AI projects hold the promise of unprecedented advancements. Accuracy, explainability, and transparency are often cited, amongst others, as key concepts in discussions about successful implementation of AI projects. However, one critical component is sometimes overlooked: good records management.  

Data Integrity and Quality 

The foundation of any AI system, especially Generative AI, is data. AI algorithms rely on vast amounts of data to learn, make predictions, and generate insights. Therefore, the accuracy, completeness, and reliability of this data are paramount. Good records management ensures that data is systematically collected, organised, and maintained throughout its lifecycle. By implementing rigorous records management practices, organisations can avoid the pitfalls of incomplete or inaccurate data, which can lead to flawed AI models and unreliable outcomes. 

In the healthcare sector, AI models are increasingly used to diagnose diseases and recommend treatment plans. The accuracy of these models depends on the quality of medical records and patient data. Poor records management can result in missing or erroneous data, potentially jeopardising patient safety and leading to incorrect diagnoses. Conversely, well-managed records provide a robust dataset for training AI algorithms, enhancing their accuracy and reliability. 
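
To make this concrete, the sketch below shows the kind of basic completeness and duplication check that well-managed records make straightforward to run before any model training begins. It is a minimal Python example; the file name and column names are purely illustrative.

```python
import pandas as pd

def check_training_data(df: pd.DataFrame, required_cols: list[str]) -> list[str]:
    """Return a list of data-quality issues found in a candidate training set."""
    issues = []
    # Records management should guarantee the fields the model needs are present.
    missing_cols = [c for c in required_cols if c not in df.columns]
    if missing_cols:
        issues.append(f"Missing columns: {missing_cols}")
    # Flag incomplete records rather than silently training on them.
    incomplete = df[required_cols].isna().any(axis=1).sum() if not missing_cols else None
    if incomplete:
        issues.append(f"{incomplete} rows have missing values in required fields")
    # Duplicate records skew the model towards over-represented individuals.
    dupes = df.duplicated().sum()
    if dupes:
        issues.append(f"{dupes} duplicate rows")
    return issues

# Illustrative use: refuse to train until the record set passes basic checks.
records = pd.read_csv("patient_records.csv")  # hypothetical file
problems = check_training_data(records, ["patient_id", "diagnosis", "treatment"])
if problems:
    raise ValueError("Training data failed quality checks: " + "; ".join(problems))
```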

Compliance with Legal and Regulatory Requirements 

Good records management practices are essential for ensuring compliance with legal and regulatory requirements. AI projects often involve the collection and processing of personal data. The GDPR imposes stringent requirements on how organisations handle this data. By maintaining accurate and up-to-date records of data collection, usage, storage, and disposal, organisations can demonstrate their commitment to data protection and privacy. Additionally, effective records management enables organisations to respond promptly to data access requests, audits, and inquiries, further enhancing compliance and transparency. 
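
As an illustration, an entry in a record of processing activities for an AI project might be kept in a structured, queryable form along these lines. This is a minimal Python sketch; the fields shown are illustrative rather than a prescribed template.

```python
from dataclasses import dataclass, asdict
from datetime import date
import json

@dataclass
class ProcessingRecord:
    """One illustrative entry in a record of processing activities for an AI project."""
    activity: str          # e.g. "AI triage of service requests"
    purpose: str           # why the data is processed
    lawful_basis: str      # the Article 6 basis relied upon
    data_categories: str   # what personal data is involved
    retention_period: str  # how long records are kept
    last_reviewed: str     # ISO date of the last review

record = ProcessingRecord(
    activity="AI triage of service requests",
    purpose="Route residents' enquiries to the correct team",
    lawful_basis="Public task (Article 6(1)(e))",
    data_categories="Name, contact details, enquiry text",
    retention_period="2 years after case closure",
    last_reviewed=date.today().isoformat(),
)

# Structured entries like this make audits and subject access requests quicker to answer.
print(json.dumps(asdict(record), indent=2))
```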

Data Security and Risk Management 

Data breaches and cyber-attacks are significant threats to AI projects, as they can compromise the integrity and confidentiality of sensitive information. Good records management practices play a crucial role in mitigating these risks. By implementing robust data governance frameworks, organisations can establish clear protocols for data access, storage, and protection. 

Effective records management involves the use of encryption, access controls, and regular audits to safeguard data against unauthorised access and breaches. In the event of a security incident, well-managed records provide a clear trail of data activity, enabling organisations to quickly identify and address vulnerabilities. This proactive approach to data security not only protects the organisation’s assets but also fosters trust among stakeholders. 
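
A simple illustration of such a trail is an append-only access log, as sketched below in Python; the file name, user and dataset identifiers are purely hypothetical.

```python
import csv
import hashlib
from datetime import datetime, timezone

LOG_FILE = "data_access_log.csv"  # hypothetical location

def log_data_access(user: str, dataset: str, action: str) -> None:
    """Append one tamper-evident entry to the data access log."""
    timestamp = datetime.now(timezone.utc).isoformat()
    # A hash of the row makes later alteration easier to detect.
    digest = hashlib.sha256(f"{timestamp}|{user}|{dataset}|{action}".encode()).hexdigest()
    with open(LOG_FILE, "a", newline="") as f:
        csv.writer(f).writerow([timestamp, user, dataset, action, digest])

# Every read or export of training data leaves a trail that can be reviewed after an incident.
log_data_access("j.smith", "patient_records_v3", "export-for-training")
```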

Facilitating Data Integration and Interoperability 

AI projects often require the integration of data from multiple sources, including internal databases, external partners, and public datasets. Good records management practices facilitate seamless data integration and interoperability, ensuring that data from diverse sources can be combined and analysed effectively. 

By standardising data formats, metadata, and classification schemes, records management enables organisations to harmonise disparate data sets and create a unified data repository. This interoperability is essential for the development of comprehensive AI models that leverage diverse data inputs to generate more accurate and holistic insights. Moreover, well-managed records provide a clear audit trail, allowing organisations to trace the provenance and lineage of data used in AI projects. 
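
By way of example, the sketch below shows one way disparate sources might be mapped onto a common schema while preserving lineage. It is a minimal Python example; the source systems, file names and field mappings are invented for illustration.

```python
import pandas as pd

# Illustrative column mappings: each source system uses its own field names,
# so we normalise everything to a shared schema before combining.
COLUMN_MAP = {
    "crm":     {"CustomerRef": "person_id", "DOB": "date_of_birth"},
    "housing": {"tenant_id":   "person_id", "birth_date": "date_of_birth"},
}

def to_common_schema(df: pd.DataFrame, source: str) -> pd.DataFrame:
    """Rename columns, standardise dates and tag provenance for one source."""
    out = df.rename(columns=COLUMN_MAP[source])
    out["date_of_birth"] = pd.to_datetime(out["date_of_birth"], errors="coerce")
    out["source_system"] = source  # keep the lineage of every row
    return out

# Harmonised frames can then be combined into a single repository.
crm = to_common_schema(pd.read_csv("crm_export.csv"), "crm")              # hypothetical files
housing = to_common_schema(pd.read_csv("housing_export.csv"), "housing")
combined = pd.concat([crm, housing], ignore_index=True)
```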

Enhancing Accountability and Transparency 

Transparency and accountability are critical factors in the ethical deployment of AI systems. Stakeholders, including customers, regulators, and the public, demand visibility into how AI models are developed, trained, and used. Good records management practices provide the documentation and audit trails necessary to demonstrate accountability and transparency. 

For example, the development of an AI model for credit scoring requires documentation of the data sources, algorithms, and decision-making processes used. Effective records management ensures that this information is systematically recorded and readily accessible for review. In cases where AI decisions are challenged or questioned, well-maintained records provide the evidence needed to explain and justify the outcomes, thereby enhancing accountability and trust. 
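
For illustration, such documentation might be captured as a structured record along the following lines. This is a minimal Python sketch; the field names are assumptions, not a prescribed standard.

```python
import json

# Illustrative documentation record for a credit-scoring model; all values are hypothetical.
model_record = {
    "model_name": "credit_score_v2",
    "owner": "Data Science Team",
    "data_sources": ["loan_applications_2020_2024", "repayment_history"],
    "algorithm": "gradient boosted trees",
    "training_date": "2025-01-15",
    "intended_use": "Support (not replace) human credit decisions",
    "known_limitations": ["Under-represents applicants with thin credit files"],
    "last_bias_audit": "2025-02-01",
    "approved_by": "Model Risk Committee",
}

# Keeping this record alongside the model gives reviewers the evidence needed
# to explain and justify outcomes if a decision is challenged.
with open("credit_score_v2_card.json", "w") as f:
    json.dump(model_record, f, indent=2)
```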

Good records management is the linchpin of successful AI implementation. It ensures data integrity and quality, facilitates compliance with legal and regulatory requirements, enhances data security, enables data integration and interoperability, and promotes accountability and transparency. As AI continues to evolve and reshape industries, organisations must prioritise robust records management practices to unlock the full potential of their AI initiatives. By doing so, they can build a solid foundation for sustainable and ethical AI deployment, ultimately driving innovation and creating value for all stakeholders. 

Get ahead of the game with our Information and Records Management Practitioner Certificate.  Whether you are a records manager, Freedom of Information Officer or Data Protection Officer this practitioner level certificate will teach you the theory of records management alongside practical hands-on application. The next course starts in two weeks with a special introductory price. Places are limited, so please book now to avoid disappointment.  

New International Treaty on AI Signed 

In September the UK, EU, and US signed the Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law (AI Convention). It is the world’s first AI treaty including provisions to protect the public and their data, human rights, democracy and the rule of law. 

The Convention requires signatory countries to monitor the development of AI and ensure any technology using AI is managed within strict parameters. It also commits countries to act against activities which fall outside of these parameters and to tackle the misuse of AI models which pose a risk to public services and the wider public. 

The Convention sets out 3 over-arching safeguards: 

  • protecting human rights, including ensuring people’s data is used appropriately, their privacy is respected and AI does not discriminate against them 
  • protecting democracy by ensuring countries take steps to prevent public institutions and processes being undermined 
  • protecting the rule of law, by putting the onus on signatory countries to regulate AI-specific risks, protect its citizens from potential harms and ensure it is used safely 

The Convention does not apply directly; legislators in each jurisdiction have to implement it into their domestic law and there is a wide degree of freedom over how it is interpreted and applied. The European Commission has said the Convention will be implemented in the EU via the recently enacted EU AI Act which will become enforceable in stages over the next few years.  

The UK Position 

The UK has no AI regulation (yet). Despite media reports, the recent King’s Speech did not include a bill to regulate AI. The King said that the government would “seek to establish the appropriate legislation to place requirements on those working to develop the most powerful artificial intelligence models”. We expect a government consultation to be announced soon. However, it is likely that new AI requirements will be introduced in other forthcoming legislation e.g. the Product Safety and Metrology Bill. The published summary of this bill states that it aims to “support growth, provide regulatory stability, and deliver greater protection for consumers by addressing new product risks and opportunities, allowing the UK to keep pace with technological advances such as AI.” Managing AI in the context of product safety aligns with certain aspects of the EU AI Act.  

When an AI Bill does finally appear, it is likely to focus on the production of large language models (LLMs), the general-purpose technology that underpins AI products such as OpenAI’s ChatGPT and Microsoft’s Copilot. As the Labour election manifesto stated: 

“Labour will ensure the safe development and use of AI models by introducing binding regulation on the handful of companies developing the most powerful AI models and by banning the creation of sexually explicit deepfakes.” 

Whatever shape the UK’s AI regulation takes, the government will have to ensure that the AI Convention is implemented. Shabana Mahmood, Lord Chancellor and Justice Secretary, said:  

“Artificial intelligence has the capacity to radically improve the responsiveness and effectiveness of public services, and turbocharge economic growth. However, we must not let AI shape us – we must shape AI. This convention is a major step to ensuring that these new technologies can be harnessed without eroding our oldest values, like human rights and the rule of law.” 

If you are a DPO needing to stay abreast of the latest developments and best practices in AI implementation, join our Artificial Intelligence and Machine Learning, How to Implement Good Information Governance workshop.