AI Governance Practitioner Certificate: Final Course for 2025 

Act Now is pleased to report that the next AI Governance Practitioner Certificate course, starting in September, is fully booked. There are still a few places available on the next course, starting in October, which is the final one in 2025. 

The AI Governance Practitioner Certificate is designed to equip Information Governance professionals with the essential knowledge and skills to navigate AI deployment within their organisations. As we detailed in our previous blog “What is the role of IG Professionals in AI Governance?”, IG professionals should be aware of how this technology works so that they can help to ensure that there is responsible deployment from an IG perspective, just as would be the case with any new technology.   

So far, thirty delegates from a variety of backgrounds have successfully completed the course and given great feedback. Delegates have complimented us on the scope of the syllabus and the delivery style. Cora Suckley, Information Governance Service Manager, Digital Health and Care Wales said: 

“The AI Governance Practitioner Certificate exceeded my expectations. The content was comprehensive and well-structured, successfully bridging the gap between technical AI concepts and essential governance frameworks. The course delved into responsible AI principles, risk management, compliance, policy and ethical considerations, equipping me with practical tools to navigate the evolving regulatory landscape. 

The instructor was excellent and made the sessions interactive, highly engaging and applicable, providing real-world examples. This course provides a solid foundation for implementing AI governance in a meaningful and effective way.” 

The final course for 2025 starts in October. Places are limited so book early to avoid disappointment.  

AI Governance Practitioner Certificate: First Cohort Successfully Completes Course 

Act Now is pleased to report that the first cohort of its new AI Governance Practitioner Certificate has successfully completed the course. 

This course is designed to equip Information Governance professionals with the essential knowledge and skills to navigate AI deployment within their organisations. As we detailed in our previous blog “What is the role of IG Professionals in AI Governance?”, IG professionals should be aware of how this technology works so that they can help to ensure that there is responsible deployment from an IG perspective, just as would be the case with any new technology.   

The first course ran over a four-week period in May and June. The cohort consisted of ten delegates from the health sector in Wales, all of whom successfully completed the course assessment in July. 

The course was extremely well received by the delegates who complimented us on the scope of the syllabus and the delivery style: 

“I took a huge amount from the course which will help shape the development of processes for us internally in the coming months.” Dave Parsons, WASPI Code Manager (Wales Accord on the Sharing of Personal Information)  

“This was a superb course with a lot of information delivered at a carefully managed rate that encouraged discussion and reflection.  Literacy in AI and its application is vital – without it we cannot comprehend the ever changing level of IG threat and risk.” MA, Digital Health and Care Wales

“The training was very good. The instructor was also very knowledgeable about the subject.” HP, Digital Health and Care Wales

Cora Suckley, Information Governance Service Manager, Digital Health and Care Wales said: 

“The AI Governance Practitioner Certificate exceeded my expectations. The content was comprehensive and well-structured, successfully bridging the gap between technical AI concepts and essential governance frameworks. The course delved into responsible AI principles, risk management, compliance, policy and ethical considerations, equipping me with practical tools to navigate the evolving regulatory landscape. 

The instructor was excellent and made the sessions interactive, highly engaging and applicable, providing real-world examples. This course provides a solid foundation for implementing AI governance in a meaningful and effective way.” 

Two more cohorts are currently completing the course. The next course starts in September and has a few places left.  

What is the Role of IG Professionals in AI Governance? 

The rapid rise of AI deployment in the workplace brings a host of legal and ethical challenges. AI governance is essential to address these challenges and ensure AI systems are transparent, accountable, and aligned with organisational values. 

AI governance requires a multidisciplinary approach involving, amongst others, IT, legal, compliance and industry specialists. IG professionals also possess a unique skill set that makes them key stakeholders in the governance process. Here’s why they should actively position themselves to play a key role in AI governance within their organisations. 

AI Governance is Fundamentally a Data Governance Issue 

At its core, AI is a data-driven technology. The fairness and reliability of AI models depend on the quality, accuracy, and management of data. If AI systems are trained on poor-quality or biased data, they can produce flawed and discriminatory outcomes. (See Amnesty International’s report into police data and algorithms.)  

IG professionals specialise in ensuring that data is accurate, well-structured, and fit for purpose. Without strong data governance, organisations risk deploying AI systems that amplify biases, make inaccurate predictions, or fail to comply with regulatory requirements. 

Regulatory and Compliance Expertise is Critical 

AI governance is increasingly being shaped by regulatory frameworks around the world. The EU AI Act and regulations and guidance from other jurisdictions highlight the growing emphasis on AI accountability, transparency, and risk management. 

IG professionals have expertise in interpreting legislation (such as the GDPR, PECR and the DPA, amongst others), which positions them to help organisations navigate the complex legal landscape surrounding AI. They can ensure that AI governance frameworks comply with data protection principles, consumer rights, and ethical AI standards, reducing the risk of legal penalties and reputational damage. 

Managing AI Risks and Ensuring Ethical AI Practices 

AI introduces new risks, including algorithmic bias, privacy violations, security vulnerabilities, and explainability challenges. Left unchecked, these risks can undermine trust in AI and expose organisations to significant operational and reputational harm. 

IG professionals excel in risk management (after all, that is what DPIAs are about). They are trained to assess and mitigate risks related to data security, data integrity, and compliance, which directly translates to AI governance. By working alongside IT and ethics teams, they can help establish clear policies, accountability structures, and risk assessment frameworks to ensure AI is deployed responsibly. 

Bridging the Gap Between IT, Legal, and Business Functions 

One of the biggest challenges in AI governance is the lack of alignment between different business functions. AI development is often led by technical teams, while compliance and risk management sit with legal and governance teams. Without effective collaboration, governance efforts can become fragmented or ineffective. 

IG professionals act as natural bridges between these groups. Their work already involves coordinating across departments to align data policies, privacy standards, and regulatory requirements. By taking an active role in AI governance, they can ensure cross-functional collaboration, helping organisations balance innovation with compliance. 

Addressing Data Privacy and Security Concerns 

AI often processes vast amounts of sensitive personal data, making privacy and security critical concerns. Organisations must ensure that AI systems comply with data protection laws, implement robust security measures, and uphold individuals’ rights over their data. 

IG and Data Governance professionals are well-versed in data privacy principles, data minimisation, encryption, and access controls. Their expertise is essential in ensuring that AI systems are designed and deployed with privacy-by-design principles, reducing the risk of data breaches and regulatory violations. 

AI Governance Should Fit Within Existing Frameworks 

Organisations already have established governance structures for data management, records retention, compliance, and security. Instead of treating AI governance as an entirely new function, it should be integrated into existing governance models. 

IG and Data Governance professionals are skilled at implementing governance frameworks, policies, and best practices. Their experience can help ensure that AI governance is scalable, sustainable, and aligned with the organisation’s broader data governance strategy. 

Proactive Involvement Prevents Being Left Behind 

If IG professionals do not step up, AI governance may be driven solely by IT, data science, or business teams. While these functions bring valuable expertise, they may overlook regulatory, ethical, and risk considerations. Fundamentally, as IG professionals, our goal is to ensure organisations are using data and any new technology responsibly. 

So we are not saying that IG and DP professionals should become the new AI overlords. But by proactively positioning themselves as key stakeholders in AI governance, IG and Data Governance professionals ensure that organisations take a holistic approach – one that balances innovation, compliance, and risk management. Waiting to be invited to the AI governance conversation risks being sidelined in decisions that will have long-term implications for data governance and organisational risk. 

Final Thoughts 

To reiterate, AI governance should not be the sole responsibility of IG and Data Governance professionals – it requires a collaborative, cross-functional approach. However, their expertise in data integrity, privacy, compliance, and risk management makes them essential players in the AI governance ecosystem. 

As organisations increasingly rely on AI-driven decision-making, IG and Data Governance professionals must ensure that these systems are accountable, transparent, and legally compliant. By stepping up now, they can shape the future of AI governance within their organisations and safeguard them from regulatory, ethical, and operational pitfalls. 

Our new six-module AI Governance Practitioner Certificate will empower you to understand AI’s potential, address its challenges, and harness its power responsibly for the public benefit.  

New AI Governance Practitioner Certificate

Artificial intelligence (AI) has seen huge advances in the last two years. A few years ago it was just a subject of geeky tech discussions; now it is playing a role in every aspect of our lives. Last month, the Prime Minister set out the Government’s plans to ‘unleash AI’ across the UK with the aim of boosting growth and delivering services more efficiently. 

For data protection professionals, the significance of understanding AI cannot be overstated. AI systems are data-hungry beasts; from predictive algorithms in healthcare to recommendation engines in e-commerce, these technologies process vast amounts of personal and non-personal data to deliver insights and functionality. However, the power of AI also brings challenges related to privacy, security, and ethical considerations. Key GDPR principles such as data minimisation, accountability, and transparency align closely with the ethical use of AI. Yet the complexity of AI—with its opaque decision-making processes and reliance on intricate datasets—can pose unique challenges to compliance. 

By developing a deeper understanding of AI, data protection professionals can play a leading role in addressing the legal and ethical dilemmas posed by emerging AI technologies as well as position themselves as forward-thinking leaders who can bridge the gap between law, ethics, and technology. 

Building the AI Skillset 

Act Now is pleased to launch our new AI Governance Practitioner Certificate. 

This course is designed to equip data protection professionals with the essential knowledge and skills to navigate this transformative technology while upholding the highest standards of data protection and information governance.  

In just six modules, this immersive course will empower you to understand AI’s potential, address its challenges, and harness its power responsibly for public benefit. The course is tailored specifically for those working in information governance and data protection, offering practical insights and actionable strategies to integrate AI in the workplace. From real-world case studies to the latest regulatory updates, you’ll gain a deep understanding of how to manage AI ethically, securely, and in compliance with emerging laws.   

By completing the course, you will gain the skills to: 

  1. Explain foundational AI concepts, including its technologies, applications, and key milestones in its evolution. 
  2. Identify real-world examples of AI risks and demonstrate an understanding of their legal and ethical dimensions. 
  3. Interpret the role of key legal and regulatory frameworks, such as GDPR and the EU AI Act, in governing AI systems. 
  4. Evaluate organisational strategies for ensuring transparency, accountability, and fairness in AI development. 
  5. Propose ethical and compliance-focused solutions to mitigate AI risks while balancing innovation and regulatory adherence. 
  6. Apply course concepts to analyse case studies and participate in informed discussions about AI’s role in society and industry. 

This course will also assist your organisation to comply with the EU AI Act’s AI literacy obligation, which came into force on 2nd February 2025. It requires providers and deployers of AI systems to ensure that their workforce has the skills and understanding needed to develop and use AI, and to appreciate its opportunities and risks. 

Register Your Interest 

We are registering interest in this course which, subject to demand, will run in July, October and November. Register your interest now (no obligation). 

UK Government Publishes AI Playbook

AI is in the news headlines almost on a daily basis. 

Last week, at a global summit in Paris, the UK and US refused to sign an international declaration on AI which pledges an “open”, “inclusive” and “ethical” approach to the technology’s development. The UK government said it had not been able to do so because of concerns about national security and “global governance” despite dozens of countries, including France, China and India, signing the declaration. The US Vice President, JD Vance, told world leaders that AI was “an opportunity that the Trump administration will not squander” and that “pro-growth AI policies” should be prioritised over safety. 

What a difference a few months make! In September 2024, the UK, US and EU signed the Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law. This was the world’s first AI treaty and included provisions to protect the public and their data, human rights, democracy and the rule of law.  

So where does this leave the UK’s plans for legislation? In January, the Prime Minister set out the Government’s plans to use AI across the UK, pledging to use AI’s power to “turbocharge” the economy and improve public services. Last week it published the AI Playbook for the UK Government, which updates and expands on the Generative AI Framework for HMG. This updated guidance aims to “help government departments and public sector organisations harness the power of a wider range of AI technologies safely, effectively, and responsibly.” The foreword states:  

“The AI Playbook will support the public sector in better understanding what AI can and cannot do, and how to mitigate the risks it brings. It will help ensure that AI technologies are deployed in responsible and beneficial ways, safeguarding the security, wellbeing, and trust of the public we serve.”  

The playbook defines 10 common principles to guide the safe, responsible and effective use of AI in government and public sector organisations: 

Principle 1: You know what AI is and what its limitations are 

Principle 2: You use AI lawfully, ethically and responsibly 

Principle 3: You know how to use AI securely 

Principle 4: You have meaningful human control at the right stage 

Principle 5: You understand how to manage the AI life cycle 

Principle 6: You use the right tool for the job 

Principle 7: You are open and collaborative 

Principle 8: You work with commercial colleagues from the start 

Principle 9: You have the skills and expertise needed to implement and use AI 

Principle 10: You use these principles alongside your organisation’s policies and have the right assurance in place 

We know that the UK will at some point publish an AI Bill. The playbook gives us a clue about the key considerations and approach that could be adopted into UK legislation.  

By refusing to sign the Paris declaration while publishing the playbook, the UK is attempting to promote transparent and trustworthy AI whilst being careful not to upset the Trump administration and the big US tech firms (who now have its ear) at a sensitive time for the UK/US trade relationship. Time will tell if this strategy is successful. 

Enjoy reading our blog? Help us reach 10,000 subscribers by subscribing today! 

Do you wish to keep abreast of AI developments? Do you need to sharpen your AI deployment skills? Join our forthcoming AI workshops, Artificial Intelligence: How to Implement Good Information Governance and The EU AI Act and UK Approach to Regulation. We can also help with your AI literacy training programme through our in-house customised training. Get in touch for a quote.