AI Governance Practitioner Certificate: Final Course for 2025 

Act Now is pleased to report that the September AI Governance Practitioner Certificate course is fully booked. There are still a few places available on the October course, which is the final one in 2025. 

The AI Governance Practitioner Certificate is designed to equip Information Governance professionals with the essential knowledge and skills to navigate AI deployment within their organisations. As we detailed in our previous blog “What is the role of IG Professionals in AI Governance?”, IG professionals should be aware of how this technology works so that they can help to ensure that there is responsible deployment from an IG perspective, just as would be the case with any new technology.   

So far thirty delegates, from a variety of backgrounds, have successfully completed the course and given great feedback. Delegates have complimented us on the scope of the syllabus and the delivery style. Cora Suckley, Information Governance Service Manager, Digital Health and Care Wales, said: 

“The AI Governance Practitioner Certificate exceeded my expectations. The content was comprehensive and well-structured, successfully bridging the gap between technical AI concepts and essential governance frameworks. The course delved into responsible AI principles, risk management, compliance, policy and ethical considerations, equipping me with practical tools to navigate the evolving regulatory landscape. 

The instructor was excellent and made the sessions interactive, highly engaging and applicable, providing real-world examples. This course provides a solid foundation for implementing AI governance in a meaningful and effective way.” 

The final course for 2025 starts in October. Places are limited so book early to avoid disappointment.  

AI Governance Practitioner Certificate: First Cohort Successfully Completes Course 

Act Now is pleased to report that the first cohort of its new AI Governance Practitioner Certificate has successfully completed the course. 

This course is designed to equip Information Governance professionals with the essential knowledge and skills to navigate AI deployment within their organisations. As we detailed in our previous blog “What is the role of IG Professionals in AI Governance?”, IG professionals should be aware of how this technology works so that they can help to ensure that there is responsible deployment from an IG perspective, just as would be the case with any new technology.   

The first course ran over a four-week period in May and June. The cohort consisted of ten delegates from the health sector in Wales. They all successfully completed the course assessment in July. 

The course was extremely well received by the delegates who complimented us on the scope of the syllabus and the delivery style: 

“I took a huge amount from the course which will help shape the development of processes for us internally in the coming months.” Dave Parsons, WASPI Code Manager (Wales Accord on the Sharing of Personal Information)  

“This was a superb course with a lot of information delivered at a carefully managed rate that encouraged discussion and reflection.  Literacy in AI and its application is vital – without it we cannot comprehend the ever changing level of IG threat and risk.” MA, Digital Health and Care Wales

“The training was very good. The instructor was also very knowledgeable about the subject.” HP, Digital Health and Care Wales

Cora Suckley, Information Governance Service Manager, Digital Health and Care Wales said: 

“The AI Governance Practitioner Certificate exceeded my expectations. The content was comprehensive and well-structured, successfully bridging the gap between technical AI concepts and essential governance frameworks. The course delved into responsible AI principles, risk management, compliance, policy and ethical considerations, equipping me with practical tools to navigate the evolving regulatory landscape. 

The instructor was excellent and made the sessions interactive, highly engaging and applicable, providing real-world examples. This course provides a solid foundation for implementing AI governance in a meaningful and effective way.” 

Two more cohorts are currently completing the course. The next course starts in September and has a few places left.  

When AI Misses the Line: What Wimbledon 2025 Teaches Us About Deploying AI in the Workplace 

This year’s Wimbledon Tennis Championships are not just a showcase for elite athleticism but also a high-profile test of Artificial Intelligence. For the first time in the tournament’s 148-year history, all line calls across its 18 courts are made entirely by Hawk-Eye Live, an AI-assisted system that has replaced human line judges. This follows, amongst others, the semi-automated offside technology deployed in last year’s football Champions League after its success at the Qatar World Cup.  

The promise? Faster decisions, greater consistency, and reduced human error. 
The reality? Multiple malfunctions, public apologies, and growing mistrust among players and fans (not to mention losing the ‘best dressed officials’ in sport). 

What Went Wrong? 

  • System Failure Mid-Match: During a high-stakes women’s singles match between Anastasia Pavlyuchenkova and Sonay Kartal, the line-calling system was accidentally switched off for several points. No alerts were raised, and the match proceeded without accurate line calls for those points. Wimbledon officials later admitted human error was to blame, not the AI. 
  • Misclassification Errors: In the men’s quarter-final between Taylor Fritz and Karen Khachanov, Hawk-Eye incorrectly called a rally forehand a “fault,” apparently confusing it with a serve. Play was halted and the point was replayed, leaving fans and players confused and frustrated. 
  • User Experience Failures: Multiple players, including Emma Raducanu and Jack Draper, complained that some calls were “clearly wrong” and that the system’s announcements were too quiet to hear amid crowd noise. Some players called for the return of human line judges, citing a lack of trust in the technology.  

Lessons for AI and IG Professionals 

Wimbledon’s AI hiccup offers more than a headline; it surfaces deep issues around trust, oversight, and operational design that are relevant to any AI deployment in the workplace. Here are the key lessons: 

1. Automation ≠ Autonomy 

The Wimbledon system is not truly autonomous; it relies on human operators to activate it before each match. When staff forgot to do so, the AI didn’t intervene or alert anyone. This exposes a major pitfall: automated systems are only as reliable as their orchestration layers. 

Governance Principle: Ensure clear workflows and audit trails around when and how AI systems are initiated, paused, or overridden. Build in fail-safe triggers and status checks to prevent silent failures. 
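The orchestration point above can be sketched in code. This is a purely hypothetical Python illustration (Hawk-Eye's actual architecture is not public; all names are invented): a thin wrapper that refuses to return decisions while the system is inactive and records every event in an audit trail, so a forgotten activation fails loudly instead of silently.

```python
from datetime import datetime, timezone

class InactiveSystemError(RuntimeError):
    """Raised when the AI is asked to decide while not activated."""

class MonitoredAISystem:
    """Wraps an AI decision function with an activation check and audit trail."""

    def __init__(self, decide):
        self._decide = decide      # the underlying AI decision function
        self._active = False
        self.audit_log = []        # (timestamp, event, detail) tuples

    def _log(self, event, detail):
        self.audit_log.append((datetime.now(timezone.utc), event, detail))

    def activate(self, operator):
        self._active = True
        self._log("activated", operator)

    def deactivate(self, operator):
        self._active = False
        self._log("deactivated", operator)

    def decide(self, event):
        # Fail-safe: never silently skip a call while inactive.
        if not self._active:
            self._log("blocked", event)
            raise InactiveSystemError("System inactive; escalate to a human official")
        result = self._decide(event)
        self._log("decision", result)
        return result
```

The design choice being illustrated is simply that the status check lives inside the decision path, not in a separate manual checklist, so an un-activated system cannot produce a silent gap in coverage.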

2. Build in Redundancy and Exception Handling 

AI systems excel at pattern recognition in controlled environments but can fail spectacularly at edge cases. Wimbledon’s AI was likely trained on thousands of hours of ball trajectories – but it still confused a forehand rally shot with a serve under unusual conditions. 

Governance Principle: Plan for edge case management. When the AI encounters uncertainty, it should either defer to human review or trigger a fallback protocol.  
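The fallback idea can be sketched as a simple confidence threshold (again a hypothetical illustration; production systems would use richer uncertainty measures than a single score):

```python
def call_line(classifier, observation, threshold=0.9):
    """Accept the model's call only when it is confident enough;
    otherwise defer to human review (the fallback protocol)."""
    label, confidence = classifier(observation)
    if confidence >= threshold:
        return {"call": label, "source": "ai"}
    # Edge case: the model is unsure, so no automated call is issued.
    return {"call": None, "source": "human_review",
            "reason": f"low confidence ({confidence:.2f})"}
```

In governance terms, the threshold is itself a policy decision that should be documented and reviewed, not a number buried in code.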

3. Usability is a Core Component of Accuracy 

Even when the AI was functioning correctly, players couldn’t always hear the line calls due to low audio volume. What good is a precise call if the user can’t perceive it? 

Governance Principle: Don’t separate accuracy from usability. A technically correct output must be understandable, accessible, and actionable to its end users. Invest in UI/UX design early in the AI lifecycle. 

4. Transparency Builds Trust 

Wimbledon’s initial response (vague statements and slow clarifications) only fuelled player frustration. Trust was eroded not just because of the error, but because of how it was handled. 

Governance Principle: When deploying AI, especially in high-stakes environments, build a culture of transparent accountability. Log decisions, explain anomalies, and communicate clearly when things go wrong. 

5. Hybrid Systems Are Often More Effective Than Pure AI 

While Wimbledon has fully replaced line judges with AI, there’s a strong case for a hybrid model. A combination of automated systems with empowered human oversight could preserve both accuracy and human judgment. 

Governance Principle: Consider augmented intelligence models, where AI supports rather than replaces human decision-makers. This ensures operational continuity and enables learning from both machine and human feedback. 

6. Respect Context and Culture 

Wimbledon isn’t just any tournament; it’s steeped in tradition, where human line judges are part of the spectacle. Removing them altered the tournament’s character, sparking emotional backlash from players and spectators alike. 

Governance Principle: Understand the organisational and cultural context where AI is deployed. Technology doesn’t operate in a vacuum. Change management, stakeholder engagement, and empathy are as important as algorithms. 

The problems with Wimbledon’s AI line-calling system are symptoms of incomplete design thinking. Whether you’re deploying AI in HR analytics, document classification, or customer service, the Wimbledon experience shows that trust isn’t just built on data; it’s built on reliability, clarity, and human-centred design. 

In a world increasingly mediated by automation, we must remember: AI doesn’t replace the need for governance. It raises the stakes for getting it right. And we just wish it had been around for the “Hand of God” goal.

Are you looking to enhance your career with an AI governance qualification? Our AI Governance Practitioner Certificate is designed to equip compliance professionals with the essential knowledge and skills to navigate this transformative technology while upholding the highest standards of data protection and information governance. The first course was fully booked, and we have added more dates.

The New Data (Use and Access) Act 2025 

The Data (Use and Access) Act 2025 received Royal Assent on 19th June 2025. It is important to note that the new Act will not replace current UK data protection legislation. Rather it will amend the UK GDPR as well as the Privacy and Electronic Communications (EC Directive) Regulations 2003 (PECR) and the Data Protection Act 2018. Most of these amendments will commence in stages, 2 to 12 months after Royal Assent. Exact dates for each measure will be set out in commencement regulations. 

The Bill was introduced into Parliament in October last year. It was trailed in the King’s Speech in July (under its old name of the “Digital Information and Smart Data Bill”) with His Majesty announcing that there would be “targeted reforms to some data laws that will maintain high standards of protection but where there is currently a lack of clarity impeding the safe development and deployment of some new technologies.” However, this statement of intent does not match the reality; many of the core provisions are a “cut and paste” of the Data Protection and Digital Information (No. 2) Bill (“DP Bill”), which was dropped by the Conservative Government in the Parliamentary “wash up” stage before last year’s snap General Election. 

Key Provisions 

Let’s examine the key provisions of the new Act.  

Smart Data: The Act retains the provisions from the DP Bill that will enable the creation of a legal framework for Smart Data. This involves companies securely sharing customer data, upon the customer’s (business or consumer) request, with authorised third-party providers (ATPs) who can enhance the customer data with broader, contextual ‘business’ data. These ATPs will provide the customer with innovative services to improve decision making and engagement in a market. Open Banking is the only current example of a regime that is comparable to a ‘Smart Data scheme’. The Act will give such schemes a statutory footing, from which they can grow and expand.  

Digital Identity Products: Just like its predecessor, the Act contains provisions aimed at establishing digital verification services including digital identity products to help people quickly and securely identify themselves when they use online services e.g. to help with moving house, pre-employment checks and buying age restricted goods and services. It is important to note that this is not the same as compulsory digital ID cards as some media outlets have reported. 

Research Provisions: The Act keeps the DP Bill’s provisions that clarify that companies can use personal data for research and development projects, as long as they follow data protection safeguards.  

Legitimate Interests: The Act retains the concept of ‘recognised legitimate interests’ under Article 6 of the UK GDPR: specific purposes for personal data processing, such as national security, emergency response, and safeguarding, for which Data Controllers will be exempt from conducting a full “Legitimate Interests Assessment” when processing personal data.  

Subject Access Requests: The Act makes it clear that Data Controllers only have to make reasonable and proportionate searches when someone asks for access to their personal data. 

Automated Decision Making: Like the DP Bill, the Act seeks to limit the right, under Article 22 of the UK GDPR, for a data subject not to be subject to automated decision making or profiling to only cases where Special Category Data is used. Under new Article 22A, a decision would qualify as being “based solely on automated processing” if there was “no meaningful human involvement in the taking of the decision”. This could give the green light to companies to use AI techniques on personal data scraped from the internet for the purposes of pre-employment background checks. 

International Transfers: The Act maintains most of the DP Bill’s international transfer provisions. There will be a new approach to the test for adequacy applied by the UK Government to countries (and international organisations) and when Data Controllers are carrying out a Transfer Impact Assessment (TIA). The threshold for this new “data protection test” will be whether a jurisdiction offers protection that is “not materially lower” than under the UK GDPR. 

Health and Social Care Information: The Act maintains, without any changes, the provisions that establish consistent information standards for health and adult social care IT systems in England, enabling the creation of unified medical records accessible across all related services. 

PECR Changes: One of the most significant changes, copied from the DP Bill, is the increase in fines for breaches of PECR from £500,000 to UK GDPR levels; meaning organisations could face fines of up to £17.5m or 4% of global annual turnover (whichever is higher) for the most serious infringements. Other changes include allowing cookies to be used without consent for the purposes of web analytics and to install automatic software updates, and extending the “soft opt-in” for electronic marketing to charities.  

A full list of the changes to the UK data protection regime can be read on the ICO website.  

What is not in the new Act? 

Most of the controversial parts of the DP Bill have not made it into the Act. These include: 

  • Replacing the terms “manifestly unfounded” or “excessive” requests, in Article 12 of the UK GDPR, with “vexatious” or “excessive” requests. Explanation and examples of such requests would also have been included.  
  • Exempting all controllers and processors from the duty to maintain a Record of Processing Activities (ROPA), under Article 30, unless they are carrying out high risk processing activities.  
  • The “strategic priorities” mechanism, which would have allowed the Secretary of State to set binding priorities for the Information Commissioner. 
  • The requirements for the Information Commissioner to submit codes of practice to the Secretary of State for review and recommendations.  

The UK’s adequacy status under the EU GDPR now expires on 27th December following the recent announcement of a six-month extension. Whilst the EU will commence a formal review of adequacy now that the Act has received Royal Assent, nothing in it will jeopardise the free flow of personal data between the EU and the UK. The situation would perhaps have been different had the DP Bill made it on to the statute books.  

AI and Copyright 

Much of the delay to the Bill’s passing was caused by an issue which was not originally intended to be addressed in it: the use of copyright works to train AI. Like the monster plant in Little Shop of Horrors, AI has an insatiable appetite; for data though rather than food. AI applications need a constant supply of data to train (and improve) their output algorithms. This obviously concerns copyright holders such as musicians and writers whose work may be used to train AI models to produce similar output, without the former receiving any financial compensation. A number of copyright infringement lawsuits are set to hit the courts soon. Amongst them, Getty Images is suing Stability AI, accusing it of using Getty images to train its Stable Diffusion system, which can generate images from text inputs. Similar lawsuits have been launched in the US by novelists and news outlets. 

During the passage of the Bill through Parliament, there was strong disagreement between the Lords and the Commons over an amendment introduced by the crossbench peer and former film director Beeban Kidron. The amendment would have required AI developers to be transparent with copyright owners about using their material to train AI models. 400 British musicians, writers and artists, including Sir Paul McCartney, signed a letter urging the Government to adopt the amendment. They argued that failing to do so would mean them “giving away” their work to tech firms.  

In the end, Baroness Kidron dropped her amendment following repeated rejections in the Commons. We expect this issue to raise its head again soon. The Government’s consultation on AI and copyright ended in February. Amongst other options, it proposed giving copyright holders the right to opt out of their works being used to train AI. However, the music industry believes that such a measure would offer insufficient protection for copyright holders. In an interview with the BBC, Sir Elton John described the government as “absolute losers” and said he feels “incredibly betrayed” over the Government’s plans. 

Once the Government publishes its response to the copyright consultation, it will have to consider how to take the matter forward. Whether this comes in the form of a new copyright bill or an AI regulation bill, expect more parliamentary wrangling as well as celebrity interviews.  

Data protection professionals need to assess the changes to the UK data protection regime. Our half-day workshop will explore the new Act in detail, giving you an action plan for compliance. A revised UK GDPR Handbook is now available incorporating the changes made by the DUA Act.

The Data (Use and Access) Bill Ready for the Statute Books 

The Data (Use and Access) Bill has cleared the final hurdle in Parliament and will soon become the Data (Use and Access) Act 2025 following Royal Assent.  

The new Act will amend the UK GDPR as well as PECR and the Data Protection Act 2018. The key changes are summarised in our blog post here. Most of these are not particularly controversial and were in the Data Protection and Digital Information Bill, which failed to make it through the Parliamentary “wash up” stage when the General Election was announced last year. 

Much of the delay to the passing of the Bill was caused by amendments proposed by Baroness Kidron in the House of Lords. She wanted more protection for artists whose data is often used to train AI models, especially Generative AI. Her amendment would have required developers to be transparent with copyright owners about using their material to train AI models. 400 British musicians, writers and artists signed a letter arguing that the Government’s failure to adopt the amendment would mean them “giving away” their work to tech firms. In the end Baroness Kidron, following repeated rejections of her amendment in the House of Commons during the “ping pong” stage, decided to withdraw gracefully. Expect this issue to come up again when the government eventually brings forth AI legislation as mentioned in the King’s Speech. 

We expect most of the substantive provisions to come into force a few months after Royal Assent. Plenty of time for us to update the UK GDPR Handbook.

Data protection professionals need to assess the changes to the UK data protection regime. A revised UK GDPR Handbook is now available incorporating the changes made by the DUA Act.

Visit Us at the IRMS Conference 2025  

We are excited to announce that Act Now Training will be exhibiting at the IRMS Conference (“The Peaky Path to Progress”) in Birmingham next week. 

If you are attending the conference, we invite you to stop by our exhibitor stand. Here is what awaits you (in addition to the visual delight of our special Peaky Blinders themed stand!): 

Training Course Vouchers – For IRMS Delegates Only! 
We are offering exclusive conference-only discounts on our most popular training courses. Whether you’re looking to upskill in AI, data ethics, records management or FOI compliance, we’ve got a course tailored for your goals. 

Exclusive Bags 

Last year’s bags were a must have for any fashion-conscious information governance professional. This year our bags have been designed with a Peaky Blinders theme. Our way of saying thank you for being part of the IRMS community. 

Expert Advice on Training Pathways 

Not sure which training track is right for you or your team? Want to develop your expertise in AI Governance? Our friendly team will be on hand to chat about your goals and help you map out the best learning path; whether you’re just starting or aiming for advanced certification.

Let’s Talk Learning 

This year’s conference theme is all about connection, innovation, and the future of information governance – and we are here to help you be at the forefront. Come and chat with us about how our training can support your professional development, boost your team’s capability, and help your organisation stay compliant and competitive. 

We can’t wait to meet you at #IRMS25!

Article 15 GDPR and “Meaningful Information” about Automated Decision-Making: What does this mean for AI? 

Article 15 of the EU and UK GDPR not only gives Data Subjects the right to obtain their personal data from the Data Controller but also the right to receive additional information about the processing. This includes: 

 “the existence of automated decision-making, including profiling, referred to in Article 22(1) and (4) and, at least in those cases, meaningful information about the logic involved, as well as the significance and the envisaged consequences of such processing for the data subject.” 

A recent ruling by the European Court of Justice (ECJ) sheds light on the concept of “meaningful information” and will have implications for those deploying AI systems. The case in question, C-203/22 Dun & Bradstreet Austria GmbH, concerns an Austrian mobile telecom operator. The company refused to enter into a contract with a customer due to their poor credit score. This decision was based on an automated credit evaluation provided by a third-party credit agency. 

The customer requested access to the information held by the credit agency so that they could understand the decision. The customer was dissatisfied with the disclosed information and so took legal action to demand further clarification on the logic behind the automated decision-making process. The core issue was whether the credit agency was obligated to provide more detailed information about the automated process under Article 15(1)(h) GDPR (as quoted above). The agency argued that doing so would expose trade secrets. However, the court ruled that it must provide “meaningful information about the logic involved” as required by GDPR. 

The Enforcement Court in Austria, tasked with enforcing the ruling, referred the following questions to the ECJ: 

  1. Does “meaningful information about the logic involved” require the controller to provide a comprehensive explanation of the procedures and principles used to come to a specific decision? 
  2. In cases where the controller argues that the requested information involves third-party data protected by the GDPR or trade secrets, is the controller obliged to submit the potentially protected information to supervisory authorities or courts for review? 

Meaningful Information 

In response to the first question, the ECJ confirmed that the phrase “meaningful information about the logic involved” fundamentally refers to all relevant details concerning the automated decision-making process. This includes an explanation of the procedures and principles used to arrive at the decision. 

While the ECJ made it clear that “meaningful information” does not require the disclosure of complex algorithms, it does require a sufficiently detailed explanation of the decision-making process. It emphasised that, in line with Articles 13(2)(f) and 14(2)(g) of the GDPR, which establish transparency requirements, the information must be clear, concise, and easily understandable. Data Subjects should be able to comprehend how their personal data is being processed. The right of access enshrined in Article 15 of the GDPR allows individuals to verify the accuracy and lawfulness of the processing of their personal data, which is a crucial safeguard under Article 22(3) that governs automated decision-making and profiling. 

Trade Secrets  

On the second question, the ECJ struck a delicate balance between Data Subjects’ right to access their data and the protection of third-party rights, such as trade secrets. It reiterated that while data protection is a fundamental right, it must be weighed against intellectual property protections as outlined in Recital 63 of the GDPR. 

The ECJ said that if providing access to personal data could violate the rights of third parties, such as revealing trade secrets, the controller must assess whether it is possible to disclose the information without infringing on third party rights. In cases of conflict, the issue must be referred to the relevant supervisory authority or court to decide on an appropriate solution. 

Importantly, the ECJ ruled that no Member State can impose a blanket ban on disclosing business or trade secrets, as doing so would undermine the GDPR’s requirement for a balanced approach to competing rights. In situations where access requests are contested, controllers are required to provide relevant information to supervisory authorities or courts, enabling an informed decision based on the principle of proportionality. 

So what are the implications of this ECJ ruling for AI systems? 

While the ruling specifically focusses on the EU GDPR, it underscores the growing importance of transparency in data processing practices, especially when implementing automated decision-making processes. Organisations using AI for automated decision-making must ensure transparency by providing data subjects with clear, understandable explanations of how decisions are made, even if complex algorithms are involved. Developers must design systems that can deliver “meaningful information” about the logic behind automated outcomes, while deployers must ensure this information is communicated effectively to individuals. Transparency is also a key theme of the recently enacted EU AI Act.

Act Now recently launched the AI Governance Practitioner Certificate. This course is designed to equip compliance professionals with the essential knowledge and skills to navigate this transformative technology being implemented within their organisations while upholding the highest standards of data protection and information governance. 

New AI Governance Practitioner Certificate: Dates Published 

Act Now is pleased to publish the 2025 cohort dates for our new AI Governance Practitioner Certificate.  

This course is designed to equip Information Governance professionals with the essential knowledge and skills to navigate AI deployment within their organisations. As we detailed in our previous blog “What is the role of IG Professionals in AI Governance?”, AI implementation is already here. IG professionals should be aware of how this technology works so that they can help to ensure that there is responsible deployment from an IG perspective, just as would be the case with any new technology.  

The course is run over four days and the details of the upcoming cohorts and dates are below. The first cohort in May is already fully booked. 

May: 28th May, 29th May, 18th June, 19th June (Fully Booked) 

June: 27th June, 4th July, 11th July, 18th July  

July: 29th July, 5th August, 12th August, 19th August 

September: 18th September, 25th September, 2nd October, 9th October 

October: 29th October, 5th November, 12th November, 19th November 

We are currently offering a £100 discount for the month of May (for any cohort) on the published price of this course. Please quote the code “Art100” when booking.  

What is the Role of IG Professionals in AI Governance? 

The rapid rise of AI deployment in the workplace brings a host of legal and ethical challenges. AI governance is essential to address these challenges and ensure AI systems are transparent, accountable, and aligned with organisational values. 

AI governance requires a multidisciplinary approach involving, amongst others, IT, legal, compliance and industry specialists. IG professionals also possess a unique skill set that makes them key stakeholders in the governance process. Here’s why they should actively position themselves to play a key role in AI governance within their organisations. 

AI Governance is Fundamentally a Data Governance Issue 

At its core, AI is a data-driven technology. The fairness and reliability of AI models depend on the quality, accuracy, and management of data. If AI systems are trained on poor-quality or biased data, they can produce flawed and discriminatory outcomes. (See Amnesty International’s report into police data and algorithms.)  

IG professionals specialise in ensuring that data is accurate, well-structured, and fit for purpose. Without strong data governance, organisations risk deploying AI systems that amplify biases, make inaccurate predictions, or fail to comply with regulatory requirements. 

Regulatory and Compliance Expertise is Critical 

AI governance is increasingly being shaped by regulatory frameworks around the world. The EU AI Act and regulations and guidance from other jurisdictions highlight the growing emphasis on AI accountability, transparency, and risk management. 

IG professionals have expertise in interpreting legislation (such as GDPR, PECR and DPA amongst others) which positions them to help organisations navigate the complex legal landscape surrounding AI. They can ensure that AI governance frameworks comply with data protection principles, consumer rights, and ethical AI standards, reducing the risk of legal penalties and reputational damage. 

Managing AI Risks and Ensuring Ethical AI Practices 

AI introduces new risks, including algorithmic bias, privacy violations, security vulnerabilities, and explainability challenges. Left unchecked, these risks can undermine trust in AI and expose organisations to significant operational and reputational harm. 

IG professionals excel in risk management (after all, that is what DPIAs are about). They are trained to assess and mitigate risks related to data security, data integrity, and compliance, which directly translates to AI governance. By working alongside IT and ethics teams, they can help establish clear policies, accountability structures, and risk assessment frameworks to ensure AI is deployed responsibly. 

Bridging the Gap Between IT, Legal, and Business Functions 

One of the biggest challenges in AI governance is the lack of alignment between different business functions. AI development is often led by technical teams, while compliance and risk management sit with legal and governance teams. Without effective collaboration, governance efforts can become fragmented or ineffective. 

IG professionals act as natural bridges between these groups. Their work already involves coordinating across departments to align data policies, privacy standards, and regulatory requirements. By taking an active role in AI governance, they can ensure cross-functional collaboration, helping organisations balance innovation with compliance. 

Addressing Data Privacy and Security Concerns 

AI often processes vast amounts of sensitive personal data, making privacy and security critical concerns. Organisations must ensure that AI systems comply with data protection laws, implement robust security measures, and uphold individuals’ rights over their data. 

IG and Data Governance professionals are well-versed in data privacy principles, data minimisation, encryption, and access controls. Their expertise is essential in ensuring that AI systems are designed and deployed with privacy-by-design principles, reducing the risk of data breaches and regulatory violations. 

AI Governance Should Fit Within Existing Frameworks 

Organisations already have established governance structures for data management, records retention, compliance, and security. Instead of treating AI governance as an entirely new function, it should be integrated into existing governance models. 

IG and Data Governance professionals are skilled at implementing governance frameworks, policies, and best practices. Their experience can help ensure that AI governance is scalable, sustainable, and aligned with the organisation’s broader data governance strategy. 

Proactive Involvement Prevents Being Left Behind 

If IG professionals do not step up, AI governance may be driven solely by IT, data science, or business teams. While these functions bring valuable expertise, they may overlook regulatory, ethical, and risk considerations. Fundamentally, as IG professionals, our goal is to ensure organisations are using data and any new technology responsibly. 

So we are not saying that IG and DP professionals should become the new AI overlords. But by proactively positioning themselves as key stakeholders in AI governance, IG and Data Governance professionals ensure that organisations take a holistic approach – one that balances innovation, compliance, and risk management. Waiting to be invited to the AI governance conversation risks being sidelined in decisions that will have long-term implications for data governance and organisational risk. 

Final Thoughts 

To reiterate, AI governance should not be the sole responsibility of IG and Data Governance professionals – it requires a collaborative, cross-functional approach. However, their expertise in data integrity, privacy, compliance, and risk management makes them essential players in the AI governance ecosystem. 

As organisations increasingly rely on AI-driven decision-making, IG and Data Governance professionals must ensure that these systems are accountable, transparent, and legally compliant. By stepping up now, they can shape the future of AI governance within their organisations and safeguard them from regulatory, ethical, and operational pitfalls. 

Our new six-module AI Governance Practitioner Certificate will empower you to understand AI’s potential, address its challenges, and harness its power responsibly for the public benefit.  

Act Now Launches AI Governance Practitioner Certificate for the Middle East 

The Middle East has emerged as a dynamic force in the global AI landscape, making substantial strides in AI deployment and initiatives. Examples include: 

  • In the UAE, the Advanced Technology Research Council has developed the Falcon series of large language models, which have been integrated into sectors like healthcare and adopted internationally.   
  • Saudi Arabia, under its Vision 2030, has invested $40 billion in AI development, focusing on smart cities and energy.  
  • Qatar is expanding its AI infrastructure with Ooredoo, a leading telecom company, investing QR2 billion to enhance its data centres.  

The Middle East’s commitment to AI innovation has seen a parallel focus on governance frameworks. The UAE AI Charter, introduced in 2024, outlines 12 guiding principles emphasising transparency, inclusivity and accountability in AI development.​ In Saudi Arabia, the Saudi Data and Artificial Intelligence Authority has issued AI Ethics Principles and Generative AI Guidelines, whilst the KSA government recently launched a consultation on the Global AI Hub Law. 

As AI technologies become increasingly integrated into societal frameworks, the role of governance professionals becomes paramount. Understanding AI governance is essential for several reasons: 

  • Ethical Oversight: AI systems must be developed and deployed ethically to prevent biases and ensure fairness. Governance professionals are instrumental in establishing frameworks that uphold ethical standards.​ 
  • Regulatory Compliance: With nations implementing AI-related regulations and guidelines, professionals must navigate these legal landscapes to ensure compliance and mitigate risks.​ 
  • Public Trust: Transparent and accountable AI practices foster public trust, which is crucial for the widespread adoption of AI technologies.​ 
  • Strategic Leadership: Professionals equipped with AI governance knowledge can lead initiatives that align technological advancements with societal values and objectives.​ 

Building the AI Skillset  

For compliance professionals in the Middle East, this is an opportune moment to acquire expertise in AI governance, ensuring that AI technologies are developed and deployed responsibly, ethically, and in alignment with the region’s strategic goals. There is also a professional development opportunity; compliance professionals can position themselves as forward-thinking leaders who can bridge the gap between law, ethics, and technology. 

With these objectives in mind, Act Now is pleased to launch our new AI Governance Practitioner Certificate (MENA). This course is designed to equip you with the essential knowledge and skills to navigate this transformative technology within their organisations while upholding the highest standards of data protection and information governance.   

In just six modules, this immersive course will empower you to understand AI’s potential, address its challenges, and harness its power responsibly for public benefit. From real-world case studies to the latest regulatory updates, you’ll gain a deep understanding of how to manage AI ethically, securely, and in compliance with emerging laws.    

By completing the course, you will gain the skills to:  

  1. Explain foundational AI concepts, including its technologies, applications, and key milestones in its evolution.  
  2. Identify real-world examples of AI risks and demonstrate an understanding of their legal and ethical dimensions.  
  3. Interpret the role of key legal and regulatory frameworks, such as the UAE and KSA PDPL, GDPR and the EU AI Act, in governing AI systems.  
  4. Evaluate organisational strategies for ensuring transparency, accountability, and fairness in AI development.  
  5. Propose ethical and compliance-focused solutions to mitigate AI risks while balancing innovation and regulatory adherence.  
  6. Apply course concepts to analyse case studies and participate in informed discussions about AI’s role in society and industry.  

This new course builds on the success of our UAE and KSA Data Protection Officer certificates. It will run in July and September, and we are also able to deliver it on an in-house customised basis. Please get in touch to learn more.