During the AI for Business Leaders Summit series, Heather Gentile, executive director of product for watsonx.governance at IBM Data and AI, made it clear that any effort to tap the potential of artificial intelligence must ensure the data feeding the AI engine is clean, to mitigate the risks associated with early use of the technology, including hallucinations and bias.
Defining AI governance in 2024
Gartner research VP analyst Svetlana Sicular defines AI governance as creating policies, assigning decision rights, and ensuring organisational accountability for risks and decisions for the application and use of artificial intelligence techniques.
Gartner cautions that lacking AI governance increases costs and can lead to failed initiatives.
The gravity of the issue is evident in a Gartner survey: 55% of respondents say their organisation has not yet implemented an AI governance framework, though most of these have started developing one (40% of all respondents). Meanwhile, 46% of respondents say their organisation has implemented an AI governance framework, either as a dedicated framework (12%) or as an extension of other governance frameworks (34%).
Anand Ramamoorthy, APJ Community of Practice leader, DG&P at Informatica, comments that, just as the introduction of GDPR in 2018 and subsequent efforts by other countries sought to ensure businesses use data responsibly, striking a balance between innovation and responsible use of AI will be a key focus for governments across the globe.
“Hence traditional data governance and AI governance will have many synergies given the need for trusted, relevant and timely data to make AI efforts meaningful,” he opines.
AI governance and data privacy – two sides of the same coin
While some will suggest treating AI as just another new technology to integrate into an organisation's stack, FutureCIO believes its evolving nature makes the road to AI maturity a journey that requires guardrails over its entire lifetime.
Sicular observes that one of organisations' first governance efforts is ensuring compliance with relevant regulations. "Compliance involves privacy but includes all kinds of industry and legal regulations. Privacy is the top and most visible concern," she continues.
Kris Payne, director of technology solutions for APAC at Neo4j, opines that AI systems involve a higher degree of uncertainty and more ethical risk due to their dynamic, autonomous, and opaque decision-making processes.
He says AI engines rely heavily on large datasets, often including personal information, making data privacy crucial for responsible and ethical AI development. "Additionally, protecting sensitive information and preventing unauthorised access significantly reduces bias in AI systems," Payne continues.
"AI governance must encompass the ethical use of AI technologies, ensuring data privacy is at the core, fostering trust in AI systems."
Kris Payne
Simon Dale, Adobe's vice president for Asia, opines that AI governance and data privacy are deeply interconnected, especially for companies that handle vast amounts of user data.
The 2022 Data Privacy Benchmark Report by Cisco concurs, revealing that privacy has become a business imperative and a critical component of customer trust for organisations worldwide.
In the survey, 90% of the respondents said they would not buy from an organisation that does not adequately protect its data, and 92% of organisations said that respecting privacy is integral to their culture.
Dale argues that strong AI governance ensures responsible use by emphasising transparent data practices, regulatory compliance, and minimising bias. "This aligns with data privacy principles, which secure personal information through limited use and transparency. Both fields emphasise transparency: responsible AI empowers users to comprehend how data is used while data privacy fosters trust through transparency," he continues.
Tanya Pandey, a product manager at Kissflow, suggests imagining AI governance as a building structure. In that situation, she posits that data privacy is the steel framework that holds everything up.
"Without it, the entire structure collapses. By designing AI systems with privacy by design and by default principles, and leveraging technologies like differential privacy and anonymisation, we ensure legal compliance and user trust."
Tanya Pandey
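Pandey's mention of "privacy by design" techniques can be made concrete. The sketch below illustrates the Laplace mechanism, a textbook form of differential privacy for counting queries; it is not drawn from any vendor's product, and the `dp_count` helper, sample records, and epsilon values are hypothetical.

```python
import math
import random

def laplace_noise(scale: float) -> float:
    """Draw one sample from Laplace(0, scale) via inverse-CDF sampling."""
    u = random.random() - 0.5  # uniform on [-0.5, 0.5)
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def dp_count(records, predicate, epsilon: float = 1.0) -> float:
    """Count records matching `predicate`, with epsilon-differential privacy.

    A counting query changes by at most 1 when a single person is added or
    removed (sensitivity 1), so Laplace noise with scale 1/epsilon suffices.
    Smaller epsilon means more noise and stronger privacy.
    """
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

# Hypothetical usage: release an approximate count over sensitive ages.
ages = [25, 34, 41, 29, 52]
noisy = dp_count(ages, lambda a: a >= 30, epsilon=0.5)
```

The design point is that privacy is enforced at query time: the raw records never leave the trusted boundary, only the noisy aggregate does.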
Challenges to achieving data and AI governance
There is little to dispute claims that interest in AI is high: the Stanford University Artificial Intelligence Index Report estimates a fivefold increase in investments between 2016 and 2023. However, according to KPMG, some organisations remain reluctant to bring AI into production environments, due in part to a lack of effective global regulation.
Gartner's Sicular posits that the most significant challenge for AI governance is balancing value and risk. It is not about risk or value alone, she comments, but about both. "This challenge involves many efforts, such as risk mitigation, accelerating time to value and better utilisation of resources," she adds.
"A mistake many organisations make is having AI governance as a standalone initiative, while they should be extending existing governance and reusing the current successes achieved, for example, by data governance."
Svetlana Sicular
She suggests that such an approach is also better for employees familiar with the existing governance approaches, as many apply to AI—for example, data classification, standards, and communication practices.
"Although AI governance shares some capabilities with other business function governance bodies, it contains unique characteristics – trust, transparency and diversity. Each of these characteristics applies to people, data, and techniques," adds Sicular.
Neo4j's Payne cautions that the lack of transparency in AI operating models heightens the challenge of defining ethical standards and establishing clear policies to balance regulation and innovation.
For her part, Pandey believes that the most significant barrier to effective data and AI governance isn't just technical – it's organisational. The real pain point, she believes, lies in the deep-rooted silos that fragment enterprises.
Each department operates in its own bubble, she posits, with its own data standards and objectives, making a unified governance framework nearly impossible to implement. "This fragmentation breeds inefficiencies, inconsistencies, and ultimately, a lack of trust in AI systems," she asserts.
The AI governance team
The five executives contributing to this article agree that the development and oversight of AI governance cannot be left to one person.
Kissflow's Pandey says responsibility for AI governance should lie with a specialised, cross-functional team. She elaborates that this team should include AI engineers and data scientists who bring technical expertise, and ethicists who provide insights on moral implications. Compliance officers ensure adherence to laws and regulations, while strategic planners align AI initiatives with broader business goals.
Recognising that AI can be used for bad purposes, Pandey says having IT security and risk management representatives can help address potential vulnerabilities and ensure comprehensive governance.
She also suggests bringing external advisors or consultants who can provide an unbiased perspective and help navigate complex governance challenges.
Adobe's Dale goes further, suggesting that, if possible, user and community representatives can also ensure that AI systems meet diverse stakeholder needs and minimise adverse impacts.
"Together, this team can manage AI responsibly and ethically, balancing innovation with societal and regulatory considerations."
Simon Dale
Ramamoorthy believes that ownership of such frameworks should reside with the business leadership. “A cross-functional AI governance team that has representation from each of these groups led by a business executive can make AI efforts a success while ensuring compliance to the framework,” he concludes.
Recommendations for AI governance
There are many roads to a destination, especially one that is popular and relatively new. Such is the case with AI and AI governance.
Payne suggests focusing first on user trust and transparency around AI usage within the organisation, and leveraging technologies like Retrieval-Augmented Generation (RAG) powered by knowledge graphs for data accuracy and explainability to end users.
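Payne's recommendation can be sketched in miniature. The toy knowledge-graph triples, keyword retriever, and prompt template below are all hypothetical; a production system would query a real graph database (for example, with Cypher) and pass the assembled prompt to an LLM, which is omitted here. The point is the pattern: answers are grounded in curated, citable facts rather than the model's opaque internal knowledge.

```python
# Minimal RAG sketch over a toy knowledge graph of
# (subject, relation, object) triples. Illustrative only.
TRIPLES = [
    ("GDPR", "took_effect_in", "2018"),
    ("GDPR", "regulates", "personal data processing"),
    ("AI governance", "extends", "data governance"),
]

def retrieve(question: str, triples):
    """Naive retrieval: return triples whose subject appears in the question.
    A real system would use graph traversal or vector similarity instead."""
    q = question.lower()
    return [t for t in triples if t[0].lower() in q]

def build_prompt(question: str, facts):
    """Assemble a grounded prompt: the model is told to answer only from
    the retrieved facts, which is what enables explainability."""
    context = "\n".join(f"- {s} {r.replace('_', ' ')} {o}" for s, r, o in facts)
    return f"Answer using only these facts (cite them):\n{context}\n\nQuestion: {question}"

facts = retrieve("When did GDPR take effect?", TRIPLES)
prompt = build_prompt("When did GDPR take effect?", facts)
```

Because every statement in the prompt traces back to a named triple, an end user can be shown exactly which facts supported the answer.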
Sicular says it is essential to define AI governance: purpose drives intent, and intent drives outcomes. She is also adamant that AI governance efforts must be intentional, and advocates starting discussions about balancing AI value and risk early. Other recommendations:
- Collaborate with legal and compliance counterparts to decide jointly on the necessary steps by AI and governance teams.
- Define policies to concentrate on the most significant business impact while leaving creative freedom to AI teams.
- Assign decision rights proportionally to the progress of AI.
- Be transparent. Document facts and decisions about AI and the data used to train it for each project.