Artificial intelligence (AI) governance encompasses the processes, standards, and safeguards that ensure AI systems are safe, ethical, and aligned with human rights. Effective governance frameworks guide AI research and application to mitigate risks such as bias, privacy violations, and misuse, while promoting innovation and trust.
This ethical approach necessitates the involvement of diverse stakeholders—including developers, users, policymakers, and ethicists—to ensure that AI systems reflect societal values. Given that AI is shaped by human-engineered code and machine learning, addressing the inherent biases and errors in its creation is crucial to preventing discrimination and harm.
It predicts that such efforts, if left to country-specific interests, will inevitably present significant operational and compliance challenges for firms focused on AI and AI-adjacent businesses, including digital services.
There are at least sixteen jurisdictions in Asia in various stages of developing AI governance frameworks, notes Goutama Bachtiar, director of IT Advisory at Grant Thornton Indonesia. He adds that most Asian countries have taken a light-touch, voluntary approach to AI regulation, aiming to embrace innovation and foster economic growth.
“Many jurisdictions are incorporating ethical principles into their AI governance frameworks, addressing issues such as transparency, fairness, and human-centric design, and numerous countries are integrating AI regulations with existing data protection laws, recognising the close relationship between AI and data privacy. What’s more, there is a growing trend towards collaboration on AI governance, as evidenced by joint statements between Japan and ASEAN.” Goutama Bachtiar
He observes that Southeast Asia takes a non-binding, voluntary approach to AI governance. “This means that organisations are not legally obligated to follow the guidelines or principles outlined, and participation relies on the goodwill of individuals, entities, or governments,” he continues.
“Governance is not just about compliance; it's about building trust and ensuring that AI serves the public good,” clarifies Bachtiar. This perspective underscores the pivotal role that governance plays in fostering confidence among stakeholders while navigating the complexities of AI technologies.
The players in AI governance
The development and adoption of any AI governance strategy and framework rests on the coordinated efforts of stakeholders, in this case technology and operations teams under leadership oversight.
A firm believer that the CIO should play a central role in AI governance, Bachtiar explains that the CIO bridges technology and business strategy. “Their key responsibilities include ensuring AI initiatives align with business objectives and the organisation's risk appetite, as well as executing and monitoring AI development and operations,” he elaborates. “They must implement an AI governance framework and collaborate with other executives to integrate AI governance into broader strategies,” he continues.
The CFO, from a bottom-line perspective, must ensure AI investments align with the organisation's financial strategy, objectives and goals. “Allocating human and non-human resources for AI governance initiatives, assessing financial risks and opportunities associated with AI implementation, and collaborating with other executives to evaluate the financial implications of AI governance decisions are the heartbeats of their roles,” adds Bachtiar.
He opines that the chief data officer manages data governance policies, ensuring data quality, security, and regulatory compliance for trustworthy AI, while the chief compliance officer develops AI compliance programmes, monitors systems for regulatory adherence, and aligns AI compliance with business strategies.
Governance tools and platforms
AI governance platforms are essential tools for organisations striving to implement governance frameworks effectively. These platforms serve as centralised systems for monitoring, managing, and reporting on AI initiatives. Bachtiar notes, “A well-designed governance platform can bridge the gap between policy and practice, enabling organisations to operationalise their governance strategies.” This bridging is crucial as it translates abstract principles into actionable steps that can be effectively managed.
Translating principles into action
One of the significant challenges in AI governance is translating high-level principles into practical applications. The paper from IMDA (Singapore's Infocomm Media Development Authority) outlines various dimensions of governance, including accountability, data management, and incident reporting.
Bachtiar believes that leveraging AI governance platforms ensures these dimensions are systematically addressed, creating a cohesive governance approach that organisations can follow.
Enhancing accountability
Accountability is at the heart of effective AI governance. Bachtiar emphasises, “Without accountability, AI systems can easily drift from their intended purpose, leading to potential misuse.”
“The way I see it, the impact of AI Governance frameworks can be effectively measured using a mix of quantitative and qualitative metrics—or even a blend of both.” Goutama Bachtiar
“We should monitor performance outcomes linked to AI initiatives using specific KPIs, such as model accuracy, inference speed, and efficiency gains,” says Bachtiar. He elaborates that compliance metrics include regulatory adherence and audit frequency.
“Operational excellence can be evaluated through cost savings from automation. Risk mitigation should track AI-related incident frequency and severity. Stakeholder trust hinges on feedback regarding AI transparency,” he adds. He stresses that continuous improvement requires regular reviews of governance metrics to align with evolving regulations and business objectives.
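The mix of quantitative metrics Bachtiar describes could be gathered into a simple governance scorecard. The sketch below is purely illustrative, with hypothetical field names and no connection to any particular platform, showing how KPIs such as model accuracy, inference speed, audit frequency, and incident counts might be recorded and reviewed together.

```python
from dataclasses import dataclass

@dataclass
class GovernanceMetrics:
    """Quantitative KPIs for a periodic AI governance review (illustrative fields only)."""
    model_accuracy: float        # performance outcome, e.g. validation accuracy 0.0-1.0
    inference_latency_ms: float  # inference speed
    audits_completed: int        # compliance metric: audit frequency
    incidents_reported: int      # risk metric: AI-related incident count
    incidents_resolved: int

def incident_resolution_rate(m: GovernanceMetrics) -> float:
    """Share of reported incidents that were resolved; 1.0 when none were reported."""
    if m.incidents_reported == 0:
        return 1.0
    return m.incidents_resolved / m.incidents_reported

# Example quarterly review
q = GovernanceMetrics(model_accuracy=0.93, inference_latency_ms=42.0,
                      audits_completed=2, incidents_reported=4, incidents_resolved=3)
print(f"Resolution rate: {incident_resolution_rate(q):.0%}")  # → 75%
```

Qualitative measures such as stakeholder trust would sit alongside this, but even a minimal structure like the above makes the regular reviews Bachtiar recommends easier to run consistently.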
Governance platforms enhance accountability by tracking decisions and actions taken throughout the AI lifecycle, providing clarity on who is responsible for what and fostering a culture of responsibility within organisations.
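One way such tracking might be operationalised is an append-only audit trail that records who decided what at each lifecycle stage. The sketch below is a hypothetical structure, not drawn from any specific governance product.

```python
from datetime import datetime, timezone

class AuditTrail:
    """Append-only log of governance decisions across the AI lifecycle."""
    def __init__(self):
        self._entries = []

    def record(self, stage: str, decision: str, owner: str) -> None:
        # Entries are only ever appended, never edited, preserving accountability.
        self._entries.append({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "stage": stage,        # e.g. "data-collection", "training", "deployment"
            "decision": decision,
            "owner": owner,        # who is responsible for this decision
        })

    def owners_for(self, stage: str) -> list[str]:
        """Answer 'who is responsible for what' for a given lifecycle stage."""
        return [e["owner"] for e in self._entries if e["stage"] == stage]

trail = AuditTrail()
trail.record("training", "approved dataset v2 after bias review", "chief data officer")
trail.record("deployment", "signed off on production rollout", "CIO")
print(trail.owners_for("deployment"))  # → ['CIO']
```

A real platform would add tamper-evidence and access controls, but the core idea is the same: every decision leaves a named, timestamped trace.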
Data management and integrity
Data serves as the foundation for AI systems, making its management critical to governance. Bachtiar stresses the importance of ensuring data quality and integrity, stating, “Organisations must be vigilant about the data they use; poor data can lead to poor outcomes.”
Governance platforms facilitate data management by offering tools for tracking data sources, ensuring compliance with privacy regulations, and maintaining high standards of data quality.
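A concrete example of the data-quality standards mentioned above is a completeness check: what fraction of records carry every field a policy requires. The function and threshold below are hypothetical illustrations, not part of any named framework.

```python
def completeness(records: list[dict], required_fields: list[str]) -> float:
    """Fraction of records with non-empty values for every required field.

    A simple data-quality gate: a governance policy might reject a dataset
    whose completeness falls below an agreed threshold.
    """
    if not records:
        return 0.0
    ok = sum(1 for r in records
             if all(r.get(f) not in (None, "") for f in required_fields))
    return ok / len(records)

data = [
    {"user_id": "a1", "consent": "granted", "source": "app"},
    {"user_id": "a2", "consent": "", "source": "web"},     # missing consent
    {"user_id": "a3", "consent": "granted", "source": "web"},
]
score = completeness(data, ["user_id", "consent", "source"])
print(f"{score:.0%}")  # → 67%
```

Checks like this make “poor data leads to poor outcomes” measurable rather than aspirational.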
Incident reporting mechanisms
Even the most robust AI systems are not infallible, making effective incident reporting mechanisms vital for continuous improvement. “Being able to report and learn from incidents is essential for any AI governance strategy,” Bachtiar asserts.
Governance platforms can streamline this process, enabling organisations to document incidents, analyse their causes, and implement necessary changes swiftly.
The importance of testing and assurance
Third-party testing and assurance are crucial components of AI governance. The IMDA paper highlights the need for independent verification of AI systems to build trust among users. Bachtiar states, “Independent testing not only validates the effectiveness of AI systems but also reassures stakeholders about their safety and reliability.” Governance platforms can facilitate this process by integrating testing protocols and managing relationships with third-party evaluators.
Addressing security concerns
As AI technologies evolve, so do the security risks associated with them. Bachtiar emphasises that security should be a priority, stating, “Security should be baked into the governance framework from the outset, rather than treated as an afterthought.” Governance platforms must incorporate robust security measures to protect AI systems from potential threats, ensuring that both the technology and the data it utilises are safeguarded.
Promoting ethical AI use
A successful AI governance platform does more than enforce rules; it fosters a culture of ethical AI use within organisations. Bachtiar believes that “training and awareness are critical; stakeholders need to understand the ethical implications of AI.” By embedding ethical considerations into the governance framework, organisations can promote responsible AI use that aligns with societal values.
Engaging diverse stakeholders
Engagement with stakeholders is vital for effective AI governance. The IMDA paper emphasises the importance of collaboration among policymakers, industry leaders, and the public. Bachtiar echoes this sentiment, stating, “Collaboration is key; we must bring diverse perspectives to the table to create comprehensive governance frameworks.” AI governance platforms facilitate these discussions, ensuring that all voices are considered in the governance process.
The future of AI governance platforms
As organisations increasingly adopt AI technologies, the significance of governance platforms will continue to grow. By translating governance principles into actionable frameworks, these platforms will help organisations navigate the complexities of AI effectively.
Bachtiar’s insights remind us that effective governance is not merely about compliance; it is about building a trusted ecosystem for AI that maximises its benefits while minimising its risks.
He opines that the future of AI governance relies on our ability to implement these frameworks thoughtfully and collaboratively. “AI governance platforms are not just a necessity; they are a pathway to ensuring that AI technologies are used responsibly for the public good,” says Bachtiar.
Recognising that regulations will evolve as governments, developers and users mature in their understanding of the technology, he reiterates the importance of proactivity throughout the lifecycle of AI governance.
“Staying informed about regulatory developments across jurisdictions is non-negotiable. Building robust AI governance frameworks, enhancing risk management practices, and strengthening cybersecurity measures for AI is essential.” Goutama Bachtiar
“Regular audits for algorithmic bias and fairness, alongside fostering a culture of responsible AI use and ethics, should be integral to their strategy. This isn’t just preparation—it’s futureproofing,” concludes Bachtiar.
Allan is Group Editor-in-Chief for CXOCIETY writing for FutureIoT, FutureCIO and FutureCFO. He supports content marketing engagements for CXOCIETY clients, as well as moderates senior-level discussions and speaks at events.
Previous Roles
He served as Group Editor-in-Chief for Questex Asia, concurrently with the role of Regional Content and Strategy Director.
He was the Director of Technology Practice at Hill+Knowlton in Hong Kong and Director of Client Services at EBA Communications.
He also served as Marketing Director for Asia at Hitachi Data Systems and as Country Sales Manager for HDS Philippines. Other sales roles include Encore Computer and First International Computer.
He was a Senior Industry Analyst at Dataquest (Gartner Group) covering IT Professional Services for Asia-Pacific.
He moved to Hong Kong as a Network Specialist and later MIS Manager at Imagineering/Tech Pacific.
He holds a Bachelor of Science in Electronics and Communications Engineering degree and is a certified PICK programmer.