As we look ahead to 2025, the convergence of rapid technological advancements and evolving regulatory landscapes is poised to reshape how organisations operate and engage with their customers. In sectors such as financial services, telecommunications, and the public sector—each characterised by stringent regulations—the integration of artificial intelligence (AI) offers unprecedented opportunities and significant challenges.
Organisations increasingly leverage AI to enhance operational efficiency, personalise customer experiences, and drive innovation. However, the dual pressures of regulatory compliance and ethical considerations necessitate a robust governance framework. This is particularly critical in regulated industries, where the implications of AI decisions can affect business outcomes, public trust, and legal standing.
The future of AI governance will likely be characterised by a collaborative approach among enterprises, regulators, and industry peers. As organisations strive to remain competitive while adhering to emerging regulatory requirements, developing best practices for AI governance will be essential. Proactive engagement with regulatory bodies and the establishment of ethical standards will facilitate compliance and promote responsible AI use.
In this context, fostering a culture of responsible AI within organisations becomes paramount. Embedding ethical considerations throughout the AI lifecycle is crucial—from data collection and model training to deployment and monitoring. This integration ensures that AI initiatives drive business value and align with societal norms and regulatory expectations.

Anup Kumar, IBM’s chief technology officer for Data and AI, emphasises that while AI offers unprecedented opportunities for enhancing operational efficiency and personalising customer experiences, it also presents notable challenges surrounding regulatory compliance and ethical considerations.
“The integration of artificial intelligence offers unprecedented opportunities and significant challenges for all parties.” Anup Kumar
The importance of a robust governance framework
The dual pressures of regulatory compliance and ethical AI use necessitate a robust governance framework. This is particularly crucial in regulated industries, where AI decisions can affect business outcomes, public trust, and legal standing.
Organisations in Asia are increasingly aware of the need to integrate AI thoughtfully into their workflows, ensuring that they do so within a framework that upholds operational integrity and ethical standards.
“The implications of AI decisions can affect not only business outcomes but also public trust and legal standing,” notes Kumar.
Transformational changes in regulated industries
In the past few years, regulated industries in Asia have undergone significant transformation, driven by the need for faster service response times, cost optimisation, and improved go-to-market strategies.
Companies have recognised the importance of enhancing customer experience, responding to queries promptly, and delivering products and services more efficiently. This urgency for transformation has resulted in a noticeable acceleration in innovation cycles, with many organisations cutting product development times from six months to just two or three.
Kumar suggests that “if you are in a very much settled market, creating a new product or service that earlier took six months can now be done in two to three months.”
He further emphasises that focusing on customer experience, particularly through personalisation enabled by generative AI, will be paramount in the coming years.
Balancing innovation and compliance
As organisations embrace these technological advancements, they face critical questions about the outcomes of AI implementations. Concerns about reputational risk are paramount, particularly in scenarios where AI-driven personalisation may inadvertently reflect cultural biases or inaccuracies.
Kumar cautions that, amid the excitement around AI, companies must be equally vigilant in protecting their reputations: “Companies are very much trying to safeguard their reputations, and they are concerned about what generative AI can do in terms of harm as well.”
Collaboration with regulators is essential for organisations aiming to stay ahead of emerging regulatory requirements. Many governments in the Asia-Pacific region favour a framework-based approach rather than imposing strict regulations.
Kumar states, “Most of the stuff is more like a framework at the moment... the regulatory guys are trying to make sure that whoever is designing these AIs voluntarily take some steps.”
Trends in governance and regulation
Looking ahead, governments across Asia will continue to develop frameworks and guidelines for AI governance. These frameworks aim to mitigate risks while encouraging innovation. "Every government is going to bring some or other kind of framework and guidance... they don't want to be in a catch-up mode," says Kumar.
As technology evolves rapidly, regulators are cautious about imposing rigid compliance measures that may hinder progress. Instead, partnerships between the industry and regulators are crucial in developing mutually beneficial guidelines that protect users without stifling innovation.
Cultivating a culture of responsible AI
Kumar advocates a three-layered approach to cultivating a culture of responsible AI within organisations. At the top, an independent AI ethics board is vital for reviewing AI initiatives without bias toward immediate business outcomes.
"The idea about an independent AI ethics board is someone who can look at the long-term implications of using and building this AI." Anup Kumar
The second layer focuses on embedding AI governance into the daily operations of those developing AI solutions. In emphasising the importance of integrating ethical considerations from the outset, Kumar reiterates, “It is not about trying to govern something after building it. It is about right from the beginning.”
Finally, education and enablement at all levels are crucial. He remarks, “Enabling everyone... right from every consumer is going to help.” Just as pedestrians must follow traffic rules, users of AI systems need to be informed about best practices and potential risks associated with AI use.
Embracing the future of AI governance
As organisations in Asia’s regulated industries navigate the future of AI governance, they must adopt a mindset that prioritises questioning and critical thinking. Kumar advises, "The guardrails are essential... even if it is accurate, I still have to put them."
Establishing these guardrails for AI deployment, balancing benefits with risks, and fostering a culture of ethical AI are essential to ensure that technology enhances rather than undermines organisational integrity.
In 2025, the successful integration of AI in regulated industries will depend on clarity regarding business outcomes, careful selection of use cases, and a commitment to building trustworthy AI systems.
“Picking up the use cases and connecting that to the ROI is going to be fundamental when we are talking about the adoption of the generative AI,” concludes Kumar.
By focusing on these elements, organisations can navigate the complexities of AI governance and emerge as leaders in their fields.