The digital transformation conversation has irrevocably shifted. Today, the C-suite is no longer talking about adopting technology but about embedding intelligence into the very core of the enterprise. This was the central theme of the recent executive exchange event, organised by Cxociety and hosted by Red Hat Philippines.
The open source foundation for democratising AI

Prem Pavan, VP & GM, Southeast Asia & Korea (SEAK), Red Hat, opened the event by framing the company’s decades-long philosophy as the necessary bedrock for the current AI wave. He filtered “all the noise” from the last 30-35 years of industry experience down to two core contributions.
The first was taking “what was considered open source technology... and made it enterprise-ready.” For Pavan, open source represents “innovation and creativity and collaboration and transparency and openness”.
The second was bringing “something that is very, very, very relevant” to the market, citing the evolution of Red Hat Linux into the operating system of choice for critical systems and OpenShift as the platform that validated the hybrid cloud/multi-cloud model.
Pavan highlighted the urgency of AI adoption in the ASEAN region, citing a study projecting that AI could contribute 10-18% of the region’s GDP by 2030, with the top six ASEAN countries, including the Philippines, expected to add around US$120 billion to the overall economy.
However, this optimism is tempered by a stark reality: 89% of ASEAN companies acknowledge AI’s potential, but only 17% report “a clear AI strategy, an organisation-wide strategy”. Furthermore, only 22% of these companies are “really able to measure tangibly how they monetise the benefits of AI.”
Decoding the megatrend of AI sovereignty

Vincent Caldeira, CTO, APAC, Red Hat, echoed Pavan’s points and dove deeper into what he calls the “mega trend” of sovereignty. He stated, “AI now is also a strategic imperative at the national level, and most countries across APAC have built what we call a national AI policy as a result of that.” Caldeira introduced three new pillars to the concept of sovereignty beyond the familiar “data sovereignty”:
- Security and trust: Citing the “black box” nature of generative AI, Caldeira asked, “When we get an output, we have absolutely no clue where this output is coming from.” For critical business decisions, such as loan approval or network optimisation, understanding “what’s under the hood” is vital.
- Control and choice: This dimension involves avoiding vendor lock-in and dependencies. Caldeira gave the example of a banking customer in Australia that struggled to obtain GPU capacity quickly because a larger customer was being prioritised. This highlights the need to “build an effective way to provide this GPU capacity on a local basis.”
- Local operation and ecosystem: This is about developing local talent and capability rather than always relying on “foreign technology.”
Caldeira argued that “open source [is the] only way to go to get innovation at scale.” He emphasised that transparent, open source technologies allow organisations to audit the code, control the supply chain, and retain choice, meaning an AI workload can run on one cloud provider today and on private infrastructure tomorrow using the same technology stack.
Panel insights: From experiment to enterprise scale

The panel discussion brought together Caldeira with industry leaders Carlos Tengkiat, CISO of Rizal Commercial Banking Corporation (RCBC), and Mark Santiago, head of IT Risk Management for one of the Philippines’ licensed digital banks as of 2025. The discussion centred on the practical challenges of deploying and governing AI at scale.
1. Biggest AI investments and early focus
When asked about the two biggest AI investments in banking, Santiago, representing a three-year-old digital bank, noted they are still in an “exploratory phase.” He stressed that the primary investment is in data quality. Operating without physical branches, digital banks rely on third-party vendors for internal models, making a “policy and model risk management framework” to govern vendor use a core investment.

Tengkiat of RCBC took a different approach, viewing AI not as a specific investment but as an “option that you have for your solution.” He recognised that while their data might be “not that clean,” it is “still golden data…” RCBC’s focus is on how AI can “engage new clients” and optimise the effectiveness of existing business units.
Caldeira observed a marketplace shift. Initial “AI experiment” budgets are giving way to a focus on fundamental business principles. The industry’s initial AI focus, he said, has been on “back office efficiency, like internal process type of use case, because they are easier to really put constraints around.”
He cited the “very well publicised use case of chatbots” that led to airlines having to honour invented “refund policies” as a cautionary tale against starting with customer-facing systems. He highlighted IT operations as an early beneficiary, using AI to manage system issues and perform “preventive maintenance.”
2. Securing data in the AI era
The discussion raised the critical question of how to maintain speed and security while honouring data localisation preferences. Caldeira introduced a “third exposure” for data security: data in use. Previously, the focus was on data at rest and data in transit.
He argued that the AI model itself needs protection, as “people can somehow get the data out of it” while it is running. Technologies like confidential computing are emerging to “provide a sandbox around the data or around the process so that you can keep it under control.”
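To make the three exposure states concrete, here is a minimal Python sketch. It assumes the third-party cryptography package; the record and variable names are illustrative, and the closing comment marks the gap confidential computing aims to close rather than implementing it.

```python
# Minimal sketch of the three data exposure states discussed above.
# Assumes the third-party 'cryptography' package (pip install cryptography);
# names are illustrative, not a Red Hat or confidential-computing API.
from cryptography.fernet import Fernet

key = Fernet.generate_key()
cipher = Fernet(key)

# Data at rest: encrypted before it ever touches disk.
record = b"customer_id=42, loan_approved=true"
at_rest = cipher.encrypt(record)

# Data in transit: in practice TLS protects this hop; here the same
# ciphertext stands in for an encrypted channel.
in_transit = at_rest

# Data in use: the moment a model or process needs the value, it is
# decrypted into plain memory. This is the "third exposure".
in_use = cipher.decrypt(in_transit)
print(in_use.decode())

# Confidential computing (hardware TEEs) aims to keep even this in-memory
# plaintext inside a sealed sandbox, as Caldeira describes.
```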
Red Hat’s Caldeira stressed the need for a “consistent way of managing your IT stack, irrespective of the infrastructure provider.” He framed this as establishing “policy as code” that governs data without being tied to any specific infrastructure, thereby guarding against the human error that inevitably leads to compromise.
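What “policy as code” can look like in practice is sketched below: the rule about where each class of data may live is declared once, versioned like any other code, and enforced identically on every infrastructure. The data classes and location names are hypothetical illustrations, not Red Hat tooling.

```python
# Hypothetical "policy as code" sketch: placement rules live in version
# control and are enforced the same way on any infrastructure provider.
POLICY = {
    # data class       locations the policy allows it to live in
    "customer_pii":  {"onprem_ph"},                  # must stay local
    "app_telemetry": {"onprem_ph", "public_cloud"},  # free to move
}

def placement_allowed(data_class: str, target: str) -> bool:
    """True only if the declared policy permits this data class at target."""
    return target in POLICY.get(data_class, set())

# A CI gate or admission controller runs these checks, not a human.
assert placement_allowed("app_telemetry", "public_cloud")
assert not placement_allowed("customer_pii", "public_cloud")
```

Because the rule is data rather than a runbook, the same check can run in a CI pipeline, an admission controller, or at query time, which is what takes the error-prone human step out of the loop.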
He also warned that “there is too much investment going into AI, but it is not going into data.” He noted that overlaying GenAI on top of 20-year-old data warehouses with lousy data will only lead to “rubbish outputs.”
Santiago called the security-versus-speed question “ironic.” He pointed out that while keeping data local requires managing and maintaining a physical data centre, choosing the cloud automatically provides resiliency and speed.
However, the key challenge remains “data localisation.” His solution is to work with compliance and governance teams to “identify which data can go into the cloud” and which data must strictly be stored locally.
RCBC’s Tengkiat agreed, emphasising that organisations must not only have a strategy for using the cloud but also for migrating out of it.
3. Scaling beyond the pilot
As organisations move AI “beyond the pilot,” the biggest challenges shift from technology to people and processes.

Santiago identified three primary challenges:
- Data quality: Moving from “curated data” in a pilot to the “dirty unstructured data” of a Business-as-Usual (BAU) environment means the change “will start on... making sure you govern your data.”
- Ownership: In a pilot, ownership is often ambiguous. In BAU, clear ownership is vital to avoid issues and “finger-pointing” when something goes wrong.
- AI governance: BAU requires robust governance, including “identity, access management, incident management, SME, data integrity checks.”
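As one concrete instance of the controls Santiago lists, the sketch below shows a minimal data integrity check: a dataset’s fingerprint is recorded when it is approved, then re-verified before the data feeds a model. The manifest and file names are hypothetical.

```python
# Minimal data integrity check: fail closed if an approved dataset has
# changed since sign-off. All file names here are hypothetical.
import hashlib
import json
import pathlib

MANIFEST = pathlib.Path("approved_datasets.json")  # {"loans.csv": "<sha256>"}

def fingerprint(path: pathlib.Path) -> str:
    """SHA-256 digest of the file's bytes, recorded at approval time."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def verify(path: pathlib.Path) -> bool:
    """True only if the dataset still matches its approved fingerprint."""
    approved = json.loads(MANIFEST.read_text())
    return approved.get(path.name) == fingerprint(path)

# A BAU pipeline would call verify() before training or retrieval, and
# raise an incident, routed to a named owner, on any mismatch.
```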
For Tengkiat, the single biggest challenge is people. RCBC’s approach was first to ensure employees’ “mindset is very high” by establishing clear usage guidelines.
Caldeira concurred that it “always starts with people.” He shared Red Hat’s internal example: instead of a single mandatory course, Red Hat gave engineers dedicated time to “use this time to learn and to think how you can use this technology to change the way you work and do some experiments of your own.”
4. Practical guardrails for safe speed
In closing, the panellists offered practical “guardrails” to allow teams to move fast while remaining compliant and ethical.
Tengkiat advised anchoring on existing frameworks: the task is to “build your AI management system on top” of them, customising it for the organisation.
Santiago advocated for acceptable use policies on the cybersecurity side, which allow employees to use AI while providing the necessary “guardrails... to protect your company from misuse.”
Caldeira’s advice was to start with telemetry. It is “very important… to understand every single interaction with the model, between the agent and the model with your API.” Implementing consistent telemetry across the development and production lifecycle provides the data needed to “understand when things really go wrong.”
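A minimal sketch of that advice, assuming nothing about the model vendor: every call goes through a single wrapper that emits one structured event per interaction, capturing prompt, response, latency and any error. call_model here is a hypothetical stand-in for a real client.

```python
# Hypothetical telemetry wrapper: one structured event per model call,
# identical in development and production.
import json
import logging
import time
import uuid

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai.telemetry")

def with_telemetry(model_fn):
    """Wrap any model call so every interaction is recorded."""
    def wrapped(prompt: str, **kwargs):
        event = {"id": str(uuid.uuid4()), "prompt": prompt}
        start = time.monotonic()
        try:
            event["response"] = response = model_fn(prompt, **kwargs)
            return response
        except Exception as exc:
            event["error"] = repr(exc)  # the trail for "when things go wrong"
            raise
        finally:
            event["latency_s"] = round(time.monotonic() - start, 3)
            log.info(json.dumps(event))
    return wrapped

@with_telemetry
def call_model(prompt: str) -> str:
    return "stub response"  # swap in a real model client here

call_model("Summarise the loan application.")
```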
The way forward
The insights shared at the executive exchange underscore a strong recognition that becoming an AI-native enterprise is a strategic national and competitive imperative. It is also not something that can be achieved through a technology upgrade alone.
The urgency is palpable, with AI projected to contribute significantly to ASEAN’s GDP, yet the vast majority of organisations still lack a clear, measurable monetisation and governance strategy. The core message is that success in this new era relies on two interconnected pillars: openness and control.
Ultimately, the technical foundation must be matched by operational rigour. The transition from pilot to enterprise scale introduces critical challenges around data quality, the complexity of securing data in use within the model itself, and the need to enforce clear ownership and governance frameworks.
