Tue, 28 Apr 2026

PodChats for FutureCIO: Sovereign AI by design – more than about where data lives

AI governance in Southeast Asia is driven by rapid AI adoption, risk awareness, and the need for ethical deployment. The ASEAN Guide on AI Ethics and Governance (2024), supplemented by its 2025 Generative AI expansion, provides holistic frameworks that influence national policies, blending voluntary soft law with baselines inspired by EU and OECD standards.

Singapore leads with its Model AI Governance Framework and AI Verify toolkit; Vietnam’s risk-based 2026 AI Law enforces human oversight for generative systems; Indonesia and Thailand favour sectoral regulations. Persistent challenges include enforcement gaps and geopolitical tensions.

Sovereign AI—a nation’s or organisation’s ability to independently control, develop, deploy, and govern its full AI stack (infrastructure, data, models, operations)—has become essential for compliance, data protection, reduced foreign dependency, and alignment with local values and security.

As regulations mature, CIOs must prove end-to-end control: ownership of the control plane, transparent governance, and auditable decisions. This drives “sovereign-by-design” architectures embedding jurisdictional oversight across data, models, inference, and logs—without sacrificing performance or innovation.

Accenture’s 2026 research shows over 60% of APAC enterprises (64% in Southeast Asia) plan to increase sovereign AI and cloud investments, driven by compliance, security, national autonomy, and governance—yet only ~25% extend sovereignty to AI models, underscoring the shift to deeper control.

In Hong Kong and Southeast Asia, fragmented rules and geopolitical pressures favour hybrid models: local control-plane ownership paired with secure access to frontier models. This “more than where data lives” approach—prioritising ownership, observability, policy-as-code enforcement, and modular runtimes—builds resilient, trusted systems that accelerate regulatory approvals and board support.

Gartner predicts sovereign AI acceleration in 2026, with nations investing in region-specific platforms to build trust, align with local cultures, and reduce lock-in risks. For CIOs, 2026 marks the year sovereign-by-design becomes the foundation for competitive advantage in Asia’s evolving AI landscape.

Owning the full control plane

Chris Wolf, global head of AI for VMware, cuts straight to the point: “It’s easy to say, just hold on to your data, but it’s more than the data plane.”

True AI sovereignty means owning the control plane—encryption keys, identity management, audit logs, and the ability to redeploy everything if the law changes. Wolf notes that, unlike hyperscalers that offer only data residency while retaining control-plane ownership themselves, VMware Cloud Foundation (VCF) gives organisations direct ownership. “You can run fully disconnected from the internet and still operate,” he asserts.

This distinction matters in Southeast Asia and Hong Kong, where 64% of enterprises plan major sovereign AI investments in the next two years, yet only 25% currently extend controls to models. A hybrid sovereignty model is preferred by 57% of APAC leaders.

Governance that builds trust

AI “factories” that merely generate tokens oversimplify the challenge. As Wolf explains, “it’s also about how do I keep these AI agents and work that the AI systems are doing – how do I ensure it’s auditable? How do I ensure that it’s reliable – that if there’s a failure in an agent transaction execution, the agent is not repeating the same transaction? How am I providing security logs?”

Boards and regulators need consistent platforms with built-in observability, tracing, and security logs. Homegrown bolt-on solutions require costly certification; purpose-built inference platforms avoid that trap.

Controls baked in, not bolted on

Wolf contrasts simplistic token-output systems with architectures offering observability, tracing, resiliency, and identity passing out of the box. “When you buy a system that’s only capable of producing AI tokens, this means that you have to go and literally recreate all of these other services that you would require to pass a compliance audit,” he notes.

“A cleaner approach would be to leverage systems that already have these capabilities built in, but at the same time are using industry standard or open-source frameworks by which to interact with those systems so that you don’t have any type of lock-in.” Chris Wolf

Smart routing for balance and safety

Public frontier models excel at vague prompts that require deep reasoning, but private data and specific tasks belong in sovereign environments. Agentic routing and Model Context Protocol (MCP) gateways decide in real time—localising work to cut egress costs, protect sensitive data, and enforce licensing.

Wolf highlights: “If I have a fairly vague prompt and I need a lot of iterative reasoning to make sense of it; I’m probably going to need to use a frontier model… But if I have a very specific prompt then I can redirect that traffic to the private cloud.”
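In code, that routing decision can be sketched in a few lines of policy. The following Python sketch is illustrative only: the endpoint URLs, sensitivity markers, and length heuristic are hypothetical stand-ins for the classifications an MCP gateway or agentic router would actually evaluate, not VMware’s implementation.

```python
# Hypothetical agentic routing sketch: sensitive or specific prompts stay in
# the sovereign environment; vague prompts needing deep reasoning go to a
# frontier model. Both endpoints are assumed to be OpenAI-compatible.

SENSITIVE_MARKERS = {"patient", "account_number", "nric", "salary"}

PRIVATE_ENDPOINT = "https://llm.internal.example/v1"    # sovereign vLLM
FRONTIER_ENDPOINT = "https://api.frontier.example/v1"   # public frontier model

def route_prompt(prompt: str) -> str:
    """Return the base URL this request should be sent to."""
    tokens = set(prompt.lower().split())
    if tokens & SENSITIVE_MARKERS:
        # Sensitive data never leaves the jurisdiction.
        return PRIVATE_ENDPOINT
    if len(prompt.split()) < 12:
        # Short, specific prompts are served locally to cut egress costs.
        return PRIVATE_ENDPOINT
    # Vague prompts that need iterative reasoning go to the frontier model.
    return FRONTIER_ENDPOINT
```

A production gateway would also consult data classifications and licensing terms before letting a prompt leave the jurisdiction.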

This balances regulatory risk, cost, performance, and speed. Singapore’s Singtel-Nvidia Centre of Excellence for Applied AI (launched February 2026) and Hong Kong’s 2026 AI R&D push, including the new Hong Kong AI Research and Development Institute, reflect the same hybrid momentum.

Resiliency, tracing, and auditable agent flows

Non-deterministic AI agents create backdoor risks if identity tokens cascade unchecked. Wolf warns: “You can inadvertently create backdoors for data access if you’re not paying attention. You need tracing, so that I can understand the entire workflow by which agents call each other and make sure that that is logged and fully auditable.”

Service meshes track transactional state, session reliability, and full agent call chains.

These metrics—plus high-availability inference—provide regulators with evidence that systems comply with local AI laws.
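The behaviour Wolf describes—an agent that never repeats a failed or retried transaction, with every hop logged—can be sketched with an idempotency key plus an append-only trace. The class and method names here are hypothetical illustrations, not a VMware API.

```python
# Illustrative sketch: each agent-to-agent call carries an idempotency key so
# a retry never re-executes the transaction, and every hop is appended to an
# auditable trace of the full call chain.
import uuid

class AgentAuditLog:
    """Tracks agent calls with idempotency and a full audit trail."""

    def __init__(self):
        self.completed = {}   # idempotency key -> stored result
        self.trace = []       # (caller, callee, key) tuples, append-only

    def call(self, caller, callee, fn, key=None):
        key = key or str(uuid.uuid4())
        self.trace.append((caller, callee, key))   # every hop is logged
        if key in self.completed:
            # Retried transaction: return the prior result, don't re-execute.
            return self.completed[key]
        result = fn()
        self.completed[key] = result
        return result
```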

Policy-as-code enforcement

IT operations sit at the centre. LangGraph helps developers add human-in-the-loop checks, but MCP gateways and service meshes enforce read-only access and prevent accidental deletes, as Wolf illustrates with a real-world email agent mishap.
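As a minimal illustration of policy-as-code at an MCP-gateway-style layer, a declarative allow/deny table can gate every tool call before it runs, with unknown tools denied by default. The tool names and policy format below are assumptions made for the sketch.

```python
# Hedged sketch of a policy-as-code gate: agents may read and search mail,
# but destructive operations are blocked at the gateway, not in the app.
READ_ONLY_POLICY = {
    "email.read": "allow",
    "email.search": "allow",
    "email.delete": "deny",   # accidental deletes stopped before execution
    "email.send": "deny",
}

def enforce(tool: str) -> bool:
    """Return True if the tool call may proceed; default-deny unknown tools."""
    return READ_ONLY_POLICY.get(tool, "deny") == "allow"
```

Because the policy lives outside application code, tightening it per jurisdiction is a configuration change rather than a rebuild.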

VMware’s participation in CNCF Kubernetes and Certified Kubernetes AI Conformance ensures policy engines integrate cleanly without operational drag.

Adaptive guardrails across jurisdictions

Policy-aware APIs, output filters, and human checkpoints must flex without requiring app rebuilds per market. Open-source runtimes like vLLM provide accelerator interoperability; Kubernetes, together with workflow runtimes such as Dapr and Temporal, delivers resiliency for non-deterministic flows.

Projects such as Portkey handle agentic routing. The result: one runtime adapts to changing rules across Hong Kong’s voluntary AI frameworks and ASEAN’s risk-based guidelines.

Evidence that regulators accept

VMware separates “control plane in country, data plane hybrid” cleanly. “We provide not just control plane residency, but control plane ownership. You can run your data plane and control plane completely disconnected from the internet,” says Wolf.

Regulators see ownership of identity, encryption, and tracing inside the private cloud—even while calling external frontier models via API keys. Hyperscalers cannot offer the same; VCF supports fully air-gapped operation. Google Distributed Cloud’s local Gemini models complement this, enabling disconnected frontier capabilities in the jurisdiction.

Localising foundation models

Foundation models are moving on-premises. Google Distributed Cloud runs smaller Flash and Gemini Pro models locally; VMware partners with providers to host them on VCF.

This delivers cultural/linguistic alignment—such as Singapore’s SEA-LION family for Southeast Asian languages and Malaysia’s ILMU multimodal model—while retaining a common core and meeting the demands of regulators in Vietnam and Indonesia.

Modular stacks survive regulatory shifts

AI moves too fast for monolithic designs. Wolf’s team built optionality from day one: vLLM for runtime, OpenAI-compatible APIs, native CLIs. “If you even built an OpenAI-compatible service anywhere else, all you have to do is change the URL, and it’s just gonna run, just like it always has.” When a jurisdiction updates AI laws (as Vietnam did in March 2026), workloads are rerouted or decommissioned without downtime.
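The portability Wolf describes rests on a single configuration value: the inference base URL. A minimal sketch, assuming placeholder endpoints, shows how a jurisdiction change becomes a config update rather than a code change.

```python
# Illustrative only: endpoint URLs and the environment variable name are
# hypothetical. Any OpenAI-compatible service can be targeted by swapping
# the base URL, leaving application code untouched.
import os

def resolve_base_url() -> str:
    """Pick the inference endpoint from config; the code path never changes."""
    return os.environ.get(
        "LLM_BASE_URL",
        "https://vllm.private.example/v1",   # sovereign vLLM default
    )

# When a jurisdiction updates its AI laws, rerouting is one config change:
# os.environ["LLM_BASE_URL"] = "https://gdc.local.example/v1"
```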

CIO Playbook 2026: Two pieces of advice that matter most

Wolf distils it:

  1. Localise and own your control plane—maximum flexibility, no matter what happens geopolitically.
  2. Never take AI inference for granted. Research-oriented “AI factories” were never built for production at scale; bolt-on governance creates compliance risk. Choose platforms intentionally designed for inference, lifecycle management, availability, and auditability.

Supporting the shift across borders

Accenture research confirms APAC’s pivot: sovereign AI is now mainstream, with data governance as the “quiet superpower” behind production AI. Vietnam’s comprehensive AI law (effective March 2026) and Hong Kong’s Fintech 2030 AI roadmap set the pace.

Enterprises adopting hybrid control planes and open standards gain faster regulatory approval and a competitive edge.

Wolf’s message is clear: “Sovereign AI by design is no longer optional—it is the architecture that turns regulatory pressure into unbreakable trust and enduring advantage.”

Click on the PodChats player to hear approaches for futureproofing your AI development strategy at the code level.
