Globally, AI adoption in healthcare is accelerating rapidly, driven by generative and agentic AI integration into clinical workflows, diagnostics, and administrative tasks. In the Asia-Pacific region, trends emphasise trust-building through governance, with AI evolving from hype to accountable, clinician-aligned systems.
Agentic AI investments are rising, with 75% of providers expecting superior productivity gains over non-agentic generative AI. Singapore’s AI sandbox for medical software exemptions exemplifies regulatory support that enables broader scaling in public healthcare. East Asia mirrors this momentum, with China leading in agentic AI for clinical tasks and multimodal models integrating imaging and traditional medicine data.
AI's maturing role in healthcare
As healthcare enters a new maturity curve, AI-driven transformations are concentrating investment in administrative efficiency and provider operations. Frost & Sullivan notes funding rebounding towards enterprise infrastructure and hybrid virtual care, potentially easing clinician burdens and enhancing care quality.
Gartner predicts agentic AI will redefine the sector by 2026, evolving workflows and experiences while urging CIOs to prepare for its promises and perils, including workforce impacts. The Kellton blog highlights generative AI’s role in real-time summarisation and personalised treatments, alongside agentic AI’s autonomous execution for preventive care and drug discovery by 2030, with 2026 marking accelerated adoption in hyper-personalised and proactive models. The World Economic Forum underscores AI’s transformative potential in diagnostics, early disease detection, and administrative relief, projecting benefits amid workforce shortages, though challenges like trust and regulation persist into 2026.
The state of AI policy in Thailand
Thailand’s AI regulatory landscape is advancing towards maturity, drawing on global standards while addressing local needs.

“Thailand has been drafting AI bills that were circulated last year for consultation, and the expectation is that the new AI law will come out towards the end of this year – it’s not guaranteed but that’s the expectation,” explains James McLeary, chief information officer and chief information security officer at Bumrungrad Hospital in Thailand.
He anticipates the legislation to focus on “transparency, risk assessment, making sure that the ethical aspects are considered as part of AI deployments.” This alignment with frameworks like the European Union’s (EU) AI Act indicates Thailand’s commitment to ethical governance and responsible innovation.
Compliance challenges with emerging regulations
Organisations may encounter difficulties adapting to new AI rules, although enforcement patterns could ease the transition. Reflecting on the Personal Data Protection Act (PDPA), McLeary recalls that after its initial enforcement there was limited action on violations.
The PDPA was modelled on the EU's GDPR. "Some people thought it wasn't going to be enforced, leading some to cut corners. But then last year (2025) we saw the data privacy regulator handing out fines for the first time around data privacy breaches," he recalls.
He envisions a comparable pattern for AI: “I imagine that may be similar with the new AI regulations. Okay, the regulation is there but how is it being enforced practically.” This view underscores the importance of proactive compliance planning rather than assuming early leniency will persist.
Accelerating AI adoption in healthcare
"Healthcare is interestingly one of the early adopter of AI," comments McLeary. "There’s a lot of use cases in healthcare which go back several years. For example, AI has been used in image analysis for many years, for example in x-rays radiologists assisting the doctors to be able to detect potential issues with the x-ray image."
He believes healthcare retains its lead: "I think healthcare is quite ahead of the curve in terms of adoption, and that’s going to ramp up significantly. I’m already seeing a fast pace of adoption."
A high-value application addresses administrative burden. "If you go to a doctor today, it’s very often that the doctor will not be looking at you. He or she will be looking at their keyboard because they’re typing the notes of what they’ve asked you and what they’ve said," McLeary explains.
AI tools can now automatically record consultations and generate structured notes in the electronic medical record, which doctors can later review, query or prompt. "Those types of use cases are going to add tremendous value," he adds.
Caution remains critical in clinical decision-making. "Where it’s a little bit more cautious in healthcare is anything to do with automatic decision making by the AI tool. We do not want to cross the boundary of AI making a clinical decision on behalf of the doctor because that enters a whole legal territory," McLeary warns.
"AI may be used to augment the (diagnosis) process but it needs to be the doctor that presses the button and says this is the decision. The doctor is still accountable. It’s still human accountability," he stresses.
"It has to be a human making the decision at the end of the day, and I think that is still going to be the case for other industries as well. No matter what industry, I think that's going to prevail for hopefully the time being." James McLeary
Ensuring PDPA compliance in AI operations
Integrating AI with Thailand's PDPA demands careful consent management and data protection practices.
"The consent piece cannot be just a uniform consent applied to all; it needs to have a lawful basis attached to it,” McLeary explains. In healthcare this means separating general data privacy consent from specific clinical consent required for procedures or patient information collection.
For AI training and use, he stresses: "It's important that the AI models that are accessing data have the right level of lawful-basis consent attached – that we are not using data to train the model that doesn't have that consent applied." This principle extends to data minimisation and de-identification, so that AI tools receive only the limited, necessary data.
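The consent-gating and minimisation McLeary describes can be sketched in code. This is a hypothetical illustration, not any real hospital system: the record fields, the `consent_ai_training` flag, and the function name are all assumptions made for the example.

```python
# Hypothetical sketch: gate AI training data on a specific lawful-basis
# consent flag, and minimise fields so the model never sees identifiers.
# All field names here are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class PatientRecord:
    patient_id: str
    name: str
    diagnosis_code: str
    consent_ai_training: bool  # specific consent for model training, not a blanket consent

def prepare_training_data(records):
    """Keep only consented records, and only the minimal fields needed."""
    return [
        {"diagnosis_code": r.diagnosis_code}  # drop direct identifiers entirely
        for r in records
        if r.consent_ai_training
    ]

records = [
    PatientRecord("P1", "Alice", "J45", consent_ai_training=True),
    PatientRecord("P2", "Bob", "E11", consent_ai_training=False),
]
training_set = prepare_training_data(records)  # only P1's minimised record survives
```

The key design point mirrors the quote: the filter runs before the model pipeline, so unconsented data never reaches training at all, rather than being masked afterwards.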
Anonymisation remains challenging: "I wouldn't say it's easy because a lot of the medical devices that are used in hospitals are quite old technologies and very proprietary to a specific vendor."
Outputs frequently contain personally identifiable information (PII), so hospitals must depend on vendors for anonymisation solutions.
"It was never something that needed to be considered when a lot of these devices were first implemented," McLeary notes. "But now we see the need to be integrated into the IT network… and as CIO I need to keep ahead of any possible data breach that could be coming from those types of machines."
Structuring AI vendor contracts for liability
With Thailand’s forthcoming AI law expected to hold data controllers responsible, vendor contracts must include strong safeguards. "For AI contracts it’s important that we have those principles of data privacy – the data controller and data processor roles. In these cases, we (the healthcare provider) are the data controller, so the liability stays with us. We cannot outsource the risk," McLeary emphasises.
Key provisions should cover: "what are the controls expected to be implemented. For example, we should prohibit the AI vendor from reusing any customer data in their own training models. By default, we should have in the (AI) contract clear statements around how those controls are going to be evidenced by the AI provider. Then when it comes to exit rights, if we find that there is some breach or performance issues with the AI vendor, how can we exit in a way that is secure and in a way that we know that our data is not going to be further exposed as part of that exit strategy."
Managing cross-border data flows
Balancing localisation requirements with regional compute needs is an emerging priority.
"To be honest, for me this is an emerging topic – something that I'm trying to get the foundations in place now because up until now the data privacy laws have allowed Thailand to have cross-border data transfers. There is no strict data sovereignty requirement so long as the receiving country has similar standards as Thailand," McLeary explains. "But I think this is going to tighten over the coming years, and maybe the new AI law will have aspects of this."
His preferred approach is to keep PII local wherever feasible. "As a first step, can we ensure all the PII data is residing in Thailand," he says.
For compute-intensive AI workloads he favours federated models: "There are ways now that we can federate it so that the identifiable information remains local, we send the analytics data, the telemetry data to Singapore for the compute. We’re building that into policies so that we avoid the scenario where we’re sending PII data to other regions."
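The split McLeary describes can be sketched as follows: identifiers stay in-country, and only de-identified telemetry travels to the regional compute site, joined by a pseudonymous token. This is a minimal illustration under assumed field names and a toy salt, not a description of Bumrungrad's actual architecture.

```python
# Hypothetical sketch of the "PII stays local" federation split:
# direct identifiers remain in Thailand; only de-identified analytics
# and telemetry are exported for compute. Field names are illustrative.
import hashlib

PII_FIELDS = {"name", "national_id", "address"}

def split_record(record: dict, salt: str = "local-only-salt"):
    """Return (local_pii, exportable_telemetry) from one raw record."""
    # Salted hash as a pseudonymous join key; the salt never leaves the local site.
    token = hashlib.sha256((salt + record["national_id"]).encode()).hexdigest()[:12]
    local_pii = {k: v for k, v in record.items() if k in PII_FIELDS}
    local_pii["token"] = token  # re-identification key, kept in Thailand
    telemetry = {k: v for k, v in record.items() if k not in PII_FIELDS}
    telemetry["token"] = token  # lets the compute site return results by token
    return local_pii, telemetry

raw = {"name": "Somchai", "national_id": "1234567890123",
       "address": "Bangkok", "scan_duration_s": 41.2, "model_score": 0.87}
local, export = split_record(raw)
# `export` carries only telemetry plus the token, safe to send abroad
```

In a real deployment the salt management, token lifecycle, and what counts as PII would be set by policy, which is the policy work McLeary says he is building now.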
Learning from global regulatory frameworks
Thailand can draw from established models as its regulations evolve.
"The EU AI act was the first and most comprehensive and is the one that Thailand is mirroring for its own requirements. I’m also cyber security guy, so I am very comfortable with the NIST. When NIST came out with their AI risk management framework, I immediately pivoted to that one because I can correlate against the NIST CSF framework. The other one from this region would be the Singapore AI model given the ASEAN hub component," McLeary recommends.
Guidance for Thai leadership in AI adoption
McLeary advocates treating AI governance in the same structured way as a cybersecurity programme.
It is an ongoing exercise, he elaborates: "I think treating it (AI) like it’s a cyber security governance program – and by that, I mean if we look at all the elements of a cyber program, they do apply to the AI deployment."
He illustrates the approach by walking through key cybersecurity domains and their direct parallels in AI:
Asset inventory and visibility: "If we take identify for example, do I have an inventory of all the AI usage across the organization? Is there shadow AI that I don’t know about? Do I have proper asset management across those AI solutions?"
Access control: "Am I managing identity and authentication to those AI platforms. Do I have the right security controls in place to protect the AI tools, to govern the access that they have to different data?"
Monitoring and response: "Moving into security monitoring, if there is a breach in one of the AI solutions, how will I know about it? Who is actively monitoring it? Do they have an incident response capability?"
McLeary points out that the well-established NIST frameworks already used for cybersecurity can be directly adapted ("lift and shift") to cover AI deployments.
"All those NIST frameworks that we’ve applied ad nauseum to cyber security, we can now lift and shift it and apply to AI."
However, he cautions that this adaptation significantly increases the workload for leadership teams.
"What this means, however, is that the CISOs and CIOs have tripled the work that they now need to do because they have NIST framework that they developed for cyber on-premise, they then had to shift to cloud and now they must shift it to AI." James McLeary
