Thu, 7 May 2026

The confidence trap: Governing AI and data resilience in the Agentic era

Across the Asia-Pacific, a quiet crisis is unfolding. It is not the crisis of adoption—that train has left the station. It is the crisis of visibility.

At the 2nd Annual FutureCIO Conference in Bangkok, Thailand, 55% of the technology and security delegates in attendance cited data privacy and protection as the challenge that concerns them most.

Source: 2nd Annual FutureCIO Conference Thailand, Cxociety Research 2026

The Veeam 2026 Data Trust and Resilience Report quantifies this paradox: while 90% of security leaders claim extreme confidence in their ability to recover from a cyber incident, the reality is that over 40% of those hit by an incident suffered financial loss or customer disruption. In the world of ransomware, only 28% fully recovered their data.

This gap between perceived readiness and operational reality is the “confidence trap.” And according to Veeam executives David Allott, CISO for APJ, and Tim Stead, technical director for Data & AI Security, APAC, this trap is snapping shut fastest on enterprises embracing generative AI and agentic workflows.

In a recent discussion, both leaders argued that traditional governance is failing.

The solution, they propose, lies not in more policies but in a unified architecture of detection, protection, and the ability to “undo” AI-driven mistakes.

The visibility paradox in Asia

The accelerating adoption of AI across Asia raises concerns among both CIOs and CISOs. At the aforementioned FutureCIO Conference in Bangkok, 77% of delegates cited “data leakage via prompts” as their overwhelming concern – dwarfing insider misuse (11%) and model manipulation or poisoning (9%).

Source: 2nd Annual FutureCIO Conference Thailand, Cxociety Research 2026

The Veeam report’s statistics are a wake-up call for the region. Across the 900+ leaders surveyed, 43% admitted that AI tool adoption is outpacing their ability to secure data and models.

Worse, 42% have limited visibility into the AI tools already in use within their organisations. For Asia-Pacific—a region now navigating Vietnam’s new AI Law, Singapore’s updated PDPA guidance for AI, and China’s PIPL—this invisibility is a regulatory minefield.

“AI doesn’t do anything without data,” Stead noted, drawing a parallel to space-time. “The two things are inseparable. If you don’t know what you’re dealing with in terms of data, preventing data leakage is going to be a hundred times more difficult.”

The report supports this, noting that 25% of organisations cite Shadow IT and unauthorised AI tool usage as a primary concern.

Yet, as Allott pointed out, the modern CISO wants to be an enabler, not a blocker. He cited progressive organisations that allow developers to “bring your own model” for a month-long trial.

The enablers, however, share a secret: they have solidified their foundation. They have classified their data first.

Breaking the silos: The case for the Data Command Graph

The fundamental failure, according to Stead, is structural. “Historically, data security or data governance tools have been siloed,” he explained.


“The data governance team had their enterprise data catalogue; the security team had DLP tools; the privacy team had something else. They were talking about the same data but using very different taxonomies.” Tim Stead

This fragmentation is lethal when dealing with agentic AI—autonomous systems that move data and trigger actions without human oversight. To solve this, Veeam and Securiti AI are advocating for a “Data Command Graph.”

This is not just a database; it is a contextual overlay that maps the relationships among data sensitivity, storage systems, compliance requirements (such as PDPA), and active controls.
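
As an illustration only, and not Veeam’s or Securiti AI’s actual schema, such an overlay can be sketched as typed nodes for datasets, regulations and controls, joined by relationship edges that a compliance query can walk:

```python
# Illustrative only: a toy "data command graph" as typed nodes and relationship
# edges. Node types and relation names here are hypothetical, not the vendors'
# actual schema.

from dataclasses import dataclass, field

@dataclass
class Node:
    node_id: str
    node_type: str                      # e.g. "dataset", "regulation", "control"
    attrs: dict = field(default_factory=dict)

@dataclass
class Graph:
    nodes: dict = field(default_factory=dict)   # node_id -> Node
    edges: set = field(default_factory=set)     # (source_id, relation, target_id)

    def add_node(self, node: Node) -> None:
        self.nodes[node.node_id] = node

    def link(self, source: str, relation: str, target: str) -> None:
        self.edges.add((source, relation, target))

    def related(self, node_id: str, relation: str) -> list:
        return [t for s, r, t in self.edges if s == node_id and r == relation]

    def unprotected_regulated_datasets(self) -> list:
        """Datasets subject to a regulation but with no active control attached."""
        flagged = []
        for node in self.nodes.values():
            if node.node_type != "dataset":
                continue
            regulated = bool(self.related(node.node_id, "subject_to"))
            controls = self.related(node.node_id, "protected_by")
            active = any(self.nodes[c].attrs.get("status") == "active" for c in controls)
            if regulated and not active:
                flagged.append(node.node_id)
        return flagged

# One sensitive dataset, the regulation it falls under, and the control meant to protect it.
g = Graph()
g.add_node(Node("customer_pii", "dataset", {"sensitivity": "high"}))
g.add_node(Node("pdpa", "regulation"))
g.add_node(Node("dlp_policy_7", "control", {"status": "active"}))
g.link("customer_pii", "subject_to", "pdpa")
g.link("customer_pii", "protected_by", "dlp_policy_7")

print(g.unprotected_regulated_datasets())   # [] while the DLP control stays active
```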

Stead described a shift from annual manual audits by “the big four” to continuous compliance. The graph interrogates APIs daily to determine whether the appropriate controls are in place. But what happens when a control fails?

Currently, 90% of customers prefer to be notified rather than have automatic remediation, largely because infrastructure is managed by a different team from the one responsible for data governance.

This leads to the core organisational problem: ownership of AI risk.

Who sits at the table?

The Veeam report reveals a worrying trend: accountability for AI and data risk governance is concentrated in a single executive—38% at the CISO level and 27% at the CIO level. Only 17% use a cross-functional committee. This creates blind spots.

Allott argues that this model is unsustainable. “AI risk must be a formalised part of the company’s risk management framework,” he said. By placing AI risk on the corporate risk register, business owners, application owners, and data protection officers are forced to sit at the same table.

Stead agreed, noting that the Data Command Graph is designed for exactly this cross-functional reality. “From the start, the idea had been that we wanted this platform to be one that would serve cross-functional teams. They had a place that they could come together, get a singular view of the data with a single taxonomy and foster that improved collaboration.”

This is where toxic combinations—a confluence of sensitive data, open permissions, and an ungoverned AI model—are exposed. The graph doesn’t just show the technical vulnerability; it reveals the operational breakdown between teams.
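
The underlying logic is simple to sketch. In the toy example below, with field names invented for illustration, a toxic combination is just three risk signals evaluated together rather than in three separate tools:

```python
# Illustrative only: flagging a "toxic combination" -- sensitive data, open
# permissions, and an ungoverned AI model touching it. Field names are
# hypothetical, not the vendors' actual schema.

datasets = [
    {"name": "customer_pii", "sensitivity": "high", "permissions": "open",
     "consumed_by": ["public_llm_demo"]},
    {"name": "marketing_copy", "sensitivity": "low", "permissions": "open",
     "consumed_by": ["public_llm_demo"]},
]

governed_models = {"approved_internal_model"}   # models vetted by the AI risk process

def toxic_combinations(datasets, governed_models):
    """A dataset is toxic when all three risk factors line up at once."""
    for d in datasets:
        sensitive   = d["sensitivity"] == "high"
        over_shared = d["permissions"] == "open"
        ungoverned  = any(m not in governed_models for m in d["consumed_by"])
        if sensitive and over_shared and ungoverned:
            yield d["name"]

print(list(toxic_combinations(datasets, governed_models)))   # ['customer_pii']
```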

The agentic threat: Non-human identities

Looking toward 2026, both leaders identified the next frontier of risk: autonomous, multi-agent workflows. The Veeam report warns that AI systems increasingly act on users’ behalf, moving data with less direct human oversight.

“The biggest thing for CISOs this year is going to be the governance of non-human identities,” Allott warned. “It’s the automated decision-making that these agents are taking.”

He cited real-world incidents where confused agents, acting on their own discretion, made autonomous decisions. That automated decision-making requires a new class of policy.

Stead added that visibility tools must evolve to be agile. When a sales team adopts Salesforce Einstein, or a developer uses Microsoft Copilot alongside Atlassian tools, the security architecture cannot be siloed to a single vendor.

“This is why we built the platform around a graph database,” he said. “We simply add another type of node—a new agent, a new model—rather than building a completely new product.”
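
A rough sketch of that extensibility, again illustrative rather than the product’s actual API, shows a newly discovered agent becoming one more node and a couple of edges in the same graph:

```python
# Illustrative only: an agent is just another node type in the same graph.
# Identifiers below are invented for the example.

nodes: dict = {}
edges: set = set()

def add_node(node_id: str, node_type: str, **attrs) -> None:
    nodes[node_id] = {"type": node_type, **attrs}

def link(source: str, relation: str, target: str) -> None:
    edges.add((source, relation, target))

# Existing inventory: a dataset, the control around it, and a CRM application.
add_node("customer_pii", "dataset", sensitivity="high")
add_node("dlp_policy_7", "control", status="active")
add_node("crm_system", "application")
link("customer_pii", "protected_by", "dlp_policy_7")

# A newly discovered assistant: one more node, two more edges -- no new product.
add_node("sales_assistant", "ai_agent", vetted=False)
link("sales_assistant", "reads_from", "customer_pii")
link("sales_assistant", "acts_in", "crm_system")
```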

From policy to “Undo AI”

The final piece of the resilience puzzle is recovery. The Veeam report notes that while 48% of organisations have Data Loss Prevention (DLP) controls, policy alone doesn’t reduce risk. You need enforcement.

Stead described the command graph’s ability not only to detect misalignment but also to facilitate corrective action. This is the “Undo AI” capability. If a toxic combination is found—say, a highly sensitive dataset being fed into an unvetted public model—the system can flag it for the data owner or, eventually, automatically quarantine the data flow.
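
A minimal sketch of that remediation step, with the finding structure and hooks invented for illustration rather than drawn from Veeam’s API, defaults to notification and treats quarantine as an opt-in:

```python
# Illustrative only: what an "undo" step might look like once a toxic
# combination is detected -- notify the data owner, or quarantine the flow.
# The Finding structure and actions are hypothetical, not Veeam's API.

from dataclasses import dataclass

@dataclass
class Finding:
    dataset: str
    model: str
    owner: str
    severity: str   # "high" means sensitive data reaching an unvetted model

def notify(owner: str, message: str) -> None:
    print(f"[notify {owner}] {message}")                    # stand-in for email/ticketing

def quarantine_flow(dataset: str, model: str) -> None:
    print(f"[quarantine] blocked {dataset} -> {model}")     # stand-in for revoking the connector

def remediate(finding: Finding, auto_remediate: bool = False) -> None:
    """Most organisations still prefer notification; auto-quarantine is opt-in."""
    if finding.severity == "high" and auto_remediate:
        quarantine_flow(finding.dataset, finding.model)
    notify(finding.owner, f"{finding.dataset} is feeding {finding.model}; review required")

remediate(Finding("customer_pii", "public_llm_demo", "dpo@example.com", "high"))
```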

Allott emphasised that this recovery mindset is often absent. Referring to a quote in the Veeam report from a Fortune 1000 company, he noted that most recoveries are based on “heroics, not necessarily from sound governance.”

Measured resilience

The path forward for Asia-Pacific enterprises is not about slowing down AI adoption. The Veeam report shows that organisations with budget increases are more likely to track recovery KPIs (like RTOs and isolation times) and invest in immutable storage and automated backups.

For Allott and Stead, the message is clear. You cannot secure what you cannot see. You cannot govern what you cannot measure.


“It’s about exposing those relationships to build effective policy, so that the governance and visibility of the agents, but also the controls that sit around them, become so much tighter based on these use cases that are coming out. So that’s the big topic for CISOs this year.” David Allott

As Vietnam’s AI Law takes effect and Singapore tightens its PDPA, the enterprises that survive the next wave of agentic AI will be those that unify their security, IT, and data teams around a singular, graph-based reality. They will move from confidence to validation—because in the era of AI, trust is not a feeling. It is a recoverable state.

