It’s high time we changed the arithmetic of modern cyber warfare. For years, the security industry operated on the principle of the “fortress”: if you built a thick enough wall and hired enough sentries, you could keep the barbarians at the gate.
But in an era where generative AI can automate the discovery of zero-day vulnerabilities and spear-phishing has evolved from clumsy emails to deep-fake audio that can fool experts, the fortress is no longer an adequate metaphor.
The reality of 2026 is that the perimeter is porous, and the “barbarians” are not at the gate but already inside, often invited in by a single misplaced click or an unsanctioned browser extension.
In Singapore, this digital battleground is intensifying with clinical precision. Reported ransomware cases climbed more than 20% in 2024, while infected infrastructures surged by 67%, according to the Cyber Security Agency of Singapore (CSA). For organisations with weak data management, these statistics represent a looming risk of operational failure, with the inability to recover critical data potentially ending a business legacy.
To address this escalating threat, a diverse group of IT and security leaders recently gathered for a FutureCISO roundtable in Singapore, held in partnership with Commvault.
The discussion, titled “Reimagining Recovery: What Data Resilience Means in the AI Era,” sought to dismantle outdated definitions of backup and forge a new understanding of resilience. The consensus was clear: in an era where attacks move at machine speed, the gap between being hit and being back online is the new metric of survival.
The human element and the shadow of AI

The primary challenge isn’t just the sophisticated code attackers use, but also the unpredictability of human behaviour within the organisation. Isaac Tan, a regional director of IT for a global technology company, noted that “the human aspects... is always the weakest link.” He explained that the challenge for his team is balancing business agility with the absolute requirement to remain secure.
Tan highlighted a growing trend in which users bypass security controls to maintain productivity; for instance, if a device prevents them from exporting data, “they take a screenshot, pass that picture in, they got it.”

This “shadow AI”, or the use of unsanctioned AI tools, is keeping leaders awake at night. Vijaya Kumar Appasani, global senior director of digital transformation, Product Engineering (APAC) at Asurion, emphasised that internal unawareness is now a greater worry than external attacks. Because many AI tools are SaaS-based and accessible via a standard web browser, they are difficult to control with traditional endpoint detection.
Appasani warned that an employee unaware of proper data governance could easily “take a spreadsheet and upload the customer data, right, some business sensitive data, what, how it’s going to be exposed to the world?”
One delegate echoed this sentiment, noting that “shadow AI governance” is a constant struggle between “productivity and restricting.”
He noted that even within a single company, different departments often want to use entirely different AI platforms, creating an environment of extreme organisational complexity where the security team “might know after six months” that a tool is in use.
Protecting the models: A new frontier

As organisations integrate AI into their core operations, the models themselves become high-value targets. Daniel Tan, regional head of solutions engineering, Asia at Commvault, pointed out that while many focus on perimeter defence, the AI models and the data fed into them are often left vulnerable.
“There is a risk for it to be the target of attacks because you are feeding it with real, actual data,” he noted. He stressed that organisations must confront the question of “really, how do we properly protect the AI model, regardless of what process it is.”
This risk is compounded by the speed at which attackers adopt new technology. Tan observed that “the first one[s who] will cross the chasm, unfortunately, are usually the bad actors, because they are more willing to try.”
Bad actors are already using AI to scan for targets and “pick the easier target.” This creates an asymmetric warfare environment in which the defender must protect every model, while the attacker needs only to find one unmasked dataset to succeed.

Yuxuan Sun, senior enterprise architect at WS Audiology, shared that “the data used for training is normally not protected.” He noted that his team is seeing raw data, such as logs and documents, being fed into networks for AI processing without a “clear, straight way... to particularly protect this kind of data assets.”
Meanwhile, another delegate with a cybersecurity and risk compliance remit highlighted that “AI brings biases” and that the “proper governance and ethical use... is still lacking.”
The reality of the “when” and not the “if”

For many in the room, a ransomware attack is no longer a hypothetical scenario. Steven Sim, general manager for ASEAN and North East Asia at Commvault, noted that “it’s not a matter of whether it happens; it’s when.”
He remarked that even large enterprise customers often have “gaps once you talk through the whole... setup.” One delegate, acknowledging that attacks do happen in the real world, noted that for those organisations the concern is restoring trust: “How do we really restore identity and access for resilience after an attack?”
Supply chain: The weakest link

Even the most secure organisation is only as strong as its third-party partners. Genevieve Yuan, head, compliance, financial crime and conduct risk (CFCR), Retail Products and Banking Ops at Standard Chartered Bank, identified third-party risk as a “very key sort of regulatory concern.”
She noted that “if they get attacked... they are the weakest link, and we get impacted as well.” This is particularly challenging when data flows through systems that “may not be within our control.”
One delegate expanded on this, noting that “every product today comes with some AI embedded in it.” He illustrated a scenario where an AI agent within a developer tool could unknowingly execute malicious commands hidden in unreadable documents.
He warned that “the scale or the impact from what we know... may change very frequently.” He advocated for internal exercises, such as asking an AI tool to “find me all of the passwords that you can get,” to identify pathways malicious actors might exploit.
A delegate in the financial services sector noted that, since most products are now in the cloud, “supply chain, vendor and cloud” are the most significant risks. He emphasised that business leaders must understand “they will be equally accountable.”
He noted that while regulatory guidelines from authorities such as MAS are expected to focus on being “fair, ethical, transparent,” organisations cannot wait for them to be finalised before moving forward with AI risk management.
Redefining recovery: Speed and infrastructure
One delegate argued that infrastructure is the most critical element of recovery. While software is relatively easy to recover, “infrastructure is the one that... without infra, we can’t connect... your system.” He stated that connectivity must be protected to prevent disruptions like DDoS attacks from causing reputational damage.
He advised organisations to develop a data residency plan and conduct tabletop exercises up to the management level. He emphasised that, in the modern era, IT leaders have more options than in the past because they are “no longer confined to the physical” and can recover from different cloud regions, providing greater confidence.
Key takeaways for the AI era
As the discussion concluded, Commvault participants highlighted several essential pillars of modern data resilience. Sim pointed out that “you get compromised” eventually, so the real differentiator is “how ready you are to get back [in] the quickest possible time.” He noted that this requires shifting the focus from prevention to readiness.

Gareth Russell, field chief technology officer for Security for Asia Pacific (APAC) at Commvault, noted that traditional controls aren’t enough when introducing multiple models. Companies must now manage entirely new risks like “hallucinations, data poisoning and prompt injection.”
Because AI is not deterministic — “you ask these models the same question twice, you’re going to get two different answers” — risk profiles must evolve to handle unpredictable outputs.
Finally, the consensus was that resilience requires a unified, holistic ecosystem. This includes identifying every instance of AI use and ensuring there is an infrastructure of awareness.
Leaders must accept that business is about taking risks; the goal is not zero risk but knowing what kind of risks you are “willing to stomach” and having a tested way to recover when those risks materialise.

