The growing intersection of cloud computing and artificial intelligence poses unprecedented challenges for Singapore organisations. A primary concern is security. With Singapore's unique position as a global financial hub, cloud security and resilience have become national imperatives.
A recent FutureCISO roundtable, organised with Splunk and AWS, saw Singapore's leading security executives from the finance, healthcare, government, and technology sectors reveal a critical inflexion point: how to build a secure cloud ecosystem while harnessing the transformative power of AI without compromising data integrity or operational resilience.
The dawn of AI: Navigating infancy and exploration
For most Singapore enterprises, AI implementation remains in the nascent stage, led by cautious exploration and rigorous risk assessment. "We are cautious in terms of assessing and managing the risks that are posed by the AI tools that we introduce in the environment," commented a delegate, highlighting the "very stringent sort of checklist on how we would evaluate certain tools."

The core challenges revolve around trust, security, and compliance. Andy Tham, Head of Infrastructure at a large multinational conglomerate, acknowledged that his primary concern was "the data upload and the content." He was particularly concerned about "users loading up sensitive data." "How do we monitor them? How do we detect sensitive data? How do we stop them?" he asked.
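Tham's monitoring question can be made concrete. One common first line of defence is pattern-based scanning of outbound content for sensitive identifiers before it leaves the environment. The sketch below is illustrative only, not a description of any tool discussed at the roundtable: the two patterns (a Singapore NRIC/FIN-style identifier and a generic 16-digit card number) are assumptions, and production DLP systems add checksums, contextual classifiers, and far richer rule sets.

```python
import re

# Illustrative patterns only; real DLP tooling uses far richer rules
# plus checksum validation and contextual classification.
SENSITIVE_PATTERNS = {
    "nric": re.compile(r"\b[STFG]\d{7}[A-Z]\b"),         # Singapore NRIC/FIN format
    "card_number": re.compile(r"\b(?:\d[ -]?){15}\d\b"),  # 16-digit card-like number
}

def scan_outbound(text: str) -> list[str]:
    """Return the names of the sensitive patterns found in outbound text."""
    return [name for name, pat in SENSITIVE_PATTERNS.items() if pat.search(text)]

# A gateway or proxy could block the upload or raise an alert on any match.
hits = scan_outbound("Please process S1234567A for the refund.")
print(hits)  # ['nric']
```

In practice such a scanner would sit at an egress point (a web proxy or API gateway) so that matches can be blocked or alerted on before data reaches an external AI tool.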

Jon Lau, director of Cybersecurity Strategy, Policy & Governance at A*STAR (Agency for Science Technology and Research), who recognised the value of using tools like "Copilot and other Generative AI tools for personal development and capabilities", quickly identified the fundamental data dilemma: "How will the data be used? If it's being shared publicly, whose hands is it ending up in?"
His apprehension deepens when it comes to third-party AI tools being used to transcribe corporate meetings - settings where confidential and sensitive discussions are the norm.
Another delegate shared a revealing incident where an AI tool unexpectedly joined a team meeting, illustrating the insidious nature of shadow AI. "Users probably do not know what the proper use of AI is… unknowingly, they allow it into our environment, listening into sensitive discussions."
This creates a significant challenge: "It's very hard to log because some of them are really business... tools that are out there that can be enabled very easily." The consequence? "That becomes a challenge as to where the data [is going] out, where is it being uploaded, and who is even processing the data?"
The spectre of shadow AI: Governance and policy imperatives

The proliferation of accessible AI tools has exponentially magnified shadow IT risks. Robert Pizzari, group vice president of strategic advisory for APAC at Splunk, drew a direct parallel: "What we haven't solved in shadow IT, it is going to be the same problem with artificial intelligence."
A critical issue with public large language models (LLMs) is the irreversibility of data submission. "Once the query goes up or the data gets uploaded, you can't bring Google or any of the other open LLMs and ask them to delete it, right? It's too late," Pizzari warned, emphasising the implications for privacy laws.
To mitigate these risks, some organisations are establishing private "AI data centres" to maintain control and ensure proper "guardrails around my data centre, my data and how these tools are being used." Regardless of the implementation model, "it's really important to understand the data governance piece and governance in general," Pizzari noted.

Sankar Cherukuri, Solutions Architect at AWS, highlighted the company's security framework approach: scan for personally identifiable information (PII) before mastering that data, and apply a "well-architected framework for generative AI, where you can review each of the pillars of the security aspects, where the responsible AI or data governance lies."
Cherukuri advocated a pragmatic three-step methodology for generative AI adoption: "identify the use case and bring the right data only for that use case", and ensure the framework is "open and flexible… to integrate with the other tools."
The Singapore government is actively working to "drive AI governance from the policy level," including initiatives like AI labelling. However, the rapid proliferation driven by organisational desires to be "AI-ready" means "most organisations are still not ready to govern the data fully."
Frameworks in flux: To wait or to act decisively?

The evolving regulatory landscape raises a strategic question for CISOs: Should organisations await mature governmental frameworks or proactively develop internal governance models? Jonathan Lee, regional head of Information Technology for a regional express delivery company, advocated for autonomy: "The modern cyber landscape has shifted: a reactive approach is no longer sustainable. It's a costly path leading to severe reputational damage, operational paralysis, and persistent vulnerability."
He added, "With ever-growing and more sophisticated threats, rapidly evolving technology, and disjointed global regulations, a proactive, security-first stance – aligned with your organisation's risk appetite – isn't just an option, it's a necessity. Waiting for regulatory clarity is a gamble no organisation can afford."
Highly regulated sectors typically adopt more proactive stances.
A senior executive responsible for machine learning and data management emphasised the importance of proactive action regarding AI governance: "The general expectation is that AI will eventually be regulated, so waiting passively is not an option." Their organisation has established a centralised AI governance team that develops policies ahead of integrating any commercial AI models, adapting existing frameworks to address the distinct risks posed by generative AI.

Healthcare organisations follow similar approaches. A healthcare IT leader emphasised their "people-centric" governance model tailored "for our patients." Strict mandates, such as keeping patient data within Singapore, remain non-negotiable. For AI implementation, they've adopted a "copilot" philosophy, positioning AI as an augmentation tool rather than a replacement for clinical decision-making.
Atul Dhamne, chief information security officer at Accuron Technologies, offered a balanced perspective. He referenced the EU AI Act's mandate for employee AI literacy and his company's policy mandating the "responsible use of AI."
Dhamne sees practical value in frameworks, stating, "If you have a framework, it is a good start [for adoption], and [you] don't have to reinvent the wheel", especially for complex organisations. Certifications, he added, also provide assurance to stakeholders.
The tightrope walk: Balancing rapid transformation with robust security

Digital transformation imperatives often conflict with security requirements, particularly in hybrid environments. Jean Koay, AVP (Technology & IT), emphasised user education on data classification and shared responsibility principles.
"It's all about the awareness of the business... Many users harness the power of AI to process data without understanding where that data is being transmitted. In some cases, they're unconcerned with what happens behind the scenes—as long as the business objectives are achieved," she observed.
She also noted that this challenge is especially common in small and medium-sized businesses (SMBs), where operational demands for productivity frequently outweigh security considerations, and security is rarely recognised as a strategic partner that can enhance operational resilience.
One delegate at the roundtable advocated for a proactive, enablement-focused approach: "As cybersecurity professionals, we just have to run together with the business and the technology... Just be prepared. Put on the guard rails and educate the users." He cautioned that "the worst thing we can do is stop it."
One delegate offered this metaphor: cybersecurity must function as both an "enabler" and a "brake". "Just like a sports car... [if you] don't have a brake, it can go very fast, but we would hit the wall and crash." Frameworks, he argued, serve as the "most effective brakes."
Another delegate to the roundtable highlighted the unique challenge of balancing security with operational continuity: "We can't simply halt activities—research teams expect uninterrupted progress and would push back strongly." While governance frameworks remain crucial, AI tools such as Microsoft Copilot have proven effective in enhancing security operations, significantly accelerating log analysis during incident response.
Splunk's Pizzari recommended a "crawl, walk and run" implementation methodology coupled with operational readiness for security incidents. He strongly advised running "tabletop exercises with the heads of business units" and "re-craft your incident response plan and your communication plan."
Five steps to building Singapore's secure cloud future
For Singapore enterprises, the journey toward cloud resilience requires a comprehensive strategy that balances innovation enablement with robust security controls. As the country strengthens its position as a global financial and technology hub, organisations must prioritise five critical imperatives:
First, implement proactive data governance frameworks. One delegate suggested that before onboarding any of the commercial models, Singapore organisations must establish comprehensive control mechanisms addressing data sovereignty, classification, and protection.
Second, institutionalise continuous security awareness programs. With Koay highlighting that "it's all about the awareness of the business," Singapore enterprises must cultivate a security-conscious workforce that understands the implications of cloud and AI usage, particularly regarding data classification and handling protocols.
Third, develop contextual security frameworks aligned with Singapore's unique regulatory landscape. Lee pointed out that the evolving cyber threat environment makes it clear: proactive cybersecurity and data privacy governance are no longer just about meeting requirements but about building resilience and gaining a competitive advantage.
He suggested that "Sticking to a reactive 'checkbox' mentality, waiting for mandates, results in severe financial penalties, irreparable reputational damage, and operational inefficiencies, leaving organisations vulnerable to advanced threats.
"Conversely, by embedding proactive governance, businesses benefit from lower risk, substantial cost savings, enhanced customer trust, and the flexibility to innovate and expand into new markets."
Fourth, implement robust cloud access security controls. Concerns from Tham about unintentional data uploads underscore the need for comprehensive visibility into cloud data movement and continuous monitoring for data exfiltration risks.
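As a hedged illustration of the visibility this step calls for, the sketch below flags hosts whose most recent day's outbound upload volume spikes far above their own recent baseline. The function name, z-score threshold, and window are illustrative assumptions; real CASB and SIEM tooling applies far richer context (destinations, users, content classification) than raw byte counts.

```python
from statistics import mean, stdev

def flag_upload_anomalies(daily_bytes: dict[str, list[int]],
                          z_threshold: float = 3.0) -> list[str]:
    """Flag hosts whose latest daily upload volume exceeds
    mean + z_threshold * stdev of their earlier history."""
    flagged = []
    for host, history in daily_bytes.items():
        *baseline, latest = history
        if len(baseline) < 2:
            continue  # not enough history to form a baseline
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma == 0:
            sigma = 1.0  # avoid division by zero on a flat baseline
        if (latest - mu) / sigma > z_threshold:
            flagged.append(host)
    return flagged

uploads = {
    "laptop-42": [10_000, 12_000, 9_500, 11_000, 950_000],  # sudden spike
    "laptop-7":  [20_000, 21_000, 19_500, 20_500, 21_200],  # steady
}
print(flag_upload_anomalies(uploads))  # ['laptop-42']
```

Even a crude baseline check like this surfaces the kind of unexpected bulk upload that would otherwise only be noticed after data has already left the environment.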
Finally, conduct regular resilience testing and incident response simulations to ensure optimal preparedness. Pizzari's advice to "re-craft your incident response plan and your communication plan" can address Singapore-specific scenarios, including regulatory notification requirements and cross-border incident management challenges.
As AWS's Cherukuri observed, security and resiliency must be the "highest priority" for organisations embracing cloud and AI technologies. By following the above steps, Singaporean enterprises can confidently navigate the evolving digital landscape while maintaining operational integrity and data sovereignty in an increasingly interconnected and AI-driven world.
