There are steps organisations can take to buffer the impact should their artificial intelligence (AI) deployments fail. For one, there should be a quick way to unplug when things go awry.
There should be a kill switch for everything, said Nimish Panchmatia, chief data and transformation officer at DBS Bank.
The Singapore bank also takes small steps, learns as it goes, and ensures it retains control over whatever it applies AI to.
And the technology is always a copilot, so there is always human oversight, said Panchmatia, who was speaking at a media briefing held on the sidelines of Google Cloud’s AI conference in Singapore.
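In practice, a kill switch of this kind is often just a centrally managed flag that gates the AI code path and falls back to the existing manual process, with a human still reviewing any output. The sketch below is illustrative only; the names are hypothetical and do not reflect DBS's actual implementation.

```python
# Illustrative kill-switch gate for an AI copilot feature (hypothetical names, not DBS code).
import os

def ai_enabled(feature: str) -> bool:
    """Centrally managed flag; flipping it off 'unplugs' the AI path immediately."""
    return os.environ.get(f"AI_FEATURE_{feature.upper()}", "off") == "on"

def draft_reply_with_model(query: str) -> str:
    return f"[model draft for: {query}]"  # stand-in for the actual model call

def handle_query(query: str) -> dict:
    """The AI only ever drafts; a human officer reviews and sends the final reply."""
    draft = draft_reply_with_model(query) if ai_enabled("cso_assistant") else None
    return {"query": query, "ai_draft": draft, "requires_human_review": True}

print(handle_query("How do I reset my digital token?"))
```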
DBS taps Google Cloud’s AI platforms and tools, including Vertex AI, which is integrated with the bank’s self-service data platform, ADA.

The bank expects to extract SG$1 billion in economic value this year from its AI and machine learning initiatives, according to Panchmatia. This figure does not include its generative AI (GenAI) specific projects.
Amongst those is CSO Assistant, a GenAI virtual assistant tool that was developed in-house and trained with a large language model (LLM) designed to understand local languages and context.
Some 1,000 of the bank’s customer service officers are equipped with CSO Assistant, serving more than 250,000 customers.
The GenAI-powered application supports voice telephony and speech recognition, transcribing customer queries in real-time. It then searches DBS’ knowledge database to retrieve relevant information, giving service officers what they need to resolve customer queries.
CSO Assistant also generates call summaries and prefills service request fields. The AI platform has helped DBS cut call handling time by 20%.
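Conceptually, that transcribe-retrieve-summarise flow can be sketched as follows. The canned transcription, in-memory knowledge base, and keyword scoring below are illustrative stand-ins for the bank's actual speech-recognition and knowledge systems.

```python
# Rough sketch of the transcribe -> retrieve -> summarise flow described above.
# The tiny in-memory knowledge base and keyword overlap scoring are illustrative only.

KNOWLEDGE_BASE = {
    "card replacement": "Replacement cards are mailed within 5 business days.",
    "token reset": "Digital tokens can be reset in the mobile app under Settings.",
}

def transcribe(audio_chunk: bytes) -> str:
    return "customer asks how to reset digital token"  # stand-in for speech-to-text

def retrieve(query: str, top_k: int = 1) -> list[str]:
    """Rank knowledge-base entries by naive keyword overlap with the query."""
    scored = sorted(
        KNOWLEDGE_BASE.items(),
        key=lambda kv: len(set(kv[0].split()) & set(query.lower().split())),
        reverse=True,
    )
    return [answer for _, answer in scored[:top_k]]

def summarise_call(query: str, answers: list[str]) -> dict:
    """Generate a call summary and prefill service-request fields."""
    return {"summary": f"Query: {query}. Resolution: {answers[0]}",
            "service_request": {"category": "digital token", "status": "resolved"}}

query = transcribe(b"...")
print(summarise_call(query, retrieve(query)))
```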
“We took a measured approach by stress-testing [CSO Assistant] against our responsible data use frameworks and iteratively enhancing it based on feedback received during the pilot,” said Panchmatia.
He noted that the bank puts “strict” controls in place and takes its time with AI adoption, evolving as the technology advances. For now, for instance, it is choosing not to apply GenAI where the technology interacts directly with customers.
It also uses LLMs to evaluate one another, where one LLM assesses responses generated by another to check for issues such as hallucinations, he said.
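That cross-checking pattern is often described as using an LLM as a judge: one model's answer is handed to a second model together with the source material it was meant to draw from. Below is a minimal sketch, with call_llm() as a placeholder for whichever model endpoint is actually in use; it is not DBS's implementation.

```python
# Illustrative LLM-as-judge check: a second model grades the first model's answer
# against the retrieved source text and flags likely hallucinations.
# call_llm() is a placeholder for whichever model endpoint is in use.

def call_llm(prompt: str) -> str:
    raise NotImplementedError("wire up your model endpoint here")

def judge_answer(question: str, answer: str, source: str) -> str:
    prompt = (
        "You are a reviewer. Using ONLY the source text, say whether the answer "
        "contains claims not supported by the source. Reply SUPPORTED or HALLUCINATED "
        "with a one-line reason.\n"
        f"Question: {question}\nAnswer: {answer}\nSource: {source}"
    )
    return call_llm(prompt)
```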
Strong safeguards are necessary since DBS is an FSI (financial services institution) and handles large volumes of sensitive information, Panchmatia said.
Google’s AI models also have built-in controls to protect against potential risks, including prompt injection and RAG (retrieval-augmented generation) poisoning, said Google Cloud CEO Thomas Kurian at the media briefing.
Kurian pointed to the vendor’s Model Armor service, which screens LLM prompts and responses for security risks and for compliance with responsible AI policies.
Tapping agentic potential via sandbox
Caution and care also will be necessary as Singapore looks to push ahead with its adoption of AI, including emerging technologies such as AI agents.
Agentic AI offers new possibilities in the way humans interact with AI, said Josephine Teo, Minister for Digital Development and Information (MDDI) and Minister-in-charge of Cybersecurity and Smart Nation, who was speaking at the Google Cloud event.

More advanced AI agents can implement a series of instructions within complex workflows, combining tasks and deciding on the next steps, Teo said.
They also understand high-level instructions and can autonomously break these down into smaller tasks, experimenting with different approaches and continuously learning from earlier efforts, she said.
She added that agentic AI can be used to improve public services, powering workflows that pull information from different government agencies to help businesses, for instance, get their licences approved.
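At its simplest, the behaviour Teo describes is a plan-and-act loop: the agent decomposes a high-level goal into steps, executes each one, and decides what comes next. The sketch below is purely illustrative, with a hard-coded planner and print statements standing in for model-driven planning and calls to agency systems.

```python
# Minimal sketch of the plan-and-act loop described above: the agent breaks a
# high-level goal into steps, executes each, and decides what to do next.
# The step list and planner are illustrative placeholders, not a real system.

def plan(goal: str, done: list[str]) -> str | None:
    steps = ["check business registry", "verify zoning rules", "draft licence approval"]
    remaining = [s for s in steps if s not in done]
    return remaining[0] if remaining else None   # stand-in for a model-driven planner

def run_agent(goal: str) -> list[str]:
    done: list[str] = []
    while (step := plan(goal, done)) is not None:
        print(f"executing: {step}")              # stand-in for calling an agency API or tool
        done.append(step)
    return done

run_agent("help a business get its food-shop licence approved")
```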
However, as it does with all emerging technologies, Singapore must first understand how they work and why mistakes happen, Teo said.
“With AI agents, there are valid concerns about unintended actions and we need to pay even more attention to governance,” she said. “What permissions should agents be given? When should humans be in the loop? If things do not go as expected, who should be held accountable?”
She noted that her ministry will work to ensure agentic capabilities are developed and deployed in “a safe and responsible way”, so the public sector and citizens can use them confidently.
The government’s CIO office, GovTech, will begin working with Google Cloud on a sandbox to experiment with AI agents, including testing and evaluating these applications in public sector use cases, Teo said.
This sandbox initiative will provide MDDI early access to a new browser control tool via Google’s Gemini API, which is based on the cloud vendor’s Project Mariner. The new tool can “reason, plan, and take action” on a user’s behalf.
“Agencies will have a chance to test and evaluate the latest agentic capabilities, assess the risks, develop mitigation measures, and share the lessons learned with the broader community of AI practitioners in Singapore,” Teo said. “From the sandbox, we hope to better understand how to interact with agentic AI and build confidence to capture its value for public good.”
Data residency guarantees for local use
In addition, Google is expanding its data residency guarantees to its Singapore cloud region, offering organisations in the country the ability to store their data locally as well as perform machine learning processing for their AI workloads in Singapore.
This will apply to select Google AI services, including Gemini 2.5 Flash, Vertex AI Search, and NotebookLM Enterprise.
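For organisations consuming these services programmatically, keeping processing in-country is largely a matter of pinning requests to the Singapore region. A rough sketch using the google-genai SDK against Vertex AI follows, assuming a placeholder project ID; the model and region identifiers should be checked against Google Cloud's current documentation.

```python
# Sketch: calling Gemini 2.5 Flash through Vertex AI pinned to the Singapore region,
# so prompts are processed in-country. The project ID is a placeholder.
from google import genai

client = genai.Client(
    vertexai=True,
    project="my-sg-project",        # placeholder project ID
    location="asia-southeast1",     # Google Cloud's Singapore region
)

response = client.models.generate_content(
    model="gemini-2.5-flash",
    contents="Summarise this licence application in two sentences: ...",
)
print(response.text)
```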
Tapping the data residency guarantees, GovTech will make Gemini 2.5 Flash and Vertex AI Search available to government agencies via Singapore’s Government on Commercial Cloud platform. The government cloud platform provides agencies with a standardised approach to adopting commercial applications offered by cloud service providers.
“Singapore public sector agencies, including Centre for Strategic Infocomm Technologies, GovTech, and Home Team Science and Technology Agency, will be the first in Asia -- and amongst the first worldwide -- to get access to Gemini on Google Distributed Cloud, air-gapped,” Google said.
“With this technology, they can accelerate the development and deployment of agentic AI whilst keeping highly sensitive data within their on-premises data centres, fully disconnected from the public internet,” it added.
Under its agreement with GovTech, Google Agentspace also will be made available to some public officers, giving them tools to create custom AI agents as well as access to Google-developed agents.