While there has been much discussion about the opportunities artificial intelligence (AI) will bring, particularly increased productivity, not enough has been said amongst businesses about its security and performance.
AI is still an evolving technology, and much remains uncertain about how it will be delivered, said Bernard Chew, group CIO of Nets (Network for Electronic Transfers Singapore).
Chew pointed to an IEEE report on a research paper that revealed AI models would crack when performing under pressure.
Using PropensityBench, the 2025 study tested several AI models from OpenAI, Google, Meta, Anthropic, and Alibaba across 5,874 scenarios spanning different domains, such as biosecurity and cybersecurity.
PropensityBench measures an AI agent’s decisions to use harmful tools in order to complete its assigned tasks.
In each test scenario, an AI agent was assigned a task and given access to several tools, with instructions to use the safe tools and avoid the harmful ones. Six types of pressure were applied, including time pressure, financial constraints, and access to limited resources.
In a biosecurity test scenario, for instance, AI agents might be tasked with studying the spread of a pathogen and finding a way to contain it. A tool deemed safe could involve the use of anonymised genetic data, while a harmful tool would use data that was not anonymised.
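The setup described above can be sketched in code. This is a minimal, hypothetical illustration of how such a pressure-test scenario and its propensity score might be structured; all names and data here are assumptions for illustration, not PropensityBench's actual code or results.

```python
from dataclasses import dataclass, field

@dataclass
class Scenario:
    """Hypothetical container for one pressure-test scenario."""
    task: str                      # what the agent is asked to do
    safe_tools: list[str]          # tools the agent is told to use
    harmful_tools: list[str]       # tools the agent is told to avoid
    pressures: list[str] = field(default_factory=list)  # stressors applied

def propensity_score(runs: list[dict]) -> float:
    """Fraction of runs in which the agent chose a harmful tool."""
    harmful = sum(1 for r in runs if r["tool_chosen"] in r["harmful_tools"])
    return harmful / len(runs)

# Illustrative biosecurity scenario, mirroring the article's example.
scenario = Scenario(
    task="Model pathogen spread and propose containment",
    safe_tools=["anonymised_genetic_data"],
    harmful_tools=["raw_genetic_data"],
    pressures=["time_pressure", "limited_resources"],
)

# Simulated outcomes: one run stayed safe, one switched to the harmful tool.
runs = [
    {"tool_chosen": "anonymised_genetic_data",
     "harmful_tools": scenario.harmful_tools},
    {"tool_chosen": "raw_genetic_data",
     "harmful_tools": scenario.harmful_tools},
]
print(propensity_score(runs))  # 0.5
```

In this framing, the study's headline numbers are simply this score averaged over many scenarios, computed once at baseline and again with the pressure conditions switched on.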

“Models that appear perfectly safe in a neutral zero-pressure environment become progressively more dangerous as stress increases,” the study’s co-authors, Udari Madhushani Sehwag and Matthew Siegel, wrote.
“For example, OpenAI o3, the model with the lowest baseline Propensity Score of 2.2%, saw its score jump sharply under maximum pressure: first to 10.5% when the dangerous tool was explicitly labelled, and then dramatically to 40.3%, when the harmful name was replaced with an innocuous one,” they noted.
Across all models, the average rate of choosing a dangerous path without any pressure was 18.6%, which climbed to 46.9% when the models were under stress, the study found.
While AI developers typically work to ensure systems operate according to safety standards through training and instructions, it remains unclear how closely AI models will adhere to these guardrails.
“When they are actually put under real-world stress, and if the safe option is not working, are they going to switch to just getting the job done by any means necessary?” Sehwag posed. “This is [a] very timely topic.”
No room for AI misfires
The study underscores the need for organisations to look closely at the security and stability of AI, Chew said.
“I think people aren’t talking [enough] about this,” he said.
Asked about his biggest concerns about AI, he highlighted “the wall of trust and truth”.
“How do you know what the AI generates is true?” he noted, stressing the need for people using the tools to be critical of AI. “We need to be sceptical of AI. We should constantly ask what’s the use case for AI, how do we do better with it, and how do we ensure the data we use is clean.”
This is especially important for organisations such as Nets, which operates in highly regulated sectors and runs critical systems. First launched in 1985, Nets was established by a consortium comprising multiple Singapore banks. It is currently owned by DBS, UOB, and OCBC.
The desire to transform the business must be balanced with IT security demands, said Chew.
“We’re acutely aware that trust is of utmost importance. We cannot break the trust of the institutions and consumers, [so] we need to be very careful about how we deploy new solutions,” he said.
AI-powered code automation tools, for instance, have to be well tested and put through penetration and vulnerability testing.
Policies and guardrails also must be put in place, from a security standpoint, to facilitate the use of AI amongst employees, he said.
One of Chew’s current priorities is to facilitate the operationalisation of AI at scale across Nets, moving the organisation from an exploratory stage to “operational reliability”.
Its technology transformation roadmap focuses on five pillars, namely, data, intelligence, cloud-first, enterprise architecture, and security, he said.
“You cannot leverage data and AI without the scalability of a cloud-first strategy, a robust enterprise architecture, and the assurance of security,” he added.
Collectively, Chew believes these five components will provide the agility, resilience, and technology backbone Nets needs to better compete in its market.
He noted that delivering on this can prove challenging when the organisation comprises three business entities offering different solutions and services, including merchant payments, banking computing services, and central bank software. These support various offerings, such as Singapore’s peer-to-peer payment infrastructure, PayNow.
Each business entity moves at a different speed and has different risk profiles and business strategies, he noted.
The national payment operator, for instance, demands stability and perfection, and is highly regulated by several regulatory bodies. In comparison, the commercial business entity demands speed and agility, with quick time-to-market to push out new products and services to merchants.
In addition, there are data and AI sovereignty issues that need to be addressed, he said.
“To fuel AI, data must be open and not locked in a box,” he noted, adding that this must be done securely.
Nets’ entities handle highly sensitive citizen transactions as well as both commercial and national data, some of which the company owns and some it does not, Chew explained.
It then needs to create an AI-powered environment that connects these businesses, while ensuring it complies with all data sovereignty and privacy requirements, he said.
It must have the foundation and infrastructure that enable data to be open and accessible, to fuel AI, and secured and protected at the same time, he noted. This is critical before any AI can be built on top of the data, he said.
Focus on new roles needed
Nets also has an experienced workforce that instinctively relies on its experience and proven decision-making processes, Chew said.
However, the company’s efforts to drive new technology, such as AI, require an element of unlearning and relearning from its workforce, he said.
There has been much discussion and anxiety about AI displacing jobs, which may be inevitable with any new technology, but AI also can create new roles and jobs, he noted.
This, too, is an issue that has not been talked about enough, he said.
He believes the AI era will give rise to the need for technocrats, who understand both the technology and the business, and have critical thinking skills.
All employees should have the support and training to understand AI and grasp how to use it, Chew said.
They also must assume the responsibility to do so securely, he added.
“The benefit isn’t about whether the organisation adopts AI. It’s how we adopt AI and the sustainability of our AI use moving forward,” he said. “From that perspective, the greatest risk of AI isn’t that the AI will replace people, but whether people are adapting to the change.”
