Even though Singaporean organisations are moving quickly to adopt AI, many remain unclear about who owns the associated risks and how they should be governed, according to Okta's AI Security Poll.
The findings reveal that over half (53%) of respondents said AI security risk falls to the CISO or the security function. Some 25% reported no single person or function currently owns AI risk in their organisation.

"Organisations in Singapore are adopting AI at speed, which signals growing maturity in how the technology is being used. We are seeing a shift from early experimentation to responsible, strategic adoption. The next step is ensuring governance and security evolve at the same pace," said Stephanie Barnett, vice president, Asia Pacific & Japan, Okta.
Governing growing AI risk
The poll also revealed limited visibility into AI behaviour: only 31% expressed confidence in their ability to detect whether an AI agent is operating outside its intended scope, and 33% do not currently monitor AI agent activity at all.
There are also security blind spots, including data leakage via integrations (36%) and shadow AI, meaning unapproved or unmonitored tools (33%).
Alarmingly, only 8% said their identity systems are fully equipped to secure non-human identities such as AI agents, bots and service accounts.
While 50% said their boards are aware of AI-related risks, only 31% reported full board engagement in oversight.
Securing AI agents
"As AI becomes more embedded across workflows, organisations need to treat AI agents like any other identity, applying the same discipline to securing them as they do to human users."
The live poll, conducted in November at Okta's Oktane on the Road event in Singapore, surveyed technology and security leaders.
