100% of organisations in Singapore are already using Generative AI (GenAI) tools in their businesses, even though 90% of them believe these tools pose a potential security risk, according to Zscaler, Inc.’s latest survey.
All eyes on securing GenAI
Based on a survey of more than 900 IT decision makers globally, “All eyes on securing GenAI” revealed that 24% of organisations in Singapore do not monitor the usage of GenAI tools. Moreover, 31% have yet to implement any additional GenAI-related security measures.
"GenAI tools, like ChatGPT, offer Singaporean businesses the opportunity to improve efficiencies, innovation, and the speed in which teams can work. But we can’t ignore the potential security risk of some tools and the potential implications of data loss,” said Heng Mok, chief information security officer, Asia Pacific and Japan at Zscaler.
Closing the gap
64% of respondents said IT teams directly drive the usage of GenAI tools, while only 3% said it stemmed from employees. Nearly half (48%) anticipate a significant increase in interest in GenAI tools before the end of the year.
Organisations can help close the gap between GenAI use and security by developing an acceptable use policy that educates employees on valid business use cases while protecting organisational data; implementing a holistic zero trust architecture that authorises only approved AI applications and users; and conducting thorough security risk assessments of new AI applications to clearly understand and respond to vulnerabilities.
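To make the zero trust recommendation concrete, the minimal sketch below shows how a forward proxy or gateway might allow GenAI traffic only when both the destination and the requesting user's group are on an approved list. The domains, group names, and function names are illustrative assumptions, not part of Zscaler's survey or any specific product.

```python
# Minimal sketch of an allowlist check for approved AI applications and users.
# All domains, groups, and names below are illustrative assumptions.

APPROVED_AI_DOMAINS = {"chat.openai.com", "api.openai.com"}   # sanctioned GenAI tools
APPROVED_USER_GROUPS = {"engineering", "marketing"}           # groups cleared by policy


def is_request_allowed(destination_host: str, user_group: str) -> bool:
    """Allow the request only when both the AI destination and the user group are approved."""
    return destination_host in APPROVED_AI_DOMAINS and user_group in APPROVED_USER_GROUPS


if __name__ == "__main__":
    # An approved user reaching a sanctioned tool is allowed;
    # the same user reaching an unvetted tool is blocked.
    print(is_request_allowed("chat.openai.com", "engineering"))          # True
    print(is_request_allowed("unvetted-ai.example.com", "engineering"))  # False
```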
Zscaler also recommends establishing visibility through comprehensive logging of all AI prompts and responses, and enabling zero trust-powered Data Loss Prevention (DLP) measures for all AI activities to safeguard against data exfiltration.
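As a rough illustration of the logging and DLP ideas, the sketch below records every prompt and response and blocks prompts that match simple patterns for sensitive data. The regular expressions, log format, and function names are assumptions made for illustration; a production DLP engine would apply far richer rules.

```python
import logging
import re

# Illustrative patterns for sensitive data; real DLP rules would be much broader.
SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{16}\b"),            # possible payment card number
    re.compile(r"\b[STFG]\d{7}[A-Z]\b"),  # possible Singapore NRIC/FIN
]

logging.basicConfig(filename="genai_activity.log", level=logging.INFO)


def contains_sensitive_data(text: str) -> bool:
    """Return True if any illustrative DLP pattern matches the text."""
    return any(pattern.search(text) for pattern in SENSITIVE_PATTERNS)


def log_and_screen_prompt(user: str, prompt: str) -> bool:
    """Log the prompt and report whether it may be forwarded to the GenAI tool."""
    if contains_sensitive_data(prompt):
        logging.warning("BLOCKED prompt from %s: possible sensitive data", user)
        return False
    logging.info("PROMPT from %s: %s", user, prompt)
    return True


def log_response(user: str, response: str) -> None:
    """Record the GenAI tool's response for later review."""
    logging.info("RESPONSE to %s: %s", user, response)
```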