Wed, 13 May 2026

IT and security leaders unprepared for GenAI threats, study finds

ExtraHop’s new research, The Generative AI Tipping Point, found that enterprises struggle to understand and address the security concerns that come with employees using generative AI.

Key findings

The findings reveal that 73% of IT and security leaders are unsure how to address security risks when their employees use generative AI tools or large language models (LLMs) sometimes or frequently at work.

IT and security leaders are more concerned about receiving inaccurate or nonsensical responses (40%) than about the exposure of customer and employee personally identifiable information (PII) (36%), exposure of trade secrets (33%), or financial loss (25%).

Around 32% of respondents say their organisations have banned generative AI tools. However, only 5% say employees never use these tools at work.

A majority (90%) of respondents want guidance from the government: 60% support mandatory regulations, while 30% favour government standards that businesses can choose to adopt.

Strong safeguards needed

“There is a tremendous opportunity for generative AI to be a revolutionary technology in the workplace,” said Raja Mukerji, co-founder and chief scientist, ExtraHop. “However, as with all emerging technologies we’ve seen become a staple of modern businesses, leaders need more guidance and education to understand how generative AI can be applied across their organization and the potential risks associated with it. By blending innovation with strong safeguards, generative AI will continue to be a force that will uplevel entire industries in the years to come.”
