According to Gartner, the mass availability of generative AI became a top concern for enterprise risk executives in the second quarter of 2023. It was cited by 66% of the 249 senior risk executives surveyed, just one point behind third-party viability and ahead of financial planning uncertainty.
Ran Xu, a research director in the Gartner Risk & Audit Practice, says this reflects both the rapid growth of public awareness and usage of generative AI tools and the breadth of potential use cases, and therefore potential risks, that these tools engender.
With GenAI's move into the mainstream now inevitable, perhaps risk managers can turn the same technology on the risks it creates.
Asked how GenAI can be weaponised to penetrate an organisation, Ramprakash Ramamoorthy, director of research at ManageEngine, starts by explaining the mechanics. Data leaked into large language models (LLMs) can itself be weaponised, he says, and attackers who traditionally write phishing emails by hand can now craft them far more professionally using generative AI.
"Content is so naturally generated that it is extremely difficult for privacy-aware folks to distinguish between an email generated and a legit email," he explained. He suggested one way to mitigate AI-generated threats is to deploy AI tools.
"For example, using continuous user and entity authentication ensures original user and not random sources. Continuously raising employee awareness can also help to prevent business data from getting leaked," he commented.