Forrester's Top Cybersecurity Threats in 2023 lists, among other things, the potential to weaponise generative AI tools such as ChatGPT to fine-tune ransomware and social engineering techniques.
"In addition to more advanced social engineering and phishing threats, attackers could use these tools for easier malicious code generation. Vendors who offer generative AI foundation models assure customers they train their models to reject malicious cybersecurity requests; however, they don't provide users with the tools to effectively audit all the security controls in place," said Gartner's Avivah Litan.
Jon France, chief information security officer at (ISC)2, says the rise of ChatGPT and other generative AI is a double-edged sword. "A great resource when used appropriately, they can help improve our defences against cyberattacks by suggesting ways to educate, develop policy, potentially review configurations/code, and generally draw on a large body of knowledge," he elaborated.
He noted that the same technology can help attackers by suggesting approaches, and weaknesses to exploit, when infiltrating systems.
"AI is ultimately a means to an end and it is up to users how they want to use it. Cybercriminals are always on the lookout for how they can evolve their schemes, and this includes finding loopholes in what’s trending."
Can ChatGPT be used effectively/efficiently against malware?
France is of the opinion that existing natural language processing (NLP) models like ChatGPT are adept at detecting subtle anomalies and improving signal-to-noise ratios on indicators of compromise and indicators of attack, raising these for further investigation.
He noted that security professionals are still required to review findings and take action where necessary. "In addition, we are seeing increasing expectations for fully autonomous operations alongside the rise in adoption of AI technology to automate mundane and time-consuming data-related tasks," he continued.
He cautioned that this poses risks for cybersecurity teams that require visibility over technology systems, data usage and traffic levels to effectively defend against cyberattacks. "One simply cannot protect what he does not know," he reminds us.
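The triage workflow France describes, where an NLP model raises likely indicators of compromise for a human analyst to investigate, can be sketched as below. This is an illustrative outline only: the `call_llm` function is a stub standing in for whatever vendor API an organisation would use (and whose data-handling practices would need auditing first), and the alert fields are hypothetical.

```python
# Sketch: bundling indicator-of-compromise (IoC) alerts into a single prompt
# for a first-pass LLM triage, with a human analyst reviewing every rating.
# The LLM call itself is stubbed out; nothing here is a production design.
from dataclasses import dataclass

@dataclass
class Alert:
    source: str     # e.g. "EDR", "IDS" (hypothetical feed names)
    indicator: str  # a hash, domain, or IP
    context: str    # raw log excerpt

def build_triage_prompt(alerts: list[Alert]) -> str:
    """Assemble alerts into one prompt asking the model to rank likely true positives."""
    lines = [
        "You are assisting a SOC analyst. For each alert below, rate the",
        "likelihood it is a true positive (high/medium/low) and explain why",
        "in one sentence. A human analyst will review every rating.",
        "",
    ]
    for i, a in enumerate(alerts, 1):
        lines.append(f"{i}. [{a.source}] indicator={a.indicator} context={a.context}")
    return "\n".join(lines)

def call_llm(prompt: str) -> str:
    # Stub: in practice this would call a generative AI API; per the privacy
    # concerns raised in the article, that call must be audited before use.
    return "1. high - periodic beaconing to a rare IP\n2. low - signed, user-launched binary"

alerts = [
    Alert("IDS", "185.0.2.44", "outbound beacon every 60s to rare IP"),
    Alert("EDR", "calc.exe", "signed binary launched by user"),
]
prompt = build_triage_prompt(alerts)
ratings = call_llm(prompt)
print(ratings)
```

The key design point, matching France's caveat, is that the model only proposes a ranking; the decision to act stays with the security professional.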
Can ChatGPT be used as an offensive weapon by cybercriminals as well as a defensive weapon against criminals?
France acknowledges that ChatGPT is already being used by cybercriminals, with reports of bad actors leveraging the technology to appear more human and ultimately bypass detection in phishing and malware attacks.
"This is especially true when developing scripts for social engineering or text for phishing scams, by using the context of the target as well as being grammatically correct – where poor grammar and spelling are typical warning signs in scams," said France.
He points out that by using the technology, cybercriminals are also able to build a database of non-existent people at a faster and cheaper rate to spread misinformation.
"However, as a defensive tool, there are still doubts towards its capability. A recent survey of (ISC)2 cybersecurity professionals found that 90% were concerned about the increasing integration of AI and ML into both business and consumer technology," he added.
France points out that it will be a race between using AI for good and for malicious intent. "Technology and security professionals have to thoroughly assess the use of AI and its generative/interactive products and its potential impact so they are well equipped to handle potential attacks in the future," he continued.
What are the conditions that would justify/warrant embedding ChatGPT as part of an organisation’s cybersecurity strategy?
There is currently a cybersecurity workforce gap of 3.4 million globally (in APAC the gap stands at 2.2 million), according to the (ISC)2 Cybersecurity Workforce Study. France opines that this shortage of skilled cybersecurity talent is one of the main drivers for the integration of ChatGPT into an organisation's cybersecurity strategy. The same (ISC)2 report found that over 57% of organisations worldwide already automate their cybersecurity systems.
France believes that ChatGPT can be helpful in aiding cybersecurity teams with more routine tasks and allowing employees to focus on more complex and important functions of cybersecurity.
"However, there needs to be increased vigilance on all sides. Cybersecurity is not solely meant for security professionals; everyone needs to be aware of what they should and shouldn’t do to avoid falling victim to such attacks," he added.
What are the top challenges a CISO/CIO needs to overcome as they integrate ChatGPT into the organisation’s cybersecurity strategy/implementation?
A Salesforce study shows that the majority of organisations will be prioritising generative AI technologies such as ChatGPT over the next 18 months.
France noted that one of the main challenges in integrating ChatGPT into existing cybersecurity strategies is the technical and ethical concerns of such products.
"Data collection and privacy are still a big risk when it comes to utilising ChatGPT, as well as the issues that we've seen arise around the safety and accuracy of generative AI outputs," said France.
He cautioned that a CISO and their cybersecurity team must fully understand the algorithms behind ChatGPT before implementing it as a main pillar within the organisation’s cybersecurity strategy.
Another challenge would be the use and accuracy of the content generated by ChatGPT. "There have been well-documented cases of the responses generated by ChatGPT being disproven, and there is also discussion on the rights to use such content from these AI models," he added.
He went on to say that while the discussion is nascent, ethical and copyright issues exist around the work produced by these technologies.
Name one piece of advice to CISOs/CIOs considering the potential of ChatGPT as part of their cybersecurity strategy.
France contends that as cybercriminals begin utilising ChatGPT in their cyberattacks, understanding how it functions will aid organisations in implementing it as part of their defences.
"CISOs should also develop an organisational framework for their teams to guide and train employees through utilising ChatGPT in an ethical and safe manner, especially as the technology continues to develop rapidly over time," he concluded.
Recognising that Generative AI development will not stop, Gartner's Litan suggests organisations need to act now to formulate an enterprise-wide strategy for AI trust, risk and security management (AI TRiSM).
"There is a pressing need for a new class of AI TRiSM tools to manage data and process flows between users and companies who host generative AI foundation models," she suggested. "AI developers must urgently work with policymakers, including new regulatory authorities that may emerge, to establish policies and practices for generative AI oversight and risk management."