Black Duck’s ‘2024 Global State of DevSecOps’ report revealed that most organisations use AI tools for development. However, AI-generated code can introduce security risks, and many organisations lack the measures needed to secure it effectively.
AI-generated code
The report found that most organisations (90%) use AI tools for software development, and 85% have implemented some form of security protocol for AI-generated code.
However, the report also uncovered a concerning trend: many organisations admitted they lack effective measures to secure AI-generated code, and only 24% expressed strong confidence in their policies and processes for testing it.
“The most concerning insight comes from those who permit all developers to use AI tools while claiming slight confidence (18%) or no confidence (4%) in their ability to secure AI-generated code,” Steven Zimmerman, senior solutions manager at Black Duck, said.
According to the Black Duck executive, this group, which comprised 6% of all respondents, “seemingly prioritises development speed over application security.”
For added security, some 43% of respondents permit only certain developers or teams to use AI tools to write code.
Sculpting DevSecOps
“As organisations proceed to sculpt their DevSecOps programs with AI-assisted development in mind, it’s important to emphasise both testing coverage and actionability of results. After all, faster development schedules and more frequent code pushes mean the task of fixing detected issues must also be abbreviated without sacrificing efficacy,” Zimmerman noted.