Checkmarx, a provider of agentic AI-powered application security, is warning about the risks of AI-assisted coding.
Checkmarx's latest report, titled “Future of Application Security in the Era of AI,” highlights the widespread use of AI coding assistants in organisations. It reveals that up to 60% of code is now generated with AI, even though 20% of organisations still prohibit its use.
Risky practices
The report, which surveyed more than 1,500 CISOs, AppSec managers, and developers across North America, Europe, and Asia-Pacific, found that 50% already use AI coding assistants and 34% admit that more than 60% of their code is AI-generated. However, only 18% have policies governing this use.
Moreover, 81% of organisations knowingly ship vulnerable code, and almost all (98%) experienced a breach stemming from vulnerable code in the past year, up from 91% in 2024.
Some 32% of respondents expect Application Programming Interface (API) breaches via shadow APIs or business logic attacks within the next 12 to 18 months. Yet adoption of foundational security tooling lags: mature application security tools such as dynamic application security testing (DAST) and infrastructure-as-code scanning see limited uptake, and only half of the organisations surveyed actively use core DevSecOps tools.

“The velocity of AI-assisted development means security can no longer be a bolt-on practice. It has to be embedded from code to cloud,” said Eran Kinsbruner, vice president of portfolio marketing at Checkmarx.
Application security readiness
The report strongly urges organisations to shift from mere awareness to decisive action in strengthening their application security readiness. This includes embedding “code-to-cloud” security, governing AI use in development, operationalising security tools, preparing for agentic AI in AppSec, and fostering a culture of developer empowerment.
“AI-generated code will continue to proliferate; secure software will be the competitive differentiator in the coming years,” Kinsbruner added.