DigiCert introduced a new AI Trust architecture to help organisations secure AI systems and their outputs, with capabilities to secure autonomous agents and AI models and to provide verifiable content authenticity.

“AI has created a new trust challenge,” said Amit Sinha, CEO of DigiCert. “Organisations are relying on agents, models, and content they can’t always verify. At DigiCert, our purpose is to give people confidence in the security, privacy, and authenticity of their digital interactions. With our AI Trust solution, we help organisations confirm what’s real, secure, and approved so AI can be used with confidence.”

Unified trust layer
DigiCert unveiled new capabilities designed to replace fragmented, manual processes with an automated trust architecture that ensures verifiable identity, data integrity, and continuous validation across AI systems. New DigiCert ONE enhancements include:
- AI Agent Trust: Provides discovery, identity, governance, and lifecycle management for AI agents by issuing cryptographic identities and enforcing policy-based controls.
- AI Model Trust: Delivers cryptographic protection and verification for AI models, including secure packaging, signing, and runtime validation.
- Content Trust: Enables organisations to cryptographically sign and verify digital content, providing tamper-evident provenance and transparency using the C2PA standard.
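
To make the Content Trust idea concrete, here is a minimal sketch of tamper-evident provenance: a manifest binds a content hash to metadata, and a signature over that manifest lets anyone detect alteration. This is a generic illustration only, not DigiCert's API or the actual C2PA manifest format, and it uses a symmetric HMAC for brevity where C2PA uses asymmetric certificate-based signatures.

```python
import hashlib
import hmac
import json

# Illustrative only: C2PA uses asymmetric, certificate-based signatures;
# an HMAC stands in here so the example is self-contained.
SECRET_KEY = b"demo-signing-key"

def sign_content(content: bytes, author: str) -> dict:
    """Build a provenance manifest whose signature covers the content hash."""
    manifest = {
        "author": author,
        "content_sha256": hashlib.sha256(content).hexdigest(),
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_content(content: bytes, manifest: dict) -> bool:
    """Recompute hash and signature; any tampering breaks verification."""
    claimed = {k: v for k, v in manifest.items() if k != "signature"}
    if claimed["content_sha256"] != hashlib.sha256(content).hexdigest():
        return False  # content was altered after signing
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, manifest["signature"])

image = b"...rendered pixels..."
manifest = sign_content(image, author="newsroom@example.com")
assert verify_content(image, manifest)             # untouched content verifies
assert not verify_content(image + b"x", manifest)  # tampering is detected
```

The key design point is that the signature covers the hash of the content plus its metadata, so changing either the pixels or the claimed authorship invalidates the manifest.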

“AI is forcing organisations to rethink trust from the ground up,” said Jennifer Glenn, research director for IDC Security and Trust Group. “Bringing cryptographic assurance to AI systems gives enterprises the ability to independently verify identity, integrity, and provenance of content, enabling these organisations to build trustworthy AI at scale.”