"If you don't get accountability right, you won't get much else right either," said Roger Connors, CEO of Culture Partners and the chief researcher of the Workplace Accountability Study.

Accountability, defined by Merriam-Webster as "an obligation or willingness to accept responsibility or to account for one's actions," is a trait that often defines workplace success.
Yet the Culture Partners study, involving over 40,000 participants globally, found that 93% of respondents struggle to align their work with taking accountability for desired results.
"There's a crisis of accountability in organisations today, a crisis of epidemic proportions," said Connors. "When properly approached, accountability can really be the low-hanging fruit for optimising organisational performance and accelerating organisational change efforts."
If accountability is difficult for human employees, the challenge is even greater with AI agents, which are often described as 'digital employees'. Shanker V Selvadurai, IBM's VP of Technical Sales and Client Engineering, sees strong parallels between human and machine accountability.
From blind trust to evidence-based trust
As AI increasingly shapes decisions that affect people, blind trust is no longer enough. Businesses, regulators, and the public are demanding explainability, fairness, and transparency.
"When you give AI agents the ability to make decisions without a human in the loop, you're also handing them the power to affect people, processes, and reputations in real time. Accountability is what ensures those decisions are traceable, explainable, and correctable. It's not just about fixing problems after the fact. It's about building the trust that allows businesses, regulators, and the public to adopt AI at scale," he said.
For every decision made, Selvadurai believes that organisations must be able to explain what the agent saw, why it acted as it did, and how to challenge the outcome if necessary.
"That's how you move from 'just trust us' to 'here's the evidence," Selvadurai said. This shift from faith to evidence mirrors how regulators worldwide are beginning to think about AI accountability.

A lack of accountability in AI can lead to bias, discrimination, and an erosion of trust. When something does go wrong, Selvadurai argued, accountability is shared, but not equally.
"The business that deploys the AI carries the ultimate responsibility, because they decide where and how it's used, what data it has access to, and what guardrails are in place. Developers and platform providers are responsible for building systems that are robust, secure, and transparent. Risk and compliance teams define the boundaries, and operations teams handle response when something goes wrong," he explained.
For example, Selvadurai said that EY and IBM designed EY.ai for Tax, a solution built with IBM watsonx and powered by IBM's open Granite models, for high-compliance environments with built-in auditability. However, once the solution is deployed, the client owns the risk posture: how it is configured, the processes it touches, and how its decisions are applied.
"This clear division of responsibilities means that if something goes wrong, everyone knows their role in fixing it and preventing it from happening again," explained the IBM executive.
Clearly dividing AI responsibilities lays the bedrock, but accountability does not end at deployment: organisations must continuously ensure that their AI agents act responsibly.
Making AI agents act responsibly
There are steps companies can take to make sure their AI agents act responsibly and are easy to audit. For Selvadurai, the first step is visibility, saying that "you can't govern what you can't see."
He further explained that this entails knowing exactly which agents are in play, what they're capable of, and what permissions they have, with every decision leaving a complete trail.
Organisations must know the input AI agents received, the context they retrieved, the tools they used, the output they generated, and any approvals involved.
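To make that concrete, here is a minimal sketch of what such a decision trail could look like in code. The record structure, field names, and logging function below are illustrative assumptions for this article, not a reference to any specific IBM, watsonx, or governance product API.

```python
# Illustrative sketch of decision-provenance logging for an AI agent.
# All names here (AgentDecisionRecord, log_decision) are hypothetical.
import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
from typing import List


@dataclass
class AgentDecisionRecord:
    agent_id: str                      # which agent acted
    input_received: str                # what the agent saw
    retrieved_context: List[str]       # documents or data it pulled in
    tools_used: List[str]              # tools or APIs it invoked
    output: str                        # what it produced or decided
    approvals: List[str] = field(default_factory=list)  # human sign-offs, if any
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )


def log_decision(record: AgentDecisionRecord, path: str = "agent_audit.jsonl") -> None:
    """Append one decision to an append-only audit trail (JSON Lines)."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")


# Example: record a refund decision made by a hypothetical support agent.
log_decision(AgentDecisionRecord(
    agent_id="support-refund-agent-v2",
    input_received="Customer #4821 requested a refund for order 99017",
    retrieved_context=["refund_policy_v3.pdf", "order_99017_history"],
    tools_used=["crm.lookup_order", "payments.issue_refund"],
    output="Refund of $42.50 approved under 30-day policy",
    approvals=["auto-approved: amount below $100 threshold"],
))
```

With a trail like this, an organisation can answer the three questions Selvadurai raises for any decision: what the agent saw, why it acted as it did, and how to challenge the outcome.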
AI agent accountability in APAC
"The momentum is there, but the next phase will be about making these practices standard, not exceptional." - Shanker V. Selvadurai
Compared with Europe or the US, Selvadurai noted, the approach to AI agent accountability across the APAC region is far more varied.
In Europe, for example, the EU AI Act, regarded as the world's first comprehensive AI law, sets binding obligations on both developers and deployers of AI. Selvadurai believes it "sets the benchmark: it's clear, binding, and comprehensive, with specific obligations for both those who build AI and those who use it."
It also outlines requirements for providers of high-risk AI systems, including establishing risk management systems and implementing a quality management system to ensure compliance.
"The US takes a more standards-based approach, with the NIST AI Risk Management Framework (RMF)shaping how organisations manage AI responsibly," Selvadurai said.
Published by the National Institute of Standards and Technology (NIST), an agency of the US Department of Commerce, the framework aims to "define and differentiate the various human roles and responsibilities when using, interacting with, or managing AI systems."
Both the EU and US frameworks underscore that accountability is not optional: it must be operationalised through registries, documentation, and continuous monitoring, a lesson APAC regulators can draw on. Unlike the EU's binding legislation or the US's standards-based approach, however, APAC's AI governance landscape remains fragmented.
"Across APAC, the intent is strong, but execution varies. Singapore is ahead of the curve. They're not just talking about AI assurance, they're doing it," he said.
One such initiative is the AI Verify Foundation, a global open-source community that brings together AI owners, solution providers, users, and policymakers to build trustworthy AI by testing real-world AI systems and sharing best practices.
Under the programme is the AI Assurance Pilot, which helps codify emerging norms and best practices for the technical testing of generative AI applications.
"Singapore is advancing quickly with AI Verify and assurance pilots; Australia is moving forward through active policy consultations and sector-specific guidelines; India is shaping its approach through data protection laws and draft AI principles; and Indonesia is exploring AI governance as part of its digital transformation agenda," he said.
"Elsewhere, I see companies eager to deploy AI but still lacking the operational muscle like centralised registries, ongoing evaluations, and incident response processes. We've worked with clients in the region who are closing that gap, pairing innovative deployments with robust governance from day one. The momentum is there, but the next phase will be about making these practices standard, not exceptional," Selvadurai added.
He advises multinational companies to design their accountability processes to meet EU-level requirements, and operationalise them with the flexibility to adapt locally.
"That way, you're future-proofed no matter where regulations land," he said.
Building AI agent accountability
"The goal is to make accountability part of how you run AI every day, not an afterthought." - Shanker V. Selvadurai
When asked for his advice on building AI accountability, Selvadurai offered a concise but meaningful response: start small, but move quickly.
"Within 90 days, you can set up a central registry of all AI agents and assign a clear owner for each. Capture decision provenance from the outset, so every action is traceable. Define the policies - for what the agent can do, when it must escalate, and what data it can access - and enforce them. Then put monitoring in place, not as a quarterly check, but as an ongoing process that flags drift or non-compliance immediately," he explained.
He recommends that organisations consider solutions that can automate evidence collection, map controls to major frameworks such as the EU AI Act and the NIST AI RMF, and keep everything in one place for audits.
Selvadurai also reminds organisations of the importance of rehearsing an AI incident response plan to know what to do if something goes wrong.
"The goal is to make accountability part of how you run AI every day, not an afterthought," he said.
Getting accountability right
Circling back to Connors' words: "If you don't get accountability right, you won't get much else right either."
The warning, originally aimed at human workers, applies just as strongly to digital employees. Getting accountability right with AI agents is key to earning trust and driving broader adoption at scale.