Accountability in AI agent decisions

by Melinda Baylon
August 25, 2025

"If you don't get accountability right, you won't get much else right either," said Roger Connors, CEO of Culture Partners and the chief researcher of the Workplace Accountability Study.


Accountability, defined by Merriam-Webster as "an obligation or willingness to accept responsibility or to account for one's actions", is a trait that often defines workplace success.

Yet the Culture Partners study, involving over 40,000 participants globally, found that 93% of them struggle to align their work with taking accountability for desired results.

"There's a crisis of accountability in organisations today, a crisis of epidemic proportions," said Connors. "When properly approached, accountability can really be the low-hanging fruit for optimising organisational performance and accelerating organisational change efforts."

If accountability is difficult for human employees, the challenge is even greater with AI agents, often described as 'digital employees'. Shanker V Selvadurai, IBM's VP of Technical Sales and Client Engineering, sees strong parallels between human and machine accountability.

From blind trust to evidence

As AI increasingly shapes decisions that affect people, blind trust is no longer enough. Businesses, regulators, and the public are demanding explainability, fairness, and transparency.

"When you give AI agents the ability to make decisions without a human in the loop, you're also handing them the power to affect people, processes, and reputations in real time. Accountability is what ensures those decisions are traceable, explainable, and correctable. It's not just about fixing problems after the fact. It's about building the trust that allows businesses, regulators, and the public to adopt AI at scale," he said.

For every decision made, Selvadurai believes that organisations must be able to explain what the agent saw, why it acted as it did, and how to challenge the outcome if necessary.

"That's how you move from 'just trust us' to 'here's the evidence," Selvadurai said. This shift from faith to evidence mirrors how regulators worldwide are beginning to think about AI accountability.


A lack of accountability in AI can lead to bias, discrimination, and erosion of trust. In the case of a mishap, Selvadurai argued that accountability is shared, but not equally.

"The business that deploys the AI carries the ultimate responsibility, because they decide where and how it's used, what data it has access to, and what guardrails are in place. Developers and platform providers are responsible for building systems that are robust, secure, and transparent. Risk and compliance teams define the boundaries, and operations teams handle response when something goes wrong," he explained.

For example, Selvadurai said that EY and IBM designed EY.ai for Tax, a solution built with IBM watsonx and powered by IBM's open Granite models, for high-compliance environments with built-in auditability. However, upon deployment, a client owns the risk posture for how it's configured, the processes it touches, and how decisions are applied.

"This clear division of responsibilities means that if something goes wrong, everyone knows their role in fixing it and preventing it from happening again," explained the IBM executive.

Clearly dividing AI responsibilities is the bedrock, but accountability does not end at deployment. Organisations must ensure their AI agents continue to act responsibly.

Making AI agents act responsibly

There are steps companies can take to make sure their AI agents act responsibly and are easy to audit. For Selvadurai, the first step is visibility: "you can't govern what you can't see."

He further explained that this entails knowing exactly which agents are in play, what they're capable of, and what permissions they have, with every decision leaving a complete trail.  

Organisations must know the input AI agents received, the context they retrieved, the tools they used, the output they generated, and any approvals involved.
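To make that trail concrete, here is a minimal sketch in Python of what a single decision-provenance record could look like, assuming an append-only audit log. The field names and the example invoice-triage agent are hypothetical illustrations, not a schema from IBM watsonx or any other product.

```python
# Illustrative sketch of a decision-provenance record for an AI agent.
# All field names and the example agent are hypothetical.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
from typing import Optional
import json
import uuid


@dataclass
class DecisionRecord:
    agent_id: str                      # which agent acted
    inputs: dict                       # the input the agent received
    retrieved_context: list[str]       # the context it retrieved
    tools_used: list[str]              # the tools it invoked
    output: str                        # the output it generated
    approved_by: Optional[str] = None  # any human approval involved
    decision_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def to_audit_json(self) -> str:
        """Serialise the record for an append-only audit log."""
        return json.dumps(asdict(self), sort_keys=True)


# Example: record one decision so it can later be explained or challenged.
record = DecisionRecord(
    agent_id="invoice-triage-agent",
    inputs={"invoice_id": "INV-1042", "amount": 1250.00},
    retrieved_context=["vendor history", "approval policy v3"],
    tools_used=["erp_lookup"],
    output="routed to manual review",
)
print(record.to_audit_json())
```

Capturing all five elements in one record is what lets an organisation answer, after the fact, what the agent saw and why it acted as it did.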

AI agent accountability in APAC


Compared with Europe or the US, Selvadurai noted, the approach to AI agent accountability in the APAC region is more diverse.

In Europe, for example, the EU AI Act, regarded as the world's first comprehensive AI law, sets binding obligations on both developers and deployers of AI. It "sets the benchmark: it's clear, binding, and comprehensive, with specific obligations for both those who build AI and those who use it," Selvadurai said.

It also outlines requirements for providers of high-risk AI systems, including establishing risk management systems and implementing a quality management system to ensure compliance.

"The US takes a more standards-based approach, with the NIST AI Risk Management Framework (RMF)shaping how organisations manage AI responsibly," Selvadurai said.

The AI RMF, published by the National Institute of Standards and Technology, an agency of the US Department of Commerce, aims to "define and differentiate the various human roles and responsibilities when using, interacting with, or managing AI systems."

Both the EU and US frameworks underscore that accountability is not optional but must be operationalised through registries, documentation, and continuous monitoring, a lesson APAC regulators can draw on. Unlike the EU's binding legislation or the US's standards-based approach, APAC's AI governance landscape is fragmented.

"Across APAC, the intent is strong, but execution varies. Singapore is ahead of the curve. They're not just talking about AI assurance, they're doing it," he said.

Initiatives in the region include the AI Verify Foundation, a global open-source community that brings together AI owners, solution providers, users, and policymakers to build trustworthy AI by testing real-world AI systems and sharing best practices.

Under the programme is the AI Assurance Pilot, which helps codify emerging norms and best practices for the technical testing of generative AI applications.

"Singapore is advancing quickly with AI Verify and assurance pilots; Australia is moving forward through active policy consultations and sector-specific guidelines; India is shaping its approach through data protection laws and draft AI principles; and Indonesia is exploring AI governance as part of its digital transformation agenda," he said.

"Elsewhere, I see companies eager to deploy AI but still lacking the operational muscle like centralised registries, ongoing evaluations, and incident response processes. We've worked with clients in the region who are closing that gap, pairing innovative deployments with robust governance from day one. The momentum is there, but the next phase will be about making these practices standard, not exceptional," Selvadurai added.  

He advises multinational companies to design their accountability processes to meet EU-level requirements and operationalise them with the flexibility to adapt locally.

"That way, you're future-proofed no matter where regulations land," he said.

Building AI agent accountability


When asked about his advice on building AI accountability, Selvadurai offers a concise yet meaningful response: start small but move quickly.

"Within 90 days, you can set up a central registry of all AI agents and assign a clear owner for each. Capture decision provenance from the outset, so every action is traceable. Define the policies - for what the agent can do, when it must escalate, and what data it can access - and enforce them. Then put monitoring in place, not as a quarterly check, but as an ongoing process that flags drift or non-compliance immediately," he explained.

He recommends that organisations consider solutions that can automate evidence collection, map controls to major frameworks like the EU AI Act and NIST AI RMF, and keep everything in one place for audits.
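One way to keep that mapping in one place is a simple control register. The sketch below is hypothetical: the control names and evidence items are invented, and the framework labels gesture at broad areas (the EU AI Act's risk-management and record-keeping obligations, the NIST AI RMF's Govern/Map/Measure/Manage functions) rather than citing specific clauses.

```python
# Hypothetical control register: each internal control lists the framework
# areas it supports and the evidence gathered for audits.
CONTROL_REGISTER = [
    {
        "control": "central-agent-registry",
        "frameworks": ["EU AI Act: risk management", "NIST AI RMF: Govern"],
        "evidence": ["registry export", "owner sign-off records"],
    },
    {
        "control": "decision-provenance-logging",
        "frameworks": ["EU AI Act: record-keeping", "NIST AI RMF: Map"],
        "evidence": ["append-only audit logs"],
    },
    {
        "control": "continuous-drift-monitoring",
        "frameworks": ["NIST AI RMF: Measure", "NIST AI RMF: Manage"],
        "evidence": ["monitoring dashboards", "alert history"],
    },
]


def evidence_for(framework_keyword: str) -> list[str]:
    """Collect every evidence item mapped to a given framework area."""
    return [
        item
        for control in CONTROL_REGISTER
        if any(framework_keyword in f for f in control["frameworks"])
        for item in control["evidence"]
    ]


print(evidence_for("NIST"))  # everything an auditor would ask for under the RMF
```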

Selvadurai also reminds organisations of the importance of rehearsing an AI incident response plan to know what to do if something goes wrong.

"The goal is to make accountability part of how you run AI every day, not an afterthought," he said.

Getting accountability right

Circling back to Connors' words: "If you don't get accountability right, you won't get much else right either."

The warning, initially aimed at human workers, applies just as much to digital employees. Getting accountability right with AI agents is key to earning trust and enabling adoption at scale.

Tags: accountability, AI agents, Artificial Intelligence, digital transformation, IBM
Melinda Baylon

Melinda Baylon joins Cxociety as editor for FutureCIO and FutureIoT. As editor, she will be the main editorial contact for communications professionals looking to engage with the aforementioned media titles.

Melinda has a decade-long career in the media industry and served as a TV reporter for ABS-CBN and IBC 13. She also worked as a researcher for GMA-7 and a news reader for Far East Broadcasting Company Philippines.

Prior to working for Cxociety, she worked for a local government unit as a public information officer. She now ventures into the world of finance and technology writing while pursuing her passions in poetry, public speaking and content creation. 

Based in the Philippines, she can be reached at [email protected]
