Agentic automation offers CIOs a transformative opportunity: AI agents that autonomously execute and adapt workflows under human oversight.
This approach goes beyond traditional AI, orchestrating multi-agent collaboration integrated with existing enterprise systems. The result is strategic efficiency in key areas such as customer service, finance, and procurement.
While the potential of agentic AI is immense, it's not without its challenges. Managing the technical complexity, establishing robust AI governance, and fostering a culture that embraces this technology responsibly are all crucial steps in this journey.
CIOs must balance innovation with risk mitigation, ensuring scalable, compliant deployment to truly harness AI's potential for competitive advantage in 2026's dynamic marketplace.

Kitman Cheung, CTO, IBM ASEAN, discussed agentic AI's development and how it impacts not just the CIO, but the enterprise as a whole.
Gaining the most with agentic AI
To turn agentic AI from concept to measurable value, CIOs must identify which workflows benefit most from automation. Cheung emphasised that the most significant returns come from targeting repetitive, rules-driven processes first.
"We have actually automated several workflows across human resources, procurement and finance, and in each case we're able to achieve demonstrable ROI," he said.
The IBM executive underscored that agentic AI is not about taking over human interaction entirely; rather, it helps with repetitive and procedural tasks.
"Look for tasks that require interaction with multiple tools and maybe mental work to integrate things," he said.
Orchestrating multiple AI agents for collaboration
Once organisations identify where to apply agentic AI, the next challenge is ensuring these agents can interact effectively. Multi-agent collaboration determines whether automation remains isolated or scales across the enterprise.
Cheung said it involves familiarity with the communication protocols for agent-to-agent and agent-to-human interaction.
"We are advocating for a protocol called the Agent Communication Protocol (ACP). It's open source, receives quite a bit of contribution from IBM, and invites other partners to work in this space," he said.
He said it encourages openness and standardisation by allowing agents built on different agentic frameworks to communicate with each other.
"There are a lot of capabilities there, but another important part is that you need the ability to do agent discovery. So what that means is dynamically, one AI agent can find out what other agents are within its community, if you will," he explained.
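The dynamic discovery Cheung describes can be pictured as agents publishing their capabilities to a shared registry that peers query at runtime. The Python sketch below is purely illustrative, not the ACP API; the `AgentCard` and `AgentRegistry` names and the capability strings are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class AgentCard:
    # Hypothetical metadata an agent publishes so peers can discover it.
    name: str
    capabilities: set[str] = field(default_factory=set)

class AgentRegistry:
    """Minimal in-memory registry: agents register themselves, and any
    agent can dynamically find peers within its community that offer a
    given capability."""

    def __init__(self) -> None:
        self._agents: dict[str, AgentCard] = {}

    def register(self, card: AgentCard) -> None:
        self._agents[card.name] = card

    def discover(self, capability: str) -> list[str]:
        return [name for name, card in self._agents.items()
                if capability in card.capabilities]

registry = AgentRegistry()
registry.register(AgentCard("invoice-agent", {"extract-invoice", "validate-po"}))
registry.register(AgentCard("hr-agent", {"screen-cv"}))

# A procurement agent finds which peers can validate purchase orders.
print(registry.discover("validate-po"))  # → ['invoice-agent']
```

In a real deployment the registry would be a shared service rather than an in-process object, but the discovery pattern is the same: agents are looked up by capability, not hard-wired to one another.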
He advises CIOs to start smaller when building AI agents: "We've been talking a lot about domain-specific agents.
"Don't think of a massive agent to do everything. That's why you have the multi-agent piece: you select a smaller piece at the domain-specific level," he said.
What an organisation can do, according to Cheung, is to have an agent that mimics someone in a particular function, learns the task, and shows greater reasoning accuracy.
"The additional benefit of having a multi-agent protocol is that you become more agile. You can upgrade or deploy an agent without impacting the rest of the functionality. So testing is easier, deployments are faster, and overall it's something that is emerging quickly, but it should be selected at the start of the project," he said.
Scaling AI agents responsibly
However, deploying agentic AI at scale introduces new governance and risk considerations. Cheung stressed that organisations must establish responsible oversight early, not after systems are already operational.
Beyond selecting a governance framework, Cheung underscores the need to embed compliance into the AI development workflow.
"Selecting a governance platform that helps you operationalise those governance requirements is going to help tremendously, because on the same platform is where you can manage use cases from a risk perspective, operational risk, AI risk, and so on, data risk as well, keeping in mind that in terms of risk management, AI doesn't sit alone," he said.
He said that organisations should also consider data privacy, operational issues, and financial risks in order to scale AI responsibly.
He posits that the governance platform should be in place early, through the deployment and monitoring stages. Because compliance issues often surface too late, during an audit, they should be addressed early, not when the damage is already done.
"We've got the regulation and everything at the decision point where we selected the use case. We've got checkpoints and approval points during development, and ideally, they're aligned with our regulations and so forth. But also during the rest of the AI life cycles, when it actually goes live, having the monitoring, having the guardrail sitting to actually monitor the agent's behaviour based on the governance rules and the regulation that's there," he further explained.
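The lifecycle checkpoints Cheung describes — approval at use-case selection, during development, and continuous monitoring in production — can be sketched as a simple stage gate. This is an illustrative example only; the stage names and governance requirements below are hypothetical, not IBM's governance platform.

```python
# Illustrative sketch of governance checkpoints across the AI lifecycle.
# Stage names and requirements are hypothetical examples.
LIFECYCLE_CHECKPOINTS = {
    "use_case_selection": ["risk_assessment_approved", "regulation_mapped"],
    "development": ["test_signed_off", "bias_review_passed"],
    "production": ["guardrails_enabled", "behaviour_monitoring_active"],
}

def gate(stage: str, completed: set[str]) -> bool:
    """An agent may only advance past a stage once every governance
    requirement registered for that stage has been satisfied."""
    missing = [req for req in LIFECYCLE_CHECKPOINTS[stage]
               if req not in completed]
    if missing:
        print(f"Blocked at {stage}: missing {missing}")
        return False
    return True

done = {"risk_assessment_approved", "regulation_mapped"}
print(gate("use_case_selection", done))  # → True: approved to proceed
print(gate("development", done))         # → False: sign-offs missing
```

The point of the pattern is the one Cheung makes: the checks run at decision points throughout the lifecycle, rather than being discovered at audit time.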
Productivity and efficiency metrics for AI agent deployment
Once governance foundations are in place, CIOs must define how success will be measured. Productivity gains vary by workflow and go beyond simple time savings; they also mean performing tasks more accurately.
"Less error translates to different types of gain," he said. Cheung added that fewer errors lead to greater consistency, allowing agents to execute tasks without variation due to fatigue and other human factors.
"There's also the better side of productivity, or perhaps we change the word to profitability," he posits. "If the agent is there to assist a higher-level knowledge worker, as in someone who's making critical decisions in terms of company investment or company outcome, generating more revenue."
In that regard, gains are measured by factors such as client satisfaction, employee performance, and financial gains, depending on whom the AI agents are assisting.
Cheung explained: "Instead of a bottom-line conversation, you're having a top-line conversation. The ROI will now be measured on revenue, or on new incoming investment, if that is the case."
"Productivity and profitability are hand-in-hand when you're using agentic AI to help the employees complete what they do," he added.
Lessons on agentic AI adoption
Asked what lessons on agentic AI adoption he wishes to share, Cheung said: "When initially you may not get the best outcome, it's not the time to give up. It's time to listen to the user, evolve, and move forward with it."
"Second, I would say start smaller," he added. "Take over pieces of the function; don't try to be everything all at once."
Starting small allows employees to learn and build their AI knowledge and literacy, driving better adoption.
