As organisations expand their artificial intelligence (AI) deployment, including agentic AI, they will have to be mindful of how they manage their resources to keep expenditure in check.
Hidden costs often brew in areas that are less visible during the initial planning stage and can erode ROI (return on investment) if left unmanaged, said Lyon Poh, partner and head of corporate transformation at KPMG in Singapore.
He pointed to vendor concentration as one potential risk.
“If organisations rely too heavily on a single model or vendor, sudden price volatility or licensing changes can create unexpected financial strain, without flexibility to switch,” Poh explained in an email interview with FutureCIO.
He advised companies to diversify their partnerships and maintain flexibility to mitigate such cost risks.

Rushing into AI adoption without proper data and infrastructure foundations also can lead to costly remediation later, he said, pointing to technical debt as a major concern.
In addition, gaps in governance can lead to reputational, operational, and regulatory costs if incidents occur, he noted. Without strong safeguards, businesses risk fines and compliance failures as well as loss of stakeholder trust.
Shut down agents with low returns
Proper governance will enable organisations to intervene if AI costs escalate beyond their set parameters.
AI agents, in particular, can create sprawl and significantly push up costs if companies lack visibility and control, said David Irecki, Boomi’s Asia-Pacific Japan CTO.
Organisations should implement tools that will allow them to identify redundant agents and measure how much each agent costs to operate, Irecki said in a video call with FutureCIO.
AI agents that are not functioning efficiently can consume more tokens than needed. This will lead to higher costs, since token consumption accounts for a large portion of a company’s monthly AI spending.
Pointing to tools such as Boomi Agentstudio, Irecki said these provide key information such as cost visibility per AI agent and tasks each agent performs. Agentstudio also allows organisations to see how much computing power and storage each agent consumes, further enabling them to control expenses and optimise performance.
Understanding the lifecycle of each agent ensures companies pay only for agents that are effective and generate value, and can shut down those that fail to meet these requirements, he said.
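As a rough sketch of the kind of per-agent accounting Irecki describes, the Python example below tallies a hypothetical fleet's token spend and flags agents whose estimated value no longer justifies their cost; the prices, ROI threshold, and data fields are illustrative assumptions, not Boomi Agentstudio's actual data model.

```python
# Hypothetical per-agent cost/value accounting -- a sketch, not a vendor API.
from dataclasses import dataclass

# Assumed blended price per 1,000 tokens; real pricing varies by model and provider.
PRICE_PER_1K_TOKENS = 0.01
MIN_ROI = 1.5  # assumed threshold: value must exceed cost by 50% to keep an agent running


@dataclass
class AgentUsage:
    name: str
    monthly_tokens: int          # tokens consumed over the billing period
    estimated_value_usd: float   # business value attributed to the agent (an input, not computed here)

    @property
    def monthly_cost_usd(self) -> float:
        return self.monthly_tokens / 1_000 * PRICE_PER_1K_TOKENS

    @property
    def roi(self) -> float:
        return self.estimated_value_usd / self.monthly_cost_usd if self.monthly_cost_usd else 0.0


def shutdown_candidates(agents: list[AgentUsage]) -> list[AgentUsage]:
    """Return agents whose estimated value does not justify their token spend."""
    return [a for a in agents if a.roi < MIN_ROI]


if __name__ == "__main__":
    fleet = [
        AgentUsage("invoice-triage", monthly_tokens=40_000_000, estimated_value_usd=2_000),
        AgentUsage("weekly-report-drafts", monthly_tokens=90_000_000, estimated_value_usd=600),
    ]
    for agent in shutdown_candidates(fleet):
        print(f"Review or retire: {agent.name} "
              f"(cost ${agent.monthly_cost_usd:,.0f}/month, ROI {agent.roi:.1f}x)")
```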
This underscores the importance of governance, Irecki said.
Businesses that have proper governance from day one will be able to see spikes in cost and make the necessary adjustments to reduce their token count, he said.
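A minimal guardrail of this kind might compare each day's token usage against a rolling baseline and flag sudden jumps for review, as in the sketch below; the window size and spike factor are assumed values chosen for illustration.

```python
# A minimal cost-spike guardrail: alert when daily token usage jumps well above
# the recent average. Window size and spike factor are illustrative assumptions.
from collections import deque
from statistics import mean

WINDOW_DAYS = 7      # assumed baseline window
SPIKE_FACTOR = 2.0   # assumed trigger: flag days at 2x the rolling average


def detect_spikes(daily_tokens: list[int]) -> list[int]:
    """Return indices of days whose token usage exceeds SPIKE_FACTOR x the rolling average."""
    window: deque[int] = deque(maxlen=WINDOW_DAYS)
    spikes = []
    for day, tokens in enumerate(daily_tokens):
        if len(window) == WINDOW_DAYS and tokens > SPIKE_FACTOR * mean(window):
            spikes.append(day)
        window.append(tokens)
    return spikes


if __name__ == "__main__":
    usage = [5_000_000] * 7 + [5_200_000, 14_000_000, 5_100_000]
    print("Days to investigate:", detect_spikes(usage))  # -> [8]
```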
“Organisations that invest in resilient foundations, diversify risks, and embed governance can accelerate AI confidently while avoiding expensive course corrections,” Poh said. “Managing AI costs effectively requires a balanced approach, one that is anchored in a holistic ROI framework [that combines] disciplined financial oversight with a broader view of value creation.”
“This means clearly defining the value they expect AI to deliver, aligning investment asks with those outcomes, and tracking progress against performance metrics and reporting structures that make costs transparent,” he added.

Assess what cost model fits best
There also are ways to mitigate costs as companies carry out AI inferencing at scale, Leslie Joseph, principal analyst at Forrester, told FutureCIO.
Not every workload, for instance, needs to be powered by the most expensive or latest version of a major LLM (large language model), Joseph said.
An AI agent that requires a high level of reasoning can be routed to a more powerful LLM, while an agent that performs only basic reasoning does not need access to the most powerful models.
Some workloads also can run on the cloud, while others should be processed on-premises, depending on various factors such as latency and data sovereignty considerations.
Companies will find themselves struggling to manage their AI maintenance costs if they fail to direct the right kind of workloads to the right kind of AI model, Joseph said.
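One way to picture such routing is the sketch below, which sends tasks needing deep reasoning to a larger, pricier model and routine tasks to a smaller one; the model names, prices, and complexity tiers are placeholders, not recommendations from Forrester or any vendor.

```python
# Hypothetical workload-to-model routing: reserve the expensive model for tasks
# that actually need deep reasoning. Model names and prices are placeholders.
from dataclasses import dataclass
from enum import Enum


class Complexity(Enum):
    BASIC = 1      # extraction, classification, templated replies
    MODERATE = 2   # summarisation, multi-step lookups
    ADVANCED = 3   # planning, multi-hop reasoning, tool orchestration


@dataclass(frozen=True)
class ModelTier:
    name: str
    cost_per_1k_tokens: float  # assumed blended price, for comparison only


SMALL = ModelTier("small-llm", 0.0005)
MEDIUM = ModelTier("mid-llm", 0.003)
LARGE = ModelTier("frontier-llm", 0.03)


def route(task_complexity: Complexity) -> ModelTier:
    """Pick the cheapest tier able to handle the task's reasoning demands."""
    if task_complexity is Complexity.ADVANCED:
        return LARGE
    if task_complexity is Complexity.MODERATE:
        return MEDIUM
    return SMALL


if __name__ == "__main__":
    for label, complexity in [("classify support ticket", Complexity.BASIC),
                              ("plan quarter-end close workflow", Complexity.ADVANCED)]:
        tier = route(complexity)
        print(f"{label} -> {tier.name} (${tier.cost_per_1k_tokens}/1k tokens)")
```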
Shift towards resilience, returns
AI investment priorities are evolving as organisations transition from experimentation to enterprise-scale adoption, Poh noted.
Historically, the majority of AI investment had focused on technology enablement, spanning cloud infrastructure, tooling, and platforms to support generative AI (GenAI) pilots and proofs of concept, he said.
This was essential to build momentum and validate use cases.
Business focus now is shifting towards foundational capabilities that ensure resilience and sustainability, such as robust data governance, cybersecurity, and workforce readiness, he said.
Organisations want to be able to show a return from their AI investments and demonstrate successful use cases, so they can move forward with more initiatives, Irecki said.
The focus on ensuring returns will intensify as companies “agentify” their existing workflows and processes.
The bulk of AI cost lies not just in acquiring the technology, but in integrating systems so there is data liquidity alongside data management and data security, he said.

Noting that 95% of AI pilots fail to yield returns, Irecki said a lack of the necessary infrastructure often is the reason companies are unable to achieve AI success.
Do unto AI as cloud
Asked if there were lessons from the cloud era that organisations can now apply to AI, Poh highlighted that, as with cloud, there is no one-size-fits-all approach to AI.
In cloud, the different stages such as development, testing, and production each require different configurations based on data sensitivity and business needs. AI follows the same principle, he noted.
Successful deployment is not plug-and-play, but a highly customised process shaped by the organisation’s context, objectives, and data sensitivity, he said.
Joseph added that adopting a “lift and shift” approach was a common mistake organisations made during the early days of cloud.
These companies took existing workloads and moved them to the cloud, without rearchitecting to ensure their applications were ready for the transition, he said.
The same is happening with AI, with some nuances, he noted.
Companies are taking their existing workloads and processes and applying AI to them, without first looking at what the business workflow should look like in an AI environment, he said.
Some processes need to change because AI has capabilities that previously were not possible, he noted.
Companies, hence, have to determine how functions can be transformed with AI, Joseph said.
“Just as mature cloud strategies balance flexibility, security, and performance, AI strategies must be tailored to balance innovation with governance and risk management,” Poh said. “There is no silver bullet; organisations that succeed will design AI deployments with the same nuance and intentionality.”
He added that KPMG encourages organisations to consider multiple pathways in choosing the right implementation strategy.
Most organisations adopt a hybrid approach to address unique priorities, he said, adding that each option carries distinct tradeoffs in speed, cost, maintainability, and flexibility, and should be aligned with ambition, data maturity, and risk appetite.
“The key takeaway: apply the same strategic discipline that defines mature cloud adoption to AI. Tailor, iterate, and govern for sustainable success,” Poh said.
