With just a few days left in 2025, ‘tis the season again to look back at how the year went, including key takeaways for organisations as they face what may be another frenzied sprint towards the next iteration of artificial intelligence (AI).
Agentic AI is the much-anticipated next phase in this still fast-evolving space, with IDC forecasting that agentic use amongst Global 2000 companies will climb 10-fold by 2027. By 2029, the number of actively deployed AI agents will clock more than 1 billion worldwide, or 40 times more than the current number, IDC said.
In Asia-Pacific, 97% of business IT leaders already have implemented or plan to implement AI agents in the next couple of years, according to a study by Salesforce MuleSoft.
For the IT industry, the coming years will bring “massive reallocation of value”, said Rick Villars, IDC’s group vice president of worldwide research, in a December 2025 post on agentic AI as the next inflection point.
As the use of AI agents expands, the IT industry will need to become “the orchestrator of autonomy”, one that can no longer simply deliver technology, Villars wrote. “They will deliver the frameworks that govern intelligent digital resources by balancing efficiency, ethics, and economic sustainability,” he said.
What about organisations then? How should they approach AI in the new year and avoid the mistakes that others have made in the past 12 months? Here are my top picks as I sum up my observations from a year’s worth of news headlines and coverage:
Dare to be patient
Be patient with AI. Forget FOMO. Dare to take the time to figure things out before rushing into a company-wide rollout.
Too often these days, when it comes to AI, organisations seem to develop amnesia and lose all the reasonable project management practices they may have built up over the years.
The same due diligence and cost-benefit assessments should be carried out, so organisations can properly identify the value and returns from their AI initiatives.
Any undue rush could result in unnecessary spending, either in remediating errors or from uncovering hidden costs that could have been identified if organisations took the time to think things through.
It’s also easy for companies to fall into the vendor lock-in trap in their haste to adopt the latest and trendiest market offering. This can be risky since AI still is such a fast-evolving space…today’s cool tech can very quickly become yesterday’s recycled junk.
Mind the vendor lock-in trap
The need to avoid single vendor lock-in should further extend to infrastructure, so companies are not dependent on just one primary provider.
This is not only important for business continuity planning, but also increasingly critical as AI workloads expand in volume and diversity.
Major service outages this past year underscored this risk, among them AWS’ October disruption that took down services including Signal and Snapchat, as well as banking sites.
Commenting on the AWS outage, Forrester analysts pointed to the systemic “concentration risk” that arises when organisations become dependent on a single cloud provider, as well as on a single region covered by that vendor.
“Convenience often overshadows navigating the complex, nested dependencies in highly concentrated environments,” Forrester said.
Such tech platforms are vast, and their scale keeps costs low and makes their security tools more accessible, wrote Graeme Stewart, head of public sector at Check Point Software, in a November 2025 commentary note after a service outage involving Cloudflare. This had followed outages involving AWS and Azure, he added.
When platforms of such scale slip, the impact spreads far and fast, Stewart noted. “A single layer they all rely on stopped responding…the break reached into the systems that hold up essential services,” he penned. “From a cybersecurity view, this is the part that matters.”
“Many organisations still run everything through one route with no meaningful backup,” he added. “When that route fails, there is no fallback. That is the weakness we keep seeing play out. The internet was meant to be resilient through distribution, yet we have ended up concentrating huge amounts of global traffic into a handful of cloud providers.”
“Until there is real diversity and redundancy in the system, each outage will hit people harder than it should,” Stewart said.
Look for a multi-N strategy
And it’s not just about deploying secondary cloud providers. There needs to be diversity in how organisations operate their core infrastructure, which also supports their AI workloads. Here, hybrid and multi-cloud platforms are touted as the way forward.
AI will become the backbone of enterprise architecture, reshape the software development lifecycle, and redefine cloud consumption, said Pascal Brier, chief innovation officer at Capgemini, in his 2026 tech predictions.
“At the same time, enterprise systems are undergoing a fundamental shift towards intelligent operations, while tech sovereignty emerges as a strategic priority, driving organisations to build resilient interdependence,” Brier said.
He believes all flavours of cloud, including hybrid, multi-cloud, and sovereign architectures, are emerging as fundamental to how AI runs at scale, becoming the operational backbone for AI and agentic workloads.
“AI cannot scale and get the right performance on classical public cloud alone, pushing adoption of all other models of cloud,” he said.
He further noted that agentic systems rely on scalable and low-latency infrastructures, with edge and cloud working as a single intelligent fabric.
In addition, large-scale outages and geopolitical pressures will accelerate diversification and resilience strategies, he noted.
“While hybrid platforms will become mainstream, organisations will redesign architectures for performance, portability, sovereignty, and strategic autonomy to secure business continuity,” Brier said.
AI also is reshaping software development lifecycles, with developers moving from writing code to expressing intent, he added.
“Developers will specify outcomes while AI generates and maintains components, shortening delivery cycles, and improving quality. But governance and oversight remain critical to prevent hallucinations, security gaps, and silent errors,” he said.
AI augments, not replaces
That brings up an important final point: when we hear the tagline that AI “augments, not replaces” humans, how much do companies actually mean it?
How many more will take the opportunity to say they’re exiting employees who cannot be retrained to work with AI? And when they do, are they just as ready to explain why their AI-generated reports contain errors?
And on that note, here’s wishing one and all a merry human new year!