Organisations appear to be throwing good project management practices out the window in their approach to artificial intelligence (AI), forgetting any discipline they may have developed.
This has been the most surprising revelation to surface, Akamai's CTO of cloud computing Jay Jenkins said in a video call with FutureCIO.
Companies are rushing to get AI into their products or to their customers, neglecting sound principles they would usually apply to other projects, Jenkins said.
He acknowledged that organisations may need, to some extent, to invest more so they do not fall behind. However, they still need to apply the same due diligence and cost models to really understand the returns on their AI initiatives, he noted.
Furthermore, businesses are falling into the vendor lock-in trap in their haste to get on the AI bandwagon, he said.
This can prove precarious, since AI still is a rapidly moving space, he noted.
While it is okay for organisations to adopt a bootstrap approach with their AI projects, they also need to have a longer-term vision to ensure these initiatives are sustainable, he said.
This includes making sure they address any security and business risks associated with using AI, Jenkins said.

These are all key components businesses should look at when managing any project, he added.
He noted that the AI market is going through significant change, from training and inferencing to research and development.
Already struggling to operationalise their AI efforts, organisations have to figure out how to best architect their infrastructure for AI, as well as how to monitor and manage their deployments.
In addition, they need to know where their GPUs should be deployed, what is required to execute their AI workflows, and how to extract the most value from AI.
Many companies are straining to generate ROI because they do not understand the architecture that is required, including the tokens needed to facilitate inferencing and the cost of the AI architecture, Jenkins said.
“They go out and buy the best GPUs, thinking that equates to performance, but that’s not necessarily the case,” he said.
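To make the token economics Jenkins describes concrete, a back-of-the-envelope cost model like the sketch below can help; all prices and volumes here are hypothetical assumptions for illustration, not actual vendor rates:

```python
# Back-of-the-envelope inference cost model. All prices and volumes are
# hypothetical assumptions for illustration, not actual vendor rates.

input_price_per_1k = 0.0005    # assumed USD per 1,000 input tokens
output_price_per_1k = 0.0015   # assumed USD per 1,000 output tokens

avg_input_tokens = 800         # assumed prompt size per request
avg_output_tokens = 300        # assumed completion size per request
requests_per_day = 50_000      # assumed traffic volume

cost_per_request = (
    (avg_input_tokens / 1000) * input_price_per_1k
    + (avg_output_tokens / 1000) * output_price_per_1k
)
monthly_cost = cost_per_request * requests_per_day * 30

print(f"Cost per request: ${cost_per_request:.5f}")               # $0.00085
print(f"Estimated monthly inference cost: ${monthly_cost:,.2f}")  # $1,275.00
```

Even a rough model like this shows how token volumes, rather than raw GPU horsepower, can drive the actual bill.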
Mind the hidden costs
This challenge with compute distribution will be further exacerbated as agentic AI adoption becomes prevalent and workflows, along with their data flows, become more complex, he noted.
Organisations then will have to start looking more closely at egress costs, he said.
In cloud computing, egress involves data moving out of a network to an external location, such as a user's local device or another cloud platform.
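To see why egress can loom large in agentic AI bills, consider a rough worked example; the per-GB rate and data volumes below are assumptions for the sake of the arithmetic, not actual cloud pricing:

```python
# Illustrative sketch: estimating egress costs for an agentic workflow.
# The rate and volumes are hypothetical assumptions, not real cloud pricing.

egress_rate_per_gb = 0.09    # assumed USD per GB moved out of a network
gb_per_agent_run = 0.25      # assumed data shuttled between services per run
runs_per_day = 100_000       # assumed agent invocations

daily_egress_gb = gb_per_agent_run * runs_per_day
monthly_egress_cost = daily_egress_gb * egress_rate_per_gb * 30

print(f"Daily egress: {daily_egress_gb:,.0f} GB")                    # 25,000 GB
print(f"Estimated monthly egress cost: ${monthly_egress_cost:,.2f}")  # $67,500.00
```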
With agentic AI, companies will have to determine, for instance, where the decision making or inferencing is needed, and how the data needs to be routed. They then have to look at the level of latency that is acceptable for the different workloads and ensure AI agents are sufficiently responsive.
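One way to picture that routing decision is a simple latency-budget check. The sketch below is purely illustrative; the candidate locations, their latencies, and the assumption that more central locations are cheaper are all hypothetical:

```python
# Illustrative sketch: choosing where an agent's inference step runs based on
# an acceptable latency budget. Locations, latencies, and the assumption that
# more central (higher-latency) locations are cheaper are all hypothetical.

# Assumed round-trip latencies (ms) from the user to each candidate location
LOCATIONS = {"edge-pop": 20, "regional-cloud": 80, "central-cloud": 200}

def pick_inference_location(latency_budget_ms: int) -> str:
    """Return the highest-latency (assumed cheapest) location within budget."""
    # Try central (assumed cheapest) first, falling back toward the edge
    for name, latency in sorted(LOCATIONS.items(), key=lambda kv: -kv[1]):
        if latency <= latency_budget_ms:
            return name
    raise ValueError("No location meets the latency budget")

print(pick_inference_location(250))  # central-cloud: fine for a batch agent step
print(pick_inference_location(50))   # edge-pop: needed for an interactive response
```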
There is a high level of variability with agentic AI, in terms of cost, so enterprise customers will need to establish exactly what they want out of their deployments, Jenkins said.
For example, is their AI model too precise for a basic task? Does their spending on a model outweigh its returns?
“You can actually optimise a lot of that to save on costs and put the savings where you can get the most benefits,” he said.
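A minimal sketch of that kind of optimisation is a router that picks the cheapest model capable of a given task; the model names, prices, and capability sets below are hypothetical assumptions, not real models or rates:

```python
# Illustrative sketch: routing tasks to a model sized to the job, so spending
# on a large model does not outweigh its returns. Names, prices, and
# capabilities are hypothetical assumptions.

MODELS = {
    "small": {"price_per_1k_tokens": 0.0002,
              "good_for": {"classify", "extract"}},
    "large": {"price_per_1k_tokens": 0.0100,
              "good_for": {"classify", "extract", "reason", "generate"}},
}

def route_task(task_type: str) -> str:
    """Pick the cheapest model capable of handling the task type."""
    capable = [(name, cfg["price_per_1k_tokens"])
               for name, cfg in MODELS.items()
               if task_type in cfg["good_for"]]
    if not capable:
        raise ValueError(f"No model handles task type: {task_type}")
    return min(capable, key=lambda mc: mc[1])[0]

print(route_task("classify"))  # small: a basic task doesn't need the large model
print(route_task("reason"))    # large: only the large model covers this
```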
Organisations then have to manage all of these while staying within their cost targets, he added.
It does not help that AI still is often perceived as monolithic, where users ask an LLM (large language model) a question and it generates a response.
This has left hyperscalers with cost models that are no longer customer-focused in the AI era, especially as more businesses look to multi-cloud infrastructures for their agentic AI rollouts, Jenkins noted.
He underscored the need for cost models to evolve as agentic AI takes off, and egress expenditures increasingly become a focus area for organisations that want to contain their AI costs.
Governance also is a growing issue, in particular around data provenance, he said.
Companies want to know where the data is coming from, the risks it carries, whether customer data can be leaked, and whether a rogue AI model can tarnish their reputation.
“Right now, it seems like the wild wild west of AI, [where] people aren’t considering these things because they’re rushing out [with] AI,” he said. “It’s an AI cost that they, too, need to think about.”
Are there then lessons from the cloud era that companies can apply to AI?
Jenkins said it may make sense to bring some AI workloads back on-premises, especially as the cost of AI inference chips comes down over time.
Organisations can do this for workloads that are less erratic, and run more variable workloads on the cloud, he suggested.
This gives companies the ability to move AI models on-premises where it makes sense, while keeping the option to tap the cloud for other workloads.
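As a rough illustration of how such a split might be decided, the sketch below routes steady workloads on-premises and bursty ones to the cloud based on demand variability; the workload names, demand profiles, and threshold are hypothetical assumptions:

```python
# Illustrative sketch: splitting AI workloads between on-premises and cloud
# based on demand variability. Workloads and thresholds are hypothetical.
import statistics

def place_workload(name: str, hourly_demand: list[float],
                   cv_threshold: float = 0.5) -> str:
    """Route steady workloads on-premises and bursty ones to the cloud.

    Uses the coefficient of variation (stdev / mean) of demand as a
    simple proxy for how erratic a workload is.
    """
    mean = statistics.mean(hourly_demand)
    cv = statistics.stdev(hourly_demand) / mean if mean else float("inf")
    placement = "on-premises" if cv < cv_threshold else "cloud"
    return f"{name}: {placement} (cv={cv:.2f})"

# Hypothetical demand profiles (requests per hour)
print(place_workload("document-summarisation", [950, 1000, 1020, 980, 1010]))
print(place_workload("campaign-chatbot", [100, 2500, 300, 4000, 150]))
```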
They still will have to ensure they have the skillsets to manage this architecture in-house, he said.
Need for industry best practices
Jenkins also highlighted the need for trust to be embedded in AI models, especially as AI agents take off.
“How can one AI agent trust another AI agent?” he posed. “It comes down to creating something that’s like human trust. When I’m talking to someone, I know whether or not I should say something, and err on the side of caution. Eventually this kind of trust will be embedded in AI models as well.”
It may take the form of hard-coded rules that can be fed into models or created within the agentic model, he said.
There also is a need to figure out what industry best practices should be, he added.
This is proving a major hurdle as more LLMs and AI models continue to emerge, requiring new ways to distribute computing resources, measure results, and assess the costs.
The AI industry needs to ensure it is enabling organisations to build the best models possible, which means establishing what a good architecture should look like, Jenkins said.
Such guidelines should encompass best practices that are continuously refined as the market evolves, he said.
