Businesses are reviewing how they consume cloud services, as artificial intelligence (AI) is expected to power more workflows across their organisations.
This momentum will fuel the need for AI-native cloud infrastructures, which are designed specifically for such workloads.
They are unlike previous generations of cloud services, in which AI was just an isolated service category, said Charlie Dai, vice president and principal analyst at Forrester.
AI-native clouds operate on an AI-by-design architectural principle, where AI capabilities are built into all major cloud service segments, spanning infrastructure, development, and applications, Dai said in an email interview with FutureCIO.
He urged organisations to revisit their cloud strategy and ensure there is “architectural alignment between AI and cloud”, with seamless data flow.
They also should focus on AI use cases with business outcomes, localise AI-native cloud services strategically, and leverage open source, he added.
The rapid adoption of AI has changed what enterprise customers expect from cloud providers, said Choong Hon Keat, Singapore country manager for Alibaba Cloud Intelligence.

“Where once the focus was on infrastructure and scalability, enterprises now require integrated environments that can support the full AI development lifecycle, from data preparation to model deployment and application,” Choong said in an email interview.
Alibaba Cloud’s offerings include Platform for AI, which supports machine learning workflows including data labelling, model training, and inference deployment. Its cloud-native database products are also integrated with RAG (retrieval augmented generation) services and support open standards, such as the Model Context Protocol (MCP), he said.
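As a rough illustration of how such a retrieval augmented generation flow typically works, the Python sketch below grounds a model's answer in documents retrieved from a knowledge base. It is a toy example, not Alibaba Cloud's actual API: the in-memory document list, word-overlap retriever, and generate placeholder stand in for provider-specific embedding, vector search, and model-serving services.

```python
# Minimal, self-contained RAG sketch. The in-memory "store" and
# word-overlap similarity are toy stand-ins for the embedding and
# retrieval services a cloud provider would supply.

DOCUMENTS = [
    "Apsara Stack supports public cloud, private data centres, and edge.",
    "Platform for AI covers data labelling, model training, and inference.",
    "MCP is an open standard for connecting models to external tools and data.",
]

def retrieve(question: str, top_k: int = 2) -> list[str]:
    """Rank documents by naive word overlap with the question (toy retriever)."""
    q_words = set(question.lower().split())
    scored = sorted(DOCUMENTS, key=lambda d: -len(q_words & set(d.lower().split())))
    return scored[:top_k]

def generate(prompt: str) -> str:
    """Placeholder for a model-serving call; echoes the grounded prompt."""
    return f"[model response grounded in]\n{prompt}"

def answer_with_rag(question: str) -> str:
    # 1. Retrieve supporting context from the knowledge base.
    context = "\n".join(retrieve(question))
    # 2. Ground the model's answer in the retrieved context.
    prompt = f"Context:\n{context}\n\nQuestion: {question}"
    return generate(prompt)

print(answer_with_rag("What does Platform for AI support?"))
```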
He noted that businesses face new challenges that go beyond infrastructure alone, one of the most pressing being the increasing complexity of managing data pipelines.
Data complexity remains a big hurdle
“AI workloads bring not just large volumes of data, but also data that [must be] high quality, well governed, and readily accessible across teams and geographies,” Choong said. “For many organisations, fragmented data environments and inconsistent data standards are major roadblocks.”
Dai noted that data silos across cloud and on-premises environments hinder seamless data flow that is necessary for AI workloads.
The rapid growth in data volume and variety in the cloud is also challenging performance and manageability, he said.
Ensuring compliance with regional data regulations further adds to the complexity of an organisation’s cloud strategy, he added.
The high costs and hidden service dependencies of GenAI (generative AI) can also lead to unexpected cost hikes, making spending difficult to track and manage, Dai noted.
Questions, too, are being raised regarding where and how data is processed and stored, especially as AI powers more workflows.
AI will touch every part of the enterprise as companies, including small businesses and others that are typically tech laggards, move to deploy the technology, said Google Cloud COO Francis deSouza.
Compared with previous waves of new technology, AI adoption has been significantly faster, with companies transitioning quickly from pilot projects to production, deSouza said at a recent media briefing in Singapore.

The accelerated deployments have prompted organisations in the country, as well as globally, to look more closely at how their data is managed, he noted.
In some scenarios, businesses are comfortable using the public cloud to drive their AI workflows. In others, however, they prefer to keep their AI and data on-site, he said.
Keeping data where organisations want it
Such demands stem from organisations' desire to protect their proprietary data, as well as to comply with regulations that require data to be stored locally, deSouza said.
As a result, enterprise customers are increasingly asking for Google’s infrastructure to support these needs, he said.
The US cloud vendor's global network of data centres includes four such facilities in Singapore, where it recently expanded its data residency guarantees. This gives organisations the ability to store their data locally and run machine learning processing for their AI workloads within the country.
With businesses wanting more control of their cloud workflows, Google has been working to support such demands, deSouza said. These include guaranteed data residency, processing AI models and data within the boundaries defined by customers, and running AI models that are disconnected from the cloud or in the cloud region of their choice.
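In practice, region pinning is one visible piece of this. Below is a minimal sketch using Google's Vertex AI Python SDK; the project ID is hypothetical, "asia-southeast1" is Google Cloud's Singapore region, and exact SDK details may vary by version. Pinning a region alone does not by itself constitute a full data residency guarantee; Google's per-product residency commitments apply on top.

```python
# Minimal sketch: pinning AI processing to a chosen region with the
# Vertex AI Python SDK. The project ID is hypothetical, and the SDK
# surface may differ between versions.
import vertexai
from vertexai.generative_models import GenerativeModel

# Initialise the SDK against a specific region so that model calls are
# served from that location rather than a global endpoint.
vertexai.init(project="my-enterprise-project", location="asia-southeast1")

model = GenerativeModel("gemini-1.5-flash")
response = model.generate_content("Summarise our data-residency policy.")
print(response.text)
```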
Alibaba Cloud's Choong said: “Public cloud remains essential for its elasticity and vast computing power, which supports large-scale model training, experimentation, and flexible resource allocation. However, when AI workflows involve sensitive or regulated data, such as in finance, healthcare, or public sectors, security, privacy, and compliance become critical.”
“This drives the need for private cloud environments, where organisations can maintain control over data and meet regulatory obligations,” he said.
He added that hybrid cloud architectures have emerged as the optimal approach, allowing businesses to leverage the scalability and cost-effectiveness of public cloud for compute-intensive workloads, while handling sensitive data and inference workloads securely in private or on-premises environments.
Like Google, Alibaba has pushed out products to cater to such demands.
Its Apsara Stack, for instance, offers an integrated infrastructure that supports public cloud, private data centres, and edge environments, according to Choong.
This enables enterprises to train AI models at scale in the public cloud, then deploy inference or production workloads closer to the data source with enhanced governance, he said.
“As AI becomes more deeply embedded in critical business operations, hybrid cloud is no longer a transitional model but a strategic imperative,” he added. “It offers not only technical flexibility, but also the governance and resilience required to deploy AI at scale, securely, and responsibly.”
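Stripped to its essentials, the routing logic behind that hybrid pattern can be sketched as follows. This is an illustrative Python example, not any vendor's API: the endpoint URLs and data classification labels are hypothetical.

```python
# Hypothetical sketch of the hybrid pattern described above: route
# inference on sensitive or regulated data to a private endpoint, and
# everything else to an elastic public-cloud endpoint.
from dataclasses import dataclass

@dataclass
class Request:
    payload: str
    data_class: str  # e.g. "public", "confidential", "regulated"

def route(request: Request) -> str:
    """Pick the serving environment based on data classification."""
    if request.data_class in {"confidential", "regulated"}:
        return "https://inference.private.corp.internal"  # on-premises / private cloud
    return "https://inference.public-cloud.example.com"   # public cloud

for r in [Request("quarterly forecast", "regulated"),
          Request("marketing copy draft", "public")]:
    print(r.payload, "->", route(r))
```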

In addition to data complexities, enterprises also have to make sure AI applications are technically robust.
“Emerging AI models, while powerful, can produce unexpected errors or hallucinations, especially in real-time use cases like chatbots,” Choong said. “This uncertainty often leads to cautious deployment, longer testing cycles, and challenges around ensuring consistent, trustworthy AI outputs in production.”
Talent, too, remains a bottleneck, he said.
While interest in AI is high, he noted that building and maintaining AI systems at scale still requires a deep bench of expertise comprising, among others, data engineers, machine learning ops teams, and domain specialists who can identify the right business problems to solve.
Getting ready for agentic AI
Organisations also will face further cloud-related challenges as they roll out agentic AI, including orchestrating agents within distributed cloud environments and managing the interaction between enterprise and cloud AI agents.
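To make that orchestration challenge concrete, the following is a deliberately minimal Python sketch of dispatching workflow steps to agents that run in different environments. All names are hypothetical; production orchestration frameworks add authentication, retries, and state management on top of this basic dispatch loop.

```python
# Illustrative sketch of orchestrating agents across distributed
# environments: a registry maps each agent to where it runs, and the
# orchestrator dispatches workflow steps accordingly.
from typing import Callable

AGENTS: dict[str, tuple[str, Callable[[str], str]]] = {
    "summarise": ("public-cloud", lambda task: f"summary of {task}"),
    "redact":    ("on-premises", lambda task: f"redacted {task}"),
}

def orchestrate(workflow: list[str], task: str) -> str:
    """Run each workflow step on the agent's designated environment."""
    for step in workflow:
        environment, agent = AGENTS[step]
        print(f"dispatching '{step}' to {environment}")
        task = agent(task)
    return task

print(orchestrate(["redact", "summarise"], "customer records"))
```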
Asked what cloud and SaaS vendors should do to resolve key challenges for their enterprise customers, Dai pointed again to the need for AI-native services.
With infrastructure and application management transformed through intelligent automation, AI-native cloud infrastructures will be able to facilitate automated operations, the Forrester analyst said.
This will power smart applications in key areas of agentic AI, including agentic workflows, application generation, analytics, and AIOps (AI operations), he noted.
These services should also have integrated AI capabilities built in across data management, model development, model finetuning, model serving, and application development for AI agents, he said.
Above all, it is critical to ensure security and trust in agent-to-agent interactions.
“[Cloud vendors] must provide AI-specific security, cost transparency, and technology roadmaps to help organisations manage the complexity and risks of scaling AI deployments,” Dai said. “Vendors should enable seamless data movement across AI systems, support standard model formats, and integrate modern data infrastructure, such as distributed databases and data fabrics to make cloud environments AI-ready.”
Agentic AI security will be increasingly crucial to address emerging threats, such as model poisoning and exploitation, he said.
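One basic building block for such agent-to-agent trust is message signing, so a receiving agent can verify who sent a message and whether it was tampered with. The Python sketch below uses the standard library's HMAC support with a shared key; real deployments would use per-agent identities, key rotation, and asymmetric signatures instead.

```python
# Minimal sketch of one building block for agent-to-agent trust:
# signing messages with a shared key (HMAC) so a receiving agent can
# verify the sender and detect tampering.
import hashlib
import hmac

SHARED_KEY = b"demo-key-do-not-use-in-production"

def sign(message: bytes) -> str:
    return hmac.new(SHARED_KEY, message, hashlib.sha256).hexdigest()

def verify(message: bytes, signature: str) -> bool:
    # compare_digest avoids timing side channels when checking signatures
    return hmac.compare_digest(sign(message), signature)

msg = b'{"agent": "planner", "action": "fetch_report"}'
sig = sign(msg)
print("authentic:", verify(msg, sig))        # True
print("tampered:", verify(msg + b"x", sig))  # False
```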
As organisations begin to roll out agentic AI, Choong noted that they will encounter several cloud-related challenges.
For one, implementing robust, scalable, and secure cloud infrastructure is critical, as agentic AI models require significant computational power and seamless access to large volumes of data for both training and inference, he said.
Scalability is essential to handle fluctuating workloads and increasing data volumes, which can strain existing cloud resources if not properly planned for, he cautioned.
“Security and compliance present another major challenge,” he added. “Agentic AI often processes sensitive data and interacts autonomously with multiple systems, so organisations must ensure their cloud infrastructure adheres to strict security protocols, robust access controls, and evolving regulatory requirements.”
He said it can be difficult to manage and integrate data efficiently across disparate sources.
“Ensuring low latency, high availability, and reliable performance is essential, especially as agentic AI applications interact with real-time systems and end-users,” Choong said. “Cost management is also a key concern, as running large-scale AI workloads in the cloud can quickly escalate expenses without careful resource optimisation.”
In addition, data literacy, a knowledge-driven culture, and organisational structures aligned as communities of practice will be important to enable effective collaboration between IT and business stakeholders, Dai said.
This is key to a successful agentic AI deployment, he said.