In 2026, CIOs across the Asia-Pacific region are navigating a perfect storm.
Generative AI and emerging agentic AI workloads are accelerating rapidly, infrastructure demands are intensifying, skilled talent remains scarce, and regulatory frameworks around data sovereignty are becoming stricter by the year.
Yet despite massive investments in artificial intelligence, many organisations are not ready. According to the latest IBM Institute for Business Value research, just 8% of organisations say their current infrastructure fully meets the needs of AI workloads. Only 42–46% believe they can support advanced models or real-time inference at scale, while concerns around privacy, security, and compliance continue to derail AI initiatives before they deliver value.
Against this backdrop, one critical component of the AI stack is often overlooked: storage.
Traditionally treated as a passive layer, storage is now emerging as a central pillar of AI success. Flash storage is evolving into something far more intelligent: generative flash systems. These systems embed autonomous AI capabilities that can self-provision resources, tune performance, migrate workloads, detect cyber threats, and optimise costs in real time.
Craig McKenna, vice president of Storage Sales for IBM Technology Asia-Pacific, believes this transformation is redefining enterprise infrastructure and forcing organisations to rethink how they architect for AI from the ground up.
Where traditional storage fails
Across the Asia-Pacific, organisations are rapidly moving beyond AI experimentation into real-world deployment. According to McKenna, while the region may not yet match the maturity of global leaders, the pace of adoption is accelerating.
"We may not be the most advanced geography," McKenna explains, "but we're catching up awfully fast."
At the same time, the nature of AI itself is evolving. Enterprises are shifting from generative AI toward agentic AI systems, which McKenna describes as "robotic helpers" capable of acting autonomously. This shift is placing even greater pressure on both compute and storage infrastructure.
"Storage is unfortunately an afterthought," he says. "But it can quickly become a major bottleneck to achieving what you set out to achieve."
As a result, many organisations discover too late that storage architectures designed for experimentation cannot support production-scale AI. As workloads scale, so do requirements for backup, tiering, archiving, and high-speed data access. Without proper planning, storage limitations can inhibit AI initiatives entirely.
At the same time, ongoing challenges in global hardware supply chains add another layer of complexity. High-performance flash systems remain expensive and often delayed, forcing organisations to plan months in advance.
For McKenna, the takeaway is clear: AI infrastructure planning must become proactive, not reactive.
Powering RAG and data-intensive AI
As AI workloads mature, storage is no longer just a repository for data; it is becoming an active participant in AI pipelines.
This shift is particularly evident in retrieval-augmented generation (RAG) architectures, where large language models rely on continuously updated enterprise data.
Maintaining these systems requires constant ingestion, indexing, and updating of vector databases, a process that is both compute- and data-intensive.
To illustrate this, McKenna compares it to the endless cycle of painting the Sydney Harbour Bridge.
"By the time you finish, you have to start again," he says. "That's what maintaining vector databases looks like when data is constantly changing."
Traditionally, these processes rely heavily on CPU or GPU resources. However, new approaches are moving this work closer to where the data lives: inside the storage layer itself.
By embedding compute capabilities within storage systems, organisations can continuously update vector databases in real time, reduce latency by eliminating data movement, offload processing from expensive GPU infrastructure, and improve the overall efficiency of RAG pipelines.
"We're taking workloads that used to sit on server farms and pushing them right next to the data," McKenna explains. "Storage is no longer just bits and bytes; it's transforming data on behalf of the system."
In this way, what is often referred to as content-aware or computational storage represents a fundamental architectural shift: one where storage systems begin to understand and act on the data they manage, rather than simply storing it.
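The "paint the bridge" cycle McKenna describes, continuously re-embedding documents as they change, can be sketched in a few lines. This is a minimal, self-contained Python illustration, not IBM's implementation: the `VectorIndex` class and the hash-based `embed` function are hypothetical stand-ins (a real pipeline would call an embedding model and a production vector database), but the upsert-on-change logic is the point.

```python
import hashlib
import math

def embed(text: str, dim: int = 8) -> list[float]:
    """Toy embedding: hash the text into a fixed-size unit vector.
    A real pipeline would call an embedding model here instead."""
    digest = hashlib.sha256(text.encode()).digest()
    vec = [b / 255.0 for b in digest[:dim]]
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

class VectorIndex:
    """In-memory vector index with upsert semantics: only documents
    whose content has actually changed are re-embedded, so the index
    can be kept fresh continuously instead of rebuilt from scratch."""

    def __init__(self):
        self.vectors = {}   # doc_id -> embedding
        self.content = {}   # doc_id -> last indexed text

    def upsert(self, doc_id: str, text: str) -> bool:
        if self.content.get(doc_id) == text:
            return False            # unchanged: skip re-embedding
        self.vectors[doc_id] = embed(text)
        self.content[doc_id] = text
        return True

    def query(self, text: str, k: int = 1) -> list[str]:
        q = embed(text)
        scored = sorted(
            self.vectors.items(),
            key=lambda kv: -sum(a * b for a, b in zip(q, kv[1])),
        )
        return [doc_id for doc_id, _ in scored[:k]]

index = VectorIndex()
index.upsert("policy-1", "Leave policy: 20 days per year")
index.upsert("policy-1", "Leave policy: 25 days per year")  # updated doc
```

Content-aware storage moves exactly this loop into the array itself: the upsert fires as data lands on disk, rather than being polled by a server-side job.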
Governance, trust, and the human-in-the-loop
While storage is becoming more intelligent, McKenna reiterates that human oversight remains essential. As AI becomes more deeply embedded in enterprise operations, organisations must strengthen governance frameworks to ensure accountability, transparency, and compliance.
"AI can make recommendations," he says, "but there still needs to be a human in the loop."
This is particularly important as governance requirements expand. These include auditability, tracking every action taken by both humans and AI systems; explainability, or understanding why decisions were made; and robust security controls.
Security becomes more complex in AI environments. If sensitive data is used to train a model, the model's outputs must inherit the same access restrictions. Otherwise, organisations risk exposing information to people who were never authorised to see it, McKenna notes.
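One common way to enforce the inheritance McKenna describes is to filter retrieved data by the caller's entitlements before it ever reaches the model, so an answer can only be grounded in documents the user was already authorised to read. The sketch below is a hypothetical illustration of that pattern (the `Chunk` and `SecureRetriever` names and the substring-match "ranking" are invented for brevity; a real system would rank by vector similarity):

```python
from dataclasses import dataclass, field

@dataclass
class Chunk:
    text: str
    acl: frozenset  # groups allowed to read the source document

@dataclass
class SecureRetriever:
    """Filters chunks by the caller's group membership *before*
    retrieval ranking, so model outputs inherit source-data ACLs."""
    chunks: list = field(default_factory=list)

    def retrieve(self, query: str, user_groups: set) -> list:
        # Drop anything the caller has no right to see.
        visible = [c for c in self.chunks if c.acl & user_groups]
        # Stand-in for similarity ranking: simple substring match.
        return [c for c in visible if query.lower() in c.text.lower()]

r = SecureRetriever([
    Chunk("Q3 revenue forecast: confidential", frozenset({"finance"})),
    Chunk("Office opening hours: 9 to 5", frozenset({"everyone"})),
])
# A user outside "finance" gets nothing back for a revenue query,
# so the model never sees the restricted chunk.
r.retrieve("revenue", {"everyone"})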
Beyond security, data sovereignty adds another layer of complexity. Many countries across the Asia-Pacific, including India, Indonesia, China, and Australia, enforce strict rules about where data can reside and how it can be used.
McKenna underscores that sovereignty does not necessarily mean on-premises infrastructure. Instead, it refers to control: ensuring organisations retain authority over where data is stored, processed, and accessed, even in cloud environments.
Agentic AI in storage
One of the most transformative developments in enterprise infrastructure is the integration of agentic AI directly into storage systems.
These capabilities allow storage platforms to automate routine management tasks, detect ransomware in real time, optimise data placement and performance, and reduce reliance on scarce specialist skills.
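Real-time ransomware detection in storage often rests on a simple observation: encrypted data looks statistically random. The following is a deliberately simplified sketch of one such heuristic, flagging a burst of high-entropy overwrites; the function names and thresholds are illustrative assumptions, not a description of IBM FlashSystem's actual detection logic.

```python
import math
import os
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Bits per byte; encrypted or compressed data approaches 8.0."""
    if not data:
        return 0.0
    counts = Counter(data)
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def looks_like_ransomware(writes: list, entropy_threshold: float = 7.5,
                          burst_threshold: int = 3) -> bool:
    """Flag a burst of high-entropy writes, a crude proxy for mass
    encryption in flight. Thresholds are illustrative, not tuned."""
    high = sum(1 for payload in writes
               if shannon_entropy(payload) > entropy_threshold)
    return high >= burst_threshold

normal = [b"quarterly report draft " * 180] * 5   # low entropy
suspicious = [os.urandom(4096) for _ in range(5)]  # near-random
looks_like_ransomware(normal), looks_like_ransomware(suspicious)
```

Running this check inside the array, against every inbound write, is what lets the platform raise an alert in real time rather than after a backup scan.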
"We're augmenting humans to make their lives easier. And eventually, through this sort of human and AI interaction, turning people into subject matter experts," McKenna says.
Through natural language interfaces, administrators can interact with storage systems more intuitively. In turn, this reduces operational complexity while enabling IT teams to focus on higher-value activities.
Advice for CIOs & storage leaders
Given all these shifts, McKenna's advice is straightforward: "Front and centre, I'd say, involve the storage team in the architecture conversation very early."
At the same time, McKenna stresses that as organisations modernise applications, they must also modernise storage.
Most traditional storage systems were designed around block-oriented, structured database environments. However, today's AI workloads involve unstructured data: tiny files, video, logs, and telemetry from IoT devices across highly distributed environments.
"We're dealing with a very different data problem and a very different data processing problem than traditional. Using that same solution that solved that problem to solve for a completely different computing paradigm is a little bit naive." Craig McKenna
Still, this does not mean organisations should discard existing investments. Instead, the focus should be on evolving and extending what they already have.
His final advice is simple: "Drag the experts you already have on staff. If they don't have the skills, there are clearly people like us who can help them."
Ready to see the future of intelligent storage in action?
Register now for the ASEAN IBM FlashSystem Launch event coming to your city and get expert insights plus a first look at the new IBM FlashSystem.
Allan is Group Editor-in-Chief for CXOCIETY writing for FutureIoT, FutureCIO and FutureCFO. He supports content marketing engagements for CXOCIETY clients, as well as moderates senior-level discussions and speaks at events.
Previous Roles
He served as Group Editor-in-Chief for Questex Asia concurrently with the role of Regional Content and Strategy Director.
He was the Director of Technology Practice at Hill+Knowlton in Hong Kong and Director of Client Services at EBA Communications.
He also served as Marketing Director for Asia at Hitachi Data Systems and as Country Sales Manager for HDS Philippines, and held earlier sales roles at Encore Computer and First International Computer.
He was a Senior Industry Analyst at Dataquest (Gartner Group) covering IT Professional Services for Asia-Pacific.
He moved to Hong Kong as a Network Specialist and later MIS Manager at Imagineering/Tech Pacific.
He holds a Bachelor of Science degree in Electronics and Communications Engineering and is a certified PICK programmer.