Traditional AI literacy has become a baseline skill for modern workforces. But as artificial intelligence evolves rapidly, organisations are discovering that knowing how to "use" AI tools is no longer enough. To remain competitive, teams must now move from basic AI literacy to AI bilingualism.
From AI literacy to AI bilingualism

"Traditional AI literacy focuses on interaction: how to query a model or interpret a dashboard. AI bilingualism focuses on direction: how to frame objectives, set boundaries, and govern outcomes when machines increasingly participate in decision-making itself," said Deepak Ramanathan, vice president, Customer Advisory, Asia Pacific, SAS.
At its core, AI bilingualism goes beyond tool proficiency. Ramanathan described it as fluency in decision systems, where employees understand how intelligent systems reason, test hypotheses, and act with varying degrees of autonomy, and how human judgment must guide or override those actions when necessary.
This distinction is becoming critical as organisations adopt agentic analytics, where systems no longer wait for prompts but initiate analysis, adapt code, and recommend actions independently.
"For modern workplaces, this capability is becoming essential because execution is no longer linear," Ramanathan said. "Strategy now flows through probabilistic systems that learn, self-correct, and occasionally fail in non-obvious ways."
As a result, both leaders and employees must learn to think in two modes at once: human intent and machine logic.
"Without that dual fluency, organisations risk deploying powerful systems they cannot meaningfully supervise, explain, or trust: an untenable position in regulated, reputation-sensitive environments," he added.
The risks of operating without AI bilingualism
The absence of an AI-bilingual workforce first shows up as a productivity issue. But Ramanathan warned that the deeper risks are strategic.
One is strategic misalignment, where AI investments optimise individual tasks but undermine broader decision coherence.
"Teams cannot see how autonomous recommendations compound across the enterprise," he explained. This can result in fragmented and sometimes contradictory outcomes.
Another risk lies in control. When employees are trained only to operate AI tools, they may struggle to detect model drift, cascading bias, or inappropriate decision escalation. These gaps expose organisations to risks across credit, pricing, compliance, and operational resilience.
There is also a growing risk of workforce fragility.
"Employees who interact with AI but cannot direct it are more easily displaced," Ramanathan said. "Over time, organisations retain tools but lose institutional judgement."
From a CIO's perspective, the most serious danger for Ramanathan is dependency without understanding. He warned that AI-driven decisions that move faster than humans can interrogate them can propagate errors quietly and diffuse accountability. In such cases, he said, organisations may appear "technologically advanced while becoming strategically brittle".
"Ultimately, a non-AI-bilingual workforce slows the organisation down at the exact moment the market is accelerating," he said.
Upskilling for AI bilingualism
In the Asia Pacific, 78 per cent of employees report regularly using AI tools, higher than the global average. However, Ramanathan noted that adoption is outpacing organisations' ability to build the skills needed to manage AI effectively.
"Building AI bilingualism means moving beyond teaching people how to use AI, and helping them understand how to work with it," he said.
That shift begins with helping employees see AI not merely as a faster tool, but as a system that actively shapes outcomes. Upskilling efforts should therefore start with clear explanations of how modern analytics works. The workforce needs to understand what goals systems optimise for, the constraints they operate under, how they learn from feedback, and where they can fail.
While data literacy remains important, AI bilingualism also requires a deeper understanding: why a model may be uncertain, what trade-offs it is making, and how optimisation can produce unintended consequences.
Crucially, this learning must be embedded in real work.
"Teams should practise reviewing AI-driven decisions in areas like credit approvals, demand forecasting, customer interactions, or fraud detection, and regularly ask, 'Would we make the same call?'"
Ramanathan also suggested scenario testing and post-decision reviews, as well as pairing business leaders with data specialists to accelerate learning further.
Just as critical is governance literacy.
"Employees must learn when to escalate, how to set guardrails, and how decisions are audited," Ramanathan said. "The objective is simple: ensure people remain confidently in charge of outcomes, even as AI does more of the thinking."
Who needs to become AI-bilingual first?
The first roles that should develop AI bilingualism are those where decisions are frequent, consequential, and already shaped by automation.
"What these roles have in common is accountability," Ramanathan said. "The people in them are expected to stand behind decisions that affect customers, revenue, and compliance."
In these roles, basic tool usage is insufficient. Employees must understand why systems make specific recommendations and when those recommendations should be challenged.
"This is not about job title or seniority," he noted. "Frontline managers and analysts often sit closest to automated decisions and therefore need the strongest practical fluency. Wherever AI shapes outcomes, human judgement must remain firmly in control."
The CIO's role in fostering AI bilingualism
For CIOs, fostering an AI-bilingual workforce begins with clarity of intent.
"This is about preparing the organisation to make better decisions as AI takes on a more active role in day-to-day operations," Ramanathan said.
He advised technology leaders to map where AI already influences outcomes, such as pricing, risk assessment, forecasting, and customer interactions, and to prioritise skills development in those areas. Platform choices also matter: systems should make it easy to understand how recommendations are generated, how confident they are, and how they can be reviewed or overridden.
He added that training must be continuous and grounded in real scenarios, with clear ownership of AI-assisted decisions.
"People learn fastest when they are accountable for outcomes, not just usage," he said.
Leadership behaviour sets the tone. In meetings, CIOs should ask how AI-driven recommendations were formed, what assumptions underpin them, and what could cause them to fail, making it acceptable to question automated outputs without stalling progress.
Finally, to ensure governance and skills evolve together, Ramanathan underscored the importance of working closely with risk, legal, and HR teams.
"As AI becomes more embedded in decision-making, the CIO increasingly shapes how the organisation thinks, not just the technology it runs."
Fostering an AI-bilingual workforce
The technology landscape is evolving rapidly. The debate has shifted from whether to adopt AI to how to maximise its benefits. However, the next challenge is deeper.
As AI becomes more prevalent in everyday work, organisations must understand, manage, and direct these systems.
Without that capability, even the most advanced AI tools can cause more confusion than clarity.
