A while back, an industry peer asked if I used generative artificial intelligence (GenAI) in my work and I replied that I didn’t.
Before I could explain why, he launched into a rant about how people shouldn’t be afraid of using new technology and learning more about it.
I am not afraid, and I have learned plenty, as my job as a tech journalist requires.
My interviews with senior executives often encompass information that is confidential, as well as personal notes I don't particularly want made public or leaked. I don't have access to paid enterprise versions of GenAI tools, which presumably would offer more robust security features and some indemnity.
There is no adequate guarantee, much less transparency, that my data won't be scraped and used to train some AI model sitting on a public cloud somewhere halfway across the world.
Long story short: my decision to restrict my use of GenAI for work, in its current form, is a calculated one, with all risks considered.
Of course, like most in the industry, I do believe there are strong reasons for enterprises to adopt GenAI tools.
However, what I take issue with is that organisations are doing so without first laying down a proper plan.
Too many adopt AI because they feel immense pressure to do so, or risk being labelled hopeless laggards.
This is further compounded by calls from the wider industry, including governments, to do something, anything, because if they don't, companies will sink into certain oblivion, or so it is implied.
Investments stall, returns uncertain

"I'm willing to go bankrupt rather than lose this [AI] race," Google co-founder Larry Page has supposedly said repeatedly within Alphabet's walls, according to Gavin Baker, chief investment officer at Atreides Management, speaking last year on the Invest Like the Best podcast.
It reminds me of the infamous Silicon Valley war cry to “move fast and break things”.
But not every organisation operates like Silicon Valley, or needs to. Moreover, this maxim cannot apply to AI because breaking things can lead to serious, and sometimes irreparable, consequences for businesses.
Deloitte, for instance, faced a fresh round of scrutiny this week when reports emerged that the consulting firm appeared to have published customer documents that contained unverifiable citations.
Its consultants produced a commissioned research paper on health human resources, which was released by Canada’s Department of Health and Community Services, according to an article published by The Independent. The Deloitte report cited work by researchers that was actually never carried out and papers that didn’t exist.
If true, the errors suggest that at least some parts of the report were generated by AI and exhibited classic AI-induced hallucinations.
This would mark the second time Deloitte's use of AI has run amok. In October, the consulting firm said it would refund the Australian government AU$440,000, after revealing it had used GenAI to generate a report that contained inaccurate information, including false references and citations.
For a company that preaches the importance of trustworthy AI, I’m guessing Deloitte now faces a tough road ahead convincing customers it eats its own dog food.
If major companies, with supposedly well-established best practices and policies, can fail in their AI adoption, what hope is there for businesses with far fewer resources and no in-house expertise?
Yet some of these enterprises are jumping head-first into AI deployments, with no safety measures and no plan for how to see those projects through in one piece.
As it is, just 24% of Singapore organisations feel prepared to manage future risks from their AI and cloud investments, according to Kyndryl’s annual Readiness Report. The study had 3,700 respondents across 21 markets, including Italy, Japan, China, and the UK.
While Singapore respondents boosted their AI spending by an average of 33% over the past year, 53% of such projects do not progress past the pilot stage.
Some 58% reveal they are struggling to keep up with technological change, with 68% saying their IT infrastructure is not ready to manage future risks.
Globally, 62% of respondents have yet to push their AI projects past the pilot phase, the report found.
Push for more AI discipline
The question everyone should be asking, more urgently now, is how ready organisations are to adopt and integrate AI into their operations.

They have to make good use of the technology to reinvent their own value proposition, said Frederic Giron, Forrester’s vice president and senior research director, at the research firm’s Predictions 2026 summit in Singapore.
Giron noted that ChatGPT has in excess of 800 million weekly users and market stats have put individual productivity gains from GenAI tools at 10% to 40%.
However, there has been no corresponding improvement on corporate balance sheets, he said.
So businesses still need to find ROI (return on investment) from their AI rollouts. They have yet to scale "AI reinvention", he added.
Why has AI ROI remained elusive despite widespread adoption? Giron believes there are key barriers, including a “vision vacuum”, where few organisations have clarity on what the technology really entails.
The Forrester analyst said companies are too focused on experiments and lack focus on what strategic value AI will bring to their organisation.
There also is “innovation muscle atrophy”, he said, where companies have lost their ability to reinvent their workflows and value proposition.
Giron predicts that CFOs will gate AI investments, delaying 25% of enterprise AI spend into 2027.
Just 15% of decision makers reported a lift in their EBITDA from their AI initiatives, he said, noting that the hours saved in employee productivity gains would not go towards paying the company’s GPU bill.
There are also other hidden costs of AI, including systems integration, ethics and regulatory risks, gaps in vendor promises, and maintenance expenses such as token consumption.
This will force a market correction in 2026 and a push for more disciplined investment strategies in AI, Giron said.
According to an SAP study released in October, 70% of business leaders in Singapore are unsure whether AI is delivering its full potential.
This is despite organisations in the country spending on average SG$18.9 million this year on AI and reporting ROI of 16%, according to the SAP Value of AI Report. The study polled 1,600 business leaders across eight countries, including India, China, Australia, and 200 respondents in Singapore.
The findings, SAP suggests, indicate increasing awareness amongst Singapore respondents that current success does not automatically translate into long-term advantage.
Moving ahead, Giron predicts that 25% of CIOs will be asked to bail out failed AI rollouts led by business teams within their organisation.
He explained that, on average, 32% of IT spend sits outside of IT, consumed by other business units in the company. This creates governance gaps.
He added that a rush to deploy agentic AI, triggering errors and compliance blind spots, will lead to failure rates of 60% to 90%. This will amplify systemic risks associated with AI.
CIOs will emerge as the logical leaders for enterprise AI governance, according to Giron.
Until then, I think we need to stop guilt-tripping organisations into rushing into AI and, instead, start encouraging them to adopt it only after they have done their due diligence.
At the very least, maybe don't rage-bait someone for not yet using GenAI?
