Artificial Intelligence has revolutionised software development, but it has also introduced new challenges. One of the main issues is that code is created more rapidly than it is tested, raising concerns about trust, security, and control. This is where AI-powered testing comes into play.

Damien Wong, senior vice president, APJ at Tricentis, explains what truly defines AI-powered testing from a strategic leadership perspective, one grounded in KPIs, ROI, and enterprise-wide adoption.
KPIs that matter
Beyond speed and fewer defects, Wong explains that the most important KPIs when scaling AI-powered testing are those that demonstrate business impact, not just faster cycles.
“Traditional metrics like execution speed and defect counts no longer reflect the realities of an AI-accelerated SDLC where code volume, ripple effects, and integration complexity have exploded,” he said.
Instead, Wong advises CIOs to assess whether AI-powered testing at scale contributes to resilience, risk reduction, and financial value.
“Organisations using comprehensive testing platforms have realised US$5.33M in annual benefits and 51% faster testing cycles, proof that quality drives measurable business outcomes,” he added.
According to Tricentis’ latest Quality Transformation Report, quality improvements contribute to improved customer satisfaction (16.6%) and revenue growth (14%). On the other hand, quality failures lead to increased customer churn (34.2%) and brand damage (26.2%).
Beyond customer-facing outcomes, Wong posits that CIOs should also monitor outage risk and change-risk KPIs, such as test-gap analysis and coverage of high-change, business-critical paths.
“These help teams catch vulnerabilities introduced by both human and AI-generated code. With AI-driven model-based testing, test upkeep can be up to 90% lower, freeing developers to innovate instead of maintaining scripts,” he said.
Lastly, Wong highlights that risk prevention, cost avoidance, quality confidence, and AI reliability are important KPIs for ensuring software quality.
Tackling common barriers
While the benefits are clear, adoption is not without challenges. According to Wong, the most significant barriers to AI testing adoption stem from technical debt, legacy systems, weak governance and AI-specific risks.
“Nearly half of teams still release untested code due to time pressure, creating fragile systems and widening risk exposure. Legacy architectures further compound this, making modernisation difficult and slowing down automated validation,” he said.
AI-generated code also introduces new vulnerabilities. Without strong validation pipelines, testing quickly becomes the bottleneck of transformation. Developers often view testing as tedious, and with modern codebases spanning multiple interconnected applications, the challenge intensifies.
At the same time, misalignment between leadership and engineering teams leads to unclear priorities and rushed decisions. While the pace of development already feels fast, it is only set to accelerate.
To overcome barriers, CIOs can adopt model-based, codeless AI testing that reduces dependence on fragile code-level automation and cuts ongoing maintenance. This approach can reduce manual effort by 80%–90% and enables non-technical experts to participate through natural-language and visual test generation.
For Wong, strong governance is vital. This entails domain-trained, testing-specific AI that avoids hallucinations and supports safe, transparent validation. Instead of becoming autonomous, AI can act as a co-pilot working alongside developers.
“By aligning teams, modernising toolchains, and embedding guardrails, CIOs can shift from reactive firefighting to proactive, AI-driven quality engineering,” he said.
Lessons from enterprise-wide AI testing
Successful enterprise-wide adoption requires more than tools; it also requires trust.
“One of the consistent lessons we see across enterprises that successfully adopt AI testing is that they start by building trust, not by forcing a technology rollout,” Wong said.
Moreover, Wong observes that successful AI testing “removes friction rather than introducing new processes.”
He explains that model-based and AI-driven test automation reduces the maintenance burden, especially when a single change can ripple across multiple systems.
“When teams experience that reduction in effort, adoption becomes natural rather than mandated,” he adds.
As workforce concerns around job displacement persist, Wong emphasises the importance of positioning AI testing as an assistant, one that reduces manual effort while still relying heavily on human judgment and oversight.
Lastly, inclusive testing plays a critical role in accelerating adoption.
“No-code AI tools allow non-technical users and domain experts to participate directly in validation, breaking down silos and building shared ownership of quality across the enterprise,” he explains.
Prioritising AI testing initiatives
For most organisations, resources are limited, making it vital to prioritise which AI testing initiatives to pursue.
Wong emphasises that the most crucial step is knowing where AI can deliver impact with the least disruption.
“That starts with visibility. When CIOs use quality intelligence to understand where technical debt concentrates, where untested code exists and which systems carry the highest business risk, the priorities become clear,” explains the Tricentis executive.
To determine where to begin, CIOs can examine test gap analysis, coverage trends, and how frequently applications require maintenance.
Second, leaders should prioritise initiatives that deliver productivity quickly without forcing teams to re-engineer their entire toolchain.
“AI-driven, model-based test automation is a good example as it abstracts business processes from underlying technology, with updates automatically propagated across test assets,” he said.
Finally, Wong underscores that “pilots are the safest and most effective way to build confidence.”
“Start where the risk is highest, the effort is greatest, and the value is immediate, and expand from there,” he says.
Influencing business and IT decisions
Ultimately, CIOs need to ensure that insights from AI-powered testing extend beyond QA dashboards to inform IT and business decisions.
To achieve this, Wong advises CIOs to recognise that software quality is no longer a technical silo.
“In most organisations today, software is the heartbeat of the business. That means quality has to be treated as a driver of business value, not something delegated to a downstream QA function,” he explains.
Leaders adopting such a mindset treat AI-powered testing as a strategic input into how the business manages risk, continuity and customer experience.
“AI gives CIOs clearer visibility of what has changed and which areas of the business are most exposed. Rather than relying on broad assumptions, leaders can see which customer journeys, critical processes or compliance-sensitive systems are likely to be impacted by a release,” Wong says.
Those insights, when fed into release committees, change advisory boards, and portfolio planning, help organisations identify modernisation priorities, technical debt, and innovation opportunities.
Moreover, Wong highlights the importance of transparency, which gives CIOs guidance on what was tested, what changed, and what remains exposed.
“That level of evidence helps leadership quantify service, compliance and customer-impact risks clearly,” he adds.
A tool for strategic leadership
Beyond accelerating delivery, AI-powered testing can be an impactful tool for strategic leadership.
When organisations fully embrace it, it can drive measurable business impact, optimise the user experience, and ensure organisational adoption at scale.
