* Editor’s note: The following draws on an earlier discussion with Dr David Hardoon, global head of AI enablement at Standard Chartered, on the impact of AI agents on software engineering.
In 2024, Forrester issued a clear directive to development leaders: Start Using and Investing in AI Today. The message was unambiguous — generative AI (GenAI) was no longer a speculative tool but a transformative force reshaping how software is built.
As highlighted in Top Recommendations For Development Leaders, 2024, GenAI had already begun freeing developers from routine tasks, automating boilerplate code, and accelerating test case generation.
“Coder TuringBots can semantically search and automatically generate code,” the report noted, while tester equivalents help “generate the right test cases from requirements with associated automation.” The era of manual, siloed coding was giving way to an integrated, AI-augmented future.
Fast forward to 2026, and that future is now the present. Enterprises across Asia are deeply immersed in AI-powered software engineering, deploying AI-generated code at scale, but not without growing pains.
As Dr David R. Hardoon, global head of AI enablement at Standard Chartered, observes, “GenAI does not eliminate the need for engineering rigour — rather it amplifies the consequences of poor practices.”
Rise of GenAI
The adoption curve has steepened rapidly. Forrester’s 2024 call to action has been heeded: low-code and AI-assisted platforms, such as GitHub Copilot, are now mainstream in Asian banks and financial institutions. According to the report, these tools are not just for prototyping — they are being embedded into the software development lifecycle (SDLC), supported by natural language prompting and visual modelling. This shift enables “fusion engineering teams that combine low-code and high-code to create new products faster.”
Yet, as Hardoon warns, the speed of AI-generated prototypes creates a dangerous illusion.
“Business stakeholders are witnessing the ‘vibe’ of AI — watching prototypes appear in weeks, not months — and they’re asking: why can’t production be just as fast?” says Hardoon.
But he quickly dispels the myth: “Turning an AI-assisted sketch into a system that survives audits, runs at scale, and does not break the bank’s risk model is still real engineering work.”
Governance first
Forrester’s 2024 report urged leaders to strike a balance between innovation and responsibility. It recommended investing in skills, managing open-source risks, and treating internal platforms as products — all precursors to the governance challenges now front and centre in 2026.
Hardoon’s insights confirm this evolution. At Standard Chartered, a dedicated AI Safety team within the Chief Data Office leads centralised governance of all AI use cases. This structure ensures compliance is not an afterthought.
“Our approach aligns with leading industry standards, specifically the MAS FEAT guidelines and the HKMA BDAI guidelines,” Hardoon says, calling them “benchmarks in the banking regulator space.”
These frameworks demand transparency, reproducibility, and auditability — now “table stakes,” not optional extras.
Just as Forrester cautioned about open-source risks, enterprises must now scrutinise AI-generated code. Dual-licensed or only partially open (“ajar source”) tools may introduce hidden compliance traps; AI-generated components carry similar — if not greater — risks when unchecked.
Engineers elevated
Far from making engineers obsolete, AI has elevated their role. Forrester anticipated this shift, noting that AI tools free developers from “wasting time searching for code snippets,” allowing them to focus on higher-value work. In 2026, that promise is being realised.
Hardoon describes the modern engineer not as a coder but as an “orchestrator”: “The engineer becomes more like an orchestrator than a builder of bricks.”
This aligns with Forrester’s vision of fusion teams, where low-code and high-code converge, and developers must integrate AI outputs into secure, scalable, and robust architectures.
Moreover, Forrester’s emphasis on developer experience (DevEx) — shaped by tools, processes, culture, and metrics — is now critical. As AI increases cognitive load through complexity, organisations must actively reduce friction.
Hardoon underscores this: “Software engineers are not being replaced — they are being pulled upstream, into earlier design phases, system integration and governance.”
Oversight amplified
Forrester’s 2024 guidance on treating platforms as products — with product management, design, and user research — is now essential for AI governance. A platform without ownership, metrics, or feedback loops cannot sustain AI-assisted development at scale.
Hardoon agrees that human oversight must increase as AI capabilities grow. He identifies three layers: architectural ownership, compliance embedding, and collaborative validation.
“Engineers must maintain control over system design decisions, using AI as an accelerant for implementation rather than a replacement for strategic thinking,” he states.
This aligns with Forrester’s recommendation to establish internal platforms with professional product discipline. Without it, AI tools become “shadow” systems — experimented with in isolation, then slipped into pipelines without review.
“Unchecked AI tools and unreviewed code make their way into pipelines,” Hardoon warns. The result? Regulatory scrutiny is not just on what was built, but also on how and by whom.
Measuring value
Forrester introduced the concept of Dev insights TuringBots — AI-driven analytics that aggregate data from agile planning, CI/CD, and deployment tools to reveal productivity, technical debt, and value creation. By 2026, these systems are operational in leading enterprises.
Hardoon’s proposed metrics reflect this evolution. He advocates for cross-functional synergy intensity, measuring collaboration through co-authoring and joint reviews — a direct response to Forrester’s warning that silos stifle AI innovation. He also champions prototype-to-production velocity, addressing the gap Forrester identified between rapid ideation and robust delivery.
Finally, Hardoon calls for business outcome multipliers with clear attribution — linking engineering improvements to revenue, cost savings, or customer impact. This aligns with Forrester’s emphasis on outcome-based platform success, moving beyond activity metrics to measure real organisational value.
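To make these metrics concrete, the sketch below shows how two of them might be derived from delivery records. It is purely illustrative: the data, field names, and thresholds are invented for this example and do not come from Standard Chartered, Forrester, or any real tooling.

```python
from datetime import date
from statistics import median

# Hypothetical delivery records: when a feature's AI-assisted prototype landed,
# when it reached production, and how many reviewers took part.
features = [
    {"prototype": date(2026, 1, 5),  "production": date(2026, 2, 16), "reviewers": 3},
    {"prototype": date(2026, 1, 20), "production": date(2026, 3, 1),  "reviewers": 1},
    {"prototype": date(2026, 2, 2),  "production": date(2026, 2, 28), "reviewers": 2},
]

# Prototype-to-production velocity: median days from prototype to release.
velocity_days = median((f["production"] - f["prototype"]).days for f in features)

# Cross-functional synergy intensity: share of deliveries with more than one
# reviewer — a crude proxy for co-authoring and joint review.
synergy = sum(1 for f in features if f["reviewers"] > 1) / len(features)

print(velocity_days, round(synergy, 2))
```

In practice, such figures would be pulled from agile planning and CI/CD tooling rather than hand-entered, and the attribution to revenue or cost savings Hardoon describes would require joining them to business data.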
Skills for 2026
Forrester stressed the need to “invest in skills development for AI and ML,” recognising that top talent is scarce and internal upskilling is essential. Hardoon echoes this: “We see it as essential to upskill our people for in-house expertise and capabilities to manage associated risks.”
This includes not just LLMs and MLOps, but also model risk management and third-party oversight — areas where external AI vendors introduce new dependencies.
Just as Forrester advised switching to memory-safe languages such as Rust and Java to reduce vulnerabilities, engineers must now also understand the security implications of AI-generated code, even in “safe” ecosystems.
Stepping up to the maestro’s podium
By 2026, the integration of AI into software engineering will no longer be optional — it will be foundational. The dual forces of Forrester’s 2024 recommendations and Hardoon’s real-world governance framework reveal a clear path: accelerate with AI, but anchor innovation in discipline.
The most successful organisations are those that treat AI not as a shortcut, but as a catalyst for deeper collaboration, stronger governance, and more strategic engineering.
As Hardoon concludes: “The key is viewing GenAI as an amplifier of existing engineering discipline rather than a replacement for it.”
In this new era, the most valuable asset is not the AI tool itself — it is the human engineer who knows how to wield it effectively.