Singapore has introduced a new Model AI Governance Framework (MGF) for agentic AI at the World Economic Forum (WEF). The framework provides a structured overview of the risks associated with agentic AI and highlights emerging best practices for organisations deploying such systems.
Model AI Governance Framework for Agentic AI
The MGF for Agentic AI, developed by the Infocomm Media Development Authority (IMDA), aims to provide guidance on technical and non-technical measures for responsible deployment across four dimensions:
• Assessing and bounding the risks upfront by selecting appropriate agentic use cases and placing limits on agents’ powers, such as their autonomy and their access to tools and data;
• Making humans meaningfully accountable for agents by defining significant checkpoints at which human approval is required;
• Implementing technical controls and processes throughout the agent lifecycle, such as baseline testing and controlling access to whitelisted services (illustrated in the sketch after this list); and
• Enabling end-user responsibility through transparency and education/training.
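To make these measures concrete, the sketch below shows, in hypothetical Python, how an agent orchestration layer might combine a tool whitelist (bounding the agent's powers) with a human-approval checkpoint for high-impact actions. It is a minimal illustration of the kinds of controls the framework describes, not an implementation drawn from the MGF or any particular agent library; every name in it (ALLOWED_TOOLS, guarded_call, and so on) is made up for this example.

```python
# Illustrative sketch only: a tool whitelist plus a human-approval
# checkpoint, two of the measures the MGF describes. All names are
# hypothetical and not taken from the framework or any real library.

ALLOWED_TOOLS = {"search_docs", "read_file"}        # low-risk, whitelisted
HIGH_IMPACT_TOOLS = {"send_email", "delete_file"}   # require human sign-off

def run_tool(tool: str, args: dict) -> str:
    """Placeholder dispatch to the actual tool implementations."""
    return f"executed {tool} with {args}"

def request_human_approval(tool: str, args: dict) -> bool:
    """Significant checkpoint: a human approves before a high-impact action."""
    answer = input(f"Agent requests {tool}({args}). Approve? [y/N] ")
    return answer.strip().lower() == "y"

def guarded_call(tool: str, args: dict) -> str:
    # Bound the agent's powers: anything off the whitelist is rejected.
    if tool not in ALLOWED_TOOLS | HIGH_IMPACT_TOOLS:
        raise PermissionError(f"tool '{tool}' is not whitelisted")
    # Human accountability: high-impact actions pause for explicit approval.
    if tool in HIGH_IMPACT_TOOLS and not request_human_approval(tool, args):
        raise PermissionError(f"human approval denied for '{tool}'")
    return run_tool(tool, args)
```

In a setup like this, the agent never calls tools directly; every action is routed through guarded_call, so the whitelist and the approval checkpoint cannot be bypassed by the model's own outputs.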

April Chin, co-chief executive officer at Resaro, said: “As the first authoritative resource addressing the specific risks of agentic AI, the MGF fills a critical gap in policy guidance for agentic AI. The framework establishes critical foundations for AI agent assurance. For example, it helps organisations define agent boundaries, identify risks, and implement mitigations such as agentic guardrails.”
Welcoming feedback
IMDA emphasised that the framework is a living document, and that the authority is open to analysing feedback and case studies of agentic AI use in order to refine it.
The authority is also developing guidelines for testing agentic AI applications, building on its “Starter Kit for Testing of LLM-Based Applications for Safety and Reliability”.
