The Monetary Authority of Singapore (MAS) has been watching closely as artificial intelligence reshapes the country’s financial sector.
What began as simple automation tools has grown into generative models, multi-agent systems and increasingly autonomous decision making. That shift forced the regulator to rethink how AI should sit within the broader financial system.
MAS has issued ethical frameworks before, including FEAT and Veritas, but the latest wave of AI is different.

It moves faster, learns faster and embeds itself deeper into the everyday operations of banks, insurers and capital markets players.
By the time the Singapore Fintech Festival 2025 arrived, MAS had decided a more structured approach was needed. That is how the Guidelines on Artificial Intelligence Risk Management, or AIRG for short, came to life.
At the core of the new guideline is a clear message:
financial institutions should not wait for AI to become too entrenched before putting guardrails in place. MAS wants institutions to use AI with discipline, transparency and strong oversight so that innovation does not outrun governance.
The AIRG lays out supervisory expectations that cover the entire life cycle of AI systems. MAS organises the guidelines around several pillars that work together as a holistic framework.

This structure nudges institutions to see AI not as a single deployment but as a system that evolves over time, shaped by decisions made from development to retirement.
Leadership forms the starting point. Then comes the work of identifying where AI sits across the organisation. After that, the guideline dives into life cycle controls covering data, fairness, monitoring, explainability and third-party risks.
The final pillar focuses on whether firms have the right people and internal capabilities to manage AI responsibly.
The Need for Responsible Leadership
Leadership is the anchor of the entire guideline. MAS places early emphasis on boards and senior management because AI decisions now touch strategy, customer outcomes and the institution’s overall risk profile.
Boards are expected to understand how AI fits into the firm’s risk appetite and to challenge major AI decisions instead of rubber-stamping them.
Senior management, in turn, must translate these expectations into day-to-day practice. They are responsible for creating structures, designing policies and ensuring that staff overseeing AI have the right skills.
Where AI plays a large role in areas such as lending, trading, compliance, advisory or fraud detection, MAS encourages the creation of dedicated cross-functional committees.
This represents a shift from earlier approaches, where AI was tucked under model risk or IT governance.
The AIRG elevates it into its own governance lane.
Firms Must Identify AI Everywhere It Lives
A surprising number of financial institutions do not realise how many of their internal tools qualify as AI.
AIRG directs firms to create a clear definition of AI and then map out every system that falls under it across the organisation.
Internal models, commercial products, embedded AI features, cloud-based tools and even small decision engines used by customer-facing teams all belong on that list.
MAS wants institutions to maintain a central AI inventory that records model purpose, data sources, validation history, dependencies, risk owners and other essential details.
Without this visibility, proportional controls become impossible. Institutions cannot supervise what they cannot locate.
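
To make the idea concrete, here is a minimal sketch of what one entry in such an inventory might look like. The field names below simply mirror the attributes MAS lists; they are illustrative assumptions, not a schema prescribed by the AIRG.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AIInventoryRecord:
    """One entry in a central AI inventory (field names are illustrative)."""
    system_name: str
    purpose: str                       # what the system is used for
    data_sources: list                 # training / inference data feeds
    risk_owner: str                    # accountable person or function
    dependencies: list = field(default_factory=list)        # vendors, APIs, upstream models
    validation_history: list = field(default_factory=list)  # past validation dates

# Example: an in-house model and an embedded third-party AI feature, side by side
inventory = [
    AIInventoryRecord(
        system_name="retail-credit-scorer",
        purpose="Score retail loan applications",
        data_sources=["core-banking", "credit-bureau"],
        risk_owner="Head of Retail Credit Risk",
        dependencies=["in-house gradient-boosting model"],
        validation_history=[date(2025, 6, 1)],
    ),
    AIInventoryRecord(
        system_name="crm-reply-suggestions",
        purpose="Suggest replies for customer-facing teams",
        data_sources=["crm-tickets"],
        risk_owner="Head of Customer Operations",
        dependencies=["third-party LLM API"],
    ),
]

for record in inventory:
    print(record.system_name, "->", record.risk_owner)
```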
Introducing A Structured Risk Classification Framework
After identifying their AI systems, institutions must classify them using three dimensions. Impact comes first and measures how much harm could result from errors, bias or unexpected behaviour.
Models that influence loan approvals or money laundering checks naturally sit at the higher end.
Complexity follows. Simpler tools behave predictably, while large models capable of reasoning or generating content introduce far more uncertainty.
Reliance completes the assessment. Some systems only support human decision-making, while others operate with significant autonomy. Higher reliance means stronger controls.
This approach keeps the AIRG proportionate.
Not every chatbot or internal knowledge tool needs the same scrutiny as a model used in trading or compliance.
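
As a rough illustration, the three dimensions could be operationalised along these lines. The tiers, the escalation rule and the control labels are all assumptions made for the sketch; the AIRG does not prescribe a scoring formula.

```python
from enum import IntEnum

class Level(IntEnum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3

def overall_tier(impact: Level, complexity: Level, reliance: Level) -> str:
    """Combine impact, complexity and reliance into one risk tier.

    Illustrative rule: impact sets the base tier, and high complexity
    or high reliance escalates it one notch (capped at HIGH).
    """
    tier = impact
    if Level.HIGH in (complexity, reliance):
        tier = Level(min(tier + 1, Level.HIGH))
    return {
        Level.LOW: "baseline controls",
        Level.MEDIUM: "enhanced controls",
        Level.HIGH: "independent validation and full life cycle controls",
    }[tier]

# A loan-approval model: high impact, significant autonomy -> top tier
print(overall_tier(Level.HIGH, Level.MEDIUM, Level.HIGH))
# An internal knowledge chatbot: complex model, but low stakes and low reliance
print(overall_tier(Level.LOW, Level.HIGH, Level.LOW))
```

The design keeps the framework proportionate: a generative chatbot is complex, but low impact and low reliance stop it from being treated like a trading model.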
Going Deep Into Life Cycle Controls
A significant portion of the guideline focuses on AI life cycle controls. MAS expects institutions to build robust boundaries around AI systems from the start.
Data quality is the first foundation. Training and inference data must be representative, protected and governed properly. Poor data leads directly to skewed outcomes, so the AIRG encourages institutions to document how they reduce these risks.
Fairness is closely linked. Institutions must define fairness for each use case and assess whether the system treats customers equitably. Underwriting, pricing, and eligibility decisions require the strictest oversight.
Explainability comes next. High-impact models need human-understandable explanations for their decisions, and customer-facing use cases may require disclosures about the use of AI.
Human involvement remains essential even in automated environments. Staff must be able to supervise AI, intervene when necessary and avoid automation bias. Effective oversight needs real authority and technical understanding.
Third-party AI tools receive particular attention because institutions increasingly rely on external models and APIs.
MAS expects firms to examine vendor practices, understand model lineage, assess security risks and consider the implications of many institutions relying on similar foundation models.
Testing forms one of the most detailed sections of the AIRG. Systems should be tested across performance, stability, fairness and robustness.
Subpopulation analysis matters, and high-risk AI must undergo independent validation. Documentation should allow auditors to reproduce results.
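
As a toy illustration of subpopulation analysis, the sketch below compares approval rates across two groups on held-out test data. The group labels, data and 20 percent gap threshold are invented for the example; consistent with the guideline, the fairness definition is the institution's to choose.

```python
from collections import defaultdict

# Held-out test results: (subpopulation, was the application approved?)
results = [
    ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", False), ("group_b", False), ("group_b", True),
]

counts = defaultdict(lambda: {"approved": 0, "total": 0})
for group, approved in results:
    counts[group]["total"] += 1
    counts[group]["approved"] += approved

rates = {g: c["approved"] / c["total"] for g, c in counts.items()}
print(rates)

# Illustrative check: flag if approval rates diverge by more than 20%
if max(rates.values()) - min(rates.values()) > 0.20:
    print("Approval-rate gap exceeds threshold: escalate for fairness review")
```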
Monitoring continues after deployment. Institutions need mechanisms to detect drift, anomalies and shifts in behaviour. Early warning triggers and the ability to deactivate systems are part of the expectation.
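
A common way to detect such drift is the Population Stability Index (PSI), which compares the live score distribution against the one seen at validation. The thresholds below (0.10 watch, 0.25 alarm) are conventional industry assumptions; the AIRG itself does not fix numeric triggers.

```python
import numpy as np

def population_stability_index(expected: np.ndarray, actual: np.ndarray,
                               n_bins: int = 10) -> float:
    """PSI between a baseline score distribution and a live one.

    Bins are decile edges of the baseline; live values are clipped into
    the baseline range so every observation lands in a bin.
    """
    edges = np.percentile(expected, np.linspace(0, 100, n_bins + 1))
    actual = np.clip(actual, edges[0], edges[-1])
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    e_pct = np.clip(e_pct, 1e-6, None)   # avoid log(0)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
baseline = rng.normal(600, 50, 10_000)   # scores at validation time
live = rng.normal(585, 60, 10_000)       # scores observed this month

psi = population_stability_index(baseline, live)
if psi > 0.25:     # assumed alarm threshold
    print(f"PSI={psi:.3f}: drift alarm, escalate and consider deactivation")
elif psi > 0.10:   # assumed watch threshold
    print(f"PSI={psi:.3f}: place on watch list and investigate")
else:
    print(f"PSI={psi:.3f}: stable")
```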
Change management rounds off the life cycle.
Models evolve through fine-tuning, retraining and updates. Institutions must determine when changes count as significant and require another round of validation.
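
In practice this often boils down to a materiality test applied before redeployment. The sketch below is one assumed way to frame it; the AIRG asks institutions to define significance themselves, not to use these specific criteria or thresholds.

```python
def requires_revalidation(architecture_changed: bool,
                          training_data_changed: bool,
                          old_auc: float, new_auc: float,
                          auc_tolerance: float = 0.02) -> bool:
    """Illustrative materiality test for a model update (criteria assumed).

    Structural or data-source changes always trigger revalidation;
    otherwise the update passes only if performance stays within tolerance.
    """
    if architecture_changed or training_data_changed:
        return True
    return abs(new_auc - old_auc) > auc_tolerance

# A routine retrain on the same pipeline with stable performance
print(requires_revalidation(False, False, old_auc=0.81, new_auc=0.80))  # False
# Fine-tuning that swaps in a new data source
print(requires_revalidation(False, True, old_auc=0.81, new_auc=0.83))   # True
```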
Focusing On Internal Capabilities And Talent
A strong framework still depends on the people running it. MAS highlights the need for adequate resources, both in terms of technical capability and domain expertise.
Data scientists, model validators, risk specialists and IT professionals all need to play a part. Institutions should not assume vendors will fill every gap.
MAS has opened consultation until January 2026 and plans a twelve-month transition period once the guideline is finalised.
Institutions still have time to adapt, but the direction is clear: AI is moving into critical roles, and supervision needs to keep pace.
Singapore aims to position itself as a global benchmark for AI governance in finance, and the AIRG will likely influence how other markets approach the same challenges.
Firms that adjust early will unlock the benefits of AI with far greater confidence, while those that delay may struggle to retrofit sound governance onto systems that are already deeply embedded.

Featured image: Edited by Fintech News Singapore based on an image by mohammadhridoy_11 via Freepik.



