Why Model Risk Management Cannot Fully Own AI Risk
A growing number of firms are trying to solve AI governance by assigning AI risk to model risk management. It is understandable. It is also incomplete.
Model risk management is one of the most mature controls disciplines many regulated firms already have. It comes with governance structures, validation practices, challenge mechanisms, documentation standards, and clear accountability. When AI starts to spread across the enterprise, it is natural to ask whether model risk can simply expand its remit and take ownership.
For some parts of AI, that works well. For enterprise AI as a whole, it does not.
The problem is not that model risk management is irrelevant. The problem is that modern AI creates a broader set of exposures than traditional model governance was designed to handle. If firms force all AI risk into the model risk box, they will miss some of the systems, behaviours, and control failures that matter most.
Why firms keep putting AI under model risk
There are good reasons firms take this route.
Model risk management already has credibility with regulators, boards, and internal control functions. It is used to working with technical systems that influence important outcomes. It has a language for validation, challenge, limitations, monitoring, and evidence. Compared with building a new AI governance structure from scratch, it looks like the fastest and least disruptive option.
In some organisations, it is also the only function that feels remotely prepared. That is why AI often gets pulled into model governance by default. It feels more disciplined than leaving it with business teams, innovation groups, or generic technology committees. But convenience should not be confused with fit.
Where model risk management is genuinely useful
It is important to be precise here. Model risk management absolutely has a role in AI governance. In many firms, it should be a major part of it.
Where AI systems behave like models in the classic sense, model risk brings real strengths. It helps evaluate assumptions, performance, drift, limitations, documentation quality, intended use, escalation thresholds, and the need for challenge and review. It provides a discipline for testing whether a system is doing what the organisation thinks it is doing.
That matters, and it should not be minimised. Involving model risk is not the mistake. The mistake is assuming model risk can fully own the entire enterprise AI problem.
What model risk management can miss
Modern enterprise AI is not limited to formally developed models with clear validation cycles. It includes:

- copilots embedded inside software platforms
- third-party AI features switched on by vendors
- workflow automation built by operations teams
- internal assistants built with external APIs
- agentic systems that can call tools or trigger actions
- generative systems that influence behaviour without fitting neatly into a model inventory
That is where the gap begins.
Model risk frameworks are often strongest where the system is identifiable, bounded, documented, and treated as a model from the outset. But much of enterprise AI enters through less formal routes. It may arrive through procurement, experimentation, shadow adoption, feature expansion, or workflow layering. It may be used operationally long before anyone classifies it as a model. And even where the underlying technology is a model, the real source of risk may sit elsewhere.
The issue may be permission sprawl in an agent. It may be the system's connection to sensitive data. It may be changing scope after deployment. It may be reliance on a third-party provider that updates the model without meaningful notice. It may be the combination of prompts, orchestration, retrieval logic, and tool access that creates the true exposure, rather than the model in isolation.
Those are not minor edge cases. They are increasingly the core of the enterprise AI landscape.
Why AI risk is broader than model performance
One of the reasons this distinction matters is that AI risk is not just about whether a model performs well. A system can be technically strong and still create serious governance problems.
- A summarisation assistant may be technically sound as a model yet still expose confidential information through insecure workflows.
- A customer-facing AI tool may not drift in the classic sense yet still create conduct risk through misleading outputs.
- An internal agent may not fail validation yet still overreach because its tool permissions quietly expanded.
- A vendor AI feature may never appear in model governance at all, yet still affect regulated activity, client communications, or operational decisions.
In other words, performance is only one dimension of control.
Modern AI risk also includes ownership, authority, scope, data exposure, dependency risk, resilience, traceability, explainability, misuse potential, and evidence quality. It cuts across technical and non-technical functions. That is why no single legacy control discipline can safely absorb all of it without blind spots.
Why AI risk cuts across multiple control functions
AI risk is inherently cross-functional. Model risk may own part of the picture. But operational risk, technology risk, cyber, privacy, compliance, legal, procurement, resilience, and business ownership all have material roles as well. That is not a governance inconvenience. It is a reflection of how AI actually behaves inside the firm.
A single AI system may raise questions about model performance, vendor dependency, data rights, customer harm, regulatory accountability, cyber exposure, and continuity of service all at once. Trying to force all of that into one traditional function may simplify the org chart, but it does not simplify the underlying risk.
This is why firms need a broader AI control approach, one that can see the whole system and coordinate across multiple control lenses rather than reducing everything to one inherited category.
What better ownership looks like
The right answer is not to remove model risk from the equation. It is to place it properly. Model risk should own what genuinely belongs in model risk. That may be a substantial part of the stack, especially in high-impact analytical or decisioning environments.
But enterprise AI governance needs a wider layer around that. It needs a structure that can identify AI systems even when they do not enter through formal model channels, classify them according to use case and exposure, assess them based on evidence, connect them to legal and regulatory obligations, and monitor how they change over time.
That broader layer is what prevents model risk from being asked to govern things it was never designed to see.
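To make that broader layer tangible, here is a minimal sketch of what an enterprise AI inventory record might capture, independent of whether the system entered through formal model channels. The `AISystemRecord` fields, the `EntryRoute` values, and the `needs_escalation` triage rule are all illustrative assumptions, not a reference schema or a prescribed framework.

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum


class EntryRoute(Enum):
    """How the system entered the firm; many routes bypass model governance."""
    MODEL_DEVELOPMENT = "model_development"
    PROCUREMENT = "procurement"
    VENDOR_FEATURE = "vendor_feature"
    SHADOW_ADOPTION = "shadow_adoption"
    WORKFLOW_LAYERING = "workflow_layering"


@dataclass
class AISystemRecord:
    """Illustrative inventory record for the broader AI governance layer.

    Captures exposures beyond model performance: ownership, entry route,
    data access, third-party dependency, agent scope, and change over time.
    """
    name: str
    business_owner: str
    entry_route: EntryRoute
    use_case: str
    data_sensitivity: str               # e.g. "public", "internal", "client-confidential"
    third_party_provider: str | None    # dependency risk if the vendor updates silently
    tool_permissions: list[str] = field(default_factory=list)  # agent scope, watched for sprawl
    regulatory_obligations: list[str] = field(default_factory=list)
    in_model_inventory: bool = False    # may be False even for regulated activity
    last_reviewed: date | None = None


def needs_escalation(record: AISystemRecord) -> bool:
    """Hypothetical triage rule: flag systems that touch sensitive data or
    regulated obligations but have never entered model governance."""
    exposed = record.data_sensitivity != "public" or bool(record.regulatory_obligations)
    return exposed and not record.in_model_inventory


# Example: a vendor copilot switched on outside formal model channels.
copilot = AISystemRecord(
    name="CRM summarisation copilot",
    business_owner="Client Services",
    entry_route=EntryRoute.VENDOR_FEATURE,
    use_case="Summarise client correspondence",
    data_sensitivity="client-confidential",
    third_party_provider="ExampleVendor",
    regulatory_obligations=["client communications rules"],
)
print(needs_escalation(copilot))  # True: sensitive scope, not in the model inventory
```

The point of a record like this is not the specific fields but the vantage point: it lets the firm see systems that would never surface through validation cycles, and route each one to the control functions it actually implicates.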
Final thought
The question is not whether model risk management should be involved in AI governance. It should.
The question is whether firms mistake one important control function for the whole answer. That is where the real risk sits. Because if AI risk is reduced to model risk alone, firms will govern what looks familiar and miss what matters most.