Key takeaways:

  • Banks have been operationalizing Model Risk Management (MRM) programs for decades, and lessons from their implementation can serve as a blueprint for leaders looking to stand up AI governance programs.
  • AI governance requirements are even broader than MRM from a risk, stakeholder and regulatory perspective.
  • AI governance platforms like FairNow can help organizations adopt best practices and get compliant quickly.

The Emergence of AI Governance

With new global regulations and increasing consumer scrutiny of AI products, AI governance is quickly becoming an expectation for companies looking to build or buy technology.

NYC Local Law 144 was one of the first AI regulations in the United States, entering into force in July 2023 and requiring bias audits of tools using AI in the hiring process. The Colorado AI Act and critical elements of the EU AI Act will enter into force in 2026, covering a wider set of AI builders and users with deeper compliance requirements. And regulatory bodies like the EEOC have the power to set rules about how AI can be used in their domains.

Even where AI governance is not yet legally required, stakeholder expectations are driving requirements as customers look for assurance that AI is developed and used safely. 

These new requirements will be extensive and daunting for those developing and deploying high-risk AI.

Companies will soon need to maintain an inventory of all their AI, put together detailed technical documentation, perform quantitative testing before and after releasing new tools, provide transparency to end users, and more.

But fortunately, there are industries that have been applying AI governance for decades. Studying the practices of these companies can help today’s organizations prepare for what’s to come.

Model Risk Management (MRM) in Financial Services

Financial services is perhaps the best example.

As statistical and mathematical models took a more central role in operations and risk management at banks in the 1980s and 1990s, the importance of validating those models became better understood. This point was further reinforced by the 1998 collapse of hedge fund Long-Term Capital Management and the 2007-2008 financial crisis, both of which were caused in part by an overreliance on flawed models.

In 2011, in the wake of that crisis, the US Federal Reserve Board released SR 11-7 (Supervisory Guidance on Model Risk Management), which raised expectations for how banks validate and govern their models.

SR 11-7 in the US, along with analogous regulations like OSFI E-23 (Canada) and SS 3/18 (UK), serves as the blueprint for banks’ Model Risk Management (MRM) programs.

Model Risk Management (MRM) programs are founded on three key pillars:

  • Model development, implementation and use: The use, design and theory behind the development of every model must be documented and supported by evidence. All data used to develop or validate the model must be rigorously evaluated for quality, representativeness and completeness. All models must be tested for accuracy and stability.
  • Model validation: The bank must verify that models are performing as expected. To avoid conflicts of interest, validation must be handled by independent validators, outside the reporting line of the model development team. The scope and depth of model validation should correspond to the model’s potential for risk – more critical models receive deeper diligence.
  • Governance, policies and controls: Model risk management is the responsibility of the board of directors and should be owned by senior management. Banks must formalize policies and procedures for their MRM program, covering the other two pillars. The program should establish clear roles and responsibilities that clarify accountability. 

7 Insights from Banks’ Model Risk Management (MRM) Playbook

There is a substantial amount that we can learn from banks’ implementation of Model Risk Management (MRM) programs. Below are the key takeaways that leaders adopting AI governance should consider:

#1 Focus on governance process and policy

Establish policy and procedure documents that comprehensively describe AI governance at your organization. Identify who is accountable, for what, and at what step in the AI lifecycle. List out what steps must be taken to deploy a model, how often the model should be monitored, what documents and artifacts should be retained and other critical functions and workflows.
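As an illustration, here is a minimal sketch of how those policy elements could be encoded as structured data so that accountability and required artifacts can be checked programmatically. The role names, lifecycle steps and retention period below are illustrative assumptions, not drawn from any specific regulation:

```python
# Illustrative sketch: a policy-and-procedure document expressed as data.
# Roles, lifecycle steps and artifact names are hypothetical examples.
AI_GOVERNANCE_POLICY = {
    "lifecycle_steps": [
        {"step": "design",     "accountable_role": "model_owner",
         "required_artifacts": ["use_case_description", "data_sources"]},
        {"step": "validation", "accountable_role": "independent_validator",
         "required_artifacts": ["test_report", "bias_assessment"]},
        {"step": "approval",   "accountable_role": "ai_governance_committee",
         "required_artifacts": ["approval_memo"]},
        {"step": "monitoring", "accountable_role": "model_owner",
         "review_frequency_days": 90},
    ],
    "artifact_retention_years": 7,
}

def missing_artifacts(step_name: str, submitted: set[str]) -> set[str]:
    """Return the required artifacts not yet provided for a lifecycle step."""
    step = next(s for s in AI_GOVERNANCE_POLICY["lifecycle_steps"] if s["step"] == step_name)
    return set(step.get("required_artifacts", [])) - submitted
```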

Getting buy-in from senior leadership is important. Ensure that your AI governance program is regularly assessed for effectiveness and that feedback is incorporated to improve it over time.

Set up an appropriate system of roles and accountability. Training and learning are key, especially as AI continues to evolve rapidly. Staff should be sufficiently trained and empowered to carry out their roles effectively. 

#2 Inventory your AI and perform risk assessments

You can’t govern your AI without identifying and understanding your AI. Track all your AI models in one centralized location and collect relevant metadata about their design and usage. Perform risk assessments that classify each model’s risk level (e.g., high, medium or low) and identify the specific risks it poses.
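A minimal sketch of what a centralized inventory record might look like follows; the fields, risk levels and example entry are illustrative assumptions rather than a prescribed schema:

```python
from dataclasses import dataclass, field
from datetime import date

# Illustrative sketch of an AI inventory record; adapt fields to your own schema.
@dataclass
class AIInventoryEntry:
    system_name: str
    owner: str
    vendor: str | None            # None if built in-house
    intended_use: str
    data_sources: list[str]
    risk_level: str               # e.g. "high", "medium" or "low"
    identified_risks: list[str] = field(default_factory=list)
    last_risk_assessment: date | None = None

inventory = [
    AIInventoryEntry(
        system_name="resume-screening-model",
        owner="talent-acquisition",
        vendor="ExampleVendor",   # hypothetical vendor name
        intended_use="Rank inbound applicants for recruiter review",
        data_sources=["applicant_tracking_system"],
        risk_level="high",
        identified_risks=["hiring bias", "lack of explainability"],
        last_risk_assessment=date(2025, 1, 15),
    ),
]
```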

#3 Upskill your teams

Staff involved in building and using AI will need to be trained to identify and mitigate risks and to demonstrate understanding of the organization’s AI governance practices. Similarly, staff involved in risk management will need to understand the technical fundamentals of AI in order to manage the technology’s risks capably.

#4 Adopt a risk-based approach

It takes time, attention and resources to validate AI, and all of these are in limited supply. It makes sense to spend them where the risks are greatest. Your AI governance program should use a risk-leveling approach and define an appropriate level of scrutiny based on the likelihood and severity of the risks posed.
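One common way to implement risk leveling is to score likelihood and severity and map the result to a tier with a corresponding level of scrutiny. The scales, cutoffs and controls below are illustrative assumptions, not a regulatory formula:

```python
# Illustrative sketch: map 1-5 likelihood and severity scores to a risk tier,
# and tie each tier to a depth of review. Thresholds are hypothetical.
REVIEW_DEPTH = {
    "high":   {"independent_validation": True,  "bias_testing": True,  "review_frequency_days": 90},
    "medium": {"independent_validation": True,  "bias_testing": False, "review_frequency_days": 180},
    "low":    {"independent_validation": False, "bias_testing": False, "review_frequency_days": 365},
}

def risk_tier(likelihood: int, severity: int) -> str:
    """Combine likelihood and severity scores into a risk tier."""
    score = likelihood * severity
    if score >= 15:
        return "high"
    if score >= 6:
        return "medium"
    return "low"

tier = risk_tier(likelihood=4, severity=5)   # -> "high"
controls = REVIEW_DEPTH[tier]
```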

#5 Demonstrate soundness of AI both before and after deployment

It’s critical to demonstrate your AI is performing as expected and has mitigated risks appropriately. This should happen both before and after the model is deployed.

Your AI governance program should define how often models should be monitored and what options should be considered when models are underperforming or deemed to be too risky.
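A minimal sketch of what a scheduled monitoring check might look like follows; the metrics, thresholds and escalation actions are illustrative assumptions that your own policy would define up front:

```python
# Illustrative sketch of a post-deployment monitoring check.
# Metric names and thresholds are hypothetical examples.
MONITORING_THRESHOLDS = {
    "accuracy_min": 0.85,              # performance floor before escalation
    "selection_rate_ratio_min": 0.80,  # flag potential adverse impact (four-fifths rule)
}

def evaluate_model_health(metrics: dict[str, float]) -> list[str]:
    """Return the policy actions triggered by the latest monitoring run."""
    actions = []
    if metrics["accuracy"] < MONITORING_THRESHOLDS["accuracy_min"]:
        actions.append("escalate: performance below threshold; consider retraining or rollback")
    if metrics["selection_rate_ratio"] < MONITORING_THRESHOLDS["selection_rate_ratio_min"]:
        actions.append("escalate: possible disparate impact; trigger bias review")
    return actions

evaluate_model_health({"accuracy": 0.81, "selection_rate_ratio": 0.92})
```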

#6 Maintain documentation and records

Maintain records of all artifacts and activities related to AI governance. These are essential to measure and evaluate the effectiveness of the AI governance program, whether by an internal assessor, an auditor or a government agency. 

You should also consider logging all model activity (user inputs, model outputs and other usage metadata). In addition to being a legal requirement under some statutes, recording this data is critical for monitoring AI and identifying risks.
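A minimal sketch of structured activity logging is shown below; the field names are illustrative, and sensitive inputs should be redacted or hashed according to your privacy obligations:

```python
import json
import logging
from datetime import datetime, timezone

# Illustrative sketch: record every model call as a structured log entry.
logger = logging.getLogger("ai_activity")

def log_model_call(model_id: str, user_id: str, inputs: dict, outputs: dict) -> None:
    """Write one structured record of a model invocation (hypothetical fields)."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,
        "user_id": user_id,
        "inputs": inputs,      # redact or hash sensitive content as required
        "outputs": outputs,
    }
    logger.info(json.dumps(record))
```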

#7 Start holding vendors accountable

As users of AI, organizations are often still liable for outcomes even when the AI was developed by a third party. This point is made explicit in SR 11-7. It’s critical to vet vendors thoroughly and build appropriate terms into your contracts. Learn what to ask and what good answers sound like by reviewing our list of questions to ask your AI vendors.

AI Governance Will Be Even Broader than Traditional Model Risk Management

Executing an AI governance program is likely to become even more complex for organizations than Model Risk Management (MRM).

  • AI governance covers a broader set of risks than bank MRM. Modern AI governance considers bias, cybersecurity, adversarial misuse and more, and many AI regulators pay special attention to the ethical development and use of the technology. SR 11-7, by contrast, is focused primarily on the business and financial risk that comes from faulty models; as the 2008 financial crisis showed, the financial health of one bank influences the health of others.
  • Given the multifaceted set of risks covered by AI regulation, we expect many more stakeholders to be involved than in a typical Model Risk Management program. These may include technical experts and data scientists, as well as technology and security experts; legal and compliance experts; and human resources. An effective AI governance program will bring together experts with diverse perspectives and expertise.
  • While Model Risk Management is broadly covered by regulations like SR 11-7, AI is already subject to a much more complex matrix of regulations spanning jurisdictions and subject areas. Keeping track of the varying requirements of standards and laws like ISO 42001, the NIST AI RMF, NYC LL144, Colorado SB205, the EU AI Act and more will require careful coordination on the part of AI teams.

Conclusion

We recommend starting the process of AI governance early.

We’ve seen from experience that organizations that need to retrofit governance onto an existing AI portfolio have a much harder time than those that build and buy AI systems with governance in mind from the start.

By applying lessons from banks that have adopted Model Risk Management over the past decades, organizations can ensure that the AI they build and deploy is safe, responsible and compliant by:

  • Tracking AI adoption across the organization
  • Centralizing AI risk assessments and the review process
  • Flagging risks and compliance concerns for proposed use cases
  • Operationalizing AI governance approval workflows
  • Automating and streamlining monitoring of existing AI tools

Looking to simplify AI governance for your organization? Request a free demo.

Explore the leading AI governance platform