Financial Services Model Risk · April 27, 2026 · 5 min read

SR 26-02 Replaced SR 11-7 — and Left Your GenAI Ungoverned

On April 17, 2026, bank regulators replaced SR 11-7, the model risk framework that had stood for 15 years. They explicitly carved out generative and agentic AI. Here's what that means for your compliance team.


On April 17, 2026, the Federal Reserve's SR 26-02 and OCC Bulletin 2026-13 officially replaced SR 11-7 — the model risk management guidance that had governed bank models, AI included, since 2011. For compliance teams at institutions over $30B in assets, this was a long-anticipated update.

There's a catch: generative AI and agentic AI are explicitly excluded from SR 26-02's scope.

Read that again. The framework your examiners will use to assess model risk management — the one that just replaced a 15-year-old standard — does not cover the category of AI your employees are most actively using right now.

What SR 26-02 Does Cover

SR 26-02 governs traditional quantitative models: credit scoring, stress testing, fraud detection, algorithmic trading. The disciplines of model development, independent validation, ongoing monitoring, and change management that SR 11-7 established carry forward. That's good. Your model risk team knows how to work inside that framework.

What It Doesn't

Every employee at your institution who opens ChatGPT, Microsoft Copilot, or Claude to draft a client memo, summarize a contract, or query internal data sits outside SR 26-02's scope — by design. The regulators acknowledged the gap explicitly. They're working on sector-specific guidance for GenAI. That guidance doesn't exist yet.

In the interim, your employees are using these tools today. Prompts containing client PII, MNPI, confidential deal terms, and proprietary trading strategies are flowing to third-party model providers right now — outside any SR 26-02 validation regime, outside any formal governance structure, and outside your audit log.

The Framework Vacuum Creates Exam Risk

OCC examiners don't stop caring about AI risk just because a framework carved it out. The examination guidance in OCC Bulletin 2026-13 still expects institutions to have a coherent approach to AI risk. "We're waiting for the GenAI-specific framework" is not a satisfactory answer during an examination.

Institutions that can demonstrate real-time policy enforcement — what employees are allowed to submit to which AI tools, enforced at the point of use, with a tamper-evident audit trail — are in a materially better position than institutions with a policy in a drawer.
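"Tamper-evident" usually means the audit log is structured so that altering any past entry is detectable. A common construction is a hash chain, where each entry commits to the hash of the one before it. The sketch below is illustrative only — it shows the general technique, not Containment.AI's actual implementation, and the entry fields are hypothetical:

```python
import hashlib
import json

GENESIS = "0" * 64  # placeholder hash for the first entry's predecessor

def append_entry(log, entry):
    """Append an audit entry whose hash chains to the previous entry.

    Modifying any earlier entry changes its hash, which breaks every
    link after it -- that is what makes the log tamper-evident.
    """
    prev_hash = log[-1]["hash"] if log else GENESIS
    payload = json.dumps({"entry": entry, "prev_hash": prev_hash},
                         sort_keys=True).encode()
    log.append({
        "entry": entry,
        "prev_hash": prev_hash,
        "hash": hashlib.sha256(payload).hexdigest(),
    })

def verify_chain(log):
    """Recompute every link; True only if no entry was altered."""
    prev_hash = GENESIS
    for record in log:
        payload = json.dumps({"entry": record["entry"],
                              "prev_hash": prev_hash},
                             sort_keys=True).encode()
        if record["prev_hash"] != prev_hash:
            return False
        if record["hash"] != hashlib.sha256(payload).hexdigest():
            return False
        prev_hash = record["hash"]
    return True
```

Production systems typically anchor the chain head in an external store (or a write-once medium) so an attacker can't simply rewrite the whole log, but the chaining idea is the core of the control.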

Where Vanta and Drata Stop

Compliance automation tools like Vanta and Drata are built around the same paradigm as SR 26-02: evidence collection, control mapping, and audit readiness for defined frameworks. They're excellent at helping you pass a SOC 2 or ISO 27001 audit.

They don't watch what your employees submit to ChatGPT. They don't enforce a policy that blocks a banker from pasting a client's NPI into a public LLM. They don't produce an immutable log of which employee, on which device, submitted which prompt to which model — at the moment it happened.

That's a different control layer entirely.

The August 2026 Pressure

The EU AI Act's remaining provisions — including requirements for high-risk AI systems used in financial decisions — become enforceable on August 2, 2026. For institutions with EU operations or EU-based clients, that's 97 days away. The Act requires organizations to "implement automated guardrails enforcing policies at runtime."

Runtime enforcement. Not post-hoc review. Not annual policy attestations. Guardrails that fire at the moment an employee interacts with an AI system.

What Containment.AI Does

Containment.AI operates at the layer SR 26-02 doesn't cover: the real-time interface between your employees and AI tools. Our proxy and browser extension enforce your organization's AI policies before a prompt leaves your network — blocking submissions that contain PII, MNPI, or other sensitive data classes you define. Every interaction is logged with full context: user, device, AI tool, timestamp, policy result.
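Conceptually, that enforcement point is a function sitting between the employee and the model provider: inspect the outbound prompt, decide allow or block against defined data classes, and emit an audit record either way. A minimal sketch follows — the function name, the record fields, and the two regex patterns are all assumptions for illustration; a real deployment would use tuned detectors and institution-defined policies, not two regexes:

```python
import re
import time

# Illustrative patterns only -- stand-ins for real sensitive-data classes.
POLICIES = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),          # US Social Security number
    "account_number": re.compile(r"\b[A-Z]{2}\d{10,16}\b"),  # hypothetical account format
}

def evaluate_prompt(user, device, tool, prompt):
    """Decide allow/block before the prompt leaves the network,
    and build an audit record with full context either way."""
    violations = [name for name, pattern in POLICIES.items()
                  if pattern.search(prompt)]
    decision = "block" if violations else "allow"
    audit_record = {
        "user": user,
        "device": device,
        "tool": tool,
        "timestamp": time.time(),
        "decision": decision,
        "violations": violations,
    }
    return decision, audit_record
```

A prompt like "Client SSN is 123-45-6789" sent to any tool would be blocked and logged with the policy that fired; a clean prompt passes through but still generates a record, which is what makes the audit trail complete rather than exception-only.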

When your examiner asks what controls you have over employee use of generative AI, the answer is a live dashboard and an immutable audit log — not a policy document.

The SR 26-02 gap is real. The exam risk is real. The tools to close it exist today.


Containment.AI enforces AI governance policies in real time — at the proxy layer, in the browser, and in the admin dashboard. Built for regulated enterprises navigating the GenAI governance gap.

Ready to close the gap?

Talk to us about runtime AI governance for regulated environments.

Schedule a Conversation →