In April 2026, IBM announced FedRAMP Moderate authorization for 11 AI and automation products, including watsonx.governance. Oracle achieved DISA IL5 and FedRAMP High for its Generative AI platform. ServiceNow's Moveworks AI Assistant reached FedRAMP Moderate. The rush to authorize AI tools for federal use is real, accelerating, and — for CISOs and ISSOs managing compliance programs at defense contractors — creating a false sense of security.
Here's the problem: FedRAMP authorizes infrastructure. It does not govern runtime behavior.
What FedRAMP Actually Covers
FedRAMP (the Federal Risk and Authorization Management Program) certifies that a cloud service meets a baseline of security controls covering confidentiality, integrity, and availability. A FedRAMP Moderate or High authorization means the vendor passed an assessment against NIST SP 800-53 controls: access control, audit logging, encryption, incident response, configuration management.
What it does not cover: what your employee types into the authorized AI. What your autonomous agent does once it has access. Whether a prompt contains CUI. Whether a response inadvertently exfiltrates regulated data. Whether an agent operating inside your IL5 perimeter takes an action that violates your organizational policy.
FedRAMP authorizes the container. Governance has to cover what's inside it — at runtime, at the moment of every interaction.
FedRAMP is now specifically prioritizing authorization of AI-based cloud services designed for routine federal worker use. That's the right call. It still doesn't solve the runtime enforcement problem.
The IL5 Gap Is Especially Sharp
DISA Impact Level 5 goes beyond FedRAMP High in several areas: data isolation between tenants, physical infrastructure requirements, and handling of Controlled Unclassified Information (CUI) and National Security Systems (NSS) workloads.
Organizations operating at IL5 face a specific challenge with AI deployments: the model is authorized, but the policy enforcement layer is optional. When a defense analyst uses an IL5-authorized AI assistant to summarize a briefing, the underlying compute is compliant. But if that analyst includes NSS-designated content in the prompt, the model processes it — and depending on the architecture, that content may cross a boundary it shouldn't.
FedRAMP 20x, rolling out broadly in Q3–Q4 FY2026, moves toward continuously generated, machine-readable compliance evidence. That is a significant improvement. It still doesn't tell the AI system to block a specific category of prompt in real time. No FedRAMP control does that; that's not what FedRAMP is for.
What "Authorized" Doesn't Mean
Authorization is a point-in-time determination against a security baseline. An AI tool receives FedRAMP authorization based on its architecture and controls — not on how it will be used at runtime once deployed. Once authorized and operational, the system processes whatever inputs it receives.
Consider what this means for a defense contractor with 500 employees using a FedRAMP High AI assistant:
- Analyst A pastes a paragraph from a CUI-marked document into the AI to get a summary. The AI summarizes it. FedRAMP authorization: intact. Compliance posture: compromised.
- Analyst B runs an AI agent to automate a procurement workflow. Given broad tool access, the agent queries a system outside its intended scope. The AI platform: authorized. The action: a policy violation with no audit trail that satisfies your ISSO.
- Analyst C asks the AI to help draft content that will be used in a context with different classification requirements. Infrastructure: compliant. Behavior: not governed.
None of these scenarios break the FedRAMP authorization. All of them create audit exposure.
The Governance Layer That's Missing
The gap is runtime policy enforcement — a deterministic layer that sits between the user (or agent) and the AI service, evaluates each interaction against your organization's specific policies, and produces a logged decision: allow, block, redact, or escalate.
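What might that look like concretely? Here is a minimal sketch in Python, under heavy assumptions: the rule set, the CUI marking pattern, and the PolicyDecision shape are all illustrative, not a description of any particular product's implementation.

```python
import re
from dataclasses import dataclass
from datetime import datetime, timezone

# Illustrative rule source. A real deployment would load rules from a
# reviewed, version-controlled policy store, not hardcode them.
CUI_BANNER = re.compile(r"\bCUI\b(//[A-Z][A-Z-]*)*")  # e.g. "CUI" or "CUI//SP-CTI"

@dataclass
class PolicyDecision:
    action: str     # "allow" | "block" | "redact" | "escalate"
    rule_id: str    # the deterministic rule that fired
    reason: str     # human-readable justification for the audit trail
    timestamp: str  # UTC time of the decision

def evaluate_prompt(prompt: str) -> PolicyDecision:
    """Evaluate one interaction against organizational policy.

    Deterministic: the same input always produces the same decision,
    and every decision names the rule that produced it.
    """
    now = datetime.now(timezone.utc).isoformat()
    if CUI_BANNER.search(prompt):
        return PolicyDecision(
            action="block",
            rule_id="rule-014-cui-banner",
            reason="prompt contains a CUI banner marking",
            timestamp=now,
        )
    return PolicyDecision(
        action="allow",
        rule_id="rule-000-default-allow",
        reason="no policy rule matched",
        timestamp=now,
    )
```

In practice a layer like this runs inline, as a proxy in front of the model endpoint, so that block, redact, and escalate outcomes take effect before anything reaches the AI service.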
This is different from vendor-provided "safety filters" or model-level content moderation. Those are probabilistic controls useful for general safety — but they enforce the vendor's policies, not yours. They are not designed to block CUI patterns, enforce your organization's data-handling SOPs, or produce the kind of deterministic, auditable evidence that FedRAMP 20x Key Security Indicators demand.
Runtime policy enforcement means:
- Deterministic decisions. Every interaction produces a logged, policy-based outcome. Not "the model decided" — "rule #14 blocked this interaction because it matched CUI pattern X at 14:32:07Z."
- Your policy, not the vendor's. Your policies reflect your classification authorities, your contractual obligations, your ISSO's requirements — not OpenAI's usage guidelines.
- Audit-ready evidence. FedRAMP assessors and DISA reviewers increasingly expect continuous monitoring evidence. Runtime enforcement generates that evidence as a byproduct of normal operation — not as a bespoke reporting project.
- Agent governance. As autonomous AI agents enter federal workflows, the enforcement challenge scales. An agent that can read, write, query, and act needs the same governance controls as a human user — except it operates orders of magnitude faster and won't pause to ask whether something is permitted.
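To make that last point concrete, here is a hypothetical extension of the sketch above (reusing evaluate_prompt): every tool call an agent proposes passes through the same deterministic gate before it executes.

```python
from typing import Any, Callable

def governed_call(tool: Callable[..., Any], tool_name: str, *args: Any) -> Any:
    """Wrap an agent tool call so that no action executes ungoverned.

    Hypothetical pattern: serialize the proposed action, evaluate it
    with the same rules applied to user prompts, and proceed only on
    an explicit "allow".
    """
    proposed = f"{tool_name}({', '.join(repr(a) for a in args)})"
    decision = evaluate_prompt(proposed)  # same deterministic rule set
    if decision.action != "allow":
        # The agent will not pause to ask permission; the layer refuses for it.
        raise PermissionError(
            f"agent action denied by {decision.rule_id} "
            f"({decision.action}): {decision.reason}"
        )
    return tool(*args)
```

A wrapper like this also gives agent actions the audit trail Analyst B's scenario lacked: every refused call names the rule, the action, and the reason.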
What This Means for Your Compliance Program
If you're a CISO, ISSO, or Head of AI Governance at a defense contractor or federal agency, the FedRAMP authorization question is necessary — but it's the first question, not the last. The questions that follow:
- What happens at runtime when a user or agent violates policy?
- What evidence do I produce for continuous AI governance — not just at authorization time?
- How do I enforce policies on agents, not just users?
- When FedRAMP 20x KSI requirements demand automated compliance evidence, where does that evidence come from?
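One concrete answer to that last question, continuing the same illustrative sketch: the enforcement layer's decision log is the evidence stream. Appending one machine-readable record per interaction (JSON Lines here, purely as an example format) produces continuous evidence as a byproduct of normal operation.

```python
import json

def log_decision(decision: PolicyDecision, actor: str,
                 log_path: str = "decisions.jsonl") -> None:
    """Append one machine-readable evidence record per interaction.

    Illustrative format only: a production evidence pipeline would also
    sign or hash-chain records so assessors can verify integrity.
    """
    record = {
        "actor": actor,              # the user or agent identity
        "action": decision.action,
        "rule_id": decision.rule_id,
        "reason": decision.reason,
        "timestamp": decision.timestamp,
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
```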
FedRAMP authorization tells you the AI platform meets a security baseline. Runtime policy enforcement tells you how it's actually being used, and gives you the deterministic control to keep that use compliant.
The IBM watsonx authorization, Oracle's IL5 milestone, and ServiceNow's FedRAMP Moderate all signal that federal AI deployment is accelerating. The governance infrastructure needs to keep pace — and "the platform is authorized" is not the same as "we have governance."
Containment.AI provides deterministic, runtime AI governance designed for high-compliance environments, including on-premises and air-gapped deployments for organizations where cloud-only solutions are not an option. Built on formally verified, NSA-certified cross-domain solution technology, it's the enforcement layer that makes your authorized AI tools auditably governable. Learn more about our compliance approach or read the Containment by Design whitepaper.