Tomorrow — April 28 — NIST kicks off the first of three virtual working sessions on its Cybersecurity Framework Profile for Artificial Intelligence (NISTIR 8596), with follow-up sessions on May 5 and May 12.
If you run AI governance, security, or compliance at a regulated enterprise, this is the standard you'll be held to. Here's a plain-language breakdown of what's in it, why the timing matters, and what to do before it's finalized.
What Is the NIST Cyber AI Profile?
NIST's Cyber AI Profile maps the widely adopted Cybersecurity Framework (CSF 2.0) onto AI-specific risk and defense scenarios. It covers three focus areas:
- Securing AI systems — identifying cybersecurity challenges when integrating AI into your infrastructure
- AI-enabled cyber defense — using AI to enhance your security posture, and understanding the limitations
- Thwarting AI-enabled cyberattacks — building resilience against AI-powered threats targeting your organization
The profile was built with input from more than 6,500 practitioners over a year-long process. It's not a theoretical document — it's a practical mapping of AI risk onto the frameworks your teams already use.
Why This Week's Working Sessions Matter
The comment period on the preliminary draft closed January 30, 2026. NIST is now refining the profile into an initial public draft. This week's working sessions are where the agency incorporates practitioner feedback and signals the direction of the final publication.
For enterprise AI governance teams, that means two things:
- The window to shape the standard is closing. If your organization has unresolved concerns about how AI agent governance, shadow AI, or browser-based AI tool usage maps to CSF 2.0 controls, the working sessions are still open for engagement.
- The compliance clock is already ticking. NIST's finalized agent-specific guidance (under the separate AI Agent Standards Initiative) is unlikely to arrive before 2027. Enterprises that wait for the completed standard before establishing AI governance frameworks will be playing catch-up — with real audit exposure during the gap.
The AI Agent Gap That's Not Getting Enough Attention
Separate from the Cyber AI Profile, NIST's Center for AI Standards and Innovation (CAISI) is running sector-specific listening sessions this month — targeting healthcare, financial services, and education — specifically on barriers to AI agent adoption. The SP 800-53 Control Overlays for Securing AI Systems (COSAiS) agent-specific overlays are still in active development as of April 2026.
Translation: there's a meaningful gap between where enterprise AI agent deployment actually is today and where NIST's finalized guidance will be when it arrives. Organizations deploying AI agents — copilots, autonomous workflows, browser-based AI tools — are operating in governance gray space right now.
What You Should Be Doing Before the Standard Finalizes
You don't need to wait for NIST to hand you a checklist. The AI RMF 1.0 GOVERN, MAP, MEASURE, and MANAGE functions give you the operational baseline today. Specifically:
- Inventory your AI surfaces. Which AI tools are your employees actually using? ChatGPT, Claude, Copilot, Gemini — and the dozens of browser-based tools that fly under IT radar?
- Map policy to enforcement. A written AI policy is not enforcement. Real enforcement means real-time monitoring at the point where data and AI meet — the browser, the API, the prompt.
- Build your audit trail now. When examiners start using the Cyber AI Profile as an evaluation lens, you'll need evidence of continuous monitoring — not a spreadsheet from last quarter.
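To make "policy mapped to enforcement" concrete, here is a minimal sketch of prompt-layer policy enforcement with a structured audit trail. Everything in it is illustrative: the rule names, the regex patterns, and the `evaluate_prompt` function are hypothetical stand-ins, not any vendor's or NIST's actual controls. Real deployments would use far richer detectors and an append-only audit store.

```python
import re
from datetime import datetime, timezone

# Hypothetical DLP rules -- placeholders, not production-grade detectors.
POLICY_RULES = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api_key": re.compile(r"\b(?:sk|AKIA)[A-Za-z0-9_-]{16,}\b"),
}

AUDIT_LOG = []  # stand-in for an append-only audit store


def evaluate_prompt(user: str, tool: str, prompt: str) -> str:
    """Check an outbound prompt against policy before it leaves the org.

    Returns "blocked" if any rule matches, "allowed" otherwise, and
    appends a structured audit record either way -- the record, not the
    decision alone, is what an examiner will ask to see.
    """
    violations = [name for name, rx in POLICY_RULES.items() if rx.search(prompt)]
    decision = "blocked" if violations else "allowed"
    AUDIT_LOG.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "tool": tool,
        "decision": decision,
        "violations": violations,
    })
    return decision


# An SSN in an outbound prompt is blocked and logged; a clean prompt
# is allowed but still produces an audit record.
print(evaluate_prompt("jdoe", "chatgpt", "Summarize the case for 123-45-6789"))  # blocked
print(evaluate_prompt("jdoe", "claude", "Draft a status update"))  # allowed
```

The design point the sketch illustrates: log every decision, not just the blocks. Continuous monitoring means the "allowed" traffic is evidenced too.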
Containment.AI enforces AI governance policies in real time at the proxy and browser layer — so every prompt, every AI tool interaction, every potential policy violation is logged, blocked, or flagged before data leaves your organization. That's the audit trail regulators are going to ask for.
The NIST Cyber AI Profile working session is tomorrow. Is your AI governance program ready for the standard it's defining?