At RSAC 2026, Vanta's Sr. Director of GRC, Khush Kashyap, said something that should stop every enterprise security leader cold: shadow AI is already "exponentially bigger" than shadow IT ever was.
He's right. And the implications are worse than most CISOs have had time to process.
The shadow IT analogy undersells the problem
Shadow IT was manageable because the tools were largely infrastructure — servers, SaaS subscriptions, unauthorized cloud storage. You could find them with an asset scanner, block the egress, remediate the policy gap. The blast radius of a rogue Dropbox folder was real but bounded.
Shadow AI is different in kind, not degree. Employees aren't spinning up unauthorized databases — they're pasting customer PII, unreleased financial data, internal legal strategy, and source code directly into ChatGPT, Claude, and Gemini in real time, every day, at scale. By the time a retrospective audit surfaces the pattern, the data is already in a model provider's training pipeline or accessible to the model's operators.
Kashyap described the core dynamic at RSAC: employees adopt AI tools faster than security teams can evaluate them, and open-source libraries are actively being used to circumvent human-in-the-loop controls. It's not ignorance — it's workarounds. That's a different adversarial posture than accidental shadow IT.
The CISO burnout paradox
Vanta named something else worth sitting with: the "CISO burnout paradox." AI, the technology that's supposed to make CISOs' lives easier, is currently making them harder, because the complexity it introduces is outpacing the efficiency gains it delivers.
This is structurally true. AI governance programs require new policies, new tooling, new vendor assessments, new training programs, and new audit evidence — all while the underlying threat surface expands faster than the governance infrastructure can keep up. Security teams are being asked to govern a moving target with frameworks built for static systems.
Where Vanta's answer falls short
Vanta's response to the shadow AI challenge is audit automation: context-aware agents that flag issues, suggest fixes, and help execute remediation inside their GRC platform. That's a meaningful improvement over spreadsheet-based compliance programs.
But audit automation is retrospective by design. It surfaces the gap after the policy has been violated, the document has been uploaded, the PII has been transmitted. For SOC 2 Type II readiness or ISO 27001 certification, that's fine — you're demonstrating a control framework existed over a time period. For actually preventing a data exfiltration event, it's too late.
The EU AI Act makes this distinction concrete. Article 12 requires high-risk AI systems to support automatic event logging over their lifetime, and deployers must retain those logs: records tied to the actual system behavior at the moment of processing, not reconstructions. Audit automation that infers what probably happened from downstream signals is unlikely to satisfy that evidentiary bar when high-risk obligations begin applying on August 2, 2026.
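The difference between a reconstructed audit trail and a tamper-evident one can be made concrete. Below is a minimal Python sketch of a hash-chained log (the class and field names are illustrative, not any particular product's implementation): each entry commits to the hash of the previous entry, so any retroactive edit breaks verification of the whole chain.

```python
import hashlib
import json
import time


class TamperEvidentLog:
    """Append-only log where each entry commits to the previous entry's
    hash, so modifying any past record invalidates everything after it."""

    GENESIS = "0" * 64

    def __init__(self):
        self.entries = []
        self._prev_hash = self.GENESIS

    def append(self, event: dict) -> dict:
        # Hash the record contents together with the previous hash,
        # chaining every entry to the full history before it.
        record = {
            "ts": time.time(),
            "event": event,
            "prev_hash": self._prev_hash,
        }
        payload = json.dumps(record, sort_keys=True).encode()
        record["hash"] = hashlib.sha256(payload).hexdigest()
        self._prev_hash = record["hash"]
        self.entries.append(record)
        return record

    def verify(self) -> bool:
        # Recompute every hash from the stored fields; any edit to a
        # past entry (or a reordering) causes a mismatch.
        prev = self.GENESIS
        for rec in self.entries:
            if rec["prev_hash"] != prev:
                return False
            body = {k: rec[k] for k in ("ts", "event", "prev_hash")}
            payload = json.dumps(body, sort_keys=True).encode()
            if hashlib.sha256(payload).hexdigest() != rec["hash"]:
                return False
            prev = rec["hash"]
        return True
```

A log built this way is checkable by a third party at audit time: hand over the entries and the verifier recomputes the chain, with no trust in the operator required.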
What real-time governance looks like
The architecture that actually closes the shadow AI gap works at the point of transmission, not after the fact:
Proxy-layer enforcement — every AI API call, regardless of which tool an employee is using, routes through a policy gateway that evaluates content before it leaves the organization. PII, source code, financial projections, legal strategy — blocked at the edge, not discovered in next quarter's audit.
Browser-layer interception — for consumer AI tools that employees access directly (ChatGPT, Claude.ai, Gemini), a browser extension enforces organizational policies at the point of submission, before the content reaches the model provider.
Tamper-evident audit logs — every policy evaluation, allow, and block is logged with cryptographic integrity at the moment it occurs. That's the Art. 12 evidentiary standard Vanta's retrospective audit trail can't meet.
Real-time policy visibility — admins see what AI tools are being used across the organization, what policies are being triggered, and which teams have the highest exposure — today, not at the next audit cycle.
This is what Containment.AI does. The CISO burnout paradox Vanta named at RSAC is real. The answer isn't faster retrospective auditing — it's stopping the leak before it happens.
The window is closing
With EU AI Act high-risk obligations applying from August 2, 2026, and US state AI laws (Colorado AI Act effective 2026, California following with multiple transparency and employment statutes) creating a patchwork of enforcement pressure, the window for "we'll figure it out at audit time" is closing fast.
CISOs who want to get ahead of the shadow AI problem — not just document it — need enforcement infrastructure in place before the next audit cycle starts. Retrospective compliance programs document history. Real-time governance controls the future.
Containment.AI is an AI governance platform that enforces organizational policies in real time — at the proxy layer, in the browser, and in the admin dashboard — before sensitive data leaves your organization. Learn more at containment.ai.