Two years ago, a SOC 2 Type II auditor reviewing your AI usage would ask whether your employees were trained on data-handling policies and whether your LLM vendor had a SOC 2 of their own. You'd hand over the vendor attestation, reference your AUP, and move on.
That is no longer what the audit looks like. In 2026, auditors at the major advisory firms are asking questions that your existing controls were never designed to answer: What inputs are your employees sending to AI systems? How do you know? What happens when a user submits content that violates your data-handling policy? What's your evidence that the control fired?
The underlying issue is structural, not procedural. SOC 2 Trust Services Criteria were written for human-operated software systems. Answering the auditor's new questions requires a different kind of control architecture: an AI governance platform.
The Three CC Controls Your Auditor Is Now Applying to AI
SOC 2 is built around the AICPA Trust Services Criteria. Three of the Common Criteria (CC) controls are increasingly being applied to AI usage in enterprise environments:
- CC6.1 — Logical and Physical Access Controls. The organization implements logical access security software, infrastructure, and architectures over protected information assets. When an employee submits a prompt containing customer PII or proprietary source code to an external LLM, a protected information asset is leaving your system boundary. If you cannot demonstrate a control that governs what leaves, and produce evidence that it fired, CC6.1 has a gap.
- CC7.2 — System Operations. The organization monitors system components for anomalies that are indicative of malicious acts or errors that could result in a security incident. Prompt injection attacks, data exfiltration via AI, and policy-violating queries are all system anomalies. If your monitoring doesn't reach the AI layer, CC7.2 doesn't either.
- CC8.1 — Change Management. The organization authorizes and approves changes prior to implementation. AI systems, particularly autonomous agents, can take actions — write code, send emails, query databases, call APIs — that constitute unapproved changes to your environment. The CC8.1 question is: what approved these actions?
The auditor's question isn't "did you buy an enterprise AI license?" It's "show me the control that governs runtime behavior — and the evidence that it worked."
What "Enterprise AI" Covers (and What It Doesn't)
Enterprise licensing from LLM vendors provides contractual data-handling guarantees, tenant isolation, and the vendor's own SOC 2 or ISO 27001 attestation. Those are meaningful. They cover the infrastructure. They do not cover what your users do inside it.
A vendor's SOC 2 attestation tells the auditor: this company processes data securely. It does not tell the auditor: your employee's interactions with this company's product are governed according to your policies. That second sentence requires controls you have to implement and maintain.
Consider the specific evidence gap: your employee submits a prompt that contains 15 rows of customer records from your CRM, attached as context to ask the AI for an outreach strategy. The LLM vendor's systems process this correctly and securely within their authorized boundary. Their controls work. Yours, specifically CC6.1, have a gap, because you have no evidence that any control evaluated the prompt and allowed, blocked, or redacted it before it left your boundary.
What an AI Governance Platform Actually Produces
An AI governance platform designed for SOC 2 evidence production operates at the proxy layer — between the user (or agent) and the LLM service — and generates a deterministic, logged decision for every interaction. Not a probabilistic safety filter, not a model-level guardrail: a policy evaluation that runs against your specific rules and produces structured evidence.
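To make "deterministic, logged decision" concrete, here is a minimal sketch of the evaluation step in Python, assuming a fixed, ordered rule set where the first matching rule decides. The rule names, patterns, and actions are illustrative, not any particular product's ruleset.

```python
import re
from dataclasses import dataclass

@dataclass
class Rule:
    rule_id: str         # stable identifier, citable in audit evidence
    category: str        # policy category (e.g. "pii", "source_code")
    pattern: re.Pattern  # deterministic match, not a probabilistic score
    action: str          # "block" or "redact"

# Illustrative rules only; a real ruleset is yours, not the vendor's.
RULES = [
    Rule("pii-ssn",   "pii", re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "block"),
    Rule("pii-email", "pii", re.compile(r"[\w.+-]+@[\w-]+\.\w{2,}"), "redact"),
]

def evaluate(prompt: str) -> dict:
    """Same input, same rules: same decision, every time."""
    for rule in RULES:  # fixed order; first match decides
        if rule.pattern.search(prompt):
            return {"decision": rule.action,
                    "rule_id": rule.rule_id,
                    "category": rule.category}
    return {"decision": "allow", "rule_id": None, "category": None}
```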
For each interaction, the evidence record should capture:
- Who. The authenticated user or agent identity, linked to your directory.
- What. The policy categories evaluated: PII detection, source code leakage, customer data patterns, prompt injection attempts, financial data, regulated content.
- Decision. Allow, block, or redact — with the specific rule that fired and the matched content category.
- When. Timestamp, session context, model endpoint.
- Retention. Audit log immutability and a retention window that satisfies your SOC 2 audit period — typically 12 months minimum for Type II.
This is the evidence the auditor is asking for when they request "evidence that your controls governing AI usage operated effectively during the audit period." A spreadsheet of AI usage stats, or a vendor-provided access log, does not satisfy this. A policy evaluation log that says "at 14:32:07Z, user alice@acme.com submitted a prompt; CC6.1 control evaluated; PII pattern matched; prompt blocked" does.
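Rendered as structured data, that same blocked interaction might produce a record like the one below. The field names, date, endpoint, and control mappings are assumptions about one plausible schema rather than a required format, and the hash-chaining at the end is one common approach to audit-log tamper evidence, not the only one.

```python
import hashlib
import json

# Illustrative evidence record for the blocked prompt described above.
record = {
    "timestamp": "2026-03-11T14:32:07Z",       # When (date is illustrative)
    "user": "alice@acme.com",                  # Who, linked to your directory
    "model_endpoint": "https://api.example-llm.com/v1/chat",  # hypothetical
    "categories_evaluated": ["pii", "source_code", "prompt_injection"],
    "matched_rule": "pii-ssn",                 # the specific rule that fired
    "decision": "block",                       # allow | block | redact
    "control_mapping": ["SOC2:CC6.1", "ISO27001:A.8.15"],
}

# One simple way to make the log tamper-evident: chain each record's
# hash to its predecessor, so any retroactive edit breaks the chain.
prev_hash = "0" * 64  # hash of the previous record (placeholder here)
serialized = json.dumps(record, sort_keys=True)
chain_hash = hashlib.sha256((prev_hash + serialized).encode()).hexdigest()
print(serialized)
print("chain hash:", chain_hash)
```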
The Autonomous Agent Complication
If your organization has started deploying AI agents — code assistants that push to production, customer-service bots that query your CRM, workflow agents that send emails — the CC8.1 question gets sharper. Agents don't ask permission. They act.
The question your auditor will ask is: what authorized these actions? If the agent writes a database query and executes it, CC8.1 wants to know whether that constitutes an approved change and who approved it. The honest answer for most organizations today is: nothing did, at the policy level. The agent was given broad tool access and operated within it.
An AI governance platform that covers agents applies the same policy-evaluation model to agent-initiated interactions that it applies to user-initiated ones. The agent's tool calls are evaluated against your policies before execution. The evidence record captures what the agent did, what was evaluated, and whether the action was permitted. That is the CC8.1 audit trail for AI agents.
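Sketched in the same terms as the evaluation example above, the gate can look like the following. `append_audit_log` is a hypothetical stand-in for the immutable log, the tool name and arguments are invented for illustration, and redaction handling is elided for brevity.

```python
def append_audit_log(record: dict) -> None:
    print(record)  # stand-in; a real platform writes to an immutable store

def guarded_tool_call(agent_id: str, tool: str, args: dict, execute):
    # Evaluate the proposed action exactly like a user prompt,
    # *before* it runs; this is the CC8.1 authorization step.
    verdict = evaluate(f"{tool} {args}")
    append_audit_log({
        "agent": agent_id, "tool": tool, "args": args,
        "decision": verdict["decision"], "matched_rule": verdict["rule_id"],
    })
    if verdict["decision"] == "block":
        raise PermissionError(f"blocked by rule {verdict['rule_id']}")
    return execute(**args)  # runs only if policy permitted it

# e.g.: guarded_tool_call("crm-agent-01", "send_email",
#                         {"to": "bob@example.com", "body": "..."}, send_email)
```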
ISO 27001 Annex A.8 Is Asking the Same Questions
For organizations pursuing ISO 27001:2022 certification, Annex A.8 (Technological Controls) has the same structural gap when applied to AI. A.8.15 (Logging) requires that events affecting information security are logged. A.8.23 (Web Filtering) requires that access to external websites be managed to reduce exposure to malicious content and other information security risk. A.8.28 (Secure Coding) requires that software development follows secure practices — including when that software is AI-generated.
The pattern is identical: the standard was written for human operators interacting with defined systems. AI usage expands the surface faster than the standard can iterate. A runtime AI governance layer that produces structured logs against defined policies satisfies the same evidence requirement across both SOC 2 and ISO 27001 — without requiring you to write a separate control for each standard.
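One way to get that single-control, dual-framework behavior is to tag each policy definition with the criteria it evidences in each framework, so one runtime evaluation emits evidence records for both. The schema below is an illustrative assumption, not a standard format.

```python
# A single policy, written once, mapped to the controls it evidences.
# All identifiers are illustrative.
POLICY = {
    "name": "block-customer-pii-egress",
    "applies_to": ["users", "agents"],   # same gate for both
    "action": "block",
    "evidence_for": {
        "soc2": ["CC6.1", "CC7.2"],      # the criteria discussed above
        "iso27001": ["A.8.15", "A.8.23"],
    },
}
```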
Getting Ahead of the Audit Conversation
The organizations best positioned for their 2026 SOC 2 Type II and ISO 27001 audits are not the ones that bought the most comprehensive LLM vendor package. They're the ones that deployed a governance layer on top of it — one that produces the evidence the auditor is going to ask for.
Three things to verify before your next audit window opens:
- Do you have a control that evaluates AI interactions against your policies at runtime? Not the vendor's policies — yours. If the answer is no, CC6.1 has a gap.
- Do you have structured, retained logs of AI governance decisions? Policy-evaluation records, not just access logs. If the answer is no, your auditor will ask for them and you won't have them.
- Does your governance control cover AI agents as well as human users? If your AI deployment includes any form of autonomous agent, and your governance platform doesn't evaluate agent actions, you have a CC8.1 exposure that will only grow as agent usage scales.
The auditor's job is to verify that your controls operated effectively. An AI governance platform's job is to make sure they did — and to produce the evidence that proves it.
Compliance posture for AI isn't about which LLM vendor you chose. It's about the governance layer you built on top of it — and whether that layer produces evidence your auditor can stand behind.
Containment.AI provides a proxy-layer AI governance platform designed to produce SOC 2- and ISO 27001-ready audit evidence as a byproduct of normal operations. Every user and agent interaction is evaluated against your policies in real time; every decision is logged in an immutable audit trail. Learn more about our compliance approach or read the Containment by Design whitepaper for the technical architecture.