August 2, 2026 is now less than 100 days away. For enterprises using high-risk AI systems, that date triggers the EU AI Act's core compliance obligations — including a logging requirement that is quietly becoming a serious legal exposure.
What Article 12 actually requires
Article 12 of the EU AI Act mandates that providers of high-risk AI systems implement automatic logging of events throughout the system's lifecycle. Articles 19 and 26 set a six-month minimum retention period for those logs — longer in some sectors.
The penalty for non-compliance: up to €15 million or 3% of worldwide annual turnover, whichever is higher.
So far, most compliance teams have treated this as a storage problem. Keep the logs. Retain them for six months. Done.
But there's a harder question buried in Article 12 that isn't getting enough attention: what happens when a regulator asks you to prove your logs weren't altered?
The tamper-evident gap
A recent analysis in Help Net Security (April 2026) put it plainly: application logs live on infrastructure someone controls. They can be edited or replaced without anyone noticing. Article 12 doesn't use the word "tamper-proof" — but if your logs can be silently altered and you can't demonstrate otherwise, their evidentiary value is zero.
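Article 12 doesn't prescribe a mechanism, but one common way to make a log tamper-evident is to chain each entry to the previous one with a cryptographic hash, so that altering or deleting any past entry breaks every hash that follows. Here is a minimal sketch of the idea in Python, not a description of any particular product's implementation:

```python
import hashlib
import json


def append_entry(log: list[dict], event: dict) -> dict:
    """Append an event to the log, chaining it to the previous entry's hash."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    body = {"event": event, "prev_hash": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    entry = {**body, "hash": digest}
    log.append(entry)
    return entry


def verify_chain(log: list[dict]) -> bool:
    """Recompute every hash; a silent edit to any earlier entry fails this check."""
    prev_hash = "0" * 64
    for entry in log:
        body = {"event": entry["event"], "prev_hash": entry["prev_hash"]}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["prev_hash"] != prev_hash or entry["hash"] != digest:
            return False
        prev_hash = entry["hash"]
    return True
```

Anchoring the latest hash somewhere outside the logging infrastructure (a write-once store, a periodic signed attestation) is what turns "we say the logs weren't altered" into something a third party can actually check.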
For high-risk AI systems, that's not a theoretical risk. It's a compliance failure waiting to happen at the moment a national market surveillance authority asks for documentation.
Where the audit automation tools fall short
Vanta and Drata — the two dominant compliance automation platforms — have both pivoted to what they call "agentic trust" and AI governance in 2025–2026. They automate evidence collection, draft policies, and run vendor reviews.
What they don't do is sit in the path of actual AI interactions and generate a tamper-evident record of what happened in real time.
That's a structural limitation, not a roadmap gap. These platforms are built to collect compliance artifacts after the fact — pulling evidence from connected systems, cross-mapping controls, and preparing audit packages. They are excellent at that. But Article 12 isn't asking for an audit package. It's asking for a log of what your AI system actually did, captured at the moment it happened, in a form that can withstand evidentiary scrutiny.
The runtime enforcement difference
Containment.AI operates at the proxy layer — sitting between your employees and the AI services they use. Every interaction that passes through the proxy generates a structured audit event at the time of the interaction: what was sent, what policy evaluated it, what the outcome was, and when.
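As an illustration of what such an enforcement-time record could look like, here is a minimal sketch; the field names and values are hypothetical, not Containment.AI's actual schema:

```python
from dataclasses import dataclass
from datetime import datetime, timezone
import hashlib


@dataclass(frozen=True)
class AuditEvent:
    """Illustrative enforcement-time record; field names are assumptions, not a published schema."""
    timestamp: str       # when the proxy intercepted the interaction
    user: str            # who sent the request
    provider: str        # which AI service the request was routed to
    prompt_sha256: str   # hash of what was sent, so content integrity is checkable later
    policy_id: str       # which policy evaluated the request
    decision: str        # outcome: "allow", "redact", or "block"


event = AuditEvent(
    timestamp=datetime.now(timezone.utc).isoformat(),
    user="jane.doe@example.com",
    provider="openai",
    prompt_sha256=hashlib.sha256(b"example prompt").hexdigest(),
    policy_id="dlp-pii-v3",
    decision="redact",
)
```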
Because the log is generated at the enforcement layer — not reconstructed from connected-system snapshots — the provenance chain is inherently cleaner. You're not pulling logs from five different SaaS APIs and hoping the timestamps reconcile. You have a single authoritative record per interaction, created by the system that enforced the policy.
For Article 12 compliance, the difference matters. A post-hoc compliance package assembled from integration data presents a fundamentally different evidentiary posture to an auditor than an immutable audit trail written at the proxy layer.
What to do before August 2
If your organization deploys AI tools for employees — ChatGPT, Claude, Microsoft Copilot, Gemini, Grok, or any OpenAI-compatible API — and those use cases fall into Annex III high-risk categories, you have three things to verify before August 2, 2026:
- Are you logging AI interactions at the point of occurrence? Not reconstructed. Not sampled. Logged as they happen.
- Can you demonstrate the integrity of those logs? If a regulator asks, can you show the log wasn't modified after the fact?
- Do your logs capture the right fields? Article 12 requires logging sufficient to ensure traceability of the system's functioning: in practice, enough to reconstruct the sequence of operations and the inputs and outputs involved in each interaction.
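As a rough self-check against these three questions, the sketch below assumes records shaped like the hypothetical AuditEvent above and an integrity check along the lines of the earlier hash-chain example; it is an illustration, not legal advice:

```python
REQUIRED_FIELDS = {"timestamp", "user", "provider", "prompt_sha256", "policy_id", "decision"}


def audit_readiness(events: list[dict], chain_intact: bool) -> dict:
    """Map the three questions to simple checks; field names mirror the hypothetical schema above."""
    return {
        # 1. Logged at the point of occurrence: every record carries its own capture timestamp.
        "logged_at_occurrence": all("timestamp" in e for e in events),
        # 2. Integrity demonstrable: pass in the result of a verify_chain-style check.
        "integrity_demonstrable": chain_intact,
        # 3. Right fields: enough to reconstruct operations, inputs, and outputs.
        "fields_complete": all(REQUIRED_FIELDS <= e.keys() for e in events),
    }
```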
The compliance automation layer (Vanta, Drata) is excellent for frameworks, controls, and audit prep. But it doesn't answer these three questions. The answer to all three starts at the enforcement layer — where the AI interaction actually happens.
Containment.AI is a real-time AI governance platform. The proxy layer intercepts AI interactions, applies configurable policies, and generates a structured audit trail at the point of enforcement — before sensitive data leaves your organization. Start a free trial or read the docs.