EU AI Act Compliance April 28, 2026 · 5 min read

EU Digital Omnibus Trilogue Is Underway — What a Potential AI Act Deadline Shift Means for Your Controls

The EU Parliament voted, with 569 in favour, to advance the Digital Omnibus, potentially pushing high-risk AI Act obligations from August 2026 to December 2027. Here's why your controls still need to work either way.

The EU AI Act's high-risk enforcement date — August 2, 2026 — has been the central compliance deadline driving enterprise AI governance programs all year. Now that deadline is in flux.

On March 26, 2026, the European Parliament voted, with 569 votes in favour, to advance the Digital Omnibus proposal into trilogue negotiations with the European Commission and Council. The Digital Omnibus is a broad "digital simplification" package that, among other things, proposes to link the effective date of high-risk AI obligations to the availability of harmonised standards — with long-stop dates potentially extended to December 2, 2027 for high-risk systems and August 2, 2028 for product-embedded systems.

Trilogue negotiations have now officially launched. The outcome is not predetermined, but the trajectory is clear: enterprise compliance teams face a real possibility that their August 2026 hard deadline softens.

What's Actually in the Digital Omnibus?

The Digital Omnibus is not just an AI Act amendment. It's a consolidation of several EU digital regulations — the AI Act, GDPR, NIS2, DORA, and the Data Act — into a more streamlined framework. For AI specifically, the headline change is the one described above: tying the effective date of high-risk obligations to the availability of harmonised standards, with long-stop dates extended to December 2027 and August 2028.

The European Commission's AI Act Service Desk and Single Information Platform are already live. August 2026 remains the official application date for now — the Digital Omnibus has not passed. But trilogue negotiations typically take 3–9 months, meaning a final text could arrive before August 2026 and alter the enforcement landscape.

The "Delay Trap" for Enterprise Compliance Teams

Here's the risk: organisations that are pausing compliance investment because of the potential delay are making a strategic error.

First, the AI Act's transparency rules (Art. 50) — covering disclosure of AI-generated content — are still on track for August 2026 regardless of what happens to high-risk deadlines. These are not touched by the Digital Omnibus proposal in its current form.

Second, even if high-risk deadlines slip to December 2027, the controls you'd need to build are the same. Building them now — while the urgency is high and compliance budgets are allocated — is materially easier than building them under a 2027 deadline when everyone else is scrambling at the same time.

Third, the Digital Omnibus is not yet law. It's in trilogue. If negotiations stall or the proposal is narrowed, you could find yourself with an August 2026 hard deadline and no controls in place.

What Vanta and Drata Aren't Telling You

The dominant compliance automation vendors — Vanta and Drata — have both pivoted to what they call "agentic trust": AI agents that collect audit evidence, draft policies, and answer security questionnaires. Vanta launched its Compliance, TPRM, and Customer Trust Agents at Vanta Delivers in March 2026. Drata followed with its own AI agents and MCP server integrations.

Both platforms are excellent at post-hoc audit evidence collection. Neither can show you a blocked ChatGPT submission in real time.

The EU AI Act's Art. 12 logging and Art. 15 robustness requirements aren't about having a clean audit trail after the fact — they require controls that fire before the model processes the input. That means runtime enforcement at the point of employee interaction with AI tools: the browser, the API, the prompt.

Audit automation doesn't satisfy runtime enforcement obligations. You need both.
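What a runtime control at the API or proxy layer actually looks like can be sketched in a few lines. The following is a deliberately simplified, hypothetical illustration — not Containment.AI's actual policy engine, and the two regex detectors stand in for what would be a much richer evaluator suite. The point it demonstrates is the ordering: the policy decision and the log record are produced before any model call happens, which is what distinguishes runtime enforcement from after-the-fact evidence collection.

```python
import re
import time

# Illustrative detectors only; a production gate would use far richer
# evaluators (secrets scanning, regulated-data classifiers, context rules).
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.\w[\w.]*\b"),
    "iban": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
}

AUDIT_LOG: list[dict] = []  # stand-in for an append-only log store


def gate_prompt(prompt: str) -> dict:
    """Evaluate a prompt BEFORE it reaches any model, and log the decision.

    The decision record captures what was checked, what fired, and the
    resulting action — the kind of structured event an Art. 12-style
    logging obligation would draw on.
    """
    findings = [name for name, rx in PII_PATTERNS.items() if rx.search(prompt)]
    decision = {
        "timestamp": time.time(),
        "checks": sorted(PII_PATTERNS),
        "findings": findings,
        "action": "block" if findings else "allow",
    }
    AUDIT_LOG.append(decision)  # the record is written whether or not we block
    return decision
```

In this sketch, a prompt containing `jane.doe@example.com` would be blocked with `"email"` in its findings, while a clean prompt passes through — and both outcomes leave a log entry, because the audit trail is a by-product of the enforcement point rather than a separate collection step.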

What Enterprise Compliance Teams Should Do Right Now

  1. Do not pause investment — the transparency rules apply regardless of what happens to high-risk deadlines, and the Art. 12 logging controls take long enough to build that they should be operationally ready by August 2026 either way.

  2. Map your AI use by risk tier now — you need to know which of your systems would fall under high-risk classification before you can determine how the Digital Omnibus delay affects your timeline. This classification work takes months.

  3. Instrument runtime controls, not just audit controls — the evidence your auditor needs for Art. 12 logging is only as good as the systems generating it. Policy checks at the API layer and browser layer are what produce the logs.

  4. Follow the trilogue — the Digital Omnibus text will be refined in negotiation. Subscribe to the EU AI Office's Single Information Platform for authoritative updates. Don't rely on secondary reporting.

  5. Treat the potential delay as risk reduction time, not budget-cut time — the organisations that will be compliant in 2027 (or 2026) are the ones building controls now.
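The risk-tier mapping in step 2 can start as something very simple: an inventory of systems annotated with which regulatory timeline each one falls under. The sketch below is hypothetical and drastically simplified — real classification against the AI Act's Annex III categories is a legal analysis, not a lookup table, and the use-case labels here are invented for illustration — but it shows the shape of the artifact a compliance team needs before it can reason about what a deadline shift changes.

```python
from dataclasses import dataclass

# Hypothetical, simplified tiering. Real Annex III classification requires
# legal review; these category labels are illustrative placeholders.
HIGH_RISK_USES = {
    "recruitment", "credit_scoring", "education_scoring",
    "critical_infrastructure", "law_enforcement",
}
TRANSPARENCY_USES = {"chatbot", "content_generation"}


@dataclass
class AISystem:
    name: str
    use_case: str


def tier(system: AISystem) -> str:
    """Map a system to the regulatory timeline it would follow."""
    if system.use_case in HIGH_RISK_USES:
        return "high-risk"      # deadline affected by the Digital Omnibus
    if system.use_case in TRANSPARENCY_USES:
        return "transparency"   # August 2026 either way
    return "minimal"


inventory = [
    AISystem("cv-screener", "recruitment"),
    AISystem("support-bot", "chatbot"),
]
timeline_impact = {s.name: tier(s) for s in inventory}
```

Even a toy inventory like this makes the delay question concrete: in the example, only `cv-screener` has a timeline that the Digital Omnibus could move, while `support-bot` keeps its August 2026 transparency obligations no matter what the trilogue produces.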


Containment.AI enforces AI governance policies in real time — at the proxy layer and in the browser — before sensitive data leaves your organisation. Our policy engine covers EU AI Act Art. 12 logging, Art. 15 robustness, and a full suite of PII/secrets/regulatory-data evaluators. Get started free →

Ready to close the gap?

Talk to us about runtime AI governance for regulated environments.

Schedule a Conversation →