The EU AI Act:
What enterprises need to know before Dec 2027

The world's first comprehensive AI law just shifted its most demanding deadlines — but the operational work expanded. High-risk obligations now apply from December 2027, while new registration rules close the "quiet exemption" loophole for systems self-assessed as non-high-risk.

15 min read

Enterprise Guide

Last updated 14 May 2026

ENFORCEMENT DATE

2 Dec 2027
High-risk Annex III obligations (Articles 9–17, Art. 26)

Key EU AI Act dates enterprises must know

The Digital Omnibus agreement of 7 May 2026 shifted the high-risk deadlines by 16 months and added new obligations. The agreement is provisional; formal adoption is expected before August 2026.

1 Aug 2024
LIVE
EU AI Act entered into force

2 Feb 2025
LIVE
Prohibited AI practices and AI literacy obligations

2 Aug 2025
LIVE
GPAI model obligations and governance rules

2 Dec 2026
IMMINENT
Transparency/watermarking for AI-generated content · New CSAM & nudifier ban

2 Aug 2027
UPCOMING
National AI regulatory sandboxes must be established

2 Dec 2027
UPCOMING
High-risk AI system obligations (Articles 9–17 and Article 26)

2 Dec 2028
UPCOMING
High-risk AI embedded in regulated products (Annex I)

THE FRAMEWORK

A risk-based framework with four tiers

Unacceptable Risk

PROHIBITED

Eight categories banned outright — social scoring, real-time biometric ID in public spaces, manipulation of vulnerabilities.

High Risk

STRICTLY REGULATED

Significant risk to health, safety, or fundamental rights. Subject to the Act's most demanding obligations.

Limited Risk

TRANSPARENCY

Chatbots, deepfake generators, and similar systems. Users must be informed they're interacting with AI.

Minimal Risk

UNREGULATED

The majority of AI applications — spam filters, AI-enabled games. No specific obligations.

CLASSIFICATION

Does your AI system qualify as high-risk?

A system is high-risk if it meets either of two conditions.

CONDITION 1

Safety Component of a regulated product

Used as a safety component where the product requires third-party conformity assessment under EU harmonisation legislation.

The qualifying test

  • Is your AI embedded in a product covered by EU harmonisation legislation (Annex I)?

  • Does that product require third-party conformity assessment before going to market?

  • Could the AI's failure or malfunction create health or safety risks? (This third test reflects the May 2026 Omnibus narrowing.)

If you answered yes to all three, your AI system qualifies as high-risk under Condition 1, with a compliance deadline of 2 December 2028.
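
As a sketch, the three-question test above reduces to a simple conjunction. The helper below is hypothetical and illustrative only, not legal advice:

```python
# Hypothetical helper mirroring the three-question Condition 1 test.
# All three answers must be "yes" for the system to qualify.
def condition_1_high_risk(embedded_in_annex_i_product: bool,
                          needs_third_party_assessment: bool,
                          failure_risks_health_or_safety: bool) -> bool:
    return (embedded_in_annex_i_product
            and needs_third_party_assessment
            and failure_risks_health_or_safety)

print(condition_1_high_risk(True, True, True))   # True
print(condition_1_high_risk(True, True, False))  # False
```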

CONDITION 2

Annex III use case categories

Falls within one of the use cases the Act designates as inherently high-risk.

Annex III Categories

  • Biometric identification

  • Critical infrastructure

  • Education & vocational training

  • Employment & worker management

  • Essential private & public services

  • Credit scoring & insurance

  • Law enforcement

  • Migration & border control

  • Administration of justice

  • Democratic processes

IMPORTANT

Any AI system that profiles individuals — processing personal data to assess work performance, economic situation, health, preferences, reliability, or behaviour — is always high-risk under Annex III. If uncertain about classification, treat the system as high-risk until a formal determination is made. The exposure from misclassification outweighs the cost of precautionary compliance.

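
That conservative screen can be sketched in a few lines. The category keys below are hypothetical labels for the Annex III list; this is illustration, not legal advice:

```python
# Hypothetical triage helper reflecting the Annex III screen described above.
# Category keys are assumptions chosen for illustration.
ANNEX_III_CATEGORIES = {
    "biometric_identification", "critical_infrastructure", "education",
    "employment", "essential_services", "credit_scoring_insurance",
    "law_enforcement", "migration_border_control",
    "administration_of_justice", "democratic_processes",
}

def annex_iii_high_risk(use_case: str, profiles_individuals: bool) -> bool:
    """Conservative screen: profiling individuals is always high-risk;
    otherwise match the use case against the Annex III list."""
    if profiles_individuals:
        return True
    return use_case in ANNEX_III_CATEGORIES

print(annex_iii_high_risk("employment", profiles_individuals=False))   # True
print(annex_iii_high_risk("spam_filter", profiles_individuals=False))  # False
```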
PROVIDER OBLIGATIONS

What enterprises that develop or place high-risk AI on the market must do

Article 9

Risk Management System

Iterative, lifecycle-spanning risk process. Identifies known and foreseeable risks. Active throughout operational life — not a one-time assessment.

Article 10

Data Governance

Training, validation, and testing datasets must be relevant, sufficiently representative, and as free from errors as possible.

Article 11

Technical Documentation

Detailed documentation demonstrating compliance and giving authorities what they need to assess it.

Article 12

Record-Keeping & Logging

Automatic recording of events relevant for identifying risks and substantial modifications across the system's lifecycle.
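
The recording duty can be pictured as an append-only, timestamped event log. This is an illustrative sketch; the schema and field names are assumptions, not anything the Act prescribes:

```python
# Sketch of automatic event recording in the spirit of Article 12.
import json
import time
import uuid

def log_event(sink, system_id, event_type, payload):
    """Append one timestamped, uniquely identified record to the sink."""
    record = {
        "event_id": str(uuid.uuid4()),   # unique, for later traceability
        "system_id": system_id,
        "timestamp": time.time(),
        "event_type": event_type,        # e.g. "inference", "override", "update"
        "payload": payload,              # data relevant to identifying risks
    }
    sink.append(json.dumps(record))      # append-only: records are never edited
    return record

audit_log = []
log_event(audit_log, "credit-scorer-v2", "inference",
          {"decision": "declined", "model_version": "2.3.1"})
print(len(audit_log))  # 1
```

In production the sink would be durable storage (a file, queue, or log service) rather than an in-memory list.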

Article 13

Transparency & Use Instructions

Sufficient transparency to enable deployers to interpret outputs and use them appropriately.

Article 14

Human Oversight

Override, interrupt, and stop mechanisms must be technically embedded in the system itself — not merely described in documentation.
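
A minimal sketch of what "technically embedded" can mean: the stop and override controls live inside the serving wrapper rather than in a manual. Class and method names are assumptions for illustration:

```python
# Illustrative oversight wrapper: interrupt and override are part of the
# system itself, exercisable by a human before an output is released.
class OverseenSystem:
    def __init__(self, model):
        self.model = model
        self.halted = False
        self.overrides = {}              # request_id -> human-substituted output

    def stop(self):
        """Interrupt switch: no further outputs until reactivated."""
        self.halted = True

    def override(self, request_id, output):
        """A human reviewer replaces the system's output for one request."""
        self.overrides[request_id] = output

    def predict(self, request_id, x):
        if self.halted:
            raise RuntimeError("halted by human overseer")
        if request_id in self.overrides:
            return self.overrides[request_id]
        return self.model(x)

system = OverseenSystem(model=lambda x: x * 2)
print(system.predict("r1", 21))   # 42
system.override("r2", -1)
print(system.predict("r2", 0))    # -1
```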

Article 15

Accuracy, Robustness & Cybersecurity

Appropriate levels of accuracy, robustness, and cybersecurity throughout the system's lifecycle.

Article 17

Quality Management System

Documented procedures ensuring ongoing conformity with technical standards, risk management, and conformity assessment obligations.

Cross-Cutting

The Common Thread

All eight articles require continuous, automated evidence — not policy documents. That's the implementation gap most enterprises face.

Before market launch, providers must also complete:

✓ Applicable conformity assessment

✓ EU declaration of conformity

✓ Registration in the EU AI database

ARTICLE 26 | DEPLOYER OBLIGATIONS

Even organisations that purchase third-party high-risk AI carry statutory duties.

Buying an AI system from a compliant vendor doesn't transfer your obligations. As a deployer, you own these:

Follow provider instructions

Use systems strictly in accordance with provider-supplied instructions for use.

Assign trained oversight personnel

Trained staff with authority to exercise meaningful human oversight.

Retain logs for 6 months

Automatically generated logs must be kept for a minimum of six months.

Monitor and report incidents

Continuous performance monitoring. Serious incidents reported without delay.

Conduct FRIAs where required

Fundamental Rights Impact Assessments — particularly for credit, insurance, and public-sector decisions.

Transparency to affected individuals

Inform individuals and workers affected by high-risk system decisions.
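
The six-month log-retention duty in the list above reduces to a simple age check. A toy sketch, assuming logs beyond the statutory minimum may be purged as internal policy (the Act sets only the minimum):

```python
# Toy retention check: logs younger than roughly six months must be kept.
from datetime import datetime, timedelta, timezone

MIN_RETENTION = timedelta(days=183)   # at least six months

def purgeable(log_timestamp, now):
    """True only once a log has aged past the minimum retention window."""
    return now - log_timestamp > MIN_RETENTION

now = datetime(2027, 12, 2, tzinfo=timezone.utc)
print(purgeable(now - timedelta(days=30), now))   # False: must be retained
print(purgeable(now - timedelta(days=200), now))  # True: past the minimum
```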

THE REALITY CHECK

Most enterprise AI programmes are not ready

It's not an experimentation problem. It's a proof problem

The compliance burden is substantial and multi-layered. Most enterprises have policies. Very few have the operational infrastructure to satisfy the continuous evidence requirements.

Requirements 4 through 7 demand continuous, automated evidence generation. A policy document doesn't satisfy them. A one-time audit doesn't satisfy them.


Closing that gap requires technical infrastructure — logging, monitoring, oversight mechanisms, and evidence generation running continuously across every high-risk system in production.


1. Full AI system inventory across the organisation

2. Risk classification against Annex III categories

3. Conformity assessment planning and execution

4. Active risk management — operational, not policy

5. Technical logging embedded in system architecture

6. Human oversight built into system architecture

7. Quality management enabling continuous compliance

Most enterprises have the policy-level requirements in place; the operational layer is where they fall short.

How EU AI Act compliance maps to AI assurance

The Act doesn't just require a governance policy. It requires proof — documented evidence — that systems are actively monitored, controlled, and managed throughout their lifecycle.

EU AI Act Requirement → AI Assurance Capability

Risk management system (Art. 9) → Continuous monitoring and risk detection in production

Record-keeping and logging (Art. 12) → Automated audit trails and decision chain logging

Human oversight (Art. 14) → Policy-aligned controls with override and alert mechanisms

Accuracy and robustness (Art. 15) & quality management (Art. 17) → Systematic testing, validation, and evidence generation

Deployer monitoring (Art. 26) → Real-time production monitoring and incident reporting

AGENTIC AI UNDER THE ACT

Agentic AI systems face particular scrutiny

The Act was drafted primarily with predictable, single-purpose systems in mind. Agentic AI — systems that reason, plan, and act autonomously across connected tools — raises the compliance bar.

Non-determinism


Agentic systems behave differently depending on context and inputs. Demonstrating consistent compliance requires continuous monitoring, not point-in-time testing.


Autonomous action


Agents take actions, not just produce outputs. Each action may affect customer data, financial decisions, or regulated workflows — and each is a potential compliance event.


Evolving behaviour


Agents drift as training data, connected systems, and inputs evolve. Lifecycle risk management means governance cannot stop at go-live.

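
Point-in-time testing cannot catch this kind of drift; a continuous check can. A minimal sketch, with an assumed quality metric, window, and tolerance that are illustrative only:

```python
# Minimal drift-watch sketch; the metric, window, and tolerance are
# assumptions, not values the Act prescribes.
from collections import deque

class DriftWatch:
    def __init__(self, baseline, window=50, tolerance=0.05):
        self.baseline = baseline          # quality measured at go-live
        self.tolerance = tolerance        # allowed absolute degradation
        self.scores = deque(maxlen=window)

    def record(self, score):
        """Record one per-decision quality score; return True when the
        rolling average drops below baseline minus tolerance."""
        self.scores.append(score)
        avg = sum(self.scores) / len(self.scores)
        return avg < self.baseline - self.tolerance

watch = DriftWatch(baseline=0.90)
alerts = [watch.record(s) for s in (0.91, 0.89, 0.72, 0.70, 0.69)]
print(alerts)  # [False, False, True, True, True]
```

An alert would feed the incident-monitoring and reporting processes the Act already requires of deployers.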

What to have in place before 2 December 2027

Two parallel tracks — provider obligations and deployer obligations — plus ongoing requirements that don't end at the deadline.

Before December 2027

Complete AI system inventory across the organisation

Classify all systems against Annex III high-risk categories

Register all systems in the EU database, including those self-assessed as non-high-risk

Implement operational risk management for high-risk AI

Embed technical logging in system architecture

Implement human oversight at the system level

Establish quality management procedures

Complete conformity assessments for high-risk systems

For Deployers Specifically

Confirm six-month log retention is in place

Assign trained personnel with oversight authority

Conduct Fundamental Rights Impact Assessments where required

Establish incident monitoring and reporting processes

Set up transparency mechanisms for affected individuals

Document use procedures aligned to provider instructions

Establish escalation paths for serious incidents

Train staff on oversight roles and responsibilities

FAQs

01

Does the EU AI Act apply to non-EU companies?

Yes. The Act applies to providers and deployers located outside the EU where the AI system's output is used in the EU. If your AI affects EU citizens or EU market participants, the Act applies regardless of where your organisation is based.

02

What is the difference between a provider and a deployer?

A provider develops an AI system or places it on the market under its own name, and carries the Articles 9–17 obligations. A deployer uses an AI system under its own authority, and carries the Article 26 duties. Buying from a compliant vendor does not transfer those duties.

03

How do I know if my AI system is high-risk?

Check both conditions: whether the system is a safety component of an Annex I product requiring third-party conformity assessment, and whether it falls within an Annex III use case such as employment, credit scoring, or law enforcement. Systems that profile individuals are always high-risk. When uncertain, treat the system as high-risk until a formal determination is made.

04

What happens if my organisation is not compliant by December 2027?

Non-compliance with high-risk obligations exposes you to administrative fines of up to €15 million or 3% of global annual turnover, while prohibited-practice violations carry fines of up to €35 million or 7%. Market surveillance authorities can also require non-compliant systems to be withdrawn from the market.

05

What is the fastest way to close the compliance gap?

Start with a complete AI inventory and Annex III classification, then prioritise the operational infrastructure: logging, monitoring, human oversight, and automated evidence generation. These take the longest to embed in production systems.

06

Does the Act require continuous monitoring or just pre-deployment testing?

Continuous monitoring. Articles 9, 12, 15, and 26 impose lifecycle obligations: risk management, logging, and performance monitoring must run for as long as the system is in production, not end at deployment.

HOW DISSEQT HELPS

From pre-production testing to continuous compliance evidence

Built to operationalise the requirements the EU AI Act imposes on enterprise AI teams

Risk evidence before deployment


Systematic testing and validation of agents, generating pre-deployment risk evidence aligned to Article 9.


Real-time monitoring & policy enforcement


Drift detection and policy controls with automated logging aligned to Article 12 record-keeping.


Structured compliance reports


Reports generated automatically, formatted for regulatory review under EU AI Act, NIST AI RMF, and ISO 42001.


The December 2027 deadline is closer than it looks.

Most enterprise compliance programmes underestimate the operational work required to satisfy continuous monitoring, logging, and evidence requirements. The time to build that infrastructure is now.

The Assurance Layer for Enterprise AI

© DISSEQT AI LIMITED