The EU AI Act: What enterprises need to know before Dec 2027
[Timeline milestones: 1 Aug 2024 · 2 Feb 2025 · 2 Aug 2025 · 2 Dec 2026 · 2 Aug 2027 · 2 Dec 2027 · 2 Dec 2028]
THE FRAMEWORK
A risk-based framework with four tiers:
PROHIBITED (unacceptable-risk practices, banned outright)
STRICTLY REGULATED (high-risk systems, subject to the obligations below)
TRANSPARENCY (limited-risk systems with disclosure duties)
UNREGULATED (minimal-risk systems)
Does your AI system qualify as high-risk?
A system is high-risk if it meets either of two conditions.
CONDITION 1
Safety Component of a regulated product
Used as a safety component where the product requires third-party conformity assessment under EU harmonisation legislation.
The qualifying test
Is your AI embedded in a product covered by EU harmonisation legislation (Annex I)?
Does that product require third-party conformity assessment before going to market?
Could the AI's failure or malfunction create health or safety risks? (May 2026 Omnibus narrowing.)
If you answered yes to all three, your AI system qualifies as high-risk under Condition 1, with a compliance deadline of 2 August 2028.
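The three-question test above reduces to a simple conjunction. A minimal triage sketch (the function name and parameters are illustrative, not terms from the Act; this is a screening aid, not a legal determination):

```python
def qualifies_condition_1(embedded_in_annex_i_product: bool,
                          requires_third_party_assessment: bool,
                          failure_creates_safety_risk: bool) -> bool:
    """Condition 1 triage: high-risk only if all three answers are 'yes'."""
    return (embedded_in_annex_i_product
            and requires_third_party_assessment
            and failure_creates_safety_risk)

# Example: an AI control component embedded in a regulated machinery product
print(qualifies_condition_1(True, True, True))   # True
print(qualifies_condition_1(True, True, False))  # False: fails the safety-risk question
```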
CONDITION 2
Annex III use case categories
Falls within one of the use cases the Act designates as inherently high-risk.
Annex III Categories
Biometric identification
Critical infrastructure
Education & vocational training
Employment & worker management
Essential private & public services
Credit scoring & insurance
Law enforcement
Migration & border control
Administration of justice
Democratic processes
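For first-pass screening, the category list above can be kept as a machine-readable lookup. A minimal sketch using the labels from the list (the string-matching logic is an illustrative assumption; a real classification requires reviewing the full Annex III definitions):

```python
# Category labels as listed above; Annex III's legal text is more granular.
ANNEX_III_CATEGORIES = {
    "biometric identification",
    "critical infrastructure",
    "education & vocational training",
    "employment & worker management",
    "essential private & public services",
    "credit scoring & insurance",
    "law enforcement",
    "migration & border control",
    "administration of justice",
    "democratic processes",
}

def screen_use_case(declared_category: str) -> bool:
    """Flag a system for full high-risk review if its declared category matches."""
    return declared_category.strip().lower() in ANNEX_III_CATEGORIES

print(screen_use_case("Employment & worker management"))  # True
print(screen_use_case("Internal document search"))        # False
```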
PROVIDER OBLIGATIONS

Article 9
Risk Management System
Iterative, lifecycle-spanning risk process. Identifies known and foreseeable risks. Active throughout operational life — not a one-time assessment.

Article 10
Data Governance
Training, validation, and testing datasets must be relevant, sufficiently representative, and as free from errors as possible.
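Requirements like these can be backed by automated dataset checks rather than a one-off manual review. A minimal sketch of two such checks, missing values and class representation (the field names and the 10% threshold are illustrative assumptions, not figures from the Act):

```python
from collections import Counter

def check_missing(records: list[dict], required_fields: list[str]) -> list[int]:
    """Return indices of records with missing or empty required fields (error check)."""
    return [i for i, r in enumerate(records)
            if any(r.get(f) in (None, "") for f in required_fields)]

def check_balance(labels: list[str], min_share: float = 0.10) -> dict[str, bool]:
    """Flag classes whose share of the data falls below min_share (representativeness)."""
    counts = Counter(labels)
    total = len(labels)
    return {cls: counts[cls] / total >= min_share for cls in counts}

records = [{"age": 41, "income": 52000}, {"age": None, "income": 61000}]
print(check_missing(records, ["age", "income"]))      # [1]
print(check_balance(["approve"] * 19 + ["deny"]))     # {'approve': True, 'deny': False}
```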

Article 11
Technical Documentation
Detailed documentation demonstrating compliance and giving authorities what they need to assess it.

Article 12
Record-Keeping & Logging
Automatic recording of events relevant for identifying risks and substantial modifications across the system's lifecycle.
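Automatic recording of this kind is typically implemented as structured, timestamped events emitted by the system itself. A minimal sketch (the event schema and field names are illustrative assumptions; in production the sink would be an append-only store, not an in-memory list):

```python
import json
import time

def log_event(event_type: str, detail: dict, sink: list) -> dict:
    """Append a timestamped, structured event record to the audit sink."""
    record = {
        "ts": time.time(),    # when the event occurred
        "event": event_type,  # e.g. "prediction", "override", "model_update"
        "detail": detail,
    }
    sink.append(json.dumps(record))  # serialised so the log is tamper-evident downstream
    return record

audit_log: list[str] = []
log_event("prediction", {"model": "credit-v3", "score": 0.82}, audit_log)
log_event("override", {"by": "analyst_17", "reason": "manual review"}, audit_log)
print(len(audit_log))  # 2
```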

Article 13
Transparency & Use Instructions
Sufficient transparency to enable deployers to interpret outputs and use them appropriately.

Article 14
Human Oversight
Override, interrupt, and stop mechanisms must be technically embedded in the system itself — not merely described in documentation.
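"Technically embedded" means the stop and override paths live in the code path itself, not in a runbook. A minimal sketch of a decision wrapper with a built-in kill switch and per-decision human override (all class and parameter names are illustrative assumptions):

```python
import threading

class OverseenModel:
    """Wraps a model so a human can stop the system or override any single output."""

    def __init__(self, predict_fn):
        self.predict_fn = predict_fn
        self.stopped = threading.Event()  # kill switch, flippable from any thread

    def stop(self):
        self.stopped.set()

    def decide(self, x, human_override=None):
        if self.stopped.is_set():
            raise RuntimeError("system stopped by human operator")
        decision = self.predict_fn(x)
        # A human override always takes precedence over the model's output.
        return human_override if human_override is not None else decision

model = OverseenModel(lambda x: "approve" if x > 0.5 else "deny")
print(model.decide(0.9))                         # approve
print(model.decide(0.9, human_override="deny"))  # deny
model.stop()                                     # any further decide() call raises
```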

Article 15
Accuracy, Robustness & Cybersecurity
Appropriate levels of accuracy, robustness, and cybersecurity throughout the system's lifecycle.

Article 17
Quality Management System
Documented procedures ensuring ongoing conformity with technical standards, risk management, and conformity assessment obligations.

Cross-Cutting
The Common Thread
All eight articles require continuous, automated evidence — not policy documents. That's the implementation gap most enterprises face.
Full AI system inventory across the organisation
Risk classification against Annex III categories
Conformity assessment planning and execution
Active risk management — operational, not policy
Technical logging embedded in system architecture
Human oversight built into system architecture
Quality management enabling continuous compliance
Most enterprises already have the policy-level requirements in place; the operational layer is where they fall short.
Train staff on oversight roles and responsibilities
FAQs
Does the EU AI Act apply to non-EU companies?
Yes. The Act applies to providers and deployers located outside the EU where the AI system's output is used in the EU. If your AI affects EU citizens or EU market participants, the Act applies regardless of where your organisation is based.
What is the difference between a provider and a deployer?
A provider develops an AI system (or has one developed) and places it on the market or puts it into service under its own name. A deployer uses an AI system under its own authority in a professional context. Providers carry the bulk of the high-risk obligations; deployers have narrower duties, such as using the system in line with its instructions and ensuring human oversight.
What happens if my organisation is not compliant by August 2026?
Penalties are tiered: up to EUR 35 million or 7% of global annual turnover for prohibited practices, and up to EUR 15 million or 3% for breaches of most other obligations, including the high-risk requirements.
How do I know if my AI system is high-risk?
Check the two conditions above: a system is high-risk if it is a safety component of an Annex I product requiring third-party conformity assessment (Condition 1), or if it falls within an Annex III use case category (Condition 2).
What is the fastest way to close the compliance gap?
Work through the checklist above: build a full AI system inventory, classify each system against Annex III, then close the operational gaps first, since logging, human oversight, and active risk management are where most enterprises fall short, while the policy layer is usually already covered.
Does the Act require continuous monitoring or just pre-deployment testing?
Continuous monitoring. The risk management system (Article 9) must remain active throughout the system's operational life, logging (Article 12) must run across the lifecycle, and providers of high-risk systems must operate post-market monitoring.