EU AI Act

The EU AI Act classifies AI systems by risk level and imposes obligations on providers and deployers; it entered into force in 2024, with obligations phasing in through 2027.

What is the EU AI Act?

The EU Artificial Intelligence Act (Regulation (EU) 2024/1689) is the European Union's horizontal regulation of artificial intelligence. It classifies AI systems into four risk tiers: prohibited (e.g. social scoring, emotion recognition in workplaces and education), high-risk (e.g. employment screening, credit scoring, critical infrastructure, biometric categorisation), limited-risk (e.g. chatbots and deepfakes, which carry transparency obligations), and minimal-risk (everything else).
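The tiered structure can be sketched as a simple lookup. This is an illustrative simplification only: the tier names are from the Act, but the use-case strings and the default-to-minimal rule are assumptions for the example, not an official or exhaustive taxonomy.

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"
    HIGH = "high-risk"
    LIMITED = "limited-risk"
    MINIMAL = "minimal-risk"

# Hypothetical mapping of example use cases to tiers, paraphrased from
# the examples above; real classification requires legal analysis.
EXAMPLE_TIERS = {
    "social_scoring": RiskTier.PROHIBITED,
    "workplace_emotion_recognition": RiskTier.PROHIBITED,
    "employment_screening": RiskTier.HIGH,
    "credit_scoring": RiskTier.HIGH,
    "chatbot": RiskTier.LIMITED,
}

def classify(use_case: str) -> RiskTier:
    # Anything not caught by a higher tier falls into minimal risk.
    return EXAMPLE_TIERS.get(use_case, RiskTier.MINIMAL)

print(classify("credit_scoring").value)  # high-risk
print(classify("spam_filter").value)     # minimal-risk
```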

The Act applies to providers (those who develop an AI system or place it on the EU market) and deployers (those who use it in their operations). It applies extraterritorially: a U.S. vendor whose system is placed on the EU market, or whose output is used in the EU, falls under the Act regardless of where the vendor is based.

High-risk obligations

For high-risk AI systems, the Act requires: a documented risk management system, training-data governance and quality controls, technical documentation, automated logging, human oversight mechanisms, accuracy and robustness testing, and a conformity assessment before market placement. Providers must register the system in the EU AI database. General-purpose AI models above a compute threshold (10²⁵ FLOPs) face additional obligations including systemic risk assessment.
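The systemic-risk presumption for general-purpose models turns on a single compute figure, which makes it easy to express as a check. A minimal sketch, assuming training compute is known as a FLOP count; the function name is hypothetical:

```python
# Threshold from the Act: cumulative training compute greater than
# 10^25 floating point operations triggers the systemic-risk presumption.
SYSTEMIC_RISK_FLOPS = 1e25

def presumed_systemic_risk(training_flops: float) -> bool:
    # Strictly greater than the threshold, per the "above" wording.
    return training_flops > SYSTEMIC_RISK_FLOPS

print(presumed_systemic_risk(2.1e25))  # True
print(presumed_systemic_risk(3e24))    # False
```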

Enforcement timeline

The Act entered into force in August 2024. Prohibitions on banned practices apply from February 2025. Most rules for general-purpose AI take effect in August 2025. High-risk obligations take effect in August 2026 for new systems, with longer transition periods for systems already on the market. Penalties for prohibited practices can reach the higher of €35 million or 7% of global annual turnover.
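The "higher of" penalty cap is a simple maximum, which the arithmetic below makes concrete. A sketch of the cap for prohibited practices only (other infringement categories have lower caps not shown here); the function name is an assumption:

```python
def prohibited_practice_cap(annual_turnover_eur: float) -> float:
    # Higher of EUR 35 million or 7% of worldwide annual turnover.
    return max(35_000_000.0, 0.07 * annual_turnover_eur)

# For a firm with EUR 1bn turnover, 7% (EUR 70m) exceeds the fixed amount.
print(prohibited_practice_cap(1_000_000_000))  # 70000000.0
# For a firm with EUR 100m turnover, the EUR 35m floor applies.
print(prohibited_practice_cap(100_000_000))    # 35000000.0
```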