EU AI Act

intermediate

The European Union's comprehensive AI regulation establishing risk-based requirements for AI systems. Categorizes AI by risk level and mandates transparency, human oversight, and safety requirements for high-risk applications.

Category: ethics
Tags: regulation, law, europe, compliance

Overview

The EU AI Act is the world's first comprehensive AI regulation, establishing a legal framework for developing and deploying AI in the European Union. It takes a risk-based approach: requirements scale with the potential for harm. High-risk AI systems (e.g. in healthcare, employment, and law enforcement) face strict requirements covering risk management, data governance, human oversight, transparency, accuracy, and cybersecurity. General-purpose AI models carry transparency obligations, with additional duties for models that pose systemic risk. The Act also bans certain AI applications outright, including social scoring, real-time remote biometric identification in public spaces (with narrow exceptions), and AI that exploits the vulnerabilities of specific groups. Non-compliance carries significant fines of up to 7% of global annual turnover.

Key Concepts

Risk Categorization

Unacceptable, high-risk, limited-risk, and minimal-risk classifications.
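To make the four tiers concrete, here is a minimal illustrative sketch (not legal advice) that models them as an enum with hypothetical example systems; the names and the simple deployment check are assumptions for illustration only.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # banned outright (e.g. social scoring)
    HIGH = "high"                  # strict requirements before market placement
    LIMITED = "limited"            # transparency obligations (e.g. chatbot disclosure)
    MINIMAL = "minimal"            # no new obligations

# Hypothetical examples per tier, for illustration only.
EXAMPLES = {
    RiskTier.UNACCEPTABLE: "social scoring system",
    RiskTier.HIGH: "CV-screening tool for hiring",
    RiskTier.LIMITED: "customer-service chatbot",
    RiskTier.MINIMAL: "spam filter",
}

def may_be_deployed(tier: RiskTier) -> bool:
    """Unacceptable-risk systems are prohibited; all other tiers may be
    deployed, subject to that tier's obligations."""
    return tier is not RiskTier.UNACCEPTABLE

print(may_be_deployed(RiskTier.HIGH))          # True
print(may_be_deployed(RiskTier.UNACCEPTABLE))  # False
```

The key design point the Act encodes is that the tier, not the underlying technology, determines the obligations: the same model could be minimal-risk in one use and high-risk in another.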

Conformity Assessment

Required evaluation proving compliance before market placement.

General-Purpose AI Rules

Specific requirements for foundation models and large-scale AI.

Extraterritorial Effect

Applies to any AI system deployed to EU users, regardless of developer location.
