Demystifying the 4 Risk Tiers of the AI Act
Not all AI is treated equally. The European Union regulates AI based on its potential to cause physical harm, psychological damage, or violations of fundamental rights.
The core philosophy of the EU AI Act is a risk-based approach. Rather than regulating the technology itself (like neural networks or deep learning), it regulates the use-case. A predictive algorithm used to suggest a movie to watch is harmless; that exact same algorithmic architecture used to predict criminal behavior is highly restricted.
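To make the use-case framing concrete, here is a minimal Python sketch of tier classification. The category sets and use-case labels are invented for illustration; an actual determination must be made against the text of Article 5 and Annexes I and III, not a lookup table.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited (Article 5)"
    HIGH = "high-risk (Annex I / Annex III)"
    LIMITED = "limited risk (Article 50 transparency)"
    MINIMAL = "minimal risk"

# Invented use-case labels, grouped for illustration only.
PROHIBITED = {"social_scoring", "subliminal_manipulation",
              "untargeted_face_scraping", "crime_prediction_by_profiling"}
HIGH_RISK = {"cv_screening", "credit_scoring", "automated_grading",
             "medical_device_safety_component"}
TRANSPARENCY = {"chatbot", "deepfake_generation", "emotion_recognition"}

def classify_use_case(use_case: str) -> RiskTier:
    """Map a use-case label to its AI Act risk tier (illustrative logic)."""
    if use_case in PROHIBITED:
        return RiskTier.UNACCEPTABLE
    if use_case in HIGH_RISK:
        return RiskTier.HIGH
    if use_case in TRANSPARENCY:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL  # everything else is essentially unregulated

# The same architecture lands in different tiers depending on its use:
print(classify_use_case("movie_recommendation"))           # RiskTier.MINIMAL
print(classify_use_case("crime_prediction_by_profiling"))  # RiskTier.UNACCEPTABLE
```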
Unacceptable Risk (Prohibited Practices)
Under Article 5, the EU outright bans certain AI practices because they present a clear threat to human safety, livelihoods, and democratic rights. Apart from a handful of narrowly drawn law-enforcement exceptions (e.g., targeted uses of real-time remote biometric identification), there is no compliance pathway for these systems; they must be decommissioned. The prohibited practices include:
• Social scoring: evaluating or classifying people based on their social behavior, leading to detrimental or unfavorable treatment.
• Deploying subliminal or manipulative techniques that materially distort a person's behavior and cause, or are likely to cause, significant harm.
• Creating or expanding facial recognition databases through untargeted scraping of facial images from the internet or CCTV footage.
• Assessing the risk of a natural person committing a criminal offence based solely on profiling or personality traits.
High-Risk Systems
This tier contains the vast majority of the Act's heavy regulatory text. High-Risk systems are permitted, but only subject to strict compliance obligations before they can be placed on the market. They are divided into two main categories:
Annex I: Safety Components
AI systems intended to be used as a safety component of a product covered by existing EU harmonisation legislation (e.g., Medical Devices, Machinery, Toys, Aviation).
Annex III: Critical Use Cases
Stand-alone AI systems used in specific critical sectors. Examples include:
• Biometric identification and categorization
• Education and vocational training (e.g., automated grading)
• Employment and HR (e.g., CV screening algorithms)
• Essential private and public services (e.g., credit scoring, healthcare triaging)
• Law enforcement and migration control
Before a High-Risk system can be placed on the market, its provider must satisfy a set of compliance obligations (a minimal checklist sketch follows this list):
• Quality Management Systems
• Technical Documentation
• Conformity Assessments (CE Marking)
• EU Database Registration
• Human Oversight Architecture
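As a rough illustration of how a provider might track these obligations internally, the sketch below models them as a simple checklist. The field names are informal shorthand of our own, not terms defined in the Act.

```python
from dataclasses import dataclass

@dataclass
class HighRiskDossier:
    """Illustrative checklist of the obligations listed above."""
    quality_management_system: bool = False  # documented QMS in place
    technical_documentation: bool = False    # technical file drafted
    conformity_assessment: bool = False      # assessment passed, CE marking affixed
    eu_database_registration: bool = False   # system registered in the EU database
    human_oversight: bool = False            # oversight measures designed and tested

    def ready_for_market(self) -> bool:
        # Every obligation must be satisfied before placing the system on the market.
        return all(vars(self).values())

dossier = HighRiskDossier(technical_documentation=True)
print(dossier.ready_for_market())  # False: four obligations still outstanding
```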
Limited Risk (Transparency Obligations)
These systems do not pose a physical or systemic threat, but they do create a risk of manipulation, deception, or confusion. They are therefore bound by the specific transparency obligations of Article 50:
• Users must be informed when they are interacting with an AI system (e.g., chatbots, automated customer service).
• AI-generated or manipulated audio, image, video, and text content (including deepfakes) must be marked as such in a machine-readable format; a toy marking sketch follows this list.
• Natural persons must be informed when a system is attempting emotion recognition or biometric categorization.
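As a toy illustration of machine-readable marking, the sketch below embeds a provenance tag in a PNG's metadata using Pillow. The tag names are invented for this example; real deployments would typically rely on an established provenance standard such as C2PA.

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def mark_as_ai_generated(src: str, dst: str) -> None:
    """Embed a machine-readable 'AI-generated' tag in a PNG's metadata."""
    img = Image.open(src)
    meta = PngInfo()
    # Hypothetical tag names chosen for this example; production systems
    # should use an established provenance standard (e.g., C2PA) instead.
    meta.add_text("ai_generated", "true")
    meta.add_text("generator", "example-image-model")
    img.save(dst, pnginfo=meta)

mark_as_ai_generated("synthetic.png", "synthetic_marked.png")
```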
Minimal Risk
The vast majority of AI systems fall into this category. Examples include spam filters, AI-enabled video games, or internal supply chain inventory predictors. These systems face no mandatory obligations under the AI Act, though the EU encourages providers to adopt voluntary codes of conduct.
The GPAI Overlay: Foundation Models
General Purpose AI (GPAI) models—like OpenAI's GPT-4, Google's Gemini, or Meta's Llama—do not fit neatly into one single use-case tier because they can do almost anything. Therefore, the EU AI Act applies a separate overlay of rules specifically for GPAI providers.
• Standard GPAI: Must comply with EU copyright law and publish a sufficiently detailed summary of the content used for training.
• GPAI with Systemic Risk: Models trained using massive computing power (more than 10^25 FLOPs) face heightened scrutiny, including mandatory adversarial testing (red-teaming), serious-incident reporting, and energy consumption documentation; a back-of-envelope compute check is sketched below.
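To get a feel for the 10^25 FLOP threshold, the sketch below uses the common ~6 × parameters × training-tokens approximation for dense-transformer training compute. This is a community rule of thumb, not a formula from the Act, and the model figures are purely illustrative.

```python
def training_flops(params: float, tokens: float) -> float:
    """Rough dense-transformer training compute: ~6 FLOPs per parameter per token."""
    return 6 * params * tokens

SYSTEMIC_RISK_THRESHOLD = 1e25  # the Act's presumption threshold

# Purely illustrative figures: a 70B-parameter model trained on 15T tokens.
flops = training_flops(70e9, 15e12)
print(f"{flops:.1e} FLOPs")             # ~6.3e+24
print(flops > SYSTEMIC_RISK_THRESHOLD)  # False: below the presumption threshold
```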
Generate Your Risk Classification
Stop manually reading through Annexes. Use our deterministic engine to automatically classify your system and generate an audit-ready PDF dossier.
Run Compliance Scan →