Provider, Deployer, or Importer? Mapping your AI value chain liability.
The AI Act allocates regulatory burden based on your position in the supply chain. Misidentifying your role, especially by falling into the Article 25 trap, is one of the most common compliance failures.
If an AI system causes harm or violates fundamental rights, who pays the fine? The developers who built it, or the corporate entity that used it? The EU AI Act answers this by establishing a strict taxonomy of roles across the AI value chain.
1. The Provider (The Heaviest Burden)
Legal Definition (Art 3(3)): A natural or legal person, public authority, agency, or other body that develops an AI system (or has one developed on its behalf) and places it on the market or puts it into service under its own name or trademark, whether for payment or free of charge.
The Provider is effectively the "Manufacturer" of the AI. Because they control the code, the training data, and the architecture, the EU places the bulk of the regulatory burden squarely on their shoulders.
Key Obligations for High-Risk Providers:
- Establish and implement a comprehensive Quality Management System (QMS).
- Draw up extensive Annex IV Technical Documentation detailing algorithms, logic, and training data provenance.
- Conduct continuous post-market monitoring and retain the logs the system automatically generates.
- Undergo conformity assessments and officially affix the CE marking to the system.
2. The Deployer (The Corporate User)
Legal Definition (Art 3(4)): Any natural or legal person, public authority, agency, or other body using an AI system under its authority for professional purposes.
Deployers are businesses using AI to run their operations. Think of a hospital buying an AI scheduling tool, or a bank using a third-party AI credit scoring algorithm. Because the Deployer doesn't control the source code, their obligations focus on safe operation rather than safe design.
Key Obligations for High-Risk Deployers:
- Ensure strict Human Oversight (assigning natural persons to monitor the AI's output).
- Use the system exclusively in accordance with the Provider's instructions.
- Monitor the system's operation for risks or bias arising in your deployment context, inform the Provider of any risk identified, and suspend use if a serious incident occurs.
- (For public bodies and specific critical sectors) Conduct a Fundamental Rights Impact Assessment (FRIA) before deploying the system.
3. Importers, Distributors & Authorized Reps
To prevent non-EU companies from dumping non-compliant AI into the European market, the Act places legal liability on the middlemen.
Importers & Distributors
Before an AI system reaches the EU market, importers must verify that the original Provider has carried out the conformity assessment, drawn up the technical documentation, affixed the CE marking, and appointed an Authorized Rep. Distributors, in turn, must check the CE marking and accompanying documentation before making the system available. Either can be fined for moving non-compliant AI through the chain.
Authorized Representatives
If a Provider established outside the EU (say, in the US or UK) wants to sell into the Union without setting up an EU entity, it must appoint, by written mandate, an Authorized Rep located within the Union. This Rep holds a copy of the tech docs and acts as the legal point of contact for EU authorities.
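For teams wiring these rules into internal compliance tooling, the asymmetry between the roles can be kept as plain data rather than buried in a policy PDF. The mapping below is a rough, non-exhaustive sketch; the role keys and obligation strings are our own labels, not terms defined by the Regulation.

```python
# Illustrative, non-exhaustive mapping of AI Act value-chain roles to their
# headline obligations. Labels are our own shorthand, not terms from the Act.
ROLE_OBLIGATIONS: dict[str, list[str]] = {
    "provider": [
        "Quality Management System (QMS)",
        "Annex IV technical documentation",
        "Conformity assessment and CE marking",
        "Post-market monitoring and log retention",
    ],
    "deployer": [
        "Human oversight by assigned natural persons",
        "Use strictly per the Provider's instructions",
        "Monitor operation; suspend use on serious incidents",
        "Fundamental Rights Impact Assessment (where required)",
    ],
    "importer": [
        "Verify conformity assessment, tech docs, CE marking, and Authorized Rep",
    ],
    "distributor": [
        "Check CE marking and accompanying documentation before resale",
    ],
    "authorized_representative": [
        "Hold the technical documentation; act as EU point of contact",
    ],
}

if __name__ == "__main__":
    for role, duties in ROLE_OBLIGATIONS.items():
        print(f"{role}: {len(duties)} headline obligation(s)")
```

Keeping the checklist as data makes it easy to review and update as guidance evolves, and it feeds directly into the Article 25 question that follows.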
The Article 25 Trap
How Deployers accidentally become Providers.
Many companies assume they are merely "Deployers" because they build on off-the-shelf foundation models (such as an OpenAI API). However, under Article 25, a Deployer legally inherits the full set of Provider obligations if it does any of the following:
1. White-labeling: Placing your own name or trademark on a high-risk AI system already placed on the market.
2. Substantial Modification: Making a substantial modification to a high-risk system that the original Provider never foresaw, such that it remains high-risk.
3. Purpose Change: Modifying a system's intended purpose so that it becomes high-risk (e.g., taking an AI designed for general text summarization and fine-tuning it to diagnose medical conditions).
If you trigger Article 25, you must comply with the full Provider regime: QMS, conformity assessment, CE marking, and Annex IV Technical Documentation. Overlooking this clause is one of the fastest routes to the Act's upper fine tiers.
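As a rough illustration of how these triggers combine, here is a minimal sketch in Python. Every name in it (the Deployment fields, the Role enum, effective_role) is a hypothetical construct for this article, not part of the Act or of any real compliance library; it simply encodes the reclassification logic described above, assuming each trigger can be reduced to a yes/no answer.

```python
from dataclasses import dataclass
from enum import Enum


class Role(Enum):
    DEPLOYER = "deployer"
    PROVIDER = "provider"  # inherited via Article 25


@dataclass
class Deployment:
    """Illustrative description of how an organisation uses a third-party AI system."""
    rebranded_under_own_name: bool   # offered under your own name or trademark
    substantially_modified: bool     # substantial modification beyond the Provider's design
    intended_purpose_changed: bool   # repurposed away from the original intended use
    system_is_high_risk: bool        # falls into a high-risk category as used (Art 6 / Annex III)


def effective_role(d: Deployment) -> Role:
    """Rough encoding of the Article 25 triggers described above.

    Any single trigger is enough to shift Provider obligations onto the
    organisation that was previously 'only' a Deployer.
    """
    white_labelling = d.rebranded_under_own_name and d.system_is_high_risk
    substantial_mod = d.substantially_modified and d.system_is_high_risk
    purpose_change = d.intended_purpose_changed and d.system_is_high_risk

    if white_labelling or substantial_mod or purpose_change:
        return Role.PROVIDER
    return Role.DEPLOYER


if __name__ == "__main__":
    # A general-purpose summariser fine-tuned into a medical triage tool:
    # the purpose change trips Article 25 and the Deployer becomes a Provider.
    example = Deployment(
        rebranded_under_own_name=False,
        substantially_modified=False,
        intended_purpose_changed=True,
        system_is_high_risk=True,
    )
    print(effective_role(example))  # Role.PROVIDER
```

In practice, each of those booleans hides a legal judgment call (what counts as "substantial", whether your use case lands in Annex III), which is exactly the analysis the triggers above force you to document.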
Are you a Provider, Deployer, or Both?
Don't guess your legal status. Our engine analyzes your deployment context, modifications, and supply chain position to lock in your exact legal role.
Determine My Role →