Leading MEPs tackle enforcement in AI regulation


The European Parliament’s rapporteurs circulated a new batch of compromise amendments redesigning the enforcement structure of the AI Act.

On Friday (18 November), Dragoș Tudorache and Brando Benifei, the MEPs spearheading the file, shared a new compromise text addressing the enforcement of the AI regulation.

The AI Act is a flagship regulation intended to place obligations on Artificial Intelligence based on its potential to cause harm.

The document was initially set to be discussed on Monday and Tuesday, but due to the short notice, the technical discussions were postponed to Wednesday.

Enforcement powers

The compromise text gives national supervisory authorities the power to conduct unannounced on-site and remote inspections of high-risk AI, to acquire samples of high-risk systems in order to reverse-engineer them, and to gather evidence identifying non-compliance.

For AI systems used in law enforcement, the original text left open the possibility that police authorities would supervise themselves. For the MEPs, only data protection authorities can take on this role.

EU database

The scope of the provisions establishing a pan-European database for high-risk systems was significantly broadened from stand-alone systems to any system, including AI integrated with other components in a complex system, as is usually the case for general-purpose AI.

The database's coverage was extended from providers of high-risk AI systems to users of such systems that are public authorities, along with their subcontractors.

The co-rapporteurs want the database to be “user-friendly and accessible, easily navigable and machine-readable, containing structured digital data based on a standardised protocol”.

Post-market monitoring

The AI Act requires providers of algorithms that pose a significant risk of harm to monitor how their systems perform after they are placed on the market, as AI evolves as it receives new inputs.

As part of post-market monitoring, AI providers will also have to consider “continuous analysis of the AI environment, including other devices, software, and other AI systems that interact with the AI system, taking into account the limits resulting from data protection, copyright and competition law”.

The Commission will have to provide a template for post-market monitoring plans, together with the elements they must include, within one year of the AI regulation’s entry into force.

Procedure for dangerous systems

For lawmakers, an AI system is also dangerous if it poses a risk to workers’ rights, consumer protection, the environment or public security.

Where authorities have sufficient reason to believe that an AI system exploits the vulnerabilities of children, the compromise mandates a thorough investigation and shifts the burden of proof onto the AI operator, who must demonstrate compliance with the AI rulebook.

For withdrawing dangerous systems from the market, a time limit has been set at 15 working days.

When informing other authorities about a risky system, the competent authority must also inform its peers about the economic actors involved in its supply chain.

If the AI operator fails to implement adequate corrective measures, the national authority may order the system’s withdrawal from the market. Other national authorities or the Commission may object to this decision within three months, a deadline reduced to one month for decisions related to prohibited practices.

If the decision is contested, the same timeframes apply for the AI office to issue a binding decision settling the dispute.

Reporting serious incidents

The obligation to report serious incidents or malfunctioning has been extended to users of high-risk systems. The notification would have to occur within three days of the provider or user becoming aware of the incident.

In turn, the national supervisory authority would have to take appropriate measures within seven days of the notification. The AI office and relevant authorities should be informed of incidents that are likely to occur in other member states.

The national authorities would have to report any serious incidents annually to the AI office.

Joint investigations

A new article introduces the possibility of joint investigations for cases that amount to a widespread infringement or are likely to affect at least 45 million EU citizens. The AI office would provide central coordination for such joint operations.


The definition of formal non-compliance has also been extended to cases where the technical documentation is unavailable, the AI system has not been registered in the EU database, or an authorised representative has not been appointed in the EU.

Individuals’ rights

For the co-rapporteurs, any individual or group affected by an AI system covered by the AI regulation should have the right to complain to the relevant national authority.

Complainants have also been given the right to be heard and to be kept informed of their complaint’s progress, with a preliminary response to be provided within three months. The authority would then have six months to issue a decision.

The text also specifies that individuals have the right to a judicial remedy, including when supervisory authorities fail to fulfil their duties.


The MEPs extended the scope of the directive protecting persons who report breaches of EU law to cover those reporting violations of the AI Act.

National authorities

The new text mandates that the supervisory authority appointed by each member state cannot be the same body as the authority overseeing the work of conformity assessment bodies.

[Edited by Nathalie Weatherald]
