French Presidency pushes for alignment with the New Legislative Framework in the AI Act

The French Presidency of the EU Council has been making steady progress on the Artificial Intelligence Act. [Alexandros Michailidis/Shutterstock]

France is proposing several changes to the Artificial Intelligence (AI) Act to ensure better alignment with the New Legislative Framework, the EU legislation that regulates market surveillance and conformity assessment procedures. The changes also relate to the designation of competent authorities and the high-risk AI database.

The French Presidency, which leads the work in the EU Council, shared a new compromise text on Monday (25 April) that will be discussed with the representatives of the other member states at the Telecom Working Party on Thursday.

Notified bodies and competent authorities

Notified bodies will play a crucial role in the enforcement of the AI Act, as they will be designated by EU countries to assess the conformity of AI systems with EU rules before they are placed on the market.

The new text explicitly references the EU regulation setting out the requirements for accreditation and market surveillance, and adds that such bodies will have to respect confidentiality obligations.

A new article has been introduced to lay down how notified bodies should operate, in particular for the conformity assessment of high-risk systems. It includes provisions on how notified bodies must collaborate with the notifying authority, the national authority in charge of overseeing the entire conformity assessment process.

If the national authority has “sufficient reasons to consider” that a notified body is failing to fulfil its obligations, it should take measures proportionate to the seriousness of the failure, notably by restricting, suspending or withdrawing the body’s notification.

Bodies wishing to become notified bodies will have to go through an application procedure managed by the relevant national authority. National authorities may also designate bodies outside this procedure, but in that case they would need to provide the Commission and the other member states with documentation proving the body’s competence and showing how it will satisfy the relevant requirements.

Similarly, the Commission will be able to challenge the competence of a notified body. A sentence has been added giving the EU executive the power to suspend, restrict or withdraw a notification in certain cases, via secondary legislation, in line with similar provisions of the Medical Devices Regulation.

An important part of the AI Act will be implemented through harmonised standards, which will define how general concepts in the regulation, such as ‘fairness’ and ‘security’, apply in practice to artificial intelligence.

When a system follows such standards, it will be presumed to be in conformity with the regulation. Similarly, a new article clarifies that this presumption of conformity would also apply to notified bodies that follow the harmonised standards.

Notified bodies across the bloc will have to coordinate their conformity assessment procedures in a dedicated working group, but only for high-risk AI systems. The national authorities will ensure that the bodies cooperate effectively.

There is also the possibility for conformity assessment bodies from countries outside the EU to be authorised by a national authority to carry out the same work as EU notified bodies, as long as they meet the requirements set out in the regulation.

Flexibility for countries

The text regarding the designation of national competent authorities has been changed to give EU countries more flexibility in organising such authorities according to their needs, as long as the principles of objectivity and impartiality are respected.

The French Presidency is proposing to make the provisions on how EU countries should inform the Commission about the designation process less prescriptive.

The part mandating adequate resources for competent authorities has also been made less binding, and the frequency of reporting on the state of the national authorities’ resources has been reduced.

High-risk database and reporting

The article on the EU database of high-risk AI systems now also covers high-risk systems tested in real-world conditions as part of a regulatory sandbox, mandating that the prospective provider should enter them into the database.

Moreover, a new paragraph indicates that the information in the EU database on high-risk systems already on the market should be accessible to the public. For systems being tested in a regulatory sandbox, the information will not be made public unless the provider consents. The text clarifies that the EU database will not contain personal data.

The article on the supervision of systems already placed on the market has been clarified to apply only to high-risk systems. Providers have also been given more flexibility in collecting, documenting and analysing the data relevant to conformity assessment.

The malfunctioning of high-risk systems has been excluded from the reporting obligations on serious incidents. At the same time, the text broadens the possibility of extending these notification obligations to financial institutions that might fall under the scope of high-risk systems at a later stage.

[Edited by Nathalie Weatherald]
