French Presidency pitches changes to law enforcement provisions in the AI Act

An example of facial recognition technology used on passersby. [Trismegist san/Shutterstock]

A new compromise text on Artificial Intelligence (AI) and law enforcement marks further swift progress on the AI Act in the EU Council, with the French Presidency aiming for at least a partial general approach in June.

In the past week, the French Presidency of the Council of the EU has circulated several compromises on the Artificial Intelligence Act, a flagship proposal to regulate AI systems following a risk-based approach.

The AI regulation is a high priority for Paris, which is trying to make as much progress as possible before the end of its presidency. However, the French Presidency has sent mixed signals about what it wants to achieve at the Telecom Council on 3 June.

According to an EU diplomatic source, the idea is to reach a general approach on the entire text. By contrast, a second diplomat said the general approach would not be reached before the end of the year, and that the French government is aiming for a partial compromise on the provisions regarding innovation, standards and common specifications (Art. 40-55).

What is clear, however, is that the law enforcement aspect of the legislation has been a sticking point throughout the discussion, with Germany and Finland even calling for these provisions to be separated into a different file. A compromise seen by EURACTIV on Tuesday (5 April) outlines how the French diplomats are trying to address this aspect.

Biometric systems

On the sensitive use of biometric identification systems, the definition was modified to explicitly exclude ID verifications and checks, for instance, to pass through airport security or unlock a smartphone.

The text clarifies that real-time is intended as in “biometric identification system whereby the capturing of biometric data, the comparison and the identification all occur instantaneously or near instantaneously.”

The definition of ‘publicly accessible space’, where the prohibition for deploying facial recognition technologies applies, was also edited in search of greater clarity, with explanations and examples added to the text’s preamble.

The document significantly broadens the exception under which biometric recognition might be used, removing the specific example of missing children and the reference that the threat needs to be ‘imminent’. Ensuring the safety of physical infrastructure was added to the exception.

Moreover, the use of biometric identification systems to locate a criminal suspect was extended: the original text required the criminal offence to fall within the scope of a European arrest warrant, whereas the new version covers any criminal offence punishable by a detention period of at least five years.

Under a new article, the restrictions to facial recognition technologies would not apply when a person is unwilling or unable to identify him or herself in a situation that authorises police forces to carry out identity checks.

Conformity assessment

Significant changes were applied to the derogation from the conformity assessment procedure, which allows a market surveillance authority from any member state to authorise the placement of a high-risk AI system on the market upon receipt of a duly justified request.

In the original proposal, the Commission or another member state could challenge the authorisation within two weeks, with the EU executive acting as the final arbiter in case of diverging interpretations. This possibility was removed in France’s text.

In addition, a new article has been added: “to create a possibility to ask ex-post for authorisation for law enforcement authorities, in order to provide more flexibility for these authorities in case of specific urgencies.” In these cases, the authorisation needs to be asked for “without undue delay.”

“This is way too broad and potentially harmful. The purpose of that article was to have some small exceptions with at least some oversight. This is basically saying the police just do it anyway and ask later,” said Sarah Chander, a senior policy advisor at European Digital Rights.

Transparency obligations

Users of AI systems with emotion recognition technology would generally have to inform those who are being targeted. However, an exception for this transparency obligation has been included for criminal investigations.

Additionally, the text specifies that the use of emotion recognition will need to be clear and distinguishable at the time of first exposure, and that national governments can go further by introducing additional transparency requirements.

EU database for high-risk AI systems

The AI Act provides for an EU database with information on deployed high-risk systems. According to the new text, this information would not be publicly accessible for high-risk systems still in the testing phase, as it would be available only to the market surveillance authorities and the European Commission.

However, the text does not clarify which information will be made public and which will not. The annex to the document specifies that high-risk systems in the fields of law enforcement and migration would be excluded from the database.

Market monitoring and surveillance

A new article was added to specify that the post-market monitoring of a high-risk AI system does not compromise the confidentiality of law enforcement authorities’ investigations.

Finally, the French government rephrased the article designating the market surveillance authority responsible for supervising the use of AI systems by police agencies “to indicate that data protection authorities do not necessarily have to be the first choice in this respect.”

Next steps

The French compromise text will be discussed in a meeting with government representatives for justice and home affairs on Thursday (7 April).

[Edited by Nathalie Weatherald]