EU seeks ‘clear criteria’ for use of biometric AI on mass scale

Bank staff demonstrate a biometric scanner as an alternative to a traditional ATM card PIN during the 21st World Congress on Information Technology (WCIT) in Taipei, Taiwan, 11 September 2017. [EPA-EFE/RITCHIE B. TONGO]

Any future mass-scale rollout of biometric identification systems in the EU should be subject to “clear criteria”, according to a recently leaked draft of the EU’s Artificial Intelligence strategy seen by EURACTIV.

The document, an update on an earlier leaked version, has also scrapped the idea of a temporary ban on facial recognition technologies in public spaces.

The document notes that the lack of information about the use of biometric identification systems prevents the Commission from making a broad analysis of the implications of this technology, which analyses a person’s physical features for computational purposes.

“This assessment will depend on the purpose for which the technology is used and on the safeguards in place to protect individuals,” the document states. “In case biometric data are used for mass surveillance, there must be clear criteria about which individuals should be identified.”

In addition, the new draft states that ‘key elements’ of a future regulatory framework for artificial intelligence in Europe should be built on an ‘ecosystem of trust.’

“Thus, the ecosystem of trust should give citizens the confidence to welcome artificial intelligence and give companies the legal certainty to innovate with artificial intelligence,” the paper states.

The completed paper on Europe’s Artificial Intelligence strategy is due to be presented by the EU’s Digital tsar Margrethe Vestager on 19 February.

Speaking to lawmakers in the European Parliament’s legal affairs committee on Monday (27 January), Vestager said Artificial Intelligence technologies deployed in the public sector should be held to “particularly high standards when it comes to transparency and accountability,” regardless of whether the technology is deemed ‘high-risk’ or not.

She also warned that “serious concerns” may emerge in the use of certain Artificial Intelligence technologies, such as facial recognition.

Facial recognition “may be used in ways that would raise serious concerns when it comes to data protection, but also to fundamental values such as the right to assemble,” said Vestager, who is also the EU’s antitrust Commissioner.

Earlier this week, the UK government laid out its own approach to the deployment of Artificial Intelligence within public institutions, highlighting a series of risk areas that should be addressed and emphasising that any intelligent systems should be compatible with the EU’s General Data Protection Regulation and the UK’s 2018 Data Protection Act.

Meanwhile, US officials have been heavily lobbying Brussels to adopt a softer approach to the future regulation of Artificial Intelligence technologies.

On Tuesday (7 January), the White House put forward a set of regulatory principles aimed at avoiding the overregulation of Artificial Intelligence technologies in the private sector.

In a statement, the US said AI regulation should not be pursued until risk assessment exercises and cost-benefit analyses have been carried out, adding that it hoped its European counterparts would adopt a similar approach.

“Europe and our allies should avoid heavy-handed innovation-killing models, and instead consider a similar regulatory approach,” the statement from the White House read.

[Edited by Zoran Radosavljevic]
