There should be “clear criteria” in the future mass-scale rollout of Biometric Identification Systems in the EU, a recently leaked draft of the EU’s Artificial Intelligence strategy seen by EURACTIV reveals.
The document, an updated version of an earlier leaked draft, has also scrapped the idea of a temporary ban on facial recognition technologies in public spaces.
The document notes that the lack of information about the use of biometric identification systems prevents the Commission from making a broad analysis of the implications of this technology, which analyses a person’s physical features for computational purposes.
“This assessment will depend on the purpose for which the technology is used and on the safeguards in place to protect individuals,” the document states. “In case biometric data are used for mass surveillance, there must be clear criteria about which individuals should be identified.”
In addition, the new draft states that “key elements” of a future regulatory framework for artificial intelligence in Europe should be built on an “ecosystem of trust.”
“Thus, the ecosystem of trust should give citizens the confidence to welcome artificial intelligence and give companies the legal certainty to innovate with artificial intelligence,” the paper states.
The completed paper on Europe’s Artificial Intelligence strategy is due to be presented by the EU’s digital tsar Margrethe Vestager on 19 February.
Speaking to lawmakers in the European Parliament’s legal affairs committee earlier this week, Vestager said Artificial Intelligence technologies should be held to “particularly high standards when it comes to transparency and accountability.”
She also warned that “serious concerns” may emerge in the use of certain Artificial Intelligence technologies, such as facial recognition.
Facial recognition “may be used in ways that would raise serious concerns when it comes to data protection, but also to fundamental values such as the right to assemble,” said Vestager who is also the EU’s antitrust Commissioner.
Earlier this week, the UK government laid out its own approach to the deployment of Artificial Intelligence within public institutions, highlighting a series of risk areas that should be addressed, and emphasising that any intelligent systems should be compatible with the EU’s General Data Protection Regulation and the UK’s 2018 Data Protection Act.
Meanwhile, US officials have been heavily lobbying Brussels to adopt a softer approach to the future regulation of Artificial Intelligence technologies.
On Tuesday (7 January), the White House put forward a set of regulatory principles aimed at avoiding the overregulation of Artificial Intelligence technologies in the private sector.
In a statement, the US said AI regulation should not be pursued until risk assessment exercises and cost-benefit analyses have been carried out, adding that the government hopes its European counterparts will adopt a similar approach.
“Europe and our allies should avoid heavy-handed innovation-killing models, and instead consider a similar regulatory approach,” the statement from the White House read.
[Edited by Zoran Radosavljevic]