Artificial Intelligence technologies deployed in the public sector should be held to “particularly high standards when it comes to transparency and accountability,” the European Commission’s Vice-President in charge of digital policy, Margrethe Vestager, said on Monday (27 January).
Speaking to lawmakers in the European Parliament’s legal affairs committee, Vestager also warned that “serious concerns” may emerge in the use of certain Artificial Intelligence technologies, such as facial recognition.
Facial recognition “may be used in ways that would raise serious concerns when it comes to data protection, but also to fundamental values such as the right to assemble,” said Vestager, who is also the EU’s antitrust commissioner.
The European Commission will address those concerns in a forthcoming AI White Paper to be published on February 19, the commissioner added.
Vestager’s comments come after EURACTIV recently revealed that the European Commission is considering measures to impose a temporary ban on facial recognition technologies used by both public and private actors. Currently, however, it is not the executive’s preferred course of action.
More broadly, the Commission is looking at a range of issues related to the future rollout of AI in Europe, such as the “quality and traceability of data,” Vestager told MEPs.
This means that firms may be required to provide more information on the algorithms that drive AI technologies, as well as on the data flows that branch out from the application of such devices.
In addition, the Commission would like to see “transparency about the capability and also the limitations of artificial intelligent systems,” Vestager said, adding that further transparency is also required with regard to an AI device’s interactions with humans – so that an individual operating an AI system is fully aware of the technologies at play.
By default, AI technologies would be subject to current EU rules aimed at ensuring high standards across the fields of data protection, privacy, non-discrimination, gender protection, and product safety and liability, Vestager pointed out.
Her comments came on the same day that the UK government laid out its own approach to the deployment of Artificial Intelligence within public institutions, highlighting a series of risk-areas that should be addressed, and emphasising that any intelligent systems should be compatible both with the EU’s General Data Protection Regulation and the UK’s 2018 Data Protection Act.
Meanwhile, US officials have been lobbying Brussels heavily to adopt a softer approach to the future regulation of Artificial Intelligence technologies.
On Tuesday (7 January), the White House put forward a set of regulatory principles aimed at avoiding the overregulation of Artificial Intelligence technologies in the private sector. In a statement, the US said AI regulation should not be pursued until risk assessment exercises and cost-benefit analyses have been carried out, adding that the government hopes its European counterparts would adopt a similar approach.
“Europe and our allies should avoid heavy-handed innovation-killing models, and instead consider a similar regulatory approach,” a statement from the White House read.
The US’s Chief Technology Officer, Michael Kratsios, who has played a leading role in shaping the US position on the issue, met with Vestager following the US statement, after having earlier raised the issue with her at November’s Web Summit in Lisbon.
In Brussels, any regulatory approach to the deployment of Artificial Intelligence technologies may have to take shape quickly. The German Interior Ministry has recently announced plans to roll out automatic facial recognition at 134 railway stations and 14 airports, while France intends to establish a legal framework permitting video surveillance systems to be embedded with facial recognition technologies.
[Edited by Frédéric Simon]