Europeans still anxious about AI facial recognition

Panellists at the Microsoft event AI Facial Recognition Technology held in Brussels on 5 June 2019. [Oliver Anbergen]

Technology experts are usually among the first to embrace new and emerging digital tools. But that idea was put to the test at a stakeholders’ gathering about artificial-intelligence–enabled facial recognition this week at the Microsoft Center in Brussels.

Asked who in the audience was comfortable with facial recognition, only a smattering of people raised their hands. Asked who was uncomfortable, more than half the room put up theirs.

It was a discomfort reflected in the panel of experts assembled to talk about the issue. Christian D’Cunha, head of office for Giovanni Buttarelli, the European Data Protection Supervisor, said the EU needs to address this public discomfort by asking some key questions:

“We need to discuss whether to have it before we discuss how,” he said. “What is facial recognition for? Do we really need it? Who benefits from it? Do the benefits outweigh the harms?”

But other panellists pointed out that the cat is already out of the bag. Cornelia Kutterer, senior director for EU Government Affairs at Microsoft, said she wanted to challenge the people who had raised their hands indicating discomfort with facial recognition by pointing out that they probably already use the technology to unlock their iPhones.

A facial recognition system identifies or verifies a person from a digital image or a video by comparing facial features extracted from the image with a database of known faces. It is the database that has many people worried, since the information can be collected without people's consent from, for example, surveillance cameras on the street.

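That description boils down to comparing a numeric signature of a face against stored references. Below is a minimal illustrative sketch in Python of that matching step, assuming the face embeddings (the numeric vectors a face-encoding model would produce) are already available; the 128-dimensional vectors, the names and the 0.6 threshold are hypothetical placeholders, not details from the article.

    import numpy as np

    def cosine_similarity(a, b):
        # How closely two face embeddings point in the same direction (1.0 = identical).
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def identify(probe, database, threshold=0.6):
        # Return the best-matching identity in the database, or None if nothing clears the threshold.
        best_name, best_score = None, threshold
        for name, reference in database.items():
            score = cosine_similarity(probe, reference)
            if score > best_score:
                best_name, best_score = name, score
        return best_name

    # Hypothetical 128-dimensional embeddings standing in for the output of a face-encoding model.
    rng = np.random.default_rng(0)
    database = {"alice": rng.normal(size=128), "bob": rng.normal(size=128)}
    probe = database["alice"] + rng.normal(scale=0.05, size=128)  # a slightly noisy capture of "alice"
    print(identify(probe, database))  # -> "alice"

The privacy concern in the article sits precisely in the database dictionary above: once reference embeddings are stored, anyone captured on camera can be matched against them without their knowledge.
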
Thus far, the technology has mostly been confined to access control in security systems, replacing earlier authentication measures such as fingerprint scanning. But it is increasingly being used outside of security, for example in crowd monitoring, searching for criminals, or helping blind people recognise emotional cues in people's facial expressions.

But some of the stories about this expanded usage have frightened the public, particularly the Chinese government's reported use of the technology for public surveillance.

“It’s poignant to have this conversation now because it’s been 30 years since the Tiananmen Square massacre,” said D’Cunha. “Now to get into that square you need your eID scanned. If there’s any trace of you being involved in dissident activity, you’re taken away.” Beijing is moving full speed ahead with using facial recognition for state surveillance, he said.

GDPR application

Kutterer said she was aware that Europeans have concerns, but voiced the belief that the EU can put in place a legislative framework that protects people's privacy and guards against abuses, using the General Data Protection Regulation (GDPR) as a basis.

“We’ll probably see in the course of the next year the compliance and enforcement of GDPR in these spaces,” she said. “But there are also areas that GDPR doesn’t address. That’s why we need third-party testing”.

“I think we can deploy AI scenarios in a GDPR compliant way,” she added. “It’s able to have wonderful safeguards, but there are areas where other human rights like human dignity, or the private sphere, or non-discrimination, that we might want to think through a bit more and look at whether they require secondary laws to reinforce them.”

Microsoft has recently released a transparency tool for the use of its technology.

But Doru Peter Frantescu, a member of the EU forum European AI Alliance, said there is a risk of rushing too fast with regulation. “We first have to give some time for companies to comply with GDPR before coming up with something on top of that and making things even more complicated,” he said. “With any piece of regulation, you have to give it time.”

In April, the EU’s High-Level Expert Group on AI put forward a set of recommendations and ethical principles. These addressed facial recognition and suggested practical tools for using and developing such applications.

Joris van Hoboken, a professor of law at the Free University of Brussels who teaches on AI, said that at a recent conference he attended in China, it was clear the country is taking a different direction on regulation.

“The message I got was, we know the Europeans and the Americans have their way of dealing with it, and we’re going to have our own way of dealing with it. But there’s a lot of Chinese companies that want to operate globally and enter the European market.” Those Chinese companies will have to comply with EU data privacy law.

AI bias

But even with a legislative framework in place, many in the room raised concerns about racial and gender bias being inevitably built into these systems.

“The thing we’re seeing already with a lot of live AI systems up and running, and especially facial recognition technologies being used around the world, is that they are too often used as a higher-tech way of doing something that was already wrong,” said Guillermo Beltrà, a policy director with Access Now, a group that works to protect vulnerable communities from new technologies.

“We’re seeing examples of marginalised communities being followed or monitored in their communities just because they might be communities with higher indexes of crime,” he said.

“We’re seeing also, while police shouldn’t be using facial recognition technology to do positive IDing, they are using that and using it in a way that is not necessarily compatible with the laws we have right now”.

Van Hoboken agreed that it is a problem, and perhaps more so in Europe than in America. “You see that anti-discrimination groups in Europe aren’t included in tech policy debates,” he said. “In the US people of colour are being more included in these discussions. We lack that representation in Europe and we really need to address that.”

“There’s really an overrepresentation of black and brown people in government databases. They can be set up with the purpose of dealing with particular issues of migrant groups. We really have some issues with the representation of data.”

Kutterer said that for Microsoft’s part, these considerations are being built in at the time of technology development. “The one thing that we are driving for in this space is also diversity not only in the data but also in the workforce, in order to ensure that when these techniques are developed it reflects the broader society,” she said.

“We need to go back to the point that this technology should augment human decisions and not replace them.”

As facial recognition technology continues to develop, lawmakers and private companies will likely have to do more to reassure the public that its applications will not spiral out of control.

[Edited by Zoran Radosavljevic]
