Should the EU embrace artificial intelligence, or fear it?

EU Commission President-designate Ursula von der Leyen (C) arrives to attend the conference of Presidents of the European Parliament in Strasbourg, France, 19 September 2019. [Patrick Seeger/EPA/EFE]

As Ursula von der Leyen took office as the new President of the European Commission this week, she said her administration will prioritise two issues above all: guiding Europe through the energy transition in response to climate change, and guiding it through the digital transition in response to new technologies.

On the latter, she has her work cut out. “Digitalisation is making things possible that were unthinkable even a generation ago,” she told the European Parliament ahead of her approval last week.

“To grasp the opportunities and to address the dangers that are out there, we must be able to strike a smart balance where the market cannot. We must protect our European well‑being and our European values. In the digital age, we must continue on our European path.”

Many MEPs understand the European path to be a deliberate contrast to that of America, which has taken a light-touch approach to regulating the internet and digital technology. Brussels has stepped into that regulatory vacuum with laws and standards, such as the General Data Protection Regulation, that are now affecting the whole world.

At a EURACTIV event on Friday in Brussels, experts were divided over how aggressive von der Leyen should be in regulating artificial intelligence and data usage in order to protect European citizens.

“We need to be careful when we set the regulation so it doesn’t stifle innovation,” said Kristof Terryn, group chief operating officer at Zurich Insurance. The insurance industry is becoming heavily involved in artificial intelligence, where it is using algorithms in many areas, including loss prediction and claims handling.

This innovation could be stifled if EU regulation becomes too onerous. “We keep talking about the risks of AI, but there are massive benefits as well,” he said. As an example, he pointed to new technology in Japan which can automatically and immediately compensate people affected by an earthquake.

The EU should be careful that it only regulates the applications of AI that are deemed risky, not all of it, he said.

Eline Chivot, a senior policy analyst at the Centre for Data Innovation, agreed. “There are worrying signals that the EU is falling behind the US and China in technology development – only 20% of European SMEs demonstrate digital intensity,” she said. “Ethical discussions shouldn’t sidetrack us from the competitiveness discussion.”

But Jennifer Baker, a digital rights activist and EU privacy policy correspondent for IAPP, said that, for her, civil rights and data protection are far more important than companies’ profits.

She said major questions are going unaddressed even as they already affect citizens without their knowledge: who owns the data, how AI systems process it, and how bias and discrimination are being embedded into those systems.

“We don’t even know how many of these things are already out there because there’s been a rush to make profits,” she noted.

Wojciech Wiewiorowski, the acting European Data Protection Supervisor, said his office takes data protection very seriously, and that it will be essential for the EU to define exactly what AI is and which types of applications could cause harm.

“Transparency is important,” he said. “If you include bias, we need to know that the system operates that way.”

The bias is being built in by humans, said Baker. “The best way to combat bias is to eliminate bias from society,” she said. “Right now we have the chance to tackle the bias before it’s hardwired into AI.”

There have already been reports of AI systems used in policing showing bias against minorities. Questions have also arisen about situations where a self-driving car heading for a crash must choose which car, cyclist, or pedestrian to swerve into – a situation where bias might dictate that it hits an older person instead of a younger one.

“The reason we’re talking about bias is because it reflects our own bias as human beings, and that’s difficult to accept,” said Chivot. “But sometimes bias is a good thing.” She cited as an example when AI might be used to select vulnerable populations for a medical study or disease treatment.

Some on the panel felt that the EU’s efforts to deal with AI specifically so far – for instance, a high-level group’s guidelines published earlier this year – have been underwhelming. “There’s not a lot to disagree with there, they say they don’t want bias and want human beings at the centre,” said Baker.

“When you flip these statements on their head, it’s utterly ridiculous. We want bias? We don’t want humans at the centre?”

Wiewiorowski said he liked the document but agreed it can be difficult to precisely define what ethics are. “I don’t know the definition of ethics but I think we can all agree that there is something we can call ‘general ethics’.”

As Ursula von der Leyen begins her term, technology industry stakeholders will be watching the Commission closely to see which direction it takes in regulating these developing technologies and the use of data. But if her speech to the European Parliament last week is anything to go by, she is not afraid to plough ahead with regulation.

“For us, the protection of a person’s digital identity is the overriding priority,” she told MEPs. “We have to have stringent security requirements and a unified European approach.”

[Edited by Zoran Radosavljevic and Samuel Stolton]
