The European Commission announced on Wednesday (25 April) that it will invest €1.5 billion into artificial intelligence research over the next three years, and was promptly hit with criticism for drafting its strategy years after the United States and China started their own massive funding plans.
The new commitment marks a drastic increase in EU funding for AI, an emerging technology behind robotics and other digital services, from climate prediction tools to health applications that analyse large amounts of data.
On top of the €1.5 billion pledge, to be spent by 2020, the Commission plans to leverage funding from EU member states and private companies worth a total of €20 billion over the same period. After 2020, Brussels wants EU-wide funding to rise to €20 billion per year.
The funding plan is a significant change and would mean a 70% increase compared to current EU investments in AI. But for some tech policy observers, the Commission’s plan came too late. The US and Chinese governments started pumping public money into AI research two years ago, and private investments in both countries are much higher than in Europe.
“I hope the Commission will speed up, but it’s late. They could have done it two years ago,” said Mady Delvaux, a centre-left Luxembourgish MEP who drafted a European Parliament report on robotics in 2017. Delvaux’s opinion paper called for an EU response to issues relating to the legal responsibility of robots and autonomous vehicles.
The Commission plans to publish guidelines by the end of this year on liability, the transparency of companies’ algorithms and other ethical complexities relating to artificial intelligence. First, it will bring together a group of experts on AI by July who will spend the next few months drafting that document.
The EU strategy emphasises its “human-centric” approach to AI, and bills the bloc’s focus on ethics as a selling point for Europe that will give companies a competitive advantage over tech giants in the US and Asia.
“For investors, this ethical code will be needed. In case we don’t have this ethical code about human-centric artificial intelligence, somebody could invest huge amounts of money and then, later on, say, ‘You’ve created a Frankenstein,’” Commission Vice-President Andrus Ansip told reporters this week.
With EU guidelines, and later on, potentially legislation on AI ethics in place, companies could market their products—whether those are internet-connected home entertainment systems, autonomous cars or smart hospital devices—as safer and subject to human oversight, according to the Commission’s thinking.
Since the Commission’s current term ends next year, it will be up to the next administration to decide whether to propose new EU legislation regulating the technology. But Delvaux warned that member states might start drafting their own national rules before then.
She said the Commission should propose legislation applying to all member states now “to show its ownership of the European approach to the single market.”
Some member states are already ahead of the EU plans.
French President Emmanuel Macron announced a plan last month to invest €1.5 billion in AI by 2021. In its coalition agreement from February, the new German government promised to set up a research centre and to work together with France to advance the technology.
The Commission wants to encourage other member states to come up with similar national programmes.
Most EU countries have already signalled interest in promoting the technology. Earlier this month, 24 member states—all except Cyprus, Romania, Croatia and Greece—signed a declaration indicating they will consider investing public funds in AI.
Even though the Commission is not proposing binding legislation on AI yet, companies, legislators and consumer groups described the new strategy as a small first step towards creating an EU-wide approach to the technology.
“This is a Commission that will leave soon. For the new one that will come, it’s a signal that Europe should move in this direction,” said Georgios Petropoulos, a research fellow at the think tank Bruegel.
Petropoulos said there should be new legislation explaining how EU rules on data protection and other issues could affect AI.
Criticism of the Commission’s strategy reflects the range of industries that could be upended by the growth of AI: the technology is already used in consumer products, in government data analysis and, especially in the US and China, in weapons development.
Dutch Liberal MEP Marietje Schaake said the announcement lacked details on how defence and competition policies should address AI.
“A strong European approach is desperately needed to catch up with the US and China in this fast-moving environment,” Schaake said.
The consumer watchdog organisation BEUC urged the Commission to propose new legislation clarifying how the bloc’s liability rules will apply to defective robots and machines.
Monique Goyens, the campaign group’s director general, said companies that make products using artificial intelligence “must be held responsible when an automated car causes an accident, or a home robot causes damage. Current safety, security and liability rules are not sufficient and lack clarity”.
It will nonetheless be a while before any new EU legislation materialises.
The Commission said it will publish information in mid-2019 on how current EU liability rules could be used to regulate new technologies.