‘Adverse impacts’ of Artificial Intelligence could pave way for regulation, EU report says

German Chancellor Angela Merkel (2-L) and Swedish Prime Minister Stefan Löfven (2-R) look at robots at the FRANKA-EMIKA booth during their opening tour of the Hannover Industry Fair (Hannover Messe) in Hanover, Germany, 01 April 2019. [EPA-EFE/FOCKE STRANGMANN]

The EU should consider the need for new regulation to “ensure adequate protection from adverse impacts” in the field of Artificial Intelligence, a report published on Wednesday (26 June) by the Commission’s High-Level Group on AI says.

Wednesday’s report finds that areas in which AI may have ‘adverse’ consequences include biometric recognition, the use of lethal autonomous weapons systems (LAWS), AI systems built on children’s profiles, and the impact AI may have on fundamental rights.

The recommendations add that a set of “risk classes” should be established to determine a proportionate regulatory approach to the threats AI could pose. “The higher the impact and/or probability of an AI-created risk, the stronger the appropriate regulatory response should be,” the document states.

However, the EU’s Digital Commissioner Mariya Gabriel downplayed talk of a hard regulatory environment for AI, telling EURACTIV that the Commission agrees with the report’s general recommendation to avoid “prescriptive regulation” that “stops innovation and prevents applications bringing new products and services to the market.”

Meanwhile, members of the European AI Alliance, a forum comprising experts and stakeholders in the field, convened on Wednesday to discuss the 33 recommendations put forward by the High-Level Group.

EURACTIV heard from sociologist and Alliance member Mona Sloane, who said that the Commission’s narrative that regulation could stifle innovation is not being delivered entirely in good faith.

“Pitting regulation against innovation is not a productive approach unless the European Commission is looking to model its approach on the US and its deregulated tech industry,” Sloane said.

“I am not aware of any evidence suggesting that regulation actually does stifle innovation. Innovation and regulation should go hand in hand, and it is the European Commission’s mandate to actively explore what that can and must mean.”

In a similar vein, Ursula Pachl, member of the High-Level Group and Deputy Director General of BEUC, the European Consumer Organisation, said that the framing of the EU’s Artificial Intelligence debate has been skewed because “the discussion on ethics is often used as an overall cover to distract us from the core issue, which is to define the right legal framework.”

In a further comment pointing to the benefits of preemptive regulatory frameworks for AI, Pachl also said that the “reliance on ethics is part of a problematic narrative” because it suggests “trustworthy AI can be achieved through self-regulation and that we need first to look through the ethical prism before we could ever regulate.”

However, many in the High-Level Group concur with Commissioner Gabriel that a principles-based approach built on ethics and distanced from hard regulation should be adopted. Andrea Renda, Head of Regulatory Policy at the Centre for European Policy Studies, said that because technology tends to develop quickly, “prescriptive regulation makes little sense.”

Nonetheless, Renda believes that this should not necessarily result in deregulation, “just a different approach to setting rules, grounded in principles and focused on outcomes.”

The perspective is shared by the University of Bologna’s Francesca Ternullo, also a member of the AI Alliance, who told EURACTIV that the Commission is right to “avoid excessive detailed regulation,” adding that with “robust and enforceable guidelines, European Union countries – right now – do not need prescriptive regulations.”

More broadly, Wednesday’s report contains 33 recommendations, ranging from developing appropriate skills for AI to ensuring world-class research capabilities and raising funding and investment. The Commission has stated that over 300 organisations have expressed interest in piloting the recommendations in their own practice.

Feedback on the implementation of the recommendations will be collected until early December, with a view to producing a revised set of recommendations in early 2020, which may then inform any prospective decision to introduce a regulatory framework for AI.
