A series of “critical concerns” in the development of Artificial Intelligence may have unforeseen “high-impact” ramifications in the future, a European Commission-led project has suggested.
Published on Monday (8 April), the Ethics Guidelines for Artificial Intelligence set out the key requirements for trustworthy AI, touching upon some of the more pressing concerns surrounding the future of the technology. Detail on the future negative impacts of AI is, however, rather lacking.
The guidelines were created by the High-Level Group for AI, set up by the Commission in 2018.
The report states that “long-term” concerns can be “hypothesised,” and cites “Artificial Consciousness, Artificial Moral Agents, Super-intelligence or Transformative AI” as examples of such long-term issues.
The document goes on to say that “a risk-based approach suggests that these concerns should be kept into consideration in view of possible unknowns and ‘black swans’,” which refer to very rare, unforeseen and high-impact events.
Due to this risk, the High-Level Group calls for “regular assessment of these topics.”
More generally, the future risks outlined in Monday’s publication echo those in the draft report released in December. These include concerns over citizen scoring systems, covert Artificial Intelligence, facial-recognition technologies and lethal autonomous weapons systems.
However, the specific ways in which the EU could assuage these concerns were not addressed in the report.
To find out why, EURACTIV talked to Ursula Pachl, deputy director of BEUC, the European Consumer Organisation, and a member of the High-Level Group on AI. She revealed that due to the “unbalanced” composition of the 52-member group, certain “negative” issues that may deter investment were pushed to the sidelines of the report.
“After all the talk of the ‘diversity’ within this group, it is not well balanced,” she said. “There are too few representatives from civil society and too many from private industry.”
“This has led to a downgrading of the report’s focus on the risks and potential future vulnerabilities of using Artificial Intelligence.”
As for the report’s potential to earmark the EU as a leader in AI, Pachl noted that while it is “good that the EU has a model of trustworthiness in place…ethics will never be enough,” adding that she hopes for clear, enforceable rights in the form of a regulatory framework for AI.
Indeed, one discernible trend in the EU’s general legislative trajectory is that regulation often starts with voluntary mechanisms or guidelines, such as the AI ethics report.
Lian Benham, vice-president for government affairs at IBM, told EURACTIV that the Commission’s current approach is welcome and that a regulatory framework from the outset would not have been the right thing to do.
“The plans currently do not represent a prescriptive regulatory approach, this is because we must remember that we are still in the foothills of this journey, we’re not quite sure yet of the future challenges that AI may throw up,” he said.
“For regulation to come at this early stage it would be too hard and too early. However, we’ve got no doubt that in the future, should further critical risks be identified, the Commission will, of course, consider a regulatory approach,” he added.
Benham’s comments were echoed by Senior Policy Analyst at the Center for Data Innovation, Eline Chivot, who said the “new ethics guidelines are a welcome alternative to the EU’s typical ‘regulate first, ask questions later’ approach to new technology.”
In its bid to turn the EU into a world leader in AI, the Commission is now launching a large-scale pilot phase for feedback from stakeholders based on the advice set out in the guidelines, with a long-term view to “bring this approach to AI ethics to the global stage”.
Digital Commissioner Mariya Gabriel hopes this will cultivate interest in the plans beyond the EU.
“We now have a solid foundation based on EU values and following an extensive and constructive engagement from many stakeholders including businesses, academia and civil society,” a statement from Gabriel read on Monday.
“We will now put these requirements to practice and at the same time foster an international discussion on human-centric AI.”
Building on the feedback received, the Commission’s AI expert group will review the assessment criteria in early 2020, in order to consider any next steps and potential regulatory measures.
[Edited by Zoran Radosavljevic]