Stakeholders weigh in on EU’s future AI plans

President of the European Commission Ursula von der Leyen wears AR goggles as she tests the invention 'Sara', a computer-assisted surgery (CAS) device, during a visit to the AI Xperience Center at the VUB (Vrije Universiteit Brussel) in Brussels, Belgium, 18 February 2020. [EPA-EFE/STEPHANIE LECOCQ]

A cross-section of stakeholders has responded to the EU’s plans to build an ethical framework for the development and deployment of next-generation Artificial Intelligence technologies, highlighting concerns ranging from the use of biometric technology to the operation of Automated Decision Making (ADM) software.

Sunday (14 June) marked the deadline for stakeholders to submit feedback on the European Commission’s White Paper on Artificial Intelligence, published in February.

The paper had called for AI technologies carrying a high risk of abuse to be subject to new requirements, earmarking a series of ‘high-risk’ technologies for future oversight, including those in ‘critical sectors’ and those deemed to be of ‘critical use.’

Critical sectors, the Commission said at the time, include healthcare, transport, policing, recruitment, and the legal system, while technologies of critical use include systems carrying a risk of death, damage or injury, or with legal ramifications.

In its submission to the public consultation launched after the publication of the White Paper, rights group Access Now renewed calls for an explicit ban on the use of certain technologies, including indiscriminate biometric surveillance software, facial emotion analysis applications, and the use of AI systems at borders.

Moreover, the group called for EU member states to establish public registers of Artificial Intelligence and Automated Decision Making systems used by the public sector and, in certain cases, the private sector.

On the other side of the coin, the Center for Data Innovation, a research group, hit out at the Commission’s intention to impose conformity assessments on high-risk Artificial Intelligence applications, calling them “onerous and counterproductive.”

The Computer and Communications Industry Association struck a similar note, calling for the EU to avoid “lengthy, bureaucratic approval processes.”

Meanwhile, the European Consumer Organisation BEUC said the ‘risk-based’ approach should be broadened and the EU should adopt a more inclusive stance, applying the precautionary principle when legislating for AI and ADM more generally.

On the subject of demarcating ‘high-risk’ technologies, tech lobby the Information Technology Industry Council said the definition of such technologies should “consider context-specific factors such as complexity of the AI system, or the probability and irreversibility of harm caused in worst-case scenarios.”

Meanwhile, a contingent of cities, think tanks and research institutes, joined by the open-source software company Mozilla, submitted a letter to the Commission highlighting concerns related to the procurement of certain AI applications, and the transparency issues around the use of certain technologies contracted for use by public administrations.

International AI alliance launch

Elsewhere in the field of AI, members of an international alliance launched a new initiative on Monday (15 June), entitled the Global Partnership on Artificial Intelligence.

The group counts as its members Australia, Canada, France, Germany, India, Italy, Japan, Mexico, New Zealand, the Republic of Korea, Singapore, Slovenia, the United Kingdom and the United States of America. The EU’s membership is still pending.

The Secretariat of the alliance will be hosted by the Organisation for Economic Cooperation and Development (OECD) in Paris, and the group aims to “guide the responsible development and use of AI, grounded in human rights, inclusion, diversity, innovation, and economic growth.”

[Edited by Zoran Radosavljevic]