The European Commission presented on Wednesday (21 April) its long-awaited ‘AI package’, setting out its ambition to make Europe a global leader in the field by being the first to set clear guidelines.
The Commission proposal is structured around developing trust in this technology by proposing the first EU legal framework intended to regulate artificial intelligence applications at European level. It also plans to launch a Coordinated Plan with national governments to boost investments in skills and infrastructure.
“On Artificial Intelligence, trust is a must, not a nice-to-have. With these landmark rules, the EU is spearheading the development of new global norms to make sure AI can be trusted,” said Margrethe Vestager, the EU executive’s Vice-President for Digital.
Following the so-called ‘risk pyramid’, most uses of AI are expected to pose minimal or low risk, while high-risk or harmful applications are considered less numerous but will be strictly regulated.
The most basic requirement is transparency. To prevent the risk of manipulation, it will need to be clear to users when they are interacting with a machine (e.g. a chatbot) or a machine-generated product (e.g. a deepfake), as well as when an AI-powered system is used to detect their emotions.
The draft regulation includes a series of obligations for AI applications considered ‘risky’, as they could have a direct impact on someone’s personal or professional life (e.g. mortgage risk assessment or a recruitment process). In these cases, the organisation using AI would need to ensure high-quality data, detailed documentation proving compliance with existing regulations, transparency and human oversight of the process, as well as a high level of accuracy and cybersecurity.
The Commission also proposes to prohibit certain AI uses considered incompatible with EU values. These include AI systems intended to cause harm or manipulate human behaviour, which are deemed an ‘unacceptable risk’. Similarly, ‘Chinese-style’ social scoring systems are also banned, a senior Commission official told reporters.
The proposal has already prompted widespread reactions from the stakeholder community. The European Consumer Association (BEUC) claims that the proposal is weak in terms of consumer protection, criticising the reliance on industry self-assessment.
However, the blueprint has been welcomed by the Business Software Alliance (BSA) lobby group, which stated that “even in cases where the use of AI may bear significant risks, appropriate safeguards are in place to maximize its benefits”.
But industry groups have not offered blanket support for the plan. DigitalEurope Director General Cecilia Bonefeld-Dahl said that “the inclusion of AI software into the EU’s product compliance framework could lead to excessive burden for many providers.”
The most controversial part of the proposal may be biometric identification in public spaces, which will be allowed only for law enforcement authorities in very specific cases and subject to ex ante approval from judicial authorities. These exceptions include kidnappings, terrorist attacks and the identification of criminal offenders.
The Greens/EFA group in the European Parliament has been a vocal opponent of any form of facial recognition in public spaces. According to Patrick Breyer, a lawmaker from the German Pirate Party, “the European Commission’s proposal would bring the high-risk use of automatic facial recognition in public spaces to the entire European Union, contrary to the will of the majority of our people.”
The proposal received a warmer welcome from Dragoş Tudorache of the liberal Renew Europe group, the Chair of the Special Committee on Artificial Intelligence in a Digital Age (AIDA), who said that it “is clearly and unequivocally centred around protecting European citizens and their rights, and it provides some much-needed initial clarity on what is and what is not allowed in artificial intelligence.”
[Edited by Benjamin Fox]