The European Commission will seek to outlaw Artificial Intelligence systems used for “indiscriminate surveillance” operations as part of new prohibitions to be put forward next week.
As part of the draft regulation on a ‘European approach for artificial intelligence,’ seen by EURACTIV, the EU executive proposes to ban AI technologies that are used for “indiscriminate surveillance applied in a generalised manner to all natural persons without differentiation.”
The text describes such surveillance methods as including the “monitoring and tracking of natural persons in digital or physical environments, as well as automated aggregation and analysis of personal data from various sources”.
In addition, the Commission foresees prohibitions against the use of Artificial Intelligence applications that breach Union values or violate human rights. These include AI systems that manipulate human behaviour and predictive AI systems that target vulnerabilities.
Fines for violations of these prohibitions could reach up to 4% of a firm’s global annual turnover.
However, the draft, which was first reported on by Politico on Tuesday (13 April), stipulates that such prohibitions will not apply to EU governments and public authorities when the technologies are deployed “in order to safeguard public security.”
This means that EU governments could in the future justify the use of intrusive AI applications on security grounds.
The draft also bans the practice of ‘social scoring’ in AI applications, a technology made infamous by China’s centralised social credit-rating system.
Meanwhile, ‘high-risk’ AI applications that could come under the scope of new third-party conformity assessment requirements include those “used for the remote biometric identification of persons in publicly accessible spaces,” which would by definition also cover facial recognition technology – an area in which the Commission had previously mulled a temporary ban.
Other high-risk applications include those employed as safety components in essential ‘public infrastructure networks’, including roads and the supply of water, gas, and electricity.
Self-assessment conformity requirements apply to certain technologies used in emergency first-response services, recruitment processes, and systems “used for the purpose of determining access or assigning persons to educational and vocational training institutions.”
Elsewhere, the Commission also charts the establishment of a ‘European Artificial Intelligence Board,’ composed of one representative from each of the EU27 countries, a representative of the Commission, and the European Data Protection Supervisor.
The Board will be tasked with “issuing relevant recommendations and opinions to the Commission, with regard to the list of prohibited artificial intelligence practices and the list of high-risk AI systems.”
The proposal, due to be put forward by the Commission’s Executive Vice-President for digital, Margrethe Vestager, on 21 April, comes as a follow-up to the Commission’s White Paper on Artificial Intelligence from last year, which laid the groundwork for new rules against AI tech deemed to be of ‘high risk’.
More recently, a letter obtained by EURACTIV detailed how European Commission President Ursula von der Leyen had assured MEPs that the Commission would “go further” on introducing more robust rules for Artificial Intelligence technologies that pose a risk to fundamental rights.
This came after a cross-party letter from 116 MEPs called on the Commission to tackle risks to fundamental rights raised by high-risk AI applications. On publication of the Commission’s new regulation on AI next week, EU lawmakers will once again have the chance to air their views on the EU’s stance towards AI technologies.
[Edited by Zoran Radosavljevic]