The future for next-generation AI in the EU

President of the European Commission Ursula von der Leyen visits the AI Xperience Center at the VUB (Vrije Universiteit Brussel). [Shutterstock]

Samuel Stolton | Euractiv | 16-02-2021

Artificial Intelligence technologies in the EU are set to come under the scope of new legislation that the European Commission aims to put forward in April.

This comes after a protracted period of policy consultation on the best direction for the EU to pursue in the field, with risks and benefits being weighed against one another in equal measure.

There has been no shortage of those calling for the EU to equip itself with the resources to compete more seriously on the world stage in the development of innovative AI technologies. At the same time, some stakeholders have been keen to remind the EU of the importance of preserving fundamental rights, including privacy, and of the potential pitfalls that may arise should risks be overlooked.

This policy brief analyses the current state of play in the EU with regard to the foreseen regulatory frameworks for AI, the different positions thus far adopted by policy stakeholders, and ultimately the future for next-generation AI in the bloc.

Background and Issues

The onset of next-generation Artificial Intelligence (AI) applications in Europe presents new regulatory challenges, particularly with respect to technologies deemed to pose risks to existing legal frameworks, fundamental rights, and ethics.

The scope of potential challenges is broad, with many AI technologies already featuring prominently in our everyday lives: algorithms deciding the fate of our loan applications, recognising faces on public streets, flagging potentially illegal content online, targeting adverts to individual profiles, estimating the outcome of elections, and even being deployed in warzones the world over to highlight areas of potential hazard and risk.

The use of algorithms to supplement human intelligence received a boost at the turn of the millennium, with the onset of machine learning techniques and the realisation among technologists that 'Big Data' could be exploited in predictive models, effectively providing machines with the vast pools of information required to make complex decisions in real time.

In this vein, access to and utilisation of data streams is of vital importance to the operation of AI technologies. While the EU has strict safeguards on the use of personal data for this purpose under its General Data Protection Regulation, it is now seeking to harness the power of industrial data sharing as a means to boost competitiveness with other global players in the field of data-driven innovation.

Meanwhile, the EU seeks to mitigate the potential pitfalls of utilising swathes of data for Artificial Intelligence applications as part of a new regulatory approach for AI. February 2020's White Paper on AI, presented by the Commission, put forward a series of new measures that the EU intends to introduce as a means of tackling Artificial Intelligence technologies deemed to be of ‘high risk’. It is these technologies in particular that the EU will home in on as part of a new regulatory environment for Artificial Intelligence.

While certain member states across the bloc, including Germany, would like to see broader rules laid out, AI technologies will be expected to abide by a new series of rules that respect European values while also fostering innovation, a balance that is easier said than done. This difficulty is further exacerbated by the fact that cultural differences the world over will impact the functionality of algorithms used in AI, particularly with regard to ethics, which can often differ depending on cultural context. In this regard, the EU is hoping to standardise a distinctly 'European', human-centred approach to ethics in AI.

Positions

Commission 

In February 2020, the European Commission published its 'White Paper' on Artificial Intelligence, which laid the groundwork for new rules covering AI technologies deemed to be of ‘high risk.’ As part of the roadmap, the EU executive noted that certain technologies would be earmarked for future oversight, including those in ‘critical sectors’ and those deemed to be of ‘critical use.’

Those under the critical sectors remit include healthcare, transport, police, recruitment, and the legal system, while technologies of critical use include those with a risk of death, damage or injury, or with legal ramifications.

On presenting the plans in February 2020, Commission President Ursula von der Leyen said that "high-risk AI technologies must be tested and certified before they reach the market."

Sanctions could be imposed should certain technologies fail to meet such requirements. Such ‘high-risk’ technologies should also come “under human control,” according to Commission documents.

For areas deemed not to be high-risk, an option could be to introduce a voluntary labelling scheme that would highlight the trustworthiness of an AI product by virtue of the fact that it meets “certain objective and standardised EU-wide benchmarks.”

Another area in which the Commission will seek to provide greater oversight is the use of potentially biased data sets that may negatively impact demographic minorities.

In this field, the executive has outlined plans to ensure that unbiased data sets are used in Artificial Intelligence technologies, avoiding discrimination against under-represented populations in algorithmic processes.

However, the Commission held back on introducing strict measures against facial recognition technologies. A leaked version of the document had previously floated the idea of putting forward a moratorium on facial recognition software.

The Commission instead opted to “launch an EU-wide debate on the use of remote biometric identification,” of which facial recognition technologies are a part.

However, more recently, the Commission has not ruled out a future ban on the use of facial recognition technology in Europe, mulling over a public consultation on the subject. 

Speaking to MEPs on the European Parliament’s Internal Market Committee in September, Kilian Gross of the Commission’s DG Connect said all options were still on the table. 

Gross, who heads DG Connect’s Technologies and Systems for Digitising Industry Unit, also noted that the EU’s general data protection regulation (GDPR) covers the processing of biometric data in certain cases, but that the Commission would also examine whether the GDPR is sufficient with respect to data acquired from facial recognition technology.

The Commission's work in this context leads up to a legislative 'follow-up' to the White Paper, which the Commission will present on 21 April. Rather than introduce hard regulation, however, the legislation is likely to present clarifications on the uses and types of certain Artificial Intelligence technologies that fall into the 'high-risk' bracket and therefore require greater oversight.

Moreover, the Commission would also like to exercise its influence on the world stage in the field of AI. Speaking at a recent online event, Kim Jørgensen, head of cabinet of Executive Vice-President Margrethe Vestager, said that now was the right time for the bloc to pursue a transatlantic accord with the United States on Artificial Intelligence, following Joe Biden's inauguration as the new US president.

Parliament

While the Commission has been drafting legislation for Artificial Intelligence, there has been no shortage of reports from the European Parliament in the field.

In October last year, Parliament adopted three wide-ranging texts on the subject. A report from Spanish S&D MEP Iban García del Blanco urged the Commission to present a new legal framework outlining the ethical principles to be used when developing, deploying and using artificial intelligence, robotics and related technologies in the EU.

German EPP MEP Axel Voss’s text calls for a future-oriented civil liability framework to be adapted, making operators of high-risk AI strictly liable for any damage caused. French Renew MEP Stéphane Séjourné underlines the key issue of protecting intellectual property rights (IPRs) in the context of artificial intelligence.

And more reports are in the offing from Parliament, with attention also paid to the use of Artificial Intelligence in criminal law matters, as well as the use of AI in education, culture and the audiovisual sector.

Meanwhile, Parliament has further signalled its intention to probe the potential risks and benefits of artificial intelligence technologies by setting up its own Special Committee on Artificial Intelligence in June 2020.

On some of the more controversial issues surrounding biometric AI, Parliament has adopted an unambiguous stance, with consecutive reports highlighting the potential pitfalls at play. This was evidenced most recently in a resolution led by Identity and Democracy MEP Gilles Lebreton, adopted by Parliament in January this year, which called on the Commission to duly consider a moratorium on facial recognition software until all fundamental rights concerns have been taken into account.

Council

Member states have thus far adopted divergent approaches to the future of AI regulation in Europe. On one side are EU nations highly sensitive to future risks, particularly in the field of data protection and the processing of biometric data streams. On the other is a contingent of EU member states determined to pursue innovation in the field of AI as a means to better compete with other actors on the world stage.

In this latter group, October 2020 saw no fewer than fourteen member states come forward with their plans for the future of Artificial Intelligence, urging the European Commission to adopt a “soft law approach”.

In a position paper spearheaded by Denmark and signed by digital ministers from other EU tech heavyweights such as France, Finland and Estonia, the signatories call on the Commission to incentivise the development of next-gen AI technologies, rather than put up barriers.

“We should turn to soft law solutions such as self-regulation, voluntary labelling and other voluntary practices, as well as robust standardisation process, as a supplement to existing legislation that ensures that essential safety and security standards are met,” the paper noted.

“Soft law can allow us to learn from the technology and identify potential challenges associated with it, taking into account the fact that we are dealing with a fast-evolving technology,” it continued.

Along with Denmark, the signatories of the paper include Belgium, the Czech Republic, Finland, France, Estonia, Ireland, Latvia, Luxembourg, the Netherlands, Poland, Portugal, Spain and Sweden.

However, the softer policy angle could come into conflict with some of the other positions adopted by EU countries.

Germany, for example, is concerned that the Commission only wants to apply restrictions to AI applications deemed to be high-risk, and would prefer a much broader scope for the technologies subject to new rules.

Berlin is also concerned that the Commission’s current plans would lead to a situation in which “certain high-risk uses would not be covered from the outset if they did not fall under certain sectors.”

Moreover, Germany’s June position also made clear reference to the risks to civil liberties posed by remote biometric identification technology, noting how it could encroach on fundamental rights.

Timeline

February 2020: European Commission publishes its White Paper on AI.

February-June 2020: European Commission runs public consultation on AI.

June 2020: European Parliament establishes Special Committee on Artificial Intelligence.

October 2020: European Parliament adopts three resolutions on Artificial Intelligence, covering ethics, civil liability, and intellectual property (IP).

April 2021: European Commission to present legislative follow-up to AI White Paper.

Further Reading

LEAK: Commission considers facial recognition ban in AI ‘white paper’ (EURACTIV)

High-risk Artificial Intelligence to be ‘certified, tested and controlled,’ Commission says (EURACTIV)

Stakeholders weigh in on EU’s future AI plans (EURACTIV)

Vestager warns against predictive policing in Artificial Intelligence (EURACTIV)

MEPs chart path for a European approach to Artificial Intelligence (EURACTIV)

Germany calls for tightened AI regulation at EU level (EURACTIV)

EU nations call for ‘soft law solutions’ in future Artificial Intelligence regulation (EURACTIV)

Commission will ‘not exclude’ potential ban on facial recognition technology (EURACTIV)

MEPs mull over ‘significant risks’ of Artificial Intelligence (EURACTIV)

White Paper on Artificial Intelligence, European Commission.

Consultation on Artificial Intelligence - results, European Commission

Resolution on a civil liability regime for artificial intelligence, European Parliament Report.

Resolution on a framework of ethical aspects of artificial intelligence, robotics and related technologies, European Parliament report.

Resolution on intellectual property rights for the development of artificial intelligence technologies, European Parliament report.


European Parliament decision of 18 June 2020 on setting up a special committee on artificial intelligence in a digital age, European Parliament.
