The European Commission may “have to go further” on introducing more robust rules for Artificial Intelligence (AI) technologies that pose a risk to fundamental rights, a letter written by President von der Leyen to MEPs, obtained by EURACTIV, states.
In a written response to a recent cross-party letter from 116 MEPs which called on the Commission to tackle risks to fundamental rights raised by certain ‘high-risk’ AI applications, von der Leyen assured EU lawmakers that the executive would take their concerns into account when drafting upcoming legislation.
“I would like to assure you that the Commission takes your concerns regarding the protection of fundamental rights very seriously,” von der Leyen wrote in the letter, dated 29 March.
The Commission President also cited forthcoming legislation to be introduced by the Commission on 24 April, and said that it would aim to promote AI “that is transparent and trustworthy.”
“We envisage mandatory rules applicable to all AI systems that pose a high risk to the rights or safety of people. In the case of applications that would be simply incompatible with fundamental rights, we may need to go further,” von der Leyen said.
Follow-up to the AI White Paper
The Commission’s follow-up to last year’s White Paper on Artificial Intelligence will detail how ‘high-risk’ applications will be regulated at EU level. The initiative presented last year laid the groundwork for new rules on AI technologies deemed ‘high-risk,’ as well as those employed in critical sectors of the economy, including healthcare, transport, policing, recruitment, and the legal system.
Meanwhile, speaking to the European Parliament’s special committee on Artificial Intelligence last week, Khalil Rouhana, deputy director-general of the Commission’s DG Connect, noted that the 24 April follow-up on AI would set out a well-defined scope for ‘high-risk’ applications.
However, such a targeted approach runs the risk of dividing EU nations, with certain players, such as Germany, calling for a broader scope of technologies that can be regarded as ‘high-risk.’
On the other side of the debate, fourteen EU countries set out their position on the future regulation of Artificial Intelligence last year, urging the European Commission to adopt a “soft law approach.”
“We should turn to soft law solutions such as self-regulation, voluntary labelling and other voluntary practices as well as robust standardisation process as a supplement to existing legislation that ensures that essential safety and security standards are met,” the paper, spearheaded by Denmark and signed by digital ministers from other EU tech heavyweights such as France, Finland and Estonia, noted.
[Edited by Benjamin Fox]