An internal report on Artificial Intelligence recently approved by a special committee of the European Parliament embodies a push from EU lawmakers and member states to make regulation on artificial intelligence less burdensome and more innovation-friendly.
Christian Democrat MEP Axel Voss has been leading the charge against “overburdening” companies with excessive regulation, arguing that the EU regulatory environment should leave more room for innovation.
That was the underlying motive of an own-initiative report on Artificial Intelligence in a Digital Age, recently approved in the AIDA committee, a parliamentary body set up in 2020, under Voss’ leadership.
“We need a better regulatory framework that learns also from the mistakes of the GDPR,” Voss said while presenting the report. Instead of overburdening companies, the AI Act should give clear guidance and should leave space for innovation, he added.
While the AIDA committee has no regulatory power, Voss, who is also an influential member of the legal affairs committee (JURI), might be gaining a key role on the AI Act, the EU’s flagship legislation meant to set international standards for regulating risks related to AI applications.
The leadership of Parliament’s internal market and consumer protection (IMCO) committee on the AI Act has been contested, and JURI might benefit from a reattribution of the file, with Voss likely to be given the lead. The conference of committee presidents is expected to take the final decision on the matter on Thursday (18 November).
However, concerns that the AI Act could hamper innovation are not limited to Christian Democrat MEPs. They were also voiced during the European Council meeting in late October, where German Chancellor Angela Merkel outlined the need for an “innovation-friendly regulation.”
Estonia is especially outspoken about not overburdening companies with regulations on AI.
“The scope of measures and the definition of AI systems is currently too wide,” Marten Kokk, Estonia’s deputy permanent representative to the EU, told EURACTIV.
The Baltic country also fears that the EU could fall further behind on private sector AI investment.
“It should be kept in mind that SMEs with moderate capacities might struggle to understand and follow constantly growing regulatory requirements – this is easier for big multinationals,” Kokk stressed, adding the EU should focus more on creating favourable conditions for the take-up of AI and improve the availability of high-quality data.
MEP Voss also warned that the EU has fallen behind in the “winner takes most” tech race and should step up its efforts to remain a key international rule-setter.
“This is a very strategic relevant development and AI is the most important part of it, and therefore we can’t afford to fail,” Voss said.
Making more personal data available
In his report, Voss also criticised the current practice under the EU’s data protection (GDPR) regime.
While the AI Act intends to ensure that algorithms used by tech companies live up to high standards – like being non-discriminatory and non-biased – meeting these standards would require making more personal data accessible to companies.
“We can’t succeed if we are not offering or exchanging data and also personal data,” Voss said.
According to the report, this is especially the case in the health sector, where the GDPR’s specific consent obligations hinder the processing of medical data, which could lead to lengthy delays in the discovery of new treatment methods and put a bureaucratic burden on health research.
Voss also stressed that the amount of digital legislation currently in place should be reduced. Instead, the EU should “focus more on the implementation and harmonisation of existing laws,” he said.
According to the report, this is especially true for the GDPR, where differing interpretations across member states led to legal uncertainty and a lack of cooperation, especially in the health sector.
The risk-based approach to regulating AI
However, civil society groups say the current provisions in the AI Act are not sufficient to tackle risks related to AI and algorithms. Human Rights Watch warned in a report that the AI Act is ill-equipped to protect people from algorithms.
“The EU’s proposal does not do enough to protect people from algorithms,” said Amos Toh, a senior researcher on artificial intelligence at HRW.
HRW warned that the regulation would only ban a narrow list of automated systems that are defined as posing a “high risk.”
The current proposal of the AI Act introduces a risk-based approach to AI, where high-risk AI systems are subject to strict obligations.
However, while HRW criticised the list of high-risk systems as too narrow, some member states disagree.
“We must avoid classifying de facto whole sectors as high risk. Risks should be considered alongside benefits and added value that AI offers across sectors,” Estonia’s Kokk told EURACTIV.
[Edited by Luca Bertuzzi/Zoran Radosavljevic]