Commission reveals details on future EU robotics policy 

Robotics technologies in the EU could come under the scope of new rules as part of a series of efforts to ensure the safety of next-generation technologies, it has emerged.

The European Commission aims to present a revision of the machinery directive in the second quarter of this year, and it has recently been revealed that there are plans to tackle issues related to ‘human-robot’ collaboration, as well as to improve the transparency of artificial intelligence algorithms in robots.

The Commission will also look at the radio equipment directive, which covers communications transmitted by devices connected to the Internet of Things, in an attempt to bolster privacy protections.

Speaking on a recent Commission panel, Gwenole Cozigou, Director for Sustainable Industry and Mobility at DG GROW, offered insight into what the revised machinery directive could have in store.

The EU executive would first seek to “address the question of robot safety, in particular, in cases of human-machine collaboration,” as well as to ensure that “human oversight” is guaranteed in the development of software for robotics devices.

“We want to also address the issues raised by algorithms in AI robots so that there is more transparency,” Cozigou added.

Moreover, while the EU’s radio equipment directive lays down requirements that equipment manufacturers must comply with, greater attention may need to be paid to the automatic transmission of data by robotics devices, particularly those used by children.

“As part of the radio equipment directive, which regulates everything that a device emits or receives, there’s a particular relevance for robots, when they communicate,” Cozigou said.

In terms of data protection, “we’ve got certain cases where we’ve seen the need, in particular, for children to be protected, as well as to ensure greater protection from fraud,” he said.

In a further attempt to clarify the regulatory environment for the operation of robots in Europe, Cozigou also noted that the Commission is looking at adapting the ‘product liability directive’ to the new challenges presented by next-generation robotics.

Clarity on Artificial Intelligence rules in EU

Meanwhile, in terms of artificial intelligence, which is crucial to the functionality of robots, an indicative timetable for policy initiatives published by the Commission notes that a ‘follow-up’ to a 2020 White Paper is set to be presented on 21 April, a delay of around a month from what had previously been suggested. The White Paper laid the foundations for regulating certain high-risk AI applications in the EU.

The legislative proposal in April is expected to clarify liability rules for AI. However, there are also wider issues at play, including a common definition for this type of technology, according to recent comments from DG Connect’s Roberto Viola.

“First of all, we would come up with a definition in a horizontal regulation on what is the concept of artificial intelligence,” he said. “But of course, it covers also machines which have algorithms.”

In addition, Viola said that, following on from last year’s White Paper, the EU would further specify which uses of AI the Commission would consider ‘high-risk’ and which ‘low-risk’.

As part of the executive’s moves last year, ‘high-risk’ technologies were earmarked for future oversight, including those in ‘critical sectors’ and those deemed to be of ‘critical use.’

Those falling under the critical sectors remit include healthcare, transport, the police, recruitment, and the legal system, while technologies of critical use include applications that carry a risk of death, damage or injury, or that have legal ramifications.

However, the EU executive held back from introducing strict safeguards against facial recognition technologies. While an earlier leaked version of the paper had floated the idea of a moratorium on facial recognition software, the Commission instead opted to “launch an EU-wide debate on the use of remote biometric identification,” of which facial recognition technologies are a part.

[Edited by Benjamin Fox]
