How should autonomous, self-learning robots be treated from a legal viewpoint? Under plans currently under discussion, they could be regarded as pets or even as “robot persons” in their own right, euractiv.com has learned.
The European Commission wants to usher in the era of artificial intelligence and robotisation in factories with a “digitising industry” strategy published last April.
But how to treat robots from a legal point of view is a difficult nut to crack for policymakers. And the answer will have far-reaching implications for the robot-manufacturing industry, for insurance companies covering potential damage caused by robots, and for workers operating alongside machines on factory floors.
Dirk Staudenmayer, head of unit for contract law at the European Commission’s justice department, said legal clarification on liability – who is responsible in case of damage – was “very important” for the EU’s Digital Single Market initiative and the Internet of Things where objects are connected to each other and share information automatically.
“Robots are leaving the cage now in industrial production,” Staudenmayer told a EURACTIV event held last week (28 September).
“And that means collaborative robots – or co-bots – who no longer work on their own but work together with humans. And then accidents may or may not happen,” he told participants at the event, which was organised with the support of VDMA, the German Engineering Association.
Trade unionists could not agree more and have also flagged the many unforeseen risks that robots can generate for humans. Dr. Laurent Zibell, a policy advisor at the industriAll European trade union, warned that workers were now coming into direct contact with robots and needed to be covered in case something goes wrong.
“There are accidents unfortunately, and we need to make sure that compensation is fast and straightforward,” he told the EURACTIV event. One idea being floated in a European Parliament report would be to have a mandatory insurance system for these machines, he pointed out.
‘A robot person’?
Kaja Kallas, an Estonian MEP from the liberal ALDE group, is a member of a working group on robots and artificial intelligence in the European Parliament. She said legislators were currently exploring three different tracks to figure out what legal personality should be assigned to robots.
“One idea on the table is that you have a legal person, a physical person and then a robot person,” she said. “And then, of course, there should be a fund related to this so if there is damage, you can claim damages from that.”
Kallas was referring to a recent Parliament report, which said:
“Robots’ autonomy raises the question of their nature in the light of the existing legal categories – of whether they should be regarded as natural persons, legal persons, animals or objects – or whether a new category should be created, with its own specific features and implications as regards the attribution of rights and duties, including liability for damage.”
For Staudenmayer, the European Commission’s main objective in the liability debate is more down to earth – ensuring legal issues do not thwart EU industries and job creation in the digital economy.
“We want to ensure that the IoT works. We see robots as a really growing market, they have lots of advantages for them. So this should be promoted,” he told the EURACTIV event. “The objective here is to provide a framework where industry can invest in the Internet of Things, and invest in robots, and can be clear on who is liable, when and under which conditions,” he said.
But he said the Commission had chosen not to rush things for now.
“Whether that means living with the existing framework, slightly adapt the existing framework, or more – is not yet clear,” he admitted.
What is clear, however, is that liability regimes at national and European levels will have to be reconsidered in light of the challenges posed by autonomous systems and self-learning machines. Taking self-driving cars as an example, Staudenmayer said it was tricky to establish a causal link between a fault or defect and the damage caused in an accident, because so many players are involved.
“If an accident happens, who is liable? Is it the producer, is it the seller, is it the car owner, is it the structure that sends the data to the car or is it the software?” he asked.
If self-driving cars are already creating headaches for policymakers, then artificial intelligence and self-learning systems take them to an entirely new level.
“Can you talk about fault at all with fully autonomous systems like robots who learn by themselves and, based on this learning, act autonomously? Whose fault is it there? Is it the fault of the owner of the robot or those who have programmed it and allowed the robot to learn by itself?”
Staudenmayer said the Commission preferred not taking a stance on this issue at this stage, but was in the process of reviewing the EU’s product liability directive to see whether it needs to be adapted for this new environment.
Are robots like pets?
When it comes to the legal personality of robots, one idea envisaged by policymakers is to look at the closest precedent lawyers have at hand – pets.
“Animals are also autonomous. We try to teach them, and they behave more or less as we tell them,” Zibell remarked. “Nevertheless, you have dogs that kill people, you have cows that go wild and cause accidents, and we’ve been able to tackle this for quite a while actually.”
Staudenmayer acknowledged that this was an option the European Commission was looking into. Many countries already regulate animals on the principle that pet owners create a risk for others and are therefore liable for having created that risk.
“So there are insurances for pets and for the risks created by pets and animals in general. And there are some parallels,” the official said.
In fact, Staudenmayer revealed that three broad options were currently being considered in the robot liability debate.
- The first, he said, is to argue for a modified product liability regime and adapt it to the new ecosystem of the IoT.
- A second option is to consider robots like animals, and consider that whoever has created a risk is liable for running or opening that risk.
- The third option is to look at the party which is best placed to manage or avoid the risk and make that party liable.
But he was also quick to highlight that these were just options under consideration, insisting that the Commission had not made up its mind yet.
“These are trends which came out of our discussions, we’re not expressing any preference because we’re at an early stage of the debate. But we clearly have to think more about it.”
Holger Kunze, director of VDMA’s European office, agreed that more reflection was needed, but urged policymakers to tread cautiously. “You mentioned the robots getting out of the cage. Under the machinery directive, there are standards being developed also covering that. And from a liability perspective, it can be dealt with,” Kunze remarked.
However, for autonomous and self-learning systems, it’s a different story, he warned. “This is a very sensitive area for innovation and investment. So you have to get it right otherwise it can have a counter-productive effect.”
“So, yes, I think we have to discuss that, but take your time,” Kunze said.