Law professor: ‘Autonomous weapons’ notably absent from EU motion on robotics, AI

Autonomous weapons can take the form of flying drones but also underwater and land drones. [Christoph Boecken / Flickr]

The European Parliament is voting today (16 February) on a landmark resolution on civil law, robotics and artificial intelligence. Ahead of the vote, Euractiv.com explored the ethical and legal implications of robotisation with a professor of international and European law.

Thomas Burri is an assistant professor of international and European law at the University of St. Gallen (HSG) in Switzerland. His research has recently focused on the law and ethics of robots and artificial intelligence. He is a member of The Global Initiative for Ethical Considerations in the Design of Autonomous Systems.

Burri spoke to Euractiv’s publisher and editor, Frédéric Simon.

The European Parliament will vote on a resolution on civil law, robotics and artificial intelligence (AI) this week. The resolution is not legally binding but still, it may be a forerunner for future legislation in the field. So, is it not too early to consider legislating?

I definitely think it’s too early for legislation. As you said, this is just a preliminary stage before we come to legislation. And the European Commission has the monopoly when it comes to proposing new legislation.

Why is it too early in your view?

Because we don’t know for sure what we’re dealing with. And this is quite apparent in the Parliament report. When you look for key definitions like the autonomy of robots or what artificial intelligence actually means, the Parliament report is rather vague. The field is so vast and the issues we’re dealing with are horizontal. So we should be careful not to wade into regulation too quickly, not least to avoid stifling innovation.

The Parliament resolution lists a number of examples like autonomous vehicles, drones, care robots, medical robots, etc. As an academic involved in discussions over the ethical and legal aspects of AI, how do you see this developing? Do you see different sets of rules being developed for these different types of machines?

For me there is no question, we shouldn’t adopt overarching regulation. We should address specific questions like autonomous vehicles and medical robots separately. And what bothers me is that the Parliament report puts all of those things in the same pot although they don’t really belong together.

It also often refers to artificial intelligence or robotics, whereas these are two quite separate disciplines. AI is mainly about software, whereas robotics is a lot about embodiment – there is something physical.

Breakthroughs have been happening mainly in AI, but in very limited sectors: you may remember, for instance, the Google AI that beat the best human player in the world at the game of Go. But in robotics, progress happens very gradually; there are no major leaps forward. So great advances in AI don’t necessarily translate into robotics, which is more about engineering.

EU plans first laws on robotics

A European Parliament committee will look Thursday (12 January) at a draft resolution relating to the regulation of robotics. The text could become the basis for the first European legislation on automation and robots. EURACTIV Germany reports.

AI is evolving fast. So where is it being applied today and where do you see it being applied in the coming five or ten years?

What’s clear is that it’s already being applied in autonomous vehicles, although the details aren’t public. But my colleagues tell me we are still quite far from mass production. Moving from a prototype to something that’s available on the market takes at least ten years, or even more.

Then, there is a lot happening in software, like facial recognition and algorithms for web advertisements. That is already happening and is going to continue. And these are issues we should be concerned with much more than regulating robotics and AI in a general way.

Now, there are problems of course. For example, you can make a great deal of money just by designing a successful algorithm. And that is a social problem. AI allows people to make a lot of money. And although it’s an act of genius to create an algorithm, we should also be concerned with the social implications. Do we really want people to devise algorithms and reap all the benefits from whatever these algorithms produce?

And this is partly reflected in the Parliament report, which mentions the impact on labour. This is clearly something we should be more concerned about.

Some have called for a “robot tax” to compensate for job losses due to robotisation. Do you support such a move? Or do you see alternatives to manage the transition?

There are alternatives: the robot tax and the universal basic income are two ways of addressing the problem for instance. But I don’t think it’s a good idea to introduce a tax at the moment. It’s way too early. Robots have been around for a long time, factories have been automated for decades. Are job losses so serious already that we need to tax those who use or buy the robots?

MEPs want 'free cash' to address job-killer robots threat

EU legislators today (12 January) called for a universal basic income to combat the looming risk of job losses caused by the onward march of robots, as well as concerns about European welfare systems.

You’re part of a group of experts working on creating legal and ethical standards for AI and robots. What are the main topics that you are discussing in this group? Are there areas of consensus and others where opinion is still divided?

Yes, I collaborate with more than 100 experts on the legal sub-committee of the IEEE Standards Association, where we have talked a lot about liability.

For lawyers, when machines learn, they become unpredictable. When an artificial intelligence is capable of learning, it becomes difficult to attribute responsibility for certain acts – including to the person who trains the AI. So that’s one aspect.

But again, there are solutions, like insurance or strict liability independent from fault or negligence. We don’t need major changes for this, we could even leave this to the courts. States have done that before, adapting liability laws to reflect technological progress.

The Parliament resolution proposes an obligatory insurance scheme for robots and autonomous machines, similar to what is in place for cars. Is this a reasonable way forward?

It is a possibility, yes. But still, we need to be sure what we’re talking about. For cars, we already have compulsory insurance and it shouldn’t be a problem to include autonomous vehicles. Then we need to ask ourselves what we want to include: physical robots or software, which can also cause damage. There are many open questions in this regard.

‘Robots are leaving the cage’, dazzling EU lawyers and regulators

How should autonomous and self-learning robots be considered from a legal viewpoint? Under plans currently under discussion, they can be regarded as pets or even as “robot persons” in their own right, EURACTIV.com has learned.

The Parliament report also called for the creation of a register for “smart robots” to ensure all have a kind of electronic ID. Again, do you think this is a good idea? It’s probably too early but in the long run, it will become necessary I suppose.

Yes, it will probably become necessary at one point. We already see that with drones – you need to register them in order to identify the owner.

But here again, definitions are important. What is a ‘smart robot’ for instance? I suppose it is another word for an autonomous robot, which is something that still needs to be defined.

There is a process in Geneva discussing autonomous weapon systems. And they are dealing with exactly the same problems. When does a machine become autonomous? Discussions have been going on for almost two years and they haven’t found a definition yet. The discussion seems to be about deciding when humans have meaningful control over a robot. And control is a difficult notion; it depends on context and circumstances. It depends on the tools you use, on the chain of command, on the user, and so on.

There are many different concepts of control, which differ from one discipline to another. Political scientists have a different concept of control than engineers for instance. So here also, we’re only at the beginning of an understanding of what we’re talking about.

Obviously, an autonomous killer weapon has a very different purpose than an autonomous car. So the same rules probably won’t apply.

This is what I meant: autonomous weapons are completely different from autonomous cars. The only reason the draft Parliament resolution doesn’t mention autonomous weapons is that they are not within the Parliament’s competences.

The Parliament doesn’t have the power to address this and neither does the European Union, which can only address so-called dual-use goods, which can be used for both military and civil purposes. So they can’t really say anything about autonomous weapon systems.

Which nowadays means killer drones, I guess.

Even on military matters, there is no consensus on what actually falls under the definition of autonomous weapons. It’s clearly not just flying drones, it can be underwater drones or land drones. It might also be software, we don’t know. Even in this specific field where it is rather urgent to come to a definition, there is very little agreement and the process has just started.

Should robots be given a legal personality? The Parliament resolution suggests considering robots as pets or “robot persons” in their own right.

This is a difficult question. There is in fact a precedent here: companies have a legal personality separate from human beings. So it’s not completely foreign territory. Why not just grant legal personality to a robot? That is possible under the law as it currently stands.

But that would be a trick that should not divert us from the essential question of whether we actually want artificial intelligence to have legal personality. And probably at this stage, the answer is no, because we wouldn’t know what we would actually be giving this legal personality to.

We have to be aware of this: if an artificial intelligence has legal personality, difficult questions arise, for instance whether an AI can influence voting procedures in a democracy. And that AI might also claim a right to free speech. But what then? This is a fundamental right. And companies do have fundamental rights, including the right to free speech under certain circumstances, which is very controversial. So I’m not sure it would be such a good idea to move forward too quickly on this.

And if an artificial intelligence can be considered as a legal entity, then it should also be subject to some kind of death penalty, in cases of wrongdoing?

It’s just a question of whether it’s desirable. It raises many ethical questions and we don’t really have straightforward answers yet. You can switch off a machine but what if it has data? What happens to the data? You have to be careful about transposing to machines some concepts that were invented with humans in mind. Death can only be applied to humans.

MEP: 'My main concern is that humans are not dominated by robots'

Mady Delvaux, a Socialist MEP from Luxembourg, has been tasked with drafting the first regulatory proposal worldwide to address the rise of advanced robotics and artificial intelligence.

With pets, if a dog is considered dangerous and causes serious injury, a tribunal can order it to be put to death. So you could extend the same principle to robots or AI.

I suppose you could. A judge could order a machine to be switched off or destroyed. But here again our existing legal order can probably deal with such cases when they arise, at least in the near future.

In a famous interview in 2014, Stephen Hawking warned humans could be quickly superseded by self-learning machines, to a point where AI could become a threat to the very survival of the human race. Has this come up at all in your discussions on ethics, or is it too far off?

We actually do discuss this, and refer to it as ‘superintelligence’. It’s always there lurking in the background, and actually distorts our discussions because it is very much a philosophical argument.

For the moment, the purpose of superintelligence is mainly to launch discussions on AI, and it succeeded pretty well. Some authors have made money out of it, which is fine with me. But superintelligence for me is not a primary issue we should be focusing on at the moment. The concepts are highly exotic. I’m not saying it will never happen but we are still quite far away from this.

But you’re right, whenever there is a discussion on AI and robotics, some people look ahead 30, 40 or 50 years and argue with that in mind. Others who focus on issues that are five years away tend to discuss autonomous weapons and autonomous vehicles, but don’t care that much about superintelligence.

So I’m not sure this discussion about superintelligence is very helpful for what we’re trying to do now.

Survey shows heightened scepticism towards robots

As robots play a more important role in the EU, moving from manufacturing to healthcare, Europeans are becoming more suspicious of the technology, a Eurobarometer survey shows.

Can artificial intelligence be designed in such a way that it doesn’t lie or deceive human beings? Is that something you discuss in your group?

Yes, absolutely. And here again, we are at the beginning of our discussions. But what might seem rather simple on the face of it very often turns out to be more complicated. When we say we don’t want machines to lie and we always want to know the truth, is that really the case?

If you look at humans, telling the truth is not always the best idea. A simple example we discussed in our group is personal assistants. A secretary may tell a caller that the boss is in a meeting when, in reality, the boss simply doesn’t want to take the call. In such cases, it may be a better idea to lie and say the boss is in a meeting.

So when we say we want machines to always be transparent, we need to go a step further and ask ourselves whether we actually want this. Telling the truth straight away is not always the best idea. It’s not that simple; you can’t just lay down an ethical principle that machines should always tell the truth.

Because here too, it very much depends on which kind of artificial intelligence you’re talking about. A chat bot, for instance, should probably reveal its identity; otherwise humans might be tricked into behaviour which is undesirable.

So, it all depends on the usage again.  The same rules wouldn’t apply to a chat bot, an autonomous vehicle or an autonomous weapon system.

Absolutely. And it also depends on the kind of person who is interacting with the artificial intelligence. Dealing with a child would be completely different from dealing with an adult. So vulnerable users also have to be taken into account in interactions with artificial intelligence.

Isaac Asimov famously wrote down the three laws of robotics in 1942, the first one of which stipulates that a robot should never harm a human being. Are these rules sufficient to ensure humans are not dominated by robots?

Asimov’s novels all demonstrate how difficult it is to apply these basic principles in practice. And this is also true for many human rights principles. Everybody for example can agree on the need to respect human dignity. But consider a laser game where people play at killing each other – is this compatible with human dignity? Some people might think yes, others not. It’s very hard to decide.

So clearly, Asimov’s laws are not something to be applied as such. And research has shown they would be very difficult to apply technically anyway.

So you’re saying there are situations where a robot would be well advised to kill a human being?

[Laughs] No, this is absolutely not what I want to be quoted on. Seriously – and there is evidence for this from the discussions in Geneva on autonomous weapons – pretty much everybody agrees that machines should not kill humans. And I think that is probably the only universal agreement we have at this point in time.

In fact, machines do already kill humans, but under human supervision.

Exactly, and that is the key distinction. But then you have to consider when this happens: when does a machine actually kill a human being autonomously? Is it when the commander is physically distant, or when the commander drinks a cup of coffee while the machine is in operation and shoots? How much oversight is needed, how much specific information is needed? Can a commander who operates five, a hundred or a thousand drones at the same time have meaningful control? These are the questions being asked, and the answers are far from simple.

So, that’s also why this process in Geneva is likely to take longer than just one or two years. In the past, the question came up of whether to prohibit lasers that can permanently blind soldiers on the battlefield. A group of experts was established and it took them two years to come up with a recommendation to ban those weapons. So here it’s likely to take much longer.

[Editor’s note: In fact, there are specific cases where machines will have to make hard decisions on whether to kill human beings. For example, see MIT’s Moral Machine, where an autonomous car has a sudden brake failure and has to choose the lesser of two evils.]