Learning to trust the killer robots

An Israeli soldier mans a military robot during the opening of 'Our IDF' exhibition in Holon, Israel, 20 September 2018. [EPA-EFE/ABIR SULTAN]

This article is part of our special report Global futures: Re-imagining tomorrow’s technologies.

It’s been more than 75 years since the American science fiction writer Isaac Asimov published his seminal ‘Three Laws of Robotics’, a concise ethical framework that has come to influence the principles by which Artificial Intelligence (AI) is developed worldwide.

The laws are as follows:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

The three simple principles are regarded by many as Asimov’s magnum opus, not for their literary prestige but rather for their impact on the world of robotics, which is today advancing in all spheres of life, from medicine and agriculture to transport and defence.

So far, the development of artificial intelligence has drawn on those guiding principles, at MIT and beyond. In 2007, they served as a foundation for South Korea’s Robot Ethics Charter.

But how far can we go while remaining confident that the machines will serve the interests of humankind? And how does this square with the use of robots for military purposes?

Trusted autonomy

The question of ‘trust’ arises because the employment of artificial intelligence in defence systems clearly has the potential to contravene Asimov’s first law: that a robot must never allow a human being to come to harm.

The question is set to be discussed in detail at the upcoming Next 100 Symposium in Prague in November, which will bring together leading political figures, scientists, economists and innovators to debate how exactly autonomous systems can earn the full trust of humankind.


‘Trusted autonomy’ is the expression applied to this field of AI ethics. In the arena of defence and security, addressing the issue remains a priority, particularly in a global context in which lethal autonomous weapons systems (LAWS) are becoming ever more common.

In layman’s terms, LAWS are AI-driven robots designed for warfare.

Russia continues to develop its autonomous military arsenal, including AI-guided missiles; Israel is investing in insect-like robot weaponry; and the South China Morning Post reported earlier this year that China is developing autonomous submarines designed to launch kamikaze-style attacks.

“Morally repugnant”

Trust in lethal autonomous weapons systems is currently low. In September, the European Parliament adopted a resolution calling for an international ban on such ‘killer robots’, stressing that “machines cannot make human-like decisions” and that humans should remain accountable for decisions taken during the course of war.

Parliament’s decision followed the United Nations’ failure to reach a consensus on a blanket ban on lethal autonomous weapons systems. A bloc of states led by the US and Russia, and including South Korea and Israel, stood against any potential prohibition.


Furthermore, in June, the British government refused to consider revising its definition of LAWS to align more closely with global standards.

Britain’s reluctance to cooperate fully with the international community was echoed by France and Germany, both of which have previously stood against a complete ban on killer robots.

Talks on LAWS are expected to continue under the UN’s mandate next year. However, UN Secretary-General António Guterres has been unambiguous in his approach to killer robots, telling the General Assembly at the end of September that “the prospect of machines with the discretion and power to take human life is morally repugnant.”

Speaking ahead of the European Parliament’s vote on its stance towards LAWS, EU foreign affairs chief Federica Mogherini came out more mildly than Guterres, saying that the use of killer robots should be regulated, but that such regulation should not stifle innovation in AI.

“We are entering a world where drones could fire – and could kill – with no need for a man to pull the trigger. Artificial intelligence could take decisions on life and death, with no direct control from a human being,” she said.

“Scientists and researchers should and must be free to do their job knowing that their discoveries will not be used to harm innocent people.”

In her speech, Mogherini also made clear that any potential international agreement would not mark Europe out as ‘afraid’ of technology. And that is the crux of the issue: preserving a proportionate sense of ‘trust’.

Imposing too severe a regulatory framework would reinforce the perception that the EU cowers in the face of innovation and is content to lag behind.


However, with the ever-accelerating development of LAWS, Asimov’s original principles are at risk of falling by the wayside unless ethical frameworks are quickly put in place and respected, a problem Asimov anticipated during his own lifetime:

“The saddest aspect of life right now is that science gathers knowledge faster than society gathers wisdom,” he said.
