The EU is right to refuse legal personality for Artificial Intelligence

DISCLAIMER: All opinions in this column reflect the views of the author(s), not of EURACTIV.COM Ltd.

The European Union and its institutions lack the power to determine who is a “person,” argues Thomas Burri. [M U / Flickr]

The European Commission’s recent outline of an artificial intelligence strategy does not give in to the European Parliament’s calls to grant personhood to AI. The Commission is right in this, though not for the reasons mentioned in a recent open letter published by experts, writes Thomas Burri.

Thomas Burri is Assistant Professor of International Law and European Law at the University of St. Gallen in Switzerland.

In a resolution of 2017, the European Parliament urged the European Commission to propose what it called “electronic personality” for sophisticated autonomous robots.

This move by the European Parliament prompted a number of experts to publish an open letter in April 2018 calling upon the Commission to ignore the Parliament’s move and reject “electronic personality”. According to this letter, it would be inappropriate, ideological, nonsensical and non-pragmatic to introduce such a legal status.

The European Commission has now outlined its future strategy to address artificial intelligence. In this outline, the capacity of artificial intelligence to bear rights and duties – “electronic personality” in the Parliament’s parlance – goes unmentioned.

The Commission was right not to take up the Parliament’s idea of creating personhood for artificial intelligence and robots. There is some truth to the principal argument in the open letter, namely that such personhood is not necessary to address the liability concerns to which artificial intelligence may give rise. The law has no shortage of mechanisms for ascribing liability in situations of risk and uncertainty.

However, there are two further, more substantive reasons why the Commission should not address personhood for artificial intelligence.

First, the European Union and its institutions, including the Commission and the Parliament, lack the power, as a matter of principle, to determine who is a “person”. This power resides with the Member States and encompasses the definition of the term “person” in all respects.

Under EU law, it is up to each Member State to determine who is a natural person, a power curbed only by international human rights law. Similarly, national law determines who acquires nationality and under which circumstances, while EU citizenship merely follows automatically from that determination – Cyprus’s controversial citizenship programme is a case in point.

National law likewise determines when an entity, such as a company or a foundation, becomes a legal person; a legal person created on the basis of national law can then rely upon EU law, notably the fundamental market freedom of establishment.

Indeed, the Court of Justice of the EU has carefully laid down limits to these powers of the Member States in long lines of authority dating back to the 1980s, but the EU has not, as a result of that case law, gained the power to determine who is a legal person. One could therefore fall back only on the EU’s sole power in the domain of personhood, namely the power to establish EU agencies with legal personality, and argue that such agencies are akin to artificially intelligent persons – which would obviously be a stretch. In short, it would have been unlawful under the EU Treaties for the Commission to propose the creation of personhood for artificial intelligence.

Yet there is a second, even more urgent reason why legislatures should refrain, as a matter of principle, from creating special personhood for artificial intelligence, a reason the open letter also failed to touch upon. In fact, artificially intelligent persons are already here, and one should not add to the problems they already cause by establishing an additional “electronic personality”.

Research has shown that some existing national laws are flexible enough to enable the attribution of personhood to artificial intelligence. Shawn Bayern, a US scholar at Florida State University, was the first to demonstrate how current US company law can be used to establish a legal person, i.e. a company, that is wholly and solely controlled by an artificial intelligence, with the result that artificial intelligence gains legal personality on the basis of the law as it presently stands.

While US law may in this regard effectively spill over to Europe, other researchers have demonstrated that some national laws of the Member States of the EU offer similar options, albeit within narrower confines.

These options under national law – and it cannot be stressed enough that they are real options based on existing law, not thought experiments relying on uncertain future legislation – give rise to a plethora of problems, including the risk of abuse for criminal purposes, such as money laundering or tax fraud, and the basic question of whether the law is capable of preventing the creation of such artificially intelligent persons in the first place.

Legislatures and experts should address these real options and risks and, more generally, ponder measures to counter the creative use of national law. Even though the EU cannot determine who is a “person”, it would arguably have the power to counter abusive practices, at least to the extent that they impact the internal market.

According to its strategy, the European Commission is currently reviewing EU legislation for liability risks arising from artificial intelligence. It should extend the scope of this review beyond liability and factor in artificially intelligent persons established on the basis of national law.
