The European Commission’s high-level group on Artificial Intelligence (AI) published its highly anticipated report on AI and ethics on Tuesday (18 December), drawing attention to issues in areas such as identification, citizen scoring, and killer robots.
The report sets out how developers and users can make sure AI respects fundamental rights, applicable regulation and ethical values. It has been put together by 52 experts from academia, business and civil society, who drew attention to a number of “critical concerns” for the future of AI.
One of these is the field of ‘normative citizen scoring’, defined by the authors as general assessments of “moral personality” or “ethical integrity” of people by third parties.
The report states that citizen scoring systems endanger freedom and the autonomy of citizens when carried out “on a large scale by public authorities.”
However, the report stops short of saying that the practice should be outlawed. Rather, it highlights instances in which “citizen scoring is applied in a limited social domain” and says these should be subject to improved levels of transparency.
Citizen scoring on a mass, state-controlled level is currently being put into practice in China as part of its social credit system.
The system is currently in a trial phase; by 2020, the Chinese government hopes to roll it out en masse, systematising the assessment of citizens’ economic and social credit based on a number of conditions determined by the Chinese authorities.
Covert Artificial Intelligence
Elsewhere in the report, on the subject of ‘Covert AI’ systems, the document notes that AI developers should “ensure that humans are made aware of – or able to request and validate the fact that – they interact with an AI identity.”
As androids in general become ever more ‘human-like’, the EU is seeking to ensure that robots can never be mistaken for people, thereby guaranteeing a clear moral and ethical divide between human and robot principles, behaviours and values.
The inclusion of such ‘hyper-real’ androids in society “might change our perception of humans and humanity,” the report states. “It should be borne in mind that the confusion between humans and machines has multiple consequences such as attachment, influence, or reduction of the value of being human. The development of humanoid and android robots should therefore undergo careful ethical assessment.”
A further area of worry for the high-level group is the use of AI in identification technologies, such as facial-recognition software.
Such technology is currently being trialled by police forces in the UK, which are seeking to use the software to scan the faces of Christmas shoppers in London in the hope of easily identifying wanted criminals.
The high-level group recognises that the ethical issues raised by this form of technology centre in particular on the lack of consent given in such instances.
Moreover, the subject of lethal autonomous weapons systems (LAWS), more commonly known as ‘killer robots’, also comes under the spotlight in the report.
Such systems are able to operate without meaningful human control. A self-firing, missile-tracking machine would be one example of a lethal autonomous weapon system.
In September, the European Parliament adopted a resolution calling for an international ban on such ‘killer robots’, stressing that “machines cannot make human-like decisions” and that it is humankind that should remain accountable for decisions taken during the course of war.
Parliament’s decision followed failure in the United Nations to reach a consensus on a blanket ban on Lethal Autonomous Weapons Systems. A bloc of states headed by the US and Russia and including South Korea and Israel stood against any potential prohibition.
The perspective of the high-level group aligns with the Parliament’s September motion, seeking to address the ethical and legal implications related to accountability, human control and human rights law in the development of LAWS.
More generally, the conclusions from the Commission’s ethical guidelines advocate a human-centric approach to the development of AI, respecting fundamental rights and societal values.
The overarching long-term objective is to foster AI that is trustworthy.
“AI can bring major benefits to our societies, from helping diagnose and cure cancers to reducing energy consumption,” stated Vice-President for the Digital Single Market Andrus Ansip on Tuesday (18 December).
“But for people to accept and use AI-based systems, they need to trust them, know that their privacy is respected, that decisions are not biased.”
The papers currently on the table represent a draft of the Commission’s ethical guidelines, and the Commission is seeking feedback on a number of areas of the report, with particular attention paid to the concerns above.
The final edition of the guidelines is set to be published in March 2019.