EU must ‘proactively’ tackle AI discrimination, Jourová says

The EU's Vice-President for Values and Transparency Věra Jourová. [EPA-EFE/STEPHANIE LECOCQ]

The EU should not ‘copy and paste’ everyday racial discrimination and bias into algorithms in artificial intelligence, the EU’s Vice-President for Values and Transparency Věra Jourová said on Friday (18 September).

Speaking on the publication of the Commission’s action plan against racism, Jourová and Commissioner for Equality Helena Dalli hit out at the potentially nefarious uses of technology to oppress ethnic minority groups.

“We need to pay very strong attention and vigilance on not creating any kind of new kind of bias or technological discrimination,” Jourová said. “In other words, I always say that we have to proactively act against copy-pasting the imperfections and unfairness of the real world into the AI world.”

Jourová also referenced the Commission’s White Paper on Artificial Intelligence, published in February this year, which states that further EU regulatory measures could be introduced to avoid biases in artificial intelligence that “could lead to prohibited discrimination.”

Friday’s action plan, meanwhile, goes into more detail on how algorithms can be used to perpetuate racial biases in society.

“The use of algorithms can perpetuate or even stimulate racial bias if data to train algorithms does not reflect the diversity of EU society,” the document said.

“As an example, studies have demonstrated that AI-based facial recognition algorithms can exhibit high misclassification rates when used on some demographic groups, such as women and people with a minority racial or ethnic background. This can lead to biased results and ultimately to discrimination,” it added.

Facial recognition has been singled out as one technology that could exacerbate racial biases in society.

In 2018, a study by researchers at the Massachusetts Institute of Technology found that gender-recognition artificial intelligence software produced by IBM, Microsoft and the Chinese company Megvii could correctly identify an individual’s gender 99% of the time – but only for white men, leaving women and people from ethnic minorities at a higher risk of misidentification.

Since then, both Microsoft and IBM have pulled back from facial recognition, citing fears that the technology could amplify racial biases already prevalent in society.

While the Commission’s stance on the future regulation of facial recognition technologies has been vague – leaked drafts of its AI strategy previously floated the notion of a moratorium on the software – it does appear to have adopted a more distinct approach with regard to algorithms used for prejudicial ends.

In June this year, the European Commission’s Executive Vice-President for digital policy, Margrethe Vestager, said that certain AI applications used in the field of predictive policing are ‘not acceptable’ in the EU.

“If properly developed and used, [artificial intelligence] can work miracles, both for our economy and for our society,” Vestager said. “But artificial intelligence can also do harm,” she warned, highlighting that some applications can lead to discrimination, amplifying prejudices and biases in society.

“Immigrants and people belonging to certain ethnic groups might be targeted by predictive policing techniques that direct all the attention of law enforcement to them. This is not acceptable.”

In the policy space, a public consultation on the Commission’s approach to regulating AI concluded in June, and any tougher stance on the use of such technologies will be considered as part of the follow-up to the Commission’s AI White Paper, slated for early 2021.

[Edited by Zoran Radosavljevic]
