MEP: Public has a ‘right to know’ about Commission’s lie detector tech

The European Commission is being urged to publish reports on the trials of iBorderCtrl, an artificial intelligence ‘lie detector’ technology bankrolled by Horizon 2020, the EU’s long-term research and development funding programme.

Green MEP Patrick Breyer, who is currently embroiled in a legal battle with the Commission’s Research Executive Agency over its refusal to disclose ethical assessments of the iBorderCtrl system, said on Tuesday (31 March) that the executive is also refusing to publish information on the trials conducted for the technology.

Lie detector tech

The iBorderCtrl system has been tested at various borders across the EU and uses artificial intelligence technologies to analyse micro-expressions. One of its functions is to assess whether a traveller is lying when answering a series of questions, in what has been termed ‘deception detection.’

As part of the trials of the technology, Breyer had sought information on this component of iBorderCtrl and the proportion of ‘false positives’ produced by the system, following an investigation by The Intercept, which found that the technology incorrectly identified four out of sixteen honest answers as false.
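For context, the figure reported by The Intercept translates into a simple error rate: four of sixteen honest answers flagged as deceptive is a 25% false positive rate. The short Python sketch below illustrates the arithmetic; the function name and printed output are illustrative and not drawn from iBorderCtrl itself.

def false_positive_rate(false_positives, truthful_total):
    # Share of truthful answers wrongly flagged as deceptive.
    return false_positives / truthful_total

# Figures from The Intercept's test: 4 of 16 honest answers were marked as lies.
print(f"{false_positive_rate(4, 16):.0%}")  # prints '25%'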

Breyer had also pressed the Commission on whether the technology discriminates against certain groups of people, including people of colour, women, the elderly, children, and people with disabilities.

2020 reports

In response, the Commission’s Home Affairs Commissioner, Ylva Johansson, said that while the deception detection element “was one of the several components of potential systems studied,” the project would only make its research reports publicly available later in 2020.

However, in what was perhaps a veiled reference to the content of the trial reports, Johansson suggested that the technology may never actually be rolled out in the EU, even though the project has received €4.5 million in EU funds and has been tested at border crossings in Hungary, Latvia and Greece.

“iBorderCtrl was a research project and did not envisage the piloting or deployment of an actually working system,” Johansson said.

“A research project can be used to explore the possible uses of new technologies, but a careful assessment of outputs will always be needed before operationalising any result and this will also be based on the acceptability and impact of the explored technology in and on society.”

For his part, Breyer was not satisfied with what he described as the Commission’s withholding of the information.

“The Commission has all reports on the outcome of the trials, but the Commissioner chooses to withhold information on the accuracy and bias of the dubious ‘video lie detector’ technology from Parliament,” he told EURACTIV on Tuesday (31 March).

“Is it because this pseudoscientific algorithm has proven to be an utter failure and useless at borders? With millions of taxpayers’ money spent on this voodoo AI and the developer now selling it to law firms and insurance companies, the public has a right to know the truth,” he added.

AI & Ethics

More broadly, the Commission has sought to position itself as a proponent of the ethical use of Artificial Intelligence technologies.

In its February 2020 White Paper on Artificial Intelligence, the Commission held back from imposing a temporary ban on facial recognition technologies. A leaked earlier draft of the document had floated the idea of putting forward a moratorium on facial recognition software.

In this vein, the executive plans to “launch an EU-wide debate on the use of remote biometric identification,” of which facial recognition technologies are a part.

Meanwhile, a series of ‘high-risk’ AI applications has been earmarked by the Commission for future oversight, namely those used in ‘critical sectors’ and those deemed to be of ‘critical use.’

Critical sectors include healthcare, transport, policing, recruitment, and the legal system, while critical uses cover applications that carry a risk of death, damage or injury, or that have legal ramifications. Sanctions could be imposed should certain technologies fail to meet the resulting requirements.

“High-risk AI technologies must be tested and certified before they reach the market,” Commission President Ursula von der Leyen said on publication of the White Paper. The Commission will establish an ‘objective, prior conformity assessment’ to ensure that such systems meet standards of accuracy and trust.

[Edited by Zoran Radosavljevic]
