A discussion on algorithmic accountability and transparency is missing from Europe’s digital economy framework. Citizens need assurances that machines are treating them fairly, writes Liisa Jaakonsaari.
Liisa Jaakonsaari is an MEP for the Social Democratic Party of Finland (S&D), a member of the IMCO Committee and the author of a Pilot Project on algorithmic transparency.
Algorithms are the fundamental, invisible building blocks of our digital societies. However, there is currently no legislation, best practice or guidance on algorithmic accountability or transparency. A dialogue among tech companies, consumers and regulators is urgently needed not only in Europe, but globally, to ensure that algorithms are audited and that citizens’ rights are safeguarded.
The digital landscape is changing rapidly with the rise of powerful computers, machine learning and big data. Intelligent self-learning algorithms, often referred to as machine learning or deep learning, are linked to the development of Artificial Intelligence (AI). AI will transform our society in fundamental ways, revolutionising efficiency in industry and services, but the question of ethics in this transformation remains paramount.
Consumers have an interest in knowing what kind of data is collected about them and how it is fed into decision-making algorithms. At present, consumers have no way of knowing if they have been discriminated against by an automated decision, for example on a bank loan or a job application. Consumers also lack awareness about the role of algorithms in digital societies. Companies, on the other hand, have an interest in safeguarding how these algorithms operate, as they are considered important trade secrets. However, there is no doubt that an asymmetry exists. This needs to be addressed by improving transparency to ensure that any decision-making that significantly affects citizens is not outsourced to machines.
The European Union has taken some action to strengthen consumer rights, but lacks a general framework on algorithmic accountability and transparency. The issue of algorithms was loosely addressed in the Commission’s Platform Communication published in May 2016. The policy document called for “greater transparency” so that users “understand how the information presented to them is filtered, shaped or personalised, especially when this information forms the basis of purchasing decisions or influences their participation in civic or democratic life”. In other words, more efforts are needed to raise consumer awareness about the role of algorithms in digital societies.
The EU has also adopted a new legislative package on data protection, coming into force in 2018, which includes provisions on data-driven decisions based “solely on automated processing” that “significantly affect” citizens. Such processing, sometimes used in online credit applications or e-recruiting practices, is subject to safeguards. EU citizens can voice their concerns over decisions based solely on automated processing, and have the right to “obtain an explanation of the decision reached” and to challenge that decision. These safeguards, however, apply to a relatively narrow segment of algorithmic decision-making, as the definition of “solely automated” can be circumvented.
Despite the EU’s efforts to date, the influence of algorithms goes far beyond internet platforms, online credit applications and e-recruiting practices. Therefore, a more comprehensive framework on algorithmic accountability and transparency is urgently needed.
The tech industry has taken a step forward in self-governance. In late September 2016, tech giants Google, Facebook, Microsoft, Amazon and IBM launched the Partnership on AI to Benefit People and Society. The Partnership’s mission is to “study and formulate best practices on AI technologies, to advance the public’s understanding of AI and to serve as an open platform for discussion and engagement about AI and its influences on people and society.”
This initiative is welcome, but it is not enough that the issue of algorithms, AI and ethics be governed by tech companies themselves, especially when several notable names are missing from the Partnership. Self-regulation by the tech industry is insufficient to guarantee accountability and ethical AI. Audits on algorithms should be performed by an impartial authority.
As the digital revolution gains pace, we must take steps now to build a framework on algorithmic accountability and transparency in Europe to uphold free speech, the free flow of information and fair competition, as well as privacy, trust and democracy in digital societies. This framework should provide legal certainty for companies by addressing algorithmic best practice and accountability, and it should enhance consumer trust in the digital transformation while preserving companies’ ability to innovate. However, the question remains: how can we hold algorithmic decision-making accountable without placing the burden solely on citizens, but rather on those who design these technologies?