Content moderation policies continue to face core dilemmas

A new report by the Danish think tank Justitia calls for embedding content moderation in an international human rights framework to ensure common global standards. Yet critics warn that this voluntary approach is too weak to tackle the ‘infodemic’.

In the report, Justitia argues that the legislative measures taken in Germany, Denmark and Austria to counter hate speech would lead to a “regulatory race to the bottom” and result in widespread over-blocking – the removal of legitimate and harmless content.

Instead, the report argues, companies should base their content moderation practices on the International Covenant on Civil and Political Rights, which establishes a framework for restricting freedom of expression in cases such as hate speech or misinformation.

“If those big platforms were to agree to a voluntary pledge, where they adopted a human rights standard for disinformation and hate speech that would go some way towards creating a more transparent and consistent approach to content moderation,” Jacob Mchangama, founder and executive director of Justitia, told EURACTIV.

To ensure compliance, Justitia recommends in the report that major platforms sign a free speech framework agreement administered by the Office of the UN High Commissioner for Human Rights (OHCHR).

Voluntary or regulatory approach?

This human rights approach to content moderation would consist of a voluntary pledge by online platforms to tackle disinformation and online hate speech under the auspices of the OHCHR, ensuring that content moderation lives up to international standards rather than national legislation.

The EU has already positioned itself as a global standard-setter: its Code of Practice on Disinformation and its Code of Conduct on countering illegal hate speech online have been voluntarily adopted by most of the biggest platforms.

However, recent leaks by former Facebook employee Frances Haugen have revealed that major online platforms still have shortcomings in their content moderation systems and are not living up to their promises to fight harmful content.

Commission pushes for 'timely' update of disinformation code of practice

The review process for the Code of Practice on Disinformation has gained eight new potential signatories including businesses and civil society groups, but the Commission worries over the slow pace of the process. 

Serge Abiteboul, a member of ARCEP, the independent French agency in charge of regulating telecommunications, argued at the Paris Peace Forum last week that this voluntary approach has largely failed and that the practices of online platforms must be placed under the strict supervision of regulators.

The Digital Services Act (DSA), Europe’s flagship legislation to tackle the spread of illegal content and make digital giants more accountable in the digital sphere, attempts to combine a strict liability regime for online platforms with voluntary approaches.

Whistleblower Haugen told EU lawmakers in early November that the proposed legislation has the potential to become a “global gold standard” in content moderation.

While the proposed DSA imposes heavy fines on platforms that fail to take action to minimise the risk of harmful content, the EU is also working on integrating the code of practice, currently under review by the Commission, into the DSA framework.

By signing and complying with the voluntary code of practice, online platforms can ensure they live up to the high standards for content moderation outlined in the DSA, creating new incentives for companies to adopt and implement the code.

Facebook whistleblower asks European Parliament to get tough with DSA

Former Facebook employee Frances Haugen called on EU lawmakers to “set a gold standard” and take a tough stance in regulating big tech and safeguarding democracy during her testimony before the Parliament on Monday (8 November).

Over-blocking

The recent surge in the spread of online disinformation and hate speech has made the need for content moderation even more pressing.

A study by the EU has shown that hate speech is five to 15 times more common on platforms that allow violent right-wing extremist content, with Jews and non-white people being the most frequent targets of online hate speech.

However, the report by Justitia says that over-removal of content also poses a problem. According to the study, only 1.1% of deleted online content actually violated the Danish Criminal Code’s provisions on hate speech.

The Internet Commission, an NGO that promotes ethical business practices to counter hate speech, says the fast-changing dynamics of the digital industry require a flexible approach.

“Hard enforcement requirements for internet companies – well-intentioned measures to curb harmful content – can misfire, leading to under-reporting and over-blocking of content, and a failure to anticipate new types of harm,” Patrick Grady, project lead at the Internet Commission, told EURACTIV.

“To effectively change behaviour, and to future-proof regulation, the starting point is the development of global standards,” he said.

[Edited by Luca Bertuzzi/Zoran Radosavljevic]
