Progress stalls on Commission’s online hate speech efforts

Věra Jourová, Vice-President for values and transparency, speaks at the 2019 evaluation of the Code's progress. [Shutterstock / Alexandros Michailidis]

Progress on countering online hate speech is declining in Europe, with the lowest figures in a number of years reported in the latest evaluation of the Code of Conduct on Countering Illegal Hate Speech Online.

Within 24 hours of being notified of hate speech, IT companies reviewed 81% of the alerts, down from 90% last year. The European Commission reported on Thursday (7 October) that, on average, 62% of flagged content was removed, a figure lower than those recorded in 2019 and 2020.

The Code was adopted in 2016 in agreement with Facebook, Microsoft, Twitter, and YouTube, and has since gained a number of other platform signatories, such as Instagram, TikTok, and, as of June this year, LinkedIn.  

Asha Allen, advocacy director at the Centre for Democracy & Technology (CDT), told EURACTIV that “this evaluation sheds light on persistent concerns regarding the European Commission’s approach to tackling issues such as hate speech, disinformation, and online gender-based violence.”

The results  

The evaluation shows a slowing of progress on a number of fronts compared with last year’s figures. In addition to the falling rates at which content was addressed and removed, the percentage of notifications on which IT companies gave feedback also declined from 67% in 2020 to 60% this year. 

Facebook received the most notifications but ranked third among the Code’s signatories in terms of removals. Snapchat, Dailymotion, and Microsoft received no notifications during the monitoring period, while Jeuxvideo.com, which received 30, removed 100% of the flagged content.

The rate at which content was removed varied depending on its nature and severity, though declines were again visible throughout different categories.

Around 69% of material calling for murder or violence against specific groups was taken down, compared with 83% last year, while 55% of content containing defamatory words or images aimed at a particular group was removed, down from 57.8% in 2020.

The evaluation also found that hate speech relating to sexual orientation was the most commonly reported type of content, followed by xenophobia and anti-gypsyism. 

“The results show that IT companies cannot be complacent: just because the results were very good in the last years, they cannot take their task less seriously. They have to address any downward trend without delay”, said Didier Reynders, the commissioner for justice.

Enforcement issues

The Code is a self-regulatory instrument, and its implementation is tracked through monitoring undertaken by a number of organisations based throughout the EU. Similar voluntary instruments, such as the Commission’s Code of Practice on Disinformation, have drawn criticism for their lack of enforcement mechanisms and their reliance on signatories for compliance.

Despite an ongoing review process, the Commission worries that larger legislative efforts, such as the proposed Digital Services Act, could distract platforms and overshadow their compliance with the voluntary commitments they have made, EURACTIV recently learned.


“[A] gentleman’s agreement alone will not suffice here”, said Věra Jourová, the Commission’s vice-president for values and transparency. “The Digital Services Act will provide strong regulatory tools to fight against illegal hate speech online.”

The DSA, CDT’s Allen told EURACTIV, presents an opportunity for improvement: “Rather than a reliance on ‘voluntary’ codes that push content regulation outside of the bounds of traditional due process protections, we need robust transparency and accountability mechanisms on both companies and governments over how our online expression is handled.”

“We need regulatory clarity and a due diligence framework focused on the human rights impacts of companies’ products and services, rather than solely on corporate compliance with government-ordered takedowns,” she added.

[Edited by Luca Bertuzzi/Zoran Radosavljevic]
