Platforms clamp down on hate speech in run-up to Digital Services Act

Věra Jourová, Vice-President of the European Commission for Values and Transparency. [Shutterstock]

The European Commission has applauded efforts by some of the world’s largest tech platforms to stifle the spread of illegal content, in the latest evaluation of the EU’s code on countering illegal hate speech online before the Digital Services Act is presented later this year.

Results of the evaluation of the code show that 90% of flagged content was ‘assessed’ within 24 hours, while 71% of such content was eventually removed.

However, the study published on Monday (22 June) did not examine the specific timeframes within which such ‘illegal content’ was removed – a notable omission given recent developments in France, where last week the Constitutional Council struck down large parts of a draft law against online hate speech that would have obliged social media giants to remove hateful content within 24 hours.

Moreover, the data included in Monday’s report did not provide a comprehensive overview of content removal rates across all EU member states: there were no figures for Belgium, Greece and Ireland because notification numbers were too low, while organisations in Malta, Luxembourg, the Netherlands and Denmark did not submit data on removal rates at all.

As part of the monitoring exercise, sexual orientation was cited as the most commonly reported ground of hate speech, followed by xenophobia and anti-gypsyism.

The 2016 code is a voluntary agreement between the Commission and social media giants, introduced in a bid to clamp down on racist and xenophobic hate speech online. Signatories include Facebook, YouTube, Twitter, Microsoft, Instagram, Dailymotion and Snapchat.

Digital Services Act

On Monday, the Commission highlighted the relevance of the evaluation in the context of the upcoming Digital Services Act – the EU’s ambitious bid to regulate the online ecosystem across a range of areas including political advertising and offensive content.

For his part, Justice Commissioner Didier Reynders said that while the platforms were moving in the right direction in their management of online hate speech, more effort was required on transparency and feedback, and that as a result, the Commission could introduce ‘binding transparency measures’ on illegal speech online.

Monday’s analysis revealed that of the platforms involved in the code, only Facebook provides systematic feedback to users on the outcomes of notifications submitted in response to hateful content.

“I urge the platforms to close the gaps observed in most recent evaluations, in particular on providing feedback to users and transparency,” a statement from Reynders said.

“In this context, the forthcoming Digital Services Act will make a difference. It will create a European framework for digital services, and complement existing EU actions to curb illegal hate speech online.

“The Commission will also look into taking binding transparency measures for platforms to clarify how they deal with illegal hate speech on their platforms.”

As for the Digital Services Act (DSA), the Commission launched public consultations on the long-awaited package at the beginning of June. Six areas are being examined ahead of the proposal at the end of this year: online safety, liability, market dominance, online advertising and smart contracts, issues surrounding self-employment online, and the potential future governance framework for online services.

Online Terrorist Content

In addition to influencing the content of the DSA, Monday’s evaluation could also have a bearing on the ongoing inter-institutional negotiations on the EU’s online terrorist content regulation.

Talks between the Council and the Parliament were originally put on hold due to the coronavirus outbreak, and significant differences in position mean that no end is yet in sight for the new measures. The regulation could see online platforms forced to remove flagged terrorist content within a one-hour timeframe, in addition to introducing ‘proactive measures’ that could take the form of upload filters.

While the Parliament is against the idea of upload filters being included in the text, the Council and Commission both support this inclusion.

[Edited by Zoran Radosavljevic]