Tougher EU hate speech guidelines urge tech giants to prevent ‘digital Wild West’


The European Commission wants internet platforms to take down illegal posts faster, and is considering new legislation as one way to satisfy pressure from countries like Germany and France.

New Commission guidelines ask social media companies like Facebook, Twitter and YouTube to take down more illegal content like terrorist propaganda and hate speech, and to make sure it stays off the internet by using technologies to monitor what users share.

The guidelines amp up the Commission’s stance on how platforms should react to illegal posts. For now, the EU executive is sticking to a voluntary approach, but the guidelines will carry political weight. The Commission published its communication on Thursday (28 September), one day before EU heads of state meet in Estonia at a summit dedicated to digital policy.

Germany passed a controversial law earlier this year requiring social media firms to remove hate speech within 24 hours of being notified about it or face hefty fines. France and other EU countries have also expressed interest in drafting their own national laws.

The Commission wants to stop member states from creating different rules across the bloc, and the new guidelines are one way for the Commission to step up pressure on tech companies. If this voluntary step doesn’t push firms to remove more illegal content, the Commission announced that it may propose a binding EU law by May 2018.

“We cannot accept a digital Wild West, and we must act,” EU Justice Commissioner Věra Jourová said.

The EU justice chief oversees a non-binding 2016 agreement between the Commission and Google, Facebook, Twitter and Microsoft that encourages the companies to remove most illegal content their users share within 24 hours.

Under that agreement, companies removed 59% of the illegal content flagged to them between May 2016 and June 2017. During that time, the firms reviewed 51% of user notifications about illegal content within 24 hours.

Jourová’s office will analyse the effects of the deal again early next year, before the Commission announces whether or not it will introduce legislation.

“The ugly side of the internet also learned how to show its face. Hate speech is not just words, it can lead to concrete violence against concrete people in real life,” Jourová said.

Hate speech is illegal across the EU, but definitions of what kind of statements are illegal vary between member states.

The Commission announced that it is considering setting a concrete time limit for when companies must remove illegal material.

The new guidelines also ask social media companies to use algorithms to automatically detect illegal content, and then to remove it and make sure it is not posted again in a different form or from a different user account.

The Commission said it’s “important” for companies to hire people to check posts that are flagged as illegal “especially in areas where error rates are high or where contextualisation is necessary”. Facebook hired hundreds of new employees in Germany after the Bundestag approved the national social media law this summer.

The EU executive is also pushing for companies to rely more on so-called trusted flaggers, or organisations that specialise in identifying illegal content. Social media firms could fast-track the notifications they receive from those organisations.

Tech companies and civil liberties groups have warned that automated technologies to detect illegal content could force firms to constantly monitor what users share, and could result in a clampdown on free speech if too many posts are removed.

EDiMA, a Brussels-based association representing online platforms including Facebook, Google and Twitter, said the demand to make sure illegal posts do not reappear is especially challenging for social media companies.

“We are concerned about some of the wording about ‘notice and stay down’,” said Siada El Ramly, EDiMA’s director.

“There would have to be continuous monitoring. Everybody has a lot of faith in terms of what algorithms can actually do, but algorithms are only as good as the input you can put in them. You can look for the same content again, but if the same content is somehow reframed, it makes it difficult to find content back,” she added.

MEPs and lobby groups in Brussels are divided over whether the Commission should encourage tech firms to use automated technologies. Some argue the technologies are overzealous and will censor social media users.

“I am still convinced that a common European approach is needed if we want to keep a truly digital single market,” said Dutch Liberal MEP Marietje Schaake. She and 23 other MEPs asked the Commission earlier this year to propose binding legislation requiring companies to remove illegal content.

Schaake said that if the Commission does propose legislation next spring, it should protect free speech and not endanger a separate EU law that guarantees that online platforms are not legally responsible for any illegal content their users share.

The so-called e-commerce directive exempts platforms from liability if they are not aware of illegal posts, but firms must remove them “expeditiously” once they are notified. In its new communication, the Commission acknowledged that the exact amount of time platforms can wait before removing posts can vary depending on the type of illegal content.

“Where serious harm is at stake, for instance in cases of incitement to terrorism acts, fast removal is particularly important and can be subject to specific timeframes,” the communication reads.
