Google, Microsoft, Twitter and Facebook said they will remove posts containing hate speech within 24 hours as part of a new agreement organised by the European Commission to counter extremism on the internet.
The four American companies have been meeting with the Commission behind closed doors since December to come up with terms for removing online hate speech, which is illegal in the EU, and posts promoting terrorism.
Civil society groups said the agreement will damage freedom of expression by letting private companies decide what constitutes hate speech.
Two NGOs, European Digital Rights and Access Now, withdrew from the meetings with companies and the Commission because they said they were not consulted on the code of conduct.
“The whole problem is the lack of judicial oversight in this process,” said Maryant Fernández, an advocacy manager at European Digital Rights.
EU law requires member states to punish speech that incites violence or hatred because of a person’s race, religion, nationality or ethnicity.
A 2014 report from the Commission pointed out that laws in EU countries differ widely. In Estonia, for example, incitement to hatred is only illegal if it results in some kind of danger.
EU Justice Commissioner Věra Jourová has insisted member states update and enforce their laws against hate speech.
Jourová said in a statement today (31 May) that terrorist attacks in Paris and in Brussels over the last few months made getting rid of hate speech on the internet a higher priority.
“Social media is unfortunately one of the tools that terrorist groups use to radicalise young people and racists use to spread violence and hatred,” she said.
Under the new agreement, the four companies will review notifications they receive flagging hate speech and remove it quickly. The companies agreed to remove or block access to the posts within the EU, regardless of whether they are posted in Europe or somewhere else.
They also say they’ll work with the Commission to promote “counter-narratives”.
Twitter, Facebook, Microsoft and Google all said in statements that they already remove hate speech.
Facebook’s head of global policy management, Monika Bickert, said the company’s “teams around the world review these reports around the clock and take swift action”. Facebook reports 1.6 billion users worldwide.
Earlier this month, Microsoft introduced a new policy for removing posts that encourage violence or promote groups that the UN Security Council considers terrorist organisations.
Several of the companies that signed on to the agreement declined to comment on whether they expect it to result in a higher rate of removed content.
According to Fernández, the agreement is “basically a political statement to say they’re doing something” and probably wouldn’t increase the number of posts removed from the internet.
The agreement comes at a time when the Commission has its sights set on regulating online platforms, including Facebook and Google. The executive announced last week that it might include measures affecting the companies in upcoming telecoms and copyright legislation.
While the agreement to remove hate speech is voluntary, the executive wrote in last week’s paper on platforms that it will watch how the code of conduct works out, “with a view to determining the possible need for additional measures”.
According to Twitter’s most recent transparency report, the French government submitted 150 requests for the company to remove posts between July and December 2015. During the same period, the company received 21 requests from the UK and 10 from the German government. Governments can request that posts be removed for a number of reasons, including if they contain hate speech.
In the first half of 2015, Google received 20 requests from Italy to remove hate speech, nine from Germany and one from Greece. The company has not yet released its transparency report for July to December 2015.