Christchurch Call: EU struggling to get anti-terror measures right

"Christchurch Call" meeting in Paris on Wednesday (15 May). New Zealand's Prime Minister Jacinca Ardern and French President Emmanuel Macron launched the initiative. [Charles Platieu/EPA] [Charles Platieu/ epa]

New Zealand and France gathered states and social media organisations around the same table on Wednesday (15 May) to take joint action against terror online. The EU has been negotiating a regulation on preventing the dissemination of terrorist content online for months, but critics find it too restrictive and fear censorship. EURACTIV Germany reports.

On Wednesday, New Zealand’s Prime Minister Jacinda Ardern and French President Emmanuel Macron launched the “Christchurch Call” on the fringes of the G7 ministerial meeting in Paris. The initiative calls on other states and online platforms to commit to halting the spread of terrorist content online.

France, which has itself been the scene of major terrorist attacks in recent years, pledged support for New Zealand in the fight against terrorism, following the attacks on two mosques in Christchurch on 15 March.

In a Facebook video published on Sunday, Ardern stressed that the Christchurch attack was different because “the attack was planned to be virally distributed online.” The attacker streamed his attack live on Facebook, and the video was subsequently shared thousands of times.

Facebook said it had blocked around 1.2 million copies of the video and deleted another 300,000 within 24 hours.

“One of the challenges we faced in the days following the attack was the spreading of the video’s many versions,” said Facebook’s Vice-President for Integrity, Guy Rosen.

Facebook is now under pressure to act. On the morning of the “Christchurch Call”, the company announced its intention to tighten its rules for live streams. In the future, users whose uploads violate the platform’s policies will be banned from using the live video function for a period of time.

EU lawmakers back one-hour deadline to remove online terrorist content

Lawmakers in the European Parliament’s Justice Committee have backed measures to clamp down on the spread of online terrorist content, including an obligation for platforms to remove offending content within one hour of reporting or face fines of up to 4% of their global turnover.

States set up monitoring units

An EU regulation to curb content glorifying violence online does not yet exist; for now, the principle of self-regulation applies.

Since 2017, Google, Facebook, Twitter and other social media organisations have been using a joint database to record the ‘digital fingerprints’ of suspected terrorist content and subsequently block it.
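Conceptually, such a database works like a shared blocklist of content hashes: one platform flags an item, and every participant can then match new uploads against the stored fingerprint. A minimal sketch of the idea (hypothetical code, not the actual industry implementation, which relies on perceptual hashes that survive re-encoding rather than the exact-match digest used here):

```python
import hashlib

# Illustrative shared registry of content fingerprints. The real industry
# database uses perceptual hashes robust to re-encoding and cropping; the
# exact-match SHA-256 digest here is a simplification.
shared_blocklist: set[str] = set()

def fingerprint(content: bytes) -> str:
    """Compute a digest serving as the content's 'digital fingerprint'."""
    return hashlib.sha256(content).hexdigest()

def report_terrorist_content(content: bytes) -> None:
    """One platform flags an item; its fingerprint joins the shared database."""
    shared_blocklist.add(fingerprint(content))

def should_block_upload(content: bytes) -> bool:
    """At upload time, any participating platform checks the shared database."""
    return fingerprint(content) in shared_blocklist
```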

Meanwhile, some states have set up their own monitoring units. Since October, Germany’s Federal Criminal Police Office has operated a unit that searches for illegal content online and reports it to Europol. According to the German Interior Ministry, almost 6,000 such reports have been made since then.

The UK also recently announced the establishment of an independent regulatory authority to monitor such content.

What is missing so far is a response at the European level.

In September, the European Commission presented a legislative proposal against terrorist content, which the Council of the EU has since largely approved.

The draft stipulates that content hosted by online platforms which calls for terrorist acts, promotes such acts or describes them in detail must be removed immediately once a removal request has been issued by a national authority of an EU member state.

Otherwise, fines of up to 4% of the company’s global turnover could be imposed.

MEPs back plans to quell online terrorist content, but one-hour timeframe is criticised

The European Parliament has backed plans to force online hosting services to remove terrorist content within one hour of reporting, in a move aimed at quelling the spread of extremist propaganda online. However, many in the industry have criticised the timeframe.

A problem for small blogs and forums

However, the Commission’s proposal has been met with opposition from NGOs, journalists and politicians, who fear they will be censored online and doubt whether such a project is feasible.

The European Parliament also found it difficult to agree on the proposal.

In a letter to the parliamentary committee responsible, prominent representatives of the digital industry, among others, spoke out against the one-hour deadline:

“This extremely short deadline, coupled with onerous sanctions, would entail over-removal of lawful content, which will negatively impact the freedom of expression and related fundamental rights of European users,” the letter read.

For smaller online platforms in particular, such as blogs or forums, directly monitoring all content would be “technically unfeasible”.


Elisabeth Niekrenz, a lawyer and political advisor at Digitale Gesellschaft, an association committed to promoting online consumer protection, has examined the European Commission’s plans intensively.

She sees another problem with the proposal: there are hardly any legal remedies against decisions to block content online. It is true that a complaint can be filed when a national authority issues a removal request.

“But whether and when a judicial procedure can be launched in all EU member states is a completely different matter. Defining what constitutes terrorist content is also a political question, which leaves plenty of room for abuse against voices critical of the government,” she told EURACTIV.

Facebook hit by landmark censorship lawsuit in Poland

A Polish NGO has filed a lawsuit against US-based social media giant Facebook, following concerns that the organisation’s freedom of speech was stymied on the platform. The case is considered the first in Europe to address the issue of “private censorship”.

EU Parliament against the “terror filter”

Many MEPs share this view. The draft law that passed the European Parliament on 17 April was therefore significantly relaxed.

It now contains a clause intended to protect educational content and journalistic reporting from the general suspicion of terrorism.

MEPs also decided against the mandatory active monitoring of content via a so-called “terror filter”, which the European Commission had initially proposed, and excluded messaging services such as Facebook Messenger and other non-public platforms from the regulation’s scope.

However, critics remain concerned about freedom of expression online. Many types of content, such as satire, risk being erroneously removed, according to Elisabeth Niekrenz.

“Automatic filters have already deleted a large number of videos put online by the NGO ‘Syrian Archive’, which documents war crimes committed by the Islamic State (IS), presumably because these contained IS flags. Every piece of content removed without justification is one more attack on freedom of expression,” she said.

What the EU regulation on terrorist content will ultimately look like will be negotiated by the newly elected European Parliament and the Council in September or October.

In the coming months, New Zealand’s prime minister will also try to rally more online platforms and states behind the “Christchurch Call”. Because social media operate globally, a global solution is needed, she said.

[Edited by Zoran Radosavljevic]

