The European Parliament on Wednesday (28 April) formally adopted, without a vote, controversial legislation that forces online platforms to remove terrorist content within an hour of it being flagged.
The regulation on preventing the dissemination of terrorist content online, which was adopted in the face of opposition from several organisations and MEPs, includes an obligation for digital platforms to remove “terrorist content or disable access to terrorist content in all member states as soon as possible and in any event within one hour of receipt of the removal order.”
These “removal orders” must come from the “competent authority” of each EU country and can be addressed to all member states.
The legislation, which was drafted by the European Commission in the wake of a series of terror attacks across the bloc, is seen as necessary to combat online content disseminated by terrorists to spread their message, radicalise and recruit followers, and direct terrorist activities.
However, critics say that giving platform hosts such a short deadline would encourage them to rely on algorithms for their moderation. Furthermore, the absence of judicial control and the transnational scope of the removal orders threaten freedom of expression and represent a danger for democracy, they say.
The latest open letter to MEPs calling on them to reject the legislation points “to the contradictions of this text with the decision of the French Constitutional Council of 18 June 2020 on the law to combat hateful content on the internet”.
The Constitutional Council rejected several provisions of the so-called Avia law which would have forced hosts to remove offending content within 24 hours on the grounds that “such a time limit is particularly short” and that the removal order provided for “is not subject to the prior intervention of a judge or to any other condition”.
Lucille Rouet of Syndicat de la Magistrature, one of the signatories to the letter, regrets that the European text presents the same risk as the provisions of the Avia law, that of “preventive withdrawals”.
“The risk is that too many things are withdrawn based on criteria that are too broad, so as not to be sanctioned,” she told EURACTIV.
In light of the criticism, a number of safeguards were added to the proposed legislation. The text now states that “material disseminated for educational, journalistic, artistic or research purposes or for awareness-raising purposes against terrorist activity should not be considered to be terrorist content”.
A provision was also added to deal with the issue of platforms deploying automated filters. The regulation now states that “any requirement to take specific measures shall not include an obligation to use automated tools by the hosting service provider” and adds transparency obligations in this regard.
“The agreement that has been negotiated, from my point of view, seems balanced. Not ideal, but the conditions are guaranteed with regard to the main issues,” Sylvie Guillaume, a French MEP in the Group of the Progressive Alliance of Socialists and Democrats, told journalists earlier this week.
“I know that the French authorities had been lobbying us for several months to get this text voted on,” she added.
However, many remained unsatisfied with the concessions.
“To say that [using automated tools] is not an obligation is to allow it,” Green MEP Gwendoline Delbos-Corfield told EURACTIV, in a “context where the algorithm is going to be cheaper than human means”.
Platforms will no doubt choose this option in the face of the legal risks and the “moral pressure that will set in”, she added.
Meanwhile, questions remain about the practicality of removing content within an hour of it being flagged.
“The one-hour removal deadline will be nigh on impossible for most small platforms to implement effectively,” Jacob Berntsson, head of policy and research at the UN-backed Tech Against Terrorism initiative, told EURACTIV.
“Given that terrorists exploit smaller platforms due to their lack of capacity and resources, this regulation therefore risks not only being ineffective in achieving its intended purpose, but also harming smaller platforms in the process,” he added.
The risk of ‘opportunism’
Green MEP Delbos-Corfield also said she feared “opportunism” on the part of some member states who may urge their neighbours to remove content without judicial review. “The term ‘terrorism’ is now being used all over the place, in everyone’s mouth, in a completely undue way,” she said.
“Hungary theorised that you were an enemy of the nation, which is not far from terrorist, when you criticised an element of the government,” said Delbos-Corfield, who is a member of Parliament’s Special Committee on Foreign Interference.
“If tomorrow Marine Le Pen comes to power [in France], she will be able to consider being an ultra-leftist as being a terrorist,” she added.
Privacy campaigner and fellow MEP Patrick Breyer agreed.
“Anti-terrorism legislation is again and again being abused for entirely different purposes such as to crack down on Spanish separatists and artists, French protesters or refugees in Hungary,” he said.
No vote in plenary
The legislation was not put to a vote in the plenary. Since the text as it stands was approved by the Committee on Civil Liberties, Justice and Home Affairs (LIBE) on 11 January by 52 votes in favour and 14 against, a new debate in the Chamber was ruled out.
Delbos-Corfield said the decision not to put the legislation to a vote had left her feeling “immense regret.”
“Every time we avoid a democratic debate, we deprive ourselves of collective intelligence […] even if we don’t vote today, it’s not over yet,” she said.