‘Small platforms’ are the target of online terrorist content regulation, MEP says


This article is part of our special report Regulating against radicalisation.

The EU is taking regulatory measures to clamp down on the dissemination of terrorist content online. In the European Parliament, the file is being dealt with by the Civil Liberties Committee, with MEP Daniel Dalton leading the report. EURACTIV sat down with Dalton to discuss the finer details of the plans.

Daniel Dalton is a British MEP for the European Conservatives and Reformists (ECR). He spoke to EURACTIV’s Samuel Stolton.

How do you define online terrorist content?

We have to be very careful here because there is content that is clearly terrorist content and content that is political expression. So to me, we’ve already got the definition. And that’s the one contained within the 2017 terrorism directive.

However, the challenge here is that the directive doesn’t define what the content is; it defines what terrorism is, with reference to “seriously intimidating a population, unduly compelling a government or an international organization to perform or abstain from performing any act, or seriously destabilizing or destroying the fundamental political structure of a country…”

I think that these are good pointers for how we should define online terrorist content.

Why is this content regarded as particularly dangerous?

Many people who are at risk of radicalization are being targeted online. Whether it’s watching videos or learning about certain parts of an ideology that has eventually led to terrorist acts, most of this happens online.

At the same time, there are practical problems, when you have things such as bomb-making guides, for example, that are available on the web.

There are many reasons why this content should be regarded as dangerous, but the most important one is the fact that these types of content clearly do play a role in radicalization.

How do you respond to those who fear this regulation could result in a form of censorship?

Well, of course, there is the worry that legitimate free speech may get caught up in this. It’s a worry I share, and we will certainly be looking to tighten the regulation up to make sure that this doesn’t happen.

On top of that, we are facing similar issues to those raised in the copyright debate, such as upload filters and content monitoring.

Ministers clamp down on online terrorist content despite wave of opposition

Ministers sitting on the EU Home Affairs Council adopted their negotiating position on the European Commission’s proposed regulation against the spread of online terrorist content on Thursday (6 December), as those in the industry reacted with frustration to the plans.

How much faith do you have in the platforms dealing with this content themselves, without the need for regulation?

It’s clear that the platforms are not doing enough. There’s lots of content out there that shouldn’t be out there. And it’s not only the political institutions that recognize the need for something to be done. If you talk to most people outside of the Brussels bubble, they would say that platforms have a huge responsibility to make sure that terrorist content is taken down.

Now, from what I understand, it’s not necessarily the big platforms that have the problem. Many of them are already taking their own voluntary and proactive measures. But there are quite a few smaller platforms that are either inundated with offending content or simply not responding to the authorities’ requests for the removal of content.

So I think it’s fair enough for the Commission to set out a framework which gives the competent authorities more teeth when they’re trying to liaise with these platforms and take down content which shouldn’t be on there.

And how small are these platforms that we’re talking about here?

Very small outfits; we’re talking about one- or two-man bands. Websites that most people have probably not heard of, but which are hosting a huge amount of terrorist content. These sites are the target of the regulation.

This is also why the amendment was made on the ‘proactive measures’ point, which, in the Commission’s original proposal, called for hosting service providers to take steps to protect their services against the dissemination of online terrorist content.

My take on this issue is a little different: I think that we should focus on voluntary measures and the interaction between competent authorities and platforms. We should be homing in on the platforms that consistently fail to comply and that have no voluntary measures of their own in place.

In terms of the moderation of online terrorist content, isn’t there the risk that individuals employed in roles that require the reviewing of content could be affected by engaging with gratuitous and graphic content for hours daily?

I guess the inference you’re making here is that potentially people who are moderating the content themselves could be radicalised.

Frankly, I don’t know how we solve that. The fact that the content exists in the first place means that it is liable to radicalize, potentially, anyone who comes into contact with it.

Arguably, if one person looks at it in order to take it offline and ensure that millions of people don’t look at it, that’s clearly a good thing, which is justification for doing something about it in the first place.

'This is not censorship' says King, amid online terrorist content crackdown

Security commissioner Julian King assured EU citizens on Thursday (13 September) that plans to tackle the spread of terrorist content online do not amount to “anywhere near censorship.”

The Commission’s proposal calls for offending content to be removed within “one hour from receiving the removal order.” Your amendment includes an addendum to this point: “depending on the size and means of the hosting service provider.” Do you think this one-hour deadline is achievable?

I’ll be tabling some more amendments on this issue, because I don’t think my position was quite right at that moment.

The aim I have is that the one hour should come at the end of the process, i.e. you’ve had a referral in the first place, the competent authority has then contacted the platform, they’ve had a discussion, and the platform is refusing to comply. Only at that stage, in my opinion, should the one-hour order come in.

In terms of the ‘size’ of the hosting provider – that amendment is included to cover one-man outfits who cannot practically respond to such orders within such a short timeframe.

But in this case, an item of terrorist content may have been online for two or three weeks before the order is issued. If the objective of this regulation is to remove online terrorist content as swiftly as possible, don’t you think any time-limited order should be from the moment that the content is uploaded?

Well, that may require technologies such as upload filters, which I am completely against. For me, the one hour is more of an enforcement tool for the authorities than a justification for saying content such as this should only be online for one hour.

This is about giving the competent authorities the teeth to go after platforms that are not living up to their responsibilities.

Moving on to another amendment that has been made to the Commission’s original text, what’s your take on the scope of the restrictions? In the draft report, you say that the regulation should only cover terrorist content that has been available to ‘the public’ and not to ‘third parties.’

To me, the moral obligations should be on the platform disseminating the information to the public, not on an infrastructure service that might be hosting the content.

Ultimately, it’s the platform that is taking the editorial decision to put that material in front of people or to leave the offending content online.

The whole objective of the legislation, which is to allow platforms to live up to their moral responsibility to keep terrorist content offline, is compromised by the fact that businesses that don’t put terrorist content up, but only host it for someone else, could be targeted. This is a moral hazard issue: why go after the people who are not actually responsible for disseminating the content?

Moreover, from what I understand, it’s virtually impossible for cloud infrastructure service providers to identify individual pieces of content that may violate any restrictions.

How badly would cloud infrastructure service providers be affected should they be included in the scope?

Well, as I understand it, because it’s technically impossible for them to identify specific pieces of content, they may be required to shut down entire websites. This would make the business model for cloud infrastructure service providers unviable. Customers may turn away.

Was it an oversight, then, that the Commission included cloud infrastructure service providers in the original scope?

Possibly. The Commission said in Committee that they didn’t mean to include those types of cloud infrastructure services. I suspect they were thinking more about consumer cloud services, like Dropbox, for example, which I think should be covered. Others may not agree with me on this point, because material within Dropbox is only disseminated to private groups. But you can have a private group with 10,000 members, which is more Twitter followers than I have.

So for me, consumer cloud services should be included, but cloud hosting services shouldn’t. This is a distinction that we’ve tried to make in the draft report.

Juncker goes to war against disinformation and online terrorist content

The European Commission is set to pursue a crackdown on the spread of online terrorist content and disinformation, its president Jean-Claude Juncker announced in his State of the Union address on Wednesday (12 September).
