Digital Services Act should avoid rules on ‘harmful’ content, Big Tech tells EU

New EU measures regulating the web should avoid, in the first instance, rules on the hosting of online content deemed “harmful” but not “illegal”, a Brussels trade association representing the world’s largest online platforms has said.



In a position paper published today (12 October), industry lobby EDiMA has called for the EU’s upcoming Digital Services Act to introduce a “legal basis to act” on the hosting of illegal content, while avoiding, for the time being, the “challenging” task of regulating against harmful content.

“With content/activity that is ‘harmful’ but not illegal, often a service provider will have to make determinations as to where to draw the line between free speech and the right to information versus the possible harm caused to users,” states the paper, which is supported by EDiMA members such as Google, Amazon, Facebook, and Apple.

“The legal basis to act is typically clearer on the part of the service provider when the content is illegal, so there is a basis for specific actions, and redress mechanisms to be put in place for users where content removal is mandated,” the document notes, adding that the Digital Services Act should therefore “initially focus on content/activity that is already defined as illegal across the EU.”

The Digital Services Act (DSA) represents the EU’s ambitious plan to regulate online services, and will cover areas of the platform economy ranging from liability for content, market dominance, and advertising, to safety, smart contracts, online self-employment, and future governance frameworks. The package of measures is due to be unveiled by the European Commission on 2 December.

Siada El Ramly, EDiMA’s director-general, warned that the many “cultural” nuances across the EU could further complicate the question of whether certain online content is deemed harmful or illegal.

“It shouldn’t be up to our industry to define the legal context for every EU member state, because every context is wildly different,” El Ramly told EURACTIV, pointing to research in the paper which highlights disparities across the EU on rules for causing religious offense online.

For example, in Italy, Poland, and Spain, causing offense on the basis of religion is illegal under certain circumstances, while there are considerably lighter rules in Denmark and France.

In such a climate, introducing pan-EU regulation that could see content that is harmful but not illegal being removed may have a negative impact on freedom of speech in certain European nations, El Ramly added.

France’s hate speech law unconstitutional

A case in point arose recently in France, where the country’s Constitutional Council declared in June that rules outlined in the country’s hate speech law, passed in May 2020, ran contrary to the country’s constitution and freedom of expression.

France’s hate speech law introduced, among other rules, obligations for online platforms to remove “hateful” material within a 24-hour time frame. The failure to remove such content could have resulted in fines of up to €250,000.

Commission mulls restrictions on platform data use in Digital Services Act 

Platform giants will be prohibited from using the data they collect online unless they make this data available for use by smaller platforms, according to a draft of blacklisted practices, seen by EURACTIV, as part of the European Commission’s forthcoming Digital Services Act.

Jourová against content removals

Monday’s paper from Big Tech lobby EDiMA comes at a timely moment, weeks after leaks emerged from the EU executive, detailing a list of potentially “blacklisted” activities by platforms under the forthcoming Digital Services Act.

Hard regulation against content did not feature in these draft papers, and other signals coming out of Brussels suggest that the Commission is unlikely to introduce tough rules on content removals.

Věra Jourová, the European Commission Vice-President for Values and Transparency, held a video call with Twitter CEO Jack Dorsey in mid-September, in which she highlighted the Commission’s intention not to necessarily introduce future rules that would force platforms to remove harmful online content or disinformation, focussing instead on how such content spreads online.

“In order to address disinformation and harmful content we should focus on how this content is distributed and shown to people rather than push for removal,” she said following the meeting.

Despite those reassurances, EDiMA intends to continue making its point. Monday’s paper comes after January’s broader ‘Online Responsibility Framework’ policy document, and ahead of two more granular papers to be published by the Big Tech association.

“Our main message with Monday’s paper is clear,” El Ramly said. “We want to start by finding legal solutions in dealing with illegal content online, before creating safeguards to ensure that harmful content is addressed, without encroaching on freedom of expression.”

‘Content removal’ unlikely to be part of EU regulation on digital services, Jourová says

The European Commission has given its clearest indication yet that obligations on digital platforms to remove content are unlikely to feature in far-reaching EU efforts to regulate the web, due to be presented before the end of the year.

(Edited by Frédéric Simon)
