Disinformation: EU lawmakers ask platforms to do more, but their DSA talks go the other way

DISCLAIMER: All opinions in this column reflect the views of the author(s), not of EURACTIV Media network.


The war in Ukraine is playing out across digital services and social media platforms, with disinformation and propaganda at its core. The Spanish fact-checker Maldita.es has already listed more than 750 fact-checks produced to counter disinformation items in just a few weeks.

Diana Wallis is the President of the Board of Directors of EU DisinfoLab.

The EU institutions are officially calling on tech platforms to increase their efforts to tackle disinformation. On March 10, MEP Raphaël Glucksmann, chair of the European Parliament’s special committee on foreign interference, and nine other MEPs called for “clear rules” and “a structural approach to disinformation” in the DSA.

However, despite their declarations, EU lawmakers are failing to put forward a regulatory framework that matches their public positions. The current proposal on the table would actually give platforms incentives to do less, not more.

A key element currently in jeopardy is access to the user redress mechanism foreseen by Article 17. This new feature allows users to challenge the content moderation decisions platforms take against them. Unfortunately, as now proposed in a compromise text, it would only grant access to users whose own content has been removed.

To illustrate the issue: many were recently horrified by the story of a pregnant woman in Mariupol being rushed to an ambulance after Russia bombed a maternity hospital. Russian officials and conspiracy theorists have infamously twisted this event, claiming it was staged and that the woman was an actress. While some of these claims have been removed, disinformation around the incident continues to propagate.

In this case, the compromise means that only those perpetrating the disinformation, by claiming the attack was staged, would have the right to complain against a platform’s moderation of their content. In a sense, the more content the platforms moderate, the greater the risk of being challenged, so they have less incentive to moderate more content.

The compromise proposal would rather benefit the platforms and their business model, alongside the abusers of their services, while dangerously ignoring the needs of users who are exposed to this disinformation or who would legitimately wish to counter it. The only way to counter disinformation is to allow more open access to challenge content moderation decisions under a platform’s own policies and rules, ensuring they are applied equally whether or not the platform takes action to remove content.

If Facebook says it is labelling all Russian state information, we need to be able to legally challenge why, in 91% of cases, it failed to do so. If Instagram prohibits advertising containing claims debunked by fact-checkers, we need to understand on what basis it allowed advertising about alleged US biolabs in Ukraine.

Since November, the Council has agreed in its compromise texts to extend this possibility to challenges against all platform decisions, including decisions not to act on content infringing their terms and conditions, a key request from many civil society actors supporting this vision.

We do not want a regulation that relies on Commission President von der Leyen calling on Meta to take down disinformation. This is untenable, as it gives full credence to the platforms’ press releases trumpeting what they claim to be doing. What we need most of all is the enhanced access to the user redress mechanism in Article 17, as proposed by the Council. Only this will grant public accountability over private decision-making.
