Countering disinformation: Is the DSA Punching Below its Weight?

DISCLAIMER: All opinions in this column reflect the views of the author(s), not of EURACTIV Media network.

A gamut of the European Commission’s initiatives marks a welcome step-change in EU policy against disinformation, but certain influence operations may still slip through the net, writes Paolo Cesarini.

Paolo Cesarini is a former European Commission official, having headed up the Media Convergence and Social Media Unit. He wrote this opinion piece in a personal capacity. 

Public concern about the spread of false or misleading information on social media has grown steadily during the COVID-19 infodemic and the 2020 US presidential campaign.

The Digital Services Act (DSA) proposal, together with two action plans in support of European democracy (EDAP) and the media and audiovisual sector (MAAP), bears witness to the Commission’s resolve to tackle this global challenge by leveraging European democratic values and fundamental rights.

Building on the mixed results of the 2018 Action Plan against Disinformation, the Commission has doubled down on its efforts and pledged several complementary actions: regulating big tech and its content moderation practices, enhancing resilience against foreign interference in electoral processes, strengthening legal safeguards for professional journalism, and developing new industrial policy tools in support of a plural and sustainable news media sector.

And just a few days ago, it started to implement its plans by launching a public consultation on possible new instruments to regulate online political advertising.

All these aspects are critical for an effective policy against disinformation. However, the Gordian knot remains how to set obligations for global online platforms that are effective in countering the threat while balancing user safety and freedom of expression.

The DSA tries to untie this knot by combining three elements: a limited set of due diligence requirements for “very large online platforms” (i.e. services reaching 45 million monthly active users in the EU, or 10% of the EU population); a legal basis for industry-wide codes of conduct, such as the Code of Practice on Disinformation; and public oversight mechanisms, including independent audits and new enforcement tools, with fines of up to 6% of a platform’s global turnover in case of non-compliance.

The EDAP complements the DSA by announcing forthcoming Commission guidance to pave the way for a revised and strengthened Code of Practice.

This new co-regulatory approach is welcome. Yet it may fail to address certain key aspects of the phenomenon.

Systemic or endemic risks?

The DSA imposes on very large platforms the obligation to self-assess, on a yearly basis, the “systemic risks” arising from the operation of their services, and to take appropriate mitigation measures.

Disinformation-related risks are defined as an “intentional manipulation of the service”, normally involving “inauthentic use or automated exploitation”.

While capturing a number of typical artificial amplification techniques, such as the use of fake accounts, social bots, stolen identities, or account take-overs, this definition seems to overlook that disinformation is a multi-faceted phenomenon whose impact depends on fast-evolving technologies, service-specific vulnerabilities, and constant shifts in manipulative tactics.

Certain forms of information manipulation (attention hacking, information laundering, State-sponsored propaganda) do not necessarily entail artificial or inauthentic exploitation, but rather a strategic use of a platform’s service.

Hoaxes or conspiracy theories are often built up through successive manipulative operations on various online resources (bogus websites, fringe media outlets, discussion forums, blogs, etc.) before being injected into mainstream social media, often without the help of fake accounts or social bots, with a view to normalizing a narrative, or legitimizing certain information sources.

In other cases, such as the infamous #StopTheSteal campaign, interventions by influencers or statements from political leaders are the direct cause of the viral sharing of deceptive messages across organic audiences on social media.

Moreover, recent cases have shown how entire user communities can migrate from one social network to another, with pieces of false information banned on one site reappearing on another, suggesting that disinformation-related risks are endemic to the whole ecosystem.

These examples suggest that the duty of very large online platforms to assess “systemic risks” should cover not only manipulative conduct affecting the security and technical integrity of their services, but also content- and source-related manipulations that occur outside their services yet are liable to spread disinformation across their user base.

The upcoming legislative debate should address this aspect, as too narrow a definition of systemic risks could severely limit the scope and effectiveness of the mitigation measures and other safeguards (self-regulation, independent audits, public scrutiny and sanctions) provided for in the DSA.

How to ensure a more effective detection?

Effective detection of sophisticated information manipulation can hardly rely solely on platforms’ yearly self-assessments. To develop proper identification responses, the DSA should allow vetted researchers and fact-checkers to issue alerts that trigger an obligation for platforms to carry out internal investigations expeditiously, by analogy with what it already provides for trusted flaggers in cases of illegal content.

Moreover, online platforms should be encouraged to exchange information between their security teams, notably to facilitate early detection of covert coordinated networks or “cross-platform migration” cases.

Also, it is unclear why the DSA empowers users and civil society organizations to notify potential instances of illegal content but excludes this possibility for disinformation cases.

What type of mitigation measures?

Last but not least, the DSA sheds little light on what precisely is required of platforms to mitigate the risks emerging from disinformation.

Principles for content moderation, responsible algorithmic design, and the demonetization of websites that use disinformation as click-bait to attract advertising revenue are important areas where the Commission’s steer will be essential, to avoid leaving large digital players so much discretion that they become the ultimate arbiters of democracy.

As announced in the EDAP, the expected Commission guidance for the revision of the Code of Practice, and the multi-stakeholder dialogue that will accompany this process, will hopefully provide clearer direction on these critical issues.