A European Commission study on the effectiveness of the EU’s code of practice against disinformation has criticised the self-regulatory nature of the framework, suggesting that “sanctions and redress mechanisms” should be put into place in order to ensure compliance with the rules.
The code of practice against disinformation is a voluntary framework signed by platforms including Facebook, Google and Twitter, in which they agree to take measures to control the surge of disinformation online.
The code was introduced by the European Commission in October 2018, in a bid to stamp out fake news in the context of the May 2019 elections for the European Parliament.
However, a study published on Friday evening (8 May), produced for the Commission by the consultancy firm Valdani, Vicari and Associates (VVA), identified a number of shortcomings in the executive’s approach to stifling the spread of fake news online.
One such area is the voluntary and self-regulatory nature of the agreement, which makes it difficult to hold the platforms to account for breaches of the code, the study states.
“A mechanism for action in case of non-compliance of the (insufficient) implementation of the commitments that platforms signed up to, could be considered to enhance the credibility of the agreement,” the report finds.
“To that effect, the Commission should consider proposals for co-regulation within which appropriate enforcement mechanisms, sanctions and redress mechanisms should be established.”
The report also finds that, in order to bolster the code’s effectiveness and make mechanisms for action in case of non-compliance feasible, the executive and the platforms should agree on key terminology, and the code’s requirements should be applied uniformly across member states and platforms.
More broadly, however, the study finds that the code remains relevant and has led to positive results. “For this reason, the Code should not be abandoned, and its implementation should continue,” the document says.
This point was echoed on Friday by Siada El Ramly, director general of EDiMA, the trade association representing online platforms, who said that the code “has produced positive results and that it should be continued”.
Coronavirus fake news regulation
On the coronavirus, however, the study notes that its examination was carried out before the COVID-19 crisis and that events following the outbreak fell outside its remit. “However, the fact that the pandemic has become the topic of a new wave of disinformation has reinforced the need for action,” the document states.
On this point, in particular, the Commission has recently noted the need to take more robust action against disinformation, after having borne witness to a spike in fake news related to the current public health crisis.
Speaking to members of the Parliament’s internal market committee on Tuesday (14 May), the EU’s Justice Commissioner Didier Reynders said that in the context of the coronavirus outbreak, the issue of how disinformation online is managed by the authorities has only become more pertinent.
“During the crisis we need to continue to work with the platforms, to ask to remove a lot of messages from the different platforms on social media,” Reynders said.
“But then we need to think about regulation because we don’t have for the moment the capacity to go further than that, and to do more than just a voluntary approach with the different actors.”
“It’s true that during the crisis, it’s very important to fight against that, but it will be so very important with the Parliament and the Council to think about the best way to organise a regulation, maybe in the near future,” Reynders said.
[Edited by Zoran Radosavljevic]