Facebook rolls out new COVID-19 misinformation engagement alerts

Facebook CEO Mark Zuckerberg speaks during the VivaTech fair in Paris, France, 24 May 2018. [EPA-EFE/ETIENNE LAURENT]

Facebook is to establish a new system for alerting users who have engaged with misinformation related to the coronavirus, the company’s head Mark Zuckerberg announced on Thursday (16 April). The move comes after pressure from activists for Facebook to clamp down on the spread of fake news related to the pandemic.

Zuckerberg wrote in a post on Thursday that Facebook will “begin showing messages in News Feed to people who previously engaged with harmful misinformation related to COVID-19 that we’ve since removed, connecting them with accurate information.”

An additional statement from Guy Rosen, Facebook’s Vice-President for Integrity, clarified that the messages will be shown to users who have previously liked, reacted to or commented on harmful misinformation that the company has since removed.

“These messages will connect people to COVID-19 myths debunked by the WHO including ones we’ve removed from our platform for leading to imminent physical harm,” Rosen said.

“We want to connect people who may have interacted with harmful misinformation about the virus with the truth from authoritative sources in case they see or hear these claims again off of Facebook.”

The messages, Facebook says, will start appearing “in the coming weeks.”

Activists ask for more

Campaigners have welcomed the move, but added that the company could go further in its efforts against fake news.

“This is a positive move from Facebook, but one that could go further,” Christoph Schott, Campaign Director at the rights group Avaaz, told EURACTIV. “We’d like to see Facebook issue alerts for people who may have seen examples of misinformation on the platform, not only engagements.”

Meanwhile, Facebook’s move to issue alerts to those who have engaged with coronavirus misinformation comes after an investigation by Avaaz revealed that millions of users had been exposed to such misinformation without any warning.

Avaaz researchers studied more than 100 pieces of coronavirus misinformation on Facebook across six languages, and found that these posts were shared 1.7 million times and viewed an estimated 117 million times, with disparities in how subsequent warning labels were applied across different languages.

The study also revealed that it can take the company “up to 22 days to issue warning labels for coronavirus misinformation, with significant delays even when Facebook partners had speedily flagged the harmful content for the platform.”

Further research commissioned by Avaaz and conducted by George Washington University and The Ohio State University examined the practice of issuing retrospective ‘corrections’ to examples of misinformation, suggesting that it could reduce belief in disinformation by nearly half.

“We’d really recommend that the platforms start thinking more seriously about this ‘correct the record’ policy,” Schott said.

Future EU regulation?

Meanwhile, speaking to members of the Parliament’s internal market committee on Tuesday (14 April), Justice Commissioner Didier Reynders said that in the context of the current coronavirus outbreak, the question of how authorities manage online disinformation has become more important.

“During the crisis we need to continue to work with the platforms, to ask to remove a lot of messages from the different platforms on social media,” Reynders said.

“But then we need to think about a regulation because we don’t have for the moment the capacity to go further than that, and to do more than just a voluntary approach with the different actors.”

Judging by the Commission’s willingness to address the issue of online disinformation promptly, any regulatory action in this field may come within the framework of the Commission’s forthcoming Democracy Action Plan, which is due to be presented in Q4 of this year. The Digital Services Act, under which disinformation may be subjected to further restrictions, is meanwhile likely to be postponed until 2021.

On 27 March, the Commission’s Vice-President for Values and Transparency Věra Jourová sat down with tech platforms Facebook, Twitter, Google, Microsoft, Mozilla, and the association EDiMA, in the second such meeting since the coronavirus outbreak in the EU.

The platforms assured Jourová that they have started to actively promote coronavirus information emanating from authoritative sources. They have also pledged to demote or remove forbidden or harmful content on the issue.

[Edited by Benjamin Fox]