Facebook can lead users down vaccine misinformation rabbit hole 

New research into vaccine misinformation on Facebook shows that the platform’s content recommendation algorithm can lead users towards, rather than away from, harmful conspiratorial material. 

Global advocacy group Avaaz conducted an investigation to “stress test” the algorithm by exploring the kinds of content recommended to new users without a history of interacting with anti-vaccine posts. 

The results show that it did not take long before pages spreading vaccine misinformation, including some already tagged with harmful content warnings or led by prominent “anti-vax” conspiracy theorists, were being recommended to users. 

The findings come at a time of increased scrutiny of the role of online misinformation in deterring people from getting vaccinated against COVID-19. Last week, US President Joe Biden told reporters that platforms such as Facebook are “killing people” by hosting such misinformation. 

Algorithmic transparency has been a focus of recent EU regulatory measures, with provisions included in both the Commission’s recent Guidance on the Code of Practice on Disinformation and the upcoming Digital Services Act (DSA). 

The investigation

As part of the investigation, Avaaz used two test accounts to like pages containing anti-vaccine content and then tracked the follow-up pages recommended by Facebook. 

Over the two days on which the test was conducted, 109 out of the total 180 pages recommended to researchers contained anti-vaccine content, suggesting that the platform’s algorithm does little to direct people away from harmful or misleading content of this nature. 

Researchers also found that viewing content concerning vaccines led to the recommendation of pages about autism, even if these pages contained no vaccine-related information.

Avaaz says this suggests that Facebook’s algorithm may have internalised the widely debunked myth of a link between vaccines and autism and that this could be seen as “a form of misinformation in and of itself.”

Design choice

The findings suggest that Facebook’s recommendation algorithm can serve to push users further into networks of vaccine misinformation, rather than providing them with a way out of it or directing them towards more legitimate sources of information. This, Avaaz says, is a “design choice” on the platform’s part. 

In September last year, the report notes, Facebook announced that it would stop recommending health-related groups to users in order to “prioritise connecting people with accurate health information.” Similar action, however, has not been taken in relation to pages containing health-related content.

Nate Miller, Campaign Director at Avaaz, told EURACTIV that this was a “glaring inconsistency” in Facebook’s policy and that “the existing policies at Facebook regarding misinformation in general and COVID-related misinformation, in particular, are simultaneously inadequately enforced and not fit for the task.”

Facebook was not immediately available for comment. 

Transparency measures

When it comes to addressing the spread of misinformation online, platforms’ secrecy around their algorithmic architecture is a key challenge, as is the lack of data they provide to demonstrate that they are taking adequate steps against problems such as disinformation, Luca Nicotra, Campaign Director at Avaaz, told EURACTIV.

“They say they’re cleaning up their platforms, but then they’re not releasing meaningful data that would then allow us to actually validate their claims,” he said. 

Earlier in 2021, the European Commission released its guidance on strengthening the 2018 Code of Practice on Disinformation, a set of self-regulatory standards agreed to by major tech platforms to tackle the issue of online disinformation. While the Code is not legally binding, its measures are likely to become so once the DSA becomes law.

Both the code and the DSA include a focus on algorithmic transparency, something which Avaaz says will be vital to understanding the impact and reducing the potential harm caused by misinformation on online platforms.

Among the measures proposed are provisions that would require platforms to demonstrate how they’ve changed their algorithms to counter the spread of disinformation.

[Edited by Josie Le Blond/Luca Bertuzzi]