YouTube’s algorithm fuelling harmful content, study says

A new study points to YouTube's automatic recommendations as the main driver of harmful content on the world's second most-visited website. [Shutterstock]

A crowdsourced investigation has accused YouTube’s recommendation algorithms of fuelling harmful content. France and Germany were found to be particularly affected, along with other non-English speaking countries.

The study was conducted by Mozilla, the maker of the Firefox web browser, which competes directly with Google Chrome. Google is also YouTube's parent company.

“YouTube needs to admit their algorithm is designed in a way that harms and misinforms people. Our research confirms that YouTube not only hosts but actively recommends videos that violate its very own policies,” said Brandi Geurkink, Mozilla’s Senior Manager of Advocacy.

YouTube is the second most-visited website in the world, with 2.3 billion users per month in 2020. Its daily watch time increased tenfold between 2012 and 2017, reaching one billion hours per day. That growth was mainly the result of the platform's algorithm, which personalises video suggestions. YouTube has estimated that 70% of the time users spend on the platform is driven by this recommender system.

Studying the system

The main finding of Mozilla's report is that "the algorithm is the problem": 71% of the videos reported as harmful content were automatically recommended by the platform, and recommended videos were 40% more likely to be reported as harmful.

The main types of harmful content identified were videos containing disinformation, violence, hate speech, or scams. The study also calls for transparency in how the system decides which videos to recommend, since in several cases there was no direct relation between the video watched and the suggested videos.

[Chart] Source: YouTube Regrets: A crowdsourced investigation into YouTube's recommendation algorithm.

“We constantly work to improve the experience on YouTube and over the past year alone, we’ve launched over 30 different changes to reduce recommendations of harmful content. Thanks to this change, consumption of borderline content that comes from our recommendations is now significantly below 1%,” a YouTube spokesperson told EURACTIV.

YouTube welcomed further research into how its system functions and invited Mozilla to share the full dataset. The video-sharing platform estimates that less than 0.2% of the content on it currently violates its community guidelines, a share that has fallen significantly in recent years thanks to machine learning.

Toning down negative trends

The number of harmful videos, however, might be less relevant than their reach. The study contends that harmful content attracts on average 70% more views per day than other videos.


YouTube's main source of revenue is advertising, and ad revenue depends on how much time users spend on the platform. Left unchecked, the algorithm would naturally push trending content, regardless of whether it is harmful.

According to YouTube, the changes the platform has been applying to its recommendation systems have drastically reduced the watch time of 'borderline' content since 2019. The platform has also introduced human oversight to assess what constitutes harmful content.

Nonetheless, these improvements might not have affected all countries equally. “We also now know that people in non-English speaking countries are the most likely to bear the brunt of YouTube’s out-of-control recommendation algorithm,” Geurkink added.

The study found that the rate of harmful content reported in non-English speaking countries was on average 60% higher than in countries where English is a primary language. Germany and France were among the top three countries with the highest share of reported harmful videos.

Commission sets the bar for anti-disinformation measures

The freshly published Guidance on Strengthening the Code of Practice on Disinformation illustrates the European Commission’s expectations on the anti-disinformation measures for online platforms. While the Code is non-binding, the measures are likely to become mandatory following the adoption of the Digital Services Act (DSA).

Algorithm accountability

If confirmed, the findings of the report would put YouTube at odds with the Code of Practice on Disinformation, the European Commission's self-regulatory framework to fight disinformation. The Commission recently published new guidance for the code, introducing stronger measures to make platforms accountable for their recommender systems.

“Commitments should also include concrete measures to mitigate risks of recommender systems fuelling the viral spread of disinformation,” the guidance reads.

The Commission also urged platforms to be more transparent on how their algorithms work, requesting that they publish the methodology of their recommender systems. The updated Code of Practice is expected to be completed by early 2022.

The issue of algorithm accountability has also been raised in the context of the Digital Services Act (DSA), a major legislative proposal intended to set EU-wide rules for content moderation. Christel Schaldemose, the lead negotiator in the Parliament, proposed turning off recommender systems by default and making platforms accountable for their algorithms' violations of fundamental rights.

Make online platforms accountable for their algorithms, leading MEP says

EU lawmakers will battle over whether online platforms should be required to open their algorithms to scrutiny, making them accountable for fundamental rights violations, after the European Parliament published its initial revisions to the planned Digital Services Act. The new blueprint also includes stronger opt-in and enforcement measures.

[Edited by Benjamin Fox]
