In the fight against fake news, YouTube has a ‘bias toward keeping content up’

YouTube signage in front of one of their buildings in San Bruno, California, USA, 30 March 2018. [EPA-EFE/JOHN G. MABANGLO]

YouTube has a systemic ‘bias towards keeping content up,’ although the video-sharing platform recently removed more than one million channels for violation of its policies as part of the EU’s code of practice against disinformation, EURACTIV has learnt.

During a briefing with Brussels journalists on Tuesday (30 April), senior YouTube executives spoke about the platform’s policy in taking down fake news and disinformation.

“There are the community guidelines, by which we determine which content should stay up on our platform versus being taken down,” said a YouTube executive, who spoke on condition of anonymity. “We tend to have a bias toward keeping content up because we have the value of freedom of expression.”

However, YouTube said that a preferable approach to dealing with content that “comes very close” to violating policies, or content that aims to “disinform or misinform users in harmful ways” is to remove recommendation support for such videos.

In practice, this means that videos from a channel judged to breach those conditions will be demoted so that they are not ‘recommended’ to users.


However, there’s a catch. YouTube executives revealed on Tuesday that if a user has previously ‘liked’ videos from a particular channel, or is subscribed to the channel itself, they will continue to receive recommendation notifications for that channel.
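Taken together, the policy described by the executives amounts to a simple rule: demoted (‘borderline’) videos are withheld from general recommendations, but still reach users who have subscribed to, or ‘liked’ videos from, the channel. A minimal sketch of that rule, using entirely hypothetical field names (YouTube’s real system is not public):

```python
# Hypothetical sketch of the recommendation rule described above.
# Field names ("demoted", "subscriptions", "liked_channels") are invented
# for illustration; they do not reflect YouTube's internal data model.

def should_recommend(video: dict, user: dict) -> bool:
    """Decide whether a video may appear in a user's recommendations."""
    if not video.get("demoted"):
        # Videos not flagged as borderline are recommended normally.
        return True
    channel = video["channel"]
    # Demoted videos are excluded from general recommendations, but
    # subscribers and users who 'liked' the channel's videos still get them.
    return channel in user["subscriptions"] or channel in user["liked_channels"]

video = {"channel": "channel_x", "demoted": True}
subscriber = {"subscriptions": {"channel_x"}, "liked_channels": set()}
stranger = {"subscriptions": set(), "liked_channels": set()}

print(should_recommend(video, subscriber))  # True: the 'catch' in practice
print(should_recommend(video, stranger))    # False: demotion works here
```

The sketch makes the loophole explicit: demotion only affects users with no prior engagement with the channel.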

A case in point that EURACTIV raised with YouTube on Monday was that of Tommy Robinson, the controversial former leader of the anti-Islam English Defence League, who will stand in the European elections in May.

Earlier this year, Robinson was banned from Facebook and Instagram for violating hate speech rules. After having previously been permanently banned from Twitter, Robinson is now left with YouTube as his sole platform.

Previously, Robinson has been accused of spreading anti-Islam disinformation and he confessed last November to spreading fake news about a Syrian refugee who had been the subject of a physical attack by pupils at a school in Huddersfield, England. A video of the attack had circulated widely on social media.

Robinson claimed that the refugee had been involved in a separate attack on a girl at the school, prompting the legal team representing the Syrian refugee to threaten legal action.

Pressed on the subject of Tommy Robinson, YouTube told EURACTIV they hadn’t identified any justifiable reason to remove his content.

“We looked at every piece of content that Tommy Robinson uploaded on YouTube,” an executive from the company said. “We didn’t find any violation of our policies.”

However, YouTube did apply the ‘recommended videos’ restrictions to Robinson’s account in early April, and his videos have been demonetised, meaning he cannot earn revenue on the platform.

EURACTIV took the issue up with Christoph Schott, campaign director for the human rights group Avaaz, an active voice in the fight against disinformation worldwide.

“The issue with disinformation on YouTube is how the algorithm for recommending videos works,” Schott said.

“Research we’ve recently conducted shows YouTube’s restrictions on Robinson’s account have worked, with viewing figures falling considerably since the company decided to crack down on his videos. However, YouTube need to be quicker on the uptake about this. There’s so much out there that is dangerous.”

Research conducted by Avaaz on the impact of YouTube’s restrictions on Tommy Robinson’s channel.

Despite YouTube’s decision not to remove controversial users who have been banned elsewhere, such as Robinson, the company announced last week that it had taken down more than one million channels in March for violating its Spam, Deceptive Practices & Scams policies.

Under this particular policy, a channel is in violation if it has misled users with a false description or title on a video, or made exaggerated promises and far-fetched claims.

The removals were cited in the most recent edition of the code of practice against disinformation compliance reports. The code is a voluntary framework that aims to quell the spread of fake news online. Signatories include Facebook, Twitter and Google.

“We are pleased to see that the collaboration under the Code of Practice has encouraged Facebook, Google and Twitter to take further action to ensure the integrity of their services and fight against malicious bots and fake accounts,” a joint statement from Commissioners Andrus Ansip, Vera Jourova, Julian King and Mariya Gabriel read in late April.

However, YouTube came in for broader criticism earlier this week, when it transpired that the platform had recommended a news video on the Mueller report hundreds of thousands of times. The video was produced by the Russian state-funded broadcaster RT, prompting US Senator Mark Warner to call on the company to do more.

“It’s extremely concerning that YouTube still hasn’t fixed the problems with its algorithm that make it so susceptible to gaming and questionable sources like RT,” he said.


[Edited by Zoran Radosavljevic]
