Lawmakers in the European Parliament have raised concerns over the role that social media played in the storming of the US Capitol in Washington, saying the EU’s proposed Digital Services Act (DSA) should do more to curb the spread of conspiratorial material online.
Supporters of outgoing president Donald Trump provoked violent scenes in the US capital on Wednesday, obstructing the certification of the November election result, which would formally declare Democrat Joe Biden the new US President.
For the first time, social media platforms appeared to recognise their share of responsibility in allowing Donald Trump to spread false allegations that the US election was rigged.
In an unprecedented move, Twitter temporarily banned the US president from its platform, pending his deletion of a series of tweets the company said could incite violence.
Facebook’s Mark Zuckerberg also announced on Thursday (7 January) that Trump had been banned from the network “indefinitely and for at least the next two weeks” for spreading false allegations about the fraudulent nature of the election.
“We believe the risks of allowing the President to continue to use our service during this period are simply too great,” Zuckerberg said in a post on Facebook.
In Brussels, MEPs have been quick to condemn the role that social media platforms have played in mobilising a community capable of carrying out the violent scenes that hit Washington, with the dissemination of conspiratorial material running rife across platforms.
“The riots in Washington have in large part been fuelled by online conspiracy theories so successful they have completely subverted the trust of many Americans in basic democratic institutions,” said Kris Peeters, a Belgian centre-right MEP, who led an initiative report on the Digital Services Act and fundamental rights last year.
“Our European Digital Services Act must significantly improve the transparency and enforcement of digital companies so that we can ensure they adequately address the risks, especially how disinformation is shared and amplified,” he added.
Under the Digital Services Act, platforms that fail to abide by new rules on advertising transparency, illegal content removal and data access could face fines of up to 6% of their annual turnover – a penalty that, for the largest companies, could run into billions of euros.
Alex Agius Saliba, a socialist MEP who led a text last year on the Digital Services Act for the Parliament’s internal market committee, said the EU’s landmark law needs to look more closely at the spread of false content.
“This is an attack on democracy, the rule of law, and I only hope the peace will be restored,” he told EURACTIV.
“Digital practices designed to maximise user attention based on illegal or sensationalist content need to be adequately addressed in the DSA.”
Code of practice and the DSA
In terms of disinformation, the proposed rules establish a co-regulatory backstop that will play a crucial role in the EU’s code of practice against disinformation, a voluntary mechanism signed up to by Facebook, Google and Twitter.
For its part, the European Commission has begun work on bolstering efforts against disinformation outlined in the code.
“We are now working on a strengthened Code of Practice on disinformation,” said Věra Jourová, the Commission’s vice-president for values and transparency.
One of the objectives will be “to monitor information on platforms’ policies and access to data, to develop standards for collaboration between fact-checkers and platforms, and to strengthen the integrity of services,” she told EURACTIV on Thursday (7 January).
“Thanks to the Digital Services Act, we will have a legal basis for this move. The Digital Services Act introduces, among others, a general need for big platforms to take risk-mitigating measures. The Code can be one of those measures, if done right,” Jourová added.
The Commission will issue guidance in spring, setting out how platforms need to step up their measures based on its assessment of an updated code of practice against disinformation.
However, socialist MEP Paul Tang, who delivered an opinion on the Digital Services Act for the Parliament’s civil liberties committee, believes guidelines for social media platforms may not be enough.
While “clear guidelines for the largest platforms are indeed needed to stop the spreading of disinformation,” the possibility of direct regulation against disinformation should not be discounted, he said.
“We cannot rely on the goodwill of Twitter and Facebook to make democracy work, given their vast commercial interest and their poor track record,” he told EURACTIV.
Green MEP Alexandra Geese, shadow rapporteur on the DSA for the internal market committee, concurred. She denounced in particular provisions in the Digital Services Act that give digital platforms the freedom to assess their own risks in certain fields.
“The Commission wants Google and Facebook to draft their own risk assessments – that is like telling Volkswagen to assess their contribution to climate change,” she told EURACTIV, highlighting the case of AI ethics researcher Timnit Gebru, who was fired from Google in December, allegedly over a paper describing the intrinsic biases of artificial intelligence systems.
“How can we expect these companies to write their own risk assessments if they don’t even allow critical thinking and drive out those who work to mitigate the risks?” Geese added.
Riots were planned online
In the days leading up to Thursday’s congressional vote, demonstrations had been planned and announced across various online platforms.
However, much of this activity took place away from mainstream websites: on the pro-Trump platforms ‘Gab’ and ‘Parler’, the hashtag #stopthesteal was widely used to spread calls for violence, including pictures of an armed George Washington with the caption: “We shouldn’t count on Trump saving us. Jan 6th We The People need to be saving him.”
Another widely used platform was the ‘TheDonald.win’ forum, where users had vowed to ‘Storm the Capitol’ should Congress give its blessing to the election result. That comment alone received over 500 upvotes.
But even on mainstream platforms, acts of violence had been openly advocated. In a TikTok video, one Trump supporter urged fellow supporters to bring their guns to the protests.
For that reason, the escalation in violence could hardly have come as a surprise to US authorities, says Alexander Ritzmann, a consultant to the European Commission’s Radicalisation Awareness Network (RAN) and advisor to the Counter Extremism Project (CEP).
“They must have known,” Ritzmann told EURACTIV Germany. According to public FBI documents, the Bureau has been closely following the activities of online groups such as QAnon, a loose collective of conspiracy theorists who believe Donald Trump is their only saviour from the villains among Washington’s “elite”.
The FBI considers QAnon and other “conspiracy theory-driven domestic extremists” as a terrorist threat.
In Brussels, lawmakers are finalising a regulation on terrorist content online (TCO). Among other things, it will introduce a stricter notice-and-action system, forcing platforms to delete terrorist content within an hour of notification.
However, Ritzmann believes that any system relying on notice-and-action cannot be sufficient so long as platforms can sit back and wait until users or authorities ask them to act.
[Edited by Frédéric Simon]