Social media giant Facebook has warned against curtailing freedom of expression as the EU considers measures to clamp down on disinformation campaigns across online platforms.
In the online world, “the scope of what we deem to be acceptable speech” has narrowed over recent years, leading to potential erosions in freedom of expression, said Nick Clegg, Facebook’s VP for Global Affairs, at Rome’s LUISS Guido Carli University on Tuesday (21 January).
Even though other social media companies, such as Twitter, have committed to banning political advertising online, Facebook has repeatedly resisted pressure to take action against political advertising across its platforms.
“In the end you need to be careful once you have curtailed free speech – because once you have curtailed it you can’t turn it back,” he said, adding that Facebook’s position is to “err on the side of free expression where that fine line has to be crossed.”
Despite Facebook’s commitment to ensuring that free speech is allowed to continue across its platforms, Clegg renewed previous calls for regulation across four areas of its operation: harmful content, election integrity, privacy and data portability.
EU fight against disinformation
Meanwhile in Brussels, the European Commission revealed that the EU’s Democracy Action Plan, set to be released later this year, will establish measures in the “fight against disinformation” while also attempting to “ensure free and fair elections,” as well as addressing media sustainability.
A project team on media pluralism and media freedom as part of the Commissioners’ Group on European Democracy has been established to work on issues related to sustainability in the industry, Věra Jourová, the Commission’s Vice-President for Values and Transparency, said yesterday. Members of the group include Jourová and Executive Vice-President Vestager, as well as commissioners Breton, Gabriel, Reynders, and Várhelyi. The first meeting of the collective has been planned for early February.
Throughout last year, and particularly in the run-up to the May 2019 elections, the European Commission attempted to do its part to quell the spread of fake news with the introduction of a code of practice against disinformation.
The code was a voluntary framework aiming to stamp out the spread of fake news online. Signatories to the set of measures included Facebook, Google and Twitter.
In October, as part of the release of the first annual self-assessment reports under the code, the European Commission highlighted “substantial concerns” regarding access to data for independent scrutiny of tech platforms’ efforts against disinformation.
In a statement, the Commission said tech platforms have not been permitting sufficient access to their data to meet the needs of independent scrutiny. It added that there is an “urgent need” for platforms including Facebook, Twitter and Google to establish better relationships with researchers and fact-checkers seeking to probe the work platforms conduct to stifle disinformation.
More broadly, at the start of this year, Facebook announced plans to stamp out political manipulation online ahead of the November 2020 US presidential election, allowing users to turn off certain ad-targeting tools.
The decision comes after serious concerns related to Russian interference in the 2016 election and the misuse of user data as part of the Cambridge Analytica scandal.
[Edited by Frédéric Simon]