The upcoming EU Digital Services Act marks a key moment for democracy across the bloc, writes Iverna McGowan.
Iverna McGowan is the Europe Director at the Centre for Democracy and Technology.
The Von der Leyen Commission has set itself the ambitious task of making ‘Europe fit for the digital age’. At the heart of that plan is the EU Digital Services Act (DSA), a package of laws to review and update the now 20-year-old e-Commerce Directive.
Through the DSA, the EU is considering introducing new laws on content moderation that could shape online public discourse on a global scale, much as the GDPR did for privacy. To safeguard European democracy, the DSA must meet several criteria.
For years, experts have deliberated about how to reform the framework for moderating illegal user-generated content, and now the EU wants to make solutions a legislative reality.
Human rights advocates are concerned about the current EU codes of conduct, which circumvent the human rights obligations of states and the EU by asking private companies to take down content based on broad definitions of what is permissible, without due process safeguards.
Online platforms are imposing their own rules with little public debate, lacking transparency and avenues for redress.
EU law currently provides private companies with a safe harbour from liability for illegal content posted by users. Last week a European Commission official suggested that this approach could be set aside in its DSA proposal.
Though it sounds counter-intuitive, removing this safe harbour entirely would push companies in one of two directions: either to censor, filter, and remove as much content as possible, both lawful and unlawful, or to adopt a strictly neutral role and take no action against disinformation or online harassment.
That’s why it’s crucial that a clear and stable legal framework governs the business of content-hosting. This would allow the DSA to opt for ‘good Samaritan’ moderation and encourage companies to take appropriate measures to address both illegal and so-called harmful content.
For such an approach to be human rights-compliant, companies must be bound by strong transparency requirements and strong procedural safeguards for users. The EU must also ensure that people have avenues to remedy, and that the judiciary remains the final arbiter on decisions on the legality of speech.
The DSA process is also raising questions about how data is monetized and used, despite the protections that EU data rules provide. Social media companies collect and hold deep wells of personal data, which feed the algorithms that decide what content and ads we see online.
Combined with companies’ use of behavioural science and psychology to keep people online, people’s vulnerabilities and fears are tapped into in an unprecedented manner, which poses particular risks to marginalised and at-risk groups. We know that algorithms perpetuate and even amplify discrimination.
What’s more, a major study by MIT showed that false information outperforms true information, and Facebook itself acknowledged as far back as 2016 that “64% of all extremist group joins are due to our recommendation tools…Our recommendation systems grow the problem.”
In its consultation, the EU is examining the advertising business models of large platforms and how disinformation is driven online. Policymakers must review how current EU privacy and data protection laws are enforced in practice, and consider whether they are fit to fight the societal harms stemming from this complex, mass use of personal data.
It is already clear that we all need more choice and power over the use of our personal data; as a society, we should reflect on whether there are things that are simply not for sale, and challenge the very idea of surveillance-based advertising.
Today, the vast majority of voters use social media as the primary channel to seek information and get news, but most EU countries’ electoral laws are designed for an analog era of political campaigning. The challenge of protecting election integrity online remains unresolved, and the EU is looking for solutions.
Equality must be a central concern in the debates on election integrity. Recently, a Channel Four documentary uncovered how voter data was used to target Black people and dissuade them from voting in the 2016 US election.
Despite national differences, there are common themes to electoral laws across the EU: rules to ensure balanced access to national airwaves, to regulate campaign spending, and to guarantee the right to vote for all citizens irrespective of gender, social status, race, or ethnicity.
Any proposal predicated on defining the difference between a political and a non-political ad would be unworkable. The European Commission would be wise to focus its energy instead on governance and procedures.
A crucial point will be to ensure transparency for all online ads, so that watchdogs (whether national election oversight bodies, civil society groups, or data protection authorities) have timely access to the information they need to monitor and enforce people’s rights.
The DSA could radically impact our online information ecosystem, with long-lasting and profound implications for key tenets of democracy such as free and fair elections, privacy, free expression, and standards for equality.
The overarching approach must be guided by key principles of transparency and accountability, and most importantly, must put the rights of the individual first.
Edited by Benjamin Fox