This article is part of our special report How participation and digitalisation can help implement EU democracy.
How can online platforms that seek to maximise profits be made compatible with or even foster consumer protection, democratic processes, and worker rights? For European lawmaker Tiemo Wölken, the short answer is regulation.
Last year, Paul Nemitz, the principal adviser to the European Commission’s service for justice and consumer rights, stressed the need for making commercially-driven platforms serve democracy at an event of the Friedrich-Ebert-Stiftung.
Meanwhile, the EU institutions have passed or discussed flagship regulations to bring in rules for the unregulated online world. EURACTIV took stock of the EU’s digital agenda with Tiemo Wölken, an MEP for the German Social Democratic Party (SPD).
The Digital Markets Act (DMA), flagship legislation intended to rein in Big Tech, was recently agreed between the EU co-legislators. How do you think this legislation will change consumers’ and business users’ relationships with these platforms?
Many essential points have made it into the final text, as far as we know. The first one is interoperability for messaging services, mandating gatekeepers to open up one of the most important elements currently guaranteeing their dominance.
Secondly, we have the political agreement to include a ban on targeted ads for minors and a ban on using sensitive personal data to tailor ads in the Digital Services Act. In the DMA, we have a clause that will prevent gatekeepers from spamming users with frequent, constant pop-ups.
Lastly, we have agreed on an ambitious general fair access clause, which will mean that all gatekeepers need to provide access to their services under fair, reasonable and non-discriminatory terms — closing any potential loopholes.
These points will completely change the regulation of Big Tech companies. However, I’m not convinced that 100 or 120 Commission officials will be enough to enforce the DMA properly.
In the Digital Services Act (DSA), some critical parts are being negotiated: the partial ban on targeted ads, provisions on recommender systems, dark patterns, and algorithmic transparency. What is your view on where things currently stand?
I am confident that our rapporteur Christel Schaldemose will reach a strong compromise on recommender systems and algorithmic transparency. We, as the S&D, were pushing for more regulation here from the start, especially following Frances Haugen’s revelations.
Unfortunately, we did not have a political majority for more ambitious measures. Of course, the situation has completely shifted since the Russian invasion of Ukraine. Disinformation has entered the spotlight, mainly through state outlets like Russia Today and Sputnik.
I am convinced that we need a proper legal framework that deals with the dissemination of harmful but legal content. We should not simply treat it as illegal content since we need to be mindful of proportionality and fundamental rights. However, freedom of speech is not freedom of reach. We can restrict the visibility of content without taking down the content entirely.
Online platforms have privatised the public space but follow a commercial logic. How do you think we can handle a situation where a large part of the public debate takes place online, but where these platforms do not necessarily have incentives to respect the democratic process?
The main objective of any private company will always be to make money. We cannot expect private companies to contribute to our democratic debate unless we oblige them with regulation.
Unfortunately, the idea of the internet as a space for public debate and a natural extension of the physical public sphere gets too little attention in most discussions at the EU level, which tend to regard social media simply as a market to be regulated.
A key issue is that the algorithms employed by the large platforms do not favour open and fair political debate. They’re designed to keep users on the platform as long as possible so that the platform can make more money by showing people ads. The content that keeps people on the platform is not necessarily quality news, but sensationalist reporting or outright disinformation.
An excellent example was a study conducted by Avaaz on the performance of media outlets on Facebook ahead of the German federal elections. Major outlets like the German broadcaster ARD, Der Spiegel and Bild have more followers than Russia Today. Yet Russia Today managed to get more interactions with its content.
Russia Today knows how to play the Facebook algorithms. Facebook, for its part, allowed this and earned a lot of money from it, so it was happy to go along. That is the core of the issue. If we want online platforms to promote democracy, we have to ensure the algorithms are designed to do this. And that is our task as politicians.
Do you think there is more that the EU could do to promote pluralism in the online space, for instance, with the development of European platforms?
Creating a European Google or Facebook is doomed to fail. However, we can do things like creating a European public media platform for public broadcasters to share all of their content with all EU citizens. That could be the first step toward a truly European public cultural sphere.
In terms of algorithmic management, there is a proposed directive on the working conditions of platform workers. Also in this case, one could see tensions between the commercial interests and the European values. How do you think that the legislation can square these two aspects?
For social democrats, this is the critical question, not only in the platform worker directive but also in the context of the AI Act. What kind of algorithmic management systems will be covered by the AI regulation will depend on how broad the definition of AI ends up being.
We should regulate the automation of decision-making, not a specific technology. Workers need to have the right to know whenever an algorithm takes a decision and to have this decision explained. There needs to be a responsible person who can be held accountable for decisions made by AI. Then we need to discuss whether we should allow AI to interfere in all areas of our lives.
[Edited by Zoran Radosavljevic]