Five EU Commissioners all wanted a piece of the action on Tuesday (9 January) at a meeting with CEOs from some of the biggest internet platforms.
The heads of more than 20 tech firms, including Google, Facebook, Microsoft, Twitter and Amazon, arrived in Brussels for discussions on a hot-button issue. Tensions have soared between the industry and politicians across Europe over the last year as governments have amped up pressure on platforms to remove illegal content.
Commissioners in charge of home affairs, justice, and digital legislation all attended the meeting.
The CEOs spent half the day holed up in the Berlaymont, the European Commission’s headquarters, to hear whether the EU executive is planning to announce new legislation this spring that would require them to take down posts containing anything illegal, including terrorist material or hate speech.
The Commission has put off introducing hard EU law but threatened over the last year that it might resort to regulating companies if they don’t remove illegal posts on their own. Every few months, the Commission meets with some of the biggest platforms—Google, Microsoft, Twitter and Facebook—to review how quickly they remove illegal posts as part of a non-binding agreement that the firms signed on to in 2016.
The Commission warned last September that it would give the companies a few more months to speed up their rate of removing that material, and promised to announce by May whether it will propose a law.
But pressure has mounted on the Commissioners over the last few months to take a tougher approach.
A controversial German law went into effect last week that forces companies to remove illegal posts—or face fines of up to €50 million.
Last week, French President Emmanuel Macron said he is drafting a new law regulating how platforms display so-called fake news. The Commission is planning to announce a separate strategy this spring outlining how tech firms should address fake news.
New measures in the two biggest EU member states mean that the Commission may now have more freedom to increase pressure on the industry.
“If the industry does not act – and fast – we will,” the Commission said in a statement after the meeting ended on Tuesday afternoon.
A report from last June showed that Microsoft, Google, Facebook and Twitter removed 59% of illegal hate speech that users flagged to them, up from previous monitoring rounds. But the Commission wants the firms to remove even more posts, and to respond faster to their users’ requests to take down content.
Justice Commissioner Vera Jourova will announce updated figures on the platforms’ response rates on 18 January. Sources involved in the Commissioners’ decision making said that the next few weeks will be crucial as they decide whether to draft legislation or not.
On Tuesday, the Commissioners took turns emphasising that their patience is wearing thin.
“We continue to keep all options on the table for the next steps,” Home Affairs Commissioner Dimitris Avramopoulos said.
During the meeting, Avramopoulos mentioned last year’s terrorist attacks in Germany, France, the UK, Sweden, Spain and Finland, where he said there was a “misuse of the internet” to recruit supporters, and to help terrorists coordinate.
Jourova said afterwards that “stronger EU level coordination is needed”.
EU Security Union chief Julian King said he wants small and medium-sized companies to join the internet forum group, where Microsoft, Facebook, Google and Twitter report to the Commission voluntarily on how much and how fast they take down illegal material.
One EU source said the Commission is looking at ways to help smaller firms monitor and remove such content at a faster rate, although they have smaller budgets and fewer staff than large platforms.
#onlineplatforms have huge power and influence, also social responsibility. Today's regime is flexible enough for them to take action to remove #illegalcontent – but it is up to platforms to do this. If platforms will not act proactively, legislators will. https://t.co/kOSUFeF8eh
— Andrus Ansip (@Ansip_EU) January 9, 2018
Andrus Ansip, the EU Vice-President in charge of the digital single market, said the firms still do not remove enough illegal posts.
“Fragmentation is an issue and will increasingly be so,” he said in a statement.
But Ansip, a Liberal former Estonian prime minister who has previously disapproved of regulation on hate speech that might spiral into a “ministry of truth”, pointedly asked companies on Tuesday to “detect, remove or disable access to illegal content on a voluntary basis”.
The CEOs told the Commissioners they’re worried about exactly that: they want to avoid a fragmented patchwork of laws across the EU, where some countries impose hefty fines if companies fail to remove illegal posts, and other member states do nothing. They pointed to the German law as a cautionary tale of what they want to avoid.
Social media companies have pushed back against the new German rules. They argue that the law gives the tech industry too much responsibility to determine what is illegal content.
Bernhard Rohleder, the CEO of German tech association Bitkom, which represents firms including Google and Facebook, called the law “a bad example for the European Union”.
“It is not up to private companies to decide on the interpretation of our fundamental rights to freedom of expression and the right to receive information,” Rohleder said.
During their meeting, the tech CEOs stressed to the Commissioners that there are different kinds of illegal content—and hate speech, for example, is easier for their staff to identify and remove than other types of posts.
The CEOs asked the EU Commissioners for more clarity on what kind of content is illegal and what is not, one source said. The source did not specify whether the CEOs expressed a preference for new EU legislation clarifying those definitions.
Last year alone, Facebook hired hundreds of new Germany-based employees to monitor posts before the new social media law came into effect there.