France’s top diplomat in charge of technology policy said there could be a need for EU legislation to make sure social media firms remove content promoting terrorism.
France has been leading an initiative with the UK and Germany to pressure online platforms to quickly take down posts containing terrorist material.
The European Commission is currently drafting its own plans to encourage firms including YouTube, Facebook and Twitter to use automated technology tools to identify such content and remove it within one hour, according to a leaked document that emerged earlier this week.
“We are working very confidently and very closely with the Commission on the use of the internet for terrorist purposes. With the European Commission and Europol, this works very well. Our efforts are really reinforcing each other,” David Martinon, France’s tech ambassador, said in an interview.
The Commission is expected to make its plans for a new, non-binding recommendation to tech platforms public within the next few weeks. Julian King, the British Commissioner in charge of the EU security union, has taken the lead in drafting the document.
Martinon credits a French-British initiative from last summer as the basis for the EU's toughened-up approach to big tech companies. French President Emmanuel Macron and British Prime Minister Theresa May agreed in June 2017 to make platforms legally responsible if they do not start removing more terrorist content.
“We created momentum with the Brits on that,” Martinon said.
Macron put Martinon in charge of liaising with Silicon Valley firms when he named him tech ambassador in November.
Martinon described the French president as “tech savvy” and motivated to carve out a new approach to policing how platforms address terrorism.
“He’s interested and he’s pushing, which is good,” he said.
Most EU countries do not have dedicated ambassadors who work on tech policy. Denmark appointed Casper Klynge as its tech ambassador last May, six months before Martinon started his new role. Klynge splits his time between Silicon Valley, Beijing and Copenhagen. Martinon is based in Paris.
After Macron and May’s agreement in June, French and British officials decided to include Germany in their discussions with the companies, according to Martinon.
“We are now three countries leading the way on those topics,” he added.
Martinon is unconcerned about Brexit opening a rift between the countries that have defined Europe's position on how social media firms should respond to terrorist groups' use of their platforms.
“I don’t see that as a big problem. There is a discussion around European legislation and we will favour that,” he said.
But some illegal content that is notified to tech companies comes from the EU police agency Europol, and the UK’s Europol membership after it leaves the EU in 2019 is not guaranteed. May is expected to argue in a speech at the Munich Security Conference on Saturday (17 February) that the UK should remain a member of Europol and the European arrest warrant system.
The UK’s pitch is that both Britain and the EU stand to lose out if their partnership on security is severed.
Martinon described the UK as “very determined to provoke changes in the way platforms operate on terrorist content”.
“Losing formative influence over European counter-radicalisation activities could have a significant impact, both for the UK and EU – the UK has been a key contributor and is a valued partner given its law enforcement expertise,” said Kate Cox, a senior analyst at British think tank RAND Europe.
The French, British and German push to force platforms to remove content has added pressure on the Commission to address how the companies operate across the EU. The version of the Commission’s draft communication that leaked earlier this week does not legally require companies to remove posts.
That could still change. The EU executive published a document last month saying it is "looking into more specific steps to improve the response to terrorist content online, before deciding whether legislation is needed".
One controversial element in the leaked Commission plans is the EU executive's encouragement for firms to develop automated technology to detect illegal content more quickly. Civil liberties campaign groups have criticised that kind of technology, arguing it amounts to an overreach that monitors all of a platform's users' activity.
Martinon admitted that he was sceptical of the idea that "we would let the platforms do the job entirely because it basically means we are giving them the key of the world".
But he said it’s a big burden on companies if governments ask them to take down content quickly and also ensure there is independent oversight of what they remove.
“When they come back to us and say, ‘We have invested a lot in AI and now we manage to detect and take down 98-99% of terrorist content even before they upload’, this is the result. This is the very positive outcome,” he added.
Facebook and Google already use artificial intelligence to monitor their users’ posts for illegal material.
Some of the biggest EU countries are leading the discussions with tech giants about illegal content, but Martinon said he has not faced pushback from other member states.
Germany moved ahead of other EU countries by drafting its own law forcing platforms to take down posts containing hate speech or risk fines of up to €50 million. That law, which took effect on 1 January, covers hate speech but not content that promotes terrorism.
“You have to have a lot of political power behind you to negotiate with Facebook,” said Kilian Vieth, an analyst at the German think tank Stiftung Neue Verantwortung.
“Germany and Facebook, they’re almost at the same power level.”