Internet filters do not infringe freedom of expression if they work well. But will they?

DISCLAIMER: All opinions in this column reflect the views of the author(s), not of EURACTIV Media network.


The EU Court has clarified that filters should not be trusted when they cannot do their job with adequate precision. But their supervision remains weak. The upcoming AI Act offers an opportunity to address this, writes Martin Husovec.

Martin Husovec is an assistant professor of law at the London School of Economics.

On Tuesday, the Court of Justice of the European Union published its long-awaited ruling on the constitutionality of filtering to protect authors’ rights on sites like YouTube. According to the Court, the automated blocking of user content upon upload is permissible provided it avoids mistakes.

In 2016, the European Commission proposed a new Copyright Directive. It introduced an obligation to pay authors and other right holders for user-generated content on certain platforms. If authors are not interested in the money, they can instead demand that platforms filter content that infringes their copyright.

Poland decided to challenge the obligation to filter before the Court of Justice of the European Union. It argued that the obligation violated the freedom of expression of users whose content could be removed even though it is not illegal. No other member state supported Poland.

The Court ruled this week that filtering as such is compatible with freedom of expression. However, it must meet certain conditions. Filtering must be able to “adequately distinguish” when users’ content infringes a copyright and when it does not. If a machine can’t do that with sufficient precision, it shouldn’t be trusted to do it at all. 

Today, the detection of some infringements can already be automated with precision, especially those that do not require any assessment of context. But the less of the protected work a user takes, the harder the task becomes for machines: they can still find the content but are unable to legally evaluate it.

This puts pressure on algorithms and artificial intelligence. The hope is that one day machines will be able to do much more. This goal of increasing the efficiency of enforcement in light of technological changes is entirely legitimate.

But will the platforms be sufficiently interested in properly testing the quality of their algorithmic solutions? Do we really want to leave them unsupervised? The worry is that they will simply deploy whatever suits them commercially.

The Court now says that filters should only be used where the technology is of high quality. But who will judge whether it is? The European legislator has steered clear of any specifics. The member states are also shying away from answering the question.

The Court has indicated that, from a European point of view, the European legislator has done enough already. It said that the directive includes the basic ingredients to protect against abuse. It is now up to the member states to make precision filtering a reality on the ground.

But most member states have so far pointed back to Brussels. They have argued that the European rules are sufficient on their own. According to them, it is enough to copy and paste the directive into national law and let the domestic courts resolve any issues. This kicks the can down the road toward citizens.

Will this landmark decision change this? Probably not. The Court remained sufficiently vague.

The member states should, however, take the principles of the decision to heart. It is the duty of the state that prescribes the blocking to ensure that it does not get out of hand.

What needs to be done is quite clear. Filters should be subjected to testing and auditing. Statistics on the use of filters and a description of how they work should be made public.
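To make the idea of testing concrete, one audit measure could be a filter’s precision: the share of its blocking decisions that independent human reviewers later confirm as infringing. A minimal sketch in Python, assuming hypothetical audit data (the names and figures below are illustrative, not any platform’s real system):

    # Illustrative only: computes the precision of a hypothetical upload
    # filter from a sample of its decisions, each paired with the outcome
    # of an independent human review.

    def audit_precision(decisions):
        """decisions: list of (blocked_by_filter, confirmed_infringing) pairs."""
        blocked = [confirmed for was_blocked, confirmed in decisions if was_blocked]
        if not blocked:
            return None  # the filter blocked nothing; precision is undefined
        return sum(blocked) / len(blocked)

    # Hypothetical sample: 100 uploads blocked, of which 88 were confirmed
    # infringing and 12 were blocked in error; 900 uploads let through.
    sample = [(True, True)] * 88 + [(True, False)] * 12 + [(False, False)] * 900
    print(f"Filter precision: {audit_precision(sample):.0%}")  # prints 88%

Publishing figures of this kind, alongside false-positive counts, would give regulators and researchers a basis for judging whether a filter meets the Court’s “adequately distinguish” standard.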

Consumer associations should have the right to sue platforms for using poorly designed filters. Regulatory authorities should oversee how the systems work and issue fines in the event of shortcomings.

Without such mechanisms, precision filtering is wishful thinking. 

That is also why the European Union is currently discussing how best to regulate the use of artificial intelligence. We should not abdicate oversight of machines when they make decisions about our lives.

However, copyright filtering on platforms is curiously missing among the risky applications foreseen by the AI Act. This omission should be remedied. The Court of Justice just ruled that filtering before content goes online constitutes a prior restraint, a very serious type of interference with people’s freedom of expression.

The Digital Services Act will offer a great set of safeguards. But these will likely work well only for platforms that qualify as very large. For all other platforms using filters to pre-emptively block users’ content, only limited forms of transparency will apply in advance. Most other safeguards kick in only after individual mistakes have been made.

The EU legislature should therefore consider including the use of such upload filters in the upcoming AI Act.

This week, the Court of Justice of the European Union agreed with the European Court of Human Rights on a fundamental principle. The failures of companies entrusted by the state to enforce the law, even if dressed in sophisticated technology, remain the state’s direct responsibility.

Legislatures in Europe should stop looking the other way and address the oversight of filters head-on.
