UK’s expanded Online Safety Bill moves forward

The UK government introduced its Online Safety Bill in Parliament on Thursday (17 March), following a heated debate on the fine balance between protecting people from online harms and preserving their online freedoms. 

The bill addresses an expansive set of issues, from online fraud to child sexual abuse, and, similar to the EU’s Digital Services Act (DSA), is designed to boost the standards, accountability and transparency of online service providers to ensure that they are attending to the safety of their users. 

The draft bill was published in May 2021 and, after a period of review, the parliamentary joint committee on the Draft Online Safety Bill issued a report on the legislative proposal in December, with an extensive set of recommendations for its improvement. 

Among them, they called for behaviour that is illegal offline to also be regulated online; the creation of new criminal offences making some behaviours illegal online; binding codes of practice to be enforced by the UK regulator Ofcom; and the introduction of additional child safety provisions.

Under the law, the largest “Category 1” companies – not unlike the Very Large Online Platforms (VLOPs) at the core of the DSA – will be required to assess the risks posed by the content posted on their platforms and address material that is considered harmful but falls below the criminal threshold.

Secondary legislation is also set to clarify what constitutes the “legal but harmful” material that platforms will be required to remove. 

Previously, the onus would have been on platforms to judge whether material beyond the core examples of this kind of content – such as depictions of self-harm or eating disorders – qualified; now, companies will only need to remove material that falls within the boundaries that will be set out.

The UK government says this change will boost freedom of expression by ensuring that platforms don’t overcompensate for the uncertainty of what is or is not acceptable by unnecessarily removing content, and that decisions of this nature aren’t made arbitrarily by company executives. 

Platforms will also be under an obligation to report any emerging harms to Ofcom, allowing for the addition of more official categories of legal but harmful content in future. 

Included in the changes that the government has made since the bill’s initial publication is the requirement that any website that hosts pornography must put in place “robust checks” on the age of users, echoing legislation recently introduced in France that requires porn websites to verify that anyone accessing the material is over the age of 18. 

It also sets out new plans to tackle anonymous trolls, requiring platforms to hand greater power to social media users to limit the material they see and the people they interact with online. Notably, Category 1 platforms will now be required to give people the ability to block users who have not verified their identity on the site and the choice to opt out of seeing harmful content.

Also announced this week was the criminalisation of “cyberflashing”, the sending of unsolicited sexual images online, in England and Wales. Under the new terms, offenders could face up to two years in prison. 

The bill also sets out a number of offences to be considered a “priority” for removal, requiring platforms to take proactive measures to ensure such content is taken down. Among them will be the encouragement of suicide, hate crimes, incitement to violence, and harassment or stalking.

The “priority” classification means platforms must ensure that their algorithms are formulated to prevent users from coming across this material and to minimise the length of time it is available online.  

The bill also pays particular attention to tackling child sexual abuse online. Under its provisions, the current voluntary reporting system will be replaced by a requirement that companies report any instances of such material to the UK’s National Crime Agency. 

“This bill is a once-in-a-generation chance to make sure children are not collateral damage to the growing power of the internet”, said Andrew Puddephatt, Independent Chair of the Internet Watch Foundation. The organisation, which focuses on child safety online, welcomed the bill but said greater clarity is needed on how, and on what timetable, it will be implemented.

The bill’s implementation will be overseen by Ofcom, the UK’s media regulator, which will have the capacity to fine non-compliant companies up to 10% of their annual global turnover. The executives of companies found to be in violation of the law could also face prosecution, with potential jail time of up to two years.

The scope of offences that could see executives face criminal liability has also been expanded to include destroying evidence or providing false evidence, refusing to allow Ofcom entry to company premises, and failing to provide information or attend interviews requested by the regulator.

[Edited by Luca Bertuzzi/Nathalie Weatherald]
