Category 1 services – the largest and most popular social networks – will have to implement rules that protect ‘democratically important’ content. Photograph: Matthew Vincent/PA

Online safety bill ‘a recipe for censorship’, say campaigners


Proposals hand Ofcom the power to identify ‘lawful but harmful’ content and punish social networks that fail to remove it

Long-awaited proposals to regulate social media are a “recipe for censorship”, campaigners have said, and fly in the face of the government’s attempts to strengthen free speech elsewhere in Britain.

The online safety bill, introduced to parliament on Wednesday, hands Ofcom the power to punish social networks that fail to remove “lawful but harmful” content. The proposals were welcomed by children’s safety campaigns, but have come under fire from civil liberties organisations.

“Applying a health and safety approach to everybody’s online speech combined with the threat of massive fines against the platforms is a recipe for censorship and removal of legal content,” said Jim Killock, director of the Open Rights Group. “Facebook does not operate prisons and is not the police. Trying to make platforms do the job of law enforcement through technical means is a recipe for failure.”

The centre-right thinktank the Centre for Policy Studies (CPS) was similarly critical. “It is for parliament to determine what is sufficiently harmful that it should not be allowed, not for Ofcom or individual platforms to guess,” it said.

“If something is legal to say, it should be legal to type,” CPS’s director, Robert Colvile, added.

In its update to the bill from the white paper first drafted by Theresa May’s government in 2019, the Department for Digital, Culture, Media and Sport added sections intended to prevent harm to free expression. Social networks will now need to perform and publish “assessments of their impact on freedom of expression”.

But the proposed legislation, published on the same day as a bill forcing universities in England to promote free speech, is largely concerned with pushing social networks to take down more content, not less.

One exception is another new section, which would make the UK one of the first nations in the west to require social networks to take active steps to moderate their impact on the democratic process. There are fears, however, that the requirement could lead them to refuse to take action against harmful content in case it was deemed democratically important.

Under the measures, “category 1” services – the largest and most popular social networks – will need to implement rules that protect “democratically important” content such as posts promoting or opposing government policy or a political party before a vote in parliament, an election or a referendum, or campaigning on a live political issue.

They will also be banned from discriminating against particular political viewpoints and will need to apply protection equally across political opinions.

As an example, the government said a company’s rules against content depicting graphic violence could include exceptions to allow campaign groups to raise awareness about the issue, “but it would need to be upfront about the policy and ensure it is applied consistently”.

Such a requirement has been regularly proposed in the US, where accusations of moderation bias against the Republican party have grown since Donald Trump was barred from most major social networks. If the online safety bill passes this year, the UK would be the first country to actively impose such a restriction on social networks.

The latest version of the bill also includes tighter protections for journalism. News websites were already explicitly exempt from much of the law’s remit, assuaging concerns that publications could be censored if they failed to adequately moderate the comments under their articles.

Now the draft bill includes additional protections for journalistic content posted to social networks, including from “citizen journalists”. Social networks will need to have “a fast-track appeals process” for journalists, and “will be held to account by Ofcom for the arbitrary removal of journalistic content”.

The bill also contains new requirements on platforms to act against online fraud, expanding the scope of the harms covered by the legislation. They will be required to take responsibility for scams perpetrated by their users, such as romance scams and fake investment opportunities.

Melanie Dawes, the chief executive of Ofcom, which will be in charge of enforcing the new regulations, welcomed the legislation.

“Today’s bill takes us a step closer to a world where the benefits of being online, for children and adults, are no longer undermined by harmful content,” she said. “We’ll support parliament’s scrutiny of the draft bill, and soon say more about how we think this new regime could work in practice – including the approach we’ll take to secure greater accountability from tech platforms.”
