The Online Safety Bill will do more harm than good

Ellie Wheatley

June 28, 2022

By now, you have probably heard about the proposed UK Online Safety Bill. It is intended to ‘protect’ users on over 25,000 digital platforms (including ‘Big Tech’ brands like Facebook and search engines) and make Britain ‘the safest place in the world to be online’.

At first glance, this may sound like a pretty good idea. Who wouldn’t want to get rid of those nasty trolls on Twitter? Indeed, I initially thought that protecting vulnerable users from online abuse was a decent and honourable objective.

However, having looked further into the Bill, I began to realise that these objectives aren’t as straightforward as they sound. In fact, if the Bill becomes law, it may well lead us down a very slippery slope towards a society similar to the one Orwell warned us about in 1984.

Here are three things I didn’t know about the Bill, and why they mean it will do more harm than good.

Its fundamental opposition to free expression

The Bill lays out the definition of illegal content and what is to be done about it. Currently, priority illegal content covers terrorism and child sexual abuse material, and platforms are required to take that content down swiftly.

This seems fair enough. However, the Bill also covers content that ‘amounts to’ offences on a list the Secretary of State can update (a terrifying thought). That list includes hate crime offences under the Public Order Act 1986, as well as harassment, incitement to violence or suicide, and many more.

You may still be thinking this sounds sensible. Why not get rid of all dangerous and potentially harmful content? The real worry, however, lies in how these measures will be implemented. Platforms are required to set out what they consider to be illegal content and the measures they will take against it, and to apply those measures ‘consistently’. Otherwise, they face hefty fines of up to 10 per cent of their worldwide revenue.

Do we really want Zuckerberg deciding what he thinks is ‘reasonably illegal’? Given the vast amount of content on platforms such as Facebook and Twitter, they will inevitably have to use artificial intelligence (AI) to sift out illegal content.

This means there is a very serious risk that too much content will be taken down. In-jokes, or even passing references to these ‘offences’, could be removed regardless of their context.

And should we really be letting these platforms, driven by the threat of Ofcom action into policing ‘harmful’ content strictly, decide what we should and shouldn’t say? Should that not be left to judges and juries? An algorithm is unlikely to pick up on nuances such as satire, or content aiming to raise positive awareness of an issue such as suicide. One could come up with hundreds of examples like this, and if the Bill is passed, we will see them first-hand.

It includes private messaging

Platforms, having decided what counts as illegal or even ‘legal but harmful’ content and what measures they will take, must remove that content in order to ‘protect’ users – even in your private messages.

While you’re still safe from the watchful eye of Big Brother on traditional text messages, apps like WhatsApp, Messenger and Snapchat will have to monitor your messages in order to create a ‘safe space’ across every part of their platforms. This could put an end to jokes or banter in your group chats that could be deemed offensive, or even to a bit of teasing from your brother (although I might appreciate that!).

Age verification

Any platform that children could access, and that hosts content that could be ‘harmful’ to them, must put measures in place to prevent them from seeing it. Platforms will have to age-verify all users, using either a driving licence, passport or credit card.

Sounding a bit intrusive yet? It gets worse. Ofcom is aware that not everyone will have such ID, so it has suggested that platforms use profiling technology such as behavioural biometrics to assess your movements in order to judge whether you’re an adult. This is a clear invasion of privacy, built on sensitive data about your age, address and even your behaviour.

In a bid to protect child users, under-18s will effectively experience a parental lock on everything online. Google Search, YouTube and others may filter out content deemed ‘harmful’ to children. As you can imagine, this could encompass a huge range of otherwise educational content. Perhaps the unintended consequence of this nanny-state Act is that the next generation will be woefully unprepared for the real world, having been mollycoddled, sorry, ‘protected’, from any unsavoury content.

Overall, although the intentions may appear worthy, it’s clear that this Bill could lead to significant restrictions on our freedom of expression, regardless of the government’s insistence that it wishes to defend it.

Its measures are simply too sweeping and too subjective for the Bill to lead anywhere other than a place where anything deemed harmful (itself a subjective judgement) is removed and our freedom of expression is curtailed. The only thing that should be taken down is this Bill.


Ellie Wheatley is assistant editor of 1828 and an undergrad at Durham University studying Philosophy & Politics.
