Online Safety Bill: Big Tech Being Held to Account?
Wednesday 1st June 2022
The Online Safety Bill is a landmark piece of proposed legislation that aims to keep people safe on the internet.
What is the Online Safety Bill?
The right to freedom of expression is enshrined in law, although hate speech is illegal. Online, however, there is content which is harmful but not illegal. What content is ‘harmful’, though? While you and I may agree on much, our respective definitions of ‘harmful’ online content probably differ.
This is one of the key areas of controversy with the Online Safety Bill – outside of illegal content, how do we protect people from harm whilst avoiding mass censorship?
For years, many (including MPs and Ofcom) have campaigned for a greater onus to be placed on Big Tech firms to police their platforms. After five years in development, the bill was introduced to Parliament in March 2022 for its first reading in the Commons.
The Government’s aim is for this new law to make the internet safer by holding tech giants to account for harmful content. It is hoped that by increasing trust in online services and sites, the UK will become a world leader in tech innovation.
What will new online safety laws cover?
They will apply to a range of businesses, including social media platforms, search engines, and apps and websites which allow users to post content. Social media giants such as Meta’s Instagram and Facebook, TikTok and YouTube fall within the scope of the Online Safety Bill, as do search engines such as Google.
Each of those companies will have to protect users, particularly children, from exposure to harmful content, including material promoting self-harm and eating disorders, harassment, scam adverts, online abuse, cyber flashing (which is criminalised under the bill), and other illegal content and criminal activity. Their terms of service will need to be updated to reflect how such harm is policed and punished.
Putting control back into the user’s hands
Companies will also have to give users the right to choose who and what they interact with. This includes options to block other users who have not verified their identity, and to opt out of seeing harmful content. What constitutes ‘harmful content’ will be defined by Parliament in secondary legislation. Users who feel their content has been removed unfairly will be able to appeal.
The bill also aims to protect free speech. As a result, content such as democratic political debate on social media platforms would be exempt from regulation.
How will online safety laws be enforced?
Ofcom has been appointed as the regulator for harmful online activity, with powers to request information from companies in order to investigate compliance, to force tech companies to improve their safeguards, and to block sites which refuse to comply, as well as the ability to levy fines of up to 10% of a company’s annual global turnover.
The bill also proposes that senior managers who fail to ensure their company complies with requests from Ofcom can be held criminally liable, with penalties of up to two years’ imprisonment and a fine.
How has the Bill been received?
So far, the response has been mixed.
A number of children’s charities don’t think the bill goes far enough to protect users online, as it fails to address harmful content on smaller platforms. At the opposite end of the spectrum, there are concerns that the bill restricts free speech too much, and that its exemptions for journalism and democratic political debate leave a loophole to be exploited by extremist publications. There is also the question of whether enforcement will be effective where large tech companies and their senior managers are based overseas.
Whether Parliament will address these issues remains to be seen but, as currently drafted, the bill will have a huge impact on how tech companies operate in the UK.
If you have any questions about the Online Safety Bill, please contact our Data Protection and Technology experts.