
Nadine Dorries, Secretary of State, 4 November 2021, screenshot from Parliamentlive.tv

TL;DR The Online Safety Bill is a major piece of legislation intended to tackle the very difficult and troubling issues around social media. However, in its desire to remove the bad stuff, the Bill sets up a legal and technical framework that mandates and enforces the automated suppression of online content and social media posts. The lack of a precise aim has enabled it to be moulded in a way that raises a number of concerns. Government Ministers will have unprecedented powers to define the content to be removed. They will be able to evade Parliamentary scrutiny through the use of Secondary Legislation. Social media platforms will have wide discretion to interpret the rules and to determine whether content stays up or comes down. These factors, combined with the overall lack of precision in the drafting and the weak safeguards for users, mean that the Bill is unlikely to meet human rights standards for protecting freedom of expression online.

UPDATED to reflect the Bill as Introduced to the House of Commons on 17 March 2022.

The Online Safety Bill is a proposed new law tabled in the British Parliament on 17 March 2022 [5]. It has been in gestation since 2014, when the government proposed introducing a mandatory parental control system. At that time, there were regular discussions between government Ministers and lobbyists for the various interests involved, such as children's charities and vendors of Internet filtering systems. In 2015, there was an Online Safety Bill, targeting filtering measures for broadband providers, that was not adopted. The 2022 version of the Online Safety Bill targets social media platforms, and potentially any app with a share button. It has grown in complexity and scope. Arguably it has moved a long way from the original aim of protecting children from certain unsuitable content. This is a summary of my take on the Online Safety Bill [5].

What are the objectives of the Online Safety Bill?

My starting place was the Bill's objectives. However, there don't seem to be any objectives in the actual text of the Bill, or 'on the face of the Bill' to use the correct term. If we look at the introductory statement, it says the Bill is to 'make provision for and in connection with the regulation by Ofcom of certain internet services; for and in connection with communications offences; and for connected purposes'. There is nothing there that tells us anything about keeping the Internet safe or what the Bill is supposed to achieve, other than the broad expectation that Ofcom will be the Internet regulator. The reference to 'communications offences' has to be there because there is a new piece of criminal law within the Bill. But 'for connected purposes' could mean anything.

There are many aims that are frequently quoted and spoken about, such as protecting children and tackling online crime, online abuse, or suicide content. The Secretary of State for the Department for Digital, Culture, Media and Sport (DCMS), Nadine Dorries, has declared repeatedly that she is putting the big social media platforms like Facebook on notice. She told the Joint Committee scrutinising the draft Bill that the Bill is 'not to fix the Internet...[it is] solely aimed at platforms that we know do harm to children'. [1] She also told the committee that the principles of the Bill are to 'protect children, remove illegal content and make platforms respond to content which is legal but harmful.' However, none of these is stated as a specific aim on the face of the Bill. The effect is that, without a clear statement of purpose, the aim of the Online Safety Bill can be understood as whatever the hearer wants to hear. This gives the government a lot of leeway to do whatever it sees fit. And that seems to be what is happening. Indeed, there are several contradictions between the stated principles and what the Bill actually does.

The Bill has been some time in gestation - at least since 2014 as far as I am aware. It has been targeted by lobbying, sometimes pursued with the zeal of a moral crusade. This in part explains the long list of objectives that exists in the mind of the hearer.

There was a previous Bill, also entitled the Online Safety Bill, in 2015. I wrote about it at the time. [see Online Safety Bill 2015 - a back-door to Internet filtering? ] The thinking behind that Bill appears to have been copy-pasted - with a little editing - into the current Bill. However, the 2015 Bill set out with a different and clearer purpose. It was trying to get broadband providers to block content in order to protect children. To the naive politician this may look like the same problem as the one the current Bill is trying to address, but in technical terms it is quite different. Broadband providers operate within national boundaries, and they control the carriage of the data, but they do not know what the content is. By contrast, the global platforms operate across national borders: they host the content and determine how it can be disseminated. As the technology and the market have evolved, the national broadband providers have fallen back into second place, and the global online platforms have emerged as the new super-powers. In this new environment, online content matters have risen to the top of the policy agenda.

What services are addressed by the 2022 Online Safety Bill?

It's remarkably difficult to figure this out from the text of the Bill. The government's Impact Assessment predicts that some 24,000 British businesses will be in scope, but fails to define exactly who they are. The use of the word 'businesses' rather than 'Internet platforms' suggests the scope could be very wide. It will include micro-businesses (like start-ups), as well as the mega global platforms and everything in between. (See also £2 billion cost to British businesses for Online Safety Bill ).

Almost every type of service could be within scope. Not only social media, but search, messaging services, gaming, dating services, and potentially also video services. Any service with a share button could be in its sights. Social media services are the prime target, but search services are included too. Chris Philp, Minister for Tech and the Digital Economy, and an Oxford physics graduate, could not answer definitively how search services would be classified. [2] [3] Encrypted messaging services may also be in scope. (See Online Safety Bill: does government want to snoop on your WhatsApps? )

The Bill seeks to classify Internet services into three categories - as yet undefined, but to be determined by the DCMS Secretary of State after the Bill becomes law. The categorisation determines what requirements will be placed on them, so it matters. The Bill labels them as Categories 1, 2A and 2B. Category 1 is widely assumed to mean the global mega-platforms such as Facebook, Google, Twitter, TikTok, and Instagram; however, it has also been suggested that it could include dating services and gaming platforms as well.

The lack of definition of these service categories means that Internet sites and services based in the UK will not know for sure whether or not they have to comply with these rules until some time after the Bill has become law.

The Bill does not only apply to British businesses. It seeks to have extra-territorial application, meaning that it applies to services based overseas. They will be expected to comply with DCMS and Home Office requirements if they want to continue serving UK users. I have not seen a figure quoted for how many there are. This could have interesting ramifications, as trust in the UK internationally is falling.

What types of content are targeted?

Bearing in mind that this is content that will be taken down from the Internet or removed from the platforms, it is crucial that both the service providers and citizens can know and understand what it is. However, this information is also remarkably difficult to find in the Bill.

The Bill targets illegal content. This is defined as child sexual abuse material, and terrorist content. Both of these are criminal offences in the UK and are already illegal content under existing law. The Bill makes reference to the relevant laws in Schedules 2 and 3.

There is no disagreement in policy circles about the removal of child sexual abuse material. Until now, broadband providers and others have been doing it but it has not been codified into law.

However, the priority focus on terrorism content was something I had not expected, and it does look out of place in this Bill. It suggests a different political agenda at play here. The Bill differentiates between the two types of content and the way they are tackled. Terrorist content is tackled on public services, whereas the Bill suggests that providers could be asked to address child sexual abuse material on private services.

On 4 February, DCMS announced a number of additional categories of illegal content that would be included. [4] The Bill introduced to Parliament on 17 March 2022 [5] lists 14 criminal offences as priorities. They include assisting suicide, threats to kill, harassment, drugs and firearms, sexual images, sexual exploitation, assisting illegal immigration, proceeds of crime, fraud and financial services, as well as undefined 'inchoate offences'. There is also a new law-within-a-law on 'communications offences' that follows recommendations by the Law Commission. The breadth suggests an aimless direction of travel, driven by lobbying rather than policy.

The Bill does no more than state the offence, and this is problematic. Internet services will be under an obligation to tackle content that comprises an offence under these laws, but the precise nature of the content to be removed under each of these offences requires definition, which is yet to come.

The Bill seeks to address 'content harmful to children' but fails to provide a precise definition of what it means. The priorities will be defined in regulations to be made by the Secretary of State after the Bill is passed into law. (See Online Safety Bill: Ministers to get unprecedented powers over speech ). The only criterion is that it 'presents a material risk of significant harm to an appreciable number of children in the United Kingdom'. This is a change from the language in the draft Bill of May 2021, for which the criterion was 'a significant adverse physical or psychological impact on a child of ordinary sensibilities'. Internet services will be required to take down or remove this content. Such broad and opaque language would seem to leave the door wide open for providers to deal with the content as they see fit. Moreover, how is the user supposed to know whether they are uploading something that might be taken down, if the criteria are so indeterminate and there is no guidance as to what is intended by these words?

The Category 1 services - as yet to be defined, but assumed to include Facebook and the mega-platforms - will additionally be mandated to deal with 'content that is harmful to adults'. This too will be defined by the Secretary of State in Regulations after the Bill has been passed by Parliament. Again, providers are asked to tackle content which 'presents a material risk of significant harm to an appreciable number of adults in the United Kingdom'. With no clearer definition or understanding of the intention of the law, providers will have carte blanche to act as they see fit.

The revised Bill adds provisions to address websites that carry pornographic content. This extends the scope of the Bill, because a different approach will be required. I suspect that this is where Ofcom's blocking powers (see below) could be used.

What does the Bill ask Internet services to do?

The answer depends on which type of content they are dealing with, and which category of provider it is. The Bill has some specifics and a lot of generalities. All providers, in all three categories, are expected to seek out and minimise the presence of illegal content. Providers will have to install 'systems and processes' to deal with it. This is understood to mean automated content moderation systems. Given the scale of the online platforms, automation is the only way they can do it.

The Bill could additionally require providers to use Ofcom-accredited content moderation systems - including providers overseas - which could be controversial. A concern is raised by the 4 February announcement from DCMS [4], which says that Internet services will have to proactively prevent restricted content from being uploaded. This means they would check and remove content as users upload it - a mechanism also known as the upload filter. It is a draconian form of content moderation. This was indeed in the Bill as introduced to the House of Commons [5]. Section 9(3) requires all Internet services operating in the UK to 'prevent individuals from encountering priority illegal content by means of the service'.
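To make the mechanism concrete, here is a minimal sketch of an upload filter in Python - the classifier, the blocklist and the 0.9 threshold are my hypothetical illustrations, not anything specified in the Bill or by Ofcom. The defining feature is that the check sits in the publishing path, so flagged content is suppressed before any other user can see it.

```python
# Minimal sketch of an "upload filter": content is checked *before*
# publication, rather than moderated after the fact. The classifier,
# blocklist and threshold below are hypothetical illustrations.

from dataclasses import dataclass

@dataclass
class Upload:
    user_id: str
    text: str

def risk_score(upload: Upload) -> float:
    """Stand-in for an automated classifier that estimates how likely the
    content is to fall into a restricted category. Real systems use
    machine-learning models and are far more complex and error-prone."""
    blocklist = {"example-banned-phrase"}  # hypothetical
    return 1.0 if any(term in upload.text.lower() for term in blocklist) else 0.0

def handle_upload(upload: Upload) -> str:
    # The check happens in the publishing path: content scoring above the
    # threshold is never made visible to other users.
    if risk_score(upload) >= 0.9:
        return "blocked"
    return "published"

print(handle_upload(Upload("u1", "hello world")))            # published
print(handle_upload(Upload("u2", "example-banned-phrase")))  # blocked
```

Even in this toy form, the design choice is visible: any false positive from the classifier suppresses lawful speech before publication, with no human in the loop unless the service chooses to add one.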

All three categories of providers must 'mitigate and effectively manage the risks of harm to children', and even to children of different ages; to do so, they must use 'proportionate systems and processes' designed to prevent children of any age from encountering such content by means of the service. The Bill as introduced to the House of Commons explicitly states that this means using age verification and age assurance systems. These are two different types of systems, and they are the subject of heavy lobbying by a new sector of the IT industry that has a UK-government-funded association and no doubt stands to make money out of this.
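For illustration, here is a minimal sketch of how the two approaches differ - the interfaces below are hypothetical, not drawn from the Bill or from any vendor. Age verification checks a claimed date of birth against an authoritative record; age assurance estimates age probabilistically, for example from facial analysis.

```python
# Hypothetical illustration of the two types of system named in the Bill.
from datetime import date
from typing import Optional

def verify_age(document_dob: Optional[str], today: date) -> bool:
    """Age *verification*: a hard check against an authoritative record,
    e.g. the date of birth on a passport or credit record."""
    if document_dob is None:
        return False
    birth = date.fromisoformat(document_dob)
    age = today.year - birth.year - ((today.month, today.day) < (birth.month, birth.day))
    return age >= 18

def assure_age(estimated_age: float, confidence: float) -> bool:
    """Age *assurance*: a probabilistic estimate, e.g. from a facial-age
    model, accepted only above some confidence threshold (hypothetical)."""
    return estimated_age >= 18.0 and confidence >= 0.95

print(verify_age("2010-05-01", date(2022, 3, 17)))  # False: under 18
print(assure_age(22.4, 0.97))                       # True: estimate clears threshold
```

The privacy trade-offs differ sharply: verification demands identity documents, while assurance typically processes biometric data, which is part of why both approaches attract controversy.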

Only providers in Category 1 are required to address content that is harmful to adults. As stated above, we don't know yet exactly which providers will be Category 1. They will have to explain in their terms and conditions what they will do. The Bill does not state a requirement for systems and processes, but it is implicit that content moderation systems will be required because this is the only way they can do the job at scale. The Bill does state three specific actions that they might take to restrict either the content itself, its dissemination or the user's account.

Internet services are asked to work out their own compliance requirements by conducting up to 12 different risk assessments. The complexity involved is likely to bury them in red tape and to reduce the attraction of the UK as a place to establish an Internet service. Certainly, it's no place for a start-up.

What are the enforcement powers?

Ofcom will have enforcement powers to make a censor blush. It will be able to impose fines of up to £18 million or 10% of global turnover, and to issue compulsory compliance orders. It will be able to force Internet companies to install its accredited content moderation software. It will be able to get court orders to block websites and services, wherever they are in the world. These blocking orders could be exercised against services that refuse to comply with a dozen or so Ofcom Codes that will set out the monitoring and reporting requirements. (See Copyright-style website blocking orders slipped into Online Safety Bill ).

Does it meet human rights standards?

In all likelihood, no! The Bill is inherently confused and confusing. It is over-broad, with little precision. Internet companies and online platforms are being asked to monitor, minimise, take down or remove content that will be defined by the UK government. Some of it is illegal. Much of it could be lawful, but the government wants it down for reasons as yet undefined. The delegation of interpretation to social media services implies arbitrary targeting of the content to be removed by private companies, with scant provision for users to get redress if their lawful content is removed, apart from a private complaints procedure.

Human rights law allows for content to be removed or taken down, but the State must provide a clear account of what that content is, and say precisely why it is necessary for these actions to be taken. The courts should be the ultimate arbiters, and users should have the right to go to court if they are unhappy with decisions. Arbitrary removal by automated systems at the discretion of a private service provider does not meet human rights standards. (See Online Safety Bill - Freedom to interfere? )

The Bill includes a carve-out for 'Journalistic content' in the name of freedom of expression. However, it is a deeply problematic construct that creates a VIP lane to the big platforms for the UK mainstream media. (See Online Safety Bill: One rule for them and another for us .) Another odd construct in the Bill is 'Content of democratic importance'. It's unclear who this is intended to protect or why it is needed. (See What is content of democratic importance? )

A framework for censorship?

The lack of a precise aim spelled out on the face of the Bill has resulted in this law becoming a pot into which to throw every complaint about online platforms. The resulting framework lays the foundations for a form of censorship that poses a threat to our rights to know and to speak. It's not only the use of processes to bypass Parliament and the Ministerial powers to define content and processes; it's also the way the enforcement measures seek to use third parties, and the way the Bill over-reaches in expanding its scope. I don't think this was the intention of the Bill's proponents.

In mandating a system where Internet services have to install government-verified automated content moderation systems, the UK government treads in very dangerous territory that is more usually occupied by authoritarian regimes, not democratic governments.

The lack of definition is problematic and puts the Bill at odds with human rights standards. The potentially 24,000 in-scope Internet services will struggle to know what content they are supposed to suppress, and users will not have a clear idea of what they are forbidden to do. All of this will be under threat of swingeing fines and blocking penalties. The wide discretion for private providers to remove content according to ill-defined criteria leaves users exposed to arbitrary takedowns. They will be at the mercy of those private providers if they seek redress.

The unprecedented power for government Ministers to define content that should be suppressed, without any scrutiny from Parliament, is deeply worrying at a time when the government itself is under scrutiny for corrupt and illegal actions. Maybe that is the point of the Bill. From the government's perspective, it has a lot of wriggle room to change the rules on speech, and for a government in crisis like no other, maybe that suits its purpose.

---

In preparing this article I analysed the Draft Online Safety Bill of May 2021 (CP405) and created a chart of clauses. I updated my analysis, including the re-numbered clauses and new provisions, using the Bill as Introduced to the House of Commons on 17 March 2022. [5] I have consulted the Impact Assessment and Explanatory Memorandum, and the Bill Committee Report of 21 December 2021. I have watched the evidence of Secretary of State Nadine Dorries on 4 November 2021, of Minister Chris Philp on 1 February 2022, and of Ofcom on 1 November 2021, checked against the transcripts. I have read a selection of the evidence submitted to the committee. I have seen the DCMS media release of 4 February. I am aware of the ongoing developments and will be referencing them in a series of blog posts that I plan to publish in the coming weeks.

--

Photo: Secretary of State for Digital, Culture, Media and Sport, Nadine Dorries, appearing before the Draft Online Safety Bill Joint Committee on 4 November 2021. Screenshot taken by me from https://parliamentlive.tv/

---

Iptegrity is made available free of charge under a Creative Commons licence. You may cite my work, with attribution. If you reference the material in this article, kindly cite the author as Dr Monica Horten, and link back to Iptegrity.com. You will also find my book for purchase via Amazon.

About me: I've been analysing digital policy for over 14 years. Way back then, I identified the way that issues around rights can influence Internet policy, and that has been a thread throughout all of my research. I hold a PhD in EU Communications Policy from the University of Westminster (2010), and a post-graduate diploma in marketing. For many years before I began my academic research, I was a telecoms journalist and an early adopter of the Internet, writing for the Financial Times and Daily Telegraph, among others.

Please get in touch if you'd like to know more about my current research.

If you liked this article, you may also like my book The Closing of the Net which discusses the backstory to the Online Safety Bill. It introduces the notion of structural power in the context of Internet communications. Available in Kindle and Paperback from only £15.99!

---

1. Draft Online Safety Bill Joint Committee Oral Evidence, 4 November 2021, transcript, Nadine Dorries, Q284

2. Draft Online Safety Bill Joint Committee Oral Evidence, 4 November 2021, transcript, Q289

3. Chris Philp - Wikipedia page

4. DCMS: Online safety law to be strengthened to stamp out illegal content, 4 February 2022

5. Online Safety Bill, as introduced to the House of Commons (210285)

I am especially grateful to the following, whose work guided me into the Bill:

Heather Burns' excellent analysis for Open Rights Group - Access Denied: Service blocking in the Online Safety Bill

Graham Smith Cyberleagle Online Harms Compendium

Edina Harbinja UK's Online Safety Bill: Not that safe after all?

Alec Muffett Why we need #EndToEndEncryption and why it's essential for our safety, our children's safety, and for everyone's future #noplacetohide

Find me on LinkedIn
