
 

TL;DR: Social media companies will be required by the government to police users' posts, removing content or suspending accounts. Instead of a blue-uniformed policeman, it will be a cold, coded algorithm putting its virtual hand on the user's shoulder. The Bill's imprecise wording gives the platforms huge discretion, and a conflicted role: to interfere with freedom of expression and simultaneously to protect it. Revision is needed to protect the rights of those who are speaking lawfully, and doing no harm, but whose speech is restricted in error.

The Online Safety Bill makes social media companies responsible for policing 'user to user' communications. They will be required by the UK government to take restrictive actions where content is deemed illegal or socially 'harmful': removing users' posts and suspending accounts. They will have broad discretion to decide which posts are suppressed and which users' accounts are restricted. In this regard, the social media platforms - private companies - will be enforcing the law on behalf of the government. Instead of a blue-uniformed policeman, it will be a cold, coded algorithm putting its virtual hand on the user's shoulder.

It is reasonable, therefore, to ask how free speech rights apply in this context.

The driver behind the Bill was concern for the victims of harmful speech. The content may show a crime that causes physical and mental harm, for example, child sexual abuse. In other cases, content may cause distress, for example, intimidation and hate speech. Some content may encourage a victim to harm themselves, or others. The protection of these victims' rights is the concern that the UK Parliament's Joint Committee on the Draft Online Safety Bill has been working on.

However, as soon as one starts to delve into the nature of individual cases, it becomes evident that making determinations about such content is not a simple matter. It's not always manifestly obvious that the content is illegal, or even that the content is harmful. As a consequence, it may not be straightforward to determine whether content complies with the rules for removal.

My consideration here is about the balance of rights, and how they must be weighed against the rights of others - in particular, the rights of those acting lawfully who may be restricted without cause. This question has been raised by the Joint Committee Chair, Damian Collins[2]. It has also been raised in the evidence received by the Committee.

Under current UK law, the European Convention on Human Rights (ECHR) applies, as incorporated by the Human Rights Act 1998, so I am going to refer to the European Convention. Article 10, the right to freedom of expression, is the law that guides us here. It says that everyone - every individual - has a right to freedom of expression without interference from a public authority. It is a two-way right, to receive and to impart information. It applies on the Internet and on social media platforms just as it applies offline, and it applies to the means of dissemination as well as to the content itself. [3]

'Without interference' becomes significant when content is to be suppressed or restrictive actions are taken against users. Interference online is not just humans making judgments and manually removing users' posts. It can refer to automated techniques, including monitoring, filtering, and artificial intelligence or algorithmic methods. Any automated process that looks at content and takes decisions about it would be interference. When this is done on a mass scale - for example, across a platform with some 40 million UK users - it is definitely interference. When it is done at the request of the UK government, it is mass interference by the State.
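To make concrete what an 'automated process that takes decisions about content' means, consider a deliberately naive keyword filter. This is a hypothetical sketch for illustration only - not any platform's actual system, and real moderation tools are far more sophisticated - but even this trivial example shows the core problem: the algorithm suppresses a lawful post documenting a possible crime exactly as readily as it suppresses abuse, with no human judgment involved.

```python
# Hypothetical, deliberately naive automated moderation filter.
# Illustrative only: real platforms use far more complex systems,
# but the decision structure - match, then restrict - is the same.

BLOCKLIST = {"attack", "kill"}  # invented example terms


def moderate(post: str) -> str:
    """Decide, with no human review, whether a post is suppressed."""
    words = {w.strip(".,!?").lower() for w in post.split()}
    return "remove" if words & BLOCKLIST else "allow"


# Lawful content documenting a possible war crime is restricted
# just like abusive content, because the filter cannot read context:
print(moderate("Footage shows an attack on a hospital"))  # remove
print(moderate("Lovely weather today"))                   # allow
```

The false positive in the first call is the point: without context analysis, the automated decision interferes with lawful expression.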

The position is underlined by Gavin Millar QC in his evidence to the Joint Committee:[4] 'What is really happening here [...] is that the state is giving the SPs powers, which must be exercised in certain circumstances, to interfere in its name with the rights of freedom of expression/privacy of users'.

Article 10 applies whether or not the interference relates to content that is illegal. The technical term is that Article 10 is 'engaged', which means that the person making the determination has to consider Article 10 rights. Hence, it will be engaged by all of the Bill's requirements to monitor and remove content that is illegal, harmful to adults, or harmful to children.

This does not mean that illegal content escapes restriction, but it does mean that the determination has to be made in a balanced way. There are sound reasons for this, because it is not always manifest that content is illegal. Hate speech, for example, almost always requires the context to be analysed in order to determine illegality. [5]

Sometimes harmful intent is disguised by apparently innocuous language. Some speech may be deeply unpleasant for the victim, but not illegal. Graphic violence may be disturbing, and it may be gratuitous. In other cases, it may be political content of democratic importance, or potentially evidence of a crime that should be preserved. We are seeing many examples of this right now in social media posts from the war in Ukraine.

Compliance with Article 10 requires that the law should precisely identify the speech it seeks to restrict, justify the restriction by reference to a legitimate societal aim, and choose the least restrictive remedy. This has also been affirmed in the UK courts in cases concerning copyright enforcement on the Internet.

Taking all of this into account, it's hard to see how the Online Safety Bill could comply with human rights standards. There is no precise definition of the speech that the Bill seeks to restrict; it relies on generic descriptions and vague notions. It addresses some 24,000 websites, when the government's own Impact Assessment says that over 23,000 of them are at low or medium risk of carrying harmful content, and fewer than 800 are likely to be high risk. This in itself suggests that the least restrictive remedy has not been sought. (See £2 billion cost to British businesses for Online Safety Bill.)

The Bill has little in the way of safeguards for users. It requires social media companies to operate a complaints procedure for users whose content has been restricted [Clauses 18(4)(d) and (4)(e)], but places little emphasis on requiring them to conduct a proper appeals process or offer redress where lawful content has been wrongly suppressed (with the notable exception of media). The same complaints procedure would also be used by those who have reported harmful content that has not been removed.

The social media platforms - private companies - will have considerable discretion to take down content, alongside a demand for general monitoring of all their users. With such heavy-booted automation in view, it is very likely that lawful content will be restricted in error. It already happens, and as monitoring and restrictive actions increase at scale, it is likely to happen more frequently.

It is equally important for users to know what they can and cannot post. In other words, it should be clear where and how they may be restricted.

The State has a duty under Article 10 to protect freedom of expression. Where, as is the case here, the State is asking private actors to interfere on its behalf, it would seem reasonable that those private companies must implement safeguards. There are also the UN Guiding Principles on Business and Human Rights, which say that businesses should avoid infringing on human rights.

The Bill does put a duty on all services within its scope to 'have regard to' protecting users' freedom of expression [Clause 19(2)]: 'When deciding on, and implementing, safety measures and policies, a duty to have regard to the importance of protecting users' right to freedom of expression within the law.'

This duty will be enforced by Ofcom on all Internet services under Section 111. Whilst this should be welcomed, it creates a difficulty: the automated interference is also a duty enforced under Section 111, and the two measures are in conflict. Ofcom will be placed in the unenviable position of enforcing the automated interference with users' freedom of expression (even if justified) at the same time as it is asked to enforce the protection of that freedom. It will struggle to do both.

Unless changes are made to the Bill, the rights of those who are speaking lawfully, and doing no harm, will be put at risk.

Photo : Damian Collins, Chair, Joint Committee for the Online Safety Bill, 1 November 2021, screenshot by me via Parliamentlive.tv 

See also What's the point of the Online Safety Bill?

---

Iptegrity is made available free of charge under a Creative Commons licence. You may cite my work, with attribution. If you reference the material in this article, kindly cite the author as Dr Monica Horten, and link back to Iptegrity.com. You will also find my book for purchase via Amazon.

About me: I've been analysing digital policy for over 14 years. Way back then, I identified the way that issues around rights can influence Internet policy, and that has been a thread throughout all of my research. I hold a PhD in EU Communications Policy from the University of Westminster (2010), and a post-graduate diploma in marketing. For many years before I began my academic research, I was a telecoms journalist and an early adopter of the Internet, writing for the Financial Times and Daily Telegraph, among others.

Please get in touch if you'd like to know more about my current research.

If you liked this article, you may also like my book The Closing of the Net which discusses the backstory to the Online Safety Bill. It introduces the notion of structural power in the context of Internet communications. Available in Kindle and Paperback from only £15.99!

 ---

[1] Online Safety Bill as introduced to the House of Commons, March 2022
[2] Draft Online Safety Bill Joint Committee Oral Evidence transcript, 4 November 2021, Q283
[3] Ahmet Yıldırım v. Turkey, European Court of Human Rights, Application no. 3111/10

[4] Written evidence from Gavin Millar QC 

[5] See Council of Europe Freedom of expression and information Explanatory Memorandum, paragraph 3 for a definition of hate speech. 

For a legal perspective  please see Graham Smith Cyberleagle : Online Harms Compendium 

Iptegrity in brief

 

Iptegrity.com is the website of Dr Monica Horten. I've been analysing digital policy since 2008. Way back then, I identified how issues around rights can influence Internet policy, and that has been a thread throughout all of my research. I hold a PhD in EU Communications Policy from the University of Westminster (2010), and a post-graduate diploma in marketing. I am on the Advisory Council of the Open Rights Group. I've served as an independent expert on the Council of Europe Committee on Internet Freedoms, and was involved in a capacity-building project in Moldova, Georgia, and Ukraine. For more, see About Iptegrity

Iptegrity.com is made available free of charge for non-commercial use. Please link back and attribute Monica Horten. Thank you for respecting this.

Contact  me to use  iptegrity content for commercial purposes

 
