Access denied: Service blocking in the Online Safety Bill

As the Online Safety Bill approaches the pre-legislative scrutiny process, attention is being drawn to the powers the government will gain to redefine, constrain, and censor the boundaries of our free and legal speech. Yet the Bill also contains a range of regulatory options which go beyond the privatised micromanagement of our p’s and q’s.

As it has been drafted, the Bill gives Ofcom, as the online speech regulator, a range of powers to remove not just content but entire services from the British internet. These powers include the ability to apply to the courts to restrict public access to an online service, to restrict a company’s ability to do business, or even to block the company from the UK altogether.

The enforcement powers, as well as the financial penalties that accompany them, will be applicable to senior managers, to companies, and to the infrastructure providers which support those companies. Beyond fines, criminal convictions and imprisonment are also options on the table. And the grounds on which Ofcom will be able to use these options range from questions over illegal content to a failure to tick a compliance box.

Let’s explore how the Online Safety Bill aims to eliminate online harms by making it too dangerous, too costly, or too bureaucratically impossible for anyone to run an online service. To understand how the services you use could be blocked under the Bill, you also need to understand the criteria against which they could be deemed non-compliant, so we will do our best to explain those here as well.

Which online services are within the scope of the Bill?

The Online Safety Bill has been promoted as a means of reining in the social media tech giants, but its reach goes far wider. Its provisions, and its compliance obligations, will apply to any site, application, or service provider doing business in the UK if that service has any “user-to-user” capabilities (that is, if it allows users to communicate with each other or read each other’s content) or if it allows users to search for content.

The social media companies the Bill targets, known within the draft as “Category 1” services, will have more compliance requirements than smaller providers. The rest of the Bill’s compliance obligations apply to every other online service provider, including the ones you rely on every day. These requirements also apply to non-UK businesses which serve customers here or whose services can reasonably be said to be accessible from here. There is no minimum business size or turnover threshold.

How will Ofcom enforce the speech regulation regime?

The Bill intends Ofcom’s speech regulation system to be similar to the data protection regulation system, where fines are only levied as a last resort for egregious violations or for a refusal to cooperate with the regulator. Tiered regulatory options such as “notices of enforcement action” will sound familiar, and are intended to work in a similar manner.

Where a financial penalty is levied, the maximum is whichever is greater: £18 million or 10% of worldwide revenue, whether that of the company, of a group of companies, or of a senior manager who is considered to have decision-making powers within the company.
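
To put the scale of that formula in concrete terms, here is a minimal illustrative sketch in Python. The revenue figures and the function name are invented for illustration, and the precise definition of “qualifying worldwide revenue” will be left to the final Act and secondary legislation.

```python
def maximum_penalty_gbp(worldwide_revenue_gbp: float) -> float:
    """Illustrative only: the cap is whichever is greater of
    GBP 18 million or 10% of worldwide revenue."""
    return max(18_000_000, 0.10 * worldwide_revenue_gbp)

# A platform with GBP 30bn in worldwide revenue faces a cap of GBP 3bn:
print(f"{maximum_penalty_gbp(30_000_000_000):,.0f}")  # 3,000,000,000

# A small provider with GBP 2m in revenue still faces the GBP 18m floor:
print(f"{maximum_penalty_gbp(2_000_000):,.0f}")       # 18,000,000
```

The second case is the one worth dwelling on: because there is no turnover threshold, the £18 million floor hangs over the smallest in-scope provider just as it does over the largest platform.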

That being said, the data protection regulation system exists to uphold the fundamental human right to privacy, whereas Ofcom’s speech regulation system will exist to constrain the fundamental human right to freedom of expression. In that light, its powers to request a court-ordered blocking of a service or its support infrastructure, for what may be matters of free and legal speech, are more devastating and enduring than mere financial penalties.

Beyond fines, the draft Bill also allows for the conviction and imprisonment of senior managers for up to two years for a failure to cooperate with the speech regulator. This provision, as has been well noted, is clearly meant to target certain high-profile American tech celebrities. But as we have explained before, hanging the threat of arrest over British tech sector workers is a recipe for collateral censorship and a chilling effect on our free speech, as companies will be forced to limit what we can say online out of constant fear that their staff could be arrested.

What powers will Ofcom have to block access to online services?

Technology notices

The draft Bill will allow Ofcom to impose what is called a technology notice on a service provider for an alleged failure to carry out the required safety duties for illegal content, which in this context means terrorism content or child sexual abuse material. These notices can require the service provider to implement an accredited technology, of Ofcom’s choosing, to scan for and detect such content.

Given the government’s push to restrict or criminalise the use of end-to-end encryption and to open up all our private communications to proactive scanning, both of which are outright goals for the Bill, it is likely that a technology notice is the mechanism Ofcom would use to do so, with child safety cited as the justification.
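
The draft Bill does not say what an “accredited technology” would look like in practice, but proactive scanning tools of this kind typically work by comparing content against a database of known material. The sketch below is purely hypothetical: the hash list, the function name, and the use of exact rather than perceptual hashing are all assumptions made for illustration. What it shows is the structural problem: the provider has to be able to read the content to run the check at all, which end-to-end encryption is designed to prevent unless the scanning is pushed onto users’ own devices.

```python
import hashlib

# Hypothetical: a set of hashes of known illegal material, as might be
# supplied to providers by an accreditation body or law enforcement.
KNOWN_BAD_HASHES: set[str] = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def flag_for_review(content: bytes) -> bool:
    """Return True if the content matches the known-bad list.
    The provider must see the plaintext in order to compute this."""
    return hashlib.sha256(content).hexdigest() in KNOWN_BAD_HASHES
```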

Service restriction orders

Where a service is deemed non-compliant with either a technology notice or any of the compliance requirements we shall discuss shortly, Ofcom can apply through the courts for what is known as a service restriction order. This order does not target the non-compliant service: it targets the infrastructure which supports it, and prevents those infrastructure providers from continuing to support the service. These infrastructure providers could include payment gateways, ad servers and networks, search engines which list the service, and other ancillary services which make its content available, such as CDN and hosting providers like Amazon Web Services and Cloudflare.

Access restriction orders

Beyond service restriction orders, there is the “nuclear option” of an access restriction order. Ofcom can petition the courts to impose an access restriction order on a service which is deemed either to be failing to protect UK users from significant harm or to have already allowed them to experience it. An access restriction order means that internet service providers must block people in the UK from being able to access the service in question. Such an order can also require app stores to stop making an app downloadable by UK users.

Neither service nor access restriction orders are intended to be fast, easy, or accessible options. The draft Bill states that these levers should be pulled in cases where the compliance failure creates a genuine and severe risk of substantial harm to individuals in the UK. Nevertheless, they are the options on the table, and there is considerable political pressure being placed on Ofcom to use them as soon as is legally possible, and in the case of encryption, to also use them retroactively.

Temporary service and access restriction orders

Ofcom will have the ability to impose interim service and/or access restriction orders. These temporary orders could be viewed as a means of forcing maliciously uncooperative companies to mend their ways. But that is not the political context here. Given the powers which the Bill will give the Secretary of State to censor free speech for political purposes, it is not a stretch to imagine the temporary orders being used to block public access to popular services, such as social media sites, if public opinion turns too sharply against the government.

Transparency of enforcement measures

The transparency of site blocking orders in the UK, under existing laws, measures, and mechanisms, is already quite weak. The draft Bill requires Ofcom to publicise the enforcement measures they will take, including service blocking actions. However, the Bill also notes that “OFCOM may not publish anything that, in OFCOM’s opinion— (a) is commercially sensitive, or (b) is otherwise not appropriate for publication.”

This get-out clause could be used, for example, to impose a technology notice that requires a service provider to break encryption while forbidding the provider from disclosing that the notice exists, similar to the way Technical Capability Notices work under the Investigatory Powers Act.

Is there another way for Ofcom to block services?

In addition to Ofcom’s powers to restrict access to services through the courts, there are provisions buried within the draft Bill which proactively block non-UK businesses from being able to provide their services here. These provisions require the “children’s risk assessment”, which will be a compliance obligation for any online service which can be accessed in the UK, to be carried out before UK users are able to access a service.

This requirement effectively blocks new businesses from being able to trade in the UK until they can prove that their service is safe for all British children between the ages of 0 and 18, regardless of whether or not their service is targeted at children or carries any risk to them.

Indeed, all businesses within the scope of the law which are currently doing business here or serving UK users, regardless of where they are located, will be required to carry out a children’s risk assessment within three months of being directed to do so by Ofcom, or risk the penalties and restrictions we’ve discussed above.

The children’s risk assessment process includes the Bill’s requirement to implement an age verification or age assurance mechanism to age-gate all site visitors, in order to determine whether or not children are able to access the service. Put another way, collecting personally identifiable data on all of us will become the entry cost of doing business in the UK.
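
Neither the Bill nor Ofcom has yet said what an acceptable age assurance mechanism would look like, so the sketch below is purely hypothetical: the function name, the age threshold, and the identity check are all invented for illustration. It makes the structural point, though: the check cannot be performed without first collecting identity data from every single visitor, adult and child alike.

```python
from datetime import date

def may_access_service(date_of_birth: date, identity_verified: bool) -> bool:
    """Hypothetical age gate: every visitor must first hand over
    personally identifiable information (a date of birth backed by a
    verified identity document) before any content is served."""
    if not identity_verified:
        return False  # no proof of identity, no access
    today = date.today()
    age = today.year - date_of_birth.year - (
        (today.month, today.day) < (date_of_birth.month, date_of_birth.day)
    )
    return age >= 18  # or route under-18s to a restricted version of the service
```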

Once overseas companies have passed the children’s risk assessment test and are cleared, by Ofcom, to begin trading in the UK, there are only a few dozen more compliance requirements to go.

What are the compliance requirements which could result in a service being blocked?

Anyone who has run a small online business knows that subjective compliance paperwork, such as the health and safety assessments designed for physical premises which are required in procurement processes, tends to make you feel as if you are being deliberately set up to fail. Now multiply that feeling by a few dozen.

As the Bill has been drafted, Ofcom will be authorised to take the enforcement actions we’ve described above against any company which fails to meet any one of the Bill’s three dozen regulatory compliance requirements. We’ve set out those requirements in the annex tables at the bottom of this page. These requirements include the subjective content and behavioural moderation processes, which will be set out in the Codes of Practice that are central to the Bill, but also the paperwork and “tick box” side of compliance.

As you can see, the compliance requirements stem from the Bill’s intention to transfer the health and safety model from the physical to the digital world. The idea is that all possible risks, harms, and hurts can be prevented through diligent risk assessments, thereby making the UK “the safest place in the world to be online”. The accompanying idea, as we have seen, is that the threat of punishments and fines can simply frighten companies into falling into line.

Yet given the sheer volume and complexity of these compliance requirements, risk assessments, and subjective speculations – as well as the consequences we’ve discussed for getting any one of them wrong – it’s clear that the intention here is not to “rein in the tech giants”. These requirements, which have no equivalent in any other western nation, are a means of setting businesses up to fail through impossible compliance obligations imposed under the threat of the deepest personal consequences.

And given what we know about government’s true aims for using this Bill as a political means to constrain our free speech, perhaps that has been the intention all along.

As the pre-legislative scrutiny committee begins its work, we will have more to say on the risks to privacy, freedom of expression, and human rights contained in the small print of the draft Online Safety Bill. We hope that you will continue to support us as we defend your digital rights from one of the biggest regulatory threats we’ve ever seen.

Table 1: Content moderation and service design duties enforceable by Ofcom

Duty | User-to-user | Search | All services | Likely to be accessed by children* | Category 1 | Citation**
Illegal content risk assessment | ✔️ ✔️ ✔️ ✔️ ✔️ | 2/2/7; 2/3/19
Children’s risk assessment | ✔️ ✔️ ✔️ ✔️ ✔️ | 2/2/7(9); 2/3/19(4)
Adults risk assessment | ✔️ ✔️ | 2/2/11
Illegal content duties | ✔️ ✔️ ✔️ ✔️ ✔️ | 2/2/9; 2/3/21
Freedom of expression and privacy duties | ✔️ ✔️ ✔️ ✔️ | 2/2/12
Democratic content duties | ✔️ ✔️ | 2/2/13
Journalistic content duties | ✔️ ✔️ | 2/2/14
Reporting and redress duties | ✔️ ✔️ ✔️ ✔️ ✔️ | 2/2/15
Record keeping and review duties | ✔️ ✔️ ✔️ ✔️ ✔️ | 2/2/16
Duties to carry out risk assessments | ✔️ ✔️ ✔️ ✔️ ✔️ | 2/3/17
Safety duties for services likely to be accessed by children | ✔️ ✔️ ✔️ ✔️ | 2/2/10; 2/3/22
Assessments about access by children | ✔️ ✔️ ✔️ ✔️ ✔️ | 2/4
Transparency reports | ✔️ ✔️ ✔️ | 3/1/49
* The draft Bill holds that services which are not using age verification or age assurance to identify the ages of all their visitors will be assumed to be accessed by children. Therefore, “likely to be accessed by children” realistically puts any site or service within a duty of care obligation, regardless of its inclusion or exclusion from that duty of care’s legal definition.

** Citation refers to the Bill text. For example, 4/5/74(8) means Part 4, Chapter 5, section 74, paragraph 8.

Table 2: Risk assessment requirements enforceable by Ofcom under Part 2, Chapter 2, Section 7

Requirement to identify, assess, and understand: | Illegal content risk assessment | Children’s risk assessment | Adults risk assessment
The user base | ✔️ ✔️ ✔️
Risk to users of encountering illegal content (terror/CSEA) | ✔️
Level of harm to users of illegal content | ✔️
The number of children accessing the service by age group | ✔️
Level of risk to children of encountering each kind of priority primary content | ✔️
Each kind of priority primary content which is harmful to children | ✔️
Each kind of primary content that is harmful to children or adults, with each one separately assessed | ✔️ ✔️
Non-designated content that is harmful to children | ✔️
Level of risk of harm presented by different descriptions of content that is harmful; for children, by age group | ✔️ ✔️
Level of risk of functionalities allowing users to search for other members including children | ✔️
Level of risk of functionalities allowing users to contact other users including children | ✔️
Level of risk to adults of encountering other content that is harmful | ✔️
Level of risk of functionalities of the service facilitating the presence or dissemination of illegal content, identifying and assessing those functionalities that present higher levels of risk | ✔️ ✔️ ✔️
The different ways in which the service is used, and the impact that has on the level of risk of harm that might be suffered by individuals | ✔️ ✔️ ✔️
Nature, and severity, of the harm that might be suffered by individuals by the above, including children by age group | ✔️ ✔️ ✔️
How the design and operation of the service (including the business model, governance and other systems and processes) may reduce or increase the risks identified | ✔️ ✔️ ✔️

Table 3: Administrative obligations to Ofcom

Duty | All services | Citation
Register with Ofcom for fee payments | ✔️ | 3/1/51
Respond to information notices | ✔️ | 4/5/70
Designation of a person to prepare a report | ✔️ | 4/5/74(4)
Assist the person before preparing a report | ✔️ | 4/5/74(8)
Cooperate with an Ofcom investigation | ✔️ | 4/5/75(1)
Attend an interview with Ofcom | ✔️ | 4/5/76(2)
Duty to make a public statement | ✔️ | 6/112(3)
Information in connection with services presenting a threat | ✔️ | 6/112(5)

Protect free speech online

When the time comes, ORG will need your help to campaign for a better, rights-based approach to making the Internet safer.

Sign the pledge