The impact of the Online Safety Act on content producers

December 6, 2023

As its title suggests, the Online Safety Act 2023 is mainly intended to help protect the safety of users of online services, imposing wide-ranging duties on user-to-user platforms and search engines.[1] Notably, it also defines categories of “illegal content” and “content that is harmful to children”.

So producers, such as TV and film companies, commissioning broadcasters and marketing agencies, should consider whether their productions and promotional clips contain any such regulated content, and try to mitigate the risk that content might be taken down by online platforms. In effect, then, the regime introduced by the Act will add a new dimension to the existing task of “legalling” content before transmission, especially as risk-averse platforms might take a hard line on filtering out or removing potentially problematic content.

Overview of the Act

After two years of deliberation, the Online Safety Bill finally became the Online Safety Act, receiving royal assent on 26 October 2023. The Act draws on aspects of the European Union’s Digital Services Act, which places a strong emphasis on protecting children and prohibiting harmful content.[2]

The Act focuses on two types of internet services (as defined in s. 3):

  1. user-to-user services – allowing platform users to encounter (i.e. to read, view, hear or otherwise experience) content generated by other users (whether generated, uploaded or shared on the platform – and a bot or other automated tool can be a user for that purpose – and whether or not content is actually shared, as long as the functionality allows such sharing); and
  2. search services – consisting of or including a search engine and having the capability to search across multiple websites or databases,

as well as businesses that provide such services, if they have control over who has access to them (s. 226).

To fall within the scope of the Act, a service of either type must have links with the UK, which will be the case if it: (a) has a significant number of UK users; (b) has UK users as one of the target markets for the service (or the only target market); or (c) is capable of being used in the UK by individuals and contains user-generated or search content that (assessed reasonably) poses a material risk of significant harm to UK individuals (s. 4). So services that are not based in the UK but target UK consumers are also caught by the Act.
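For those who find it easier to see the shape of that test, the three limbs can be sketched as a simple piece of logic. The snippet below is purely illustrative – the Service type, its field names and the idea of a single yes/no answer are our own simplifications, not terms taken from the Act or from Ofcom guidance:

```python
# Illustrative sketch only: a simplified rendering of the s. 4 "links with the UK" test.
# The Service type and its fields are our own shorthand, not statutory language.
from dataclasses import dataclass

@dataclass
class Service:
    significant_uk_user_base: bool    # limb (a): significant number of UK users
    uk_is_target_market: bool         # limb (b): UK users are a (or the) target market
    usable_from_uk: bool              # limb (c): capable of being used in the UK by individuals
    material_risk_to_uk_users: bool   # limb (c): material risk of significant harm to UK individuals

def has_uk_links(service: Service) -> bool:
    """Satisfying any one of limbs (a), (b) or (c) is enough to establish UK links."""
    return (
        service.significant_uk_user_base
        or service.uk_is_target_market
        or (service.usable_from_uk and service.material_risk_to_uk_users)
    )

# An overseas platform that treats the UK as a target market is caught,
# even without a significant existing UK user base.
print(has_uk_links(Service(False, True, True, False)))  # True
```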

The Act also applies to user-to-user services even if user-generated or user-shared content is only a small part of the business. Existing legislation on video-sharing platforms will be withdrawn, and video-sharing platforms will instead be covered by the Act. Accordingly, the Act is set to have a significant impact on social media platforms, online gaming platforms, e-commerce websites, online forums and technology companies that publish user-generated content.

The following services are specifically excluded from the scope of the Act (s. 55 and Sch. 1, Pt 1):

  • email services;
  • SMS and MMS services;
  • services offering only one-to-one live aural communications;
  • “limited functionality services”, i.e. services that have a low degree of user interaction, such as allowing users to “like” posts, to apply emojis, to engage in yes/no voting, or to submit reviews or comments;
  • services that only enable combinations of exempt forms of user-generated content;
  • user-to-user or search services that are only used for internal business purposes;
  • services provided by public bodies; and
  • services provided by persons providing education or childcare.

Statutory duties

The Act sets out numerous duties, and the responsibilities differ depending on the type of service provider and whether it is categorised as Category 1, Category 2A or Category 2B. Categorisation will depend on whether a service meets certain conditions, which are yet to be determined by the Secretary of State. Nonetheless, it is generally expected that Category 1 services will be large platforms with a high volume of users, and that the majority of companies will fall within Category 2.

All providers of in-scope services must (s. 7(2)):

  1. conduct risk assessments in relation to illegal content;
  2. comply with safety duties concerning illegal content, such as preventing individuals from encountering it, and ensuring that terms of service specify how individuals will be protected;
  3. implement content reporting mechanisms, allowing users and affected persons to report illegal content easily;
  4. implement adequate complaints procedures;
  5. protect freedom of expression and privacy; and
  6. undertake regular record-keeping and reviews.

Category 1 providers have further duties, which require (ss. 7(5) and 38):

  1. illegal-content risk assessments, which include the additional requirement to summarise in the terms of service the findings of the most recent illegal-content risk assessment;
  2. children’s risk assessments;
  3. assessments relating to adult user empowerment;
  4. empowering adult users, including, for example, features that allow them to reduce the likelihood of seeing certain types of content;
  5. protecting content of democratic importance;
  6. protecting news-publisher content;
  7. protecting journalistic content;
  8. protecting freedom of expression and privacy;
  9. keeping records of assessments, including an additional requirement to supply Ofcom with a copy of any such records; and
  10. protecting against fraudulent advertising.

Services that are likely to be accessed by children (i.e. under-18s) have additional duties, which include protecting children online, as well as carrying out risk assessments and implementing measures to manage such risks (ss. 11, 12 and 13). As part of complying with such duties, they must also use age-verification or age-estimation tools (s. 12(3)), in accordance with a code of practice to be put in place by Ofcom (Sch. 4).

Part 10 of the Act also sets out various new communications offences relating to false, threatening, flashing-image, unsolicited sexual and self-harm encouragement communications. Those offences are beyond the scope of this article, which focuses on aspects of the Act that seem relevant (albeit indirectly in many instances) to mainstream entertainment content.

Types of regulated content

It is important for content producers within the media and entertainment industry to be aware of what content is regulated under the Act, as they will want to avoid the risk of having their content taken down from online services.

The Act draws a distinction between two types of content:

(1) illegal content – i.e. content that user-to-user service providers must take or use proportionate measures to prevent anyone (adult or child) from accessing via the service (s. 10(2)), which is content that amounts to a criminal offence under legislation relating to terrorism, sexual exploitation of children, child abuse, assisting suicide, threats to kill, harassment (or certain other breaches of public order), misuse of drugs or psychoactive substances, weapons, assisting illegal immigration, human trafficking, proceeds of crime, fraud, financial mis-selling, foreign interference or animal welfare (or attempting or conspiring to commit such an offence), as set out in more particular detail in s. 59 and Sch. 5, 6 and 7; and

(2) content that is harmful to children – which is defined in s. 60 and split into three types:

  • primary priority content, i.e. content that user-to-user service providers must use proportionate systems and processes to prevent children of any age from accessing via the service (s. 12(3)(a)), meaning pornography and content promoting suicide, deliberate self-injury and/or an eating disorder (s. 61);
  • priority content, i.e. content that user-to-user service providers would need to ensure is only made available on an age-appropriate basis (s. 12(3)(b)), meaning content that: (A) is abusive and targets (or incites hatred on the basis of) race (including colour, ethnicity or national origin), religion (or lack of religion), sex, sexual orientation, disability (physical or mental) or gender re-assignment (past, present or proposed, and physiological or otherwise); (B) promotes (or realistically depicts) serious violence against a person, animal or fictional creature; (C) is bullying (e.g. seriously threatening, humiliating or persistently mistreating); (D) promotes dangerous stunts; and/or (E) encourages self-administering of a harmful substance (or a harmful quantity of any substance) (s. 62); and
  • other content that is harmful to children, i.e. other content that user-to-user service providers would also need to ensure is only made available on an age-appropriate basis (s. 12(3)(b)), meaning content that presents a “material risk of significant harm” (i.e. physical or psychological harm) to an “appreciable number of children” in the UK (the terms “material”, “significant” and “appreciable” being undefined) (s. 60).

The secondary legislation to be put in place by the Secretary of State is expected to expand on what constitutes content that is harmful to children, following which the online safety regulator, Ofcom, intends to provide related guidelines on risk assessments and codes of practice.

For adults, Category 1 platforms will need, where proportionate to do so, to adopt optional user-empowerment tools to give adult users control over certain content that they consume (s. 15(2)). The relevant content is:

  1. all user-generated content (other than emails, SMS and MMS messages, one-to-one live aural communications, comments and reviews on the provider’s own content, identifying content accompanying any of the foregoing and news publisher content) (ss. 16(2)(a) and 55(2)); and
  2. content that promotes suicide, deliberate self-injury or an eating disorder, or is abusive and targets (or incites hatred on the basis of) race (including colour, ethnicity or national origin), religion (or lack of religion), sex, sexual orientation, disability (physical or mental) or gender re-assignment (past, present or proposed, and physiological or otherwise) (s. 16(2)(b) to 16(9)).

Sanctions

The Act provides for serious consequences for service providers that do not comply with their obligations expeditiously, and Ofcom, as the online safety regulator, is responsible for enforcement. A non-compliant service provider can be fined up to £18 million or 10% of its global annual revenue, whichever is greater.
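As a rough illustration of how that “whichever is greater” cap scales with the size of the provider (the revenue figures below are invented purely for the example):

```python
# Illustrative only: the maximum fine is the greater of £18 million or 10% of
# global annual revenue. Revenue figures below are invented for the example.

FIXED_CAP = 18_000_000      # £18 million
REVENUE_SHARE = 0.10        # 10% of global annual revenue

def maximum_fine(global_annual_revenue: float) -> float:
    return max(FIXED_CAP, REVENUE_SHARE * global_annual_revenue)

print(f"£{maximum_fine(50_000_000):,.0f}")     # smaller provider: £18,000,000
print(f"£{maximum_fine(2_000_000_000):,.0f}")  # large platform: £200,000,000
```

In other words, for providers with global annual revenue below £180 million the £18 million figure is the operative ceiling, while for the largest platforms the 10% limb can take the potential fine into the hundreds of millions.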

Impact on content producers

While the Act focuses on the regulation of user-generated, user-shared and search content, regular entertainment content could still form part of such content, for example where it is shared by users or appears in search results. As such, the Act will need to be considered not just by online platform providers, but also by content producers, in terms of whether a production or promotional clip may contain any illegal or harmful content for the purposes of the Act. Depending on the communications and transmission plans for the content (e.g. if it is to be distributed via sharing platforms and/or clips are to be promoted via social media), that might require the editing-out of certain material.

The Act has faced criticism over the inadvertent impact that it could have on freedom of expression. In an attempt to address that, it now includes statements on the importance of protecting users’ freedom of expression and how it must be safeguarded while complying with the duties imposed by the Act. Freedom of expression is generally addressed in conjunction with the right to privacy, and it is not clear how those protections will be enforced in practice. That poses a potential indirect risk to content producers, as it is not obvious how freedom of expression can be fully maintained while adhering to the requirements of the Act.

The onus is on platform providers to determine whether or not content is acceptable under the Act, which could be problematic, as such decisions may be finely balanced and may depend on context. Critics have expressed concern that the Act’s requirements to control illegal content and content that is harmful to children could encourage platforms to adopt an unduly cautious approach to the removal of content, with the result that legitimate and uncontroversial content could end up being over-removed. That risk is heightened by the fact that online platforms often use algorithmic moderation systems to filter content. Such systems may not be sophisticated enough to distinguish between legal and illegal content, or between content that is harmful to children and content that is not, with the result that legitimate content, not just problematic content, is removed.
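To make the over-removal risk concrete, the sketch below shows a deliberately naive, keyword-based filter of the kind sometimes used as a first line of automated moderation. It is not a description of any platform’s actual system – the blocklist and example posts are invented – but it illustrates how a context-blind rule removes a legitimate promotional post alongside a genuinely harmful one:

```python
# Deliberately naive illustration (not any platform's actual system): a context-blind
# keyword filter flags any post containing a blocklisted term, so content that merely
# discusses a sensitive topic is removed along with content that promotes it.

BLOCKLIST = {"self-harm", "suicide"}  # invented, illustrative terms only

def would_remove(post: str) -> bool:
    """Return True if a context-blind keyword filter would take the post down."""
    text = post.lower()
    return any(term in text for term in BLOCKLIST)

posts = [
    "New drama follows a teenager's recovery from self-harm - trailer out Friday",  # legitimate promo
    "Tips for hiding self-harm from your parents",                                  # genuinely harmful
]

for post in posts:
    print(f"{'REMOVED' if would_remove(post) else 'KEPT'} | {post}")
# Both posts are removed, even though only the second is harmful - the over-removal
# risk described above for context-dependent entertainment content.
```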

Another aspect of the Act that could affect content producers indirectly is the open-endedness of some of the definitions, such as “content that is harmful to children”. That could, in practice, affect content producers that want to depict authentic lived experiences, which might contain narratives of violence or bullying, for example. Accordingly, there is a risk that content producers wanting to touch on such stories on social media, even just in adverts for TV shows or films, might find their content or promotional clips being over-removed under the take-down measures imposed by the Act.

Under the Act, platforms will also have a responsibility to protect users against fraudulent advertising and to remove it when they are alerted to it. That is less likely to be problematic for producers of promotional content, as fraudulent advertising is already prohibited under the BCAP Code (governing broadcast advertising) and the CAP Code (governing non-broadcast advertising). Yet it might mean that platforms implement extra measures to ensure that displayed adverts are not fraudulent, and that advertising content is vetted more thoroughly before being permitted to be shown. That might, in turn, lead to extra hurdles for content producers when trying to post trailers or promotional clips.

Conclusion

The Online Safety Act will hopefully bring about positive – and perhaps long overdue – change in making the internet a safer environment for end users, children and adults alike. Yet while its aims are desirable, its indirect implications should be carefully considered by content producers.

Content producers have long been familiar with the need to make sure their productions comply with applicable laws and regulations, in particular those relating to harmful and offensive content, as well as editorial guidelines and content classification criteria. And the Act will overlap to a large degree with those existing requirements.

That said, the Act is a pointed attempt to introduce enhanced protection for the safety of internet users. As such, it could make social media networks and search-engine providers more cautious about what content is permissible, and so content producers will want to re-assess how their content might fare in that new, potentially less permissive environment, with the Act’s new content categories in mind.

From that perspective, much will depend on how the implementation of the Act develops in practice. The Secretary of State is due to introduce secondary legislation, and Ofcom has announced that it will release codes of practice on the implementation of the Act in phases from 2024. It is hoped that those measures will eventually provide some clearer guidance on the subtler implications of the Act, such as how to balance safety considerations with freedom of expression.

So watch this space – and hopefully a safer space that doesn’t unduly cramp creatives’ style.

Astrid Bulmer
Associate
Réré Olutimehin
Trainee Solicitor
