Written evidence submitted by Dr Martin Moore, Director of the Centre for the Study of Media, Communication and Power, King’s College London (OSB0063)
About the Centre and the Author
The Centre for the Study of Media, Communication and Power is an academic research centre based in the Department of Political Economy at King’s College London. The Centre conducts research and analysis on the news media, the civic functions of the media and technology, the relationship between media and politics, and the extent to which technology is changing politics.
The author of this submission, Dr Martin Moore, is director of the Centre for the Study of Media, Communication and Power and a Senior Lecturer in Political Communication Education. He has over a decade’s experience working on projects related to media regulation, news standards and the influence of technology platforms on politics. This includes academic research, technological innovation, and policy proposals. He was the founding director of the Media Standards Trust in 2006, where he made written and oral submissions to the Leveson Inquiry related to the reform of press self-regulation. This submission reflects the views of the author based on previous experience and on research done since joining King’s College London in 2015. The author is an employee of King’s College London.
Three publications by the author provide relevant context for this submission. Digital Dominance: The Power of Google, Amazon, Facebook and Apple (Oxford University Press, 2018), edited by Martin Moore and Damian Tambini, details the sources of power of the major technology platforms and the threats that dominance represents. Democracy Hacked: How Technology is Destabilizing Global Politics (OneWorld, 2018) identifies some of the individuals, organisations and states responsible for societal online harms like those that the draft Bill seeks to address, and outlines some of the structural reasons why such individuals, organisations and states have been able to act in the way they have. Regulating Big Tech: Policy Responses to Digital Dominance (Oxford University Press, 2021), edited by Martin Moore and Damian Tambini, provides policymakers and researchers with a toolbox of potential responses to the problems posed by the dominant technology platforms.
Summary
This submission builds on the author’s 2019 submission to the Online Harms White Paper consultation.
The 2021 Draft Online Safety Bill develops many of the aspects first outlined in the 2019 Online Harms White Paper: describing the types of organisation within scope, nominating Ofcom as the independent regulator and detailing the powers of the Secretary of State.
Yet most of the fundamental aspects challenged in the 2019 submission remain relatively unchanged or unresolved (breadth of scope, inclusion of legal alongside illegal harms, failure to sufficiently define legal harms). Moreover, some of the developments in the draft Bill represent, this submission argues, a significant threat to free expression, to privacy, and to the UK’s digital democratic future.
The draft Bill is long and complex (in itself one of the problems) and therefore this submission will focus on the six aspects that it considers – in addition to the issues highlighted in the 2019 submission – to be the most problematic.
These six aspects are:
These concerns are such that, this submission contends, if passed in its current form, the Bill could:
A revised Online Safety Bill should, this submission concludes, limit itself to:
Outline of 2019 Submission to Online Harms Consultation
This submission builds on the author’s 2019 submission to the Online Harms White Paper.
The 2019 submission supported:
The 2019 submission challenged:
The 2019 submission proposed that any future legislation:
It concluded that many of the problems cited in the Online Harms White Paper were structural and systemic and would not be solved by statutory regulation such as this. Separate interventions would be needed to address them.
This submission re-affirms the points made in the 2019 submission and raises six further concerns with the 2021 draft Online Safety Bill.
Six concerns about the draft Online Safety Bill (2021)
The draft Bill leaves many of the most substantial and consequential decisions unresolved, most notably: scope, regulated content, and protected content.
Who is in scope?
The thresholds that define which services will be considered a Category 1 user-to-user service, a Category 2A search service or a Category 2B user-to-user service are yet to be determined; they will be set by the Secretary of State after the Bill is passed, based on advice provided by Ofcom (Clause 59 and Schedule 4). The criteria for these thresholds remain vague, with Schedule 4 stating that the conditions for deciding which services fall into Category 2B are:
‘(a) number of users, (b) functionalities, and (c) any other factors that the Secretary of State considers relevant.’ [italics added]
What content will be regulated?
It is yet to be determined what content should be considered harmful, even though legal, and therefore subject to regulation. It is yet to be determined what the Codes of Practice for providers of regulated services should contain (Clause 29). It is yet to be determined how decisions as to what constitutes harmful content will be made.
What content will be protected?
The legislation states that content of democratic importance, journalistic content, and content published by a recognised news publisher, will be protected. Yet, it remains to be determined what content ought to be considered of democratic importance, how journalistic content ought to be distinguished, and who will be considered to be a recognised news publisher. The existing explanations for these protected categories in the draft Bill are – this submission contends – insubstantial and inoperable.
Leaving these questions unresolved has numerous negative implications, including:
The powers vested in Ofcom, the Secretary of State and the Category 1 services themselves to resolve these questions and to regulate in-scope services are, this submission argues, too extensive.
OFCOM
Ofcom’s powers and duties in relation to the regulated services are outlined in Part 4 of the draft Bill. These include preparing codes of practice for providers of regulated services, carrying out impact assessments, maintaining a register of regulated services, monitoring adherence, conducting investigations, imposing sanctions, researching users, overseeing the definition of disinformation, and enhancing media literacy. This summary is not comprehensive.
In order to carry out these duties, Ofcom will have to:
Such a quantity and range of duties is, this submission contends, excessive and impracticable and is liable to lead to:
It is also unrealistic to assume that any single regulator could fulfil all these duties. No national regulator would have sufficient human resources to review, monitor, and penalise all user-to-user services and search services accessible to the UK public in the manner proposed by this legislation. As an illustration, in 2020 Facebook had 15 times as many content moderators in the US alone (15,000) as Ofcom had total employees (992) (Thomas, 2020; Ofcom, 2021).
SECRETARY OF STATE
The Secretary of State (SoS) is empowered, amongst other things, to:
Over and above these powers, the SoS can simply direct Ofcom if s/he ‘has reasonable grounds for believing that circumstances exist that present a threat – (a) to the health or safety of the public, or (b) to national security’ (Clause 112(1)), in addition to giving regular guidance to Ofcom and reviewing the overall regulatory framework. These functions are outlined – in part if not in full – in Part 6.
It is not clear why it should be necessary to invest so much power in the SoS, particularly with regard to determining who is included in the scope of the regulation, determining the content of the codes of practice, and defining harmful content. Moreover, the investment of these powers in the SoS creates numerous opportunities for political influence.
The UK has a long democratic history of protecting free speech and a free press from oversight and interference by the State. These powers would threaten both.
CATEGORY 1 SERVICES
Category 1 services will be obliged to carry out risk assessments for their services to establish the risk of harm – illegal and legal – to children and adults using their services, and put in place processes sufficient to mitigate these harms and address them as they arise. At the same time, they will have to protect content that is of democratic importance, that is journalistic, or that is published by a recognised news publisher.
This necessarily gives significant ongoing power to these companies to define whether content ought to be considered harmful, to decide what content is of democratic importance or journalistic, to determine who ought to be considered a recognised news publisher (if this is not decided by Ofcom), and to take action against any content or individuals on the basis that they are committing harm.
This will not only be very complex and difficult to achieve, but will give Category 1 services pseudo-judicial powers to be arbiters of political speech and judges of harm (including beyond what is illegal).
The extensive powers given to the Secretary of State – as outlined above – provide numerous opportunities for the government to influence the regulation of content online. As listed earlier, this includes deciding which companies are regulated, what content is regulated, and what constitutes ‘harm’. There are minimal protections in the legislation against the political use of such power (threatening to regulate a service, for example, that is critical of government). Indeed, there is even justification for the use of political influence in certain circumstances:
33 Secretary of State’s power of direction
(1) The Secretary of State may direct OFCOM to modify a code of practice submitted under section 32(1) where the Secretary of State believes that modifications are required –
a) to ensure that the code of practice reflects government policy
Ofcom is a statutory body that is, for the most part, independent of government (though, as a statutory body, it is ultimately responsible to government, and the government selects candidates to chair Ofcom and approves the appointment of the Chief Executive). Yet there are multiple instances in the draft legislation where Ofcom is required to accept the direction of the Secretary of State.
This openness to political influence – whether acted upon or not – compromises the independence of the regulation and will inevitably lead to concerns about whether decisions (for example, as to whom to recognise as a news publisher) are made for democratic or political reasons.
There are multiple problems associated with the definitions included in the legislation. These include:
Example: a ‘user-to-user service’
A user-to-user service means, the draft Bill states:
‘an internet service by means of which content that is generated by a user of the service, or uploaded to or shared on the service by a user of the service, may be encountered by another user, or other users, of the service’ (Clause 2(1))
This meaning appears to be tautological. The internet is, and has been since its inception, a ‘read-write’ web – a means by which users are able both to access content and to provide it. In this sense, every service on the internet is a ‘user-to-user’ service.
The definition of content in the legislation is similarly unclear and broad, such that it could apply to almost anything published online. As Clause 137 states, content can include:
‘anything communicated by means of an internet service, whether publicly or privately, including written material or messages, oral communications, photographs, videos, visual images, music and data of any description’
The exceptions made for ‘expressing a view’ on content via a like, emoji or upvote only narrow the definition marginally. Moreover, given the obligation on Category 1 services to police legal harms, it is likely that even these exceptions will be challenged, since there is growing evidence that ratings, such as those on social media posts, can cause harm (Wells et al., 2021).
The consequence of this lack of clarity is that almost anyone providing an online service accessible to users in the UK could assume that they, and almost anything their users publish, may fall within the scope of the legislation. This could deter new services, severely constrain the benefits of existing services, and restrict free expression online.
Numerous other key terms within the legislation also lack sufficient clarity, notably: content that is legal but harmful; search services; an adult of ordinary sensibilities. The problems associated with these have been raised elsewhere and so will not be detailed here.
Example: ‘Recognised news publisher’
Clause 40 sets out the meaning of a recognised news publisher, so that those who qualify can benefit from additional protection by Category 1 services. Yet the meaning, as set out, is poorly defined and open to misinterpretation.
Clause 40(2)(a) states that a ‘recognised news publisher’ will have ‘as its principal purpose the publication of news-related material’ without indicating how ‘principal purpose’ will be assessed. Will it, for example, be evaluated in economic terms, or on the basis of output, or of time spent by employees? The definition of ‘news-related material’ – Clause 40(5) – provides minimal further guidance, stating that this can include ‘news or information about current affairs’, ‘opinion’ related to this, or ‘gossip’. No public interest test is proposed to qualify these criteria, so such content could include anything from rumours about a neighbour having an affair, to alerts about the price of bread in Tesco, to opinions about the side-effects of a COVID-19 vaccine.
To be a ‘recognised news publisher’, the draft Bill continues, this news-related material ‘(i) is created by different persons, and (ii) is subject to editorial control’. Again, this raises numerous definitional questions. What does it mean to be ‘created by different persons’ – does this mean that individual pieces of journalism have to be produced by multiple distinct people? If so, does that mean hyperlocal news sites – many of which have a staff of one – will not be protected? What constitutes editorial control? If, as is increasingly the case, an article is scrutinised by an algorithm before publication, does this constitute editorial control?
Publication of such material also has to be ‘subject to a standards code’. Yet since the legislation does not indicate what is meant by a standards code, and does not point to any standards codes that will or will not be considered acceptable (for example, one recognised by the Press Recognition Panel), this requirement has no substantive bearing.
Therefore, as it stands, the meaning of ‘recognised news publisher’ is vague, inconsistent and arbitrary. As such, it is open to political and commercial misinterpretation and abuse.
A likely consequence would be the production of a whitelist of acceptable publishers, as determined by negotiations between industry lobbying groups, government and Category 1 services, and approved – formally or informally – by Ofcom. If so, such a list would represent a significant step backwards with respect to plurality and diversity of publication online, and would mean the establishment of a list of statutorily approved news publishers.
Terms that are not defined
Example: ‘journalism’
Clause 14 gives Category 1 services ‘Duties to protect journalistic content’. These duties include taking special care when deciding how to treat such content (even if it is the subject of a complaint).
Content is considered ‘journalistic’ if it is published by a ‘recognised news publisher’ (the problems with which are outlined above), or if it is ‘content generated for the purposes of journalism’. No definition of journalism is given. Yet, based on Clause 14(3), it would appear that the definition of journalism in the legislation is equivalent to the definition of modern art – in other words, it is defined by the creator (special privileges should be granted to ‘a person who considers the content to be journalistic content’, provided that person created or published it). This may be the only practical way of defining journalism online, but if so, the legislation will need to specify how and when journalistic content should be identified as such. Does the creator need to define it as journalistic at the point of publication? If so, how? Does it have to adhere to any criteria for what constitutes journalism?
Other terms that are not defined include:
Content of ‘democratic importance’: the explanation in the Bill that – in addition to content published by a recognised news publisher – this is content that ‘is or appears to be specifically intended to contribute to democratic political debate in the United Kingdom’ could be applied to virtually any type of speech about the United Kingdom (while at the same time appearing to preclude discussion of international issues of democratic concern). Nor is the distinction between content of democratic importance and journalistic content made clear.
‘User’: the term ‘user’ appears over 500 times in the draft Bill, yet it is not defined, nor is it made clear whether a ‘user’ must be a living person (Clause 39(4)(b) states that a bot may be considered to be a user). Without defining a ‘user’, how can a service determine who is deserving of protection, or who ought to be held accountable for harms? Moreover, there is an assumption that the user of a service can be easily distinguished from the provider of a service. Yet online the two can often be complementary and overlapping (for example, in the case of medium.com, or the moderator of a subreddit), making it difficult to establish ultimate credit or responsibility.
In addition to the problems associated with definitions in the Bill, this submission makes a more general critique of the protections offered.
There are entirely valid concerns that content which may be considered harmful – though legal – will be of democratic importance and therefore ought to be protected.
Yet legislating in advance what this content will be, or who will publish it, will inevitably lead to the censorship of critical news and information (which it did not occur to legislators to protect) or of individuals and organisations publishing such content (who did not, for example, fulfil all the criteria for a ‘recognised news publisher’).
Moreover, the legislation as drafted does not set out the extent to which protection should be afforded to content (e.g. of ‘democratic importance’), versus speaker (e.g. a journalist), versus institution (e.g. a ‘recognised news publisher’), or how Category 1 services should prioritise each.
This will further complicate decision-making about what content should or should not be protected. For example:
The main justification for protection is that, if no protection is provided, important news and information in the public interest could be considered legal but harmful content and removed. The best way to resolve this is not to construct complex and inoperable definitions of protected content, but to remove from this Bill the legal obligation to regulate legal but harmful content.
Clause 101 of the draft Bill obliges Ofcom to prepare a report, within two years of the legislation passing:
‘(a) describing how, and to what extent, persons carrying out independent research into online safety matters are currently able to obtain information from providers of regulated services to inform their research,
(b) exploring the legal and other issues which currently constrain the sharing of information for such purposes, and
(c) assessing the extent to which greater access to information for such purposes might be achieved.’
There is nothing further in the Bill that would encourage Ofcom to increase the transparency of these companies, and no other attempt to compel Category 1 services to provide greater access to their data for the purposes of research and external scrutiny.
Yet, without such research and external scrutiny, we will be unable to properly assess the extent of problematic content and behaviour on these platforms, or the harms caused.
Concluding Comments
This draft Bill is, this submission concludes, deeply flawed. If passed, it would have severe negative consequences for freedom of expression, for privacy, and for the future of digital services in the UK.
New Online Safety legislation should, this submission proposes, limit itself to:
This limitation does not imply that such legislation would be adequate to address all online harms; rather, these harms, and other negative externalities associated with digital developments, would be better addressed by other interventions.
References
Thomas, Zoe (2020, 18 March). ‘Facebook content moderators paid to work from home’, BBC News. https://www.bbc.co.uk/news/technology-51954968
Ofcom (2021). The Office of Communications Annual Report and Accounts, HC 459, p. 56.
Wells, Georgia, Jeff Horwitz and Deepa Seetharaman (2021, 14 September). ‘Facebook Knows Instagram Is Toxic for Teen Girls, Company Documents Show’, Wall Street Journal. https://www.wsj.com/articles/facebook-knows-instagram-is-toxic-for-teen-girls-company-documents-show-11631620739
September 2021