Written evidence submitted by Gavin Millar QC (OSB0221)

 

The following are either points which I was unable to cover in my oral evidence to the Joint Committee on 21 October 2021, or points which expand upon those I made more briefly in the oral evidence. I have tried to concentrate on the key issues. I am afraid it is a lawyer’s view, but I hope it is helpful nonetheless.

 

The duties of care approach

 

As discussed in Committee the approach adopted in the draft Bill is to impose what are described as duties of care on service providers (SPs)[1].

These are of two types. See Cl 5:

-          The “safety duties” in Cl.9-11. These are essentially to protect against harm: from encountering possibly illegal content (as defined in the Bill); via services likely to be accessed by children; and to adults.

 

-          The “Freedom of expression etc” duties in Cl.12-14. These are essentially to give varying degrees of protection, in various situations, to users’ rights to freedom of expression and privacy, when carrying out the safety duties.

The duty of care, as understood in our law, comes from our common law of negligence. It is a duty on A not to inflict damage on B carelessly.

The rights to freedom of expression and privacy are fundamental rights which exist in our domestic law and in international human rights law. They are presumptive rights so that a state measure (for example a piece of legislation) can only interfere with them where this can be justified on the facts as being necessary in a democratic society. This involves a balancing exercise to be carried out on the facts between the right being exercised (how important is the speech or the privacy in issue?) and a particular societal objective or objectives being pursued by the proposed restriction. The starting point is a presumption that the right is guaranteed and that strong justification referring to the particular societal aim/s is required before it can be restricted.  

Once all this is understood, it can be seen that the “Freedom of expression etc” duties in Cl.12-14 are not really duties of care at all[2]. These provisions are not prohibiting the SPs from inflicting damage on someone carelessly. The stated aim of these provisions in the legislation is to oblige the SPs to give some protection to these fundamental rights, albeit they are rather weak/limited obligations (see below).

Nor does the Bill adopt the familiar/established approach in law to the protection of such rights – which starts with the presumption that they are to be protected and requires clear justification on the facts in pursuit of a specified societal aim before this can happen. Instead, it imposes the above two supposed types of duty on SPs, which are, on the face of it, in conflict. On the one hand the SPs are required to do things that will restrict free speech and interfere with privacy. On the other hand they are supposed to protect these rights. And the Bill fails to make clear how these conflicting obligations are supposed to co-exist.

What is really happening here (or would be if this Bill is enacted as legislation) is that the state is giving the SPs powers, which must be exercised in certain circumstances, to interfere in its name with the rights of freedom of expression/privacy of users.

This is, in principle, a legitimate exercise. But the Bill should say clearly that this is what it is doing, rather than labelling anything and everything as a “duty of care” to make it seem more acceptable.

And if the rights in issue are to be properly protected, the statutory power/duty to interfere should be targeted at clearly defined types of user content, which justify the exercise of the power to interfere because they are particularly irresponsible and seriously harmful per se, so that there is a clear societal aim in play which can justify such interference.

Obvious examples of such irresponsible and seriously harmful speech would be: hate speech; content encouraging self-harm; particular types of content which are clearly criminal including content uploaded as part of an attempt to defraud others; and content which is exploitative of or harmful to children.   

 

Clauses 12, 13 and 14

 

The approach adopted here is to pick out two particular types of speech, attempt to define them and emphasise their importance.

In the domestic and international law of free speech it is well established that speech on matters of public interest in a democratic society is deserving of the strongest protection. Political speech is the paradigm example of this. Journalism on such matters is also particularly strongly protected.

But speech on matters of public interest in a democratic society is a flexible category of speech. It is not closed. It covers more than just political speech/speech about the activities of government and/or journalism on such matters.

It includes, for example, speech concerned with public health, crime, justice, the environment, professional malpractice, the activities of large corporations, hypocrisy by public figures in their public life and so on. There are indicative examples of types of public interest speech in the exemptions paragraph at the end of the IPSO Editors’ Code and in the public interest disclosure legislation.

The approach in these clauses of the Bill is flawed because it does not follow this established approach.

Indeed Cl.13 and 14 identify only two types of speech (ie content).

These are ill-defined for a variety of reasons, but most obviously because, even in their own terms, the definitions of the type of speech in issue are too narrow or indistinct.

-          Instead of identifying speech on all matters of public interest in a democracy as being the most important, Cl.13 refers only to content which “is or appears to be specifically intended to contribute to democratic political debate in the United Kingdom” (emphasis added). This is far too narrow. Speech can be on matters of public interest in our democracy, and therefore deserving of strong protection, even if it is not intended to contribute in this way. Indeed, it may have nothing to do with political debate in the UK. This public interest category knows no geographical boundaries, least of all in our 21st century global society.

 

-          Clause 14 is circular: ie content is “journalistic content” if it “is generated for the purposes of journalism”[3]. This is hopeless and does not assist anyone to understand what is covered. It should be concerned with content by which the user who generates it disseminates information and ideas to the public (or a section of the public) which they reasonably perceive to be of public interest. This section singularly fails to grapple with, still less protect, citizen journalism online (in the way this sort of definition would). Clearly in a modern, digital democracy it should do this. Again Cl 14 unduly restricts the category by requiring that the journalism is content which is “UK linked”. Quite aside from the problem of identifying what is “UK linked” journalistic content, journalism (and the need to protect it) knows no geographical boundaries.

They are also flawed for the reasons explained above. They do not require primacy to be given to these forms of high value speech (properly defined) – ie by emphasising the presumptive nature of the right to engage in these forms of speech and the need for strong justification before they can be interfered with. Rather, the overriding obligation in Cl 12(2) is merely to “have regard to the importance of” protecting free speech and privacy in carrying out the safety duties. Following on from this, the duties in Cl 13 and 14 are merely to ensure that the importance of the types of speech identified there is “taken into account”.

They are also flawed because the obligations to protect these types of speech do not apply (as they should) whenever any SP is exercising the power/duty of interference given by the legislation, but only in certain situations involving Category 1 services.     

 

 Identifying and defining harmful content

 

It is implicit in the above that the approach in the Bill to identifying harmful content is too broad and/or too uncertain.

This is done either by generic definition (description) in the Bill or by giving the Secretary of State wide powers to describe types of harmful content in secondary legislation.

It is, obviously, only possible to comment in concrete terms at this stage on the former, as we do not know how the Secretary of State will use this wide discretion. But the approach in the Bill does not engender much confidence in this direction.

 

Legal but harmful to adults

This is especially true of the generic “legal but harmful to adults” category.

The Cl.46(3) definition has a series of layers which render it both vague and broad at the same time: viz it is content where:

-          the SP has “reasonable grounds to believe” that its nature is such that there is a “material risk”

 

-          of it “having, or indirectly having…” a “significant adverse physical or psychological impact” 

 

-          on an adult of “ordinary sensibilities”.   

It is too broad because the content does not actually have to be (or even be adjudged to be) harmful to adults in the way described. The SP only has to identify reasonable grounds to believe that this might be the case (ie that there is a risk of it).

And the concept of an “indirect” significant adverse psychological impact is a particularly indistinct form of harm[4].

Identifying a hypothetical adult of “ordinary” sensibilities implies a very large degree of subjective judgment. In reality, there is no such person. Some people are more robust and resilient than others.  

The problem with this sort of approach is obvious. It gives the service provider much latitude to characterise content as harmful to adults for the purposes of the legislation when it is not clearly so.

The Bill should not include this generic category. As discussed above, some particular types of content can be identified which will be harmful to adults. But it will be their particular characteristics and effect that will render them so and justify inclusion in the legislation.  

 

“Illegal” content

It is also true of the “illegal” content category.

On close scrutiny this category is not about illegal content at all. Again, the threshold for categorisation is simply a “reasonable belief” on the part of the SP that the content amounts to or is part of a “relevant” (being a specified) criminal offence. See Cl. 41(3).

Again, too much latitude is being given to the SP to characterise content as falling within the category.

This is particularly concerning here because applying the statutory wording of most modern criminal offences to the facts is a difficult and technical exercise. It is one which police, CPS and courts often get wrong. This is both because of the flexibility of the language that is used and because of the detailed nature of the drafting in most of our contemporary criminal offences. Criminal offences now, especially terrorism and CSEA offences, are much more complex than they were 30 or 40 years ago.

Section 13(1A) of the Terrorism Act 2000[5], for example, criminalises publication of an image of an article “in such a way or in such circumstances…as to arouse suspicion” that a person is a supporter of a proscribed organisation. This could be a tee shirt or banner with text. How qualified is the SP to make the required contextual judgment here? What would the SP make of a photo of someone at a demonstration, published by a commentator in the online coverage of the demonstration, that suggested the person was such a supporter? There is obviously a very high chance of the SP (or its algorithms) making the wrong evaluative judgment about this sort of content and placing it into the “illegal” category under the legislation, when clearly it should not be subject to any sort of restriction.

 

Matrix

25.10.21

 

 


 


[1] In its 17 September 2021 written evidence Carnegie Trust suggests that the legislation should go further and impose a “general duty of care” on service providers “under which all other duties will sit”.

[2] Indeed it is difficult to see why the safety duties are duties of care as the SP is not carelessly inflicting damage on another.  

[3] See Cl.14(8)(b)

[4] This is defined in Cl.46(7) but this definition simply adds more uncertainty viz: the content causes an intermediary to say or do something “to a targeted adult” that has this impact; or it causes an increase in the “likelihood” of such an impact on an adult encountering the content.

[5] A relevant offence under Cl.42 and Schedule 2 of the Bill.