{"HashCode":-849872376,"Height":842.0,"Width":595.0,"Placement":"Header","Index":"Primary","Section":1,"Top":0.0,"Left":0.0}

 

Professor Jacob Rowbottom[*] – written evidence (FEO0059)

 

House of Lords Communications and Digital Committee inquiry into Freedom of Expression Online

 

 

Is free speech under threat? (Question 1)

 

1.              There is a risk in making general statements about free speech being ‘under threat’. On the one hand, the current freedom for anyone with an internet connection to publish and communicate to the world at large would have been unthinkable several decades ago. The current concerns about the proliferation of conspiracy theories and misinformation, for example, reflect the extent of that freedom. On the other hand, there is no doubt that there are various threats to freedom of speech (as there always have been). These threats can come from various sources including the state, the digital platforms and other private actors.

 

2.              Before looking at some of those specific threats, it is worth noting some of the features of digital expression that raise distinct issues for expression rights. Words published online are often stored and can be viewed for some time after the initial publication. The ease with which content can be located and shared means that it may be seen by an audience that was not anticipated or intended by the author. As a result, informal comments and conversations can be scrutinised long after initial publication, brought to the attention of law enforcement, or become the subject of widespread public disapproval. These features mean that words that would be said and quickly forgotten in the offline world can become subject to certain formal and informal sanctions in the digital world.

 

Formal sanctions and the existing law (Question 3)

 

3.              Looking first at formal sanctions, there are a number of broadly worded provisions that can apply to digital expression. The scale of communications online means that the enforcement of such laws will inevitably be selective. There will be instances where the laws are under-enforced (for example, due to the constraints on police time) and there will be cases where the law is applied expansively and restricts freedom of expression.[1] The Law Commission is currently addressing this issue in relation to the criminal law, but it has a challenging task in designing a law that is flexible without being over-expansive. Even with reform of the existing laws, police and prosecutors will still need guidelines to ensure that any new offences are enforced in a way that is compatible with speech rights.

 

 

 

 

Informal sanctions (Question 1)

 

4.              The exercise of expression rights online can have consequences for the speaker, aside from legal sanctions. For example, a person who expresses a controversial or unpopular view online may face a hostile reaction or disciplinary sanctions at work. The two types of reaction can work together, for example where a hostile response from a group of social media users generates considerable negative publicity and the employer takes action to avoid the reputational damage of being associated with that publicity. Where such responses to speech come from private actors, there are some difficult questions of principle. While a person is free to speak, he or she cannot complain if others do not like what is said. Moreover, private actors are the holders of speech rights themselves and are free to express opposition to any views stated.

 

5.              One response to this issue is that disciplinary action taken at a workplace goes beyond the mere expression of disapproval by the employer. The exercise of the employer’s power can be considered a form of private censorship. For such cases, one option is to look for stronger protections in employment law to limit the scope for disciplinary action to be taken in relation to speech outside of the workplace (unless there is a convincing justification for such action). That approach may work to limit some of the adverse consequences that can result from the exercise of expression rights.

 

6.              The issue is more challenging where a speaker is not subject to any formal sanction (for example from an employer), but is the recipient of numerous hostile and abusive messages. Legal action may be taken where the hostile messages amount to a criminal offence, such as harassment. Even where that is the case, the scale of the communications and limited resources may mean that the existing legal controls are not enforced. However, in some cases the individual messages expressing the hostile response may fall short of criminal conduct, but in the aggregate may have a serious and disproportionate impact on the individual. This example raises difficult questions, such as when a hostile response from other speakers is deserved and a fair way to express disapproval, and when it may have disproportionate effects on the targeted person. There are also questions about whether the publicity surrounding such an episode should follow a person for years after the event. Such questions call for careful thought and I do not seek to resolve them here, but my instinct is that the solution to such problems is more likely to lie with the regulation/self-regulation of platforms. 

 

7.              The platforms themselves are also a potential source of private censorship. As the Committee has been hearing, the gatekeeping powers of a platform can determine what content is heard. This means that digital speech is already regulated, but it is regulated by the terms and conditions imposed by the platforms. Given the scale of content, it is possible to find examples where platforms have made decisions that meet with our approval and other decisions that are misguided. The bigger issue is not with the merits of the call made in a particular case, but with the fact that such decisions are left to a private company with limited scope for public accountability.

 

8.              While much discussion focuses on the impact of a platform’s internal policies, it is also important to note that the actions of the platforms are sometimes a result of external pressure. For example, advertisers sometimes apply pressure on platforms to change content policies. While such advertiser campaigns may be the product of good intentions and may support a good cause, a system in which content policies are made responsive to advertiser preferences raises broader issues of concern. For example, there may be occasions where advertisers wish for certain content to be removed, even though it is content that the public has a right to know about. The appropriate constraints on online speech should not depend on the strength of feeling expressed by economically powerful groups of advertisers.

 

Social media regulation and free speech (Question 3)

 

9.              The current proposals for social media regulation raise two questions in relation to freedom of speech. First, whether regulation itself would infringe expression rights. Second, whether regulation can introduce safeguards to protect speech rights. Starting with the first point, the compatibility of any regulations with the ECHR will depend on a number of factors including:

 

-          The type of speech being regulated. The Article 10 jurisprudence requires heightened protection for political speech and contributions to debates of general interest. Where speech is assigned greater weight, it may warrant protection even where it can cause harm.

 

-          The reason for the restriction. Article 10 sets out a number of legitimate aims that permit restrictions. There are some reasons that will never provide a valid basis for restricting speech (for example, government cannot use its powers to suppress the speech of an opposing political party for the purpose of electoral advantage).

 

-          The nature of the restriction and extent of the interference. This point will also go to the proportionality of the measure. Accordingly, a criminal sanction, as the most severe form of interference, will normally require the strongest justification.

 

10.              On the question of compatibility with speech rights, question 3 in the Call for Evidence asks whether ‘lawful but harmful’ content should be regulated. In response, I argue below that a system of regulation need not follow or be limited to existing standards for illegal content. This means that (1) targeting illegal content can still raise free speech issues when existing legal standards are applied online, while (2) regulating some content that is otherwise legal but harmful can be compatible with free speech. Starting with the first point, the Government’s latest proposals on Online Harms state that companies ‘will need to ensure that illegal content is removed expeditiously and that the risk of it appearing and spreading across their services is minimised by effective systems’.[2] At first sight, the case for such a restriction may seem self-evident, but it could have potentially far-reaching consequences.

 

11.              To illustrate the point, consider laws that are cast in wide terms to give police the necessary tools for managing certain social problems. For example, public order law includes offences relating to speech that is threatening and abusive. Such laws are not enacted with a goal of 100% enforcement. Instead, considerable discretion is left to the police about when it is appropriate to use their powers and enforce such laws. For this reason, there are dangers in transposing certain existing legal standards to the online environment and demanding maximum compliance (whether through the platform’s automated means of detection or through user reporting). The risk is that the discretion and judgment of police, prosecutors and courts will be removed from the equation. For a social media company to introduce systems for the removal of any speech that could conceivably fall within the terms of criminal controls such as public order laws, obscenity law or laws regulating grossly offensive speech would be over-expansive and would inhibit freedom of expression. The Government’s proposals state that safeguards will be offered to protect freedom of expression and prevent a risk-averse approach to identifying illegal content. Further details of those safeguards are necessary to determine whether the concern outlined above will be addressed.

 

12.              While the Online Harms proposals call for a duty of care in relation to illegal content, some types of illegal content will be subject to stricter obligations. Even in those areas that call for the stricter approach, such as terrorist material, the point made above could arise. For example, there are broad offences in relation to ‘terrorist content’, which are in practice limited through enforcement and prosecution policies. A much wider range of content could be constrained if the letter of a terrorism offence is applied through a platform’s automated systems. That is not to advance an argument against such regulatory obligations. It may be no bad thing if certain types of unlawful content are taken out of circulation without having to resort to the criminal law. However, the potential for the legal standards to be applied to a wider range of material needs to be considered, along with any implications for freedom of expression.

 

13.              There is no need for regulation to be limited to illegal content. The purpose of regulation is often to address problems where the ordinary law would not be effective or proportionate. Regulation can apply standards that go beyond the existing law. For example, regulatory codes restrict the expression of broadcasters in relation to content that is otherwise legal.  Regulation can have some advantages over the ordinary law in addressing certain types of problem. A regulator will often have greater expertise that can guide the application of standards. An expert regulator may also be well placed to oversee the systems within a particular industry. Regulation can also offer a level of flexibility that is harder to achieve with the ordinary law (for example in the range of sanctions available).

 

14.              Regulatory measures may provide a more proportionate response to certain problems, a point that (as noted earlier) will be relevant in assessing compatibility with freedom of speech. For example, a system of regulation could require that certain types of content are appropriately flagged, labelled or not prioritised in feeds or rankings. That would not prohibit content or require its removal, but might limit circulation or ensure that people have more information to make an assessment. Such an approach can allow people to exercise the right to speak, without having the content made prominent to the world at large on a continuous basis. Moreover, if regulation can offer an effective solution to certain problems, then there will be less need to resort to the criminal law.
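
By way of a purely illustrative sketch (not part of the evidence itself, and not describing any actual platform’s systems), the ‘flag, label or de-prioritise’ approach described above could be expressed in code along the following lines, where the post structure, scores and demotion factor are all hypothetical assumptions:

from dataclasses import dataclass

@dataclass
class Post:
    text: str
    base_score: float              # relevance score assigned by the ranking system (assumed)
    flagged_harmful: bool = False  # set following moderation review or user reports (assumed)

def rank_feed(posts, demotion_factor=0.2):
    # Order posts for a feed, down-weighting (but not removing) flagged items,
    # and attaching a label so readers have more information to make an assessment.
    def effective_score(post):
        return post.base_score * (demotion_factor if post.flagged_harmful else 1.0)
    ranked = sorted(posts, key=effective_score, reverse=True)
    return [("[labelled: contested content] " if p.flagged_harmful else "") + p.text
            for p in ranked]

The point of the sketch is only that demotion and labelling leave the content accessible while limiting its prominence, which is the feature relied on in the proportionality argument above.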

 

15.              To be clear, this does not mean that a regulator should be given an open remit to impose restrictions on any expression that it asserts to be harmful. A clear process and evidential basis will be required to identify categories of harm that warrant regulatory action. The key question in assessing a regulatory system is not simply whether the targeted content is already unlawful, but whether the regulatory measure is compatible with expression rights, applying the criteria given earlier (in para 9).

 

Promoting free speech online (Questions 4 and 8)

 

16.              Social media regulation could provide an opportunity to offer greater protection for speech rights. As question 4 notes, platforms could be placed under an obligation to protect freedom of expression. Such protection could include provisions for contesting decisions, such as rights of appeal (as set out in the Government’s proposals). Regulators could also have a role in ensuring that the grounds of appeal offered are adequate and that the appeal process is fair. A further option is for regulations to require certain categories of appeal to be heard by an independent body.

 

17.              The decisions of the digital platforms also shape the system of communication, even where no individual’s free speech rights are being interfered with. The algorithms will determine which content is most likely to be heard by particular audiences. The platform can also determine the terms on which people engage. For example, the design of the platform will determine how people express approval or disapproval of content (a like or thumbs up, for example), the number of people that content can be shared with, whether the number of views for a particular post is publicly displayed, whether all posts are displayed in the same format (regardless of the authority of the source), and so on. The approach to such questions will vary according to the particular platform and will continue to change, so generalisations are difficult. However, these questions of design can have significant implications for the way debate and discussion will work online.
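
As a purely hypothetical illustration (the field names below simply restate the author’s examples; they are not any real platform’s settings or API), the design choices listed above can be thought of as a set of configurable parameters:

from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class PlatformDesign:
    reaction_options: Tuple[str, ...] = ("like",)  # how approval or disapproval is expressed
    sharing_limit: Optional[int] = None            # cap on how widely a post can be forwarded
    show_view_counts: bool = True                  # whether view counts are publicly displayed
    uniform_post_format: bool = True               # same display format regardless of source authority

Framing the choices in this way makes clear that each parameter is a decision taken by the platform, and so a potential object of the regulatory oversight discussed in the next paragraph.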

 

18.              The questions of design could be subject to some form of regulatory oversight. The Committee’s Call for Evidence raises the possibility of greater transparency in relation to algorithms, which is one option. A further option may be for a regulator to engage in a periodic review of the platforms to identify areas for improvement in the design of their services.

 

 

15 January 2021


 


[*]              Professor of Law, Faculty of Law, University of Oxford and University College, Oxford. The evidence provided reflects the views of the author.

[1]              See Scottow v CPS [2020] EWHC 3421.

[2]              Online Harms White Paper: Full government response to the consultation (December 2020) [2.19]. Much will depend on whether the proposals are intended to apply to illegal content generally, or just to specific categories of unlawful content. In some places, the Government refers to ‘relevant illegal content’ [2.22], which suggests a narrower category of offences. In a further paragraph, there is a qualification that the illegal content must also meet the definition of harm.