Cairns – written evidence (FEO0005)


House of Lords Communications and Digital Committee inquiry into Freedom of Expression Online


  1. I am submitting evidence as an individual. I was motivated to do so because I have an interest both in technology and a concern that long-standing norms relating to free expression seem increasingly under threat.


Is freedom of expression under threat online? If so, how does this impact individuals differently, and why? Are there differences between exercising the freedom of expression online versus offline?


  2. Freedom of expression is being undermined online, as I will show below, but the problem is wider than that. Online communication is becoming the primary means of discussing the social and political issues of the day [1]. Use of social media and online news is almost ubiquitous among those under 30 [2]. Since the main discussion platforms are publicly traded American companies, a considerable amount of control over the public sphere is now in the hands of a small number of people with no democratic accountability in the UK.


  3. The proposition that democratically unaccountable content moderation stifles online freedom of expression is axiomatic, and I see no need to expand upon the examples given in your call for evidence to demonstrate it. But what must be considered is how the threat to freedom of expression online is now trivially spread offline, through easily orchestrated campaigns to harm someone's income when they express "wrong" ideas and, in extreme cases, police involvement. There are multiple examples of both, but for the sake of brevity I will cite a small number only.


  4. There is the case of Brian Leach, who was sacked from, and later reinstated to, his job at Asda for sharing a clip from a Billy Connolly routine [3]. There is the case of Professor Selina Todd, who has been targeted for removal from speaking engagements and has even needed physical protection at events as a result of opinions shared online [4]. An important feature of Professor Todd's case is that her opinions went no further than to support the continued application of existing UK equalities legislation. Finally, there is the recording of "non-crime hate incidents", which appear on enhanced criminal record checks, against people who have expressed views online that do not amount to a crime. In Miller -v- College of Policing [5] the High Court ruled that such incidents arising from social media posts "disproportionately interfered with…freedom of expression". Nonetheless, I understand the practice continues.


  5. You will note that ultimately two of the above examples were resolved unambiguously in favour of free expression: Brian Leach got his job back, and Harry Miller won his case. But the process was the punishment. Seeing others have to fight to retain their employment, or take the police to court with all the associated personal costs, has a chilling effect on everyone. Social media posting is nothing more than a new means of talking nonsense at the pub, and it should be policed just as rarely.


  6. That social media provides an ease of reporting and evidence-gathering is the critical difference between free expression offline and online. Online communication is temptingly easy for authorities to police when they have targets to meet.


How should good digital citizenship be promoted? How can education help?


  7. I do not consider it an appropriate role of the state to define what it means to be nice, or to cajole people towards that standard of behaviour. The state's role ends at deciding what behaviour is lawful. There is thus no appropriate role for state education in my view.


Is online user-generated content covered adequately by existing law and, if so, is the law adequately enforced? Should ‘lawful but harmful’ online content also be regulated?


  8. There is no one answer to this question. The most visited online adult site had to be embarrassed into removing user-generated content because much of it contained material obtained without consent [6]. The law should have protected the people who were violated by this. More generally, social media affords people an opportunity to spread defamation much more effectively than ever before and, since platforms are not considered publishers, there is little opportunity for redress. I propose legislation to classify platforms as publishers in individual instances if they fail to act on a "right to be forgotten" request in a timely or appropriate way, allowing private individuals to seek redress from the companies rather than the original source of the defamation.


  9. But cases such as the aforementioned Miller -v- College of Policing and the earlier Chambers -v- Director of Public Prosecutions [7] show that, in some respects, social media posts are already over-policed by the law in the UK.


  10. One area where existing law could be much improved is employment protection. It should be actionable, and possibly even an offence, to restrict someone's educational, employment or career opportunities because of lawful statements made in a personal capacity. Such a protection would both give people confidence to participate in the public sphere and disincentivise the kinds of online witch hunts particularly prevalent on Twitter.


  11. Parliament should be very careful about making exceptions for content that is deemed harmful, because bad-faith accusations of harm are one of the primary tactics used to narrow the field of acceptable discourse online.


Should online platforms be under a legal duty to protect freedom of expression?


  12. Yes, because they are the de facto public square. I propose it should be a condition of operating in the UK that large social media companies (say, those with more than 1m active users) do not moderate, or allow to be moderated, any speech by UK users that is permissible under UK law.


How could the transparency of algorithms used to censor or promote content, and the training and accountability of their creators, be improved? Should regulators play a role?


  13. Ideally, these algorithms should be open source as a condition of operating in the UK; failing that, they should be shared with a regulator. Ofcom's regulations on impartiality should apply to algorithms that promote content (in practice this would likely be similar to switching them off). For large platforms, censorship of users in the UK should go no further than UK laws limiting speech. There is no technological barrier to presenting different content to users in the UK than elsewhere. While this can be worked around with Virtual Private Networks (VPNs), that would not diminish its effectiveness, since a VPN would only allow people to seek out a more restrictive environment.


How can content moderation systems be improved? Are users of online platforms sufficiently able to appeal moderation decisions with which they disagree? What role should regulators play?


  14. See my earlier comment indicating that a platform should have to demonstrate that a post was unlawful in order to act on it. Since these platforms are the de facto means of discussing public policy, they should not be free to limit speech beyond the law. Such decisions should be subject to oversight by an Ombudsman or regulator, paid for by a levy on the large platforms.


To what extent would strengthening competition regulation of dominant online platforms help to make them more responsive to their users’ views about content and its moderation?


  15. Competition does not protect people's ability to engage in the public sphere. Twitter has quite a small audience relative to the full population, and a minority of its users send the majority of tweets [8]. But this small subset of users exerts disproportionate influence on public policy because Twitter is where journalists and politicians engage [9]. The existence of alternative platforms, such as Mastodon, is irrelevant because a user there is simply screaming into the void.


Are there examples of successful public policy on freedom of expression online in other countries from which the UK could learn? What scope is there for further international collaboration?


  16. To my knowledge, no. The UK has an opportunity to be a leader here. In terms of international collaboration, I believe the concrete measures I have proposed:


         employment protections preventing punishment for lawful speech;

         regulation to limit moderation to unlawful speech only; and

         stronger protections for private figures seeking to have major platforms, including adult platforms, remove content that features them without consent


would be best applied with as much international agreement as possible.



December 2020



[1]              Ofcom, 2018, News Consumption in the UK: 2018, accessed 2020-12-17.

[2]              ONS, 2020, Internet Access - households and individuals, accessed 2020-12-17.

[3]              National Secular Society, 2019, Employee sacked for sharing clip mocking religion is reinstated, accessed 2020-12-17.

[4]              BBC, 2020, Oxford University professor condemns exclusion from event, accessed 2020-12-17.

[5]              Miller -v- College of Policing, [2020] EWHC 225 (Admin).

[6]              BBC, 2020, Pornhub removes all user-uploaded videos amid legality row, accessed 2020-12-17.

[7]              Chambers -v- Director of Public Prosecutions, [2012] EWHC 2157 (QB).


[9]              Weiss, Bari, 2020, Resignation letter, accessed 2020-12-17.