Mrs Claire Loneragan—written evidence (FEO0040)


House of Lords Communications and Digital Committee inquiry into Freedom of expression Online


I am a UK citizen who is increasingly concerned that a so-called “progressive” orthodoxy has assumed moral authority within our cultural institutions, our organs of state, education and commerce, and among those who police the digital public square.


I have watched with growing alarm what happens to those who speak out against ideology rooted in critical theory, and I have seen how women in particular are punished for daring to speak out against gender ideology: women are losing their jobs and being attacked, threatened and abused for stating facts that were considered quite unremarkable five years ago. At no point has gender ideology been required to prove the claims it makes (that people can change sex, that transwomen ARE women, or that transgender people are endangered by certain statements of fact). I am submitting this evidence because truth is under threat.


When people see obvious lies held up as facts (e.g. that Eddie Izzard is a woman, that we don’t know if a person is male or female until they’ve declared their pronouns, that “babies are born without a sex”) on the television, in newspapers and even in government, are we really surprised that they lose faith in the authorities over vaccines, the efficacy of lockdowns and how PPE contracts are awarded?


When we lose our freedom of expression, we become more vulnerable to cults, extremists and ideologues. Our best protection is the truth and open debate, and we must be able to say what we believe to be true no matter how uncomfortable some may find it.


  1.         Is freedom of expression under threat online? If so, how does this impact individuals differently, and why? Are there differences between exercising the freedom of expression online versus offline?


Yes – because the big tech companies are effective monopolies that have taken ideological positions on key cultural matters. Twitter, Facebook, Instagram and YouTube are where people go to find information, but these companies promote voices they agree with and suppress those they don’t like (either by deboosting them or removing them altogether).


Freedom to state one’s beliefs should not be confused with freedom to say anything at all. Insults, threats and incitement to violence are not at all the same as statements of personal belief, and there is a clear line to be drawn. An objective statement of fact should NEVER be deemed insulting or threatening.


  2.         How should good digital citizenship be promoted? How can education help?


I don’t believe that people behave badly online because they don’t know any better. I believe they do it because they can. There will always be some who are mean-spirited, abusive or bitter. The question is, are they held to account? What I have witnessed on Twitter is that many are not, and that encourages others.


  3.         Is online user-generated content covered adequately by existing law and, if so, is the law adequately enforced? Should ‘lawful but harmful’ online content also be regulated?


  4.         Should online platforms be under a legal duty to protect freedom of expression?


It depends on the status of the online platform:


a)    If it is a platform rather than a publisher, and therefore not responsible for the content hosted, governments must set the rules for acceptable content within their jurisdiction (defined by the geographical location of the end user’s ISP). Content that is not in breach of these regulations, no matter how distasteful, must not be removed/suppressed. The platform should be able to demonstrate that every reasonable effort is made to comply with local requirements, and that content which does not comply is removed/suppressed (e.g. incitement to violence in the UK). This would of course require governments to legislate for what is allowed – as is already done for physical news media – and in repressive regimes, platforms would be obliged to remove/suppress content they might prefer to leave.


b)    If a publisher, and therefore responsible for content, the platform host would already have to comply with legal requirements and should be able to choose what is and is not published. But, crucially, local jurisdictions would have to take action over breaches including defamation and libel (i.e. prosecute the online platform), for reasons of consistency and transparency. As soon as a platform makes content decisions for reasons other than regulatory compliance, they declare themselves to be a publisher and assume responsibility for all content.


  5.         What model of legal liability for content is most appropriate for online platforms?


  6.         To what extent should users be allowed anonymity online?


Anonymity online is essential for some people, whose jobs and personal safety, and the safety of their families, must be protected. The unquestioning adherence of so many organisations to critical theory has made the consequences of telling the truth or making counter-arguments too costly.


Even where a person’s livelihood is not at stake, the social consequences of stating views that run counter to one’s cultural group can be very high. For instance, apostasy is deemed unacceptable by some religious groups.


  7.         How can technology be used to help protect the freedom of expression?


This is not a technology issue. This is a monopoly and/or orthodoxy issue. Women are banned from Twitter for stating facts; there is no process for appeal and no alternative platform with similar reach and functionality (especially now that Parler has been removed).


  8.         How do the design and norms of platforms influence the freedom of expression? How can platforms create environments that reduce the propensity for online harms?


See question 7 – this is not a technical design question. If a person is deemed to have said the unsayable (e.g. calling a trans identified male “dude”, or promoting a petition in support of JK Rowling – both have happened in the last three months) then that person loses their Twitter account and there is no process of appeal.


That males regularly issue rape threats and death threats is not because the design of Twitter is wrong; it is because Twitter’s technical support does not act to remove these individuals when they are reported (much less proactively seek to remove them). This further entrenches the “bro” culture.

The cultural norms of the platforms reflect the world view of their moderators, which is not subject to external regulation.


  9.         How could the transparency of algorithms used to censor or promote content, and the training and accountability of their creators, be improved? Should regulators play a role?


  10.     How can content moderation systems be improved? Are users of online platforms sufficiently able to appeal moderation decisions with which they disagree? What role should regulators play?


Twitter has no appeal process.


Bret Weinstein (a liberal but anti-critical-theory evolutionary biologist) recently had his Facebook account terminated with no reason given, and was told that the decision was final with no appeal process. Because he is a high-profile individual with influential personal contacts, he was able to get his account reinstated, and was told it had been a “mistake”. This avenue would clearly not be available to most people.


This is not like having one’s membership of a sports club terminated, because these companies are effective monopolies, and Twitter, Facebook, YouTube etc. are increasingly essential services, taking over much of what was once done by public libraries. Information about education, education itself, work opportunities, local government information, cultural information, commercial information and travel information are increasingly found only on these platforms.

What is happening is tantamount to being told 30 years ago that you could not use any public library in the UK because you said in a public place that men could not become women. And then being told that it’s not actually a problem because you can still use bookshops and buy newspapers.


  11.     To what extent would strengthening competition regulation of dominant online platforms help to make them more responsive to their users’ views about content and its moderation?


This is vital, and increasingly urgent. Not only are the internet giants de facto monopolies in their respective domains; they also move in lockstep (in other words, they act as a cartel). Fall foul of one and you fall foul of all of them, as Parler found when its app was pulled from the Google and Apple app stores on the same day and Amazon Web Services deplatformed it the following day.


It’s all very well to suggest that if Twitter deplatforms enough people then a new platform will be developed to fulfil the market need, but Parler is clear evidence that this is unrealistic.


Furthermore, as dissenting voices are removed from the “digital public square”, others will not be exposed to the counter-arguments, leaving, for example, critical theorists as the only voices in the room on key issues. Pushing dissenting voices to other platforms will have a polarising effect, further hardening the echo chambers.


  12.     Are there examples of successful public policy on freedom of expression online in other countries from which the UK could learn? What scope is there for further international collaboration?



15 January 2021