
 

Rachel Mary Allen – written evidence (FEO0076)

 

House of Lords Communications and Digital Committee inquiry into Freedom of Expression Online

 

 

Q1.              Is freedom of expression under threat online? If so, how does this impact individuals differently, and why? Are there differences between exercising the freedom of expression online versus offline?

 

Freedom of speech is at risk because digital monopolies work hand in hand with the state to censor content that disagrees with government narratives. For example, YouTube removed talkRADIO content featuring the lockdown sceptic Peter Hitchens because the government does not want people questioning lockdowns. Twitter has gone so far as to ban the US President outright. That was a politically motivated ban, imposed because the establishment dislikes Trump.

 

 

Q2.              Is online user-generated content covered adequately by existing law and, if so, is the law adequately enforced? Should ‘lawful but harmful’ online content also be regulated?

 

Content that does not break the law should not be regulated in any way, precisely because it is not illegal. Regulation of ‘lawful but harmful’ content will be used as an excuse to shut down dissenting opinions. For example, the government could use it to shut down anti-lockdown content by claiming it is ‘harmful because it spreads the virus’, or anti-war content on the grounds that it is ‘harmful to national security’. There are also significant problems with defining harm, because many interest groups may try to remove content critical of their ideological position by claiming it is offensive. This is very common with trans rights activists, who claim that questioning the idea of gender identity is ‘harming’ them, despite the fact that anyone should be free to question, or believe in, any ideology they want.

 

 

Q3.              Should online platforms be under a legal duty to protect freedom of expression?

 

When it comes to Twitter, Facebook and YouTube, I do not think they can fully be described as ‘private companies’, because they clearly delete material in line with the official narrative (deleting The Last American Vagabond, for example, because of his criticism of the Covid-19 narrative and lockdowns). Facebook’s fact checkers have links to the Atlantic Council, which in turn has links to many former American government figures such as Henry Kissinger. Because these companies are intertwined with the state, I do not believe the argument that ‘they are private companies, therefore they can ban whoever they want’ really applies. Given the connections between the state and the tech monopolies, they should have an obligation to protect free speech.

 

 

 

Q4.              What model of legal liability for content is most appropriate for online platforms?

 

I have concerns about legal liability for platforms because I fear it will have a chilling effect on freedom of speech. If these companies can be fined massive amounts of money for failing to ban certain things, then they will err on the side of banning material even when it is not illegal. I do not think they should have legal liability for hosting hate, bigotry or ‘misinformation’, however those terms may be defined. Any legal liability should be restricted to clear and limited categories of genuinely illegal material, such as the hosting of child sexual abuse images.

 

 

15 January 2021