House of Lords Communications and Digital Committee - Inquiry into freedom of expression online
Introduction: I am a private individual with an interest in freedom of expression. I use social media and I am concerned that there is increasing content moderation/censorship for political and ideological purposes.
1. Is freedom of expression under threat online? If so, how does this impact individuals differently, and why? Are there differences between exercising the freedom of expression online versus offline?
I have observed that women, and men who are seeking to protect women, are particularly prone to online censorship. This tends to happen when challenging gender ideology or when claiming, perfectly accurately, that biological sex is a real thing and that it matters. I have had such tweets removed and I have been banned temporarily for such tweets. In particular, challenging the claim that “transwomen are women” is likely to lead to censorship, when, in reality, transwomen are male.
3. Is online user-generated content covered adequately by existing law and, if so, is the law adequately enforced? Should ‘lawful but harmful’ online content also be regulated?
I think the laws should be the same online and offline. Platforms should be able to set their own rules; given this, it is vital that there is free and open competition so that alternative platforms can develop and flourish and users have a choice. Currently this is not working: the market is something of a monopoly, and it is clear that the big players (Facebook, Twitter, Apple, Google) are all acting together to kill competition (eg Parler). Government should endeavour to stop the big players from acting in this way.
4. Should online platforms be under a legal duty to protect freedom of expression?
I think that would be a good thing, but I am not sure how it could be achieved.
6. To what extent should users be allowed anonymity online?
Anonymity must be allowed online. It affords protection for people expressing views that eg their employer or partner might disagree with. The problems with anonymity largely concern abusive accounts, which are relatively small in number. Address these, but do not ban anonymity.
9. How could the transparency of algorithms used to censor or promote content, and the training and accountability of their creators, be improved? Should regulators play a role?
I think it would be best to have truly independent regulation in this area, particularly to investigate and highlight political or ideological bias in algorithms. Much of the content that is promoted is political or ideological. This applies equally to search engines that distort history or return results reflecting what the platform thinks is ‘right’ rather than what was asked for. The same concerns apply to censorship. Independent regulators should play a role.
10. How can content moderation systems be improved? Are users of online platforms sufficiently able to appeal moderation decisions with which they disagree? What role should regulators play?
Periodic independent review of moderation would seem to be eminently practical and worthwhile. It would help establish whether there is political or ideological bias in the methodology or aims of the moderators/platforms.
11. To what extent would strengthening competition regulation of dominant online platforms help to make them more responsive to their users’ views about content and its moderation?
This would be a great initiative: it would make dominant platforms more responsive to users’ views, and more competition must be a good thing in this area. Currently, the big players face minimal threat from competition and are able to destroy competitors at will - eg Parler’s recent experience. Users have very limited choice.
13 January 2021