Simon Wadsworth – written evidence (FEO0075)


House of Lords Communications and Digital Committee inquiry into Freedom of Expression Online


My comments refer in particular to Twitter, as this is the main social media platform I use. I am only an occasional user of Facebook, WhatsApp and Instagram.


I would like to point out at the start that Twitter is generally a very good platform for sharing ideas, even if robustly at times. It has probably promoted freedom of expression more than any other – noting that it gives you the freedom to express yourself to anyone who wants to listen. There is of course no obligation to read any of my tweets, just as there is no obligation to buy a newspaper or turn on a particular TV or radio station. It is for the government and their agencies to ensure each citizen has the right to freedom of expression and not to curtail those freedoms.


There should be more effort directed at regulating political advertising and bots on social media, which can allow foreign state actors to fund and distort political campaigns without accountability.


  1. Is freedom of expression under threat online? If so, how does this impact individuals differently, and why? Are there differences between exercising the freedom of expression online versus offline?


              Not generally. It could be argued that the banning of accounts on Twitter and Instagram is not transparent, and there seems to be no right of appeal to the platform, though I have not experienced this process myself. See also my answer under Question 10.


  2. How should good digital citizenship be promoted? How can education help?


              It is important to know what freedom of expression means, and the corresponding limits placed on that freedom by incitement to hatred, harassment, and breaches of other users’ privacy (by “doxxing”) and their right to a private life. Some discussion of what to share and not share from a personal security perspective – removing geographic data from photos and so on – would also be good, but is not really relevant under the aegis of freedom of expression.


  3. Is online user-generated content covered adequately by existing law and, if so, is the law adequately enforced? Should ‘lawful but harmful’ online content also be regulated?


              If anything, the Communications Act 2003 has been misused by the police to date. The police are acting against people who have caused offence, without recognising that, unlike the telephone, you choose whom to follow and can easily block nuisance accounts or accounts that cause distress, even if they are anonymous – though if a user with many followers organises a “pile-on” on Twitter, that can be overwhelming and is something that should be looked at. See my suggestion under Question 12. There is also a lot of offence by proxy: someone taking offence on another person’s behalf at a perceived slight, even when that other person is no one they know.


  4. Should online platforms be under a legal duty to protect freedom of expression?


              No – not unless you plan to nationalise them. Comparing them to publishers of online content, where the content is produced, edited and reviewed prior to being made available, is clearly a mistake. Filters designed to prevent harmful content are more likely to reduce freedom of expression. User moderation and the platforms’ own processes seem mostly fair.


  5. What model of legal liability for content is most appropriate for online platforms?


              No change to the law. Liability should rest in the first instance with the user, and the user should be held responsible. See my note on anonymous accounts below.


  6. To what extent should users be allowed anonymity online?


              Anonymous accounts can be useful in presenting information without fear or favour, especially for people working in the public sector. Perhaps if whistleblowing were better protected there would be no need for such anonymity. The accounts are traceable by the police, though perhaps this should be codified as requiring a warrant, with adequate controls against abuse, rather than just a police officer requesting the information (I am not aware of the law in this area, so this may already be the case).


  7. How can technology be used to help protect the freedom of expression?


              Not sure what is meant by this question.


  8. How do the design and norms of platforms influence the freedom of expression? How can platforms create environments that reduce the propensity for online harms?


              The balance should be towards freedom of expression. The risk of online harm should be addressed by people feeling free to use the ‘mute’ and ‘block’ tools that already exist. These are adequate for everything except “pile-ons” (see above). A new tool to block everyone on a particular thread whom you have not chosen to follow might be a good idea for pile-ons, but it might block innocent accounts too.


  9. How could the transparency of algorithms used to censor or promote content, and the training and accountability of their creators, be improved? Should regulators play a role?


              No view.


  10. How can content moderation systems be improved? Are users of online platforms sufficiently able to appeal moderation decisions with which they disagree? What role should regulators play?


              The reasons for a ban should be explained. An appeal process should allow support from other users, if they feel the person has been unfairly treated.


  11. To what extent would strengthening competition regulation of dominant online platforms help to make them more responsive to their users’ views about content and its moderation?


              There is no cost to use Twitter, so I am not clear how competition policy would work for a free service, except in relation to any anti-competitive actions via its advertising revenue.


  12. Are there examples of successful public policy on freedom of expression online in other countries from which the UK could learn? What scope is there for further international collaboration?


              The presumption of freedom of expression, unless content is shown to meet certain thresholds such as incitement or harassment, should remain as it is. There is no right not to be offended – especially when viewing or reading someone else’s timeline is optional, not mandatory. The only possible additional control is that users with large numbers of followers should lose their ‘blue tick’ status if they are found to have caused or instigated a pile-on. Even this is problematic – see the recent case of Leeds United and Karen Carney, where a reasonable, albeit robust, response unfortunately caused many followers of the Leeds United account to direct misogynistic comments towards Ms Carney. The solution in this case may be a retrospective user option to ban (delete) replies to a post – similar in effect to stopping replies to a post ab initio. If the account holder refuses to do this, the blue tick sanction can be invoked and, in severe cases, the account blocked.



15 January 2021