Written evidence submitted by the Adam Smith Institute
8/09/2021
Digital, Culture, Media and Sport Sub-committee on Online Harms and Disinformation
United Kingdom Parliament
Westminster
London SW1A 0AA
Via email: cmscom@parliament.uk
Adam Smith Institute response to the Digital, Culture, Media and Sport Sub-committee on Online Harms and Disinformation’s Online safety and online harms inquiry
The Adam Smith Institute (“ASI”) welcomes the opportunity to provide a submission to the Digital, Culture, Media and Sport (DCMS) Sub-committee on Online Harms and Disinformation’s Online safety and online harms inquiry.
The ASI is a neoliberal, free-market think tank. We are independent, non-profit and non-partisan. The ASI takes a deep interest in civil liberties, freedom of expression, and digital innovation, having published numerous previous papers on these subjects.
The ASI has contributed to debate about the Online Safety Bill since early 2019, when the Government first proposed the ‘duty of care’ model in the Online Harms White Paper. This includes substantial media commentary, inquiry submissions and independent papers. We have raised substantial concerns about the implications of the Bill on freedom of expression and competition.
This submission responds briefly to the questions raised by the Committee. We would be delighted to provide further evidence, in oral and/or written form, if the Committee desires.
1. How has the shifting focus between ‘online harms’ and ‘online safety’ influenced the development of the new regime and draft Bill?
- This change appears to be largely a matter of semantics. The change from “harms” to “safety” has not fundamentally shifted the nature of the proposals or the threat that they pose to freedom of expression. The Committee should perhaps focus on the substantive issues raised by the model rather than the name of the legislation.
2. Is it necessary to have an explicit definition and process for determining harm to children and adults in the Online Safety Bill, and what should it be?
- Yes, there should be an explicit and limited definition of harm.
- The inclusion of ‘lawful but still harmful’ speech represents a frightening and historic attack on freedom of expression. The Government should not have the power to instruct private firms to remove legal speech in a free society.
- The current definition focuses on any manner of “physical or psychological impact” to either a child or adult’s sensibilities, with special consideration of any “certain characteristic”. This is extremely vague and dangerous. Since practically anything can have a psychological impact, it hands ministers and Ofcom practically unlimited and arbitrary power to censor online speech through guidance. The vagueness of the legislation means there will be nothing to stop Ofcom and a future government from including any additional measures in future.
- Even if Ofcom takes a liberal approach when defining harm, the threat of large fines to technology companies will create an incentive to over-censor speech.
- This risks having a particularly negative impact on marginalised communities. A joint group of LGBTQ+ activists, including Stephen Fry and Peter Tatchell, recently warned that the “vague wording makes it easy for hate groups to put pressure on Silicon Valley tech companies to remove LGBTQ+ content and would set a worrying international standard.”[1]
- The definition of harm should focus exclusively on unlawful speech, which already covers a wide range of issues. There should be an explicit section in the Bill to prevent Ofcom from issuing guidance requiring companies to censor legal speech.
3. Does the draft Bill focus enough on the ways tech companies could be encouraged to consider safety and/or the risk of harm in platform design and the systems and processes that they put in place?
- The Bill completely fails to consider the unintended consequences of automated systems on freedom of expression. The Bill’s model will require substantial use of automated systems to remove potentially harmful or unsafe content to avoid large fines and comply with codes of conduct.
- There have been particularly notable issues with racial bias in algorithms used by technology companies.[2]
4. What are the key omissions to the draft Bill, such as a general safety duty or powers to deal with urgent security threats, and (how) could they be practically included without compromising rights such as freedom of expression?
- The Bill fails to protect freedom of expression, nor will it do much to tackle genuinely urgent security threats.
- The Bill would make it more difficult for law enforcement to tackle abusive behaviour by mandating the immediate removal of abusive content, destroying evidence that could be used against perpetrators. This would make actual abusers unaccountable for their actions.
- The Bill will not see a single extra penny dedicated to law enforcement, prosecuting serious online crimes, or addressing urgent security threats. These matters are best addressed by law enforcement, not technology companies. It is inappropriate to outsource these responsibilities.
- Before introducing new legislation, the Government and Committee should consider the use of existing legal mechanisms, common law principles and better resourcing of police enforcement to tackle digital crime.
- For example, in the case of harassment, the Government could instruct law enforcement agencies to seek court-ordered Injunctions to Prevent Nuisance or Annoyance (IPNAs) targeting individuals responsible for serious online abuse. This would target individuals rather than creating broad censorship of legal speech.[3]
5. Are there any contested inclusions, tensions or contradictions in the draft Bill that need to be more carefully considered before the final Bill is put to Parliament?
- The Bill is a walking contradiction.
- The Government claims that freedom of expression is not threatened; however, the Bill includes special exemptions for democratic and journalistic content. These exemptions would not be necessary if the legislation did not threaten free speech.
- It is deeply problematic to create exemptions for some groups and not others with respect to freedom of expression. Who you are should not decide whether your speech is protected under the law. The fact that an individual happens to be a politician or journalist does not make their speech of higher value.
- The powers given to the Secretary of State to direct Ofcom’s priorities risk politicising online content moderation.
- The Government claims privacy is not threatened; however, the Bill will effectively mandate age verification, meaning companies would require users to submit driving licences or passports to ensure services are age-appropriate.
- This will have a particularly concerning effect on minority groups: “Growing calls to end anonymity online also pose a danger. Anonymity allows LGBTQ+ people to share their experiences and sexuality while protecting their privacy and many non-binary and transgender people do not hold a form of acceptable ID and could be shut out of social media.”[4]
6. What are the lessons that the Government should learn when directly comparing the draft Bill to existing and proposed legislation around the world?
- The Government should be aware that broad terms, such as “disinformation” and “harms”, are often abused by authoritarian regimes to justify illiberal censorship.
- The Government should be wary of developing a model, particularly one with built-in discretion for regulators and ministers to limit freedom of expression, that could be copied by less liberal and less democratic countries. For example, Pakistan’s recent passage of new digital regulations appears to directly copy the approach of the UK’s Online Harms White Paper.
- In the past, Germany’s NetzDG law inspired new online censorship legislation in Russia, Kyrgyzstan, and Turkey.