Written evidence submitted by The Adam Smith Institute (OSB0129)
The Adam Smith Institute (“ASI”) welcomes the opportunity to provide a submission to the Joint Committee on the Draft Online Safety Bill’s legislative scrutiny inquiry.
The ASI is a neoliberal, free-market think tank. We are independent, non-profit and non-partisan. The ASI takes a deep interest in civil liberties, freedom of expression and digital innovation, having published numerous previous papers on these subjects.
The ASI has contributed to the policy debate about the draft Online Safety Bill (henceforth “the Bill”) since early 2019, when the Government first proposed the ‘duty of care’ model in the Online Harms White Paper. This includes substantial media commentary, inquiry submissions and independent papers. We have raised substantial concerns about the implications of the Bill for freedom of expression and competition.
This submission, authored by ASI Head of Research, Matthew Lesh, responds to the questions raised by the Committee. We would be delighted to provide further evidence, in oral and/or written form, if the Committee desires.
This submission addresses the Committee’s questions by focusing on five key conclusions:
● The Bill substantially reimagines the role of the state, making it responsible for “safety” from all forms of speech, whether lawful or not, that could cause any manner of harm, including psychological harm.
● Practically, the only way to ensure the internet is entirely ‘safe’ is for the internet to be abolished. And for that matter, if the goal is to ensure humanity is entirely safe, nothing short of a meteor striking planet Earth and extinguishing humankind would suffice. Perfect safety is neither an attainable nor a desirable goal. It is not desirable because being ‘safe’, particularly from ideas with which one disagrees, weakens society’s ability to debate controversial issues and to develop intellectually.[1]
● The “duty of care” model supposedly creates a requirement on companies to protect their users. But these are platforms for user-generated content: the speech at issue is that of their users. In practice, the Bill is not simply “regulating Big Tech”; it is regulating the legal speech of the tens of millions of citizens who use the internet every single day. The entire model of this legislation is deeply flawed.
● In the name of protecting safety, the Government is sacrificing essential liberties. Under the Bill, the ability to express perfectly legal viewpoints in online settings would be severely compromised.
● The definition of content that is harmful to an adult is extraordinarily broad and vague: “material risk of the content having, or indirectly having, a significant adverse physical or psychological impact on an adult of ordinary sensibilities”.
○ A “certain characteristic” could mean not only existing protected characteristics, such as race, sex and sexual orientation, but also physical appearance or political opinion. The Bill also requires services to assume that every user has characteristics, or could have a combination of characteristics, that could make them particularly vulnerable.
○ Similarly, an extraordinarily broad array of speech could cause ‘psychological’ harm, which is not simply limited to clinical definitions.
○ Will we be allowed to criticise politicians with memes? Would humorously edited photos of the Vote Leave campaign bus come under “disinformation”? Are questions about the makeup of the UK’s immigration system harmful? How about issues related to trans rights? There are also even more complex cases in which particular uses of language - whether sarcastic or serious, for example - can be difficult to judge in practice.
○ The reference to an “adult of ordinary sensibilities (“C”)” does not resolve this problem. There is no median when it comes to how humans view controversial content; sensibilities diverge wildly, as do views of where the ‘ordinary’ sits. Our emotional responses to speech are subjective. There can be no such thing as a person of ‘ordinary sensibility’ in this respect, nor can one be objectively identified by a company.
○ This risks the definition of harm being set by whoever is most easily shocked, or whoever claims to be shocked. In practice, it would mean companies being required to “predict, seek out, detect and then either deal with (for adult harmful content), or mitigate, manage or prevent (for various kinds of child harmful content), any number of different hypothetical black or grey hats that might or might not be present in the dark room.”[2] This would have to be done with automated processes.
● This fundamentally undermines the basic rule of law principle that legislation should proscribe specific conduct rather than prescribe open-ended duties: it should be clearly comprehensible, from the written words of the legislation itself, what is required of citizens and companies.
○ The nature of the legislation makes it ever-changing, as ideas about what is acceptable and what causes psychological distress are in flux. These matters should be reassessed regularly by Parliament directly. Laws should be focused on specific issues, not defined so broadly that they could be interpreted to apply to almost any speech.
○ The Bill’s poor, broad drafting will make it practically impossible for any company to fully satisfy the requirement to prevent psychological harm. The result will be arbitrary content moderation decisions and uneven application of the law.
● This risks having a particularly negative impact on marginalised communities. A joint group of LGBTQ+ activists, including Stephen Fry and Peter Tatchell, recently warned that the “vague wording makes it easy for hate groups to put pressure on Silicon Valley tech companies to remove LGBTQ+ content and would set a worrying international standard.”[3]
● The Bill’s model will require substantial use of automated systems to remove potentially ‘harmful’ or ‘unsafe’ content in order to avoid large fines and comply with codes of conduct. Rights of appeal under terms of use will be no help in the fast-moving world of social media debate and discussion: if an automated filter misjudges the nature of a piece of content (for example, failing to classify it as democratically important, or failing to grasp a satirical anti-racist point), later reinstatement does not restore the user’s freedom of expression.
● With respect to legal speech, companies should be able to make their own determinations as to what content they host on their platforms. They should not be mandated by the state to handle legal speech in any particular manner, as such a mandate amounts to de facto outlawing of content.
● Even with respect to safety duties about illegal content, it is likely that the Bill, as currently drafted, would lead to censorship.
● In the first instance, this is because existing UK law already infringes on basic freedom of speech principles, including through the Public Order Act 1986, the Communications Act 2003, the Terrorism Acts 2000 and 2006, and the Malicious Communications Act 1988.[4] For example, the Communications Act outlaws speech which is “grossly offensive or of an indecent, obscene or menacing character” and the sending of a message which is “false” for the purpose of causing “annoyance, inconvenience or needless anxiety”.
● Furthermore, even if the existing law were well drafted and the Bill applied only to unlawful speech, the new regulatory framework, which threatens extensive fines for companies that do not comply with the “duty of care”, would create substantial pressure to remove any content that could potentially be unlawful. This will incentivise excessive censorship: in order to identify and deal with unlawful content, it will of course be necessary to monitor all the content of law-abiding users, and potentially to deploy profiling and surveillance technology on them.
● The scope of the Bill is already extremely broad, with respect to both illegal and otherwise legal speech. This will make it difficult for both companies and the regulator to focus on priority areas of content. Accordingly, there is a strong case against bringing any more fields within scope.
● Ofcom will have extraordinary powers to prepare codes of practice, effectively setting the bounds of illegal speech and creating, through terms and conditions, de facto requirements for the removal of legal speech from services. It will also be able to instruct on the use of algorithms, predictive search and indexing, and age verification. While it would be legally permissible under the Bill to comply with the safety duties without following Ofcom’s codes, in practice service providers will follow the codes to the letter to guarantee their compliance and avoid large fines. This is, of course, the intention of the legislation.
● Ofcom’s codes will be submitted to the Secretary of State, who will be able to modify them. Together, this means Ofcom and the Secretary of State will be able to define what speech must be removed from online platforms. There is a requirement to lay codes before Parliament for 40 days; however, in the likely event that the Government of the day does not wish to debate the specifics, MPs will probably never have much say on their contents.
● The Bill provides practically unlimited and arbitrary power to ministers and Ofcom to censor online speech through codes of conduct, since almost anything can be said to have a psychological impact. The broad terms of the Bill mean there will be nothing to stop Ofcom and a future Secretary of State from including any additional measures.
● In defining what is harmful speech, the regulator could fall victim to partisan pressures, leading to the censorship of certain ideologies or political groups. This could lead to further polarisation and loss of public trust in institutions. The creation of a regulator to enforce online speech codes is a recipe for disaster.
● The supposed protections for freedom of speech contained within the Bill are weak and would create an unfair and inconsistent application of the law.
● The Bill establishes an extraordinarily broad and threatening set of duties in respect of illegal and ‘legal but harmful’ speech. In contrast, when it comes to freedom of expression there is merely a duty to “have regard to the importance of freedom of expression”.
● The duty is clearly overridden by the duties with respect to safety. The Bill states that a service will have complied with its duties in respect of freedom of expression and privacy “if the provider takes such of the steps described in the code of practice which are recommended for the purpose of compliance with a Chapter 2 safety duty”, with only an auxiliary requirement of safeguards for freedom of expression. Ofcom is empowered to determine how freedom of expression should be protected, as well as how it should be infringed.
● There is no equivalent, explicit, duty on Ofcom to uphold freedom of expression within the Bill.
● The Bill also contains special provisions to protect democratic content and journalistic content. This fundamentally reverses normal expectations with respect to freedom of expression. Rather than all speech being presumed permissible unless the law states otherwise, the only meaningful protections will apply to a limited set of speech and a limited number of people.
○ In practice, this means the regulator, ministers and platforms deciding what speech is of a ‘democratic’ nature, likely favouring those who wield political power, while all other content has no protection at all if it could be perceived to cause psychological distress to others.
○ It also means, practically for the first time in hundreds of years, the state setting a definition of who is a journalist and thereby limiting the speech rights of everyone else.
○ It is unclear why the public should have less freedom than members of the press or politicians. If the press and ‘democratic content’ are exempt from the scheme then, logically, all members of the public and all types of legal speech should be exempt.
● Remove ‘legal but harmful’ speech from the scope of content included in the Online Safety Bill and instead focus on unlawful speech. If legal speech is not removed from scope, the definition of ‘harm’ should be extremely limited and specific.
● Mandate Ofcom to protect freedom of speech as a paramount value when designing codes of conduct with respect to online safety. Not interfering with the freedom of speech of citizens should be the foremost responsibility of the regulator.
● Explicitly allow companies to refuse to comply with Ofcom codes of conduct in cases where freedom of speech of users is impacted by the directions.
● Remove the special provisions for journalists and democratic content and instead expand these protections to all members of the public.
● Include additional parliamentary oversight with respect to the categories of content within scope and the codes of conduct.
● The Bill shifts responsibility for problematic online speech from the individual who engaged in the speech to the technology platforms hosting it. The person who has committed a serious crime is not within the scope of the Bill. That means it will do little to actually address online child abuse, unlawful online abuse and other heinous crimes.
● Furthermore, the Bill would simply not apply to the substantial hubs of unlawful behaviour on the internet: specifically, the ‘dark web’, where the most serious crimes tend to take place.
● It is entirely unclear why the failure of the Government to undertake basic law enforcement should become the responsibility of legal online platforms.
● In practice, the Bill would actually make the job of tackling serious issues more difficult by mandating the immediate removal of unlawful content. While this may be appropriate in some cases, particularly for the likes of child sexual abuse material, in other cases a record of abusive speech directed at an individual can actually be helpful to law enforcement. Removing it would make actual abusers unaccountable for their actions. It would also be harder for civil society groups to monitor content online that serves as evidence in criminal cases and even war crimes proceedings.
● Furthermore, the Bill will not see a single extra penny dedicated to law enforcement for prosecuting serious online crimes or addressing urgent security threats. These are matters that can best be addressed by law enforcement, not technology companies. It is inappropriate to outsource these responsibilities.
● Before introducing new legislation, the Government and the Committee should consider the use of existing legal mechanisms, common law principles and better-resourced police enforcement to tackle digital crime. For example, in the case of harassment, the Government could instruct law enforcement agencies to seek court-ordered Injunctions to Prevent Nuisance or Annoyance (IPNAs) against individuals responsible for serious online abuse. This would target individuals rather than creating broad censorship of legal speech.[5]
● Furthermore, with respect to ‘legal but harmful’ speech, there is little evidence that censorship can create a more harmonious society. The classic proponents of freedom of expression made the point that censorship of the public sphere allows bad ideas to continue unabated in private spaces, where there is no opportunity to respond. The best response to bad speech is more speech, to actually contest the ideas.
- Provide police and prosecution services with the necessary resources and training to combat online crimes
- Explore existing legal mechanisms (such as IPNAs) to tackle serious online harm at an individual level
● The Bill requires services to treat users as children by default. It appears the only way to show content that is not child-friendly will be to undertake robust age verification – meaning companies asking users to enter driving licence, passport or credit card details to ensure services are age-appropriate.
● The Bill creates requirements on services to have “processes” to minimise the presence of priority illegal content, and the time for which it is present. In practice, the only way to ensure compliance with this duty would be to scan private messages for certain priority content. Such state-mandated general monitoring of content raises serious privacy issues. It would be the equivalent of the Post Office being mandated to open all of our private letters and read their contents to check for certain unlawful material.
● The Bill fails to address how new mandates on companies to use technology to tackle serious crimes could undermine privacy and encryption.
● There have been particularly notable issues with racial bias in the algorithms used by technology companies.[7]
- Remove private messaging entirely from the scope of the legislation
- Do not place requirements to scan encrypted messaging services
- Remove requirements for robust age verification with respect to children
● The Bill creates an extraordinary set of duties on companies of all sizes, particularly with respect to risk assessments, as well as an expansive safety mandate. Larger technology companies have the resources to develop policies and procedures, hire moderators and build artificial intelligence to comply with the law. Having to repeat and update risk assessments with every technological change will be a serious obstacle to innovation, especially for smaller (and often the most dynamic) companies.
● The Government’s impact assessment indicates that the proposals will cost businesses £2.1 billion, with an extraordinary £1.7 billion expected to be spent on content moderation. These costs will be crippling for start-ups and scale-ups, cementing the power of big tech. Even this could prove an underestimate once lost competition, innovation and investment are taken into account.
● Facebook founder Mark Zuckerberg warned a meeting of the US Congress that “When you add more rules that companies need to follow, that’s something that larger companies like ours just have the resources to go do and it just might be harder for a smaller company just getting started to comply with.”[8] Facebook is a multi-billion-dollar company that can afford to comply with government regulation in numerous countries by hiring thousands of censors. It is the smaller, newer companies that will struggle to moderate potentially offensive material.
● It will be nigh on impossible for smaller firms to fully comply with this legislation, particularly start-ups entering the market with limited resources. This could ultimately lead to a substantial decrease in investors’ willingness to back online businesses in the United Kingdom, seriously undermining the Government’s broader goal of promoting online competition.[9] Coadec has previously found that 68 per cent of UK investors would reduce investment in local platform businesses in response to increased liability.[10]
● Narrow the scope of the legislation to ‘Category 1’ firms, removing small and medium-sized businesses
● Entirely remove ‘business to business’ services from the scope of the Bill
● The Government should be aware that broad terms like “safety” and “harms” are often abused by authoritarian regimes to justify illiberal censorship.
● The Government should be wary of developing a model, particularly one with built-in discretion for regulators and ministers to limit freedom of expression, that could be copied by less liberal and less democratic countries. For example, Pakistan’s recently passed digital regulations appear to directly copy the approach of the UK’s “online harms” white paper.
● In the past, Germany’s NetzDG law inspired new online censorship legislation in Russia, Kyrgyzstan and Turkey.
● These proposals would make the UK a global leader in online censorship, rivalling the likes of China, Russia and Turkey.[11]
● The internet is a force for good. We have learned over the last eighteen months the power of technology to connect humanity during some of our darkest moments in recent history.
● We have stayed connected while we had to physically distance, and have been able to continue our work and our education.
● For many years, we have seen how technology has built communities of shared interests, democratised access to information, and created millions of jobs. Online services are essential to our lives and provide substantial value.
● The Government’s efforts to regulate the online space will seriously undermine this gigantic contribution to humanity.
22 September 2021
[1] https://www.theatlantic.com/magazine/archive/2015/09/the-coddling-of-the-american-mind/399356/
[2] https://www.cyberleagle.com/2021/06/on-trail-of-person-of-ordinary.html
[3] https://inews.co.uk/news/online-safety-bill-would-give-legal-basis-for-censorship-of-lgbt-people-stephen-fry-and-campaigners-warn-1178176
[4] https://www.adamsmith.org/research/sense-and-sensitivity-restoring-free-speech-in-the-united-kingdom
[5] https://freespeechunion.org/youre-on-mute-the-online-safety-bill-and-what-the-government-should-do-instead/
[6] https://inews.co.uk/news/online-safety-bill-would-give-legal-basis-for-censorship-of-lgbt-people-stephen-fry-and-campaigners-warn-1178176
[7] https://www.theguardian.com/technology/2020/sep/21/twitter-apologises-for-racist-image-cropping-algorithm; https://www.vox.com/recode/2019/8/15/20806384/social-media-hate-speech-bias-black-african-american-facebook-twitter
[8] https://reason.com/2019/04/05/mark-zuckerberg-calls-for-government-reg/
[9] https://www.gov.uk/government/publications/unlocking-digital-competition-report-of-the-digital-competition-expert-panel
[10] http://coadec.com/wp-content/uploads/2018/12/The-Impact-of-Regulation-on-the-Tech-Sector.pdf
[11] https://www.pressreader.com/uk/scottish-daily-mail/20190408/281672551317386