Written evidence submitted by LINX (COR0150)

About LINX

 

  1. LINX, the London Internet Exchange, is a membership association for network operators and service providers exchanging Internet traffic. It is part of our core mission to represent our members’ interests in matters of public policy.
  2. We have more than 850 member organisations, including most major UK ISPs, most former incumbent European operators, all the world’s largest online consumer social platforms, other major content services, and technical infrastructure services such as content delivery networks.
  3. LINX has worked for 25 years on behalf of its members on the development of policy relating to the regulation of Internet content and the responsibilities of Internet intermediaries for Internet content. We have worked in cooperation with the Home Office, law enforcement representatives, DCMS and predecessor departments, and the Internet Watch Foundation, which we helped to establish.
  4. We are committed to a regulatory environment that protects a workable and responsible balance between the legitimate interests of the State in suppressing illegal content, and the legitimate interests of content publishers and users in being able to exercise their right to receive and impart information within the law. We are similarly committed to achieving regulatory responsibilities for Internet intermediaries that are fair, balanced and proportionate, technically viable and economically feasible, and that forbear from placing an undue burden on services or closing the window to future innovation.
  5. We thank the Committee for the opportunity to contribute to its enquiry into the government’s developing policy for the regulation of “online harms”. These issues, and the prospective requirements the government envisages imposing on our members, are of interest to us all.
  6. This submission represents LINX’s corporate response, written with the intention of contributing to sound public policy for the benefit of all its members and society as a whole. As with any large membership organisation, there is no suggestion that this response is individually endorsed by each of LINX’s 850+ members.

Background to this enquiry

 

  7. While the government’s Online Harms policy consultation, and its interim response to the comments received in reply, were published before the coronavirus disease Covid-19 became a public health emergency in the UK, the Committee’s enquiry is taking place during a period of unprecedented public restrictions known as “the lockdown”. The entire country is worried about the threat to public health, as well as each for their own health. If the government’s public health messaging were not sufficient in itself, and the emotional impact of the appalling daily death count were unduly reduced by the antiseptic presentation of government statistics, then surely the hospitalisation of the Prime Minister himself must have driven home the point to every UK citizen that nobody can be sure of remaining untouched by this dreadful disease.
  8. It is therefore inevitable that the Committee’s enquiry into a subject that is actually perpetual and enduring will be coloured by the exigencies of the moment. The public is eager for information and desperate for reassurance. Certain critical information is, however, as yet unknown to science; other information is being developed, and medical hypotheses are being investigated and confirmed or refuted while the crisis is in progress, and often in the harsh and unforgiving light of public scrutiny. A crucial part of the government’s response to the crisis, and protection of the public, is a public information campaign, including in particular guidance and instructions to the public on how to act (and ordinary activities from which they must extraordinarily refrain).
  9. It is of course regrettable that, into this already most challenging environment, loose talk, rumours, bad science, conspiracy theories and even advice to the public that would be highly dangerous if followed are all peddled as secret knowledge or reasonable propositions, or even fraudulently mislabelled as official advice.
  10. Thanks to the Internet, not only broadcasters and press barons but every member of society has a voice that can be heard. On balance, we believe this has been enormously beneficial for society, but it does come with a corresponding and inescapable downside. While most people are sensible, the foolish and mendacious can reach an audience never previously possible with a merely literal soapbox. Worse, the “engagement” algorithms of social media platforms make no distinction between curiosity about the extreme and a deeper interest in and acceptance of credible information, resulting in an apparent amplification of controversial or non-credible messages.
  11. We have no doubt, therefore, that this Committee will hear from a large number of stakeholders who encourage the government to take firm action to rid the Internet of such content. Even amongst our own members, some of the largest have urged the government to silence those who give credence to the conspiracy theory that Covid-19 is spread by 5G mobile network cell towers. At the extreme, adherents to this theory have engaged in assaults on engineering personnel and arson attacks on cell towers – crimes that could credibly be characterised as acts of domestic terrorism. While there is little the government could do to convince the perpetrators of such crimes, it is perhaps understandable that some would think the next best thing is to suppress any propagation of the extremist theory motivating these attacks, except as part of carefully calibrated official messaging designed to discredit the theory.
  12. Moving beyond the Covid-19 crisis, we also anticipate that a broad range of stakeholders will be able to identify a plethora of Internet content they believe to be both harmful and dangerous. We expect they will call on the Committee to use its influence to encourage the government to progress urgently with proposals to empower Ofcom with sweeping powers to purge the Internet of dangerous and harmful content, for the protection of the public in multiple dimensions.
  13. We do not join this chorus. Instead, we invite the Committee to cast a more dispassionate eye on the issue before it, and to give sober consideration to the likely actual consequences of such a policy, beyond its apparently benign aspirations.

A short description of the Online Harms policy

 

  14. Before criticising the Online Harms policy, it is worth setting out the core essence of that policy.
  15. The policy, as published in the White Paper of April 2019, would introduce a new “duty of care” to protect the public from harm. This duty would apply to a very wide range of Internet intermediaries, including most categories of service that carry content generated by the service’s users (i.e. “user generated content”). It would not apply to services that only carry content produced by and under the editorial control of the service operator.
  16. The characterisation of the duty as a “duty of care” has been subject to criticism from legal experts, in that it does not conform to any of the usual norms of a duty of care at common law. However, we think this a technical point: we read the duty as simply being a legal mechanism to confer a form of statutory duty on Internet intermediaries, so as to co-opt them as the enforcement arm of a public policy for the suppression of harmful content and behaviour online. In our view, analysis of the duty should be directed mostly at whether intermediaries should be co-opted in such a fashion, how, and for what purposes, whether the mechanisms proposed are justified, reasonable and proportionate to the ends sought, and whether the likely outcomes survive a cost benefit analysis.
  17. The most striking feature of the Online Harms proposal, which has endured a great deal of public criticism, which we join, is that it is intended that the legislation implementing this policy and imposing the duty will neither define “harm” nor set out any clear limits to what might be considered harmful, apart from certain limited exclusions, nor will it set out what Internet intermediaries might be required to do to limit harmful content and behaviour. Instead, an Internet regulator will decide these things by publishing Codes of Practice by which such intermediaries will be expected to abide. Failure to comply adequately with these Codes may be taken as constituting a breach of the duty of care, and render intermediaries subject to enforcement and sanctions including, it is suggested, draconian measures such as personal criminal liability for corporate officers.
  18. This is such a bold departure from the normal practice of law-making that it is worth boiling this down to its rawest essence. A state agency would be given the power to decide what types of information may not be permitted[1], and what actions Internet companies must take to suppress what it considers, in its barely fettered discretion, to be harmful.
  19. If there are to be any limits on the discretion of the regulator to decide what content or behaviour should not be tolerated, it is not clear what they are – apart from a small list of exceptions that describe not protections for fundamental rights which the regulator must not infringe, but rather categories of illicit content suppressed by other policies and law enforcement agencies, on whose territory this regulator may not trespass. Nor are there any clear limits to the types of intervention the regulator may require of Internet operators.
  20. It is almost incidental at this point that the regulator is expected to be an independent statutory body, and so largely independent of democratic accountability through Ministerial control and the Minister’s accountability to Parliament. The regulator may, and indeed must, develop and apply its own policy.
  21. Independent statutory regulators are not a novelty, of course, nor are they necessarily a bad model for appropriate regulation. What differentiates the Online Harms regulator is the proposed paucity of statutory guidance or constraints.
  22. Presumably, this seemingly untrammelled power would still ultimately be constrained by the Human Rights Act and the possibility of judicial review according to the Wednesbury principle that the courts will reverse any official decision that is “so unreasonable that no reasonable [authority] acting reasonably could have taken it[2]”.
  23. For an Internet intermediary protesting a regulatory decision, that is a thin reed at which to clutch.
  24. Parliamentarians might also take note that in seeking to rely on the Human Rights Act and judicial review to constrain the regulator to the decisions Parliament intends, rather than codifying reasonable parameters to the regulator’s authority on the face of the implementing Act, Parliament would effectively be transferring to the courts that portion of its own power to legislate in this area that it has not delegated to the regulator.
  25. It is worth noting that following the change from the May to the Johnson administration, the government has indicated one important change in the policy announced in the April 2019 White Paper. In April 2019 the policy was that the regulator would seek to make the UK “the safest place in the world in which to go online” by “establish[ing] a regulatory framework that seeks to tackle [a] range of online harms” including “more specific and stringent requirements for those harms which are clearly illegal, than for those harms which may be legal but harmful, depending on the context” but certainly including “harmful content or activity that may not cross the criminal threshold”. By February 2020 this had evolved to the following statement:

We are also introducing greater transparency about content removal, with the opportunity for users to appeal. We will not prevent adults from accessing or posting legal content, nor require companies to remove specific pieces of legal content. The new regulatory framework will instead require companies, where relevant, to explicitly state what content and behaviour is acceptable on their sites and then for platforms to enforce this consistently.

From this it is clear that “legal but harmful” content will remain within the scope of the regulator’s authority, and accordingly the regulator is still to be asked to determine what lawful content it considers harmful. The regulator may then act to enforce “consistency” by companies in suppressing harmful content where they have indicated that that content is not acceptable on their service. In doing so the regulator will enforce the overall consistency, rather than act as a route of appeal in individual cases. What is not clear from this is whether the regulator will have the power to require companies to prohibit from their service certain categories of content it has deemed harmful: the actions of individuals are excluded from the scope of the regulator’s oversight, but the policy of companies is not.

  26. This ambiguity notwithstanding, we suspect the best reading of the government’s Initial Response of February 2020 is that it signals the government is willing to tolerate companies deciding for themselves whether to permit particular categories of lawful content on their service, even where the regulator has determined that these are harmful. We would welcome clarification of whether this is indeed the government’s settled intent.
  27. If the government has indeed moved to accept that it is up to private companies to determine for themselves what types of content they permit on their service, within the constraints that duly enacted law provides, that raises the question of whether it is then appropriate to invite a regulator to determine that certain types of lawful content are nonetheless officially disfavoured and still subject to potentially severe regulatory enforcement action in some circumstances. Does that not still create a chilling effect, where companies are nonetheless induced to prohibit disfavoured categories of content? Is this, in fact, the intention? And if so, do the criticisms of administrative overreach of the legislative function not remain valid?
  28. In short, we believe the government needs to clarify: is it still the government’s policy that an Internet regulator should seek to reduce the visibility of Internet content that the regulator deems lawful but harmful? We urge the Committee to press the government for a clear statement on this point.
  29. We also anticipate that the Committee will wish to consider whether to criticise this apparent adjustment in the policy, and urge the government to return to the more restrictive position set out in the 2019 White Paper. We expect that some witnesses may invite the Committee to do so. It is to that issue we address the remainder of this evidence.

 

Learning the lessons of foreign experience

 

  30. On 30th December 2019, Dr Li Wenliang, an ophthalmologist at Wuhan Central Hospital, posted a warning to fellow medical professionals on Chinese social media[3]. Alerting them to an outbreak of a virus he believed looked like Sars, he warned them to wear protective clothing. The cases were thought to come from the Huanan Seafood market in Wuhan, and were patients in his hospital.
  31. Four days later, officials of the Chinese Public Security Bureau ordered him to stop making his comments and to recant them. He was made to sign a letter acknowledging his understanding of instructions to stop “making false comments” and “severely undermining public order”.
  32. This suppression of what turned out to be a vital early warning of the Covid-19 outbreak certainly cost lives in China. Dr Li himself contracted Covid-19 four days after being forced to recant his warning. He died of the same illness at 02:58 on 7th February 2020. During that period, medical practitioners continued to treat patients without PPE, and the virus grew from an officially recognised 291 cases on 20th January to 31,198 cases on the date of Dr Li’s death. At this point, the virus had grown beyond containment: a China-wide epidemic was inescapable if not already in progress, and a global pandemic was probably no longer avoidable.
  33. The decision of Chinese information officials to attempt to suppress the early news of the virus has since been subject to significant criticism in the West. We submit that, properly analysed, it misplaces responsibility to blame the individual decisions of particular Chinese authorities, who arguably applied long-standing Chinese public order policy in a reasonable fashion. The fault lies instead with the policy of suppression itself.
  34. Firstly, Chinese Public Security Bureau officials made a determination that Dr Li’s warning contained “false information”. If, for the purposes of analysing their individual responsibility, we accept that those officials were required by policy to make a determination of truth or falsity, then it is far from clear that they acted unreasonably in judging Dr Li’s warning as containing “false statements”. It was a statement of an individual doctor, unsupported by more than his own anecdotal evidence. There had been no official or medically controlled investigation into the claims he was making public, they were not supported by any recognised medical authority (not even that of his own hospital), nor had he any corroboration from colleagues. Even in hindsight, his specific claim that Sars had broken out again turned out to be inaccurate.
  35. Having determined that Dr Li’s information consisted of “false statements”, those officials could then defend their action to suppress social media postings of this nature by arguing that false medical claims about the outbreak of deadly contagious disease (and similar claims unsupported by verified scientific evidence) are indeed harmful to public order. They induce fear, have a tendency to exacerbate intra-community tensions, and stimulate conspiracy theories about government cover-ups as well as wild and often dangerous stories about folk remedies and recommendations.
  36. Was such a determination that Dr Li’s statements were harmful and should be suppressed for public protection an unreasonable application of the rules? In the UK we would use the Wednesbury principle in judicial review to determine whether a public authority acted so unreasonably as to be outside the scope of its authorisation. By that UK standard, the Chinese officials’ application of the rule for the suppression of harmful content would almost certainly be upheld.
  37. If blame is to be apportioned in this story, we submit that the fault is inherent in the policy itself, not merely its application by particular officials.
  38. Accordingly, Dr Li’s case poses a major challenge to the proposal in Online Harms, and especially to those who would prefer to return to the stricter policy contained in the original White Paper and reverse any softening found in the 2020 Initial Response. If the negative consequences of Chinese suppression of Dr Li’s social media posts can be attributed to the policy itself, as we assert, and cannot be dismissed as merely an unreasonable application of the policy, then it is clearly relevant to examine the similarity between the rules Chinese officials were attempting to apply and the rules being proposed for UK officials to apply in the future.
  39. What sets the UK apart from China is not the infallibility of our public officials, but the differences in the nature of the rules we call upon them to enforce. In the UK, we have (until now at least) shown much greater toleration for the public’s right to make statements that might be considered harmful, much greater reticence to pass legislation limiting freedom of expression and, most importantly of all, a determination to limit prohibited forms of expression to tightly defined categories that are closely scrutinised by Parliament, and a refusal to invest State functionaries with a broad discretion as to what kinds of speech should be censored.
  40. Dr Li’s case demonstrates something important about the choice of what types of costs each country is and is not willing to incur. China has demonstrated a preference for “protecting” the public from “harmful information”, and thereby accepts the costs of excessive suppression. Excessive suppression will certainly occur when officials make mistakes, and when they act overzealously. Beyond that, Dr Li’s case shows that even when officials apply the policy in a reasonable manner based on the information available to them at the time, the consequences can be dire, both in the short term, through the loss of availability of crucial information in a particular message, and in the long term, by creating a culture that is less willing and less able to challenge public authority.

 


Conclusion

 

  41. The UK’s political culture, preferring freedom for the individual over the broad discretionary protection of public officials, does not come without costs of its own. We described some of the costs we are currently incurring as a result in the opening paragraphs of this submission. The Committee will no doubt hear evidence setting out data enumerating these costs in considerable, harrowing detail. No reasonable person could deny that the Western appetite for individual freedom entails unavoidable, and often painful, costs.
  42. Against that, however, we must weigh not only the benefits, but also the cost of adopting the alternative approach.
  43. We will leave it to others to set out a principled case for freedom of expression as a fundamental right and a social good of the first order. If the Committee is convinced of this already, we would have little to add that centuries of Western philosophy has not already investigated. And if the Committee is minded to dismiss the salience of protecting freedom of expression as necessarily taking second place to the urgent need to protect the public against immediate and present harms, there would be little we could say to persuade it otherwise.
  44. Instead, we focus on the practical and pragmatic. The issue is not simply “Should we continue to tolerate harmful content on the Internet, or should we do away with it?”. This Committee, Parliament and the government cannot simply will the ends: they have control only over the means. It is just as much a utopian fantasy to suppose we can simply wish away harmful content, without incurring a corresponding downside, as it would be to suggest that unfettered freedom of speech will solve all social ills without any negative consequences of its own.
  45. Parliament must consider not only the government’s aims, but also the means it proposes to pursue those aims. The means contained in the Online Harms policy would invest Ofcom with almost untrammelled authority to determine both what information is harmful and the measures to be taken to suppress it. This has clear parallels with the mandate of the Chinese Public Security Bureau.
  46. We acknowledge that there are dissimilarities between Dr Li’s case and the measures proposed in the UK White Paper. There is no suggestion that Ofcom would be empowered to summon individuals before it, or compel them to sign statements recanting their position. Indeed, in its February 2020 Initial Response, the government emphasised that Ofcom would not have a direct role in individual cases at all. And there are many other aspects of Chinese censorship (including the fabled Great Firewall of China) that go beyond what is proposed in Online Harms[4].
  47. Nonetheless, the essential feature of Dr Li’s case is that his social media postings were categorised as harmful by an organ of the State, and so subject to suppression. That is also the fundamental purpose of Online Harms.
  48. We do not think that this essential similarity between the Chinese approach and Online Harms is compromised by the distinct approach in Online Harms of deputising Internet intermediaries to enforce suppression on the State’s behalf.
  49. We do not see anything in the Online Harms White Paper that would provide a limiting principle to ensure that the Internet regulator could not act to suppress unverified, unsupported claims of an outbreak of contagious disease, such as those contained in Dr Li’s social media postings.
  50. We note that the government’s position may have evolved since the White Paper, and note the repeated references to the importance of freedom of expression in the government’s February 2020 Initial Response. Nonetheless, we remain unclear whether the government’s policy continues to be that a regulator should seek to limit the availability of lawful but harmful content by agreeing terms of service with Internet services that prohibit such content, and then acting to require the operators’ consistent enforcement of such prohibitions.
  51. In the case of the Chinese suppression of Dr Li’s warning, the costs of investing the State with a broad mandate for “public protection from harmful information” almost certainly include the deaths of Dr Li and a very large number of others.
  52. We cannot say whether the single decision of the Public Security Bureau to suppress the earliest warning was itself decisive in allowing the outbreak time to establish itself beyond the possibility of control. But these Chinese rules for the State suppression of harmful information create and reinforce a public culture with a tendency to hide from unpleasant news rather than rigorously and publicly confront it. It seems undeniable that that culture made at the least a material contribution to Covid-19’s transition from a local outbreak to a global pandemic wreaking medical and economic devastation.
  53. It is reasonable to pose the counter-factual: had the Public Security Bureau not existed, had Dr Li been free to post and others free to join him, and Chinese citizens generally been free to publicly demand official action, might the Chinese response have come a month earlier? Was one of the costs of the Chinese preference for public protection from online harms to give Covid-19 a month’s head-start?
  54. That is just one, solitary example of the costs that countries, and the world, incur when they choose the route of State suppression of information in the name of protecting the public from online harm.
  55. What we have offered is, of course, not the only example of such costs. We have chosen to elaborate in some considerable detail on this one example, to demonstrate the concrete and practical nature of the costs involved, and to rebut any suggestion that concern for fundamental rights necessarily elevates the theoretical and philosophical over pragmatic and material concerns.
  56. By focussing in detail on one case, we believe we have pre-empted any reply claiming that Chinese Internet censorship is nothing like what is proposed in Online Harms, and have shown, through careful analysis of the case, the essential similarity between the role and actions of the Chinese Public Security Bureau in this case and the role and actions being proposed for Ofcom.
  57. To put it more generally, we do not think that UK political and social culture is so different from that of China because of some notion of inherent and ineffable British superiority – we reject that as borderline racist – but because of specific policy choices we make, and the resulting costs in terms of policy outcomes that we are and are not prepared to accept.
  58. We therefore invite the Committee to consider the outcomes in Dr Li’s case as relevant foreign experience of the consequences of investing public authorities with broad, discretionary powers to suppress content they deem harmful.
  59. In its call for evidence, the Committee invited submissions on whether the government’s Online Harms proposal was adequate to address issues arising from the pandemic. We submit that it is never an adequate response to any challenge simply to create an independent regulatory authority with largely untrammelled powers and leave it to sort matters out.
  60. We submit that, in the context of the vital area of access to information and freedom of speech, Parliament must carefully and narrowly define the limits of lawful content: this must never be left to discretionary rulemaking by officials.
  61. We further submit that, when considering any new restrictions on the freedom to impart and receive information, Parliament should have careful regard for the negative consequences of excessive restriction, and the difficulty of avoiding excessive restriction even when apparently reasonable rules are applied in a defensibly reasonable manner.

 

May 2020


[1] We refer here to the Online Harms proposal as contained in the April 2019 White Paper. For discussion of the extent of the possible further evolution of government policy as contained in the Initial Response of February 2020, see paragraphs 25-29 below.

[2] Associated Provincial Picture Houses Ltd v Wednesbury Corporation [1948] 1 KB 223

[3] See BBC reports https://www.bbc.co.uk/news/world-asia-china-51364382 and https://www.bbc.co.uk/news/world-asia-china-51409801, and further references to original reporting available through Wikipedia https://en.wikipedia.org/wiki/Li_Wenliang#Role_in_2019%E2%80%9320_COVID-19_pandemic

[4] On the other hand, another point of dissimilarity is that the Chinese Public Security Bureau made a determination that Dr Li’s statements were false before declaring them harmful. Nothing in Online Harms suggests that true statements are to be exempt from being categorised as harmful, and indeed the very separation of Ofcom from the administration of individual cases makes it harder to imagine how an Internet user might defend their publication from suppression on the grounds that it was true.