Digital, Culture, Media and Sport Sub-committee on Online Harms and Disinformation

Inquiry into Online Safety and Online Harms

Written evidence from Professor Alan Renwick[1] and Alex Walker[2]


Introduction and Summary

  1. Professor Alan Renwick leads the UCL Constitution Unit’s research on elections, referendums, and citizens’ assemblies. He has recently conducted multiple research projects examining ways of improving the conduct of elections and referendums,[3] and is currently leading a project examining attitudes to democracy in the UK.[4] Alex Walker was a researcher at the Constitution Unit. He now works at the Constitution Society, but makes this submission in a personal capacity.
  2. Our focus in this submission is on the relationship between the online environment and UK democracy. In particular, we address whether the government’s proposed online safety regime, as set out in the draft legislation, tackles various previously identified online harms to electoral democracy. Given this, we do not answer all of the Sub-committee’s questions, but respond to those that are relevant to this focus.  
  3. We argue that the removal from the draft Bill of any reference to harms to democracy is undesirable, and that a purely free-market approach to political discourse is inadequate. Digital platforms should have a duty not to amplify false and harmful content. Steps to promote high-quality information are also needed.

How has the shifting focus between ‘online harms’ and ‘online safety’ influenced the development of the new regime and draft Bill?

  1. In the course of the development of the policy framework, and the shift in focus from ‘online harms’ to ‘online safety’, measures to strengthen democracy in the face of new challenges posed by digital technology have been dropped from the regime.
  2. The Online Harms white paper, published in April 2019, was explicit that certain kinds of online activity could have detrimental consequences for democracy, in particular through undermining trust in the political system. It referenced the manipulation of voters through micro-targeting, deepfakes, and concerted disinformation campaigns. The white paper concluded that action was needed because activity of this kind can ‘damage our trust in our democratic institutions, including Parliament.’[5] Furthermore, it specified a number of actions that were expected to appear in the regulator’s Code of Practice and to be required of certain companies in fulfilling their duty of care. These included: using fact-checking services, especially during election campaigns; limiting the visibility of disputed content; promoting authoritative news sources and diverse news content; and processes to tackle those who misrepresent their identity to spread disinformation.[6]
  3. However, when the government published its initial consultation response in February 2020 there was no mention of this aspect of the programme. The full consultation response in December 2020, which set out the shift to a focus on online safety, confirmed that companies would only be expected to deal with disinformation and misinformation that ‘could cause significant harm to an individual’.[7] The measures outlined above no longer featured.
  4. When it comes to tackling the kind of online harms to democracy detailed in the 2019 white paper, the draft Online Safety Bill is much narrower in scope than was initially anticipated. The draft legislation contains new provisions, including a duty that major platforms protect content of ‘democratic importance’.[8] Whilst it is undoubtedly important that the right to express political opinions online be protected, there is a concern that this will encourage companies to take a hands-off approach to potentially dangerous political disinformation. This is in contrast to the more proactive approach previously preferred by the government.

What are the key omissions to the draft Bill, such as a general safety duty or powers to deal with urgent security threats, and (how) could they be practically included without compromising rights such as freedom of expression?

  1. In 2019, the House of Lords Democracy and Digital Technologies Committee was set up to conduct an inquiry into the relationship between digital technology and democracy in the UK. Its report, which was published whilst the development of the online safety proposals was ongoing, detailed the many ways in which online platforms can erode and undermine trust in the political system, especially through the amplification of political mis- and disinformation. The report recommended that the scope of the proposed online safety legislation be widened to encompass a duty of care towards democracy.[9] This is omitted from the draft Online Safety Bill.
  2. Whilst the draft Bill contains duties relating to the protection of freedom of expression and to disinformation that can cause significant harm to an individual, it contains no provisions relating to false information that generates distrust in democracy. The Democracy and Digital Technologies Committee acknowledged that banning or removing this kind of legal content would limit freedom of expression. There is also little evidence that such action would work, and in some contexts it might indeed cause harm: the act of banning certain misinformation in the context of an election or referendum campaign, for example, might only serve to amplify that misinformation. The Committee concluded, however, that requiring companies to ensure that the visibility of such content is not amplified by their algorithms could be more effective and would not constitute a curtailment of free speech.[10] As noted above, a measure of this nature was included in the Online Harms white paper.
  3. Also omitted from the Bill are more positive steps that could be taken, such as promoting authoritative news sources. One such step, which the Democracy and Digital Technologies Committee endorsed, is that of a democratic information hub. This idea was first detailed in the Constitution Unit’s 2019 report Doing Democracy Better. We reiterate that report’s premise that a healthy information environment is essential for effective democracy. Strategies to counter inaccurate and misleading information are an important aspect of this, but are insufficient in themselves. Promoting accurate, balanced, relevant and accessible information is also crucial.[11] A democratic information hub would help achieve this. It would help strengthen democracy in the face of the profound changes to the information environment that have been brought about by digital technologies. In the next paragraph we set out some of its key features for the Sub-committee’s consideration.
  4. A democratic information hub would provide a coordinated home for relevant and trustworthy information from a variety of different sources. It would do so independently of government and with input from experts and the public. We believe that a new independent public body should be established to set up and run the information hub. It would require public funding to function effectively, including to develop a sophisticated digital resource base and to market it widely to the public. The hub would both aggregate high-quality content from other sources and lead on the creation of new content that voters want. Citizen input – such as through citizens’ panels – would be essential to the latter activity. It would be sensible to build such a hub up gradually, starting with the most basic and uncontroversial information, such as what elections are taking place when and where, who can take part and how, and who the candidates are. Starting with such basic information would allow trust and familiarity to develop slowly over time.[12]
  5. An online safety Bill with a wider scope could establish such a public body. This opportunity stands to be missed unless changes to the draft legislation are made.

Are there any contested inclusions, tensions or contradictions in the draft Bill that need to be more carefully considered before the final Bill is put to Parliament?

  1. The duties to protect freedom of expression and content deemed ‘democratically important’ require more careful consideration. As currently drafted, the duties are vague. Without further clarification they could have unintended negative consequences.
  2. In the draft Bill, content of democratic importance is defined as that which ‘is or appears to be specifically intended to contribute to democratic political debate in the United Kingdom’.[13] This is a very broad definition. It is not hard to see how those spreading harmful disinformation could claim that such content falls within this category.
  3. The government is committed through its Defending Democracy programme to tackling misinformation and disinformation ‘which many people are rightly concerned pose a threat to public safety, national security and ultimately our democratic values and principles’.[14] It has set up a cross-government Counter Disinformation Unit for this purpose. However, the broad protection in the draft Online Safety Bill given to content that appears to contribute to political debate could hamper these efforts, making it harder for the government to work with social media companies to deal with disinformation that undermines democratic values and principles. The government wants to counter the spread of dangerous conspiracy theories, but often these have a political dimension. The legislation as drafted may encourage platforms to take a less proactive approach to this kind of content, in case it can be classified as a contribution to political debate.
  4. Furthermore, the draft Bill includes ‘a duty to operate a service using systems and processes designed to ensure that the importance of the free expression of content of democratic importance is taken into account when making decisions’ about whether to take action against content, such as restricting access to it, or against the individual behind it.[15] This implies that companies will be required to protect harmful content they would otherwise be expected to address, if they believe it to have democratic importance. The breadth of the current definition means that a wide range of harmful content could fall within this category.
  5. The press release which accompanied the publication of the draft Bill offers another definition, saying democratically important content ‘will include content promoting or opposing government policy or a political party ahead of a vote in Parliament, election or referendum, or campaigning on a live political issue.’ Concerns have been raised that this narrower definition will offer protection to politicians and campaigners, but not the general public.
  6. It is important that social media companies do not end up censoring legitimate online political debate. The duty in the draft Bill to ‘protect users’ right to freedom of expression within the law’ seeks to guard against this.[16] Nevertheless, the government has adopted a ‘free market of ideas’ approach that posits that all political content should be protected, regardless of its veracity or potential to cause harm. The full Online Harms consultation response stated this position clearly: ‘Policy or political arguments – both online and offline – which can be rebutted by rival campaigners as part of the normal course of political debate are not regulated and the government does not support such regulation. It is a matter for voters to decide whether they consider material to be accurate or not.’[17]
  7. The ‘free market of ideas’ approach rests on the premise that the process of unrestricted debate reveals the truth. However, there is no evidence that this is the case online. In fact, adopting such a laissez-faire approach can have serious consequences. The storming of the Capitol building in the United States on 6 January 2021 demonstrates what can happen if political conspiracy theories that erode trust in the democratic process are allowed to spread.[18] It is not inconceivable that the government’s stance could precipitate attacks on UK democratic institutions such as those that occurred in the US.
  8. Digital platforms do not represent a level playing field for political debate. Often algorithms promote content that is false and harmful over that which is authoritative and trustworthy. It would not infringe on freedom of expression to include a duty to ensure that the former kind of content – which can erode the norms that underpin democracy – is not amplified.



[1] Alan Renwick, Professor of Democratic Politics and Deputy Director of the Constitution Unit, Department of Political Science, University College London.

[2] Alex Walker, Communications Manager and Researcher, the Constitution Society; former Research Assistant at the Constitution Unit, University College London. (Submitting in a personal capacity.)

[3] Independent Commission on Referendums (Constitution Unit, 2018); Alan Renwick and Michela Palese, Doing Democracy Better: How Can Information and Discourse in Election and Referendum Campaigns in the UK Be Improved? (Constitution Unit, 2019); Working Group on Unification Referendums on the Island of Ireland (Constitution Unit, 2020).

[4] Democracy in the UK after Brexit.

[5] Online Harms White Paper (2019), para. 7.25, p. 70.

[6] Ibid., para. 7.28, p. 71.

[7] Online Harms White Paper: Full Government Response to the Consultation (2020), para. 34, p. 11.

[8] Draft Online Safety Bill (2021), Part 2, s.13.

[9] House of Lords Select Committee on Democracy and Digital Technologies, Digital Technology and the Resurrection of Trust (2020), para. 89, p. 37.

[10] Ibid., paras. 108, 109; p. 42.

[11] Alan Renwick and Michela Palese, Doing Democracy Better: How Can Information and Discourse in Election and Referendum Campaigns in the UK Be Improved? (Constitution Unit, 2019), pp. 5–13.

[12] For more information see Alan Renwick and Michela Palese, Doing Democracy Better: How Can Information and Discourse in Election and Referendum Campaigns in the UK Be Improved? (Constitution Unit, 2019), pp. 235–42.

[13] Draft Online Safety Bill (2021), Part 2, s.13(6)(b).

[14] Chloe Smith, Defending Democracy - Policy Exchange Speech (15 June 2021).

[15] Draft Online Safety Bill (2021), Part 2, s.13(2).

[16] Draft Online Safety Bill (2021), Part 2, s.12.

[17] Online Harms White Paper: Full Government Response to the Consultation (2020), p. 21.

[18] Ipsos, ‘How misinformation primed Trump’s supporters for Capitol riot’ (2021).