Written evidence submitted by Full Fact (COR0225)
Home Affairs Select Committee call for evidence on Online Harms
This submission should be considered as supplementary to that provided to the committee by Full Fact in May 2020. Full Fact is now a member of the DCMS Counter-Disinformation Policy Forum.
In the past year, Full Fact has fact checked everything from potentially dangerous claims about cures circulating online to misleading use of data by our political leaders. We have checked more than 400 claims relating to Covid-19 alone. Challenging false claims is a mainstay of our work – but we are clear that this alone isn’t enough. That is why we also make the case for improving the systems, processes and interventions that exist to provide information to the people who need it, when they need it and - crucially - where they are looking for it.
Three things have changed since the committee first asked for evidence in Spring 2020. First, it has become clear that the UK’s response to the Covid-19 pandemic has been hampered by long-standing failures in public data and communications systems. Second, the Department for Digital, Culture, Media and Sport and the Home Office published the full government response to the Online Harms White Paper consultation in December 2020, setting out much needed detail on the plans to regulate internet companies to prevent online harms. Finally, the Law Commission published a consultation on protecting victims of online abuse, including proposed new offences on malicious communications.
We address each of these issues in turn. In summary:
● The Full Fact 2021 report, published in January, highlights repeated instances where the government failed to provide accurate information or correct mistakes - out of 12 requests Full Fact made to ministers in 2020 concerning statements on the coronavirus pandemic, only once did a minister attempt to clarify or correct them.
● We urge all politicians to be prepared to share the sources of information used publicly. The government must be transparent about how they will measure and track their progress towards stated targets and should create a public framework that clearly sets out how errors in public communications will be dealt with.
● To learn from the Covid-19 experience, Full Fact is working with internet companies (Facebook, Google, Twitter), governments (including the UK government) and civil society organisations in the UK and internationally to discuss a better way to respond to crises in the future. We have developed a five-level framework to determine the severity of a misinformation incident, to help guide both senior leaders and operational staff during crises or fast-moving moments to respond effectively. This framework shows the value that can come when different organisations pull together.
● But it cannot continue that the companies are able to make up policies as they go, without independent scrutiny or transparency. Regulation is still necessary to ensure consistent approaches that can be independently evaluated.
● We are concerned that the Home Office and DCMS proposals for online regulation, as set out in the Online Harms Full Government Response, do not provide sufficient detail on how misinformation should be tackled or what types of misinformation content fall within scope. Much more detail must be provided to fully understand the impact these proposals will have.
● We have concerns that the Law Commission’s proposals for a new offence on malicious communications will undermine the government’s proposals for distinguishing between illegal and legal but harmful content, in ways that are detrimental to legal clarity and freedom of expression.
Government use of information
- The public has a right to expect the authorities to adequately collect, use and communicate the information needed to manage crises. Crucially, they should do so with honesty, accountability and transparency. On 28 January Full Fact published the Full Fact 2021 Report, which points to repeated instances where government ministers failed to correct their mistakes, or back up public statements with evidence. At a time when the public wants honesty over excuses, the government has failed. Out of 12 requests Full Fact made to ministers in 2020 concerning statements on the coronavirus pandemic, only once did a minister attempt to clarify or correct them.
- Good communication from the government is essential during a crisis, both to reassure concerned citizens and ensure that official guidance is followed. But the thousands of questions we received from the public demonstrate an undeniable confusion over government guidance, with more than a third of questions received asking us how to interpret the new rules. On some occasions it wasn’t a case of leaving rules open to interpretation: we saw both departments and ministers issue conflicting and even incorrect advice during the pandemic.
- We were also disappointed at the way government departments handled our questions during the crisis. Responses were too often slow, unclear or inaccurate; Full Fact was told contradictory things and even faced an unwillingness to engage with questions of accuracy. Of course we recognise that there are significant pressures on the government, from ministers to communications teams, and that mistakes can and do happen, especially in high-pressure situations.
- But the way that errors are handled is crucial, not just in ensuring that the public gets the right information, but also as a way for the government to demonstrate that it is worthy of the public’s trust. It is incumbent on all those in public life to uphold the standards expected of them.
- In many areas, essential changes to processes, standards and behaviours have been overlooked for too long. The pandemic has exposed some of these failings. Our Full Fact 2021 Report makes 10 recommendations that we believe will help improve the collection, use and communication of information in the UK. We urge politicians to be ready to show the data they rely on to others for scrutiny and to be transparent about how they will measure and track their progress towards stated targets. And we call for a public framework that clearly sets out how errors in public communications will be dealt with.
- In the longer term, we have called for serious commitments to invest in public information and communications systems—to make sure the country is never again left so exposed in a crisis.
Online Harms Full Government Response
- The decision by many of the internet companies to suspend - either temporarily or permanently - former President Trump’s accounts on their platforms has rightly provoked much debate in recent weeks. While many have focused on whether the suspension of accounts was justified, we are also concerned that the decision was made without any independent scrutiny, transparency or consistency. This is yet another example of where the internet companies have improvised and created new policies when under pressure. YouTube, for example, has extended the suspension of former President Trump’s account three times (to date) with no transparency on what criteria they are using to make that decision. It cannot continue that the companies are able to make up policies as they go.
- To an extent, the companies themselves recognise this. Full Fact has been working closely with Facebook, Google, Twitter, and DCMS - along with representatives from civil society, academics and fact checkers from around the world - to reflect on the way we all responded to the Covid-19 infodemic in early 2020. It was the shared view of the group that we wanted to learn from that experience and come up with a better way to respond to crises in the future. We have now developed a five-level framework to determine the severity of a misinformation incident, to help guide both senior leaders and operational staff during crises or fast-moving moments. This framework, shortly to be released in a consultation version, is important and shows the value that can come when different organisations pull together. Collaboration between government, internet companies, civil society and other experts is crucial to developing a response that has legitimacy and is, ultimately, used across sectors.
- But this work remains voluntary; the internet companies can choose to pull out or not implement the framework and its responses. Therefore, we remain of the view that independent regulation is necessary to set minimum standards and assess whether and how these standards are being upheld.
- On 15 December 2020 the long-awaited Full Government Response to the Online Harms White Paper consultation was published. The government’s proposals set the foundation for regulation that could be effective and impactful. We were particularly pleased to see the Home Office and DCMS recognise the danger of mis- and disinformation, particularly in the context of the high levels of harmful health misinformation seen during the Covid-19 pandemic. As we outlined in our previous submission, we strongly believe that decisions about content that could impact freedom of expression must be made through democratic parliaments rather than by private companies.
- But we are concerned that the proposals outlined to date for the Online Safety Bill would, in effect, only be writing into law measures that already exist and, in large part, are being taken by the largest companies. We wrote in May 2020 that we were not convinced that the government’s proposals, if in place, would have had a meaningful impact on the harmful misinformation spreading online. Six months later our view remains the same. The largest internet companies all currently have terms and conditions related to Covid-19 misinformation, and have taken additional steps to remove or reduce access to harmful content. While greater transparency on the scale of the problem and the impact of these measures would be welcome, this by itself would not change how the companies responded.
- Given that the government has not provided a clear timetable for introduction of the Online Safety Bill, it is likely that this legislation will not be implemented until at least 2023. The government and Ofcom must look ahead to the future and create regulation that anticipates the problems of tomorrow, as well as catching up on the problems of today. The companies themselves are constantly experimenting with new ways of presenting information, of providing further context and of scaling their content moderation. The government, and Ofcom, have an opportunity to shape this by providing guidance both on what types of measures are expected, but crucially also how the companies should go about experimenting with new measures.
- Parliament itself should be cautious about proposals to hand this all over to a government-appointed regulator. There is a risk that temporary measures taken during the pandemic at speed and with limited oversight become a new normal and affect people’s freedom of expression online. Great care is needed to ensure appropriate oversight of both any future regulatory system and the day-to-day operational process of tackling misinformation and disinformation. Parliament should ensure that the legislation builds in sufficient, regular, oversight of Ofcom and the direction it sets for the internet companies.
Misinformation content in scope
- To properly regulate companies’ responsibilities in respect of harmful content while protecting freedom of expression there must be clear and consistent guidelines as to what is in scope. We need proportionate responses to clearly identified and evidence-based harms. However, too narrow a scope will mean that some types of harmful misinformation will not be captured within even the basic level of requirements, and potentially not within the Category 1 transparency requirements. There is a risk that the harms cannot be identified, let alone tackled. DCMS must be clearer on what types of content they expect to fall within the different categories, and provide evidence as to why. This cannot be left until after legislation is passed by parliament.
- The full government response states that “the duty of care will apply to content or activity which could cause significant physical or psychological harm to an individual, including disinformation and misinformation. Where disinformation is unlikely to cause this type of harm it will not fall in scope of regulation.”
- We are concerned that the focus on harm only to individuals will mean that content or activity which has a harmful effect on groups of people where there is no identifiable individual, or on democracy and wider society, is not in scope. It is far from clear at this stage that this choice is justified. For example, we saw a number of conspiracy theories about the supposed harmful effects of 5G which led to attacks on 5G infrastructure. There is evidence that some engineers received verbal abuse, but we are not aware of any individual being physically harmed. Similarly, racist misinformation and disinformation is a sadly regular feature of any incident or crisis; we saw claims that Muslims not understanding or complying with lockdown measures caused an outbreak in Leicester. Every election is accompanied by misinformation and disinformation, such as false claims about how to vote, which seem likely to target participation. It is unclear whether any of this content would fall within scope of the proposals. These are potentially dangerous oversights.
- The government response also outlines a differentiation between content that is harmful to adults (which only the highest risk companies would be required to tackle) and content that is harmful to children (which all companies will need to tackle). While we welcome the government’s intent to have a broader standard of action on children, it is not clear how this differentiation could practically be made when it comes to misinformation and disinformation.
- This is also relevant when considering the exemption for ‘news publishers’ websites’ and for ‘journalistic content online’. Freedom of expression, including media freedom, is crucial to an open society. However, this proposal urgently needs detail and justification. From what little we know at the moment, one possible outcome is legal privileges and immunities for a small group of legacy media companies that will rapidly seem anachronistic in the face of a changing media. We assume that is not the policy intent. Moreover, we note that our fact checking regularly finds inaccuracies in content published by mainstream media, both online and in newspapers. These outlets must also recognise their role in ensuring there is accurate and reliable information available. Most national newspapers reach far larger audiences than many viral online posts; LBC radio station has had multiple videos removed from YouTube for spreading false and misleading information. DCMS must provide clear information sooner rather than later on the boundaries of this exemption, including whether it covers outlets beyond those considered mainstream news outlets (e.g. those who typically publish physical newspapers), such as online-only outlets, broadcasters, or content published directly by reporters (e.g. tweets).
- It is promising to see that the government intends to give Ofcom additional powers when an information crisis is underway, including the ability to ask for additional transparency reports from in scope companies. However these additional powers highlight the inadequacy of the regular transparency powers that Ofcom is expected to have. It is not practical to only ask for information annually, given the speed at which misinformation evolves and the speed at which information crises occur. If we only get a snapshot of the problem months later, then the harm will already have occurred. We live in an age of real time information and a credible regulator must have the ability to direct those it regulates to provide information at specified times, in specified forms, and subject to specified verification. These are the powers that, for example, the Financial Conduct Authority has over a wide range of businesses.
- Ofcom should learn from the flaws of the EU transparency requirements under the Disinformation Action Plan - for example, by mandating that the transparency reports have a consistent format to enable comparability between companies. We would also welcome more clarity on who decides that a misinformation crisis is underway, and how that decision is made, and would recommend the government considers the framework criteria Full Fact has developed.
Law Commission proposals on malicious communications
- The proposals published and consulted on by the Law Commission in late 2020, whilst a welcome attempt to bring the law more clearly into line with human rights and proportionality, raise serious concerns - particularly the proposal to criminalise behaviour where a communication would be likely to cause harm. Fundamentally, we do not at this point understand how the proposed offence would work in practice.
- The proposed offence risks creating a chilling effect on freedom of expression online. The broad language of the proposed offence suggests that individuals are unlikely to be able to anticipate accurately what they can and should do. Additionally, in practice, as we know from our own work with the internet companies and in developing these automated technologies ourselves, machine learning is used to identify, flag, and take action on specific items of content and those who share them. Internet companies are rightly under pressure to act on illegal content. In proposing an offence that creates a broad and hard-to-define category of “likely to cause harm”, the proposal risks the companies removing more content than is necessary.
- This proposal could blur the distinction between illegal content and legal but harmful content, as outlined in the Full Government Response to the Online Harms White Paper consultation, in ways that are detrimental to legal clarity, freedom of expression, and the coherence of the statute book.