[COR0145]

Written evidence submitted by Full Fact (COR0145)

Summary

        Full Fact is no stranger to misinformation; we’ve been checking claims made in public debate for over a decade. But the scale, global reach and unrelenting pace of bad information related to the Covid-19 outbreak has presented a fresh challenge for fact checkers, internet companies and governments alike.

 

        Through our work since the start of the outbreak, we have found that most of the claims we see fall into three groupings:

    1. Claims that become distorted without context;
    2. Claims about the origins, spread or treatment of the virus that use fear as their catalyst;
    3. Claims that masquerade as official advice.

 

        We have been particularly concerned by new and widespread claims about a link between 5G and Covid-19, though we know that similar claims about the harm of mobile technology have been steadily building online for years.

 

        We are also beginning to see more false claims relating to vaccines. There is a real danger that this kind of harmful information will become more widespread as vaccine trials continue and if a vaccine is rolled out to the general public.

 

        It is essential that we counter these arguments with high-quality information. The internet companies clearly have a role to play in that. Full Fact assessed the measures taken by seven platforms across three key areas:

    1. Supplying good information;
    2. Empowering users and giving confidence that appropriate action is taken;
    3. Working with experts to ensure that the best information is being shared.

 

        All the companies have taken some action to address these three areas, but implementation varies. Our analysis shows a particular gap in how transparent the companies are about their actions. The Full Fact Report 2020 recommends that internet companies implement transparency principles as soon as possible, and do not wait for legislation. Internet users should also be able to easily flag suspicious content on whichever platform they're using.

 

        The internet companies need to be transparent about the results of their efforts to restrict Covid-19 misinformation and should be thinking now about preempting issues where misinformation could be harmful, such as on vaccines.

 

        The government's Online Harms proposals are capable of providing a basis for legislation that responds proportionately and effectively to specific online harms. That response is important and in many places urgent, and we believe the time has come for proposals to be brought before parliament. We particularly welcome the commitments to protecting freedom of speech. However, there is a disappointing lack of detail on action to tackle harmful false information.

 

        If internet companies are left to determine what content and behaviour are acceptable on their platforms, it is unlikely that the duty of care as currently designed will bring about meaningful change in tackling the harms caused by misinformation. The rules should be set through open, democratic and transparent processes in the UK, not through the commercial decisions of US and other internet companies.

 

        Plans to publish transparency reports only annually will prevent regulation from being effective. In the fast-moving online world, the regulator will have limited impact if it is dealing with data from months ago, or only the data that internet companies choose to provide. This has been particularly evident during Covid-19, where misinformation has spread at pace.

Who we are

        Full Fact fights bad information. We’re a team of independent fact checkers, technologists, researchers, and policy specialists who find, expose and counter the harm it does.

 

        Bad information can damage public debate, pose risks to public health and erode public trust. We tackle it in four ways: we check claims made by politicians, by public institutions, in the media and online; we ask people to correct the record where possible to reduce the spread of specific claims; we campaign for systems changes to help make bad information rarer and less harmful; and we advocate for higher standards.

 

        Full Fact is a UK partner in Facebook's Third Party Fact Checking programme. The programme gives us access to a queue of posts being shared in the UK that have been flagged as potentially false by Facebook or its users, which we can fact check and attach ratings to. Our transparency report on the partnership is on our website[1]. Full Fact's fact checks have been integrated into Google search since 2017.

 

        Full Fact is a registered charity. We're funded by individual donations, charitable trusts, and by other funders. We receive funding from both Facebook and Google. Details of our funding can be found on our website[2].


The nature, prevalence and scale of misinformation during Covid-19

 

  1. Full Fact has been checking claims made in public debate for over a decade. However, like everyone else, we have been in somewhat uncharted territory this year. That does not mean that all the misinformation we are seeing is new or surprising. Vaccine misinformation, 5G conspiracies and bad information about treatments have been an enduring part of public debate for a long time. But the scale, global reach and unrelenting pace of bad information related to the coronavirus outbreak has presented a fresh challenge for fact checkers, internet companies and governments alike.

 

  2. Since the outbreak of Covid-19, Full Fact has published more than 130 fact checks and articles providing the context and evidence behind claims about the virus, the lockdown, and the government's response to the crisis. Our team monitors social media, including Facebook and Instagram (through our partnership in their Third Party Fact Checking programme[3]), Twitter, and what the public shares with us from WhatsApp and YouTube. We look for claims in print and online newspapers, and in broadcast media including news programmes on TV and radio. We also check high profile statements from public figures, including parliamentarians.

 

  3. We have seen a huge amount of interest and concern from the public about information and data relating to the pandemic. Since we launched an online form[4] on 16 March we've received over 2,800 questions, claims or concerns about misinformation from members of the public. These have included questions about social distancing rules, medical conditions and treatments, and how the virus spreads[5]. We are also sent many posts and claims via email and on social media that we are asked to check. Ofcom's research has shown that in each of the past seven weeks, 44-50% of people have reported coming across information or news about coronavirus in the past week that they think is false or misleading[6]; the majority of those (63%) said they are coming across it at least once a day (as of week seven).

 

  4. This coverage, as well as our varied fact checking experience, has given Full Fact a unique viewpoint from which to understand the misinformation landscape during the pandemic. The claims we've been seeing online and in the media since the outbreak of Covid-19 can largely be categorised into three groups:
    1. Claims that become distorted without context;
    2. Claims about the origins, spread or treatment of the virus that use fear as their catalyst;
    3. Claims that masquerade as official advice.

 

  5. In the first group, we saw claims on social media that children are immune to the virus[7], as well as the mainstream media claiming that a study showed no cases of children transmitting the new coronavirus[8], a claim that was misleading and taken out of context. The fact that children appear, in some cases, to get a milder form of the virus has created a perfect storm of claims that cause confusion. These claims could be harmful if they affect how people behave. We also see a range of posts and videos on social media that report true stories or statements about other countries as if they applied in the UK. By taking a true story out of context, for example about lockdown measures[9], these posts can add to confusion and doubt about the UK's response or, for example, about what is happening in our hospitals[10].

 

  6. In the second group, we have done a significant amount of work on claims that suggest a link between the spread of Covid-19 and 5G technology. These claims have often surfaced and spread on social media, but are also being amplified by celebrities[11] and some media outlets and newspapers[12]. Ofcom research showed that the proportion of respondents who said they had come across theories linking the origins or causes of Covid-19 to 5G technology (choosing from a list) peaked at 51% in week four of its survey, falling to 40% in week seven. From what we have seen, these theories have been steadily building online for years, and their origins can be traced back even further, to panics about earlier generations of mobile phone and wireless technology at the turn of the millennium[13]. It is essential that we counter these arguments with clear, high-quality information to convince those who have questions and concerns. The internet companies and telecoms businesses also have a role to play here. Arguably, if as a society we had tackled these arguments sooner, before they took hold, it might have been easier to reassure people effectively.

 

  7. We are also seeing a range of false claims relating to vaccines, including claims about the dangers of vaccine trials[14] or suggestions that a vaccine was developed years ago[15] (those vaccines are actually for coronaviruses that exist in animals). There is a danger that this kind of harmful information will become more widespread as vaccine trials continue. As we have seen from the way 5G theories have taken hold, the government needs to tackle vaccine misinformation before the arrival of a viable vaccine. Research shows that preventative measures, such as showing people debunks of anti-vaccination claims before they encounter the original conspiracy theories, are more effective at countering vaccine misinformation[16].

 

  8. Other claims revolve around misconceptions of the term “coronavirus” itself, missing the crucial context that coronaviruses are a large family of viruses that we’ve known about since the 1960s. This means perfectly normal references to coronaviruses that predate this outbreak can suddenly seem like evidence of a conspiracy by Dettol[17].

 

  9. The third grouping includes claims about when certain shops or services are reopening[18], as well as more concerning claims that you can prevent and treat infection[19] by gargling salt water, or even test whether or not you have the virus by holding your breath[20]. This is harmful where it impacts how people behave and how diligently they follow government advice.

The response from internet companies

 

  10. The internet companies have taken significant additional steps to provide accurate information to their users. This should be applauded. But there is also much more that could be done to provide the best information during this time of information overload. The internet companies should recognise their role in serving information to users, and the need to do so responsibly. We believe there are three areas that the internet companies should focus on:
    1. Supplying good information;
    2. Empowering users;
    3. Working with experts.

 

  11. We have assessed seven products[21] against a set of 12 principles under these three categories. We have given each a rating of Red, Amber or Green to reflect how well that platform has implemented each principle. A summary of this is provided below.

 

A. Supplying good information

  12. It is critical that users have access to high quality information, particularly when searching for further detail on Covid-19. The internet companies have a responsibility to ensure that good information is served first. Encouragingly, all of the internet companies we assessed have implemented measures on their platforms that meet this criterion to some extent. Internet users finding reliable health advice prominently in major online services may help to save lives.

 

  13. We have been encouraged to see all the companies providing easily accessible links to official national sources prominently on their platforms. Tik Tok has taken a significant step by displaying a Covid-19 badge that appears in the top right corner of every post. Twitter provides links to the NHS: on mobile devices this appears on the newsfeed, while in the web browser it can be found in the “explore” tab. Likewise, all have some measures to direct users to sources with the most accurate information when searching. Facebook, Twitter and Google search results link users to gov.uk, the WHO and the NHS, and Instagram requires users to click through a pop-up directing them to the NHS website before seeing results for Covid-19-relevant search terms. All have introduced new guidance on what is acceptable for Covid-19-related advertisements.

 

  14. WhatsApp does not provide any external information on the platform. While Facebook has announced a pilot that would enable users to search the internet for claims made within a message[22], it is not yet clear whether this will be rolled out or, if it is, how it will function. It is also not clear which messages will be eligible, or how these will be identified. We have seen a number of voice messages being circulated on WhatsApp that contain unverified claims related to Covid-19[23]. These types of voice messages would not be easy to search for online. WhatsApp does have a list of verified fact checker accounts on its webpage, but this is only accessible if you know to search for it.

 

B. Empowering users

  15. At Full Fact we believe we should all be able to make up our own minds about what we see online. Media literacy is important in helping people identify misinformation, but we should also be encouraging people to take action when they have suspicions.

 

  16. While companies are increasingly relying on AI to automatically flag inappropriate content, user reporting remains key. Having an intuitive and accessible process will encourage greater reporting and the faster removal of inappropriate content. The options for user reporting of content vary between company and platform. On Facebook users can report content as “false news”, but on Instagram you need to click “inappropriate” before getting the option of selecting “false information”. This is also an example of the lack of consistency in terminology. Twitter, on the other hand, has no specific category, with the closest option being “suspicious or spam”.

 

  17. To complement reporting, internet companies should ensure that there are clear guidelines on what is and is not acceptable on the platform. In an information crisis this should include guidance relating to those specific circumstances. Facebook, Google, Twitter, YouTube and Tik Tok have all released guidance on Covid-19, often taking their policies a step further than before. This is most evident on Twitter, which broadened the definition of harm to address content that goes directly against guidance from authoritative sources of global and local public health information.

 

  18. But it is also critical that users have confidence that the guidelines will be enforced. This can only be achieved through transparency on the processes and how these are scaled up when needed. The internet companies are acutely aware of their responsibility to protect freedom of speech, and are often wisely cautious of using the enforcement powers that they have. But that means it is even more important for experts outside of the companies to be able to make independent assessments of when these powers are used.

 

  19. None of the internet companies are sufficiently transparent about the action they have taken to prevent Covid-19 misinformation. Facebook, Instagram, Twitter and YouTube have all published information on how their guidelines are enforced, including detail on their moderating systems (e.g. the mix of AI and human moderators), but have provided little to no detail on the extent to which these have been applied in relation to Covid-19 misinformation. To date, Twitter is the only company that has provided metrics on the number of accounts challenged and removed. Even that has limited use, as it is only at a global level, with no breakdown of the number of accounts challenged in the UK and no detail on how or why accounts were flagged. Nevertheless, knowing even this basic information from the other companies would give some insight into how the companies are battling the infodemic.

 

C. Working with experts

  20. The internet companies should ensure that those who are investigating misinformation have the best information available. Facebook remains the only company that has a paying relationship with fact checkers, through its Third Party Fact Checking programme; Full Fact is one of two fact checkers in the UK taking part. We published details on this programme, with specific recommendations on how it could be improved, in 2019[24]. Some of those recommendations have been acted on by Facebook. We have recommended that other internet companies set up similar programmes, but neither Google nor Twitter has done so.

 

  21. YouTube has introduced a fact check feature in Brazil, India and the US[25], but we have yet to see plans to implement this in Europe. Twitter recently announced[26] plans to introduce labels and warnings on content that has been disputed. Twitter needs to provide more information on how it will identify claims, which trusted partners it is using, and how it will judge whether a tweet is misleading or disputed. Ideally, these judgements would be made by a transparent, independent organisation, with a clear process for appeal. From both YouTube and Twitter we would like to see this content shared with independent fact checkers and the results shared back to users, akin to Facebook's Third Party Fact Checking programme.

 

  22. Where companies do not want to partner with fact checkers, there would still be benefit in sharing regular insights on claims that are being widely shared. While some information can be gleaned from Google Trends or Twitter highlights, the companies themselves have the best insights into content going viral. Sharing this with fact checkers would enable us to make decisions on the most impactful claims to check.

 

  23. We were also encouraged to see in February 2020 a joint statement from Facebook, Twitter, Google and others committing to work together during the Covid-19 crisis. However, we have not seen any update on that partnership since then. We would welcome detail on what, if any, information has been shared and the impact this may have had.

 

Recommendations

  24. This analysis has highlighted that there is a particular gap in transparency. In the Full Fact Report 2020[27], we recommend that the internet companies, in advance of the government's Online Harms legislation, start to implement transparency principles as soon as possible. This should include detail on how much content is flagged (both manually and through automatic tools) as false or misleading, what proportion of flagged content has action taken against it, and what this action was.

 

  25. In that report we also recommend that internet users should be able to easily flag suspicious content on whichever platform they're using. Internet companies should share this with independent fact checkers and be transparent about the results of their efforts to restrict Covid-19 misinformation. Finally, the internet companies and others could do better at preempting issues where misinformation could be harmful. We saw that the policy changes made by various companies to remove content claiming 5G is a cause of Covid-19 only happened after attempted arson in the UK. By identifying areas where harm could be caused, such as on vaccines, we can all take action to minimise the risk.

Online harms proposals and the impact on misinformation during Covid-19

 

  26. The government's Online Harms White Paper, published in April 2019, recognised the potential harm that can be caused by bad information. There is a great deal that we welcomed in the White Paper, which set a clear aim to address misinformation and disinformation. However, we were disappointed not to see further detail on how the government intends to take action in the Interim Response published in February 2020.

 

  27. We particularly welcome the commitment to freedom of expression and the role of a free press, and the commitment not to control the takedown of specific pieces of legal content online, unlike some other countries. The Full Fact Report 2020 reiterates that efforts to tackle bad information should not come at the expense of the other fundamentals of a democracy, including the rights to freedom of opinion and of expression. Protecting essential freedoms should remain a priority as the proposals are developed further.

 

  28. It is not, however, immediately clear whether the Online Harms proposals would have any material impact on tackling Covid-19 related misinformation. Without further information on the detail of how the proposed powers of the regulator will function, it is difficult to make an assessment. The government should also provide further detail on which internet companies it expects will fall within scope. But based on the information provided in the White Paper and Interim Response we have considered the potential impact the duty of care and increased transparency could have.

 

Duty of care

  29. The Government's interim response stated that “companies will be able to decide what type of legal content or behaviour is acceptable on their services. They will need to set this out in clear and accessible terms and conditions and enforce these effectively, consistently and transparently”[28]. Full Fact has called for the policy response to misinformation and disinformation to be carried out through open, transparent and democratic processes, and we believe that it is ultimately for parliament to decide the delicate balance between free speech and responding proportionately to real harms.

 

  30. The largest internet companies already have terms and conditions that tackle elements of misinformation, albeit to varying degrees. Facebook, for example, focuses on “inauthentic behaviour”, while YouTube prevents impersonation or harmful content. It is unlikely, and undesirable, that companies will want to insert new T&Cs that prevent the sharing of false or misleading information, given the significant impact on freedom of expression that this could have.

 

  31. Where those T&Cs do exist, it is not clear how and to what extent they are being enforced. Determining whether a post is inauthentic or harmful often requires nuance. Some of the most difficult claims we fact check can take days or weeks to reach a conclusion on. The regulator will need clear powers that allow an assessment of the companies' processes for enforcing their T&Cs, where they have them.

 

  32. Furthermore, the government has stated that it does not intend to define the harms that will be in scope; as above, companies themselves will determine what type of content or behaviour is acceptable. This could lead to inconsistencies, with the regulator having no power to require companies to take action on harmful false health information even where there was a clear risk to life. Therefore, even if the duty of care were in place today, it would still be for the companies to decide if and how they want to tackle such content.

 

Transparency

  33. One of the ways in which the regulator may determine whether T&Cs are being enforced, and the duty of care fulfilled, could be through regular mandatory transparency reports from the companies in scope. Increased transparency is positive; as the government states in the interim response, “increasing transparency around the reasons behind, and prevalence of, content removal may address concerns about some companies' existing processes for removing content”. We welcome the establishment of a multi-stakeholder Transparency Working Group and look forward to the government's transparency report being published in the coming months.

 

  34. But we urge the government to consider the frequency with which transparency information should be provided. It is not credible to put so much emphasis on annual reports in an age of real-time information. In the fast-moving online world, regulation cannot be effective or proportionate if it is dealing with data from months ago or only the data that internet companies choose to provide. This has been particularly evident during Covid-19, where misinformation has spread at pace. While mandatory reporting would be welcome at this time, if it is only published months after the event it will have limited impact. The government should therefore consider requiring more regular reporting, particularly during an information crisis.

 

  35. The government should also consider setting specific categories of information that must be provided. The experience of the voluntary monthly transparency reports provided ahead of the EU election in 2019 showed that, with no consistent format, the companies “did not provide the level of detail necessary to allow for independent and accurate assessments”[29]. This will be important for understanding the scale of the problem and the impact of measures taken by the companies, and for being able to compare between companies.

 

 

May 2020


[1] https://fullfact.org/media/uploads/tpfc-q1q2-2019.pdf

[2] https://fullfact.org/about/funding/

[3] https://fullfact.org/blog/2019/jan/full-fact-start-checking-facebook-content-third-party-factchecking-initiative-reaches-uk/

[4] https://fullfact.org/health/ask-newcoronavirus/

[5] https://fullfact.org/blog/2020/apr/public-concerns-coronavirus/

[6] https://www.ofcom.org.uk/research-and-data/tv-radio-and-on-demand/news-media/coronavirus-news-consumption-attitudes-behaviour

[7] https://fullfact.org/health/children-can-get-coronavirus/

[8] https://fullfact.org/health/children-transmitting-coronavirus/

[9] https://fullfact.org/online/ireland-lockdown-phases/

[10] https://fullfact.org/online/st-marys-video-bodies-ecuador/

[11] Ofcom ruling on Eamonn Holmes comments https://www.ofcom.org.uk/__data/assets/pdf_file/0021/194403/assessment-decision-this-morning-itv-13-apr-2020.pdf

[12] https://fullfact.org/health/5G-not-accelerating-coronavirus/

[13] https://fullfact.org/online/5g-and-coronavirus-conspiracy-theories-came/

[14] https://fullfact.org/online/elisa-granato-fake/

[15] https://fullfact.org/online/dog-vaccine-coronavirus/

[16] Daniel Jolley and Karen M. Douglas, ‘Prevention Is Better than Cure: Addressing Anti-Vaccine Conspiracy Theories’, Journal of Applied Social Psychology 47, no. 8 (August 2017): 459–69, https://doi.org/10.1111/jasp.12453

[17] https://fullfact.org/online/coronavirus-dettol/

[18] https://fullfact.org/online/facebook-coronavirus-lockdown-prank/

[19] https://fullfact.org/health/gargle-salt-vinegar-water-coronavirus/

[20] https://fullfact.org/online/coronavirus-water-breath-test-bad-advice/

[21] Facebook, Instagram, WhatsApp, Google Search, YouTube, Twitter and Tik Tok

[22] https://techcrunch.com/2020/03/21/whatsapp-search-web-coronavirus/

[23] https://fullfact.org/health/viral-voice-message-ambulance-trust/

[24] https://fullfact.org/media/uploads/tpfc-q1q2-2019.pdf

[25] https://www.reuters.com/article/us-usa-alphabet-youtube-factchecking/youtube-expands-fact-check-feature-to-us-video-searches-during-covid-19-pandemic-idUSKCN22A2Y1

[26] https://blog.twitter.com/en_us/topics/product/2020/updating-our-approach-to-misleading-information.html

[27] https://fullfact.org/media/uploads/fullfactreport2020.pdf

[28] https://www.gov.uk/government/consultations/online-harms-white-paper/public-feedback/online-harms-white-paper-initial-consultation-response

[29] https://ec.europa.eu/commission/presscorner/detail/en/STATEMENT_19_2570