Written evidence submitted by the Tony Blair Institute for Global Change
SUBMISSION TO THE DCMS COMMITTEE
This submission sets out an analysis of the range of false information spreading online and a number of recommendations that policy makers and the government should consider.
The Tony Blair Institute for Global Change is a not-for-profit set up to support leaders and governments around the world. Our Government Advisory Practice is directly supporting leaders in their on-the-ground fight against Covid-19, and our Policy Futures unit is delivering analysis and advice to help countries mitigate economic impact, source essential equipment, harness the power of technology and position themselves for the rebuilding to come.
The Technology and Public Policy team at the Institute was established to help leaders master the revolution in technology – accessing its benefits and mitigating its risks. Now, in the new Covid-19 reality, we have refocused our mission on answering this question: How can the world use technology to respond to the virus and the crisis it has caused?
Tackling false information in the Covid-19 pandemic - Policy analysis and recommendations
Governments around the world are facing the spread of false information online related to health, public health and conspiracy theories. It is important that they work with social media platforms to take down this information, and that social media companies use the full range of tools at their disposal, including systems built primarily for advertising, to prevent the proliferation and amplification of harmful material.
There are still considerable unknowns around Covid-19 itself, and therefore governments and institutions may themselves not be able to provide the authority, trust and definitive information people are searching for. A balance needs to be struck between preserving the ability of the internet and social media to surface networked experts as a counterpoint to centralised authority and protecting people from information that can have serious implications for public health.
1. Governments should ensure that there are publicly available resources and tools that directly refute false claims from a position of authority and trusted expertise. Social media companies need clear guidance from governments and international authorities about specific concerns.
2. Social media companies and marketing agencies should make all of their influencer and brand tools available to public health authorities and, importantly, redeploy advertising and behavioural experts to work alongside stretched government comms teams to develop a new public health influencer strategy to counter misinformation.
3. Platforms have quickly changed their rules and are taking welcome steps to address Covid-19 disinformation and misinformation, but they can do more to interrupt the flow of misinformation, even within encrypted services.
1. THE RANGE OF FALSE INFORMATION
There is a range of false information circulating and being distributed on social media, in closed groups and on messaging platforms in relation to Covid-19. It relates primarily to health information and advice, government action, financial information and conspiracy theories.
This information can be (i) distributed deliberately, in the knowledge that it is false; (ii) forwarded without any consideration of its content; or (iii) sent in the belief that it is true. It is important to delineate these different kinds of false information.
It is important to note that some governments are using the cover of action against ‘fake news’ to suppress legitimate criticism and debate. As with all social media moderation, there is a trade-off between preserving freedom of expression and protecting people. It is therefore important to separate out false information that has safety-of-life implications or could create mass panic (mostly health and public-health-related information).
The short-term implications of false information (particularly rumour and misinformation) are (i) people receiving incorrect, counter-productive and dangerous health advice (as has been seen where people share claims that drugs such as chloroquine, or hot tea, are cures); (ii) populations panic-buying supplies in the belief that they are running out; and (iii) people not following government instructions, e.g. on social distancing or protecting vulnerable people. The long-term implication is distrust in official sources of public-health information and government public-health instructions, which will make long-term measures difficult to enforce.
Many people are using social media for the first time, or are using it much more actively during the pandemic, as they look to stay in touch with friends or to follow events in the world. These people are often termed “new digital arrivals”: they lack experience of engaging on social media and of the etiquette of sharing, forwarding and engaging with the information they receive.
The majority of social media platforms and search engines are tackling misinformation specifically related to the pandemic. Open platforms such as Twitter are easier to research, making it easier for the platforms themselves and for researchers to identify and act against false information. Closed groups and messaging platforms such as WhatsApp are more difficult to understand. They are also likely to be some of the most important social platforms for people staying connected while socially distant, particularly as group chats may be organised to mirror existing local communities.
Social media platforms have broadly taken a range of steps to tackle Covid-19 misinformation.[1] However, there are still several gaps, and not all platforms have made consistent decisions:
I. Lack of clear reporting frameworks specifically for public health misinformation
II. The trusted-flagger system needs to be explicitly extended to Covid-19 misinformation to ensure external experts can advise on false information
III. Closed groups and encrypted messaging services, such as WhatsApp, cannot be moderated directly, but the companies behind them still have a relationship with their users and contact points they can use.
It is impossible to remove all false information online or to prevent its spread entirely. Policy solutions should therefore target (i) slowing the spread of the information; (ii) enforcing against the most dangerous and deliberate misinformation; and (iii) debunking that information with fact-checking where possible.
The table below sets out a range of options that can be deployed by policy makers and tech companies, including using established counter-extremism methodology. Fact-checking and moderation alone are unlikely to prevent the spread of false information, so policy makers should also look at existing community dynamics and at other fields, such as online advertising, for solutions.
Policy solution | Considerations
Fact-checking | Websites and integrated services within social media platforms. These are important resources for dealing directly with false information and a useful tool for individuals counteracting it. They need to be produced by trusted sources with strong, established credentials.
Public information campaigns | Make people aware of the misinformation circulating. Campaigns must deal directly with the misinformation people are seeing; general principles of good online behaviour are important but not immediately effective, since behavioural-change campaigns take much more time.
Influencer strategy | Identifying and communicating directly with community leaders, religious leaders and other sources of local authority to give them the facts and the tools to tackle disinformation.
Online interventions | Directly intervening to begin dialogues with those spreading false information, ranging from bots responding to misinformation to directly challenging the people spreading it and bringing in trained counsellors.
On-the-ground intervention | Similar to influencer strategies: key opinion formers in communities are identified and given the tools and information to directly counter misinformation.
Enforcement against sharers | Removing people’s access to platforms; financial penalties and criminal prosecutions for those spreading false information. In practice this is unlikely to be effective, especially if it is government enforcement, as it will spread fear and drive people to more encrypted messaging services. It also has major freedom-of-speech implications in more authoritarian regimes.
Identifying and removing harmful users and sources of state disinformation | An effective solution, but one that requires detailed research and cooperation and action with platforms and security services. There is a potential conflict with freedom of expression and a possibility for governments to misuse this ability.
5. COMMUNITY MOBILISATION AND MARKETING STRATEGIES AGAINST MISINFORMATION
While technical and moderation solutions are important, the core challenge is getting people to take notice and change their behaviours. Community intervention, or social mobilisation, has so far been missed by the UK Government, but it is a core part of public-health communications. For example, the Tony Blair Institute recently published an analysis of social mobilisation in disease outbreaks, drawing on lessons and examples from the Ebola response in Sierra Leone.
Misinformation is a societal problem accelerated through social media, where the virality of content is encouraged and paid for. Companies have spent years building advertising platforms capable of
micro-targeting, tracking behaviours and delivering for brands. These tools can be put to the service of public health.
Social media companies and marketing agencies should make all of their influencer and brand tools available to public health authorities and, importantly, redeploy advertising and behavioural experts to work alongside stretched government comms teams to develop a new public health influencer strategy to counter misinformation. Getting this right has three key steps:
1. First, the right people need to be identified. A study by the Reuters Institute for the Study of Journalism found that it is prominent public figures who are spreading misinformation, so it will be important to target them; but micro-influencers, those with fewer than 10,000 followers, are also key opinion formers. They are community leaders, experts and religious leaders who can reach communities and build trust, and they can do so in the right language, cultural and community context. Marketing teams have spent years building up lists and strategies to find these people, especially on WhatsApp. The government can combine this with other local data to be highly targeted, both towards people in big community groups and towards influential people spread across multiple smaller chats.
2. Second, to get into private groups these micro-influencers need to be armed with clear public health information, resources and fact-checking advice. Even paid influencer programmes should be considered, recognising that the trade-off is that the influencers will have to be transparent about why they are posting. But the priority should be putting government and public health resource into tailoring information specifically for these influencers once they have been engaged. Lessons from counter-extremism work suggest that some of the best interventions are dialogue-based, encouraging people to voice their anger and fears and to communicate with others they trust or can relate to. Telling people bluntly that they are wrong can lead to conflict and does not change minds.
3. Finally, these influencers need to be able to feed back to government and social media platforms. If platforms focus on labelling and fact-checking misinformation in-app, there need to be ways for influencers to send the misinformation they are seeing back to social media companies for fact-checking and labelling. The trusted-flagger model that YouTube and others have used for reporting terrorist content should be repurposed to deal with misinformation.
In a networked world, our connectivity is a considerable advantage, but the information environment was already chaotic before Covid-19 and pretending to know that which is uncertain only compounds the problem. Governments and politicians need to be transparent and clear with their own public information, strategies and expert-backed hypotheses as a key first step.
At this time of crisis, policy makers should be taking advantage of the networked public and repurposing quality data and tools that are already out there. Governments should not be reaching for new rules to restrict freedom of expression, as that can drive further conspiracy theory and fear. Instead, they should focus on using community dialogue, alongside the existing online advertising industry, to get accurate information into communities.
[1] Google companies: removed Covid-19 misinformation on YouTube, Google Maps, developer platforms such as Play, and across ads, although there is still considerable evidence that such content remains and that creators are using strategies to get around AI moderation. Twitter: increased use of machine learning and automation to take a wide range of actions on potentially abusive and manipulative content, and broadened its definition of harm to address content that goes directly against guidance from authoritative sources of global and local public-health information. Snapchat: vetting publishers and individuals to make sure they are not publishing misinformation. Apple: the App Store is accelerating its review process for Covid-19 applications from reputable sources, to help make sure users have the best available information rather than false or misleading information. TikTok: labelled Covid-19-related videos to point users to trusted information; all ads on Covid-19, aside from prevention-oriented notices from trusted organisations and health authorities, are banned. Facebook companies: launched dedicated Covid-19 information centres; new rules to counter public-health misinformation; warnings around false or disputed information; and controls on the number of times messages can be forwarded on WhatsApp.