Written evidence submitted by Meta
Meta Submission to the DCMS Sub-committee on Online Harms and Disinformation’s inquiry into Misinformation and Trusted Voices
1.1. Meta welcomes the opportunity to respond to the DCMS Sub-committee’s inquiry into Misinformation and Trusted Voices.
1.2. Meta’s mission is to give people the power to build community and bring the world closer together. People use Meta’s platforms to stay connected with friends and family, to discover what's going on in the world, and to share and express what matters to them.
1.3. We want everyone who comes to our platforms to find high quality information that is meaningful and valuable to them. We have no incentive to have misinformation on our platforms: our users tell us they don’t want to see it, and we agree with Governments and the public that internet platforms have a responsibility to make sure the information on our sites is high-quality.
1.4. We use multiple means to achieve that goal, including removing false information that experts have found to be potentially harmful, disrupting the financial incentives of those people who propagate false and misleading information, working with third-party fact-checkers to let people know when they are reading or sharing information (excluding satire and opinion) that has been disputed or debunked, and limiting the distribution of stories that have been flagged as false or misleading by these fact-checkers.
1.5. The terms “misinformation” and “disinformation” can be confusing, as there is no agreed definition of either term. At Meta, we conceptualise this issue through an ABC approach: actor, behaviour, and content. For what is often called misinformation, we have policies and enforcement techniques against content containing a claim that has been determined to be false by an authoritative third party, whereas for what is often described as disinformation our approach is to find and stop 'inauthentic behaviour'. This includes coordinated campaigns and specific actors, such as state-controlled media, that often also engage in spreading misinformation. Our inauthentic behaviour policy is intended to protect the security of user accounts and our services and to create a space where people can trust the people and communities they interact with.
1.6. In our submission below, we set out how Meta both combats misinformation and delivers authoritative information to users. We will also explain how we have tailored our misinformation strategy to tackle situations such as COVID-19, climate change and Russia’s illegal invasion of Ukraine. We have grouped our views under four headings:
1.6.1. Meta’s approach to Misinformation and Disinformation
1.6.2. How we address COVID-19 misinformation
1.6.3. How we address Climate Change misinformation
1.6.4. Our response to the illegal invasion of Ukraine
2.1. To reduce the spread of misinformation on our services (defined as content which is false or misleading), we have implemented a three-pronged strategy, which we have been honing since 2016:
2.1.1. Remove
2.1.2. Reduce
2.1.3. Inform
2.2. We use this three-pronged approach to tackle misinformation across all of our platforms.
Remove
2.3. We have a publicly available global set of rules called our Community Standards. These rules explain what is and isn’t allowed on our services, including strict rules on hate speech, voter suppression, harassment, incitement to violence and misinformation that is likely to directly contribute to the risk of imminent physical harm. Where something violates our rules, we take various forms of action depending on the circumstances, from applying warning labels to removing content and banning accounts.
2.4. In determining what constitutes misinformation in these categories, we rely on independent experts who possess the knowledge and expertise to assess the truth of the content and whether it is likely to directly contribute to the risk of imminent harm. This includes, for instance, partnering with human rights organisations with a presence on the ground in a country to determine the truth of a rumour about civil conflict, and partnering with health organisations during the global COVID-19 pandemic.
2.5. The types of misinformation we remove include, for example, misinformation about the election process which could prevent people from voting, such as false news about the dates, locations, times and methods of voting. We also remove misinformation which could contribute to the risk of imminent physical harm or violence, as well as misleading manipulated videos (“deepfakes”) created by artificial intelligence that depict someone saying something they did not say.
Reduce
2.6. Misinformation can be posted by people acting in good faith, and we want to strike a balance between enabling people to have a voice and promoting an authentic environment.
2.7. To address this challenge, we’ve built a global network of more than 80 independent fact-checkers, who review content in more than 60 languages. In the UK this includes Full Fact, Reuters, Logically and Fact Check Northern Ireland. When a fact-checker rates something as false, we reduce its distribution so fewer people see it and add a warning label with more information. We know that when a warning screen is placed on a post, 95% of the time people don’t click through to view it. We also notify the person who posted it and anyone else who previously shared it, and we reduce the distribution of Pages, Groups and domains that repeatedly share misinformation.
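To illustrate the sequence just described, the sketch below shows how a fact-checker rating could translate into these actions. It is a minimal, hypothetical model: the names Post, apply_rating and DEMOTION_FACTOR are invented for illustration and do not describe Meta's actual systems.

```python
# A minimal, hypothetical sketch of the "reduce, label, notify" flow
# described above. All names and values are invented for illustration.

from dataclasses import dataclass, field
from typing import List, Optional

DEMOTION_FACTOR = 0.2  # assumed: a demoted post keeps a fraction of its reach


@dataclass
class Post:
    author: str
    text: str
    distribution: float = 1.0              # relative ranking weight in Feed
    label: Optional[str] = None            # warning screen text, if any
    shared_by: List[str] = field(default_factory=list)


def apply_rating(post: Post, rating: str, fact_check_url: str) -> List[str]:
    """Apply a fact-checker rating: demote the post, attach a warning
    label, and queue notifications for the author and prior sharers."""
    notifications: List[str] = []
    if rating == "false":
        post.distribution *= DEMOTION_FACTOR          # fewer people see it
        post.label = f"False information. More context: {fact_check_url}"
        for user in [post.author] + post.shared_by:   # notify everyone involved
            notifications.append(f"Notify {user}: a post you shared was rated false")
    return notifications
```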
Inform
2.8. It’s not enough to just limit misinformation that people might see. We also connect people to reliable information from trusted experts. Therefore, once fact-checkers rate something as false, not only do we reduce its distribution, but we inform people with more context around why the content is false or potentially misleading and notify users who try to share it. We do this through our centralised hubs such as our COVID-19 Information Center; labels that we attach to certain posts with reliable information from experts; and notifications that we run in people’s feeds on both Facebook and Instagram.
2.9. We also conduct educational campaigns on our platforms to reduce the spread of misinformation. For example, this year we ran an advertising campaign across Facebook and Instagram to raise awareness of misinformation and to encourage users to check the source of information to ensure it’s credible. The campaign covered issues including the invasion of Ukraine and the elections in Northern Ireland. It reached 18 million people in the UK and 2 million in Ireland across both platforms.
Measuring Third-Party Fact-Checking Partners
2.10. To ensure high standards and accuracy, all of our third-party fact-checking partners are certified through the non-partisan International Fact-Checking Network (IFCN) and follow IFCN’s Code of Principles. These Principles include a series of commitments that organisations must adhere to in order to promote excellence in fact-checking: nonpartisanship and fairness; transparency of sources; transparency of funding and organisation; transparency of methodology; open and honest corrections policy. This certification process is rigorous and supervised by independent assessors and verified by IFCN’s board of advisors.
Meta’s approach to tackling Disinformation
2.11. When it comes to disinformation (understood as the deliberate intent to mislead or manipulate) and influence operations, we adopt a different approach and rely on distinct policies and enforcement measures. We have a dedicated global cybersecurity team that works 24/7 to find and stop coordinated campaigns and adversarial threats that seek to manipulate public debate across our apps.
2.12. Historically, influence operations have manifested in different forms: from covert campaigns that rely on fake identities to overt, state-controlled media efforts that use authentic and influential voices to promote messages that may or may not be false. We address these through our policies on ‘Coordinated Inauthentic Behaviour’ (CIB).
2.13. We define CIB as coordinated efforts to manipulate public debate for a strategic goal, in which fake accounts are central to the operation. In each case, people coordinate with one another and use fake accounts to mislead others about who they are and what they are doing. When we investigate and remove these operations, we focus on behaviour rather than content, no matter who is behind them, what they post, or whether they are foreign or domestic. We keep the public updated on our efforts through our monthly CIB reports.
2.14. We’ve found that one of the best ways to fight this behaviour is by disrupting the economic incentive structure behind it. We’ve built teams and systems to detect and enforce against the inauthentic behaviour tactics behind a lot of clickbait. We also use artificial intelligence to help us detect fraud and enforce our policies against inauthentic spam accounts. We take a hard line against fake accounts and block millions each day, most of them at the time of creation. In Q2 2022, we disabled more than 1.4 billion of them.
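As a purely illustrative, toy example of the kind of signup-time check this involves, the sketch below combines a few simple signals into a risk score. The signals, thresholds and names are all assumptions introduced for illustration; production systems rely on machine-learned models over far richer signals.

```python
# A toy, invented example of a signup-time risk check for inauthentic
# accounts. Signals and thresholds are assumptions for illustration only.

from dataclasses import dataclass


@dataclass
class SignupAttempt:
    accounts_from_ip_last_hour: int        # burst registrations from one IP
    disposable_email_domain: bool          # throwaway email provider
    name_matches_spam_pattern: bool        # e.g. templated display names


def risk_score(attempt: SignupAttempt) -> float:
    """Combine simple signals into a 0..1 risk score."""
    score = 0.0
    if attempt.accounts_from_ip_last_hour > 5:
        score += 0.5
    if attempt.disposable_email_domain:
        score += 0.3
    if attempt.name_matches_spam_pattern:
        score += 0.2
    return min(score, 1.0)


BLOCK_AT_CREATION = 0.7  # assumed cutoff: block signups scoring above this
```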
2.15. To date, we have removed more than 150 coordinated inauthentic networks globally. Working with our industry peers, we’ve made progress against influence operations by making them less effective and by disrupting more campaigns early, before they could build an audience.
2.16. To account for a constantly shifting threat environment, we build our defences with the expectation that threat actors will not stop, but rather adapt and try new tactics. In addition to our work on coordinated inauthentic behaviour, our teams identify and address adversarial threats such as cyber espionage, brigading, mass reporting, coordinated social harm, and abusive commercial services.
2.17. We have grown our team focused on disrupting influence operations to over 200 experts across the company, with backgrounds in law enforcement, national security, investigative journalism, cybersecurity, law, and engineering. We continue to expand our technical teams to build scaled solutions that help detect and prevent these behaviours, and we are partnering with civil society organisations, researchers, and governments to strengthen our defences.
2.18. Each quarter we release an adversarial threat report, which provides a broad view of the risks we see worldwide and across multiple policy violations. Our public security reporting began over four years ago, when we first shared our findings about coordinated inauthentic behaviour by the Russian Internet Research Agency.
Real World Examples
2.19. As we mentioned above, over recent years we have experienced situations that require a more tailored approach to reduce or remove misinformation on our platforms. The next three sections will address how we handled the COVID-19 pandemic, climate change and finally the war in Ukraine.
3.1. Misinformation related to COVID-19 has presented a unique risk to public health and safety over the last two years. We used our three-pronged misinformation strategy mentioned above, and broadened it to address the misinformation arising in the early days of the outbreak.
3.2. Our COVID-19 misinformation strategy comprises three primary pillars:
3.2.1. Promoting vaccines and authoritative information.
3.2.2. Removing harmful misinformation content.
3.2.3. Addressing borderline content that could lead to vaccine hesitancy through a variety of interventions.
Promoting Vaccines and Authoritative Information
3.3. An important component of our strategy to combat COVID-19 misinformation and to inform our community is to develop tools that promote vaccines and connect people to reliable information from trusted sources:
3.3.1. Making it easier to get vaccinated: We provided reliable information to help people find vaccine appointments in their area through News Feed messages.
3.3.2. Helping people get questions answered: We directed over 10 million people to resources from the NHS and GOV.UK through our COVID-19 Information Centre and via pop-ups on Facebook and Instagram. We later expanded the COVID-19 Information Centre onto Instagram.
3.3.3. Reaching vaccine hesitant communities: We ran a vaccine confidence initiative where we focused on amplifying content from partners to help reach people most affected by COVID-19 and help increase demand for COVID-19 vaccines. In the UK, we launched campaigns with Dope Black Dads, the British Islamic Medical Association and the Caribbean African Health Network that aim to get COVID-19 vaccine information to the communities at highest risk.
3.4. Some of the methods we adopted were:
3.4.1. Profile frames: We know from public health research that people are more likely to get vaccinated if they see others in their community doing so. We supported the Government’s campaign by deploying a range of frames and stickers on our platforms, which can be placed on users’ photos to show their support for the vaccine roll-out. Designers including The Beano, Premier League, and Charlie Mackesy have created a suite of unique frames and stickers that people can use to show “I’ve had my vaccine” or make a pledge that “I will get my vaccine” when their time comes.
3.4.2. Information pop ups and labels on content: We inform people who have come into contact with false or misleading claims through notices and labels and connect them with authoritative information from experts.
3.4.2.1. We add labels on posts about COVID-19 and vaccines to show additional information from the NHS.
3.4.2.2. We show a pop-up when someone goes to share a post on Facebook or Instagram with an informational COVID-19 vaccine label — so people have the context they need to make informed decisions about what to share.
3.4.3. We ran two COVID-19 misinformation media literacy campaigns in the UK: In 2020, in consultation with our fact-checking partners, we conducted a four-week campaign to educate and inform people about how to detect potential false news related to COVID-19. Alongside the campaign, we ran a study to examine the effectiveness of our approach; in the United Kingdom, the media literacy campaign reached 15 million users. In 2021, we ran a second campaign specifically on combating vaccine misinformation, which reached 13.8 million users in the UK.
3.4.4. We ran a series of live talks in partnership with Reuters: topics included how structural inequalities contribute to vaccine hesitancy and misinformation in underrepresented communities, and the relationship between misinformation, conspiracy theories and polarisation.
3.4.5. We ran Vaccine Misinformation Webinars: We hosted a COVID-19 Vaccine Misinformation Webinar on Facebook for 8,000 admins of the UK’s largest Facebook Groups on how to combat false information. These 8,000 group admins serve 35 million active UK users in Facebook Groups every month. As part of this we held a Q&A with a top UK scientist sourced for us by Team Halo, a leading global network of scientists and doctors working on the frontline of vaccine development and deployment, and we provided a new guide on combating COVID-19 vaccine misinformation.
3.4.6. Advertising Credits: We have also helped the UK government and NHS reach tens of millions of people by providing free advertising credits across our platforms so they can communicate important information about the vaccination programme. Globally, we provided £87 million in free advertising to health bodies and governments.
Removing Harmful Misinformation Content
3.5. In December 2020, we started removing false claims about COVID-19 vaccines when debunked by public health experts, such as fake preventive measures or claims that the virus doesn’t exist. In February 2021, we expanded the list of false claims about COVID-19 and vaccines that we would remove, in consultation with leading health organisations, including the NHS. Working with health experts, we’ve regularly updated these policies as new facts and trends emerge.
Addressing Borderline Content that could Lead to Vaccine Hesitancy
3.6. As part of our efforts to improve the quality of health and vaccine content that people encounter, we have reduced the distribution of certain content about vaccines that does not otherwise violate our policies. We have also removed certain Pages, Groups, and Instagram accounts that have shared content violating our COVID-19 and vaccine policies and that are dedicated to spreading vaccine-discouraging information on our platforms.
3.7. Our approach to these types of claims is grounded in guidance from expert health organisations, who have emphasised that overcoming vaccine hesitancy depends on people being able to ask legitimate questions about vaccine safety and efficacy and getting those questions answered by trusted sources.
4.1. We take climate change seriously, which is why we are taking steps to make sure people have access to reliable information whilst also reducing misinformation. When it comes to climate change misinformation, informing people with more context - whether through fact-checking or initiatives like the Climate Science Centre - is crucial to our strategy, and an approach that can be more impactful than simply removing content.
Connecting People with Reliable Climate Change Information
4.2. We created the Climate Science Centre, available in over 150 countries, to connect people to factual and up-to-date climate information. The Centre connects people on Facebook with science-based news, approachable information and actionable resources from the world’s leading climate change organisations.
4.3. Whenever someone on Facebook searches for terms related to climate change, a pop-up appears at the top of their results suggesting they visit the Climate Science Centre. The Centre contains a number of features to help educate people about the state of climate science understanding throughout the world. These include:
4.3.1. a curated experience showing posts from individual climate experts,
4.3.2. authoritative facts about climate science from the UN Environment Programme, and
4.3.3. a collection of posts from news sources about climate science.
4.4. We believe the Centre can play an important role in helping people in the UK get access to authoritative information about the science behind climate change, as the UK prepares for the transition to a net-zero economy.
Fact-Checking
4.5. We’ve also made it easier for fact-checkers to find climate-related content because we recognise that speed is especially important during breaking news events. We use ‘keyword detection’ to gather related content about a topic in one place, making it easier for fact-checkers to find.
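As a minimal sketch of what such keyword-based grouping can look like, the example below collects posts whose text matches a topic's keywords into one review queue. The keyword list and matching logic are assumptions for illustration only, not our actual detection system.

```python
# A minimal sketch of keyword detection: posts matching a topic's
# keywords are gathered into one queue for fact-checker review.
# Keyword lists and matching are assumptions for illustration.

from collections import defaultdict
from typing import Dict, List

TOPIC_KEYWORDS = {
    "climate": {"climate", "warming", "emissions", "net-zero"},
}


def group_by_topic(posts: List[str]) -> Dict[str, List[str]]:
    """Bucket posts into per-topic queues for fact-checker review."""
    queues: Dict[str, List[str]] = defaultdict(list)
    for post in posts:
        words = set(post.lower().split())
        for topic, keywords in TOPIC_KEYWORDS.items():
            if words & keywords:               # any keyword present
                queues[topic].append(post)
    return queues
```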
4.6. Our fact-checkers can and do rate climate science content - when fact-checkers rate a post as false, its distribution is dramatically reduced and we show warning labels with more context.
4.7. As with all types of claims debunked by our partners, we move these posts lower in Feed so fewer people see them. As with other forms of misinformation, Pages, Groups, accounts or websites that repeatedly share content rated false by fact-checkers, including climate content, will face restrictions, including reduced distribution. Pages, Groups and websites may also have their ability to monetise and advertise removed, and their ability to register as a news Page revoked.
Additional Contributions
4.8. We are committed to doing more to help address the climate crisis and are establishing a dedicated product team to build features that build on what our community is already doing. Ahead of COP26 last year, we partnered with Reuters Plus, the content design arm of Reuters, to launch a two-part campaign showcasing our role in bridging the climate science knowledge gap. The campaign consisted of a panel discussion and an informative article developed with misinformation policy and climate science subject matter experts. The video was viewed over 350,000 times and the article received over 950,000 impressions.
5.1. For the past several months, we have focused significant time and resources on online issues created by the war in Ukraine and Russia’s extraordinary act of aggression. We have also worked closely with UK Government departments looking at this issue.
5.2. We established a special operations center staffed by experts from across the company, including native Russian and Ukrainian speakers, to review and respond to activity across our platform in real time.
5.3. This includes monitoring for signs of inauthentic behaviour and removing content that violates our Community Standards, such as incitement to violence or hate speech.
5.4. We have had a fact-checking programme established in Ukraine since 2020. Since the start of the conflict, we added additional Russian language coverage via our fact-checking programmes in Estonia, Latvia, Lithuania, Poland and the U.S., and Ukrainian language coverage in Poland. In addition to expanding Russian and Ukrainian language coverage, we have enabled our fact-checking partners in Ukraine, Estonia, Latvia, Lithuania, Poland, Georgia, and the U.S. to fact-check content in Belarus, Russia and Ukraine. In many Central and Eastern European countries, this has led to a doubling of total fact-checking capacity.
5.5. Similar to how we tackle climate change misinformation, we use keyword detection to group related content in one place, making it easier for fact-checkers to find. To supplement labels from our fact-checking partners, we warn users in the region when they try to share certain war-related images that our systems detect are over one year old, so that people have more information about outdated or misleading images that could be taken out of context.
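As an illustration of the age check described above, the sketch below flags an image first seen more than a year ago. How the "first seen" date is obtained (for example, via image matching) is assumed here, and all names are hypothetical.

```python
# An illustrative age check: if an image first appeared more than a year
# ago, return warning text to show before sharing. The source of the
# "first seen" date is assumed; this is not Meta's actual system.

from datetime import datetime, timedelta
from typing import Optional

ONE_YEAR = timedelta(days=365)


def outdated_image_warning(first_seen: datetime, now: datetime) -> Optional[str]:
    """Return warning text if the image is more than a year old."""
    if now - first_seen > ONE_YEAR:
        return ("This image is over one year old and may not reflect "
                "current events. Check its original context before sharing.")
    return None


# Example: an image first seen in 2014, shared in 2022, gets the warning.
print(outdated_image_warning(datetime(2014, 7, 17), datetime(2022, 7, 1)))
```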
5.6. Additionally, we launched a news hub for users in Ukraine to connect them to high-quality, timely information to stay safe, find family and friends, and locate support services like housing and immigration assistance. We’ve also added resources to the Emotional Health Hub on Facebook, in both Ukrainian and English, to support those in need. The hub features content that helps parents and caregivers work out how to discuss the crisis with children, how to identify stress in children, and how to navigate their own stress and anxiety during this time. We’re collaborating with organisations such as UNICEF and the Child Mind Institute to distribute guides on Instagram with information on how parents can talk to their kids about the crisis in an age-appropriate way. The guides cover topics such as understanding news headlines, dealing with fear and trauma, and ways to support those affected.
Addressing Hostile Actors and State Controlled Media
5.7. A critical part of our approach to misinformation is ensuring people are informed, understand the content they see on Facebook and Instagram, and know who is behind it. That is why, since 2020, we have applied labels to state-controlled media on Facebook globally, so people know where this information comes from. In response to the war in Ukraine, we are taking additional steps related to Russian state-controlled media outlets.
5.8. We are globally demoting content from Facebook Pages and Instagram accounts from Russian state-controlled media outlets, prohibiting ads from Russian state-controlled media, demonetising their accounts, and making them harder to find across our platforms. We have also begun to demote posts that contain links to Russian state-controlled media websites on Facebook. We label these links and provide more information to people before they share or click on them, to let them know that they lead to Russian state-controlled media websites. We have similar measures in place on Instagram.
5.9. By providing this additional transparency, we aim to give people more context if they want to share direct links to Russian state-controlled media websites or when others see someone’s post that contains a link to one of these sites.
5.10. In the UK, EU, and Ukraine, we've restricted access to RT and Sputnik. Following attempts by the Russian diplomatic network to spread false information about violent atrocities, including the Bucha attack, we introduced a new policy to remove posts by state-controlled accounts that falsely deny violent attacks on other states.
5.11. And following attempts to use prisoners of war (POWs) for propaganda purposes, in defiance of the Geneva Convention, we have added POWs to the list of at-risk groups whom we will protect from having their identities exposed, by removing posts or videos that identify them.
5.12. It is also our goal to support media outlets with high journalistic and ethical standards, both in developing their audiences and in strengthening the business models that underpin their work. We support publishers in various fields and help them develop their audiences by giving them access to free tools and services and by offering a wide range of free training courses and workshops. We have recently supported media outlets such as Ringier Serbia (www.ringier.rs), Radio Free Europe / Radio Liberty (www.rferl.org) and Antena 1 (www.a1.ro).
6.1. Despite all of these efforts, there is a misconception that we have a financial interest in turning a blind eye to misinformation. The opposite is true. We have every motivation to keep misinformation off of our apps and we’ve taken many steps to do so at the expense of user growth and engagement.
6.2. For example, in 2018 we changed our News Feed ranking system to connect people to meaningful posts from their friends and family. We made this change knowing that it would reduce some of the most engaging forms of content, such as short-form video, and lead to people spending less time on Facebook, which is exactly what happened. The amount of time people spent on Facebook decreased by roughly 5% in the quarter in which we made this change.
6.3. As with every integrity challenge, our enforcement will never be perfect even though we are improving it all the time. While nobody can eliminate misinformation from the internet entirely, we continue using research, teams, and technologies to tackle it in the most comprehensive and effective way possible.
6.4. Notwithstanding all these efforts, we acknowledge the issue of misinformation cannot be resolved by private companies alone. A multi-stakeholder approach is needed to define the roles and responsibilities of all actors involved in the online sphere. This is why we have been calling for more regulation in this area.
END