Written evidence submitted by Twitter (COR0177)

 

Thank you for inviting organisations to participate in this inquiry. Please see attached:

 

-          Our submission;

-          Our Covid-19 misinformation strategy in the UK;

-          Timeline of safety changes 2014-2020;

-          A link to our testimony at the Select Committee on Democracy and Digital Technologies (17 March 2020).

 

At this critical time, the speed and borderless nature of Twitter presents an extraordinary opportunity to ensure people have access to the latest information from expert sources around the world. To support that mission, our global Trust & Safety team is continuing its zero-tolerance approach to platform manipulation and any other attempts to abuse our service at this juncture. Three priorities are:

 

        Using technology to proactively enforce our rules. 1 in 2 Tweets we take down for abuse is now detected proactively; for the most egregious content (like terrorism and child sexual exploitation) the proportion has been far higher for quite some time. Most recently, we shared on 22 April that our automated systems have challenged more than 3.4 million accounts targeting manipulative discussions around Covid-19.

        Diverting people away from misinformation. We have partnered with both DHSC and DCMS to redirect users searching for information about Covid-19 to the NHS, and users searching for information about 5G and Covid-19 to a new Gov.uk resource stating that the UK Government has seen no evidence of a link. As well as continuing to remove Tweets that contain harmful Covid-19 misinformation, in May we announced new labels and warning messages that will provide additional context and information on some Tweets containing disputed or misleading information related to Covid-19.

        Transparency, and making data available for research. Twitter is the only major service that makes public conversations available for study. Tens of thousands of researchers have used our public API (technical interface to allow access to our public data) over the past decade. On 30 April we launched a new program that offers access to a specific, dedicated Covid-19 API endpoint. In practice, this will allow approved developers and researchers to access public conversations about Covid-19 across languages, resulting in a data set that will include tens of millions of Tweets daily. The data can be used to research a range of topics related to the coronavirus pandemic, including areas like the spread of the disease, the spread of misinformation, crisis management within communities and more. We have also included a separate section in our submission about our developing work around algorithmic transparency, and we will continue to advocate for an open Internet.

 

As a relatively small platform (as designated by the Competition and Markets Authority during their Market Study), we are not immune from the impact of Covid-19. We are endeavouring to respond as quickly as possible to requests from governments, civil society and parliaments - we have testified before two Select Committees on online harms and misinformation in the past two months. In the attached submission, we have covered the areas of interest for the Committee as listed here.

 

Response

 

The nature, prevalence and scale of online harms during the Covid-19 period

 

Information provided in the sections below.

 

We have prioritised making all of our public conversation data about Covid-19 freely available to approved researchers, so that they can assess for themselves the nature, prevalence and scale of online harms during the Covid-19 period. In addition, we continue to engage on an ongoing basis with the Counter Disinformation Unit, DCMS, DHSC, the NHS and the NCA.

 

We have not recommended any adjustments to reporting processes for law enforcement at this time. We continue to use proactive technologies to identify violations of our rules, particularly in surfacing the most egregious content - indeed, when it comes to terrorism and child sexual exploitation, the vast majority of the accounts that break these rules have been identified proactively for quite some time.

 

We continue to share updates on a regular basis on areas like the volume of Tweets we have identified for breaking our rules on Covid-19 harmful misinformation, and the number of suspicious accounts we have challenged. On 22 April, we shared that since introducing our updated policies on 18 March, we have removed over 2,230 Tweets containing misleading and potentially harmful content. Our automated systems have challenged more than 3.4 million accounts targeting manipulative discussions around Covid-19.

 

Steps that could be taken to mitigate these concerns

 

Information provided in the sections below.

 

Our Covid-19 misinformation strategy in the UK below lists the steps we have taken to mitigate Covid-19 misinformation; in the sections below are the wider steps we have taken to mitigate online harms, including during this period. We continue to explore with governments, civil society and industry peers what more we can do.

 

We have raised with the Government on multiple occasions the importance of involving a broader range of stakeholders in these discussions. A wide range of organisations make up the reality of people’s online lives and information environment - app stores, device manufacturers, chat forums and online newspapers. The risks of misinformation posed by newspapers, for instance, are well documented - this is just one example, as shared by a technology correspondent with The Economist in April.

 

If discussions around these concerns are limited to just a small number of social media companies, it will be difficult to develop truly impactful solutions.

 

The adequacy of the Government’s Online Harms proposals to address issues arising from the pandemic, as well as issues previously identified

 

Information provided in the sections below.

 

As I stated at the Home Affairs Select Committee last year, “From our perspective, there are plenty of really positive aspects of the White Paper - not least the creation of a new regulator, which could be a big step forward.” Most importantly of all, however, we are not waiting for regulation. Attached is a timeline of the changes we have made, including in the period since Online Harms was first proposed back in 2017.

 

We do believe that a systemic approach to regulation is critical, as the Duty of Care model appears to outline. As I stated at the Democracy and Digital Technologies Select Committee in March: “If you just focus on the outcomes, such as the number of reports you get about X or the percentage you act on, there is a very real risk of creating perverse incentives. It might encourage a company, for example, to narrowly define its rules. It might encourage a company to make it harder to make reports because it is not prepared to deal with the big volume. It might encourage a company to focus overwhelmingly on responding to reports at the expense of proactive work and investing in technology that could help get at the problem at scale.” I have attached the full transcript.

 

The Government has so far published only an initial response to the Online Harms White Paper. They noted that they were minded to appoint Ofcom as the new regulator, which we supported in our White Paper submission. Once the Government publishes their full response and/or a draft Bill, we will review it and provide feedback.


Covid-19 misinformation strategy in the UK

 

Below is a summary of work undertaken in the UK. It is by no means comprehensive, and so please do let us know if the Committee has any further questions.

 

We share information about our global strategy on an ongoing basis via our Covid-19 blog, and through our partnerships in the UK with government and civil society. As the Culture Secretary stated in his testimony to the DCMS Select Committee in April, the ‘single best thing’ we can do is to drive reliance on reliable narratives. Our entire company is focused on ensuring Twitter surfaces authoritative information on Covid-19. To support that mission, our global Trust & Safety team is continuing its zero-tolerance approach to platform manipulation and any other attempts to abuse our service at this critical juncture.

 

Research

 

        Twitter is the only major service that makes public conversations available for study. Tens of thousands of researchers have used our public API (technical interface to allow access to our public data) over the past decade.

        To provide transparency on the Covid-19 conversation on Twitter and further support the research community, on 30 April we launched a new program that offers free access to a dedicated Covid-19 API endpoint. In practice, this will allow approved developers and researchers to access public conversations about Covid-19 across languages, resulting in a data set that will include tens of millions of Tweets daily. The data can be used to research a range of topics related to the coronavirus pandemic, including areas like the spread of the disease, the spread of misinformation, crisis management within communities and more.

        Twitter’s public archive of state-backed information operations is the largest of its kind in the industry. We continue to publicly disclose state-backed information operations and make datasets available for research. First launched in October 2018, and continuously updated, the archive has been accessed by thousands of researchers from around the world, who in turn have conducted independent, third-party investigations of their own. In total, there are now 28 datasets, comprising 8 terabytes of data.
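As an illustration of how researchers might work with the public conversation data described above, the sketch below aggregates Tweet volumes per language from a stream of Tweet records. The field names ("lang", "text") follow Twitter's public Tweet JSON, but the sample records are invented for illustration, and the real Covid-19 endpoint requires approved developer access.

```python
import json
from collections import Counter

# Illustrative records standing in for the Covid-19 stream payload;
# a real client would read these line-by-line from the API endpoint.
sample_stream = [
    '{"id": "1", "lang": "en", "text": "Latest NHS guidance on Covid-19"}',
    '{"id": "2", "lang": "es", "text": "Consejos sobre Covid-19"}',
    '{"id": "3", "lang": "en", "text": "Handwashing advice"}',
]

def language_volumes(stream_lines):
    """Count Tweets per language, as a researcher might when measuring
    the multilingual spread of the Covid-19 conversation."""
    counts = Counter()
    for line in stream_lines:
        tweet = json.loads(line)
        counts[tweet.get("lang", "und")] += 1  # "und" = undetermined
    return counts

print(language_volumes(sample_stream))  # Counter({'en': 2, 'es': 1})
```

The same aggregation pattern extends to per-day volumes or keyword prevalence once real stream records are substituted for the sample.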

 

Policy

 

        On 18 March, we shared the 11-point criteria our team would be using to assess whether Tweets were considered harmful Covid-19 misinformation and against our rules.

        We broadened our guidance in April on unverified claims that incite people to engage in harmful activity, could lead to the destruction or damage of critical 5G infrastructure, or could lead to widespread panic, social unrest, or large-scale disorder.

        On 22 April, we shared that since introducing our updated policies on 18 March, we had removed over 2,230 Tweets containing misleading and potentially harmful content.

        We continue to advise on our website that users should follow @WHO and their local health ministry, seek out authoritative health information, and ignore the noise. We ask them to report anything suspicious or abusive to us immediately. Most importantly, we advise users to think before they Tweet.

 

Partnerships

 

        Over 40 UK government partners and health authorities are onboarded to our Partner Support Portal, which enables trusted partners to expedite a wide range of issues to a dedicated team within Twitter.

        In April, we provided the government with pro bono advertising space and support to promote key #StayHomeSaveLives public health messaging. Not only has this had significant reach, but we also saw people actively participating - committing to #StayHomeSaveLives and the key behaviours required to keep us all safe.

        We also continue to provide pro bono advertising space and support to a number of charities, including around fact-checking and digital literacy. More broadly, for educators and parents, we have a specific media literacy guide, created in partnership with UNESCO. We provided this to Ofcom to share with their wider Making Sense of Media Network. In April, we ran a campaign with UNESCO and the EU promoting media and information literacy. Throughout the week, they shared best-in-class ways to critically analyse what we engage with online and encouraged people to #ThinkBeforeSharing.

 

Product

 

        Six days before the official designation of the virus in January, we partnered with the Department of Health and Social Care to launch a dedicated prompt feature at the top of Search results. This ensures that when you come to the service for information about Covid-19, you are directed to the NHS website, where you are met with credible, authoritative content. In May, we developed this further in partnership with the Department for Digital, Culture, Media and Sport - such that when people search for 5G information on Twitter, they are served a prompt encouraging them to ‘Know the facts’, stating that the UK Government has seen no evidence of a link between 5G and Covid-19, and directing them to a new Gov.uk resource.

 

 

        Twitter Moments are curated series of Tweets that tell a story. We have ensured a live, up-to-date Moment, with the latest Tweets from the key UK government and health accounts about Covid-19, is available at the top of the home timeline for all UK users. We have seen a 45% increase in usage of these pages globally.

 

        We shared on 22 April that our automated systems have challenged more than 3.4 million accounts targeting manipulative discussions around Covid-19.

 

 

Wider work

 

Data for researchers

 

Twitter is the only major service that makes public conversations available for study. Tens of thousands of researchers have used our public API (technical interface to allow access to our public data) over the past decade. As we testified at the Democracy and Digital Technologies Select Committee in March: “The fact that we are a public, open platform, and researchers publish information and findings on Twitter data all the time, helps get at some of these issues [around online safety]. The challenges on Twitter are well documented. We want to double down on research to support that and work very closely in collaboration with external experts as we find solutions.” I have attached the full transcript.

 

Outside research helps inform our work and measure progress. The University of Oxford Internet Institute found at the end of last year, for instance, that fewer than 2% of links shared on Twitter during the General Election were identified as ‘Junk News’, a tenth of what they had found in 2017 - indeed, a majority were professional news content. Similarly, research published by the University of Salford in July 2019 demonstrated the value of Twitter in counteracting untruths and hateful discourse. The study, which analysed more than 46,000 tweets sent in the four days after the Grenfell fire (June 2017), found that Twitter was a great platform for spreading positive narratives about Muslims, enabling individuals to spontaneously contest fake news and hate narratives.

 

Algorithmic transparency

 

Transparency, explainability, and consumer choice are critical principles that we are prioritising. Please see below information about how our algorithms work and the data we collect, as well as our ongoing work in these areas.

 

        On the Twitter home timeline, you may have the ranking algorithm turned on. Historically, you would simply see Tweets from accounts you followed, in reverse chronological order - that was the case for most of Twitter’s existence. In 2016, we launched a ranking algorithm following user feedback; research has shown that people find Twitter more useful when they are shown the most relevant Tweets first. The key aspect, however, is that we give all users the option to turn the ranking algorithm off. On mobile, there is a sparkle icon in the top-right corner where you can turn the algorithm off and once again see Tweets in reverse chronological order from accounts you follow.

        In conversations, we strive to show content to people that we think they will be most interested in and that contributes meaningfully to the conversation. For this reason, the replies, grouped by sub-conversations, may not be in chronological order. For example, when ranking a reply higher, we consider factors such as whether the original Tweet author has replied, or whether a reply is from someone the individual follows. We also take a range of behavioural signals into account, including when considering replies we might automatically hide. Some examples of behavioural signals we use to help identify this type of content include: an account with no confirmed email address, simultaneous registration of multiple accounts, accounts that repeatedly Tweet at and mention accounts that do not follow them, or behaviour that might indicate a coordinated attack. As a result, we have seen 8% fewer abuse reports from conversations.

        More broadly, we give all users transparency and meaningful controls over the data we collect, how it is used, and when it is shared. In the ‘Your Twitter Data’ section of users’ settings, for instance, users can see the types of data stored by Twitter - such as username, email, phone number and account creation details. If a user has shared a birthday or location with us, that will also be shown here. Through this tool, users can modify certain information we may have inferred, such as gender, age range, language, and location. Users can also download a copy of their data from Twitter through this tool, and review other inferred information - such as which advertisers have included them in their tailored audiences, and demographic and interest data from external ads partners. An individual can disable all personalisation and data settings with a single master toggle within their settings.
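The reply-ranking factors described above can be sketched as a simple scoring function. The weights, field names and hide threshold below are illustrative assumptions, not Twitter's actual model: replies from the original author or from accounts the viewer follows rank higher, while spam-like behavioural signals push a reply towards being hidden.

```python
# Hypothetical reply-ranking sketch; every weight here is an assumption.
def score_reply(reply, viewer_follows, original_author):
    score = 0
    if reply["author"] == original_author:
        score += 3  # the original Tweet author replying in their own thread
    if reply["author"] in viewer_follows:
        score += 2  # reply from someone the viewer follows
    if not reply["email_confirmed"]:
        score -= 2  # behavioural signal associated with spam accounts
    if reply["unsolicited_mentions"]:
        score -= 2  # repeatedly mentioning accounts that do not follow back
    return score

def rank_replies(replies, viewer_follows, original_author, hide_below=-1):
    """Return (visible, hidden) replies; visible ones sorted best-first."""
    scored = [(score_reply(r, viewer_follows, original_author), r) for r in replies]
    visible = [r for s, r in sorted(scored, key=lambda x: -x[0]) if s >= hide_below]
    hidden = [r for s, r in scored if s < hide_below]
    return visible, hidden

replies = [
    {"author": "alice", "email_confirmed": True, "unsolicited_mentions": False},
    {"author": "spambot", "email_confirmed": False, "unsolicited_mentions": True},
]
visible, hidden = rank_replies(replies, viewer_follows={"alice"}, original_author="bob")
```

Here the followed account surfaces first while the reply carrying both spam signals falls below the hide threshold.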

 

To further understand the impact of algorithms, we are partnering with researchers at the University of California, Berkeley to establish a new research initiative focused on studying and improving the performance of machine learning (ML) in social systems such as Twitter. Studying the societal impact of algorithms is a growing area of research in which Twitter will continue to support and participate.

 

In the meantime, we have banned all political advertising globally. Political advertising that uses micro-targeting presents entirely new challenges to civic discourse that are not yet fully understood. Political message reach should be earned, not bought, and we therefore will continue to not allow advertising to drive political, judicial, legislative, or regulatory outcomes.

 

Content moderation

 

As we shared when we testified at this Committee last year, we have found we are much more effective at tackling harmful content through increased use of technology. Now, 1 in 2 Tweets we take down for abuse is detected proactively; for the most egregious content (like terrorism and child sexual exploitation) the proportion is far higher.

 

Our priority is the health and safety of our staff. With that in mind, on 11 March we informed all employees globally they must work from home. For contractors and hourly workers who are not able to perform their responsibilities from home, Twitter will continue to pay their labour costs to cover standard working hours while these restrictions are in effect.

 

We are working on an ongoing basis to enable staff to perform their work from home - safely and securely. We are, however, not immune from impact. We are providing regular updates on our website on what you can expect if you make a report to Twitter, flagging that it will take longer than normal for us to respond. To further protect the conversation, we are:

 

        Instituting a global content severity triage system so we are prioritising the potential rule violations that present the biggest risk of harm and reducing the burden on people to report them.

        Executing daily quality assurance checks on our content enforcement processes to ensure we’re agile in responding to this rapidly evolving, global disease outbreak.

        Engaging with our partners around the world to ensure escalation paths remain open and urgent cases can be brought to our attention.

        Continuing to review the Twitter Rules in the context of Covid-19 and considering ways in which they may need to evolve to account for new behaviours.

 

We are also increasingly using technology:

 

        To help us review reports more efficiently by surfacing content that's most likely to cause harm and should be reviewed first.

        To help us proactively identify rule-breaking content before it's reported. Our systems learn from past decisions by our review teams, so over time, the technology is able to help us rank content or challenge accounts automatically.
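The severity triage and harm-ranking approach described above can be sketched as a priority queue over reported content. In production, the predicted-harm score would come from models trained on past review decisions; the keyword stub and scores below are purely illustrative assumptions.

```python
import heapq

HIGH_RISK_TERMS = {"terrorism", "exploitation"}  # illustrative stand-in

def predicted_harm(text):
    """Stub for a trained model scoring how likely content is to cause harm."""
    words = set(text.lower().split())
    return 0.9 if words & HIGH_RISK_TERMS else 0.1

class ReviewQueue:
    """Surface the highest-risk reports for human review first."""
    def __init__(self):
        self._heap = []
        self._counter = 0  # tie-breaker so equal scores keep arrival order

    def report(self, text):
        # Negate the score: heapq is a min-heap, we want highest harm first.
        heapq.heappush(self._heap, (-predicted_harm(text), self._counter, text))
        self._counter += 1

    def next_for_review(self):
        return heapq.heappop(self._heap)[2]

queue = ReviewQueue()
queue.report("mildly rude reply")
queue.report("content promoting terrorism")
print(queue.next_for_review())  # content promoting terrorism
```

The same structure accommodates proactive detections: content flagged by automated systems is simply pushed onto the queue with its model score, without waiting for a user report.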

 

Zero tolerance of child sexual exploitation

 

Twitter has zero tolerance towards any material that features or promotes child sexual exploitation. Any content depicting or promoting CSE is removed without further notice, and reported to the National Center for Missing & Exploited Children (NCMEC).

 

In addition to our relationship with NCMEC, Twitter is an active member of the Technology Coalition. This industry-led non-profit organization strives to eradicate child sexual exploitation by mentoring emerging or established companies, sharing trends and best-practices across industry, and facilitating technological solutions across the ecosystem.

 

The latest Twitter Transparency Report, which covers the period 1 January to 30 June 2019, details the progress we have made in this space: Twitter removed 244,188 accounts for violations of our child sexual exploitation policies. Of those unique accounts suspended, 91% were flagged by technology (including PhotoDNA and internal, proprietary tools).
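The hash-matching flow behind tools like PhotoDNA can be sketched in outline. PhotoDNA itself computes a proprietary perceptual hash that is robust to resizing and re-encoding; the sketch below substitutes an exact SHA-256 digest purely to show the match-against-known-hashes flow, and the example data is invented.

```python
import hashlib

# In practice, hashes of known violating media are supplied through
# industry hash-sharing programmes rather than computed locally.
known_hashes = set()

def register_known(content: bytes):
    known_hashes.add(hashlib.sha256(content).hexdigest())

def is_known_violation(upload: bytes) -> bool:
    """Flag an upload if its hash matches the known-violation list."""
    return hashlib.sha256(upload).hexdigest() in known_hashes

register_known(b"previously identified media")
print(is_known_violation(b"previously identified media"))  # True
print(is_known_violation(b"benign holiday photo"))         # False
```

An exact digest only matches byte-identical files, which is why production systems use perceptual hashing instead; the queueing and reporting flow around the match is the same.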

 

Critically, we are not a service that targets a young audience. According to Ofcom, 0% of 12-15 year olds nominate Twitter as their main social media platform. Last month, we were pleased to partner with the IWF to launch a campaign in the UK promoting online safety advice to help parents and carers keep their children safe during the Covid-19 lockdown. We continue to engage regularly with the NCA and the Home Office.

 

May 2020

 

Timeline of safety changes 2014-2020