
 

 

 

Submission to the House of Lords Select Committee on Democracy and Digital Technologies

 

This Avaaz submission focuses on the rise of disinformation powered by digital technology, how it undermines democracy, and Avaaz’s workable policy solutions, in particular requiring tech platforms to Correct the Record for everyone exposed to disinformation. Our reports and the evidence we have collected over the past year have convinced us that only regulation will ensure this principle is adopted widely.

 

Summary of the Avaaz submission:

 

Avaaz’s members -- 1.8 million in the UK, 20 million in the EU, and 53 million globally -- are deeply concerned about the damage disinformation is doing to democracies. Large-scale, carefully orchestrated disinformation campaigns have succeeded in sowing distrust and division across Europe and in the UK. Over the last two years Avaaz has rapidly built up expertise in tracking these dangerous networks.

 

In the run-up to the European Parliament elections in 2019, Avaaz’s investigative reporting on the distortion of political discourse through the abuse of social media platforms prompted Facebook to take action on pages and groups across Europe whose content reached more than 750 million views in just the three months prior to the elections[1]. Facebook was not the only source -- in Spain, our investigators found that more than one in four Spanish voters had been reached by posts on WhatsApp that they thought were false, misleading, hateful or posed a serious threat to healthy public discourse. There is also extensive evidence of the use of YouTube by malicious actors to host disinformation.

 

Our work has uncovered an urgent need for action to slow the spread of disinformation and hate speech, particularly in the context of elections, and growing evidence that the platforms continue to fail to keep pace with the problem’s changing global nature. 77% of citizens across the United Kingdom think social media platforms should be regulated to protect our societies from manipulation, fake news and data misuse.

 

This urgency seems to be unrecognised by the major social media platforms. On September 24, 2019, Facebook’s VP of Global Affairs and Communications, Nick Clegg, delivered a speech on Facebook, Elections and Political Speech[2] in which he described the action the company had taken in the past to combat electoral disinformation, before announcing that Facebook would from now on downgrade its approach to politicians’ speech: politicians’ posts would no longer be subject to third-party fact-checking.[3]

 

We fully support free, open and honest discourse on Facebook, but as we will evidence in this report, we have seen examples both in the UK and across the world of politicians who peddle disinformation and exploit divisive issues like race or immigration when it serves their goals. The boundaries of political speech are being pushed to their limits in the current unstable world order, and platforms cannot be allowed to simply throw up their hands and do nothing.

 

This is particularly so because it is simple, and within the current technical capabilities of the platforms, to protect against disinformation without impacting freedom of expression and without putting either the government or the platforms in charge of deciding what is true and what is false. We urge the House of Lords to consider the following legislative principles as it weighs the evidence before it.

 

Our Solution? Five Legislative Principles

 

Avaaz has consulted deeply with academics and lawmakers across the globe, as well as civil society and social media executives, and developed five legislative principles[4] which should form the basis for any regulatory regime. These are required to ensure all online platforms[5] effectively counter disinformation.

 

1) Correct the Record 

Correct the Record exposes disinformation and other manipulations, educates the public, and reduces belief in lies. The policy requires platforms to inform users and push effective corrections to every person who saw information that independent fact checkers have determined to be disinformation. This solution would tackle disinformation while preserving freedom of expression. We have seen reports that Facebook itself has conducted vigorous fact checking programmes in defence of its own reputation, under its “Stormchaser” initiative6. There is no reason why it should reject this approach in relation to election disinformation, a much more serious issue.

 

2) Detox the Algorithm

 

3) Transparency

In the evolving defence of our democracies against disinformation, it is essential that governments, civil society, and the general public be informed about the nature and scale of the threat, and about the measures being taken to guard against it. Online platforms must therefore provide periodic reports listing the disinformation found on their services, the number of bots and inauthentic accounts that were detected, what actions were taken against those accounts and bots, and how many times users reported disinformation. The reports must also detail platforms’ efforts to deal with disinformation.

 

4) Ban Fake Accounts and Unlabelled Bots

Platforms must ban fake accounts and unlabelled bots that act as conduits for disinformation, take action to track down the networks that create and run them, close the loopholes that allow them to multiply, and reduce the incentives provided by their own services.

 

5) Label Paid Content and Disclose Targeting

Transparency should apply to all paid communications, political and non-political, as citizens ought to be able to know who paid for an advertisement and on what basis the viewer was targeted. Additionally, platforms should label state-sponsored media, particularly from countries that deploy state media to push propaganda, in order to increase transparency by disclosing where content is coming from.

 

Avaaz will go into greater detail on each of these solutions in answer to your specific questions below.

 

Avaaz Evidence to the Committee’s questionnaire

 

General

1. How has digital technology changed the way that democracy works in the UK and has this been a net positive or negative effect?

 

The issues

There are distinct positives where digital technology has promoted genuine debate, empowered individuals and created a new form of engagement with politics and policy-making. However, the positive effects of digital technologies rest on a few assumptions, one of which is that individuals are engaging authentically as themselves. But Avaaz has uncovered significant evidence of the use of fake accounts and bots to amplify minority opinions so that they appear to have more support and sway than they really do.

 

Ahead of the EU elections, Avaaz conducted investigations in six key EU countries and unearthed large networks of disinformation on Facebook. This was the first investigation of its kind, and it uncovered that far-right and anti-EU groups were weaponising social media at scale to spread false and hateful content in an attempt to influence the elections by making their views seem to have more widespread currency and support than they actually did. Our findings were shared with Facebook and resulted in an unprecedented shutdown of Facebook pages just before voters headed to the polls.

 

Media coverage of Avaaz’s investigations:

Guardian

Deutsche Welle

Euractiv

Wired

 

The Evidence:

 

Fake Accounts in use by politicians and supporters of the AfD party in Germany ahead of the European Election.

 

In May 2019 the AfD party in Germany was standing on an anti-EU, anti-immigration policy platform. We discovered multiple instances of social media manipulation through fake and duplicate accounts in violation of Facebook’s terms and conditions. When we reported this to Facebook, it shut down these accounts, although the content they spread would have been seen by, and potentially influenced, many voters before Facebook did so.

 

In its simplest form we saw politicians creating multiple accounts for themselves to amplify and repost their content to different audiences. For example, Laleh Hadjimohamadvali was a candidate for the AfD in the elections for the Landtag (state parliament) in Saarbrücken and the Bundestag (federal parliament) in 2017. She then stood as a candidate for mayor in Saarbrücken, and during this period we observed four accounts, each clearly showing Laleh Hadjimohamadvali’s image on their profile. It is a breach of Facebook’s Community Standards on Misrepresentation to “maintain multiple accounts.” We found personal videos and photos of Hadjimohamadvali across all four accounts, and three of the four profiles were “friends” with each other. On May 17, within 24 hours of being alerted to the accounts, Facebook removed the Laleh Mohamad account.

 

Beyond this simple form of influence, we discovered another AfD politician with 22 fake and duplicate accounts as “friends”. These “friends” reposted the politician’s content, making it seem more popular than it really was. This politician had been identified as posting known disinformation, including fake quotes falsely attributed to the Vice-President of the EU Commission, Frans Timmermans[6]. Under Nick Clegg’s announcement, such posting of disinformation would not be subject to third-party fact-checking simply because it comes from a politician; worse, this network of fake “friends” helped make the message appear more popular than it actually was.

 

It is possible that when the politician befriended these profiles he was not aware that they were inauthentic, but when we reported them to Facebook, seven of the 22 profiles were immediately shut down.

 

Worryingly, we traced an even more organised network of fake accounts deliberately reposting content. We found a network of AfD fan pages and groups acting together, closely coordinating their posting, with each of them publicly supporting the AfD party in Germany. Our evidence showed that these apparently separate groups and pages led back to the profile of just one individual, Johann Z., a self-described AfD supporter living in Germany. See the links in the image below.

 

 

The following screenshots show how these pages and groups were used to spread content, sharing one article almost simultaneously and thereby boosting its apparent popularity on Facebook.

 

 

 

We found this method of boosting the apparent appeal and popularity of minority polarising views across all the countries we investigated in Europe, and have detailed further instances in Spain, Poland and France in our report.

 

 

 

 

The Solution

 

Fake accounts and unlabelled bots act as conduits for disinformation and harm voters in precisely the same way that misleading advertising and unfair business practices harm consumers. Banning them must therefore be mandatory on all platforms. Many platforms’ guidelines and policies already include this ban, but they are underperforming on actively searching for fake accounts, closing the loopholes that allow them to multiply, and reducing the incentives provided by their own services that favour the existence of bots.

 

Bots must be prominently and effectively labelled, and all content distributed by bots must prominently include the label and retain it when the content or message is shared, forwarded, or passed along in any other manner. All such labels must be presented and formatted in a way that makes it immediately clear to any user that they are interacting with a non-human.
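As a minimal sketch of how such label retention might be implemented (all field and function names here are hypothetical, not any platform’s actual data model), a message record can carry a bot-origin flag that every forwarding operation copies rather than clears:

from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Message:
    text: str
    author: str
    bot_origin: bool          # True if the content was originally distributed by a bot
    forwarded_from: str = ""  # empty if this is an original post

def forward(original: Message, new_sender: str) -> Message:
    # The bot_origin label is copied, never cleared, so the disclosure survives
    # sharing, forwarding, or any other redistribution of the content.
    return replace(original, author=new_sender, forwarded_from=original.author)

def render(msg: Message) -> str:
    label = "[AUTOMATED ACCOUNT] " if msg.bot_origin else ""
    via = f" (forwarded from {msg.forwarded_from})" if msg.forwarded_from else ""
    return f"{label}{msg.author}{via}: {msg.text}"

post = Message(text="Breaking news...", author="news_bot_01", bot_origin=True)
print(render(forward(post, new_sender="ordinary_user")))
# -> [AUTOMATED ACCOUNT] ordinary_user (forwarded from news_bot_01): Breaking news...

The design point is simply that the label travels with the content rather than with the account displaying it.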

 

2. How have the design of algorithms used by social media platforms shaped democratic debate? To what extent should there be greater accountability for the design of these algorithms? 

 

The issues

This section deals with the vital issue of the promotion and monetisation of disinformation through the platforms’ failure to ensure their algorithms do not supercharge disinformation and increase its spread. Decoupling the financial incentive to push disinformation, by ensuring algorithms no longer promote it, is a simple and key way forward in the fight against its pernicious effects.

 

Platforms use algorithms to determine when and in what order users see content that the algorithm determines may be of interest to them to keep them viewing the service.[7] Algorithmic content curation like this has important consequences for how individuals find news but instead of human editors selecting important sources of news and information for public consumption, complex algorithmic code determines what information to deliver or exclude. Popularity and the degree to which information provokes outrage, confirmation bias, or engagement are increasingly important in driving its spread.[8] The speed and scale at which content “goes viral” grows exponentially, regardless of whether or not the information it contains is true. Although the Internet has provided more opportunities to access information, algorithms have made it harder for individuals to find information from critical or diverse viewpoints.[9]

 

In this way, without proper care and oversight, display algorithms can increase user engagement with disinformation and other harmful content. These algorithms can be gamed to ensure that the most divisive and sensationally fake news quickly reaches viral status. Platforms should transparently adapt their algorithms to ensure that they are not themselves exponentially accelerating the spread of disinformation. And although social media platforms need virality for profitability, it is crucial for them to monitor exactly what is going viral: an MIT study found that falsehoods on Twitter spread six times faster than true news.[10]

 

The Evidence

 

For example, in our work ahead of the 2019 EU elections, Avaaz identified and investigated three disinformation networks in France and submitted our findings to Facebook. Together, these networks had a total reach of over 1.7 million followers. We identified different tactics that used platform and web search algorithms to push disinformation. Our full report is here.

 

Amplifying the reach of content through multiple posting

 

One network we found, La Gauche M’a Tuer, systematically spread false news on a large scale, its posts being amplified by a network of accounts that reposted its content. Together, the three pages in this network had 526,339 followers. Facebook demoted La Gauche M’a Tuer’s official page, as well as pages that worked to amplify its content, for their repeated spread of fake news. This was in line with its new “Reduce, Remove, Inform” strategy, which includes “reducing the reach of Facebook groups that repeatedly share misinformation.” Demoted content does not show up in news feeds or recommendations but can only be found if specifically searched for. This effectively detoxifies the platform’s algorithm.

 

Boosting the search ranking of disinformation websites through multiple social media postings

During our investigation we also identified another side to the gaming of search algorithms. Low-authority websites were found to be using a network of social media accounts, posting multiple times through these accounts to boost the number of times their sites were visited. This increases the likelihood of the sites being featured more prominently in crucial internet rankings, boosting the value of any disinformation they peddle.

 

The process is explained by a leading internet search optimization company, MOZ. They explain: “Ranking refers to the process search engines use to determine where a particular piece of content should appear on a SERP. Search visibility refers to how prominently a piece of content is displayed in search engine results. Highly visible content (usually the content that ranks highest) may appear right at the top of organic search results or even in a featured snippet, while less-visible content [i.e. low-authority content] may not appear until searchers click to page two and beyond”[11].

 

We identified three “alternative media” networks engaging in what Avaaz reported as containing elements of spam behaviour to boost three political websites, all engaged in the promotion of politicians and policies supporting Brexit: Political UK2, UK Unity News Network and The Daily Brexit. Together, these networks had 1.17 million followers. As part of our analysis we mapped 709 Facebook pages in the UK and collected 190,745 posts they sent between February 3 and May 7, 2019. We observed that these “alternative media” all appeared in the top 20 most-shared websites in the UK, sometimes above The Guardian and BBC UK. Together, these networks had nearly 5 million interactions over those three months.

 

This was highly inconsistent with their status as “low-authority sites”, an independently ranked metric13 commonly used to detect spam links. According to independent rankers, one of these outlets, “The Daily Brexit,” had an authority ranking of 0.4 on a scale from 0 to 100. We undertook statistical analysis of the sites’ social media activity and found a pattern of cross-posting that amplified these Brexit-supporting sites’ content.

 

 

This led us to investigate the sites, where we discovered that they were reposting content multiple times on Facebook pages and groups, at almost exactly the same time, through automated posting or coordination by a handful of individuals. An example below shows posts shared at the exact same time, on May 4, 2019, at 7:57pm, on pages that are part of a network.
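The coordination pattern described above, the same link posted by many distinct pages within the same minute, lends itself to straightforward automated detection. The following is a minimal sketch using purely illustrative data (not our actual dataset); it flags any URL shared by several distinct pages inside a short time window:

from collections import defaultdict
from datetime import datetime

# Each record: (page_name, url_shared, timestamp). Illustrative values only.
posts = [
    ("Example Page A", "http://example-site/article-1", "2019-05-04 19:57:00"),
    ("Example Page B", "http://example-site/article-1", "2019-05-04 19:57:00"),
    ("Example Page C", "http://example-site/article-1", "2019-05-04 19:57:30"),
    ("Unrelated Page", "http://other-site/story",       "2019-05-04 08:12:00"),
]

def coordinated_shares(posts, window_seconds=60, min_pages=3):
    """Group shares of the same URL and flag bursts in which several distinct
    pages posted it within a short window -- a signature of automated or
    centrally coordinated posting."""
    by_url = defaultdict(list)
    for page, url, ts in posts:
        by_url[url].append((datetime.fromisoformat(ts), page))

    flagged = []
    for url, events in by_url.items():
        events.sort()
        for t0, _ in events:
            burst = {p for t, p in events if 0 <= (t - t0).total_seconds() <= window_seconds}
            if len(burst) >= min_pages:
                flagged.append((url, sorted(burst)))
                break
    return flagged

print(coordinated_shares(posts))
# -> [('http://example-site/article-1', ['Example Page A', 'Example Page B', 'Example Page C'])]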

 

 

It is clear from the actions Facebook took with the French networks that it can apply the brakes on the spread of disinformation and ‘teach’ the algorithm to slow down the spread of certain content. This brings us to the solution of how to ‘detox the algorithm’.

 

The Solution

 

That’s why comprehensively detoxifying the platforms’ algorithms is crucial. “Detoxing the Algorithm” means adjusting social media platforms’ content curation algorithms to ensure that they exclude disinformation, pages belonging to disinformation accelerators and malicious actors, and other harmful content from their recommendations to viewers. This would ensure that the recommendation, search, and newsfeed algorithms are not abused by malicious actors, and that the reach of disinformation is sidelined rather than boosted by these platforms.

 

Avaaz believes that efforts to Detox the Algorithm should be based on these important principles:

 

 

 

Consequences: Downgrade, Correct and Demonetise Repeat Disinformers. We go into more detail on our Correct the Record solution below, but we believe deterrents for disseminating disinformation should be consistent, should form part of the recommendation system’s algorithm, and should demote all recognised and verified disinformation. A three-strikes rule might apply: when a content provider is caught peddling disinformation three times, the platform would ensure that all of that provider’s (or channel’s) content is demonetised and no longer accelerated by the recommendations algorithm.
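The three-strikes logic can be expressed very simply. The sketch below is purely illustrative of that rule; the threshold and the consequences are policy choices, and the function names are hypothetical rather than any platform’s API:

from collections import Counter

STRIKE_LIMIT = 3        # policy choice: three independently verified disinformation findings

strikes = Counter()     # provider or channel id -> number of verified strikes
demonetised = set()

def record_verified_disinformation(provider_id: str) -> None:
    """Called each time independent fact-checkers verify a piece of the
    provider's content as disinformation."""
    strikes[provider_id] += 1
    if strikes[provider_id] >= STRIKE_LIMIT:
        demonetised.add(provider_id)

def eligible_for_recommendation(provider_id: str) -> bool:
    # Content from repeat disinformers is no longer accelerated by the
    # recommendation algorithm (and is no longer monetised).
    return provider_id not in demonetised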

 

For encrypted messaging services, steps to prevent the further spread of disinformation could include:

 

Creating a separate databank of flagged disinformation content spreading on the platform, and providing users with access to a special fact-checking database where they can forward problematic content they receive to be reviewed by independent fact-checkers.

 

Platforms must work with independent, third-party, qualified fact checkers to determine whether content is disinformation, and ensure transparency over decisions made regarding the spread of certain content and channels. The “facts” are not for governments or platforms to determine, but for qualified, independent fact-checkers.

 

 

 

Education 

3. What role should every stage of education play in helping to create a healthy, active, digitally literate democracy?

 

Education and resilience are crucial, and Correct the Record operates to build this: content is not removed, it is simply corrected. It adds the truth and leaves the lies transparently corrected. With each correction, users are given a further piece of crucial information about which sites to trust and how to assess the credibility of the content they see. This is an important educative function that safeguards free expression.

 

4. Would greater transparency in the online spending and campaigning of political groups improve the electoral process in the UK by ensuring accountability, and if so what should this transparency look like?

 

One of the reasons Avaaz is campaigning for regulation of the tech giants is that unless they are legally required to be transparent, they can continue to play the role of judge, jury, prosecutor and witness all at the same time. 

 

There is therefore a very urgent requirement for transparency from the platforms to help lawmakers, elections officials and civil society stay ahead of the tactics employed by those who seek to influence election outcomes. Disinformation campaigns are relatively cheap, so while transparency around online spending is very important, transparency about the scale of disinformation is equally so.

 

Online platforms and messaging services should publish periodic transparency reports, conducted by independent auditors, on action taken against the dissemination of disinformation.  In the months preceding elections, online platforms and messaging services shall produce such reports on a monthly basis.

 

Transparency reports shall include at least the following information:

●        information about the online platform or messaging service’s measures in relation to the detection, identification and correction of disinformation;

●        information about the online platform or messaging service’s measures to limit the spread of disinformation;

●        statistics on the number of pieces of disinformation detected and the number for which corrections have been issued, following user reports or proactive measures respectively, including the number of views of each piece of disinformation and the number of users exposed to it;

●        an overview of the nature and outcome of complaint procedures.

 

5. What effect does online targeted advertising have on the political process, and what effects could it have in the future? Should there be additional regulation of political advertising?

Privacy and anonymity

 

The issue

 

Many platform providers already require advertisers wishing to run political ads to verify their identity and location with proof of ID. In order to protect citizens from disinformation warfare, these standards of transparency should apply to all paid political communications, not just those that come directly from political parties or their foundations.

 

The Solutions

 

We believe the following principles are crucial to any suggested regulation of political advertising content.

 

1) Label Paid Content and Disclose Targeting

 

Transparency should apply to all paid communications, political and non-political. Citizens ought to be able to know who paid for advertising content and on what basis the viewer was targeted. All online advertising should include the following information:

 

●        A prominent “paid for by” label clarifying for users that they are seeing paid-for content and identifying who paid for the content;

●        An easily recognisable button that leads the viewer directly to information showing the name of the entity paying for the ad, its contact information, and the targeting of the ad;

●        If the ad is paid for by a political party, the party name must also appear in the label;

●        Users should be given an accessible and prominent way to access all ads the payer has run in the preceding 12 months, including all the ads it is currently running;

●        These requirements apply to any advertising, paid content, or boosted content in any format (post, news item, article, video, animation, audio or mixed material);

●        Platforms must label state-sponsored media in order to increase transparency by disclosing where content is funded or editorially controlled from.

 

2) Remove all paid content that contains disinformation

 

The issue

 

Facebook's current rules state: “Facebook prohibits ads that include claims debunked by third-party fact checkers or, in certain circumstances, claims debunked by organizations with particular expertise. Advertisers that repeatedly post information deemed to be false may have restrictions placed on their ability to advertise on Facebook.” Facebook then gives the following examples:

●        Ads containing claims which are debunked by third-party fact checkers;

●        Ads which include misinformation about vaccines as identified and verified by global health organizations such as the World Health Organization[12].

 

But Facebook has exempted politicians from even this requirement not to peddle proven disinformation. Nick Clegg’s speech of 24 September stated “This means that we will not send organic content or ads from politicians to our third-party fact-checking partners for review. However, when a politician shares previously debunked content including links, videos and photos, we plan to demote that content, display related information from fact-checkers, and reject its inclusion in advertisements.” The exemption is already being exploited.

 

The Evidence

 

On Tuesday 2 October 2019, Donald Trump posted an advert which Facebook’s own fact checkers (Politifact and Factcheck.org) had already indicated was false.

 

 

On September 24, Factcheck.org wrote: "[T]here is no evidence that Hunter Biden was ever under investigation or that his father pressured Ukraine to fire Shokin on his behalf."

 

On September 23, Politifact wrote: "Hunter Biden did do work in Ukraine, but we found nothing to suggest Vice President Biden acted to help him." 

 

A Facebook spokesperson reportedly confirmed that the ad does not violate Facebook's policies because political ads are ineligible for fact-checking. As of 4 October 2019, and despite international press coverage of the post[13] and its disinformation, this post was still available on Donald Trump's Facebook feed.[14]

 

The Biden campaign wrote to Facebook challenging a video advertisement put out by the Trump re-election campaign that has been viewed over 5 million times despite the falsehood it peddles. But Facebook told the Biden campaign that it would not take the ad down because of “Facebook’s fundamental belief in free expression.” CNN rejected this ad, and in a statement[15] justifying the rejection as failing to meet its advertising standards, it added:

 

“In addition to disparaging CNN and its journalists, the ad makes assertions that have been proven demonstrably false by various news outlets, including CNN.” 

 

Yet platforms like Facebook, Twitter and YouTube are profiting enormously from spreading these lies. According to figures released by Facebook, the Trump re-election campaign has spent approximately $5 million on Facebook advertising while the Biden campaign has spent around $700,000.[16]

 

 

The Solution

 

Avaaz believes that disinformation that has been debunked by fact-checkers should be barred from paid content, including online political advertising.

 

6. To what extent does increasing use of encrypted messaging and private groups present a challenge to the democratic process? And

7. What are the positive or negative effects of anonymity on online democratic discourse?

Democratic debate

We have dealt with these two questions together.

The Issue

Encrypted messaging services have become a favoured tool for disseminating disinformation.

The Evidence: 

In Brazil, a massive investigation by the newspaper Folha de Sao Paulo before the 2018 presidential run-off election found that businesses allied to the Bolsonaro campaign had set up a large-scale illegal mobilisation on WhatsApp of up to 300,000 groups, reaching up to 77 million Brazilians. The matter was placed under investigation by Brazil’s electoral tribunal.[17]

In Spain, in 2019 Avaaz’s investigation into WhatsApp revealed that 9.6 million potential voters in Spain (26.1%) had likely seen false, misleading, racist or hateful posts on WhatsApp. That was more than all the disinformation found during the investigation on YouTube, Twitter and Instagram combined -- and was almost as much as we found on Facebook (27.7%).[18]

In India, Amit Shah, the current Minister of Home Affairs who has been President of the ruling Bharatiya Janata Party (BJP) since 2014, summarised the power of WhatsApp in this way: “We are capable of delivering any message we want to the public, whether sweet or sour, true or fake. We can do this work only because we have [3.2 million][19] people in our WhatsApp groups. That is how we were able to make this viral.”[20]  Underscoring the power of social media he said, “It is through social media that we have to form governments at the state and national levels.”[21] 

One investigation[22] of over 60,000 messages from 140 pro-BJP WhatsApp groups in India reported that disinformation and hate speech against Muslims occurred in nearly a quarter of the messages. The report found that 23.84% of the messages shared in these groups were Islamophobic with the potential to ‘incite violence.’ Much of this hate speech was embedded in disinformation claiming that Hindus were in some way in danger in India because of the presence of Muslims and/or that Hindus will eventually become a minority in India. (The Hindu population currently makes up 79.8% of the overall population compared to Muslims who make up 14.2%.[23])

The hold such disinformation has on majority views is not to be underestimated as this Avaaz-Asia Research Partners poll conducted in India in July 2019 shows:

The Solutions

It’s clear that WhatsApp and other messaging services, whether encrypted or not, should be included in any anti-disinformation legislation because they have the ability to “broadcast” and amplify messages. Fortunately, technical solutions are possible that would allow providers of encrypted messaging services to alert users to the presence of disinformation without breaking encryption. This preserves proportionality between combating disinformation and preserving privacy.

We suggest that the providers of encrypted services should be required to incorporate features into their applications that check unencrypted, locally stored information against known disinformation. Recognising the difficulty of regulating encryption, we believe that the following five principles could guide the design of any encrypted online service.

1)      Transparency of senders

Encrypted services should provide transparency to users for all forwarded messages, messages written by a third party, and messages sent by a bot. All forwarded messages must come with an accessible and prominent label indicating that the message is a forwarded message from a third party. WhatsApp has produced a list of hints and tips to help users gauge whether they have been forwarded junk news, and does label forwarded content as forwarded, but this falls short of a sufficiently clear labelling requirement27. Messaging services must label all bots operating on the online communication network as such.

2)      Consent to receive broadcast messages

For all messaging services that provide a broadcasting mechanism28, services must receive clear consent from each and every user before delivering broadcast messages. Default settings for broadcast messaging could be set to ‘opt out’: users should have to manually ‘opt in’ to broadcast features, but user consent should only be required the first time an entity wishes to send a broadcast message. Services should provide an accessible and prominent means for users to withdraw the consent that has been given.

3)      Prior approval of data sharing

Any feature which requires any user data to be transmitted to the messaging service should require the prior approval of the user. 

4)      Protection of encryption

Any feature which compromises encryption should also require the prior approval of the user.

5)      Alerts to disinformation

All messaging services should include as part of their applications a feature that alerts the user to the presence of disinformation on their phone. There are many options for how this could be built, but in one version (a minimal illustrative sketch follows this list):

The app would regularly download updates to thousands of "descriptions" of independently verified disinformation content and scan incoming messages for matches, much in the same way that anti-virus software works. Descriptions could be content-based hashes representing text, or digital IDs of images or video.

Once suspected matches were identified, users could be alerted, and offered corrections by independent third party fact checkers.

Users could then choose whether to share more information with the service on which they saw the disinformation, to enable its team to better track and disrupt disinformation networks, including through techniques adapted from the services’ successful anti-spam efforts.


The algorithm might optimize and tailor the descriptions downloaded to individual phones to track the most current and serious attacks.

The database of all "descriptions" of disinformation might be generated by a strongly promoted user reporting system, verified by independent third party fact checkers, or in concert with electoral or law enforcement authorities.
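As a purely illustrative sketch of this version (the hashing scheme, the database format and all names used are assumptions, not a specification), the on-device check could work broadly like signature-based anti-virus scanning:

import hashlib

def fingerprint(text: str) -> str:
    """A simple content-based hash; a real system would need more robust,
    near-duplicate-tolerant fingerprints for text, images and video."""
    normalised = " ".join(text.lower().split())
    return hashlib.sha256(normalised.encode("utf-8")).hexdigest()

# Periodically downloaded list of fingerprints of independently fact-checked
# disinformation, with the corresponding corrections (illustrative entries only).
known_disinfo = {
    fingerprint("Candidate X was convicted of fraud in 2015"):
        "Independent fact-checkers found no record of any such conviction.",
}

def scan_incoming(message_text: str):
    """Runs locally on the device, so the message itself never leaves the
    phone and end-to-end encryption is not broken."""
    correction = known_disinfo.get(fingerprint(message_text))
    if correction:
        return f"This message matches verified disinformation. Correction: {correction}"
    return None

print(scan_incoming("Candidate X was CONVICTED of fraud in 2015"))

Because the matching runs locally against downloaded fingerprints, the content of the message never needs to be transmitted to the service, which is how this approach avoids breaking encryption.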

8. To what extent does social media negatively shape public debate, either through encouraging polarisation or through abuse deterring individuals from engaging in public life? 

 

The Evidence 

 

Polling done by Avaaz in Europe, the UK, Brazil and India (attached in Annex 1) shows that there is a huge public appetite for regulating social media giants. One of the reasons for this is that most people are aware they are exposed to very harmful disinformation and hate speech that is corroding civic discourse in our societies. By distorting our ability to form a shared understanding, disinformation leaves us unable to reach consensus. Social media has the capacity to link those with common views in a way that strengthens democratic organisation, but when abused it also allows polarising minority views that promote discord to gain a volume far louder than their actual support could justify.

 

United Kingdom

 

Facebook has detailed rules both about hate speech and about dangerous individuals who are organising hate[24]. However, in his September 2019 speech on behalf of Facebook, Nick Clegg indicated a stepping back from holding politicians to the standards the company expects all other users to meet. He said:

 

Facebook has had a newsworthiness exemption since 2016. This means that if someone makes a statement or shares a post which breaks our community standards we will still allow it on our platform if we believe the public interest in seeing it outweighs the risk of harm. Today, I announced that from now on we will treat speech from politicians as newsworthy content that should, as a general rule, be seen and heard.[25]

 

We understand and support the need for open democratic debate, but it is difficult to see how Facebook’s standards on dehumanising speech could be enforced under such an exemption.

 

Facebook’s community standards also state they will “remove content that expresses support or praise for groups, leaders, or individuals involved in these activities.”  Facebook suspended the account of Tommy Robinson, who stood as a Brexit party candidate in 2018,[26] for having “repeatedly broken [community] standards, posting material that uses dehumanizing language and calls for violence targeted at Muslims.” 

 

It has also stated it will remove profiles and pages that continue to show support for and/or encourage users to follow banned figures, such as Tommy Robinson. However, during our investigation into social media abuse ahead of the European elections we uncovered examples of political groups doing exactly that. An example of one of these pages is the Pro Great Britain page[27], included in our report to Facebook, which was openly organising support for Robinson and his agenda on Facebook. This was taken down by Facebook after we identified it to them, as it expressed support or praise for groups, leaders, or individuals involved in organising hate.

 


Image recorded 18 May 2019

 

Further examples provided in the report include the “For Britain Movement,” an official political party that grew out of UKIP and its party leader, Ann Marie Waters.

 


Image recorded 18 May 2019

 

Facebook has given no guidance on what degree of political activity would enable a person to benefit from this exemption33.  Would Tommy Robinson now fall into that category?  Would his supporters also be protected?

 

The Solution

 

The solution lies in the principles we have outlined above and in the measures we recommend to Correct the Record, which we detail in the section below. We would only add in this section that transparency on page and post takedowns is crucial if the debilitating effect of online dehumanising language is to be addressed. Specifically, Avaaz recommends that all social media platforms should publish the rationale for taking down public profiles, pages or messages. Facebook should do so via the Facebook Newsroom, even in the cases of politicians whose speech it has decided not to act upon, so that its application of its own standards is transparent and the issue of hate speech is given the publicity it deserves.

 

9. To what extent do you think that there are those who are using social media to attempt to undermine trust in the democratic process and in democratic institutions; and what might be the best ways to combat this and strengthen faith in democracy? 

The issues

Disinformation and Election disruption protection 

The potential for disinformation to disrupt election processes is becoming more widely understood. As we have shown, the European elections were not free from disinformation. The European Commission and the High Representative reported in June 2019 that “we should not accept this as the new normal. Malign actors constantly change their strategies. We must strive to be ahead of them. Fighting disinformation is a common, long-term challenge for EU institutions and Member States.” 

The Evidence

Avaaz’s reporting (on Yellow Vests and on Far Right Networks of Deception) provides many examples of disinformation promoted on Facebook, but the issue applies across all social media platforms. For example, significant studies have found that Twitter is a conduit for disinformation, with just a few fake news sites pushing millions of disinformation messages to users during the 2016 US elections[28]. Twitter does have a dedicated report option that enables users to tell it about misleading tweets related to voting -- starting with elections taking place in India and the European Union -- but its definitions of disinformation are limited to voter suppression issues[29].

The Solution

Avaaz has proposed a developed and tested concept to combat disinformation and strengthen faith in democracy, namely Correct the Record. Correct the Record is a simple solution that requires platforms to notify and provide effective fact-checked corrections to each and every person who saw false or misleading content in their feed. Researchers have concluded that effective corrections can reduce and even eliminate the effects of disinformation.[30] Our solution tackles disinformation while preserving freedom of expression; there is no censorship. Just as newspapers publish corrections on their own pages and television stations on their own airwaves, platforms should provide the same service to their users.

Avaaz’s proposal to ‘Correct the Record’ has been endorsed by EU Security Commissioner Julian King, who said, “We need rapid corrections which are given the same prominence and circulation as the original fake news.” TIME magazine called Avaaz’s solution a “radical new proposal that could curb fake news on social media.”

Correct the Record is a six-step process that would be implemented by all platforms with interactive content sharing, commenting, or posting facilities. The steps are set out below, followed by a minimal illustrative sketch of the trigger and notification logic:

1.      DEFINE: The obligation to correct the record would be triggered where:

●        Independent fact checkers verify that content is false or misleading;

●        A significant number of people -- e.g. 10,000 -- viewed the content.

2.      DETECT:

●        Proactively use technology such as AI to detect potential disinformation with significant reach that could be flagged for fact-checkers;

●        Deploy an accessible and prominent mechanism for users to report disinformation;

●        Provide independent fact checkers with access to content that has reached e.g. 10,000 or more people.

 

3.      VERIFY: Platforms must work with independent, third-party, qualified fact checkers to determine whether reported content is disinformation with sufficient speed to curtail its spread.

4.      ALERT: Each user exposed to verified disinformation should be notified immediately using the platform’s most visible notification standard.

5.      CORRECT: Each user exposed to disinformation should receive a correction that is of at least equal prominence to the original content and that follows best practices, which could include:

a.      Offering a reasoned alternative explanation, keeping the user’s worldview in mind;

b.      Emphasising factual information while avoiding, whenever possible, repeating the original misinformation; 

c.      Citing endorsement by sources the user is likely to trust.

d.      For an illustration of corrections that could be deployed on Facebook, see the following page.

6.   CURTAIL: Online platforms should, within a reasonable time after receiving a report from an independent fact-checker regarding disinformation with significant reach, take steps to curtail its further spread. Such steps shall include:

●        Disabling forwarding of the disinformation content to more than one user at a time, on messaging services where applicable;

●        Labelling said content, including messages on messaging services, as containing verified disinformation, and retaining that label when the content or message is shared, forwarded, or passed along in any other manner;

●        Significantly decreasing or eliminating the prominence and reach given to the disinformation by the online platform’s algorithm; and

●        Showing the name of the content’s original creator when said content is forwarded.
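To make the trigger and notification flow concrete, the following is the minimal illustrative sketch referred to above. It assumes, as the DEFINE step implies, that both conditions must hold, and uses the 10,000-view figure given as an example in this submission; all function and field names are hypothetical rather than any platform’s API:

VIEW_THRESHOLD = 10_000  # example trigger figure used in this submission

def correction_required(verified_false_by_fact_checkers: bool, views: int) -> bool:
    """DEFINE: the obligation is triggered when independent fact-checkers have
    verified the content as false or misleading and it reached significant numbers
    of people (assumed here to mean both conditions together)."""
    return verified_false_by_fact_checkers and views >= VIEW_THRESHOLD

def notify(user_id: str, message: str) -> None:
    # Placeholder for the platform's most visible notification channel (ALERT).
    print(f"notify {user_id}: {message}")

def show_correction(user_id: str, content_id: str, correction_text: str) -> None:
    # Placeholder for an in-feed correction of at least equal prominence (CORRECT).
    print(f"correction for {user_id} on {content_id}: {correction_text}")

def correct_the_record(content_id: str, exposed_user_ids: list, correction_text: str) -> None:
    """ALERT and CORRECT: notify every user who saw the verified disinformation
    and show them an independent fact-checked correction."""
    for user_id in exposed_user_ids:
        notify(user_id, f"Content you saw ({content_id}) has been verified as false or misleading.")
        show_correction(user_id, content_id, correction_text)

if correction_required(verified_false_by_fact_checkers=True, views=12_000):
    correct_the_record("post-123", ["user-a", "user-b"], "Independent fact-checkers found this claim to be false.")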

Correct the Record would also be in line with the Cairncross Review into the sustainability of high-quality journalism. One of its recommendations was that platforms should “nudge people towards reading news of high quality.”[31]

Avaaz has commissioned an independent academic review of the efficacy of Correct the Record and will be happy to supply this as an addendum to this submission upon its publication later this year.