Network for Media and Persuasive Communication Bangor University – written evidence (DAD0019)

 

AGAINST OPACITY, OUTRAGE & DECEPTION: Towards an ethical code of conduct for transparent, explainable, civil & informative digital political campaigns

 

Authors:

 

Vian Bakir, Prof. of Journalism & Political Communication (lead contact). Email: v.bakir@bangor.ac.uk

Andrew McStay, Prof. of Digital Life. Email: mcstay@bangor.ac.uk. Both from the Network for Study of Media & Persuasive Communication, Bangor University, Wales, UK, and the Emotional AI Project.

 

1.               Summary

 

1.1              Our submission answers the following questions posed by the Select Committee: How has digital technology changed the way that democracy works in the UK and has this been a net positive or negative effect? Would greater transparency in the online campaigning of political groups improve the electoral process in the UK by ensuring accountability, and if so what should this transparency look like? What effect does online targeted advertising have on the political process, and what effects could it have in the future? Should there be additional regulation of political advertising?

 

1.2               Use of digital technologies in political campaigning presents benefits and harms to the democratic process. To derive and illustrate these, we focus on the various ‘Leave’ groups’ campaigns in the UK’s 2016 Referendum on whether to remain in, or leave, the European Union (EU). On benefits, digital political campaigning has the potential to better engage hard-to-reach parts of the electorate; and by enabling officials to tap into voters’ sentiments, it can help politicians identify issues and policies that voters care about, making politicians more responsive to electorates. However, this requires that campaigns are conducted honestly and openly; otherwise we descend into covert, attempted manipulation of electorates. Unfortunately, digital political campaigning is currently opaque, presenting many harms. It has the capacity to negatively impact citizens’ ability to make informed choices, and their ability to hold political campaigners, and those subsequently elected, to account. It increases the potential for targeted voter suppression. It enables exploitation of people’s psychological vulnerabilities; and it leads to unintended exploitation of vulnerabilities as, for instance, children become collateral recipients of online adverts targeted by behaviour rather than age. Finally, opacity increases the potential for societal polarisation through uncorrected, emotive disinformation targeted at niche audiences but polluting the entire digital media ecosystem.

 

1.3               Each electoral cycle sees the deployment of new technological and industry innovations in data-mining and targeting. We argue that we have reached a phase where opacity in the use of these profiling technologies has become problematic, and is likely to worsen with increasing use of Artificial Intelligence (AI) in political campaigning. To combat this, we need a code of ethical conduct requiring political campaigners to conduct transparent, explainable, civil and informative campaigns.

 

1.4              Recommendations to counter opacity, outrage and deception in digital political campaigning

1.4.1              Put in place (via regulatory requirements) mandatory, publicly available self-evaluations by all political campaign groups following each election and referendum. The self-evaluations should address the criteria of transparency, explainability, civility and informativeness, these criteria forming a code of ethical conduct for political campaigns.

1.4.2              Put in place an independent panel of diverse stakeholders (including fact-checkers, academics, and campaigners from opposing sides) to verify, and critically comment upon, the self-evaluations.

1.4.3              Develop a kite-mark system to brand the transparency, explainability, informativeness and civility of the campaigns, to enable comparisons between elections.

1.4.4              Ensure that the self-evaluations, and verification by the independent panel, are available online in an independent public archive to enable comparisons between elections.

1.4.5              Place in an independent, searchable public archive all micro-targeted messages deployed in any political campaign.

1.4.6              Use public information campaigns and citizenship education within schools to widen understanding of the criteria in the code of ethical conduct for political campaigns, and to help people recognise if a campaign contravenes this code.

2.               Digital technologies in political campaigning: zeroing in on the Leave campaigns

 

2.1              Use of digital technologies in political campaigns presents benefits and harms. To derive and illustrate these, we focus on the Leave campaigns in the UK’s 2016 Referendum on whether to remain in, or leave, the European Union (EU). We focus on the Leave campaigns as they have attracted the most scrutiny. They have been scrutinised by regulatory and criminal investigations in the UK, and by investigative journalists, largely because they exceeded their legal spending limits for campaigning, but also because of their role in disseminating disinformation in a manner that may have proven decisive, given Leave’s narrow margin of victory. The Leave campaigns have also been the subject of revelations from whistleblowers from the now defunct British data analytics company, Cambridge Analytica (namely, Christopher Wylie, contractor at SCL Elections and its subsidiary Cambridge Analytica, 2013-14; and Brittany Kaiser, Director of Business Development, Cambridge Analytica, 2015-18) and from the official Leave campaign, ‘Vote Leave’ (Shahmir Sanni, Vote Leave volunteer, 2016). The stated motivations of these whistleblowers are ethical: a growing disgust with the operations in which they were involved.[1]

 

2.2              ‘Vote Leave’ was the official designated campaign to leave the EU, led by then Conservative Members of Parliament (MPs), Boris Johnson and Michael Gove. There were also unofficial Leave campaigns including the youth-oriented campaign group ‘BeLeave’ fronted by Darren Grimes; the ‘Leave.EU’ group founded by Arron Banks and Richard Tice; ‘Veterans for Britain’; and ‘DUP Vote to Leave’. In many ways, Vote Leave’s digital campaign displayed features entirely commensurate with wider trends in digital political campaigning. However, the various Leave campaigns also displayed a degree of opacity that troubled regulators, and there is evidence pointing to covert, attempted digital manipulation of populations on the part of Leave.EU. Consideration of the various Leave campaigns allows us to pinpoint the potential benefits and harms to democracy from increasingly granular digital political campaigns, as well as what should be done about this.

 

2.3              Elsewhere, we have detailed the emotive, deceptive, targeted digital information flows in the Leave campaigns.[2] In brief, after winning the EU Referendum, Vote Leave’s campaign director, Dominic Cummings, proclaimed the potency of Vote Leave’s message on: ‘350m / NHS / Turkey’.[3] Respectively, these messages were that: the UK was spending £350 million a week on the EU, which it could spend on the National Health Service (NHS) if it left the EU; that Turkey, Macedonia, Montenegro, Serbia and Albania were about to join the EU; and that immigration could not be reduced unless the UK left the EU, thereby taking back control of its own destiny. These were the messages in its Facebook adverts seen by the most people. Such messages are emotive, invoking fear of hordes of immigrants swamping much cherished, but strained, national resources such as the NHS. Certainly, immigration was a key issue for voters.[4] Such messages are also deceptive, as seen in post-referendum fact-checks of Vote Leave’s messages on ‘£350 million’ and ‘Turkey’.[5] Furthermore, Vote Leave’s campaign director, Cummings, made a show during the campaign of refusing to work with Arron Banks (of Leave.EU – one of the unofficial Leave campaigns) while admitting that his campaign relied on their harsh anti-immigration messages.[6] Indeed, in providing testimony to the UK’s Inquiry into Disinformation and Fake News, Banks highlights the methods with which Leave.EU campaigned: ‘My experience of social media is it is a firestorm that, just like a brush fire, it blows over the thing. Our skill was creating bush fires and then putting a big fan on and making the fan blow’.[7] Banks described the issue of immigration as one that set ‘the wild fires burning’.[8] As reported in The New Yorker: ‘A typical Leave.EU post on Facebook warned voters that “immigration without assimilation equals invasion”’.[9] Moreover, an investigation by Channel 4 News in 2019 revealed that Leave.EU was behind a fake video that went viral, garnering hundreds of thousands of views on Facebook. The video, published by Leave.EU as an ‘undercover investigation’ on Facebook, purported to show how easy it is to smuggle migrants into the UK from across the English Channel. Debunking the video several years later, Channel 4 News obtained satellite data showing that the footage was filmed in reverse, and that the apparent footage of ‘migrants’ entering the UK was actually filmed before the boat had even left British waters.[10]

 

2.4              One might argue that citizens are used to deceptive, emotive political campaigns, and that campaigns such as those conducted by Leave do not merit undue social or political concern. After all, on emotive communication, as far back as classical Greek democracy, Aristotle (writing in the 4th century BC) recognised the importance of affect in persuasive communications.[11] Writing in the 21st century, the influential psychologist Westen argues that issues that arouse emotions have the biggest impact on voting and voter mobilisation: such issues tend to be contentious ones.[12] On deception in political campaigns, while deception deprives people of the information that they need to make an informed decision, multiple studies document its long-standing use.[13] Neither is digital targeting of voters new, as described below. However, each electoral cycle sees deployments of technological and industry innovations in data-mining and targeting.[14] We argue that we have reached a phase where opacity in the use of these profiling technologies has become problematic, and that this is likely to get rapidly worse with increasing use of AI in political campaigning. But how did we get here?

 

 


3.               The rise of digital marketing techniques in political campaigns

3.1              Across the past decade, political campaigning has increasingly relied on digital marketing techniques to supplement its traditional focus on demographic market segmentation, opinion polling, targeted campaigning and direct marketing.[15] For decades in the USA and UK, opinion polling allowed parties to merge broad demographic data with psychographic insights on how to craft messages that resonated with large parts of the population. In the UK, more targeted campaigning and direct mail developed in the late 1970s and early 1980s.[16] Political campaigns now combine public voter files with commercial information from data brokers to develop detailed, comprehensive voter profiles.[17] This practice rose to prominence in the US with Barack Obama’s 2008 presidential campaign[18] and has been increasing in the UK since 2015.[19]

3.2           Core features of the use of digital marketing techniques by political campaigns are as follows.

 

3.2.1              Increased spending on digital advertising. The proportion of money that UK political campaigners have reported spending on digital advertising as a percentage of their total advertising spend rose sharply in 2015 (2% in 2014, but 24% in 2015). In the 2015 General Election, which the Conservative Party won, its spending on Facebook ads was far higher than its rivals’: it spent £1,209,593 on Facebook ads in the 12 months leading up to the General Election, compared to the Labour Party’s £16,500. The Conservative Party also spent £312,033 on Google, compared to Labour’s £179.[20] The declared spending of UK campaigners on digital advertising has since maintained an upward trend: 32% in 2016, when the EU Referendum was held, and 43% in 2017, when a UK Parliamentary election was held.[21] In the 2017 General Election, which the Conservative Party also won (although with a reduced majority), its spending on Facebook ads was again higher than its rivals’: it spent £2,118,045 on Facebook ads in the 12 months leading up to the General Election, compared to the Labour Party’s £577,270. The Conservative Party also spent £562,154 on Google, compared to Labour’s £254,516.[22] These figures are an underestimate of the importance of digital campaigning, as they do not include organic growth through re-tweets and re-posts, or free exposure via campaigners’ and activists’ social media activity.

3.2.2              Increased use of data analytics[23] and data management approaches in order to profile[24] and thereby identify target audiences, including the ‘persuadables’ and swing voters. The amount of money that UK political campaigners have reported spending on advertising/data companies and on consultants/strategists has also increased since 2015, with the Conservatives greatly outspending Labour in both the 2015 and 2017 General Elections. In the 2015 General Election, the Conservative Party spent £384,417 on advertising and data companies compared to Labour’s £150,654; and the Conservative Party spent £429,099 on consultants and strategists compared to Labour’s £223,579. In the 2017 General Election, the Conservative Party spent £481,254 on advertising and data companies compared to Labour’s £300,000; and the Conservative Party spent £822,693 on consultants and strategists compared to Labour’s £337,134.[25] Notably, both Labour and the Conservatives spent substantially more on Experian in 2017 than in 2015 – a company that sells data and profiles on individuals and groups, and that helps match existing individual profiles so that citizens can be targeted with the same message across multiple channels.[26] Such developments led to the UK Information Commissioner’s Office report, Investigation into the use of data analytics in political campaigns, which highlights many areas of concern. These include: the purchasing of marketing lists and lifestyle information from data brokers without sufficient due diligence around those brokers and the degree to which the data has been properly gathered and consented to; use of third-party data analytics companies with insufficient checks that those companies have obtained correct consents for use of data for that purpose; and provision of contact lists of members to social media companies without appropriate fair processing information, and collation of social media data with membership lists without adequate privacy assessments.[27] The Information Commissioner’s Office accordingly issued formal warnings to 11 political parties (Conservatives, Labour, Lib Dems, Greens, SNP, Plaid Cymru, DUP, Ulster Unionists, Social Democrat, Sinn Féin and UKIP), indicating that problematic use of data analytics in political campaigns is widespread in the UK.[28]
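
To make the cross-channel matching described above concrete, below is a minimal, illustrative Python sketch of the generic ‘hashed identifier’ matching pattern used across the advertising industry, in which a membership or marketing list is joined to an ad platform’s audience by hashing a shared identifier such as an email address. All names, addresses and mappings are hypothetical; this is not a description of Experian’s, or any party’s, actual systems.

```python
import hashlib

def normalise_and_hash(email: str) -> str:
    """Hash a trimmed, lowercased email so two lists can be matched
    without exchanging raw addresses (the common 'custom audience' pattern)."""
    return hashlib.sha256(email.strip().lower().encode()).hexdigest()

# Hypothetical membership list held by a party.
membership_list = ["alice@example.org", "bob@example.org"]

# Hypothetical ad-platform audience, keyed by the same hash.
platform_audience = {normalise_and_hash("alice@example.org"): "platform-user-123"}

# Members found on the platform can then be shown the same message there
# as they receive by direct mail or email.
matched = [e for e in membership_list
           if normalise_and_hash(e) in platform_audience]
print(matched)  # ['alice@example.org']
```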

3.2.3              Use of iterative, large-scale, rapid testing of adverts online to identify and deploy the most persuasive; and to target different audiences online with tailored messages. Reports from campaign insiders in the USA (a country with the longest history of digital political campaigning) show exponential increases in ‘A/B’ testing experiments across the past decade. According to Dan Siroker, Director of Analytics for the 2008 Obama campaign, in 2007 Obama’s campaign utilised digital platforms to incorporate ‘A/B’ testing experiments into its methods. This involved creating multiple versions of a message to be delivered separately to randomly selected control groups, in order to measure the results in real time and quickly integrate them into delivery as the winning version became the message.[29] Through such experimentation, the Obama campaign came to predominantly feature Obama’s family in much campaign material, along with the button ‘Learn More’.[30] By the 2012 US presidential election, Obama’s digital team ran 500 A/B tests on their web pages, which increased donation conversion by 29% and sign-up conversions by 161%.[31] By the 2016 US presidential election, Brad Parscale, Trump’s digital campaign manager, claimed that his team typically tested around 50,000-60,000 ad variations a day: an exponential increase compared to Obama in 2012.[32] Trump’s campaign utilised Facebook’s tool, Dynamic Creative, to use predefined design features of an advert to construct thousands of variations of the advert, present them to users, and find optimal combinations based on engagement metrics.[33]
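
The A/B-testing loop described above can be sketched in a few lines. The Python below is a minimal sketch on our own assumptions (a 50/50 random split and simulated conversions), not a reconstruction of any campaign’s actual tooling.

```python
import random

# Tallies for two hypothetical message variants.
variants = {"A": {"shown": 0, "converted": 0},
            "B": {"shown": 0, "converted": 0}}

def conversion_rate(name: str) -> float:
    v = variants[name]
    return v["converted"] / v["shown"] if v["shown"] else 0.0

# Simulate delivery: each user is randomly assigned a variant, and a
# conversion (click, sign-up, donation) is recorded. The underlying
# rates (10% vs 12%) are invented for illustration.
for _ in range(10_000):
    name = random.choice(["A", "B"])
    variants[name]["shown"] += 1
    if random.random() < (0.10 if name == "A" else 0.12):
        variants[name]["converted"] += 1

# The better-measured variant becomes the message going forward.
best = max(variants, key=conversion_rate)
print(best, conversion_rate(best))
```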

4.               Core features of Vote Leave’s digital political campaign: a microcosm of wider trends

 

4.1              In the EU Referendum, Vote Leave’s digital campaign displayed the above three core features, making it commensurate with wider trends in digital political campaigning.

4.1.1               The first core feature is increased spending on digital advertising. Vote Leave’s campaign director, Dominic Cummings, states that Vote Leave spent 98% of its budget on digital advertising (rather than mainstream media advertising),[34] with most spent on adverts that experiments had demonstrated were effective.[35]

4.1.2              The second core feature is increased use of data analytics and data management approaches in order to profile and thereby identify target audiences, including the ‘persuadables’ and swing voters. Vote Leave spent £2,901,566 on data-driven platforms, advertising/data companies, and consultants/strategists in the EU Referendum – over £1 million more than the official Remain group, Britain Stronger In. By far the largest element of Vote Leave’s outlay was on the Canadian digital advertising web and software development company AggregateIQ (see Table 1).

Table 1: Leave and Remain campaign groups' spending on data-driven platforms, advertising/data companies and consultants/strategists in the EU Referendum (2016)

[Table image not reproduced here.]

Source: Macintyre, A., Wright, G. & Hankey, S. (n.d.), p.18

Indeed, Cummings explains that, given Vote Leave’s many campaigning disadvantages (an inability to control the referendum’s timing, the fact that Vote Leave bucked the government’s wish and the status quo, and that British broadcasters were then pro-‘Remain’), Vote Leave heavily relied on data scientists.[36] Cummings claims that Vote Leave innovated ‘the first web-based canvassing software that actually works properly in the UK and its integration into world-leading data science modelling to target digital advertising and ground campaigning’.[37] Vote Leave created a database, the Voter Intention Collection System, which aggregated personal data on potential voters from social media, advertising, website activity, apps, canvassing, direct mail, polls, fundraising and activist feedback.[38] Vote Leave employed AggregateIQ to build a ‘core audience’ for Vote Leave’s adverts, by first identifying the social media profiles of those who had already ‘liked’ Eurosceptic pages on Facebook. Vote Leave advertised to this core audience to bring them onto its website, where they would be invited to add their details to its database. AggregateIQ also used Facebook’s ‘Lookalike Audience Builder’, which applied demographic features identified by Facebook in the ‘core audience’ group to the wider UK population. This second group (the ‘persuadables’) consisted of 9 million people on Facebook whom Facebook identified as having the same demographic features as the core audience, but who had not previously expressed interest in Eurosceptic content by ‘liking’ Eurosceptic pages.[39] Also, having identified from focus groups that swing voters were confused, and liable to change their voting decision based on which side of the campaign they had last seen a message from, Vote Leave ensured that a Vote Leave advert was delivered to swing voters as late as possible in the campaign.[40]
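
Facebook’s Lookalike Audience Builder is proprietary, but the underlying idea, scoring the wider population by similarity to the average profile of a seed (‘core’) audience, can be sketched minimally as follows. All profiles, features and users here are hypothetical; this illustrates the concept only, not Facebook’s actual algorithm.

```python
from statistics import mean

def centroid(profiles):
    """Average each numeric feature (e.g. age, interest scores) over the seed audience."""
    return [mean(feature) for feature in zip(*profiles)]

def similarity(a, b):
    """Negative squared distance: higher means more alike."""
    return -sum((x - y) ** 2 for x, y in zip(a, b))

core = [[34, 0.9, 0.2], [41, 0.8, 0.1]]                      # hypothetical seed profiles
population = {"u1": [38, 0.85, 0.15], "u2": [22, 0.1, 0.9]}  # hypothetical wider population

c = centroid(core)
# Rank the wider population by resemblance to the seed audience; keep the top matches.
lookalikes = sorted(population, key=lambda u: similarity(population[u], c), reverse=True)[:1]
print(lookalikes)  # ['u1']: the most core-audience-like user
```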

4.1.3              The third core feature is use of iterative, large-scale, rapid testing of adverts online to identify and deploy the most persuasive; and to target different audiences online with different, tailored messages. Via targeted digital advertising, Vote Leave sympathisers were invited to click on an advert that took them to Vote Leave’s website, where they were asked to provide personal details (populating Vote Leave’s database), donate, share messaging, or volunteer time. At each step, messages were tested on an iterative basis: those that failed to convince enough people to move to the next step were re-worked until a success threshold was reached.[41] Cummings estimates that Vote Leave ran around one billion targeted adverts in the run-up to the vote, mostly via Facebook, sending out multiple different versions and testing them in interactive feedback loops.[42] For instance, while funding the NHS from money saved by leaving the EU was identified as a core message to be promoted, other ads were trialled suggesting that the money be spent on storm defences in flooded York, or on education. As such, the ads varied greatly in reach. According to Facebook’s data, some only garnered 0-999 impressions each. Commonly listed ranges were 50,000-99,999 and 100,000-199,999. Nine adverts were viewed 2M-4.9M times.[43]
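
The threshold-driven, iterative re-working described above can be sketched schematically. In the Python below, the ‘measurement’ is simulated and the ‘rework’ step is a placeholder; a real campaign would plug in live ad metrics and new creative at those two points.

```python
import random

def measured_conversion(message: str) -> float:
    """Simulated stand-in for the observed conversion rate of a live test."""
    return random.uniform(0.0, 0.25)

def rework(message: str) -> str:
    """Placeholder for a creative rework (new image, headline or framing)."""
    return message + " (reworked)"

def optimise_step(message: str, threshold: float = 0.2, max_rounds: int = 20) -> str:
    """Re-test and rework a message until it clears the step's success threshold."""
    for _ in range(max_rounds):
        if measured_conversion(message) >= threshold:
            return message          # success threshold reached: deploy this version
        message = rework(message)
    return message                  # best effort after max_rounds

print(optimise_step("Hypothetical campaign message"))
```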

 

4.2              While Vote Leave’s digital campaign displayed features entirely commensurate with wider trends in digital political campaigning, it also operated with a concerning level of opacity on several fronts. There is opacity regarding the various Leave campaigns’ interconnectedness, and use of closely related political digital campaigning companies, raising questions about whose data-sets were used to enable political targeting of the British population;[44] and there is evidence pointing to covert, attempted digital manipulation of the UK electorate on the part of Leave.EU.[45] These are discussed below.

 

 

5.               A concerning level of opacity in the Leave campaigns

 

5.1              The Leave campaigns operated with a level of opacity that attracted regulatory concern. They have been scrutinised by the UK Inquiry into Disinformation and Fake News, the Electoral Commission, the Information Commissioner’s Office and UK police, with some investigations ongoing at the time of writing (September 2019). There remain unanswered questions about which data companies worked on the various Leave groups’ campaigns, and what data was shared between them. What is confirmed is that AggregateIQ (a Canadian digital advertising web and software development company) worked for Vote Leave, as well as for BeLeave, Veterans for Britain and DUP Vote to Leave.[46] What is less clear is the relationship between AggregateIQ, SCL Elections/Cambridge Analytica (a British data analytics company),[47] and the various Leave campaigns.

 

5.2              UK electoral law sets limits on the amount of money that campaigners can spend on campaign activity during a ‘regulated period’ before elections and referendums (the regulated period before the EU Referendum was 10 weeks). The spending limit for the official designated campaign group is £7m. Separate campaigns can spend up to £700,000 in excess of that, as long as they do not coordinate their efforts. If election campaigns coordinate, then they have to declare their spending together. In May 2018, the Chief Operating Officer of AggregateIQ, Jeff Silvester, told the Inquiry into Disinformation and Fake News that the four Leave campaigns for which AggregateIQ completed work (Vote Leave, BeLeave, Veterans for Britain, DUP Vote to Leave) had approached AggregateIQ independently of each other.[48] However, by July 2018, the Electoral Commission (the independent body which oversees elections and regulates political finance in the UK) found that Vote Leave and BeLeave had acted under an undeclared common plan, for which they both relied on the services of AggregateIQ. BeLeave spent more than £675,000 with AggregateIQ under this common plan. This spending should have been declared by Vote Leave. It meant Vote Leave’s referendum spending was £7,449,079, exceeding its statutory spending limit of £7m. Also, as an unregistered campaigner, BeLeave exceeded its spending limit of £10,000 by more than £666,000.[49] The UK Information Commissioner’s Office further clarified that Vote Leave and BeLeave used the same data set to identify audiences and select targeting criteria for ads.[50]

 

5.3              What is less clear is the relationship between AggregateIQ, Cambridge Analytica/SCL Group, and the various Leave campaigns. By November 2018, the Information Commissioner’s Office had established that AggregateIQ had worked for SCL Elections to develop a political customer relationship management tool (‘Ripon’) for use during the US 2014 midterm elections; that AggregateIQ placed online advertisements for SCL Elections in autumn 2014; and that AggregateIQ worked with SCL Elections on similar software development, online advertising and website development during the US presidential primaries between 2015 and 2016. However, the Information Commissioner’s Office concluded that it found no evidence of unlawful activity in relation to the personal data of UK citizens and AggregateIQ’s work with SCL Elections; that it found no evidence that SCL Elections and Cambridge Analytica were involved in any data analytics work with the EU Referendum campaigns; and that these findings had been confirmed by the federal Office of the Privacy Commissioner of Canada in its separate investigation into AggregateIQ.[51] Unconvinced, the Inquiry into Disinformation and Fake News, in its Final Report of February 2019, observed that AggregateIQ had already used Facebook data scraped by a third-party app to target voters in the US election, and that AggregateIQ had the capability to email potential voters during the EU Referendum and also to target people via Facebook.[52] The Inquiry concludes:

 

There is clear evidence that there was a close working relationship between Cambridge Analytica, SCL and AIQ [Aggregate IQ]. There was certainly a contractual relationship, but we believe that the information revealed from the repository[53] would imply something closer, with data exchanged between both AIQ and SCL, as well as between AIQ and Cambridge Analytica. [54]

 

5.4              There are also questions as to whether Cambridge Analytica/SCL Group worked for the Leave.EU campaign. Arron Banks (Leave.EU group founder) submitted evidence to the UK Inquiry into Disinformation and Fake News that showed that Cambridge Analytica/SCL Group had prepared a detailed pitch to Leave.EU to help make the case to the UK’s Electoral Commission that Leave.EU should be the official campaign group for Leave.[55] Part of this pitch offered voter suppression. The pitch claims that its ‘powerful predictive analytics and campaign messaging capacity can help you to segment and message the population according to a range of criteria’. One of these criteria is ‘Partisanship’. As well as describing the ‘General Voter’ and ‘Ideological Voter’, it describes the ‘Opposition Voter – groups to dissuade from political engagement or to remove from contact strategy altogether’.[56] Banks maintains that Leave.EU did not take Cambridge Analytica’s pitch forward,[57] and since at least March 2017 has publicly denied Cambridge Analytica’s involvement in the Leave.EU campaign.[58] This is despite archived documents leading up to the EU Referendum where Leave.EU publicly presented Cambridge Analytica as working for them.[59] Furthermore, in July 2019, Cambridge Analytica whistleblower Brittany Kaiser supplied ten documents to the Digital, Culture, Media and Sport Committee that showed that Cambridge Analytica did work for Leave.EU on the EU Referendum. This included an analysis of UKIP[60] membership data and survey results to model four key groups of persuadable UK voters to be targeted with Leave.EU messaging: the Eager Activist, Young Reformers, Disaffected Tories and Left Behinds. For instance, ‘Left Behinds’ are described as follows:

-  Feels increasingly left behind by society and globalisation

-  Unhappy with the economy and the NHS, but immigration is most important issue

-  Suspicious of the establishment, including politicians, banks and corporations

-  Worried about their economic security, deteriorating public order and the future generally.[61]

Kaiser’s revelations led to calls from the Digital, Culture, Media and Sport Committee to the Electoral Commission to re-open its investigation into Leave.EU.[62]

5.5              Whether or not Cambridge Analytica officially worked for the Leave.EU campaign, it appears to have influenced the Leave.EU campaign. In her 2019 submission to the UK Inquiry into Disinformation and Fake News, Kaiser argues that Cambridge Analytica completed chargeable work for UKIP and Leave.EU;[63] and that datasets and analysed data processed by Cambridge Analytica as part of a Phase 1 payable work engagement were later used by the Leave.EU campaign without Cambridge Analytica’s further assistance.[64] This aligns with a statement provided by Andy Wigmore (former Director of Communications, Leave.EU) two years earlier, when interviewed by Emma Briant (on 4 October 2017). Wigmore states that Leave.EU copied some of the things that Cambridge Analytica told them about how to target people, but used actuaries working for Arron Banks’s insurance company, Eldon Insurance, to do so. This involved determining ‘the areas that were most concerned about the EU and we got that from our own actuaries. We had - we have four actuaries which we said right, tell us what this looks like from our data and they’re the ones that pinpointed the twelve areas in the United Kingdom that we needed to send Nigel Farage to’.[65]

5.6               To summarise, AggregateIQ (a Canadian digital advertising web and software development company) worked for Vote Leave and BeLeave in a coordinated way, sharing data to identify audiences and select targeting criteria for ads; it also worked separately for the Leave campaigns Veterans for Britain and DUP Vote to Leave.[66] AggregateIQ worked closely with SCL Elections (an entity synonymous with Cambridge Analytica) on US elections across 2014-16. Cambridge Analytica influenced the Leave.EU campaign by undertaking profiling work (of persuadable UK voters to be targeted with Leave.EU messaging) in order to produce a convincing pitch for business, although it remains unproven whether this work was paid for, or merely copied, by Leave.EU.

6.              ‘Psy-ops’: deceptive, covert influence in the Leave.EU campaign

 

6.1              As well as opacity regarding the various Leave campaigns’ interconnectedness and use of closely related political digital campaigning companies, the targeting techniques used by the Leave.EU campaign bear a striking resemblance to military-level psychological operations (‘psy-ops’) in their intention to influence target audiences through deceptive and covert means. This is discussed below, with reference to Cambridge Analytica/SCL Group’s Target Audience Analysis methodology, and the efficacy of psychographic targeting with big data (a key part of Cambridge Analytica’s claims).

 

6.2               Target Audience Analysis methodology

 

6.2.1              It is instructive that a core methodology utilised by SCL Group, the Target Audience Analysis methodology, was considered ‘weapons grade communications tactics’ until 2015, according to Cambridge Analytica whistleblower Brittany Kaiser.[67] A NATO Joint Warfare Centre article (written by a PsyOps specialist in 2015) describes Target Audience Analysis as a military psy-ops tool used to identify influential target audiences and influence them in order to change their behaviour, and to model different interventions in this desired behaviour change. The NATO article notes that SCL Group had spent over $40 million and 25 years developing this group behaviour prediction tool.[68]

 

6.2.2              Indeed, the Target Audience Analysis (TAA) methodology was an integral part of the pitches that Cambridge Analytica/SCL Group prepared for Leave.EU. For instance, one pitch states:

TAA begins with the collection of qualitative data through interviews and focus groups, which allow us to identify key issues and political attitudes that will be tested during the subsequent quantitative phase. …This data is then analysed by our in-house team of data scientists and statisticians to define Target Audience Profiles, which are descriptions of population segments that can be grouped together based on their shared characteristics. …Each of these profiles will outline the views and motivating factors driving behaviour amongst group members, and will also outline the messaging strategies most likely to be effective in influencing them to support the Leave.EU campaign.

The end result of this process is a comprehensive plan for influencing voters likely to be receptive to Leave.EU’s positions and messages.[69]

 

6.2.3              To summarise, Target Audience Analysis is a psy-ops tool used by the military to identify influential target audiences and influence them in order to change their behaviour, and to model different interventions in this desired behaviour change. It was presented by Cambridge Analytica/SCL Group as an integral part of their services when they pitched for Leave.EU’s business. Part of this pitch offered voter suppression.

 

6.3              Cambridge Analytica’s psychographic targeting with big data: actions, claims, efficacy

 

6.3.1               Across 2014, through a third-party app, Cambridge Analytica went to great lengths to secretly and deceptively harvest Facebook data (approximately 87 million Facebook profiles from around the world, including US and UK Facebook users). The data collected included users’ Facebook User ID, which connects individuals to their Facebook profiles, as well as other personal information such as their gender, birthdate, location and their Facebook friends list.[70] Cambridge Analytica collected this data in order to train an algorithm that then generated personality scores for the app users and their Facebook friends. Cambridge Analytica subsequently matched these personality scores with US voter records in order to profile voters for further targeted advertising in the 2016 US presidential election campaign.[71] By November 2018, the UK’s national data regulator, the Information Commissioner’s Office, noted that breaches by Cambridge Analytica were so serious (e.g. breaches of principle one of the UK’s Data Protection Act 1998, for unfairly processing people’s personal data for political purposes, including purposes connected with the 2016 US presidential campaigns) that it would have issued a ‘substantial fine’ had the company not already gone into administration, and that it was pursuing criminal prosecution over Cambridge Analytica’s data misuse.[72] By July 2019, the US Federal Trade Commission had filed an administrative complaint[73] against Cambridge Analytica for its deceptive harvesting of Facebook users’ personal information.[74] Why would Cambridge Analytica take such risky steps?

 

According to Cambridge Analytica, psychographic profiling techniques combined with big data analysis enable marketers to understand the personality of the people being targeted, in order to tailor messages for them. Cambridge Analytica whistleblower Christopher Wylie told the UK Inquiry into Disinformation and Fake News:

If you can create a psychological profile of a type of person who is more prone to adopting certain forms of ideas, conspiracies for example, and you can identify what that person looks like in data terms, you can then go out and predict how likely somebody is going to be to adopt more conspiratorial messaging and then advertise or target them with blogs or websites or what everyone now calls fake news, so that they start seeing all of these ideas or all of these stories around them in their digital environment. They do not see it when they watch CNN or NBC or BBC, and they start to go, ‘Well, why is it that everyone is talking about this online? Why is it that I’m seeing everything here but the mainstream media isn’t talking about how Obama has moved a battalion into Texas because he’s planning on staying for a third term? I keep seeing that everywhere, but why isn’t the mainstream media talking about that?’

Not everyone is going to adopt that, so the advantage of using profiling is that you can find the specific group of people who are more prone to adopting that idea as your early adopters.[75]

The question arises: does psychographic targeting really work, or is it snake oil?

 

6.3.3              Psychographics has been used in marketing research since World War II. Its early forms comprised quantitative attempts to correlate consumer behaviour with scores obtained from standardised, objective personality inventories, and qualitative studies of consumers’ motivations. Blending these two traditions, psychographic research emerged in the 1960s. It attempts to move beyond demographics into areas such as personality traits, activities, interests, opinions, needs, values and attitudes, in order to offer novel insights into large, representative samples of respondents.[76]

 

6.3.4              Since then, the rise of big data analysis and modelling has enabled access to psychological characteristics and political inferences far beyond the reach of traditional databases.[77] For instance, in 2013, Kosinski, Stillwell and Graepel showed that Facebook ‘likes’ are enough to accurately predict many personal attributes, including political views and personality traits.[78] While still a young field, an increasing body of research agrees that digital footprints can be used to predict the five-factor model of personality traits (the generally accepted model of personality): the ‘Big Five’ personality traits of Openness, Conscientiousness, Extroversion, Agreeableness and Neuroticism (OCEAN).[79] In 2018, a rigorous, multidisciplinary meta-analysis of 16 relevant studies, conducted to determine the predictive power over the Big Five personality traits of digital footprints automatically collected from social media, confirmed that these personality traits ‘can be inferred using digital footprints extracted from social media with remarkable accuracy. The ability to make distinct but similarly accurate predictions of Big 5 traits allows for the identification of social media users with different personality profiles’.[80] The meta-analysis also indicates that prediction accuracy for each trait is stronger when more than one type of digital footprint is analysed. Such findings point to the capacity for covert behaviour change campaigns by those with access to multiple data streams. The study suggests that accurate predictions of the Big Five personality traits could make it possible to profile individuals, and to tailor adverts automatically displayed in individual users’ profiles based on personality. Cue Cambridge Analytica.
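
As a toy illustration of the kind of model the meta-analysis describes, the Python sketch below (assuming scikit-learn and NumPy are available) fits a ridge regression from a user-by-‘likes’ matrix to self-reported trait scores, then predicts the trait for a new user from their likes alone. All data here is fabricated for illustration; the cited studies use thousands of users and likes, and proper cross-validation.

```python
import numpy as np
from sklearn.linear_model import Ridge

# Rows: users. Columns: whether each user 'liked' a given page (1/0). Fabricated.
likes = np.array([[1, 0, 1, 1],
                  [0, 1, 0, 1],
                  [1, 1, 1, 0],
                  [0, 0, 1, 1]])

# Self-reported Openness scores for the same users (e.g. from a survey app). Fabricated.
openness = np.array([4.2, 2.9, 4.8, 3.5])

model = Ridge(alpha=1.0).fit(likes, openness)

# Predict the trait for an unseen user from their digital footprint alone.
new_user = np.array([[1, 0, 1, 0]])
print(model.predict(new_user))
```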

 

6.3.5              In 2016, Alexander Nix (then CEO of Cambridge Analytica) claimed that big data analytics allow campaigners to know what sort of persuasive message needs to be delivered, on what issue, nuanced to what personality types, and to what group of people, or even individual, before the act of creating that message begins: ‘If we wanted to drill down further, we could resolve the data to an individual level, where we have something close to four to five thousand data points on every adult in the United States’.[81] The secret sauce for contemporary psychographic profiling and targeting, then, is big data analysis: the possession of multiple data points on each person. In her testimony to the UK’s Inquiry into Disinformation and Fake News, former Cambridge Analytica employee Brittany Kaiser mentions that the tool that Cambridge Analytica developed used far more data than that used by Target Audience Analysis (see Section 6.2): ‘The interesting thing is that the target audience analysis uses a lot less data than psychographic micro-targeting, which is what we use in the United States’.[82]

 

6.3.6              Of course, the efficacy of such attempts has yet to be categorically demonstrated. Cambridge Analytica may have been selling snake oil. Even the rigorous meta-analysis (described in Section 6.3.4) bases its conclusions on just 16 independent psychology studies, most of which are limited to English- or Chinese-speaking users. Nonetheless, whether or not psychographic targeting with big data is forging new and effective means of manipulating behaviours, the level of opacity is of concern on a number of fronts. It is concerning that profiling and targeting techniques that appear to have been used by the Leave.EU campaign have psy-ops-like features. A core psy-ops methodology (Target Audience Analysis) designed to covertly generate behaviour change in target populations by the military was developed by SCL Group – an entity synonymous with Cambridge Analytica (see Section 6.2). Cambridge Analytica/SCL offered its services to Leave.EU, including targeted voter suppression. As part of its pitch to Leave.EU, it used UKIP membership data and survey results to model four key groups of persuadable UK voters to be targeted with Leave.EU messaging. Leave.EU appears to have copied some of the things that Cambridge Analytica told them about how to target people, but used actuaries working for Arron Banks’s insurance company, Eldon Insurance, to do so (see Section 5.5). It is also concerning that opacity remains regarding the various Leave campaigns’ interconnectedness and use of closely related digital political campaigning companies, and regarding the associated questions of whose data-sets were used to enable further political targeting of the British population (see Section 5.3). Given Cambridge Analytica’s past deceptive practices (see Section 6.3.1), these concerns should be taken seriously.

7.               AI in political campaigning: highly targeted, emotionally optimised (dis)information

 

7.1              As a result of concerns around the growing use of these opaque, potentially powerful techniques, the UK Information Commissioner’s Office commissioned a report in 2018 on the future of political campaigning. The report predicts that current practices in the use of big data analysis, targeted advertising and psychographics are likely to be intensified as AI increasingly enters political communications.[83] For instance, on targeting, AI is likely to be increasingly used to optimise campaigns: to work out exactly who should be targeted, when, and with what content, in order to maximise persuasive potential. Ultimately, this process could be automated to programmatically generate streams of personalised messages targeted at each voter, constantly updated based on A/B testing. Psychographics, or similar techniques, are likely to be increasingly grounded in big data, producing insights on voters’ personality types, emotional states, moods and views on divisive issues like immigration. In addition to today’s widespread use of sentiment analysis of social media, McStay documents the rise of new data streams about emotions, in what he terms ‘emotional AI’, where technologies aim to read and react to emotions through text, voice, computer vision and biometric sensing.[84] For instance, Beyond Verbal scans for signals in a speaker’s voice that indicate emotional states, and Affectiva uses facial coding to glean users’ micro-reactions and emotions. Both companies claim to offer more authentic insight into behaviour and disposition than text-based expression allows. Increasingly, legacy technology companies such as Microsoft, Amazon and Facebook are becoming much more active with emotional AI products.[85]
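
Text sentiment analysis, the simplest of these emotion-sensing channels, can be illustrated with a minimal lexicon-based scorer (voice and facial coding require specialised models). The word lists below are invented for illustration and bear no relation to any commercial product.

```python
# Illustrative (invented) sentiment lexicons.
POSITIVE = {"hope", "proud", "win", "secure"}
NEGATIVE = {"fear", "invasion", "crisis", "swamped"}

def sentiment(text: str) -> float:
    """Score in [-1, 1]: share of positive minus negative words."""
    words = text.lower().split()
    if not words:
        return 0.0
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    return (pos - neg) / len(words)

print(sentiment("take back control or face a migrant crisis"))  # negative score
```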

 

7.2              It seems unlikely that this array of techniques would be employed simply for wholly truthful, informative campaigns that civilly inform voters of a party’s policies and platforms. Herein, we suggest, lies another use of AI in political campaigning: to automatically generate and disseminate emotive, engaging disinformation at scale.[86] AI is already used to spread deception at scale (e.g. via amplification by bots)[87] as well as to create deceptive, realistic ‘deepfake’ audiovisuals of famous people (e.g. political leaders).[88] Of concern is that unscrupulous campaigners will use AI to ‘feel into’ the desires and fears of individuals and groups to create, as well as spread, targeted, emotive falsehoods at scale.[89] This could rapidly lead to large-scale pollution of the digital environment, as purveyors of emotionally nuanced disinformation trigger bigger cascades of misinformation.

 

7.3              On the contagious aspects of emotion, McStay, in his discussion of the generalised rise of emotional AI, draws attention to the social mechanics of empathy and ‘contagion’.[90] He highlights classic writings on emotional contagion in crowds, characterised by features such as Durkheim’s ‘collective effervescence’[91] and Canetti’s manipulation, eruption, mobs, demolition and transcendence.[92] Against this backdrop, McStay notes the import of Facebook’s mood study of online sentiment, in which Facebook secretly tweaked and optimised 689,003 people’s News Feeds for a week in January 2012 in order to understand ‘emotional contagion’.[93] The mood study involved showing some Facebook users content in their News Feeds that had a greater number of positive words, and showing other Facebook users content deemed sadder than average. After the week of exposure to either more positive or more negative content, manipulated users within the experiment were more likely to post especially positive or negative status messages respectively: this shows the power of online emotional contagion. The Facebook mood study also found that when the experimenters reduced both positive and negative content (making News Feeds lacklustre), people reduced the overall amount they posted: this shows the importance of emotionality in engaging Facebook users.
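
On our reading of the published study, its design can be sketched schematically as follows: users are randomly assigned to arms, and each arm’s News Feed has a share of positive (or negative) posts silently withheld. The toy scorer and the 50% omission rate below are illustrative stand-ins, not the study’s actual parameters.

```python
import random

POSITIVE = {"great", "happy", "proud"}   # toy lexicons, for illustration only
NEGATIVE = {"sad", "fear", "awful"}

def post_sentiment(post: str) -> int:
    words = post.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def filtered_feed(posts, arm, omit_rate=0.5):
    """Return the feed a user in a given arm would see: a share of positive
    (or negative) posts is silently withheld from their News Feed."""
    feed = []
    for post in posts:
        s = post_sentiment(post)
        withhold = ((arm == "reduce_positive" and s > 0) or
                    (arm == "reduce_negative" and s < 0))
        if withhold and random.random() < omit_rate:
            continue                      # post omitted without the user knowing
        feed.append(post)
    return feed

posts = ["great news today", "awful sad day", "lunch was fine"]
print(filtered_feed(posts, "reduce_positive"))
# Outcome measure (not simulated here): the sentiment of users' own
# subsequent posts, compared across arms and an unfiltered control group.
```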

 

7.4              On the contagious aspects of false information, Vosoughi et al.’s big data study of Twitter (2006-2017) finds that false information spreads significantly farther, faster, deeper and more broadly than the truth on Twitter.[94] This applies to all categories of information, but the effects were more pronounced for false political news than for false news about terrorism, natural disasters, science, urban legends or financial information. They also found that robots accelerated the spread of true and false news at the same rate: this implies that false news spreads more than the truth because humans (rather than robots) are more likely to spread it.
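
The study’s headline measures of a cascade (its size and depth) can be illustrated with a toy retweet tree, where each retweet points to the post it re-shared. The tree below is invented for illustration.

```python
# Each retweet maps to its parent; 'origin' is the original tweet. Invented data.
cascade = {"rt1": "origin", "rt2": "origin", "rt3": "rt1", "rt4": "rt3"}

def depth(node: str) -> int:
    """Number of hops back to the original tweet."""
    return 0 if node == "origin" else 1 + depth(cascade[node])

size = len(cascade) + 1                       # retweeters plus the original poster
max_depth = max(depth(n) for n in cascade)    # longest unbroken retweet chain
print(size, max_depth)                        # 5 3
```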

 

7.5              Together, the Facebook and Twitter studies indicate the potential, and mechanics, of large-scale pollution of the digital environment. Social media companies already use AI to engage users and target them with messages, and in the UK, social media penetration is very high,[95] with Facebook being the dominant social media platform.[96] If political campaigners choose to purvey emotionally nuanced disinformation, produced and/or targeted via AI, this is likely to trigger far bigger cascades of misinformation.[97]

 

8.              Does digital political campaigning benefit or harm democracy?

 

8.1              Benefits. We identify two potential benefits to democracy that arise from digital political campaigning.

 

8.1.1              Increasing engagement of electorates. The UK Information Commissioner’s Office acknowledges that ‘Social media provides unprecedented opportunities to engage hard-to-reach groups in the democratic process on issues of particular importance to them.’[98] Arron Banks told The New Yorker that the social-media postings of Leave.EU reached working-class voters, particularly in the North of England, who would not otherwise have voted.[99] This was also the stance taken by Alexander Nix, then CEO of Cambridge Analytica, when, in February 2018, he described the micro-targeting business of his company to the UK Inquiry into Disinformation and Fake News: ‘We are trying to make sure that voters receive messages on the issues and policies that they care most about, and we are trying to make sure that they are not bombarded with irrelevant materials. That can only be good. That can only be good for politics, it can only be good for democracy.’[100]

 

8.1.2              Manifesting voters’ desires, concerns and policy preferences to politicians. Some scholars note that political marketing offers mechanisms by which candidates and elected officials can tap into voters’ sentiments, helping politicians provide services that voters want and allowing voters to communicate their policy preferences to political service providers, electing those who deliver on promises and kicking out those who do not. Scholars argue that, if conducted openly and honestly, political marketing can ‘advance democratic aims, accurately transmitting the concerns of the electorate to representatives and helping duly-elected leaders develop programs that meet voters’ needs.’[101]

 

8.2              However, to advance democratic aims, such marketing should be conducted openly and honestly by all parties; otherwise we find ourselves in areas of concerning opacity. Digital targeting provides campaigns with new, opaque ways to study, market to and mobilise potential voters, raising classic questions of covert manipulation and media effects. This opacity is particularly problematic in national elections and referendums, giving rise to the following seven potential harms.

 

8.3              Harms. We identify seven potential harms to democracy that arise from digital political campaigning.

 

8.3.1              Lack of informed choices. As the UK Information Commissioner’s Office notes: ‘Citizens can only make truly informed choices about who to vote for if they are sure that those decisions have not been unduly influenced’.[102] It observes: ‘If voters are unaware of how their data is being used to target them with political messages, then they won’t be empowered to exercise their legal rights in relation to that data and the techniques being deployed, or to challenge the messages they are receiving’.[103] This is already an issue with current levels of technological deployment in digital political campaigns, but could become far worse. A report commissioned by the Information Commissioner’s Office flags the coming problem of private companies creating new forms of personal data via probabilistic inferences from the metadata (i.e. data about data) arising from people’s device use and behaviour. The report suggests that political campaigns will be particularly interested in our persuadability by certain messages, whether about immigration or other issues, and that it is hard to see how users would know that this data exists, or how they could exercise their rights to have it removed or corrected.[104]

 

8.3.2              Fragmentation of national conversations weakens the ability to hold political campaigners to account. The importance of, and threat to, shared national conversations must be recognised. If deceptive micro-targeting takes place, and if this is not scrutinised by national authorities and media, then there is little chance of those elected on such platforms being held to public account. This risk will intensify if algorithmic marketing techniques become available to all political parties, as the report commissioned by the Information Commissioner’s Office predicts.[105] This would enable parties to routinely run millions of algorithmically tuned messages, on a scale that could overwhelm regulators, with deleterious consequences for the transparency and political accountability of campaigns.

 

8.3.3              Targeted voter suppression. As noted in Section 5.4, Cambridge Analytica/SCL Group’s pitch to Leave.EU to employ their services for the 2016 EU Referendum offered voter suppression.[106] Similarly, in the 2016 Trump presidential campaign, Brad Parscale, the campaign’s digital director (and also for 2020), used Facebook’s Lookalike Audiences ad tool to expand the number of people the campaign could target, identifying voters who were not supporters of Trump and then targeting them with psychographic messaging designed to discourage them from voting. Campaign operatives openly referred to such efforts as ‘voter suppression’ aimed at three targeted groups that Hillary Clinton needed to win overwhelmingly: idealistic white liberals, young women and African Americans.[107]

 

8.3.4              Opaque exploitation of vulnerabilities. This involves targeting people who are not equipped to evaluate the message. Cambridge Analytica whistleblower Christopher Wylie explained in 2018 that Cambridge Analytica was built on exploiting Facebook to harvest millions of people’s profiles, and then building models to ‘target their inner demons’,[108] including targeting people prone to adopting conspiracies with conspiratorial messaging, whether through adverts, blogs or fake news.[109] Such targeted exploitation of vulnerabilities is difficult for individuals, or for wider society, to detect.

 

8.3.5              Unintended exploitation of vulnerabilities. Certain vulnerable groups, such as children, may become collateral recipients of online adverts. In the UK, adverts aimed at children on traditional media (such as television) are regulated by the Advertising Standards Authority, which specifies that ads should not employ ‘direct exhortation’ or ‘exploit their credulity, loyalty, vulnerability or lack of experience’. However, online political adverts are not covered by the Advertising Standards Authority. In August 2019, transparency researchers revealed that UK political parties were showing highly charged, partisan adverts to 13 to 17-year-olds on Facebook and Facebook-owned Instagram. The researchers uncovered the ads by using ad.watch, a tool that visualises data provided by Facebook on political advertising. Four of these ads were from the Labour Party and were highly partisan. Two Instagram ads depicted Nigel Farage (former leader of UKIP, and leader of the Brexit Party (2019-)) next to Tommy Robinson (a British far-right and anti-Islam activist), and claimed that: ‘The only way to stop the far-right from winning is by voting Labour.’ Another showed Farage next to large capitalised text: ‘Don’t let fear win here’. Labour later clarified that any Facebook adverts reaching people under the age of 18 were unintended.[110] Indeed, teenagers might have been targeted on factors other than their age, such as the region they live in, or their interests or behaviours. Such unintentional exploitation of vulnerable people may be intensified if parties cede control of message creation and/or targeting to an automated AI-based system. As the report commissioned by the Information Commissioner’s Office points out, it could result in political parties targeting people who are depressed, anxious or suffering from other psychological problems with adverts designed to appeal to them.[111] This is ethically problematic.

 

8.3.6              Polarisation potential. The opacity of individualised online targeting enables ‘dog whistle’ campaigns that emphasise a provocative position only to sympathetic audiences but remain invisible to others. It enables the targeted, secretive delivery of ‘wedge’ issues (namely, issues that are highly salient and important to specific segments of a voting population) to mobilise small, but crucial, segments.[112] Indeed, research on Twitter content emanating from US locations during the 2016 US presidential election campaign shows that polarising content[113] was surprisingly concentrated in swing states, many of which were also among those with large numbers of votes in the Electoral College.[114] If deliberately inflammatory information circulates, uncorrected, in closed communities, this can generate a polarised, emotive society. Selective exposure, where people prefer, and tune into, information that supports their existing beliefs, is an old finding in communication research.[115] However, when selective exposure is combined with highly targeted information that is fed into self-reinforcing algorithmic and cognitive systems, or digital ‘echo chambers’, there is even less chance of citizens responding critically to that information. Some empirically demonstrated consequences of algorithmically created filter bubbles and human confirmation bias on social media platforms are limited exposure to, and lack of engagement with, different ideas and other people’s viewpoints.[116]

 

8.3.7              Polluting the entire digital media ecosystem. Social media companies already use AI to engage their users and target them with messages, and social media penetration is very high in the UK, with Facebook the dominant platform. AI is already used to spread deception at scale (e.g. via amplification by bots), as well as to create realistic ‘deepfake’ audiovisuals of famous people (e.g. political leaders). The concern is that unscrupulous campaigners will use AI to ‘feel into’ the desires and fears of individuals and groups, creating and spreading targeted, emotive falsehoods at scale. The deliberate promotion of emotive and false information online is particularly problematic because studies indicate that both emotion and false information are contagious online (see sections 7.3 and 7.4). Hence, while niche audiences may be micro-targeted with such campaigns, there is potential to pollute the entire digital media ecosystem.

 

8.4              To summarise, contemporary digital political campaigning has the potential to serve democracy by engaging hard-to-reach parts of the electorate, and by manifesting voters’ desires, concerns and policy preferences for politicians to act upon. However, this requires that campaigns are conducted honestly and openly; otherwise we descend into covert, attempted manipulation of populations. Unfortunately, digital political campaigning is currently very opaque. This opacity can negatively impact citizens’ ability to make informed choices, and our ability to hold political campaigners, and those subsequently elected, to account. It increases the potential for targeted voter suppression. It enables opaque exploitation of people’s psychological vulnerabilities; and it leads to unintended exploitation of vulnerabilities as, for instance, children become collateral recipients of online adverts targeted by behaviour rather than age. Finally, this opacity increases the potential for societal polarisation through uncorrected, emotive disinformation targeted at niche audiences but ultimately polluting the entire digital media ecosystem.

 

9.              Solutions

 

9.1              Digital political campaigns that generate and target messages that are deceptive, and/or designed to bypass thoughtful deliberation in favour of emotionalised engagement, severely challenge the democratic ideal of treating voters as citizens. We argue that political campaigns should treat people as citizens (not as social media ‘targets’ or message ‘consumers’). Citizens are the fundamental constituents of a democratic society, whose ideas and objectives elected leaders must represent and channel into legitimate public policy.[117] To facilitate such citizenship, many solutions have been put forward by UK politicians,[118] the UK government[119] and other important stakeholders[120] on how to protect elections and minimise emotive disinformation online. The regulatory focus has fallen largely on the social media companies themselves, accompanied by some critical self-reflection and action on their part.

 

9.2              Much of the focus has been on tackling online disinformation: for instance, by independent fact-checking organisations pronouncing on the veracity of specific stories; by social media companies downgrading or flagging disputed, or provenly false, stories; and by initiatives to educate people to recognise disinformation. While these are worthy efforts, they are not without their problems. For instance, there are complex psychological and sociological issues of how and why people spread and remember false information: disrupting these processes requires care, and may have unintended effects.[121] Indeed, by December 2017, Facebook had looked at the impact of ‘disputed’ flags on its platform, and concluded that they can be counter-productive.[122] Other efforts developed by Facebook across 2018 include an election integrity programme (acting against hacking and malware, examining the role of ads and foreign interference, and understanding fake accounts) and strengthened advertising policies and guidelines to provide transparency and restrict ad content.[123] In October 2018, Facebook announced a verification process whereby people placing political adverts[124] must prove their identity, to be checked by a third-party organisation. Political adverts suspected of promoting misinformation or disinformation can be reported and, if the advert contains ‘falsehoods’, it can be taken down.[125] However, in August 2019, The Guardian reported that coordinated disinformation campaigns using ‘astroturfing’[126] can still get around Facebook’s transparency requirements for political advertising.[127]

 

9.3              Regulators have increasingly turned their attention to social media and data analytics companies. For instance, across 2018-19, Facebook was maximally fined, and Cambridge Analytica sued, for violating users’ privacy in their efforts to develop algorithms to target voters (see Section 6.3.1). In April 2019, the UK government announced in its Online Harms White Paper that a new independent regulator would be introduced to ensure that social media companies and technology firms are legally required to protect their users from a range of online harms, including disinformation, facing tough penalties if they do not comply. Time will tell whether financial penalties are merely incorporated into the cost of doing business for social media companies that, ultimately, are driven by the business model of the attention economy, where maximising user engagement in turn maximises advertising revenue. The US Federal Trade Commission’s demand in July 2019 that Facebook submit to new restrictions and a modified corporate structure, holding the company accountable for its decisions about users’ privacy, may carry more weight.

 

9.4              Other proposed measures focus on the political campaigners themselves. In May 2019, the UK government announced measures to safeguard elections: new laws to bar people from running for office if found guilty of intimidating or abusive behaviour, and requirements that online election material carry imprints clearly showing who produced it, thus helping the electorate to evaluate, and come to their own conclusions about, the online messages that they receive. It also launched a consultation on electoral integrity in order to protect UK politics from foreign influence.[128]

 

9.5              The government’s attempts to inject transparency and integrity at source – i.e. with the politicians and their campaigners – are a good start but, we argue, do not go far enough. If it is inevitable that political campaigners will use AI, social media and data analytics to increasingly target voters with messages iteratively tested and tailored to maximise engagement, then a code of ethical conduct should be developed requiring campaigns to be transparent, explainable, civil and informative. While each of these elements merits greater thought, we outline key considerations below.

9.6              On transparency, it is important to make clear if a political message has come from a political party, how much campaigners spend on digital campaigning, and where this money is spent. The Electoral Commission has long recommended that digital messages should bear an imprint to clearly show who has produced them, thereby enabling voters to evaluate and reach their own conclusions about the digital messages that they receive. The Electoral Commission further recommends that the law be changed so that campaign-related staff costs are included in the spending limits on political party campaign spending: given that digital campaigning, and use of data analytics and political consultants, are growing in importance (see section 3.2), this would greatly improve transparency on how much campaigners spend on digital campaigning.[129] The Electoral Commission also recommends that campaigners provide more information in their post-campaign spending returns about how the money was spent in digital campaigns, including invoices detailing the messages used in those campaigns, and which parts of the country they were targeted at.[130] Knowing, at the point of reception, where messages come from, and knowing, soon after the campaign is over, how much was spent on data analytics, political consultants, and the targeting of particular groups with particular messages, would significantly improve transparency.
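Purely as an illustration of what machine-readable transparency of this kind might look like (the structure and field names below are our own invention, not an Electoral Commission format), a digital spending return could itemise the imprint, staff costs, and per-message spend alongside its regional targeting:

```python
# Illustrative sketch of a machine-readable digital spending return.
from dataclasses import dataclass, field
from typing import List

@dataclass
class DigitalSpendItem:
    supplier: str            # e.g. a platform or data-analytics consultancy
    amount_gbp: float
    message_reference: str   # invoice-level reference to the advert/message used
    regions_targeted: List[str]

@dataclass
class SpendingReturn:
    campaign: str
    imprint: str             # who produced and promoted the material
    staff_costs_gbp: float   # campaign-related staff costs, currently outside spending limits
    digital_items: List[DigitalSpendItem] = field(default_factory=list)

    def total_digital_spend(self) -> float:
        return sum(item.amount_gbp for item in self.digital_items)
```

A structured return of this kind would let regulators and researchers aggregate digital spend across campaigns without manually transcribing invoices.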

9.7               On explainability, ‘explainable AI’ refers to methods and techniques in the application of AI such that the results of the solution can be understood by human experts (rather than remaining ‘black-boxed’, as in Machine Learning systems whose own designers cannot explain why the AI arrived at a specific decision). Relatedly, the right to an explanation for an output of an algorithm is the subject of much discussion in AI and its sub-field of Machine Learning. It is normally conceived as an individual right to be given an explanation for decisions by AIs that significantly affect an individual, particularly legally or financially (e.g. being denied a loan because of your AI-generated profile). Indeed, Articles 13 and 14 of the EU General Data Protection Regulation (GDPR) state that when profiling takes place, a data subject has the right to ‘meaningful information about the logic involved’.[131] In the context of voter profiling, this prompts the questions: who has the right to such an explanation, and what is required to explain an algorithm’s decision? We argue that all citizens are affected by the outcomes of digital political campaigns that extensively use AI to profile voters. We also acknowledge that it is impossible to disentangle which aspects of the campaigns proved decisive. Hence, we argue that wherever AI is significantly used in political campaigns (as it was in the EU Referendum – see Section 4), all voters deserve an explanation of who was targeted with specific messages. The details of what that explanation should cover will require further thought but, drawing on the information considered in this submission, the explanation should include: which audiences were targeted; what the basis of the targeting was (including demographics, psychographics, and probabilistic data inferences from metadata arising from audiences’ device use and behaviour); with what sort of messages; to what end (e.g. voter mobilisation, suppression); and which aspects of the campaign most succeeded in engaging voters (e.g. specific adverts, messages, themes, memes).
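The elements just listed lend themselves to a structured record. The sketch below is a minimal illustration of such a post-campaign explanation, assuming our own field names and structure (it is not a GDPR-mandated or regulator-defined format):

```python
# Illustrative sketch of the post-campaign 'explanation' argued for above.
from dataclasses import dataclass
from typing import List

@dataclass
class CampaignExplanation:
    audiences_targeted: List[str]      # descriptions of the targeted segments
    targeting_basis: List[str]         # demographics, psychographics, inferences from metadata
    message_types: List[str]           # e.g. specific adverts, themes, memes
    objective: str                     # e.g. "voter mobilisation" or "voter suppression"
    most_engaging_elements: List[str]  # which aspects most succeeded in engaging voters
```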

 

9.8               On civility, the Electoral Commission notes that campaign material must not incite others to commit crimes; and that it is a crime to make or publish a false statement of fact about the personal character or conduct of a candidate.[132] The Committee on Standards in Public Life’s Inquiry into Intimidation in Public Life recommends that political parties should develop codes of conduct about intimidatory behaviour.[133] The Electoral Commission thinks that campaigners should also take more responsibility for the tone of their arguments and the claims they make in their campaign material.[134] More work is needed to generate a typology of civility in political campaigning, but we can start with the legal, and progress to the moral. On the legal, a civil political campaign will not incite others to commit crimes; and will not make false statements of fact about the personal character or conduct of a candidate. On the moral, philosopher Amy Olberding is instructive. Her phrases to describe incivility include: cruel, nasty, aggressive, disrespectful, and pitched to provoke anger and outrage. Such characteristics could form a typology of what should be avoided in civil political campaigning. However, Olberding also argues that while the desire to be civil equates with the desire to be moral (i.e. to treat others humanely, with respect, toleration and consideration), moral behaviour sometimes requires displays of ‘disrespect or enmity, to mock or shun, to insult or shame’ in order to stop those who damage our shared humanity from continuing in their destructive ways. Sometimes, then, we can reason that we need to be righteously uncivil, disrupting the usual civil patterns because we morally judge that they need disrupting (e.g. for reasons of integrity or some greater social good). We argue, then, that while political campaigners should strive towards civility, if they deliberately breach codes of civility they should be able to rationally justify why.[135]

 

9.9              On informativeness, political campaigns should give voters enough information to freely make an informed judgment on which to base their electoral decision. This is the crux of Bakir et al.’s conceptual framework distinguishing propaganda from ethically acceptable organised persuasive communication.[136] If political campaigns deliberately seek to misinform voters, or to keep voters uninformed, then the campaign moves from being informative to being deceptive. There are many forms of deceptive communication, but the most common are lying, distortion, omission and misdirection. Deception through lying involves making a statement that is known to be untrue in order to mislead. Deception through distortion involves presenting a statement in a deliberately misleading way (for instance, exaggerating or de-emphasising information) to support the viewpoint being promoted. Deception through omission involves withholding information to make the viewpoint being promoted more persuasive. Deception through misdirection entails producing and disseminating true information intended to divert public attention away from other problematic issues. While other forms of deception may also be pertinent, we argue that the code of ethical conduct in political campaigning should ask: was the information presented by the campaign true, complete, undistorted and relevant?

 

9.10              The institution of a code of ethical conduct for political campaigners should be a process of long-term education of all citizens and campaigners, conducted and contextualised by each election or referendum.[137] This will help voters recognise if, and how, they have been targeted with emotive and/or deceptive online messages (research indicates that people are poor judges in these areas).[138] Cultural norms change with time, and what constitutes uncivil or deceptive political communication will be contextual and interpretive; below, however, we outline mechanisms to help determine these categories. In time, the development of such ethical norms of transparency, explainability, civility and informativeness should help constrain political campaigners who are hungry to win at all costs. If digital political campaigners behave badly in one election, society will have the tools and capacity to remember this at the next.

 

9.11              Over time, this should also act as an inoculation for citizens: making them aware of the range of techniques in use that may seek to obfuscate, manipulate, enrage and deceive. Inoculation theory was pioneered by William McGuire (1964) in efforts to induce attitudinal resistance against propaganda and persuasion.[139] It holds that activating people’s ‘mental antibodies’ with a weakened dose of the infectious agent can confer resistance against future attempts to persuade them. A meta-analysis of studies finds that inoculation is effective at conferring such resistance.[140] Applying inoculation theory to the context of fake news, Roozenbeek & van der Linden (2019) developed a ‘fake news game’ in which participants must create a news article about a strongly politicised issue (the European refugee crisis) using misleading tactics, from the perspective of different types of fake news producers. Results from their pilot test of the game in a school setting suggest that playing it reduced the perceived reliability and persuasiveness of fake news articles, indicating that educational games may be a promising vehicle for inoculating the public against fake news.[141] We suggest that citizens should be inoculated before the next election, to make them aware of persuasive techniques in political campaigns that may seek to obfuscate, manipulate, enrage and deceive. To that end, we make the following recommendations.

 

10.              Recommendations to counter opacity, outrage and deception in digital political campaigns

 

10.1              Put in place (via regulatory requirements) mandatory, publicly available self-evaluations by all political campaign groups following every election and referendum. The self-evaluations should address the criteria of transparency, explainability, civility and informativeness. (An illustrative sketch of how completeness against these four criteria might be checked follows this list.)

-          On transparency. Evaluate the extent to which paid-for, political campaign messages (including digital) bore imprints to clearly show who produced them; and provide details on how much the campaign spent on digital campaigning, and where this money was spent (including on: data analytics and consultants, messages used in those campaigns, and which parts of the country they were targeted at).

-          On explainability. Provide an explanation of: which audiences were targeted; what the basis of the targeting was (including demographics, psychographics, and probabilistic data inferences from metadata arising from audiences’ device use and behaviour); with what sort of messages; to what end (e.g. voter mobilisation, suppression); and which aspects of the campaign most succeeded in engaging voters (e.g. specific adverts, messages, themes, memes).

-          On civility. Reflect upon the extent to which the campaign was civil. Did it avoid: inciting others to commit crimes; making false statements of fact about the personal character or conduct of a candidate; cruelty; nastiness; aggression; disrespect; and deliberate provocation of anger and outrage? If any of these elements were deliberately breached, then explain what the justification was.

-          On informativeness. Reflect upon whether the campaign gave voters enough information to make an informed choice on which to base their electoral decision: i.e. was the information true, complete, undistorted, and relevant?
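As flagged at the start of this list, a self-evaluation built around the four criteria lends itself to a simple completeness check. The sketch below is illustrative only: the criterion names mirror recommendation 10.1, but the data structure and function are our own invention, not a proposed statutory mechanism.

```python
# Illustrative completeness check for a published self-evaluation.
REQUIRED_CRITERIA = ("transparency", "explainability", "civility", "informativeness")

def evaluation_is_complete(self_evaluation: dict) -> bool:
    # Complete only if every criterion has a non-empty response.
    return all(self_evaluation.get(c, "").strip() for c in REQUIRED_CRITERIA)

example = {
    "transparency": "All paid digital messages carried imprints; spend itemised by region.",
    "explainability": "Audiences, targeting basis and objectives disclosed.",
    "civility": "No deliberate breaches; none requiring justification.",
    "informativeness": "Claims checked as true, complete, undistorted and relevant.",
}
assert evaluation_is_complete(example)
```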

 

10.2              Put in place an independent panel of diverse stakeholders (including fact-checkers, academics, and campaigners from opposing sides) to verify, and critically comment upon, the self-evaluations.

 

10.3              Develop a kite-mark system to brand the transparency, explainability, civility and informativeness of the campaigns, to enable comparisons between elections.

 

10.4              Ensure that the self-evaluations, and verification by the independent panel, are available online in an independent public archive to enable comparisons between elections.

 

10.5              Place in an independent, searchable public archive all micro-targeted messages deployed in any political campaign.[142]

 

10.6              Use public information campaigns and citizenship education within schools to widen understanding of the criteria used in the code of ethical conduct for political campaigns, and to help people recognise if a campaign contravenes these codes.

 



 


[1] Wylie, C. 2018. Oral evidence: fake news, HC 363, 27 March. http://data.parliament.uk/writtenevidence/committeeevidence.svc/evidencedocument/digital-culture-media-and-sport-committee/disinformation-and-fake-news/oral/81022.pdf  p.6. Kaiser, B. 2018. Oral evidence: fake news, HC 363, 17 April. http://data.parliament.uk/writtenevidence/committeeevidence.svc/evidencedocument/digital-culture-media-and-sport-committee/disinformation-and-fake-news/oral/81592.html  p.2.  Cadwalladr, C. 2018. The Cambridge Analytica Files. The Brexit whistleblower: ‘Did Vote Leave use me? Was I naive?' The Guardian, 24 March. https://www.theguardian.com/uk-news/2018/mar/24/brexit-whistleblower-shahmir-sanni-interview-vote-leave-cambridge-analytica

[2] Bakir, V. & McStay, A. 2019. CULTURE CHANGE: Incentivise political campaigners to run civil and informative election campaigns.  Submission to All-Party Parliamentary Group (APPG) on Electoral Campaigning Transparency. Aug. https://research.bangor.ac.uk/portal/files/24668915/Bakir_McStay_2019_Culture_Change.pdf

[3] Cummings, D. 2017. On the referendum #22: Some basic numbers for the Vote Leave campaign. https://dominiccummings.com/2017/01/30/on-the-referendum-22-some-numbers-for-the-vote-leave-campaign/ p.12.

[4] LSE, Opinium & Lansons 2016. ‘The Impact of Brexit on consumer behaviour’, 8 June. https://www.opinium.co.uk/?s=the_impact_of_brexit_on_consumer_behaviour

[5] Kirk, A. 2017. EU referendum: The claims that won it for Brexit, fact checked. The Telegraph, 13 March. http://www.telegraph.co.uk/news/0/eu-referendum-claims-won-brexit-fact-checked/

[6] Merrick, R. 2019. Dominic Cummings: Vote Leave chief found in contempt of parliament over refusal to give evidence to 'fake news' inquiry. The Independent, 27 March. https://www.independent.co.uk/news/uk/politics/dominic-cummings-vote-leave-contempt-parliament-brexit-inquiry-fake-news-a8841731.html

[7] Banks, A. 2018. Oral evidence: fake news, HC 363, 12 June. http://data.parliament.uk/writtenevidence/committeeevidence.svc/evidencedocument/digital-culture-media-and-sport-committee/disinformation-and-fake-news/oral/85344.html

[8] DCMS. 2019. Disinformation and ‘fake news’: final report, 14 February. Digital, Culture, Media and Sport Committee, House of Commons 1791. https://publications.parliament.uk/pa/cm201719/cmselect/cmcumeds/1791/1791.pdf p.43.

[9] Caesar, E. 2019. Banks, the “Bad Boy of Brexit”, The New Yorker, 18 March. https://www.newyorker.com/magazine/2019/03/25/the-chaotic-triumph-of-arron-banks-the-bad-boy-of-brexit

[10] Channel 4 News, 2019. Revealed: how Leave.EU faked migrant footage, Channel 4 News, 16 Apr. https://www.channel4.com/news/revealed-how-leave-eu-faked-migrant-footage

[11] Kennedy, G.A. 1991. On rhetoric: a theory of civic discourse. New York: Oxford University Press.

[12] Westen, D. 2008. The political brain: the role of emotion in deciding the fate of the nation. New York: PublicAffairs, p.173.

[13] Perloff, R.M. 2018. The dynamics of political communication: media and politics in a digital age. New York and London: Routledge. Bakir, V., Herring, E., Miller, D. & Robinson, P. 2018. Lying and deception in politics. In. J. Meibauer (ed.), The Oxford handbook of politics and lying. Oxford: Oxford University Press.

[14] Chester, J. & Montgomery, K. 2017. The role of digital marketing in political campaigns. Internet Policy Review, 6(4). DOI: 10.14763/2017.4.773

[15] Demographic data is statistically socio-economic in nature: e.g. population, gender, race, age, income, education and employment in specific geographic locations and time periods.

[16] For an overview of use of opinion polling, targeted campaigning and direct marketing in UK campaigns, see: Macintyre, A., Wright, G. & Hankey, S. n.d.  Data & democracy in the UK: Tactical Tech’s data & politics team. https://cdn.ttc.io/s/ourdataourselves.tacticaltech.org/ttc-influence-industry-uk.pdf . Also see: McNair, B. 2011.  An introduction to political communication, London, Taylor & Francis, p.101.

[17] Perloff, R.M. 2018. The dynamics of political communication: media and politics in a digital age. New York and London: Routledge, pp.246-247. Bartlett, J., Smith, J. & Acton, R. 2018. The future of political campaigning. Demos. July. https://ico.org.uk/media/2259365/the-future-of-political-campaigning.pdf p. 27.

[18] Tufekci, Z. 2014. Engineering the public: big data, surveillance and computational politics. First Monday, 19 (7). https://firstmonday.org/ojs/index.php/fm/article/view/4901/4097

[19] The Electoral Commission. 2018. Digital campaigning: increasing transparency for voters, June. https://www.electoralcommission.org.uk/sites/default/files/pdf_file/Digital-campaigning-improving-transparency-for-voters.pdf p.4.

[20] Macintyre, A., Wright, G. & Hankey, S. n.d.  Data & democracy in the UK: Tactical Tech’s data & politics team. https://cdn.ttc.io/s/ourdataourselves.tacticaltech.org/ttc-influence-industry-uk.pdf p.7.

[21] The Electoral Commission. 2018. Digital campaigning: increasing transparency for voters, June. https://www.electoralcommission.org.uk/sites/default/files/pdf_file/Digital-campaigning-improving-transparency-for-voters.pdf p.4.

[22] Macintyre, A., Wright, G. & Hankey, S. n.d.  Data & democracy in the UK: Tactical Tech’s data & politics team. https://cdn.ttc.io/s/ourdataourselves.tacticaltech.org/ttc-influence-industry-uk.pdf p.7.

[23] Data analytics provides automated insights into a dataset. It can use data mining techniques and tools to discover hidden patterns in datasets.

[24] ‘Profiling’ refers to the process of construction and application of user profiles generated by mathematical techniques (such as algorithms) that allow discovery of patterns or correlations in large quantities of data, aggregated in databases. When these patterns or correlations are used to identify or represent people, they are called profiles. See Elmer, G. 2004. Profiling machines. mapping the personal information economy. MIT Press.

[25] Macintyre, A., Wright, G. & Hankey, S. n.d.  Data & democracy in the UK: Tactical Tech’s data & politics team. https://cdn.ttc.io/s/ourdataourselves.tacticaltech.org/ttc-influence-industry-uk.pdf p.7.

[26] Macintyre, A., Wright, G. & Hankey, S. n.d.  Data & democracy in the UK: Tactical Tech’s data & politics team. https://cdn.ttc.io/s/ourdataourselves.tacticaltech.org/ttc-influence-industry-uk.pdf p.11.

[27] Information Commissioners Office. 2018. Investigation into the use of data analytics in political campaigns:  A report to Parliament, 6 November. https://ico.org.uk/media/action-weve-taken/2260271/investigation-into-the-use-of-data-analytics-in-political-campaigns-final-20181105.pdf p.23.

[28] The formal warnings included a demand for each party to provide Data Protection Impact Assessments for all projects involving the use of personal data. Under the EU General Data Protection Regulation (GDPR), data controllers are required to complete a Data Protection Impact Assessment wherever their intended processing is ‘likely to result in high risk’ to the rights and freedoms of data subjects. Because parties are using special category data (relating to political opinions and ethnicity), as well as automated decision making and profiling, they would be required to undertake a Data Protection Impact Assessment under the GDPR. A Data Protection Impact Assessment gives a systematic and objective description of the intended processing and considers the risk to people’s personal data – not only the compliance risk of the organisation involved.

[29] A/B testing is a way to compare two versions of a single variable, typically by testing a subject's response to variant A against variant B and determining which of the two variants is more effective.

[30] Siroker, D. 2010. How Obama raised $60 million by running a simple experiment, Optimizely Blog, 29 November. https://blog.optimizely.com/2010/11/29/how-obama-raised-60-million-by-running-a-simple-experiment/

[31] Formisimo. 2016. Digital marketing and CRO in political campaigns. https://www.formisimo.com/blog/digital-marketing-and-cro-in-political-campaigns/

[32] Beckett, L. 2017. Trump digital director says Facebook helped win the White House. The Guardian, 9 October. https://www.theguardian.com/technology/2017/oct/08/trump-digital-director-brad-parscale-facebook-advertising

[33] Bartlett, J., Smith, J. & Acton, R. 2018. The future of political campaigning. Demos. July. https://ico.org.uk/media/2259365/the-future-of-political-campaigning.pdf p. 33.

[34] Cummings, D. 2017. Dominic Cummings: how the Brexit referendum was won. The Spectator, 9 January.

https://blogs.spectator.co.uk/2017/01/dominic-cummings-brexit-referendum-won/

[35] This figure is likely an over-estimate, and may arise from Cummings referring to the budget he worked with, not the overall campaign budget. Analysis of Vote Leave’s reported spending to the Electoral Commission suggests that more was spent on print media and canvassing than Cummings implied: e.g. Vote Leave report that £179,06 was spent on Royal Mail. See: Macintyre, A., Wright, G. & Hankey, S. n.d.  Data & democracy in the UK: Tactical Tech’s data & politics team. https://cdn.ttc.io/s/ourdataourselves.tacticaltech.org/ttc-influence-industry-uk.pdf p.16.

[36] Cummings, D. 2017. Dominic Cummings: how the Brexit referendum was won. The Spectator, 9 January.

https://blogs.spectator.co.uk/2017/01/dominic-cummings-brexit-referendum-won/

[37] Cummings, D. 2017. On the referendum #22: Some basic numbers for the Vote Leave campaign. https://dominiccummings.com/2017/01/30/on-the-referendum-22-some-numbers-for-the-vote-leave-campaign/

[38] Cummings, D. 2016. 'On the referendum #20: the campaign, physics and data science – Vote Leave’s ‘Voter Intention Collection System’ (VICS) now available for all', 29 October. https://dominiccummings.com/2016/10/29/on-the-referendum-20-the-campaign-physics-and-data-science-vote-leaves-voter-intention-collection-system-vics-now-available-for-all/

[39] Howard, P.N. 2018. Claim No: CO/3214/2018, Report of Dr Philip N. Howard Professor, Oxford University to The High Court of Justice, Queen’s Bench Division, Administrative Court, 30 November. https://www.ukineuchallenge.com/wp-content/uploads/2018/12/257136-Expert-report-of-Prof-Howard-FINAL-Signed.pdf

[40] Cummings, D. 2017. On the referendum #22: Some basic numbers for the Vote Leave campaign. https://dominiccummings.com/2017/01/30/on-the-referendum-22-some-numbers-for-the-vote-leave-campaign/ .

[41] Howard, P.N. 2018. Claim No: CO/3214/2018. Report of Dr Philip N. Howard Professor, Oxford University to The High Court of Justice, Queen’s Bench Division, Administrative Court, 30 November. https://www.ukineuchallenge.com/wp-content/uploads/2018/12/257136-Expert-report-of-Prof-Howard-FINAL-Signed.pdf

[42] Cummings, D. 2016. On the referendum #20: the campaign, physics and data science – Vote Leave’s ‘Voter Intention Collection System’ (VICS) now available for all, 29 October. https://dominiccummings.com/2016/10/29/on-the-referendum-20-the-campaign-physics-and-data-science-vote-leaves-voter-intention-collection-system-vics-now-available-for-all/

[43] Griffin, A. 2018. Brexit adverts used by Leave Campaign revealed by Facebook. The Independent, 26 July. https://www.independent.co.uk/life-style/gadgets-and-tech/news/brexit-facebook-ads-leave-campaign-nhs-immigration-boris-johnson-a8465516.html

[44] This is revealed by cross-referencing findings from the UK Inquiry into Disinformation and Fake News (2017-19), The Electoral Commission, and The Information Commissioners Office.

[45] This is revealed by cross-referencing a NATO publication on psychological operations; revelations from Cambridge Analytica (including the company’s publicity, internal documents, and whistleblowers’ testimony); the findings of various UK inquiries into social media manipulation and disinformation; US legal filings; and psychology literature on the ability to predict personality traits from social media.

[46] DCMS. 2019. Disinformation and ‘fake news’: final report, 14 February. Digital, Culture, Media and Sport Committee, House of Commons 1791. https://publications.parliament.uk/pa/cm201719/cmselect/cmcumeds/1791/1791.pdf p.50.

[47] The complex corporate structure of Cambridge Analytica, SCL Elections and SCL Group at the time of the Brexit referendum is explained by Siegelman (see: Siegelman, W. 2017. SCL Group - Companies & Shareholders. Medium, 9 May. https://medium.com/@wsiegelman/scl-companies-shareholders-e65a4f394158 ). SCL Elections and Cambridge Analytica went into administration in May 2018, but appear to have been incorporated into a new structure, Emerdata (see Siegelman, W. 2018. Chart: Emerdata Limited — the new Cambridge Analytica/SCL Group? Medium, 26 March. https://medium.com/@wsiegelman/chart-emerdata-limited-the-new-cambridge-analytica-scl-group-63283f47670d ).

[48] Silvester, J. 2018. Oral evidence: fake news, HC 363, 16 May. http://data.parliament.uk/writtenevidence/committeeevidence.svc/evidencedocument/digital-culture-media-and-sport-committee/disinformation-and-fake-news/oral/83034.pdf p.44.

[49] The Electoral Commission. 2018. Digital campaigning: increasing transparency for voters, June. https://www.electoralcommission.org.uk/sites/default/files/pdf_file/Digital-campaigning-improving-transparency-for-voters.pdf p.4.

[50] ICO. 2018. Investigation into the use of data analytics in political campaigns.  A report to Parliament, 6 November. Information Commissioners Office. https://ico.org.uk/media/action-weve-taken/2260271/investigation-into-the-use-of-data-analytics-in-political-campaigns-final-20181105.pdf p.150.

[51] ICO. 2018. Investigation into the use of data analytics in political campaigns.  A report to Parliament, 6 November. Information Commissioners Office. https://ico.org.uk/media/action-weve-taken/2260271/investigation-into-the-use-of-data-analytics-in-political-campaigns-final-20181105.pdf pp.41-42.

[52] DCMS. 2019. Disinformation and ‘fake news’: final report, 14 February. Digital, Culture, Media and Sport Committee, House of Commons 1791. https://publications.parliament.uk/pa/cm201719/cmselect/cmcumeds/1791/1791.pdf p.51.

[53] This is an Aggregate IQ repository that the Inquiry asked data breach expert Chris Vickery to examine.

[54] DCMS. 2019. Disinformation and ‘fake news’: final report, 14 February. Digital, Culture, Media and Sport Committee, House of Commons 1791. https://publications.parliament.uk/pa/cm201719/cmselect/cmcumeds/1791/1791.pdf p.50.

[55] Cambridge Analytica/SCL Group. n.d. Leave.EU: profile raising and outreach. https://www.parliament.uk/documents/commons-committees/culture-media-and-sport/Arron-Banks-appendix.pdf

[56] Cambridge Analytica/SCL Group. n.d. Leave.EU: profile raising and outreach. https://www.parliament.uk/documents/commons-committees/culture-media-and-sport/Arron-Banks-appendix.pdf p.3, emphasis added.

[57] See: Banks, A. 2018. Oral evidence: fake news, HC 363, 12 June. http://data.parliament.uk/writtenevidence/committeeevidence.svc/evidencedocument/digital-culture-media-and-sport-committee/disinformation-and-fake-news/oral/85344.html  p.4.

[58] Banks, A. 2017. ‘We had a number of meetings with CA and exchange of ideas but used techniques developed by Goddard Gunster which we have stated before’, [Tweet] 3 March. https://twitter.com/Arron_banks/status/837734663842062338 

[59] These comprise:

-          An article that says the Leave.EU campaign is using Cambridge Analytica. See: James, S.B. 2015. Leave.EU campaign brings in US voter data and messaging firm Cambridge Analytica, PR Week, 15 November. http://www.prweek.com/article/1373378/leaveeu-campaign-brings-us-voter-data-messaging-firm-cambridge-analytica 

-          A Leave.EU launch event, press conference with Brittany Kaiser (Cambridge Analytica), Gerry Gunster, Ian Warren and Arron Banks on 18 November 2015. See: https://www.youtube.com/watch?v=kuPbAX6Lzhc . The Facebook post mentions Kaiser and Cambridge Analytica: https://twitter.com/carolecadwalla/status/863665765597290496 

-          A post on Leave.EU website (now only available via archive) on how Cambridge Analytica will help collect data on the UK electorate. See: Leave.EU. 2015. The Science behind our strategy. 20 November. http://archive.fo/rN22u 

-          An interview with Arron Banks stating that Cambridge Analytica has helped boost Leave.EU’s social media campaign. See: Gallagher, P. 2015. EU referendum. The Independent, 7 December. http://www.independent.co.uk/news/uk/politics/eu-referendum-controversial-leaveeu-co-founder-arron-banks-on-why-hes-happy-to-put-noses-out-of-a6762806.html 

-          A Politico article where Banks talks about how Leave.EU is using Cambridge Analytica: ‘This same firm, says Banks, is using its understanding of our Facebook habits to shape Leave.EU's own “micro-messaging”: Cambridge’s profiles enable it to “know what sort of advertising appeals to you. It might be aggressive, it might be passive, it might be all sorts of different forms." See: Colvile, R. 2015. Leave.EU: the anti-political campaign. Politico, 23 December. http://www.politico.eu/article/leave-eu-anti-political-campaign-brexit-referendum-uk-eu-reform/

-          Banks’ tweet on 7 February 2016: ‘Our campaign is being run by Gerry Gunster ( won 24 referendum in the USA and Cambridge analytica experts in SM’: https://twitter.com/Arron_banks/status/696410417569120256 

[60] The UK Independence Party (UKIP) is a hard-Eurosceptic, right-wing UK political party.

[61] Kaiser, B. 2019. Additional submissions to Parliament in support of inquiries regarding Brexit, July. https://www.parliament.uk/documents/commons-committees/culture-media-and-sport/Britanny-Kaiser-July-2019-submission.pdf pp. 51-52.

[62] Hern, A. 2019. Cambridge Analytica did work for Leave.EU, emails confirm, The Guardian, 30 July. https://www.theguardian.com/uk-news/2019/jul/30/cambridge-analytica-did-work-for-leave-eu-emails-confirm

[63] Kaiser maintains that chargeable work was done by Cambridge Analytica, at the direction of Leave.EU and UKIP executives, despite a contract never being signed; that the invoice was paid by Arron Banks to UKIP directly; and that the intent was to give the analysed UKIP dataset to Leave.EU. See: Kaiser, B. 2019. Additional submissions to Parliament in support of inquiries regarding Brexit, July. https://www.parliament.uk/documents/commons-committees/culture-media-and-sport/Britanny-Kaiser-July-2019-submission.pdf

[64] Kaiser, B. 2019. Additional submissions to Parliament in support of inquiries regarding Brexit, July. https://www.parliament.uk/documents/commons-committees/culture-media-and-sport/Britanny-Kaiser-July-2019-submission.pdf p. 1.

[65] Briant, E.  2018. Three explanatory essays giving context and analysis to submitted evidence. Written submission to Inquiry into Fake News and Disinformation, Digital, Culture, Media and Sport Committee.

http://data.parliament.uk/WrittenEvidence/CommitteeEvidence.svc/EvidenceDocument/Digital,%20Culture,%20Media%20and%20Sport/Disinformation%20and%20%E2%80%98fake%20news%E2%80%99/Written/81306.html .

[66] DCMS. 2019. Disinformation and ‘fake news’: final report, 14 February. Digital, Culture, Media and Sport Committee, House of Commons 1791. https://publications.parliament.uk/pa/cm201719/cmselect/cmcumeds/1791/1791.pdf p.50.

[67] Kaiser, B. 2018. Oral evidence: fake news, HC 363, 17 April. http://data.parliament.uk/writtenevidence/committeeevidence.svc/evidencedocument/digital-culture-media-and-sport-committee/disinformation-and-fake-news/oral/81592.html  pp. 20-21.

[68] Tatham, S. 2015. Target Audience Analysis, The Three Swords Magazine, 28: 50-53. http://www.jwc.nato.int/images/stories/threeswords/TAA.pdf

[69] Cambridge Analytica/SCL Group. n.d. Leave.EU: profile raising and outreach. https://www.parliament.uk/documents/commons-committees/culture-media-and-sport/Arron-Banks-appendix.pdf p.7.

[70] Federal Trade Commission. 2019a. FTC sues Cambridge Analytica, settles with former CEO and app developer, 24 July. https://www.ftc.gov/news-events/press-releases/2019/07/ftc-sues-cambridge-analytica-settles-former-ceo-app-developer

[71] ICO. 2018. Investigation into the use of data analytics in political campaigns.  A report to Parliament, 6 November. Information Commissioners Office. https://ico.org.uk/media/action-weve-taken/2260271/investigation-into-the-use-of-data-analytics-in-political-campaigns-final-20181105.pdf

[72] The Information Commissioners Office also issued the maximum penalty of £500,000 to Facebook in October 2018 for allowing Cambridge Analytica to collect data of up to 87 million users through third-party apps. Facebook was fined under the older Data Protection Act 1998, which meant that Facebook avoided a potential GDPR fine stretching to $1.6bn.

[73] The US Federal Trade Commission issues an administrative complaint when it has ‘reason to believe’ that the law has been or is being violated, and it appears to the Commission that a proceeding is in the public interest. When the Commission issues a consent order on a final basis, it carries the force of law with respect to future actions. Each violation of such an order may result in a civil penalty of up to $42,530.

[74] In July 2019, the US Federal Trade Commission announced that Facebook would pay a record-breaking $5 billion penalty and submit to new restrictions and a modified corporate structure that would hold the company accountable for the decisions it makes about its users’ privacy. This was to settle Federal Trade Commission charges that Facebook violated a 2012 Federal Trade Commission order by deceiving users about their ability to control the privacy of their personal information. See: Federal Trade Commission. 2019. FTC imposes $5 Billion penalty and sweeping new privacy restrictions on Facebook, 24 July. https://www.ftc.gov/news-events/press-releases/2019/07/ftc-imposes-5-billion-penalty-sweeping-new-privacy-restrictions

[75] Wylie, C. 2018. Oral evidence: fake news, HC 363, 27 March. http://data.parliament.uk/writtenevidence/committeeevidence.svc/evidencedocument/digital-culture-media-and-sport-committee/disinformation-and-fake-news/oral/81022.pdf  pp.21-22.

[76] Wells, W.D. 1975. Psychographics: a critical review. Journal of Marketing Research, 12(2): 196-213. https://www.jstor.org/stable/3150443 p.197.

[77] Tufekci, Z. 2014. Engineering the public: big data, surveillance and computational politics. First Monday, 19 (7). https://firstmonday.org/ojs/index.php/fm/article/view/4901/4097

[78] Kosinski, M., Stillwell, D. & Graepel, T. 2013. Private traits and attributes are predictable from digital records of human behaviour. Proceedings of the National Academy of Sciences, 110(15): 5,802–5,805, doi: http://dx.doi.org/10.1073/pnas.1218772110

[79] McCrae, R. R. & Costa, P. T. 1987. Validation of the five-factor model of personality across instruments and observers. Journal of Personality and Social Psychology, 52(1): 81.

[80] Azucar, D., Marengo, D. & Settanni, M. 2018. Predicting the Big 5 personality traits from digital footprints on social media: a meta-analysis. Personality and Individual Differences, 124, 1 April: 150-159. https://doi.org/10.1016/j.paid.2017.12.018 p.157.

[81] Nix, A. 2016. The power of big data and psychographics in the electoral process, 27 September. The Concordia Annual Summit, New York. https://www.youtube.com/watch?v=n8Dd5aVXLCc 

[82] Kaiser, B. 2018. Oral evidence: fake news, HC 363, 17 April. http://data.parliament.uk/writtenevidence/committeeevidence.svc/evidencedocument/digital-culture-media-and-sport-committee/disinformation-and-fake-news/oral/81592.html pp.20-21.

[83] Bartlett, J., Smith, J. & Acton, R. 2018. The future of political campaigning. Demos. July. https://ico.org.uk/media/2259365/the-future-of-political-campaigning.pdf

[84] Emotional AI derives from affective computing techniques and advances in machine learning and AI. It is a weak form of AI in that these technologies aim to read and react to emotions, but they do not have sentience or emotional states themselves.

[85] McStay, A. 2018.  Emotional AI: the rise of empathic media. London: Sage

[86] Bakir, V. & McStay, A. 2018. Fake news and the economy of emotions: problems, causes, solutions. Digital Journalism, 1-22. http://dx.doi.org/10.1080/21670811.2017.1345645

[87] Bradshaw, S. & Howard, P.N. 2017. Troops, trolls and troublemakers: a global inventory of organized social media manipulation. Computational Propaganda Research Project. Working paper no. 2017.12. http://comprop.oii.ox.ac.uk/wp-content/uploads/sites/89/2017/07/Troops-Trolls-and-Troublemakers.pdf

[88] Bartlett, J. 2019. You can’t believe a word any of these people is saying – that’s the ‘deep fake’ era for you. The Guardian, 16 June. https://www.theguardian.com/commentisfree/2019/jun/16/you-cant-believe-a-word-any-of-these-people-is-saying-thats-the-dep-fake-era-for-you

[89] Bakir, V. & McStay, A. 2018. Fake news and the economy of emotions: problems, causes, solutions. Digital Journalism, 1-22. http://dx.doi.org/10.1080/21670811.2017.1345645

[90] McStay, A. 2018. Emotional AI: the rise of empathic media. London: Sage

[91] Durkheim, É. 1997 [1893]. Division of labour in society. New York: Free Press.

[92] Canetti, E. 1981 [1960]. Crowds and power. New York: Continuum.

[93] Kramer, A.D.I., Guillory, J.E. & Hancock, J.T. 2014. Experimental evidence of massive-scale emotional contagion through social networks, Proceedings of the National Academy of Sciences, 111 (29): 8788–90.

[94] Vosoughi, S., Roy, D, & Aral, S. 2018. The spread of true and false news online. Science, 359 (6380).

[95] As of 2019, the UK has 45 million social media users (67% of the UK population). Daily, the average UK-based user spends 1 hr 50 mins scrolling through social media sites.

[96] As of 2019, Facebook has 40 million monthly active UK users (71% of UK adults >13 years old), while Twitter has 13.6 million monthly active UK users. See: wearesocial & Hootsuite. 2019. Digital in the UK. https://wearesocial.com/uk/digital-in-the-uk

[97] Indeed, a study of the kinds of content that Twitter users in the US state of Michigan were sharing just before the 2016 presidential election found that social media users in Michigan shared a lot of political content, but the amount of professionally researched political news and information was consistently smaller than the amount of extremist, sensationalist, conspiratorial, masked commentary, fake news and other forms of ‘junk news’. The study also found that such junk news outperformed real news, but the proportion of professional news content being shared hit its lowest point the day before the election. See: Howard, P. N., Bolsover, G., Kollanyi, B., Bradshaw, S. & Neudert, L.-M. 2017. Junk News and Bots during the U.S. Election: What Were Michigan Voters Sharing Over Twitter? COMPROP DATA MEMO 26 March. http://comprop.oii.ox.ac.uk/wp-content/uploads/sites/89/2017/03/What-Were-Michigan-Voters-Sharing-Over-Twitter-v2.pdf.  A study of the kinds of content that Twitter users across all US states were sharing just before the 2016 presidential election finds that a worryingly large proportion (32%) provided links to polarising content from Russian, WikiLeaks, and junk news sources. This content uses divisive and inflammatory rhetoric, and presents faulty reasoning or misleading information to manipulate the reader’s understanding of public issues and feed conspiracy theories. See: Howard, P.N., Kollanyi, B., Bradshaw, S., Neudert, L.-M. 2017. Social Media, News and Political Information during the US Election: Was Polarizing Content Concentrated in Swing States? COMPROP DATA MEMO 28 September. http://comprop.oii.ox.ac.uk/wp-content/uploads/sites/89/2017/09/Polarizing-Content-and-Swing-States.pdf

[98] ICO. 2018. Investigation into the use of data analytics in political campaigns.  A report to Parliament, 6 November. Information Commissioners Office. https://ico.org.uk/media/action-weve-taken/2260271/investigation-into-the-use-of-data-analytics-in-political-campaigns-final-20181105.pdf p.18.

[99] Caesar, E. 2019. Banks, the ‘Bad Boy of Brexit’, The New Yorker, 18 March. https://www.newyorker.com/magazine/2019/03/25/the-chaotic-triumph-of-arron-banks-the-bad-boy-of-brexit 

[100] DCMS. 2018. Disinformation and ‘fake news’: interim report, 24 July. House of Commons 363. p.27. Also see Q657 http://data.parliament.uk/writtenevidence/committeeevidence.svc/evidencedocument/digital-culture-media-and-sport-committee/fake-news/oral/79388.html

[101] Perloff, R.M. 2018. The dynamics of political communication: media and politics in a digital age. New York and London: Routledge. p.250.

[102] ICO. 2018. Investigation into the use of data analytics in political campaigns.  A report to Parliament, 6 November. Information Commissioners Office. https://ico.org.uk/media/action-weve-taken/2260271/investigation-into-the-use-of-data-analytics-in-political-campaigns-final-20181105.pdf p.4.

[103] ICO. 2018. Investigation into the use of data analytics in political campaigns.  A report to Parliament, 6 November. Information Commissioners Office. https://ico.org.uk/media/action-weve-taken/2260271/investigation-into-the-use-of-data-analytics-in-political-campaigns-final-20181105.pdf p.18.

[104] Bartlett, J., Smith, J. & Acton, R. 2018. The future of political campaigning. Demos. July. https://ico.org.uk/media/2259365/the-future-of-political-campaigning.pdf

[105] Bartlett, J., Smith, J. & Acton, R. 2018. The future of political campaigning. Demos. July. https://ico.org.uk/media/2259365/the-future-of-political-campaigning.pdf p.39.

[106] Cambridge Analytica/SCL Group. n.d. Leave.EU: profile raising and outreach. https://www.parliament.uk/documents/commons-committees/culture-media-and-sport/Arron-Banks-appendix.pdf p.3.

[107] Green, J. & Issenberg, S. 2016. Inside the Trump bunker, with days to go. Bloomberg, 27 October. https://www.bloomberg.com/news/articles/2016-10-27/inside-the-trump-bunker-with-12-days-to-go

[108] Cadwalladr, C. 2018. Revealed: 50 million Facebook profiles harvested for Cambridge Analytica in major data breach. The Guardian, 17 March. https://www.theguardian.com/news/2018/mar/17/cambridge-analytica-facebook-influence-us-election

[109] Wylie, C. 2018. Oral evidence: fake news, HC 363, 27 March. http://data.parliament.uk/writtenevidence/committeeevidence.svc/evidencedocument/digital-culture-media-and-sport-committee/disinformation-and-fake-news/oral/81022.pdf  pp.21-22.

[110] Manthorpe, R. 2019. Teens exposed to highly charged political ads on Facebook and Instagram. Sky News, 15 August. https://news.sky.com/story/teens-exposed-to-highly-charged-political-ads-on-facebook-and-instagram-11786042

[111] Bartlett, J., Smith, J. & Acton, R. 2018. The future of political campaigning. Demos. July. https://ico.org.uk/media/2259365/the-future-of-political-campaigning.pdf

[112] Tufekci, Z. 2014. Engineering the public: big data, surveillance and computational politics. First Monday, 19 (7). https://firstmonday.org/ojs/index.php/fm/article/view/4901/4097

[113] E.g. content that uses divisive and inflammatory rhetoric, and presents faulty reasoning or misleading information to manipulate understanding of public issues and feed conspiracy theories.

[114] Howard, P.N., Kollanyi, B., Bradshaw, S. & Neudert, L.-M. 2017. Social media, news and political information during the US Election: was polarizing content concentrated in swing states? COMPROP DATA MEMO 28 September. http://comprop.oii.ox.ac.uk/wp-content/uploads/sites/89/2017/09/Polarizing-Content-and-Swing-States.pdf

[115] Lazarsfeld, P.F., Berelson, B. & Gaudet, H. 1944. The people’s choice: how a voter makes up his mind in a presidential campaign. New York: Columbia University Press.

[116] Bessi, A., Zollo, F., Del Vicario, M., Puliga, M., Scala, A., Caldarelli, G., Uzzi, B. & Quattrociocchi, W. 2016. Users polarization on Facebook and YouTube. PLoS ONE, 11(8): e0159641. doi:10.1371/journal.pone.0159641;   Del Vicario, M., Bessi, A., Zollo, F., Petroni, F., Scala, A., Caldarelli, G., Stanley, H. E. & Quattrociocchi, W. 2016. The spreading of misinformation online. Proceedings of the National Academy of Sciences, 113(3): 554-559. http://www.pnas.org/content/113/3/554.full.pdf .

[117] Habermas, J. 1989. The structural transformation of the public sphere: an inquiry into a category of bourgeois society. Cambridge, MA: MIT Press. Scammell, M. 2014. Consumer democracy: the marketing of politics. New York: Cambridge University Press.

[118] DCMS. 2019. Disinformation and ‘fake news’: final report, 14 February. Digital, Culture, Media and Sport Committee, House of Commons 1791. https://publications.parliament.uk/pa/cm201719/cmselect/cmcumeds/1791/1791.pdf

[119] Cabinet Office & Kevin Foster MP. 2019. Press release: Government safeguards UK elections, 5 May. https://www.gov.uk/government/news/government-safeguards-uk-elections .

[120] Bakir, V. & McStay, A. 2017. Summary and analysis of all written submissions on how to combat fake news (up to April 2017). Written Submission to Inquiry into Disinformation and Fake News. DCMS. http://data.parliament.uk/writtenevidence/committeeevidence.svc/evidencedocument/digital-culture-media-and-sport-committee/fake-news/written/71533.html . Bakir, V. & McStay, A. 2018. Fake news and the economy of emotions: problems, causes, solutions. Digital Journalism, 1-22. http://dx.doi.org/10.1080/21670811.2017.1345645

[121] Chan, M.S., Jones, C. R., Jamieson, K.H. & Albarracín, D. 2017. Debunking: a meta-analysis of the psychological efficacy of messages countering mis-information, Psychological Science, 1-16. Rubin, V.L., Conroy, N.J., Chen, Y. & Cornwell, S. 2017. Fake news or truth? using satirical cues to detect potentially misleading news. Journal of Economic Perspectives, 31 (2): 211-236.

[122] European Commission. 2018. A multi-dimensional approach to disinformation, 12 March. Directorate-General for Communication Networks, Content & Technology, https://ec.europa.eu/digital-single-market/en/news/final-report-high-level-expert-group-fake-news-and-online-disinformation p.15.

[123] European Commission. 2018. A multi-dimensional approach to disinformation, 12 March. Directorate-General for Communication Networks, Content & Technology, https://ec.europa.eu/digital-single-market/en/news/final-report-high-level-expert-group-fake-news-and-online-disinformation

[124] i.e. adverts that feature political figures and parties, elections, legislation before Parliament or past referendums.

[125] DCMS. 2019. Disinformation and ‘fake news’: final report. 14 February. Digital, Culture, Media and Sport Committee, House of Commons 1791. Available at: https://publications.parliament.uk/pa/cm201719/cmselect/cmcumeds/1791/1791.pdf  p.62.

[126] Astroturfing is the practice whereby political campaigners and lobbying companies attempt to create the perception of an upswell of grassroots support for a cause.

[127] Waterson, J. 2019. Revealed: Johnson ally’s firm secretly ran Facebook propaganda network. The Guardian, 1 August. https://www.theguardian.com/politics/2019/aug/01/revealed-johnson-allys-firm-secretly-ran-facebook-propaganda-network?CMP=Share_iOSApp_Other

[128] Cabinet Office & Kevin Foster MP. 2019. Press release: Government safeguards UK elections, 5 May. https://www.gov.uk/government/news/government-safeguards-uk-elections

[129] The Electoral Commission. 2018. Digital campaigning: increasing transparency for voters, June. https://www.electoralcommission.org.uk/sites/default/files/pdf_file/Digital-campaigning-improving-transparency-for-voters.pdf p.14.

[130] The Electoral Commission. 2018. Digital campaigning: increasing transparency for voters, June. https://www.electoralcommission.org.uk/sites/default/files/pdf_file/Digital-campaigning-improving-transparency-for-voters.pdf p.11.

[131] Goodman, B. & Flaxman, S. 2017. European Union regulations on algorithmic decision-making and a 'right to explanation'. AI Magazine, 38(3).  doi: 10.1609/aimag.v38i3.2741.

[132] The Electoral Commission. 2018. Digital campaigning: increasing transparency for voters, June. https://www.electoralcommission.org.uk/sites/default/files/pdf_file/Digital-campaigning-improving-transparency-for-voters.pdf p.9.

[133] Committee on Standards in Public Life. 2018. Intimidation in public life: a review by the Committee on Standards in Public Life, March. https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/666927/6.3637_CO_v6_061217_Web3.1__2_.pdf

[134] The Electoral Commission. 2018. Digital campaigning: increasing transparency for voters, June. https://www.electoralcommission.org.uk/sites/default/files/pdf_file/Digital-campaigning-improving-transparency-for-voters.pdf p.10.

[135] Olberding, A. 2019. Righteous incivility. 5 September. https://aeon.co/essays/whats-the-difference-between-being-righteous-and-being-rude . Also see Olberding, A. 2019. The wrong of rudeness: learning modern civility from ancient Chinese philosophy. Oxford University Press.

[136] Bakir, V., Herring, E., Miller, D. & Robinson, P. 2018. Organized persuasive communication: a new conceptual framework for research on public relations, propaganda and promotional culture. Critical Sociology. https://doi.org/10.1177/0896920518764586

[137] Numerous government inquiries and studies into online disinformation agree on the importance of education in improving digital literacy and building societal resilience to disinformation. However, most simply focus on recognising disinformation or deception. See: European Commission. 2018. A multi-dimensional approach to disinformation, 12 March, Directorate-General for Communication Networks, Content & Technology, https://ec.europa.eu/digital-single-market/en/news/final-report-high-level-expert-group-fake-news-and-online-disinformation p.20;  DCMS. 2019. Disinformation and ‘fake news’: final report, 14 February. House of Commons 1791. https://publications.parliament.uk/pa/cm201719/cmselect/cmcumeds/1791/1791.pdf pp.86-88; UK Government 2019. Don’t feed the Beast. https://sharechecklist.gov.uk. The need for greater emotional literacy is also discussed by Wardle, C. & Derakhshan, H. 2017. Information disorder: toward an interdisciplinary framework for research and policy making. Council of Europe report DGI(2017)09. https://rm.coe.int/information-disorder-toward-an-interdisciplinary-framework-for-researc/168076277c p.60. However, we have yet to see a framework that combines our four criteria.

[138] Wardle, C. & Derakhshan, H. 2017. Information disorder: toward an interdisciplinary framework for research and policy making. Council of Europe report DGI(2017)09.  https://rm.coe.int/information-disorder-toward-an-interdisciplinary-framework-for-researc/168076277c 

[139] McGuire, W. J. 1964. Some contemporary approaches. Advances in Experimental Social Psychology, 1: 191–229. doi:10.1016/ S0065-2601(08)60052-0. 

[140] Banas, J. A. & Rains, S. A. 2010. A meta-analysis of research on inoculation theory. Communication Monographs, 77 (3): 281–311. doi:10.1080/03637751003758193. Also see: Cook, J., Lewandowsky, S.  & Ecker, U.K.H. 2017. Neutralizing misinformation through inoculation: exposing misleading argumentation techniques reduces their influence. PLOS ONE, 12(5): 1–21. doi:10.1371/journal.pone.0175799.

[141] Roozenbeek, J. & van der Linden, S. 2019. The fake news game: actively inoculating against the risk of misinformation, Journal of Risk Research, 22(5): 570-580, DOI: 10.1080/13669877.2018.1443491.

[142] Who Targets Me, a citizen-led initiative, monitors political advertising on social media, but only gathers information via volunteers installing software to track political ads on their social media accounts. Their sample is therefore limited to a self-selecting group of concerned people (see https://whotargets.me ) . In October 2018, Facebook announced in the UK that it will publish its own online databases of the political adverts that it has been paid to run, including information such as the targeting, actual reach and amount spent on those adverts. (See: Cellan-Jones, R. 2018. Facebook tool makes UK political ads 'transparent'. BBC News, 16 October. https://www.bbc.co.uk/news/technology-45866129 .) However, for greater trustworthiness, and to enable analysis across all social media platforms used by a campaign, creating an independent, searchable, public register of online political adverts is preferable.