Written evidence submitted by Who Targets Me
Submitted: 3/9/21 | Evidence provided by: Sam Jeffers, Who Targets Me
The use of online political advertising in the UK has grown significantly in the last decade. With increasing expenditure (and Government plans to further increase spending limits in British elections) comes a growing risk of online political advertising being abused by domestic and foreign actors.
We see the Online Safety Bill as the correct vehicle for straightforward, platform-focused safety measures to reduce this risk.
To achieve this, we propose the addition of a series of measures that are:
- Systemic, in that they add practical constraints on potential misuse of political advertising which could be universally applied by platforms.
- Speech-preserving, in that they create no new limitations on what campaigners can say in or do with their advertising.
- Balanced, in that they dramatically enhance trust and transparency, while imposing little by way of administrative burden on platforms of any size or their advertiser clients.
- Stable, in that they create a level playing field for platforms and campaigners, allowing them to compete in an environment where the rules are predictable and consistent.
We do not see this as unnecessary ‘scope creep’ for the Bill. Instead, it is a response to an increasingly urgent need for reform. The UK has already been through a number of electoral cycles where online political advertising has been a significant factor (the 2015, 2017 and 2019 General Elections, as well as the 2016 EU Referendum). With the possibility of an early election in late 2022 or early 2023, and the chance of a second Scottish Independence referendum at some point, it is the right time for the UK to try to mitigate the democratic and national security risks posed by unfettered online political advertising.
Responsible online political advertising can be beneficial for British democracy. Being cheap and accessible, it democratises the process of being heard in a political campaign. It allows types of engagement that more traditional media coverage does not, and offers the opportunity to reach and motivate new audiences. This is well aligned with the goals of a healthy democratic system.
While acknowledging the positive features, we have also written extensively over the past 5 years about the risks inherent in unfettered online political advertising. Regulators such as the Electoral Commission, platforms including Facebook, advertising trade bodies, a broad swathe of civil society organisations, privacy advocates and all political parties have stated their desire for regulation and/or clarity about rules for online political advertising. We are in full agreement with them.
Large global platforms - particularly Facebook and Google - now take the largest portion of political party campaign budgets. Thousands of new and unregulated campaigners use their platforms each year. The Government should take the opportunity presented by the Online Safety Bill to regulate the platforms’ political advertising systems and procedures to ensure confidence in the British democratic system.
We see five areas of risk posed by the misuse of online political advertising:
To date, the large platforms have responded to public and media pressure about their political advertising systems with a number of self-regulatory actions. This has included setting up new transparency archives, introducing verification measures for advertisers and tweaking policies around specific threats (such as disinformation about voting rules).
These measures are welcome but incomplete. Being self-regulatory, they lack consistency, instead being the sum of each platform's individual responses to commercial and reputational circumstances. This makes them unpredictable and uneven (for example, Twitter banned online political advertising from its platform during the 2019 General Election campaign, while Facebook made numerous changes to its policies in the weeks and days before last year's US Presidential election).
Legislation would bring about consistency and stability, and would re-centre the rules on the specific needs of the British democratic system. We propose that the Online Safety Bill should be amended to make nine specific provisions on political advertising.
Please note that, where relevant, we have linked to more detailed pieces we’ve written on each of the proposals.
Defining what is, and isn’t, political content is a persistent challenge when thinking about regulating political advertising (and therefore treating it differently to other, ‘non-political’ advertising).
Rather than having the Bill try to define specific actors or content as ‘political’, we propose a system that uses a series of tests to evaluate platforms’ ability to identify (and properly treat) political and issue-related advertising.
The goal of this approach would be to ensure that platforms are making good faith efforts to identify such advertising on their systems, and are reporting on it accurately, with few false negatives (ads that they miss) and false positives (ads incorrectly labelled as political).
Political speech deserves full and comprehensive scrutiny. Though platforms have unilaterally developed political ad libraries (and similar) over the last few years, they are inconsistent and incompatible with each other.
The Government should use the Bill to define a universal standard for platforms’ publishing data about political ads. This should include:
- Information about the advertiser and their current status.
- The ads themselves and the periods they ran for.
- The policies the ads were accepted under.
- Any form of targeting used to refine who sees the ad.
- The size of the audience for the ad.
- The amount spent and the amount of user engagement.
- Any moderation decisions made by the platform relevant to the advertiser or ad.
Advertising is the sum of the message, the amount of money spent on it, how many people it reaches and who it is targeted at. The Bill should ensure it is straightforward for voters to discover this.
Each election (and often in between them), our research discovers new advertisers who do not disclose their identity or the source of their funds. While there are occasionally good arguments for anonymous campaigning (e.g. in oppressive regimes), such cases are the exception rather than the norm in the UK.
Platforms should therefore perform strong “know your customer” checks to ensure that people using their services meet the high standards of transparency expected of campaigners in Britain.
Allowing foreign actors, particularly foreign governments, to use political advertising to destabilise British democracy is a national security threat. Global advertising platforms must properly verify advertisers so their systems cannot be turned into a vector for such attacks.
To complement strong checks, platforms should also be required to implement a “cooling off” period for new advertisers, limiting their reach for a period of time. This would reduce the risk of new actors “popping up” during an election campaign, spending large sums while failing to meet the expected standards of transparency, before disappearing forever (this happened numerous times during the 2019 General Election campaign).
With online political advertising growing exponentially, practical transparency and accountability have become increasingly difficult. The traditional systems of accountability, such as scrutiny by journalists, have been unable to keep up with monitoring thousands of advertisers and tens or hundreds of thousands of political ads being run simultaneously.
We believe that establishing quotas for the number of ads a political advertiser can run would restore balance to the system in the following ways:
- Fewer ads would reduce the incentive for advertisers to collect large quantities of data on voters, posing a risk to privacy. Targeting specific messages would still be possible, but not to the fine-grained extent currently possible.
- It would increase the chance of ads that contain falsehoods being identified, and their sponsors held accountable, particularly when considered alongside the proposals for strong “know your customer” and advertising transparency standards.
- It offers a content-neutral approach that is entirely compatible with the goal of democratic free expression.
Broadcasters are not permitted to cover campaign issues on election day. Currently, though, online political advertising does not stop while people are voting. This risks new issues or false information about voting being raised on election day, with no opportunity for it to be interrogated or its sponsors held accountable.
To prevent this, other than for simple “Vote X Today” advertising, platforms should be required to follow the same blackout period required of broadcasters.
To date, platforms have provided insufficient access to data for researchers looking to understand their impact on individuals and society. This lack of concrete evidence has led to widely divergent views about their role. As a result, researchers have been forced to find creative, grey-area solutions for getting hold of data. In one such case, Facebook recently shut down the accounts of respected, ethics-approved and audited US researchers studying the impacts of online political advertising. This lack of access and legal uncertainty must end.
The Online Safety Bill should set out a pathway towards vetted researchers (academic, journalistic and civil society) being able to work with platform data in a privacy preserving way.
Ofcom will take on much of the workload associated with the new Online Safety Bill. We agree that, of the available regulatory bodies, they are generally best suited to deeply interrogating the work and data of platforms.
However, we see the Electoral Commission, with its specific expertise around elections and campaigning, as being an extremely valuable partner in monitoring the risks posed by online political advertising. Given this experience, the Bill should co-designate the Electoral Commission to monitor the use of online ads in political campaigns.
Platforms should be regularly audited, with audits ensuring:
- They are properly identifying political and issue advertising (see 1), minimising both false negatives and false positives.
- They properly and accurately perform appropriate “know your customer” checks.
- Transparency standards are being fully implemented.
The Bill includes a range of penalties for platforms that fail to comply with its current scope. These should be extended to cover failures to treat political advertising appropriately. Working from recommendations from the regulator (who in turn would work from evidence provided by the platform auditor), platforms should be given a period of time to correct vulnerabilities in their political advertising systems, after which penalties would apply.
Democracy is precious. Platforms that are negligent, allowing their systems to be used in ways that are detrimental to national security and to voter confidence, should be properly and substantially penalised.