Daniel Kreiss – written evidence (DAD0098)

  1. What are the main implications for democracy resulting from recent trends in online political campaigning? Which of these trends are positive, and which of significant concern?

There are three relevant considerations to this question. What are the recent trends in online political campaigning? What are the implications or likely effects of these trends? And, how do we evaluate them according to normative theories of democracy?

In many western democracies, we have seen some broad trends in digital campaigning that concern the emergence of technology-intensive campaigning; the rise of data and analytics and social media as central to electoral efforts; and the growing centrality of platforms, and platform companies, as infrastructure for all democratic and electoral processes from discussion in the public sphere and political advertisements to serving as conduits of voting information.

In a technology-intensive era of political campaigning, which increasingly characterizes electoral processes in all western democracies, everything that campaigns do—from contacting voters on their doorsteps to running television and digital advertisements—has an underlying technological and data basis. Political parties around the world have invested considerable resources in developing technological infrastructures to support voter contacts, as well as data infrastructures to figure out which voters to contact, where and how to contact them, determine what is the most effective thing to say, and learn how to get their attention. In part, this is driven by broader shifts in media and technological environments that simply make it harder to get voters’ attention. With the proliferation of media platforms, time-shifting of television, the rise of streaming services and dual screening, and changing media habits, it is much more difficult for political campaigners to get the attention of voters than it was ten, twenty, or thirty years ago.

The rise of data and analytics underlying electoral campaigning and the increasing centrality of social media to the efforts of campaigners reflects this underlying reality. It is hard to reach and break through the noise to engage people, and campaigns want to efficiently contact them and therefore go to where they are, which is increasingly social media for all demographics of the electorate. This, in turn, has also meant that there are new intermediaries in electoral processes around the world: platform companies. This includes Alphabet (Google and YouTube), Facebook (including Instagram), and smaller platforms with varying reach including Snapchat, Twitter, and Reddit. These private companies are increasingly called upon to moderate political speech, and to that end they have policies governing everything from permissible user content to political speech more broadly, and they set the terms upon which campaigns can purchase and run digital advertisements meant to sway electorates. They also facilitate the data and analytics operations of campaigns by being the primary means to distribute targeted content and harvest data.

There are a number of implications of these changes that can be evaluated differently according to various democratic theories. First, we have seen, and studies have shown, that there has been an uptick in political participation in many western democracies, in part owing to efforts to contact and mobilize electorates efficiently using digital and social media. What the technology-intensive era has decidedly not done is create more robust forms of democratic deliberation or citizen engagement in policy making. In essence, this new technology-intensive era privileges participation over deliberation. The ability of campaigns to micro-target voters, which often leads to socially divisive and in-group appeals, and the ways that social media platforms algorithmically reward the most emotionally engaging content, have incentivized political communication that favours in-group solidarity, not a broader civil social solidarity. This, in turn, also favours participation over deliberation.

Similarly, the implication of social media forming the basis for much political communication has, in turn, given rise to both democratically beneficial and problematic processes. Social media platforms such as Facebook and YouTube have undoubtedly increased the availability and reach of political information and content for citizens around the world. People have unprecedented access to information about their government and civic affairs through the communications hosted by these platforms from government agencies, journalism organizations, academic institutions, and think tanks. At the same time, people have newfound capacities to engage in political debate and discussion with peers around the world. This same explosion of political information has, in turn, also created the conditions for the unprecedented scale and reach of deliberate disinformation and misinformation spread in ignorance of its truth value. It has facilitated the rise of new, profit-driven actors who stand to gain from the spread of this mis/dis-information, and strategic entities including foreign agents and states that seek to undermine trust in electoral processes.

Finally, platforms are now the arbiters of global democratic processes in new and unprecedented ways, especially related to the scale at which they operate around the world. On the one hand, their private, corporate orientations have proven a boon to democratic speech and life in authoritarian countries, expanding sources of political information and opportunities to engage in political speech. On the other, they have also undermined democratic and electoral processes in many countries through the spread of hate speech and the facilitation of unaccountable, non-transparent, and even illegal forms of political speech, all while posing new challenges to regulators seeking to ensure that they operate in line with the public interest.

  2. What are the principal variations that exist in platform policies towards online political advertising, and do you think there is a need for a consistent policy across the sector (if so what should this be)?

I am including two charts my research center produced to illustrate the scale of this problem.

The scope of policies and products that exist in relation to digital political advertising is dizzying across the leading social media platforms. While there are clear downsides, discussed below, this diversity has afforded strategic political communicators a much greater variety of means of appealing to the public than existed during the broadcast era of American, and indeed global, politics. Digital and social media have lowered the costs of purchasing paid political speech, which scholars have documented has allowed a much greater range of political actors to engage in paid political advertising, including non-elites and challengers to incumbents. Meanwhile, the sheer variety of ads and the diversity of the platforms they run on make it more likely that citizens will be engaged in contexts that are meaningful to them. All of this likely means that digital political advertising promotes electoral participation.

To start, here is an overview of the major, easily accessible digital advertising platforms and the suite of products and services they offer in the context of digital political advertising.

Google Ads (including YouTube): Parent company Alphabet has multiple advertising-related products and platforms. Google Ads (previously AdWords) includes search engine advertisements, banner ads, video ads (including YouTube), and Gmail ads. Google’s Display and Video 360 platform connects to the larger programmatic media-buying ecosystem outside of Google-managed advertising inventory. Display and Video 360 has minimum spend requirements that make it inaccessible to smaller campaigns. There are also advertising capabilities solely accessible through Google’s API, which takes technical skills many campaign organizations lack.

Facebook and Instagram: Advertisements on Facebook, Instagram, Facebook’s Audience Network and Messenger are all run by boosting a post or in Facebook’s Ad Manager. Currently, political advertisements are not allowed on Facebook Audience Network or Messenger, leaving Facebook and Instagram as the primary carriers of political advertisements. The rules for advertising on these platforms are mostly the same, though ads on Instagram must follow Instagram’s Community Guidelines in addition to Facebook’s. Currently, WhatsApp (also owned by Facebook) does not carry ads.

Reddit: Reddit’s advertising platform is significantly more limited than those of the other companies. Its barebones capabilities and ambiguous rules are likely why it has not been adopted by many advertisers, and it serves as an interesting point of comparison to Google, Facebook, and Instagram.

Snapchat: As with Reddit, Snapchat’s smaller and younger user base compared with Facebook, Instagram, and YouTube has largely kept it from being widely used by political advertisers. However, its approach to political advertising moderation and transparency offers interesting alternatives to the approaches of the larger platforms.

Twitter: Twitter has banned political advertising, but by having to define what is prohibited “political content” and what is restricted “cause-based” content, Twitter’s rules are an interesting and informative point of comparison with other platforms.

Across these platforms, there are significant differences in policies and products when it comes to digital political advertising. To take one highly salient example of the lack of standardization of definitions, categories, and products across platforms, Google is the only company that limits its definition of “political” advertising to paid content that references candidates, government officials, parties, and ballot measures. Every other definition of “political” by platforms is comparatively much broader, including Facebook’s inclusion of “any social issue in any place where the ad is being run” and Reddit’s “public communications relating to a political issue.” While easier to enforce, Google’s definition makes its policies substantively different from those of other companies. For example, Google does not apply its political advertising policies to ads that touch on political issues without referencing candidates, government officials, parties or ballot measures; Facebook and Instagram, Reddit, Snapchat, and Twitter do. 

As a material consequence, as our Center argues, this means that in practice such things as the disclosure of paid political content will differ significantly across platforms. Despite the lack of clear federal regulation or enforcement compelling them to do so in the US, all platforms require “Paid for by” information on any political advertisement. However, even though all platforms require information on who paid for political ads, their differing definitions of “political” mean that users on one platform may see “paid for by” on messages that users on another platform would not. This extends to which ads are included in the ad transparency databases that all the major platforms have rolled out voluntarily since the 2016 US presidential election in lieu of clear federal rules or guidelines. These are voluntary efforts, and the companies maintain these databases at considerable effort and expense. As a result, the ads contained within them vary given fundamentally different underlying definitions of “political,” in addition to the categories of data that are actually included with them.

There are other variations in platform policies in the context of political advertising. For example, differences in the scale of platforms likely shape the different approaches platforms have taken with respect to their procedures for moderating political speech. Reddit and Snapchat require every political advertisement to go through human review and explicitly note that they do not necessarily follow their stated policies; they treat political ads on a case-by-case basis. Facebook and Google (and YouTube), on the other hand, rely on an opaque combination of algorithmic screening and human review of political ads for problematic content that violates their community standards and political advertising policies. Facebook, Instagram, and YouTube not only have over a billion users each; their respective ad platforms (Facebook Ad Manager and Google Ads) have millions of advertisers as well. Reddit, Snapchat, and Twitter’s user bases and ad platforms pale in comparison.

One challenge from a regulatory standpoint is that digital political advertising comes in hundreds of formats that are platform specific, for example Google search ads, Facebook sponsored posts, or Twitter promoted tweets. And, there are hundreds of actors with touch points in advertising networks that help place ads and buy audiences. At the same time, in the US the Federal Election Commission has generally failed to create a standard set of definitions of political advertising, rules for how they should be disclosed, and requirements for making political promotional communications available to the public. For example, in the US context with respect to digital media, disclosure statements are required for “public communications placed for a fee on another person’s website” by political committees or those who “expressly advocate for the election or defeat of a clearly identified federal candidate or solicit a contribution.” However, the FEC definition of “website” does not include apps and internet-connected devices like smart appliances. In addition, what is required of audio, graphic, and video content online is ambiguous since these formats do not fall neatly into the FEC’s existing categories.

The lack of standards – whether in formats, delivery mechanisms, or definitions of political ads – raises clear democratic concerns over the clarity of the rules that govern digital political advertising, their transparency and justifications (or lack thereof), and ultimately the mechanisms of accountability stakeholders have over the decision making of platforms. To that end, there are a few steps that platforms and regulators can take to create more standards for digital political advertising to further public disclosure and transparency and, ultimately, democracy, in the US and globally.

First, platforms, and especially Facebook and Google, can be clearer about the existing state of their policies and products in the context of digital political advertising. To construct our table on platform ad policies, for instance, CITAP researchers gathered everything they could find, primarily between September and December 2019, scattered across blogs, policy documents, help centers, interfaces and posts on the platforms themselves, industry media coverage, and media stories regarding how Facebook (and Instagram), Google, Reddit, Snapchat, and Twitter have differentially embraced their roles as governors of paid political speech. This search was necessary because of, and made considerably more difficult by, the fact that there was at times no single place to find the policies governing paid political communications on these platforms, policies changed constantly, and changes were often referred to in numerous parts of platform policy documents. Meanwhile, policies, and changes to policies, were sometimes announced directly to the media, while at other times they were simply placed on a company blog. Sometimes policy changes were not announced at all, or were released on a CEO’s personal Twitter feed.

Second, there can be greater standardization in the definition of political ads. To date, each platform has essentially acted alone, making up its own definition as it goes along. Even further, they have seemingly been reinventing the wheel, setting their own course without regard to definitional work performed by election law in the context of electioneering. In the US, the FEC, for instance, defines an “electioneering communication” as “any broadcast, cable or satellite communication that refers to a clearly identified federal candidate, is publicly distributed within 30 days of a primary or 60 days of a general election and is targeted to the relevant electorate.” This definition is remarkably similar to Google’s definition of “political” detailed above, and could serve as a standard for platform self-regulation here. Or, at the very least, if platforms choose to go beyond this definition, they should still include and label advertisements that fit within this definition as such, and then work to develop and defend an industry standard that potentially can govern all platforms in this space. Simply ignoring the existing legal frameworks to which other media are held accountable needlessly complicates researchers’ and the public’s ability to scrutinize political advertising on the same terms as traditional media.

Finally, and relatedly, following from the standardization of how platforms define “political,” there should be greater standardization around the disclosure of political ads and the transparency of paid political promotions. While platforms have done a laudable job developing their political ad transparency databases on a voluntarist basis, the utility of these databases has been hampered by the lack of standardization across platforms. Basic categories of information differ across platforms, and which ads are included at all depends on what each platform categorizes as political. More broadly, how data is reported is highly variable.

If the goal of platform political ad transparency databases is to counter the personalized information environments fostered by digital media and micro-targeted advertising, platforms should provide the same level of transparency into political advertisements that journalists and the public had into older forms of media, specifically television and radio, in the US at the very least. Audience targeting strategies were more implicit in traditional media campaigns; the audiences targeted could be deduced from the television station and show ads were aired on, the time of day they aired, and the designated market area (DMA) they were seen in. To reach this same level of transparency as traditional media, the targeted audience for digital political ads must be made explicit, given that geographic location, time, page, or website rarely reveal how an ad was targeted or which actual audiences it was displayed to. This would give the public greater visibility into the communications of campaigns; rivals for the same office and contending parties would be better able to contest one another’s claims; and journalists could hold political advertisers accountable for false or inflammatory appeals as effectively as on television. Even further, platforms should strive to facilitate counter-speech around the micro-targeting of political advertisements to allay normative concerns. Broadly, in line with the transparency around traditional media buys, the aim should be to achieve similar clarity in spending, the dates and duration advertising ran, and the audiences that were purchased.

  3. What role do platform companies play in facilitating democratic debate; and how much consistency is there in how platforms operate around the globe?

Platform companies, especially Facebook (and Instagram), Google (and YouTube), Twitter, Snapchat, and Reddit, are central to contemporary democratic debate. They are, in essence, private companies that serve as public infrastructure. As such, they combine the scale, ubiquity, centrality, and fundamental publicness of public infrastructure with commercial imperatives, which in turn has given rise to rapid change for economic reasons, along with technological programmability and algorithmic forms of structuring and shaping public attention.

As private infrastructure for public processes, platforms are central to facilitating democratic debate in the countries around the world that they operate in. Platforms do so not only through paid content guidelines and strategies, but also by hosting much of what these companies consider ‘organic’ speech—those things that politicians, non-governmental organizations, campaigns, or advocacy organizations post on these platforms, as well as routine discourse by citizens in the context of political and social life.

In doing so, all of the major platforms have developed guidelines, rules, and policies for the types of content, paid and organic, that are permissible on their sites. Facebook’s community standards, for instance, actively prohibit hate speech on the platform, a prohibition that applies across the countries that Facebook is active within. At the same time, the company states that it complies with the laws of the countries it operates within in certain contexts.

While there is consistency at the level of policy (Facebook’s community standards, for instance, are global), there are no universal standards of enforcement. Violations of things such as community standards have long been problematic at scale on platforms that generate millions of pieces of content per day, which means that platforms are often reliant on users or third-party, public interest stakeholders to bring potentially problematic content to their attention. Even more, platform companies often lack language competencies in many of the countries in which they operate, which in turn limits their ability to enforce their standards. At the same time, the policies themselves are written in an interpretively flexible way, which means companies are making judgement calls regarding what violates standards on these platforms.

Taken together, while there may be global standards for content on platforms, their flexibility combined with uneven enforcement in essence means significant variation in how these companies are engaged in moderating public debate.

  4. How have companies responded to external pressure to improve their content moderation processes, and how satisfactory are these responses?

In some of the Center’s research, we have found that platforms are deeply reactive to external pressures, as opposed to being responsive in an accountable way. We draw a meaningful distinction between being accountable and being reactive. According to the Oxford English Dictionary, being accountable means “liable to be called to account or to answer for responsibilities and conduct; required or expected to justify one's actions, decisions, etc.; answerable, responsible.” In contrast, to be ‘reactive’ refers to: “In general use, that responds or reacts to a situation, event, etc.; esp. (of a person or organization) that reacts to existing circumstances, rather than anticipating or initiating new ones.”

The distinction is important. Being accountable means providing a clear framework for justifying decisions. In this case, it would mean justifying platform change on the basis of articulated normative understandings of democracy and empirical evidence on the workings of platforms. The opposite, however, is more often the case. Platforms have been highly reactive, especially to negative press and public pressure. They appear to attempt to ameliorate bad news coverage and gain positive coverage through policy and product changes.

This has meant an ongoing and confusing set of changes in the content moderation approaches of major platforms. It is difficult to find clear explanations of changes in policies or rationales for content takedowns, or to confirm whether changes took place, and there is often little accountability regarding platform policies and their enforcement choices. Even more, the unsettled nature of public debate, unclear definitions of harm, lack of clear empirical evidence, and the reactive nature of change produce potentially problematic results.

What our CITAP researchers have found is a consistent pattern whereby content moderation is often compelled by external pressure, including, in many cases, pressure aimed at getting these firms to honour and enforce their own policies. For example, in the context of paid political speech, recent cases in the United States have demonstrated how Facebook will change the rules instead of holding a major political figure to account for violations of policy. Meanwhile, content takedowns are often compelled by outside investigation, not the initiative of platform companies.

These responses are deeply unsatisfactory. They raise fundamental issues that relate directly to the health of electoral processes. For example, if platform companies do not set clear policies and enforce them with clear forms of accountability for bad actors, this raises questions of electoral fairness. The lack of effective enforcement creates clear incentives for political actors to break platform rules, given that they know these companies will not act, and, even worse, incentivizes practices that will lead to a significant reduction in already limited public transparency and disclosure.

At the same time, I believe it is incumbent on these platform companies to develop more public, transparent, clearer, and better justified policies and processes around their editorial roles with respect to paid political speech. There is no universal content-based framework for making speech decisions outside of egregious content, the takedown of which is not seriously disputed; when decisions are made in the course of a campaign, rationales are not clear and can conflict depending on who is citing them. There are also no clear institutional means to challenge decisions. This is a significant failure of the firms that maintain private platforms for public speech.

  5. How does the culture of technology companies (for example around gender) affect the way in which they carry out their activities?

As a research community, we lack detailed empirical evidence on the concrete ways that the culture of technology companies affects how they operate and carry out their activities, although we can say a few things. First, technology companies took root in the very specific cultural milieu of the Bay Area of the United States, well documented by scholars, which generally resulted in white and male leaders and CEOs. These firms also grew in a cultural milieu that generally saw technology as a project of new world making, and, in turn, many founders believed their companies operated outside the normal strictures of social norms and governmental regulation. Historians of Facebook and Google have shown how these companies pursued growth and development aggressively, in the process failing to consider many of the social harms that look readily apparent in 2020. For example, these companies largely stood up commercial advertising models in the political space, which in turn has meant deep and significant issues relating to verification, state and foreign manipulation, and a lack of public transparency and disclosure around electioneering.

At the same time, there have been growing internal movements in these companies around creating greater accountability over their operations, such as protests at Google in light of #MeToo and internal pushes at companies such as Facebook to have the company take greater actions to safeguard democracy.

More broadly, the lack of racial/ethnic and gender diversity at many platform companies likely matters for all sorts of reasons. As decades of work on organizations, media production, and innovation reveals, the composition of teams within organizations matters for the work they produce. Having people of different backgrounds but with enough in common that they can understand each other often produces the most innovative technologies and practices. But even more, who these innovations are for reflects the people producing them. As decades of research on technological design reveals, the groups of people who build technologies shape what they do, what they are used for, how they are designed, and ultimately what they ask of users and how users can actually use them. Teams of programmers, engineers, and developers conceptualize technologies in particular ways to solve problems that they perceive. Ultimately, people build tools in particular ways according to how they see the world, their goals, and how they understand users.

Other bodies of research have shown how diversity—in terms of things such as gender, race/ethnicity, and life experiences—leads to a host of positive outcomes, from enhanced problem solving to productivity. Racial/ethnic and gender diversity also matters for equity and fairness in the workplace and political culture more broadly. For example, research has found that companies with higher percentages of women were more likely to implement radical innovation. Other researchers have shown how teams that have gender balance generally outperform gender-homogeneous teams (especially male-dominated teams) across a number of measures. In the literature on technological design researchers have found that gender-balanced teams generally solve problems faster and bring a broader set of solutions to the table, better utilize the knowledge of all their members, and are more responsive to a wider array of users than their all-male counterparts. Other veins of literature analyze organizational processes, finding that teams with women on them make more empathetic decisions and racially diverse teams are more likely to focus on facts.

More broadly, having more gender- and racially equitable platform environments is important for platforms’ capacity to anticipate problems. This might include, for instance, platforms being more aware of the ways that their policies, and bad actors on them, often disproportionately affect women and people of color.

  6. What evidence is there that internet usage is increasing political polarisation? Are there good reasons to be concerned about the growth of online ‘echo chambers’?

In general, I think that internet usage has a tendency to amplify political polarisation. That said, we have good research demonstrating that the internet has historically played only a small role in political polarisation compared with the rise of things such as partisan cable news and political talk radio in the United States, or, more broadly, global political dynamics based on identitarian movements. At the same time, it is likely also true that social media platforms, in selecting for engagement, often reward and give the most reach to speech that is the most extreme, emotional, or otherwise polarizing.

The literature on echo chambers, on balance, suggests that they are more publicly hyped than they are a phenomenon of deep concern. In general, people who use the internet and social media to consume political information draw on more, not fewer, sources of political content, including legacy media organizations. While the algorithms of platforms might select for the most identity-congruent information, people are still exposed, on balance, to more than they would otherwise be in a world absent digital and social media.

  7. How can digital literacy amongst citizens best be promoted? And who should take primary responsibility for promoting these skills?

There are a few key tenets for promoting digital literacy. I think the primary locus for this should be schools, and that information literacy should be a part of primary education from the early grades through high school. Digital literacy should also be made a fundamental component of science, social studies, and health and wellness education, with exercises promoting digital literacy taught within units on the sciences and social sciences.

Projects that involve students researching subjects for themselves and vetting the information they come across are one avenue for promoting digital literacy. Another is to have students critically examine their social media feeds, with an eye towards separating what is factual (and considering critically on what grounds) from what is opinion. Broadly, students should have much more exposure to, and be led in thinking critically about, topics in the daily news. This would enable students to explore what they encounter on a daily basis, discuss the contour and shape of political debate, and critically examine claims and counterclaims in political discourse. Too often there is no structured space in our primary and secondary educational institutions, or even our institutions of higher learning, to discuss current affairs.

  8. If the UK Government could do one thing to improve the regulation of political campaigns in a digital age, what should it be?

I would advocate for a process-based approach to regulating platforms and political speech. It is clear that many of the tensions of contemporary campaigning arise from the unsettled role that platforms play in electoral processes. If the UK government had a target for regulation, I would start there.

I covered this extensively above, but to reiterate, here are a few important things that can be done, all of which can be grouped under one heading: standards. Taken together, I would impose greater standards on platform companies relating to definitions, public disclosure, transparency, and the standardization of the data that platform companies make available to the public and other stakeholders of democratic elections. The UK government can require platforms, and especially Facebook and Google and their related holdings, to be clearer about the existing state of their policies and products in the context of digital political advertising. There can be greater standardization in the definition of political ads. And there should be greater standardization around the disclosure of political ads and the transparency of paid political promotions, including targeting data.

And, the UK can consider democratically beneficial reforms in platform governance, for example requiring mechanisms for counter-speech and requiring the establishment of better processes around the ways that platforms justify content moderation and make their decisions transparent and contestable.
