Written supplementary evidence submitted by The Center for Countering Digital Hate (OSB0227)
Online Safety Bill
Programmatic Advertising Transparency
What is the problem?
- Each year, respectable companies and their customers unwittingly funnel millions of pounds directly to the Internet’s most malicious and subversive actors and messages. Misinformation and hate sites are almost entirely funded by online advertising—often paid for by unsuspecting mainstream organisations who don’t know what content their brand is appearing next to, and thereby funding.
- The presence of mainstream, respectable organisations next to extremist content—including climate denial, anti-vaxx propaganda, political misinformation, incel ideology and racial hatred—also serves to normalise extremism.
- More information about this problem is available in our video, Stop Funding Misinformation, and at www.counterhate.com.
Why is this important?
- It impacts on companies and NGOs: In recent years, the Center for Countering Digital Hate has found that adverts for brands such as Chevrolet, Capital One, DHL International, Boots, Canon, Sensodyne, Paperchase, Bloomberg, the International Rescue Committee and others have been automatically placed on sites dedicated to promoting hate and/or misinformation.
- It legitimises harmful disinformation and has led to dangerous offline behaviour. For example, we know that websites promoting the following disinformation have profited from these adverts:
● Anti-vaccine and Covid-sceptic misinformation;
● Climate denial and conspiracy theories about climate activism;
● Conspiracy theories promoting the “stolen election” myth in the wake of the 2020 US presidential election, which directly led to the domestic terror attack on the Capitol on 6 January 2021; and
● Misinformation sites promoting antisemitism, Islamophobia, misogyny, and pro-Assad conspiracy theories.
How is this happening?
- The adverts are placed by third-party brokers, such as Google’s AdSense business, which then allocate adverts to particular sites in order to fulfil predetermined target demographic (age / gender / location) and psychographic (attitudinal and behavioural) profiles.
- The use of algorithms to automatically match adverts with web pages in order to reach a target profile has led to these services being called “programmatic advertising”.
- In their zeal to maximise profit, however, brokers often agree to place ads next to harmful content. The business model does not need to consider the values of the organisations in the advertisements because there is no transparency for those organisations about where the advertisements will end up.
- Of course, tech companies also profit from this arrangement. Google states that publishers retain 68% of the revenue generated by AdSense adverts on their sites, while Google retains the remaining 32%—thus providing a strong disincentive for proper oversight. We know that Google’s “brand safety” systems for adverts are not fit for purpose and, at present, there is little to no transparency around online programmatic advertising.
What will the proposed amendment do?
- The amendment will:
● Require advertisers to publicly declare, on their websites, the domains on which their adverts appear. This creates a driver for corporate accountability, helping to ensure that customers’ money is not funnelled to content that fundamentally harms society.
● Brokers often already provide this information to advertisers, in some cases updated in real time. This amendment would simply require advertisers to disclose the URLs of the pages on which their adverts appear—but not other information, such as performance data or targeting criteria.
● It would not create a duty for advertising organisations to conduct costly studies—but by making these URLs publicly available, it would make it easier for researchers, journalists, authorities and the public to instantly access the relevant information. This creates an accountability ecosystem of enabling legislation, transparent corporate behaviour, and civil society and other companies doing the checking. Organisations such as GDI and NewsGuard can provide the “checklist” for advertisers; CCDH’s Stop Funding Misinformation maintains a much shorter and more focused “Blacklist”.
● It would not conflict with GDPR, because it does not involve any personal information.
- Scope and costs: The amendment would only apply to large organisations, thus avoiding the need for smaller entities to bear undue administrative burden. The nominal administrative costs to large-scale programmatic advertisers, such as Google, would be easily absorbed.
What impact will this have?
- The proposed amendment is a simple, common-sense measure which would address a serious social harm almost overnight.
- Since CCDH launched the Stop Funding Misinformation campaign, several sites dedicated to spreading identity-based hate using misinformation have closed, after being starved of revenue derived from Google adverts. Unsurprisingly, large organisations, which invest great sums into the management of their reputations, are almost always extremely quick to pull advertising from malicious content. The advertising industry itself has been receptive to CCDH’s efforts to highlight the problem and potential solutions.
- The requirements on companies created by the amendment would nudge them towards greater corporate responsibility, knowing that others will find it easier to see if they are funding dangerous hate and misinformation sites via programmatic advertising services.
- Transparency requirements will provide a sustained and effective corrective market measure with a direct impact on individual and social outcomes, from climate change to online hate. By creating a duty of care on platforms such as Google’s AdSense towards their clients (large brands and corporations) and their customers, the amendment would immediately begin to arrest the unwitting financial support given to online harm actors all over the world by cutting off revenue streams for commercialised hate and misinformation.
- In its current form, the Online Safety Bill contains no provision for greater oversight of advertising. This omission would prevent OFCOM from examining the role that adverts play in funding websites promoting harmful hate and misinformation.
- An amendment to the Online Safety Bill which would require large companies to offer greater transparency over where their adverts have been placed would create a strong and instantaneous reputational incentive for firms to cut off revenue streams for commercialised hate and misinformation.
Won’t the market correct this?
- While the recent #StopHateForProfit boycott of Facebook advertising by some companies and organisations helped to draw attention to the general problem, economic inequality and the imperative to advertise online (particularly for many of the small businesses involved) meant that the boycott was not sustainable. Many entities have returned to advertising online in order to sustain their business and livelihoods, but they still do not know to what extent, or exactly where, their advertisements are being published. There have been few systemic changes to the advertising business model or its transparency.
- A network of organisations, including NewsGuard and GDI, already helps advertisers by assessing the content of websites on which adverts appear. As such, there is a viable ecosystem of commercial providers who can form part of a programmatic advertising transparency ecosystem, underpinned by enabling legislation from HMG that nudges advertisers towards accountability and responsible advertising practices.
About us: The Center for Countering Digital Hate
- The Center for Countering Digital Hate (CCDH) is an international, not-for-profit NGO that seeks to disrupt the architecture of online hate and misinformation.
- Imran Ahmed, the Chief Executive of CCDH, gave evidence in the first session of the Draft Online Safety Bill Joint Committee, in which he was asked specifically about digital advertising.
- CCDH is independent and non-partisan, and we do not receive money from technology companies.
- We provided written and oral evidence on the draft Online Safety Bill to the Joint Committee.
Submitted by Imran Ahmed, CEO, CCDH
24 November 2021