Written evidence submitted by ITV (OSB0204)

ITV welcomes the Joint Committee that has been appointed to consider the Government’s draft Bill to establish a new regulatory framework to tackle harmful content online.

Below is ITV’s submission to the Joint Committee.  Please note that this is also ITV's response to the DCMS Select Committee’s inquiry into the Government’s approach to tackling harmful online content as outlined in its draft Online Safety Bill.  We would like this to be considered as ITV's formal response to the Joint Committee’s call for evidence.

 

DCMS Select Committee: Call for Evidence – Online Safety and Online Harms

ITV plc response

 

Summary

ITV welcomes the Committee’s inquiry into the Government’s approach to tackling harmful online content, as outlined in its draft Online Safety Bill. The decisions we take today about how major online platforms are governed and regulated will have profound implications for everyone in the UK, both as citizens and as consumers. It is vital, therefore, that the Online Safety Bill receives appropriate scrutiny.

The Government’s Online Safety Bill is a welcome development. For too long, the handful of major online firms that dominate our online lives have borne no legal liability for the harms that occur, at scale, on their platforms. This needs to change – and to change quickly, given how long it has taken for legislation to be brought forward.

However, the Bill is a missed opportunity to focus on and address the root causes of online harms. It focuses predominantly on trying to address the symptoms that arise from the way in which online platforms are designed and operated – on the systems and processes for the removal of harmful content after it has already been served to UK citizens and consumers and caused harm – rather than preventing harms from occurring at scale in the first place.

It is important to hold internet platforms and social media services accountable for those areas over which they have (or could have) control. We can see the difficulties in holding platforms editorially liable for individual pieces of content posted by third parties where the platform is simply an electronic noticeboard, merely passively hosting the content. Risks to freedom of speech from such an approach have been well documented. But this should not be allowed to become a smokescreen for proper statutory oversight of those areas where platforms and services do have control and are actively promoting, recommending and pushing content. Social media services and video sharing platforms (VSPs) are not passive actors, they are active publishers – and it is their actions that are helping drive the scale of harmful content.

These companies are playing an active editorial role in relation to the content they carry. The personalised content and recommendations that individuals are exposed to (including the personalised advertising at the core of the underlying business models) are within the control of the major online players. Whilst content and advertising recommendations may not always be determined by a human editor (as is the case for TV), online platforms and services nonetheless control the algorithms that determine which content we see, how often we see it, and which advertising is placed within and alongside it. 

An active role in content curation means online platforms are publishers. Where platforms recommend, promote or actively serve content they cease to be passive ‘noticeboards’ and become publishers. When taking third-party content and deciding who should see it, where they see it, how often, and alongside what advertising, they do have control of that content even if they did not create it. They are therefore no different to other media players. Indeed, they promote their mass media platforms to advertisers in competition with other mass media platforms such as television. Accordingly, they must be regulated appropriately and held responsible for the content they choose to curate, with sanctions for non-compliance, just as other media players are.

In light of this control, the Committee is right to ask whether the Bill focuses enough on the ways tech companies could be encouraged to consider safety and/or the risk of harm in platform design and the systems and processes that they put in place. In our view the Bill has not done enough to address the ways in which platform design causes harms to occur and drives the scale of the problem.

Specifically, these businesses are incentivised to prioritise volume and frequency of viewing and engagement over the prevention of harms. Because their business models are fundamentally driven by viewing and clicks, they seek to increase the amount of time people spend with them and so the amount of advertising they can serve users. The algorithms at the core of these services are widely reported to be designed to put content in front of you that is as ‘engaging’ as possible, so rewarding the sensational, the salacious and, often, the untrue. Partially fixing a problem – by taking down such content as and when you become aware of it – is not enough if it is only there at such a scale in the first place because your business actively recommends it and financially rewards those who create it.

The Committee asks whether there are key omissions to the draft Bill. In our view, the exclusion from the Bill of the advertising model that underpins these services is a crucial omission. 80% of Google’s parent company Alphabet’s $180 billion annual revenue, and 96% of Facebook’s $28 billion annual revenue, are derived from advertising. Online advertising business models are integral drivers of the harms that are clearly occurring. Excluding oversight of online advertising from the regulatory model fundamentally undermines the effectiveness of the new regime, despite the obvious good intent. Until such services cease to profit from the harms they create, and the algorithms cease to drive traffic to harmful content, the harms will continue to happen at scale.

The Committee asks how this could be practically addressed by the Bill without compromising rights such as freedom of expression. ITV suggests that the Bill might be improved in two ways:

1. Holding major online platforms to account for the impact of the ways they choose to design their platforms: addressing the ways in which platform design incentivises the creation of harmful content, rewards the creation of ever-more-extreme content, funds online misinformation, recommends harmful content to users, and creates echo-chambers that prevent users from seeing such content for what it is.

2. Making platforms liable for the harmful advertising they publish and profit from: addressing the role of online advertising in funding misinformation and harmful content, and addressing online advertising that is harmful in and of itself (including financial scam advertising, antivax advertising, and anti-climate-change advertising).

The success of the Bill depends heavily on whether such reforms are implemented. Absent requirements for online platforms to address these issues, the Online Safety Bill will forever play catch-up, trying to find and take down harmful content after the harms have occurred, at ever greater scale as platforms continue to grow.

Platforms can afford to bear the cost of sensible and proportionate regulation. Although some major online platforms appear to be threatening to pass the cost of regulation on to small and medium-sized enterprises (SMEs), and so on to UK consumers, the CMA’s finding that Google and Facebook are in a dominant position demonstrates that they are failing to internalise the negative externalities they are currently creating as a consequence of the way in which they do business. As the government noted in its response to the CMA’s report on the digital advertising market:

“[the CMA’s analysis] shows that in the UK both Google and Facebook are consistently earning profits well above what is required to reward investors with a fair return. Google earned £1.7bn more profit in 2018 in the UK than the benchmark level of profits. For Facebook, the comparable figure for 2018 was £650 million.”

It is unacceptable to suggest that we can have either a thriving advertising ecosystem accessible to SMEs or one which protects consumers, but not both.

We set out our thinking in more detail below in response to the Committee’s questions.

Responses to Committee questions

Does the draft Bill focus enough on the ways tech companies could be encouraged to consider safety and/or the risk of harm in platform design and the systems and processes that they put in place?

Major online platforms and services exercise extensive control

Underlying many of the objections to the government’s Bill is the suggestion that the major online operators somehow have no impact on, or control over, the content people see on their platforms and services. It is important to recognise that these firms are not the passive entities they portray themselves as – merely connecting people with each other – but are actively curating much of the content they host.

We accept that it continues to be important that there are open access platforms, offering spaces into which people can freely upload content to the internet and interact with friends, relatives and their local communities. But that passive role is no longer the model for the major online platforms.

Rather, they determine what we see, how often we see it, and they offer us near endless opportunities to consume content of their choosing based on our likes and dislikes, on our behaviours and purchasing decisions, on who we’re friends with and who we engage with, on what ‘people like us’ enjoy watching. The problem, as one former YouTube engineer put it, is that 

“…the AI isn’t built to help you get what you want — it’s built to get you addicted to YouTube.”[1]

As we set out above, this active editorial role means that they are publishers. Given the control they exercise, they should be held responsible for the content they actively serve just as other forms of media are. This is critical if the Bill is to be effective in tackling the harms it rightly seeks to address. Sweeping arguments against regulation based on the idea that it would somehow end freedom of speech[2] need to be resisted.

 

Active curation by online platforms and services increases the potential for harm

Being exposed to the views of a single individual – just one amongst many different opinions – might well be unlikely to result in harm. We can see the risk to freedom of speech inherent in a regime intended to police each and every piece of user-generated content online in isolation.

But opposition to regulation based only on this perceived threat misses the role that online platforms and services actually play: active content curation and amplification of message, much akin to other media. Whilst expressing one view to a handful of people nearby might not pose much threat, presenting someone with 100 such views in quick succession when they are already susceptible to the message creates a far greater potential for harm. Giving individuals a platform to reach and influence millions without responsibility or consequence goes far beyond any ordinary concept of freedom of speech – an issue Ofcom’s Broadcasting Code is expressly designed to address. And this curation and amplification is what these online platforms are designed to do.

Show an interest in a topic, even one that is potentially harmful, and their core business model and algorithms will find more of it for you. Recommendations might be badged as such, or content might simply appear in your feed, judged relevant by black box algorithms. Whilst the firms involved may share society’s concerns about such outcomes, the reality is that it is their own business models that are causing the scale of the problem.

 

It is the underlying business models that are driving the problem

Social media and video sharing platforms are incentivised to increase the amount of time you spend on them and so the amount of advertising they can serve you. Algorithms appear designed to put content in front of you that is as ‘engaging’ as possible, so rewarding the sensational, the salacious and, often, the untrue. As Mark Zuckerberg acknowledged:

“…people will engage disproportionately with more sensationalist and provocative content.”[3]

Partially fixing a problem – by taking down such content as and when you become aware of it – is not enough if it is only there at such a scale in the first place because your business actively recommends it and financially rewards those who create it. And even if, as the likes of Facebook and YouTube suggest, they can deploy AI to screen content at scale and begin to address some of the issues, it seems unlikely they can fully address the problems they are creating. Indeed, Google’s CEO has already admitted as much[4].

Addressing the social harms that government has rightly identified will therefore require a system that addresses the structural corporate root causes as well as the individual manifestations. A major step towards this would be the inclusion of statutory regulation for online advertising, holding online platforms responsible. We discuss this in more depth below.

 

What are the key omissions to the draft Bill, such as a general safety duty or powers to deal with urgent security threats, and (how) could they be practically included without compromising rights such as freedom of expression?

Online advertising is already causing citizen and consumer harm

The focus of the Bill on online content only – rather than also covering online advertising – misses the fact that online advertising itself is already causing citizen and consumer harm. In some cases, these harms are identical to those the government has already identified and required platforms to address – for instance, advertising being used to spread disinformation, to trick people into fraudulent schemes, or to target children inappropriately.

This stems from the fact that online there is no statutory regulation of advertising and few penalties for non-compliance with the self-regulatory rules in place. This is in stark contrast to advertising content on television, which is heavily regulated – by Ofcom and the ASA – with a statutory underpinning and serious consequences for non-compliance. Given television’s scale and influence in society, proportionate and effective regulation is clearly necessary, but the same is now true of major online platforms. The result of this lack of effective regulation can be seen in the scale of harms arising from the advertising carried online.

Case Study: Martin Lewis Scam Adverts

A powerful example of the sort of harms that online advertising can cause is the use of Martin Lewis’s name, and that of his company, MoneySavingExpert, in promoting fake financial products including binary trading, energy suppliers and PPI. With his own show on ITV, Martin Lewis is one of the most trusted people in the UK. The power of his ‘endorsement’ of these fake products has led to people losing thousands of pounds[5].

MoneySavingExpert says that it has seen these fake ads on Facebook, Twitter, Instagram, MSN News, Sky Sports News Online, Yahoo and Google Ads. In one instance, a fake Martin Lewis advert was even placed alongside a story warning about the risks posed by fake Martin Lewis adverts[6].

The dangers of such fraud via paid-for advertising have been routinely highlighted in recent months and years by the Financial Conduct Authority. More recently, in July, the Treasury Committee and the Work and Pensions Committee wrote jointly to the Prime Minister calling for the Bill to include paid-for advertising:

The current proposals would legislate against certain types of user-generated fraud online, but not against the same fraud committed through a paid-for advertisement. Unusually, public bodies and office holders— including the Financial Conduct Authority (FCA), Governor of the Bank of England, Financial Services Compensation Scheme and the City of London Police—have been outspoken in evidence to both of our Committees about the need for the Bill to include paid-for advertising if it is to tackle online fraud. Based on this, both of our Committees have recommended an obligation in the Online Safety Bill for online platforms to prevent online frauds through paid-for advertising.[7]

It is not necessary to look far to see concerns raised about a range of other harmful advertising online – whether adverts promoting an antivax agenda[8], gay ‘cures’[9], or fake ‘cures’ for cancer[10]. Sadly, there have already been examples of advertisers seeking to exploit the Covid-19 crisis online.

There also appears to be an issue with false commercial advertising online, where counterfeit goods providers seek to mirror mainstream brands. A study by Europol in 2017 observed:

Counterfeiters are…well aware of the importance of online marketing. They tend to use social media platforms to advertise their products and steer potential consumers to online sales platforms.[11]

Digiday suggests that the platforms not only profit from such advertising but often generate a degree of trust in the products among consumers simply by carrying the ads:

Instagram is essentially validating [the ads] by featuring [them] in consumers' feeds. Just like Facebook has to get a better handle on fake news, Instagram needs to get a better handle on advertisement of fake products.[12]

The negative impact of this sort of advertising is immediate and often causes substantial consumer harm. Such advertising, rightly, has no place on television (and would not get through the compliance processes to make it on air in the first place). The lack of effective regulation online – and the focus on take-down after publication, rather than prevention – means consumers are not offered the same protection on global online platforms, which profit from carrying such adverts. Indeed, in some instances the platforms then also benefit from regulators having to buy adverts on those same platforms to combat the fake ones. As Mark Steward of the FCA observed in evidence to the Treasury Committee this year:

“The irony of [the FCA] having to pay social media to publish warnings about advertising that they are receiving money from is not lost on us.”[13]

It is important to note that harmful advertising is not limited to that placed by ‘bad actors’, as some suggest. For instance, despite the clear challenge posed by climate change, new research from UK think-tank InfluenceMap has shown the extent to which major players in the oil and gas industry are using Facebook “…to spread fossil-fuel propaganda”[14], and research by The Bureau of Investigative Journalism reports that the tobacco industry is using influencers to grow the e-cigarette market beyond those looking to give up smoking[15].

These harms are potentially exacerbated by the personal, targeted nature of online advertising – the adverts served to an individual visiting a website or using an app will be different to those served to a different user on identical services, sometimes with harmful consequences.

The one-to-one nature of online advertising also means that we lack a clear picture of what is actually happening online. Martin Lewis became aware of the way his brand was being used only through users questioning the adverts. The ASA is reduced to attempting to uncover bad practice online through the use of ‘child avatar’ research rather than full platform data. This lack of transparency appears to enable harmful advertising to run undetected. Certainly, the current regime does not seem to discourage the re-emergence of harmful advertising even when action is taken elsewhere[16].

Given the potential for online advertising to cause consumer harm – and the negative competition implications of the dominant position of the major online platforms – it makes no sense for online advertising not to be properly and independently regulated via the Online Safety Bill.

 

Our proposals for a new regime

In light of this, we and other broadcasters have proposed a regulatory framework focused on two key areas:

  1. Responsibility, compliance and sanctions; and
  2. Transparency and visibility 

The proposals are not particularly radical, innovative or difficult to implement. Instead they build on the strong foundations of the existing co-regulatory regime for broadcasting (and VSPs, under Government plans). As is the case for the current framework for broadcast advertising, under BCAP and the ASA, a co-regulatory arrangement could deliver the balance between consumer protection, flexibility and industry expertise. 

 

Ensuring responsibility, compliance and sanctions online

If the ambition is to have an effective regime for regulating online advertising, creating a level playing field with TV, there are five things in the area of responsibility, compliance and sanctions that must change: 

  1. Online platforms must be legally obliged to ensure compliance with rules online 
  2. Advertising should be checked for compliance before it is published via online platforms (or a similarly effective mechanism be put in place to prevent non-compliant advertising running in the first place)
  3. There must be clear legal/regulatory incentives for compliance 
  4. Non-compliance must be risky and unattractive
  5. There needs to be an effective and well-resourced regulator. 

We note that the Internet Association has previously sought to exclude the vast majority of advertising content from regulation, arguing that platforms are simply not aware of the content of the advertising they carry and that new rules:

“…should only apply to advertising over which the VSP has meaningful control.”

To argue that an advantageous and freely chosen business model precludes regulation is not a serious answer to significant societal issues that are properly regulated in parallel contexts such as TV. The argument is akin to ITV devolving control of the advertising broadcast on our channels and then claiming ignorance when that advertising broke the rules. The fact that VSPs and online platforms more broadly choose to operate in a certain way – offering inventory at unprecedented scale and carrying advertising unreviewed and unseen – should not be confused with an inability to operate differently, in a way that protects consumers from harm. The position in advertising is very clearly distinct from the situation regarding user-generated content (UGC), where content is uploaded by millions of people each day; when it comes to advertising, the content is published by the platform. Ofcom, in its recent consultation on the regulation of video sharing platforms, seemed to recognise this.

We note also that opposition to regulation often cites dangers to freedom of expression from any form of intervention. But as we have said above, advertising is completely different to UGC and other forms of content: it flows through, and is monetised by, the platform, and it is not (or need not be) outside the platform’s control. There is no reason, therefore, why all advertising on a platform ought not to be pre-vetted, or subject to a similarly effective process, to ensure the highest possible standards of compliance before publication, as Clearcast does for broadcasters.

This is something the online platform should do as the publisher of the advertising, with (in future) the legal obligation for compliance. Although pre-scrutiny is not technically required by the regulatory regime for TV, it is hard to see how broadcasters could otherwise sufficiently manage the financial and reputational risk of non-compliance. Online platforms should be held to the same standards.

 

Ensuring transparency and visibility online

Given the likelihood that issues will remain endemic online, it is clear that the scale of the problem needs to be more accurately and independently verified. There are therefore three things in the area of transparency and visibility that need to change:

  1. There must be an effective common approach to determining who is seeing adverts online 
  2. There must be clear visibility about what advertising is provided online 
  3. There must be clear separation of advertising content from other content 

It appears that some platforms agree. Google, in evidence to the House of Lords, said:

In terms of regulators, the Government have published their plans for the Online Harms White Paper and have said they are minded to appoint Ofcom. As those plans develop, Ofcom is exactly the right sort of organisation to review what information is in the public domain and what is not, and to reach a credible and detailed understanding of that.[17]

The need for transparency and independent regulation would hold true even if the platforms were to take greater responsibility and manage to address some of the issues. Given the sheer scale and influence of the internet in all aspects of our lives, it is critical for there to be transparency about the effectiveness of the steps they take and the regulation that is in place.

 

Paying for compliance

We have heard some of the major online platforms suggest that the introduction of robust advertising compliance, more comparable to broadcast, would increase costs – and that these costs would be passed on directly to businesses and advertisers using the platforms. They suggest this would be damaging for SMEs, who they highlight as having benefited particularly from the relatively low cost of online advertising.

We too would be concerned were the cost of compliance with an improved regulatory regime simply passed on to SMEs. We make no judgment on the likelihood of this outcome but such a suggestion by the major online platforms – in relation to businesses whose health they cite as evidence of the benefit they offer to the UK – is certainly instructive. It shows that the major online platforms feel absolutely comfortable in their ability to pass on any additional platform compliance costs without any fear of a negative impact on their competitive position.

This confidence seems to stem from being in a dominant position (a position the CMA recently found Google and Facebook to hold), which leaves advertisers with no credible alternative platforms to switch to. It also shows that those major online platforms have no intention of reducing their current levels of super-profitability in order to operate in a way that internalises the negative externalities they are currently creating as a consequence of the way in which they do business.

As the government noted in its response to the CMA’s report on the digital advertising market:

“[the CMA’s analysis] shows that in the UK both Google and Facebook are consistently earning profits well above what is required to reward investors with a fair return. Google earned £1.7bn more profit in 2018 in the UK than the benchmark level of profits. For Facebook, the comparable figure for 2018 was £650 million.”

Given that the “harm to SMEs” argument is being used by the major online platforms as a shield against additional regulation they do not like, it seems even more critical that regulation not only addresses the harms created by the major online platforms’ approach to advertising but also ensures that the cost of compliance is borne by the platforms themselves, rather than being passed on to the businesses that rely on them.

It is unacceptable to suggest that we can have either a thriving advertising ecosystem accessible to SMEs or one which protects consumers, but not both.

 

Conclusion

It is clear that statutory regulation of online advertising is necessary, and urgent, given the scale of harm currently being caused to consumers and to competition. Designing an effective system will not be without complexity. But, as the Government seemingly acknowledges in its other work, both Ofcom and the ASA already have the skills and expertise to fully regulate online advertising, as they have successfully done for television for many years. The Online Safety Bill should be used to set clear expectations in relation to consumer protection and the promotion of competition, empower Ofcom and the ASA to develop and implement an effective system, and meaningfully incentivise major online platforms to comply.

 

8 October 2021


[1] https://thenextweb.com/google/2019/06/14/youtube-recommendations-toxic-algorithm-google-ai/

[2] https://www.cnet.com/news/facebook-says-its-trying-to-strike-a-balance-as-it-battles-misinformation/

[3] https://www.facebook.com/notes/mark-zuckerberg/a-blueprint-for-content-governance-and-enforcement/10156443129621634/

[4] https://www.cnbc.com/2019/06/17/google-ceo-sundar-pichai-youtube-is-too-big-to-fix.html 

[5] https://www.moneysavingexpert.com/shopping/fake-martin-lewis-ads/

[6] https://www.moneysavingexpert.com/news/2017/08/yahoo-removes-martin-ad/

[7] https://committees.parliament.uk/publications/6956/documents/72760/default/

[8] https://www.theguardian.com/technology/2019/nov/13/majority-antivaxx-vaccine-ads-facebook-funded-by-two-organizations-study

[9] https://www.thedrum.com/news/2018/08/27/facebook-removes-ads-promoting-gay-cure-young-lgbt-users

[10] https://moffitt.org/take-charge/take-charge-story-archive/beware-of-bogus-cancer-cures-online/

[11] https://www.europol.europa.eu/sites/default/files/documents/counterfeiting_and_piracy_in_the_european_union.pdf

[12] https://digiday.com/marketing/get-fake-yeezys-counterfeit-ads-instagram/ 

[13] https://committees.parliament.uk/oralevidence/2349/pdf/

[14] https://www.theguardian.com/environment/2021/aug/05/facebook-fossil-fuel-industry-environment-climate-change

[15] https://www.theguardian.com/business/2021/feb/20/tobacco-giant-bets-1bn-on-social-media-influencers-to-boost-lung-friendlier-sales

[16] https://cointelegraph.com/news/bitcoin-scam-ads-featuring-martin-lewis-now-spotted-on-instagram

[17] Katie O’Donovan, Head of UK Government Affairs and Public Policy, Google. Evidence to the House of Lords Select Committee on Democracy and Digital Technologies, 9 March 2020