Written evidence submitted by Global Action Plan

 

 

 

Global Action Plan submission to the DCMS Select Committee Sub-committee on Online Harms and Disinformation

 

 

  1. Introduction

 

Global Action Plan is a charity that works for a green and thriving planet where people enjoy their lives within the Earth's resources. We help people live more sustainable lifestyles by making connections between what's good for people and what’s good for the planet.

 

Thank you for the opportunity to submit evidence to your committee. We have a longstanding interest in the government’s Online Harms agenda because we are concerned about the harmful impact the current digital environment has on children’s health and wellbeing, and the negative impact this has on their ability to live healthy, sustainable lifestyles in the future.

 

Our submission focuses on answering your third question (regarding the draft Bill’s focus on platforms’ design, systems, and processes) and your fourth question (regarding major omissions). It also touches on your final question about relevant lessons from regulatory initiatives from around the world.

 

We believe there is much to welcome in the draft Bill as it marks an important step away from failed “self-regulation”. It recognises that the health of our digital environment matters too much for decisions about its future to be left entirely to private companies headquartered overseas. The draft Bill would introduce a degree of democratic oversight and accountability, and shift some of the burden for staying safe online away from individual users and, in the case of children, their parents/carers.

 

However, as it stands we believe the Bill is greatly weakened by the omission of any measures which address the primary business model of all the main social media platforms - a form of advertising variously referred to as “targeted advertising”, “behavioural advertising”, “micro-targeting”, or (our favoured term) “surveillance advertising”.

 

We think this is a critical omission which your committee should be concerned about, for two reasons. Firstly, the surveillance advertising business model is the primary driver of platforms’ decisions about design, systems and processes, so this omission will weaken the regulator’s ability to drive improvements at these levels. Secondly, such advertising makes up a significant proportion of the content users are exposed to on these platforms and is not currently regulated effectively, so its exemption leaves a glaring regulatory gap.

 

We believe such a serious omission would reduce the effectiveness of the draft Bill across the board, in its stated aim of “making the UK the safest place to be online”. However, our expertise lies in the impact of surveillance advertising on children. Our evidence in this submission therefore focuses upon the impact of this omission on efforts to improve the safety of services likely to be accessed by children.

 

We explain how surveillance advertising incentivises platforms to make decisions about their design and operation which fail to prioritise, and often conflict with, children’s safety and wellbeing. We then explain how surveillance advertising can directly harm children, given the volume and nature of the content with which it targets them. Finally, we offer some suggestions for how the draft Online Safety Bill could be altered to bring consideration of surveillance advertising and its impacts within scope.

 

We recognise that this omission is no accident - s39(2)(f) of the draft Bill makes an explicit and deliberate exemption for “paid-for advertisements”. We also recognise that the government will justify this exemption by arguing that the OSB is not the intended vehicle for the regulation of advertising and that a regulatory regime for advertising already exists. It will also point out that it is proposing to conduct a further “Online Advertising Consultation” later this year.

 

However, we would encourage the committee to interrogate this position, for at least three important reasons:

 

  1. The surveillance advertising business model is central to so many of the platforms which the draft Bill seeks to regulate, and the principal driver of decisions about design, systems and processes which the draft Bill seeks to improve
  2. Surveillance adverts make up a significant proportion of the content which users see on social media platforms, have a proven capacity to do harm, and are not currently regulated effectively
  3. It is not clear what legislation, if any, will follow from the “Online Advertising Consultation”, which means it is not clear when these gaps can be filled if not in the Online Safety Bill

 

 

  2. How surveillance advertising incentivises platforms to make harmful decisions about design, systems and processes

 

Advertising is the pre-eminent business model for most of the largest companies which would fall within the scope of the draft Bill. Alphabet, the parent company of Google and YouTube, generated almost 84% of its 2020 revenue from advertising, around $135bn. For Facebook it was 98.5%, and almost $70bn. Not all of this advertising is surveillance advertising - for example, Google’s search service generates significant revenue from adverts targeted to specific search terms (a form of “contextual advertising”). However, surveillance advertising has become the primary mode of monetising adverts for most of the largest user-to-user platforms which would fall within the scope of this legislation, including Facebook, Instagram, YouTube, Twitter, and TikTok.

 

Surveillance advertising relies on large-scale data collection and behavioural profiling of internet users in order to present them with highly personalised adverts. Sophisticated and opaque content algorithms serve users with content which maintains their “engagement” (i.e. keeps them on the platform, so they view ads and generate further behavioural data) and target them with adverts on the basis of their behavioural profile. The model incentivises design, systems and processes which maximise “engagement” without consideration of whether or not that engagement is harmful, and which minimise “friction” - including “friction” which could help keep a user safe. If content is “harmful” but “engaging”, the logic of the business model is to amplify it. This logic leads platforms to amplify politically extreme content of the sort that incited the attack on the US Capitol, self-harm content, and inflammatory conspiracy theories - and to serve that content disproportionately to those most vulnerable to it (i.e. those most likely to “engage”).
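
To illustrate (and only to illustrate) the incentive described above, the toy sketch below ranks feed items purely on predicted engagement, with no weight given to potential harm. It is a hypothetical, simplified example written for this submission; the names, figures and scoring are our own assumptions and do not describe any platform’s actual system.

```python
# Hypothetical illustration only: a toy feed ranker that optimises purely for
# predicted engagement, as the surveillance advertising model incentivises.
# All names and figures are invented for this submission.
from dataclasses import dataclass

@dataclass
class Item:
    title: str
    predicted_engagement: float  # model's estimate of clicks / watch time
    harm_risk: float             # separate estimate of potential harm (0 to 1)

def rank_for_engagement(items):
    # Note that harm_risk plays no part in the ordering: the only objective
    # is keeping the user on the platform to view ads and generate data.
    return sorted(items, key=lambda i: i.predicted_engagement, reverse=True)

feed = [
    Item("Cookery video", predicted_engagement=0.3, harm_risk=0.0),
    Item("Inflammatory conspiracy post", predicted_engagement=0.9, harm_risk=0.8),
    Item("Local news update", predicted_engagement=0.2, harm_risk=0.0),
]

for item in rank_for_engagement(feed):
    print(item.title, item.predicted_engagement)
# The most "engaging" item is surfaced first, regardless of its harm risk.
```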

 

Many of the “functionalities” listed in s135(2), which the draft Bill seeks to regulate through the safety and risk assessment duties detailed in Part 2, Chapter 2, serve a purpose related to the surveillance advertising model. For example, all the functionalities related to “expressing a view on content”, listed in s135(2)(f), provide opportunities to harvest data about a user’s interests and preferences which are then used for behavioural profiling. Yet the draft Bill is silent on how platforms should balance the risks posed by a functionality against the value of the data that same functionality yields. The draft Bill is also silent on how Ofcom should assess, let alone seek to limit, the role which surveillance advertising plays in the choices a platform makes about its “functionalities” and how it manages the risks they pose.

 

The draft Bill is therefore silent on one of the key drivers of online harms. As the 5Rights Foundation explained to us:

 

There is not a single online harm or socio-digital problem that is not made worse by micro-targeting. Disinformation is more destructive when targeted at the people most likely to believe or act on it. Elections are less free, fair, and transparent when the information received by one voter is different to and/or concealed from another. Polarisation is deepened by filter bubbles that entrench our biases, reduce our capacity for empathy, and even constrain our freedom of thought. Pro-suicide, self-harm, or eating disorder content is far more dangerous when served up automatically, proactively, and repeatedly by the recommender systems of platforms popular with young people. Enabling businesses to communicate more persuasively with their customers cannot outweigh the risks to children that the whole surveillance advertising system poses.

 

The tensions between the surveillance advertising business model and the stated aims of the OSB are particularly acute in the case of platforms likely to be accessed by children (in practice, most of the largest social media platforms). This is firstly because, as the draft Bill recognises, children are more vulnerable as users of online platforms and more dependent on safety measures - and safety measures frequently come into conflict with the “maximise engagement, minimise friction” incentives of the surveillance advertising business model. Secondly, children are even less equipped to understand the business model or how behavioural profiling works, and are more vulnerable to being “nudged” or manipulated by it. Finally, children are also less equipped to make informed decisions or give informed consent about their data being collected or used to profile and target them.

 

 

 

  3. Why surveillance advertising should be considered a form of potentially harmful, under-regulated content on user-to-user services

 

As well as being a business model which incentivises platforms to make decisions about their design, systems and processes, surveillance advertising is also, in and of itself, a type of content with significant potential to be harmful - especially to children. It is a type of harmful content which platforms themselves have a strong economic incentive not to tackle, and which existing advertising regulation does not adequately address.

 

The volume of surveillance advertising to which children are exposed on social media platforms is significant. A Global Action Plan survey revealed that, on average, teens see one ad every 8.1 seconds while scrolling through their Instagram feeds, equivalent to 444 adverts per hour. Based on average online time, this means that a third of 14-year-olds could be exposed to 1,332 adverts a day – ten to twenty times as many adverts as children see on TV alone.
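
For transparency, the arithmetic behind these figures is set out below. The three hours of daily feed time is an assumption inferred from the quoted totals rather than a figure taken directly from the survey.

```python
# Arithmetic behind the figures quoted above; the daily scrolling time is an
# assumption implied by the quoted totals, not a survey result.
seconds_per_ad = 8.1
ads_per_hour = round(3600 / seconds_per_ad)   # ~444 adverts per hour
assumed_hours_scrolling = 3                   # assumed daily time in feeds
ads_per_day = ads_per_hour * assumed_hours_scrolling
print(ads_per_hour, ads_per_day)              # 444 1332
```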

 

Delivering micro-targeted adverts to children, regardless of the product being marketed, carries a higher risk of harm than it does with adults, because children’s brains and sense of self are still developing, and their ability to understand how they are being targeted is more limited. Indeed, amongst younger children even the ability to distinguish between advertising and other content is limited. Ofcom’s research in 2019 found that only 25% of 8–15-year-olds were able to identify the top results from a Google search as adverts, despite them being clearly labelled with the term ‘ad’.

 

Children’s use of the internet should play an important role in their development into well-rounded adults, by enabling them to explore ideas and interests freely. However, the ubiquity of surveillance advertising means that their development can be unduly influenced and manipulated by recommender systems which encourage obsessions and “rabbit holes”, and which shape their values in an unhealthy way. There is growing evidence of a positive relationship between the amount of advertising children see and their levels of materialism. This is found across all age groups, from pre-school children to teenagers. Studies have also demonstrated that materialism affects other areas of children’s lives: more materialistic children have been shown to have lower wellbeing, perform worse academically, be less generous towards others, and care less about the environment.

 

In addition, surveillance advertising frequently enables children to be targeted with harmful products, or on the basis of profiling which has identified them as having potentially risky interests. A 2021 study found that it was possible, using Facebook’s Ads Manager, to target children on Facebook aged between 13 and 17 based on such interests as alcohol, smoking and vaping, gambling, extreme weight loss, fast foods and online dating services.

 

Researchers have also suggested a fundamental reason why pre-existing frameworks of advertising regulation have proved unable to protect children from harmful content delivered by surveillance advertising. The UK’s existing system of regulation for advertising, in common with that of many other western democracies, is reactive and relies on complaints-based mechanisms through which irresponsible, unethical or dangerous advertising can be challenged by concerned third parties. This is ineffective for surveillance advertising because of a phenomenon termed “epistemic fragmentation”. Researchers at Oxford University have observed that “consumers most likely to have the relevant information and motivation to raise a complaint [about a harmful or exploitative advert] are not themselves vulnerable, but are aware of those more vulnerable to harm.” With surveillance advertising, however, “consumer’s ‘personal context’ is hidden from others, meaning nobody knows exactly what others see and cannot raise a complaint on their behalf.”

 

In other words, traditionally an age-inappropriate or otherwise unethical advert was likely to be seen and challenged by other concerned citizens, who could then complain to the ASA, the publisher, or the advertiser. Inappropriate targeting of vulnerable children by surveillance advertising is much less likely to be seen by those well placed to flag it - whether their own parents or other concerned citizens. In 2015 a tube advert by a company called Protein World, depicting a very slender model and asking commuters if they were “beach body ready”, prompted hundreds of complaints to TfL and the ASA, and a Change.org petition targeting the company. The advert was withdrawn, an ASA complaint upheld, and TfL introduced new guidelines around the depiction of female bodies in tube adverts.

 

A similar “body-shaming” advert on social media, micro-targeted to teenagers profiled as feeling anxious about their weight, is much less likely to face the same level of public scrutiny or challenge, because a vulnerable teenager with body image issues is relatively unlikely to raise a complaint. The citizens most likely to challenge it probably won’t see it, and even if they did, they wouldn’t see who else was being targeted. The governance mechanisms which have, albeit imperfectly, protected children from other forms of inappropriate or harmful advertising - whether more formal, like the ASA, or less formal, like parents easily being able to see the same content as their children - simply do not, and cannot, work effectively for surveillance advertising.

 

S7(9)(d) of the draft Bill correctly requires that children’s risk assessments carried out by platforms must include consideration of “functionalities that present higher levels of risk”. We believe surveillance advertising functionalities should be added to the list of functionalities which must be considered in this way. This is because they hand the ability to profile and target children with advertising content, at scale, to any individual adult or entity able to pay - unsupervised by the children’s parents and with limited scope for any other third party to scrutinise their actions or motives. This has been shown repeatedly to lead to children being exposed to content that can have a harmful impact on their wellbeing and development. The government would presumably justify excluding these functionalities by arguing that advertising is already regulated in other ways, but that argument does not stack up, given that the traditional approach to regulating advertising is entirely unfit to regulate surveillance advertising.

 

 

 

  4. Assessment of the current draft Bill’s impact on harms associated with surveillance advertising

 

In our view, the draft Bill as it stands would be unlikely to make a major impact on the harms to children caused by surveillance advertising.

 

Clause 39(2)(f) explicitly exempts surveillance advertising as a category, along with all other forms of “paid-for advertising”. This means there would be no new regulatory constraints on advertising content or targeting.

 

The safety and risk assessment duties on platforms introduced in Part 2, Chapter 2, and Ofcom’s risk assessment duties in Part 4, Chapter 3, do introduce some regulatory oversight of platforms’ design choices, systems and processes. S135(2) makes it clear that risk assessment should include many of the functionalities which enable surveillance advertising, and s61(6) makes it clear that Ofcom should include the “business model” as a “characteristic” in its assessments. However, no provision is made for the regulator to critique the providers’ own risk assessments, or to challenge the role which “characteristics” like the surveillance advertising model have played in their decisions.

 

Overall, in our view these provisions are insufficient to address the fundamental conflict between creating safe, healthy online environments and the imperatives of the surveillance advertising business model. They do not seek to impose any specific constraints on the role which this business model plays in platforms’ decision-making, or give Ofcom any powers to challenge the ways in which data-harvesting or ad-targeting shapes how a platform operates.

 

It is thus hard to imagine, under these powers, a situation in which the new regulation would lead to a specific functionality or design feature - one that is lucrative from a surveillance advertising point of view - nonetheless being removed because of its role in enabling harmful content or behaviour.

 

 

 

  5. Potential approaches to reducing the harm caused by surveillance advertising to children

 

Our research to date has concluded that the most straightforward way to protect children from the harms associated with surveillance advertising would be to prohibit it, and its associated data practices.

 

This should not be confused with a ban on all forms of advertising to children. Rather, it would require platforms to adopt other forms of advertising instead - most likely contextual advertising, which does not target individual users based on behavioural profiling. This would remove the unhelpful design incentives associated with surveillance advertising, and mean that the adverts to which children were exposed were less manipulative, more transparent, and more accountable to regulators and the wider public.

 

We have identified two options for preventing surveillance advertising to children:

 

a)      A focused ban on surveillance advertising for users under the age of 18. Platforms would only be permitted to serve surveillance advertising to users who they have established, to a reasonable degree of confidence, are over 18. Platforms would need to switch surveillance advertising off by default, with only those users the platform has actively determined are over 18 receiving it. Enforcement action could be taken against companies failing to take reasonable steps to ensure that they don’t serve surveillance adverts to under-18s. In practice, this would give user-to-user services likely to be accessed by children a choice: either develop sufficiently robust ways of enabling adult users to prove their age in order to opt in to surveillance advertising, or switch to other forms of advertising for all ages. Platforms would be free to develop their own approaches to ensuring only over-18s received surveillance adverts, but would need to satisfy the regulator that their approach was sufficiently robust. Any approach should put the burden on the platform to protect children from these adverts by default and by design - not on children or their parents/carers to opt out. (A simple illustration of this default-off logic is sketched after these options.)

 

b)      A general ban on surveillance advertising. The simplest way to protect children from the harms of surveillance advertising may be to prohibit surveillance advertising practices for all users, regardless of age. Doing so would avoid the complexities of age verification, and of attempting to improve through regulation an intrinsically intrusive and manipulative set of advertising practices for which traditional approaches to advertising regulation do not work. Such a ban could be achieved by prohibiting website or app owners from using users’ personal data - save a potential ‘green’ list of hard data, e.g. location, age and gender - to sell ad space, and from sharing users’ personal information with real-time auctions for ad space. Platforms would be forced to switch to other forms of online advertising which don’t rely on surveillance of individual users, such as contextual advertising.
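
The sketch below illustrates the default-off logic described in option (a). It is a minimal, hypothetical example; the function name, inputs and confidence threshold are our own assumptions, not a proposed technical standard or any platform’s actual implementation.

```python
# Minimal, hypothetical sketch of "surveillance advertising off by default,
# on only for users actively established to be over 18" (option (a) above).
# The threshold and field names are assumptions made for illustration.
from typing import Optional

MIN_CONFIDENCE = 0.95  # assumed stand-in for "a reasonable degree of confidence"

def may_serve_surveillance_ads(verified_age: Optional[int], confidence: float) -> bool:
    """Default is False: surveillance adverts are only served when the platform
    has actively established, with sufficient confidence, that the user is 18+."""
    if verified_age is None:
        return False
    return verified_age >= 18 and confidence >= MIN_CONFIDENCE

print(may_serve_surveillance_ads(None, 0.0))   # False - age not established
print(may_serve_surveillance_ads(16, 0.99))    # False - under 18
print(may_serve_surveillance_ads(21, 0.99))    # True  - verified adult opt-in
```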

 

Another option, instead of banning surveillance advertising, would be to introduce specific safety duties which address this business model as a risk factor. This could take the form, for example, of “fiduciary obligations” owed by platforms to their users as data subjects. Any app or website which collects or processes data about its users would be required to do no harm and to exercise due care towards them. Ofcom could issue Codes of Practice detailing how this duty of care should be exercised in relation to surveillance advertising.

 

The introduction of safety duties related to surveillance advertising would not necessarily preclude all forms of surveillance advertising, but it would give the regulator the ability to require a different balance between surveillance-advertising-by-design and safety-by-design, and to place some regulatory constraints on manipulative or predatory practices. This could include more extensive protections for the most vulnerable users, including children. Individual services would retain a degree of flexibility and room for innovation in how they discharged their fiduciary duties.

 


 

  6. Relevant developments in other parts of the world related to regulation of the surveillance advertising business model

 

Were the UK to target the surveillance advertising business model as part of the Online Safety Bill, it would have the potential to be a global first. However, this is no longer certain, as other jurisdictions are debating such measures and could introduce them, possibly even ahead of the UK.

 

The EU Commission’s draft proposals for the regulation of digital platforms did not include any measures specifically targeting the surveillance advertising business model. However, the European legislative process is still underway, and influential voices are arguing that surveillance advertising should be within scope. In October 2020, the EU Parliament voted to call for a phased-in ban to be considered as part of the legislation. In February 2021, the European Data Protection Supervisor (EDPS), Wojciech Wiewiórowski, went further:

 

“Given the multitude of risks associated with online targeted advertising, the EDPS urges the co-legislators to consider additional rules going beyond transparency. Such measures should include a phase-out leading to a prohibition of targeted advertising on the basis of pervasive tracking, as well as restrictions in relation to the categories of data that can be processed for targeting purposes and the categories of data that may be disclosed to advertisers or third parties to enable or facilitate targeted advertising.”

 

Also in February 2021, 20 MEPs from across four of the main political parties launched the Tracking-Free Ads Coalition, with the explicit aim of introducing a prohibition on surveillance advertising in the DSA. A new compromise amendment to the DSA was voted through by the LIBE Committee in July 2021, which explicitly bans the use of surveillance advertising for political purposes, to under-18s, and for other 'sensitive' categories. Similarly bold proposals have been tabled in the IMCO committee, and it appears likely that some version of these will be added to the draft text of the DSA by the European Parliament by the end of the year.

 

In the USA, legislative proposals to restrict surveillance advertising are starting to gather bipartisan support. The Kids PRIVCY Act, introduced in the House of Representatives by Rep. Kathy Castor (Democrat) in July, would ban all data-driven advertising to under-18s. The Children and Teens’ Online Privacy Protection Act, introduced in the US Senate by Senators Markey (Democrat) and Cassidy (Republican), would ban surveillance ads to under-13s and require opt-ins for 13-15 year olds.

 

In July 2021, Facebook announced voluntary limits on surveillance advertising for under-18s across Instagram, Facebook and Messenger. It stated that “Starting in a few weeks, we’ll only allow advertisers to target ads to people under 18 (or older in certain countries) based on their age, gender and location. This means that previously available targeting options, like those based on interests or on their activity on other apps and websites, will no longer be available to advertisers.” The company describes this as “taking a more precautionary approach in how advertisers can reach young people”. Concerns have been raised that Facebook will continue to collect profiling data on under-18s with a view to targeting them after their 18th birthday, but the announcement is still extremely significant for at least two reasons:

       - One of the largest players in the surveillance advertising system has itself accepted that children are particularly vulnerable to surveillance advertising and need extra protections.

       - A major platform is demonstrating that it is perfectly possible to operate a large-scale user-to-user service, which generates advertising revenue, without targeting under-18s with surveillance ads.

 

  7. Conclusion

 

At present the draft Online Safety Bill is largely silent, beyond some very weak and general provisions, on the matter of surveillance advertising, either as a category of content or as a driver of platforms’ decisions about their design, systems and processes.

 

In our view, improving the design of user-to-user services requires the influence of surveillance advertising imperatives to be weakened, and as it stands the draft Bill is extremely unlikely to achieve this. The Bill is also unlikely to reduce the potential for harm caused by surveillance adverts themselves, given that it explicitly exempts them from regulation - despite existing regulation of advertising being manifestly unfit for this task.

 

We consider this omission a particularly significant one for the safety of children, given that the imperatives of surveillance advertising are in stark conflict with a safety-by-design approach, and that children are particularly vulnerable to the manipulative impacts of surveillance adverts. However, we think it is an omission which also weakens the draft Bill’s safety objectives for other users.

 

We expect the government to argue that matters related to advertising are beyond the scope of this piece of legislation. However, we would encourage your committee to challenge that policy decision given the clear relationship between the surveillance advertising business model and online harms, and given the clear inadequacy of current regulation of this form of advertising. At the very least the government should provide more clarity as to how these issues will be addressed if not within the OSB.

 

Oliver Hayes, Policy & Campaigns Lead

September 2021