Written evidence submitted by Guardian Media Group (OSB0171)

 

About Guardian Media Group

 

Guardian Media Group (GMG) is one of the UK's leading commercial media organisations and a British-owned, independent news media business. GMG is the owner of Guardian News & Media (GNM), publisher of theguardian.com and the Guardian and Observer newspapers, which have received global acclaim for investigations, including persistent investigations into phone hacking amongst the UK press, the Paradise Papers and Panama Papers, Cambridge Analytica and the recent Pegasus Project. As well as being the UK’s largest quality news brand, the Guardian and Observer have pioneered a highly distinctive, open approach to publishing on the web and have achieved significant global audience growth over the past 20 years. Our endowment fund and portfolio of other holdings exist to support the Guardian’s journalism by providing financial returns.

 

Introduction

 

GMG welcomes the opportunity to submit a response to the Joint Committee as it begins to review the details of the draft Online Safety Bill (OSB).  The advent of global search and social platforms has created new opportunities and challenges for citizens and for journalism. These platforms have exacerbated some existing social harms and created new ones. Many of the harms referenced in the draft Online Safety Bill are real, serious and already illegal in the UK. The Guardian and Observer have reported extensively on many of them.[1]

 

It is now widely accepted that the current approach to regulation of online platforms requires principles-based reform. Over recent years it has become clear, largely through journalistic reporting rather than regulatory action, that the senior management of the platforms do not understand their legal and ethical role under the current regulatory framework[2]. They appear unsure whether existing regulation allows them to intervene to remove content, or whether, by intervening, they automatically take on editorial responsibilities.  It has also become clear that the biggest online platforms are willing to close down examination of platform practices by academics[3] and politicians[4] as a means of avoiding scrutiny.

 

Recent reporting by the Wall Street Journal (WSJ) on internal Facebook documents shows, amongst other things: how moderation policies are applied differently depending on the power and celebrity of the individual[5]; that Facebook has conducted internal research which clearly demonstrates the negative impact that usage of Instagram can have on the mental health of teenage girls, yet has chosen not to place any of this research in the public domain, or to provide it to policymakers when asked[6]; and how a 2018 news feed algorithm change led to publishers and political parties “reorienting their posts toward outrage and sensationalism.”[7]

 

It is clear that the current legal framework does not enable policymakers and regulators to obtain the information required to understand the impact that leading social media companies are having on children and adults living in societies across the world, let alone provide regulators with the powers necessary to hold the biggest platforms to account.

 

What is striking about the WSJ’s reporting is how often the balance between the safety and wellbeing of users, and the goal of driving ever deeper engagement of users (Meaningful Social Interactions), is tipped in favour of the latter.  The business model at the heart of social media platforms is designed to create engaged user bases whose personal data is processed in order to create intelligence, which is then used to target advertising to citizens on any device, at any time of the day, wherever they are engaged, whether on or off that social media platform. The activity of users across Facebook company websites - including the website of Facebook’s Oversight Board[8] - is used to create a complete picture of the user, which can then be used to underpin targeted advertising, or to enable the company to move into new markets (whether in terms of geography or business model).[9] It is not a business model that relies on the publication of high quality content, nor on prioritising the wellbeing of the user; rather, it relies on the ability of those platforms to capture and retain the attention of the user and the network of friends and associations with whom the user communicates on a regular basis.

 

Despite the knowledge that we now have about the negative externalities created by online platforms, including the economic harms identified by the Competition & Markets Authority’s world-leading online platforms and digital advertising market study, little action has been taken to use existing legal powers to address key pinch points of the online platform business model.  Recent research released by the Irish Council for Civil Liberties[10] suggests that the vast majority of cases brought against the major tech platforms by the lead data protection authority for those companies, the Irish Data Protection Commission, remain outstanding. Despite enforcement capabilities for UK GDPR being brought back to the UK’s ICO, no action has been taken by the ICO to enable UK citizens to express their right to consent to the purposes for which their data is used by the big platforms.

 

We welcome the government’s commitment to implement the recommendations of the Digital Markets Taskforce to create a new Digital Markets Unit (DMU), and the accompanying legislation to impose pro-competitive interventions on the platforms, but these remedies are unlikely to come into effect until 2023 at the earliest. 

 

The OSB is right to subject platforms to a greater degree of responsibility and accountability for the content and activity that they host.  But in tandem with efforts to make platforms more accountable for the content and activity that they facilitate, the government should devote parliamentary time and resources to expedite the legislation required to create the new DMU.[11] 

 

The OSB is key to empowering UK citizens with new rights in relation to what they can expect from the services provided by in-scope platforms.  But the DMU and associated pro-competitive intervention powers are vital to enable UK citizens to become active participants in a reformed digital economy in which there is fair trading, open choices, and in which there is accountability and trust.  By empowering the consumer to express their right to choose how their personal data is used for commercial purposes, the biggest online platforms will no longer be able to take user consent for granted. By giving citizens greater choice and control over how their personal data is used, the largest digital platforms will be forced to acquire user trust by making changes to their services that genuinely improve the welfare and experience of all users.[12]

 

In responding to the Joint Committee’s call for evidence we:

       Highlight inherent tensions and uncertainties in the government’s current approach to the Online Safety Bill (OSB);

       Welcome the Bill’s acknowledgement of the need for an exemption for news publishers’ own services, as well as where news publisher content is distributed on third-party platforms, and outline ways in which that exemption could be clarified to meet the government’s objective;

       Urge the government to prioritise implementation of legislation for a new Digital Markets Unit, and accompanying pro-competitive interventions, before it seeks to implement legislation that would serve to regulate speech; and

       Provide details on how the issue of scam ads, or malvertising, is enabled through the online advertising ecosystem.

 

 

Call for evidence

 

Will the proposed legislation effectively deliver the policy aim of making the UK the safest place to be online?

 

Will the proposed legislation help to deliver the policy aim of using digital technologies and services to support the UK’s economic growth? Will it support a more inclusive, competitive, and innovative future digital economy?

 

As we note above, the government has already taken a number of steps that seek to force the largest online platforms to take their responsibilities to children and adults more seriously.  The Age Appropriate Design Code is now in force[13], placing a welcome set of obligations on the platforms in relation to the design of their services.  Ofcom is currently gearing up to enforce new obligations in relation to Video-Sharing Platforms[14], and enforcement powers in relation to breaches of GDPR have now passed from EU authorities to the UK ICO.  To date, no meaningful action has been taken by the UK ICO against the biggest platforms in relation to the way in which those platforms gain consent for the use of personal data for commercial purposes, and despite multiple reports calling for action to tackle the dominance inherent in the business models of online platforms, we await the first steps to implement the proposed new competition powers through Parliament.

 

The recent WSJ reporting highlights that, repeatedly, the safety and wellbeing of users on the Facebook platform have been compromised in favour of driving ever deeper engagement of users, with the object of feeding the data-driven advertising business model.  The negative externalities that are the focus of the OSB are symptoms of this business model; they are not incidental to it.  Without implementation of plans to empower UK citizens to withdraw their consent to being subjects of this business model, it remains unclear whether the OSB alone can address the dysfunction at the heart of these businesses, or, more broadly, whether these reforms can support a more inclusive, competitive, and innovative future digital economy.

 

The recent advertiser boycott of Facebook[15], instigated in response to the volume and nature of hateful speech acts on its platform, illustrates the need for change well.  It is reported that the Facebook CEO told staff that they’re “not gonna change our policies or approach on anything because of a threat to a small percent of our revenue, or to any percent of our revenue”.[16] What the Facebook CEO knew was that, because Facebook prevents its users from expressing their right to consent as to how their personal data is used by Facebook companies, those users face a stark and unavoidable choice: either agree to the terms in click-wrap agreements[17], or not use the service.

 

Given the degree to which we now live our lives online through social media platforms, users understandably find it difficult to leave the communities built on those services[18].  Without the ability for consumers to withdraw their consent for their data to be used to underpin the online platforms business model, the largest social media platforms are aware that they will be able to find other advertisers to fill the vacuum left by those advertisers who choose to boycott the platform.  Advertisers know this too.  On 18th December 2020, for example, one of the biggest brands to join the boycott of Facebook, Unilever, announced that it was to resume marketing across Facebook platforms.[19]

 

The government has a blueprint that could help to break this chain; a blueprint that could empower UK citizens and support the wider policy aim of more widespread UK economic growth.  A series of recent government-commissioned reports on competition in the digital economy culminated in the world-leading online platforms and digital advertising market study undertaken by the Competition & Markets Authority.  We welcome the government’s commitment to implement the recommendations of that report, and the report of the Digital Markets Taskforce on the need to create a new Digital Markets Unit and accompanying legislation, which will be empowered to implement pro-competitive interventions on key platforms with strategic market status.  Without expedition of these proposals, it is unlikely that any of the proposed remedies outlined in that work will come into effect until 2023 at the earliest.

 

Does the proposed legislation represent a threat to freedom of expression, or are the protections for freedom of expression provided in the draft Bill sufficient?

 

We are concerned that proposals in the OSB could create unnecessary tensions, due to a shift in emphasis in how social media platforms are policed.  The OSB seeks to regulate not only harms that have been judged illegal by the UK Parliament, but also content that is legal yet deemed harmful by the government.  The lack of clarity around obligations to regulate legal but harmful content, backed by potential fines and other sanctions if in-scope platforms fail to do so, has the potential to chill legitimate freedom of expression on these platforms.

 

Such imprecision is not necessary, given that the UK Parliament has experience of passing laws that restrict speech whilst also meeting long-standing legal obligations in relation to protecting free expression[20].  It is on the basis of that tradition that we do not believe the same regulatory framework should be applied to illegal and legal harms. Legal but harmful content includes harms that are subjective and vague, such as ‘disinformation’ and ‘intimidation’. We agree, for example, that the introduction of action against subjective harms raises potential issues in relation to the right to free expression, including for minority groups such as the LGBTQ+ community.[21]

 

Similarly, the OSB includes obligations on in-scope platforms to combat the “evolving threat of disinformation and misinformation”. Judgements as to which individual pieces of content represent disinformation or misinformation would require careful, subjective judgement by a human being, in order to understand the detail and nuance of any speech act.  Yet we know that the scale of sharing of user-generated content across companies owned by just one in-scope platform business, Facebook, means that platforms will most likely use machine learning and algorithms to make these highly subjective decisions. Such machine-driven decisions are unlikely to be able to make nuanced judgements in the way that a human editor can, and could lead to the overblocking of user content.  This has the potential to lead to a significant chilling of free expression.

 

As well as machine learning, it is likely that in-scope platforms will rely on third parties, such as fact checkers, to assess journalism distributed on those platforms. GNM has recent experience of environmental coverage being targeted by fact checkers for removal from Facebook, based on claims that the article did not provide certain context[22] for the specific points made in the article.  In reality, the level of context that the complainant in question was requesting was well beyond that which would normally be provided in a journalistic context.

 

The ad hoc process by which fact-checking and content takedowns take place misses the nuance of the judgements made when publishing a story. It enables fact checkers to condense peripheral concerns into a top-line accusation: that the Guardian is misleading readers or producing clickbait on the environment.  It enables fact-checking organisations to post articles claiming issues with a legitimate story, without putting any of the claims in that post to the journalist or the news publisher more broadly. The impact of these claims can be significant, leading to articles being demoted in News Feeds, and potentially being labelled as misinformation.  The effect of such accusations on public perceptions of our journalism, and the trustworthiness of mainstream science journalism more broadly, is a significant concern.  The onus is on the news publisher to contact the fact-checking organisation to issue a correction, or to dispute a rating, in order to correct the labelling of articles by Facebook.  Fact-checking organisations can, therefore, wield a high degree of power over the credibility of individual news articles and, over time, news sources themselves.  Yet the basis on which an organisation is given official fact-checking status by Facebook is unclear.  Facebook has, for example, elevated right-wing sites such as the Daily Caller to the position of fact-checker, to make these sorts of decisions.[23]

 

Fundamentally, if a harm is too vague to be defined by policymakers, it is not reasonable to impose an obligation on platforms to make decisions as to whether a piece of content or activity reaches an unclear threshold.  This aspect of the proposed OSB should be interrogated further by the sub-committee.

 

Content in Scope

 

The draft Bill specifically places a duty on providers to protect democratic content, and content of journalistic importance. What is your view of these measures and their likely effectiveness?

 

We welcome the government’s commitment, articulated in the OSB, to exempt news publishers (and their content) from the scope of the Bill.  We believe that Clause 40 provides a comprehensive definition, and we are also pleased that recognised news publisher content falls outside of ‘regulated content’.  By extension, we are pleased that news publisher content also falls outside the ‘legal but harmful’ risk assessment duties on platforms (Clauses 45 and 46).

 

The Government’s intention to protect news publisher content has been clear throughout the development of the OSB[24].  However, the Bill’s current drafting does not yet satisfy the Government’s aim to ensure that news publishers remain out of scope.

 

Detail of the proposed exemption

 

The Bill aims to provide two forms of exemption for news publishers:

       First, an exemption for news publishers’ websites from the scope of the legislation generally; and

       Second, an exemption for news publisher content from some of the duties of care, where that content appears on in-scope services.

 

In overview, the general exemption for publishers’ websites is provided in Paragraph 5 of Schedule 1 (the “limited functionality services” exemption). The specific content-based exemptions are found in Clause 18, which limits the application of the duties of care for search services, and Clause 39, which defines regulated content for user-to-user services. This section of our response will address each in turn and will then focus on redress.

 

News Publisher Websites

 

It appears that, without a clear exemption, any news publisher’s website with links to the UK that allows users to upload or share content (including by commenting on an article) would be in scope of the Bill and a regulated service.  The government’s stated objective appears to be to exempt news publishers’ websites per se from the scope of the Bill by virtue of the limited functionality exemption in Schedule 1 Paragraph 5. Provided that a news publisher’s website satisfies the requirements in Schedule 1 Paragraph 5, the intended effect of Paragraph 5 and Clause 3(7) is that the website is not a regulated service and remains outside the scope of the legislation.

 

However, this is not clear on the face of the Bill, which means that it could be subject to misinterpretation once the OSB becomes law.  To prevent misinterpretation of the Bill’s provisions, the Bill should expressly state that Schedule 1 Paragraph 5 applies to ‘recognised news publishers’, as defined in Clause 40.  Without such an express statement, news publishers may face an unnecessary hurdle in relying on the very mechanism that has been designed explicitly to exempt their services from the scope of the Bill. As it stands, the exemption is framed by reference to the functionality of inter-user communication on a service, as opposed to either the nature of the website’s content or the identity of the Service Provider.

 

By framing the exemption this way, any news publisher website which goes further than the inter-user functionalities described in Schedule 1 Paragraph 5 may be classified as a regulated service, and would appear to fall within the scope of regulation. Many news publishers offer functions for users on their websites that could be deemed outside the listed functionalities in Schedule 1 Paragraph 5, including games and online workshops. As currently drafted, therefore, the Bill does not exempt all news publisher websites, as the government has stated it intends to do. Moreover, framing the exemption through the functionality of inter-user communication on a service would disincentivise news publishers from investing in innovative features and services as part of their websites.

 

On the basis of the current drafting, we are concerned that such innovation could leave news publishers facing a stark choice: maintain a static news site sitting outside the scope of the OSB framework, or choose to innovate but risk all of their online services being subject to the obligations of the OSB. We do not believe that this choice is in the interests of readers, nor do we think it is the government’s intention.

 

Paragraph 5(a) of Schedule 1 is also problematic. The words “relating to” should not introduce a test of relevance by reference to the subject matter of the content, which we believe was never the government’s intention.  The wording should be clarified such that “communicate” specifically means communication with other users. The real issue is whether the comment or review function is ancillary to provider content (i.e. it is not a freestanding chat functionality). This is also important to ensure that a function to communicate with the publisher or its journalists would not undermine the news publisher exemption.

 

We also note that the limited functionality exemption can be repealed or amended by the Secretary of State by regulation under Clause 3(9)-(11) of the Bill. We believe that this ‘Henry VIII clause’ should be rejected, in favour of the exemption being capable of repeal only by Parliament through primary legislation.

 

There is a very good definition of a news publisher set out in Clause 40 of the Bill. To address the issues above and ensure that the exemption is effective, an additional sub-clause should be added to Clause 2, to make clear that news publisher content, whether published on publishers’ own websites or distributed by user-to-user or search services, is out of the scope of the Bill, and that this rests on the fact that it is published by a recognised news publisher as defined in Clause 40.

 

Schedule 1 Paragraph 5 should also be amended to (i) make clear that it relates to readers’ comments sections on news publishers’ websites; (ii) accurately reflect how user-to-user communication works on news publishers’ websites; and (iii) ensure it does not limit innovation.

 

Exempting News Publisher content which appears on in-scope services

 

The second category of exemption is intended to apply where news publisher content is posted onto in-scope user-to-user services. This is achieved through Clause 39 of the Bill, which defines “regulated content”, and which excludes “news publisher content” from its scope. This has the effect of excluding news publisher content from the scope of the safety duties imposed by the Bill on in-scope services.

 

In the case of user-to-user services, for content to be “illegal content”, “content that is harmful to children”, or “content that is harmful to adults”, the content must also be regulated content.[25] By excluding news publisher content from these categories, the Bill ensures that in-scope services are not required to apply the safety duties under Clauses 9-11 to news publisher content.

 

In respect of search services, the exemption in Clause 18(2) is more straightforward. It provides that none of the duties imposed on search providers extend to recognised news publisher content, including the duty in Clause 23 concerning rights to freedom of expression and privacy.

 

Redress

 

Crucially, however, these exemptions have a fundamental drawback that risks undermining the news publisher exemption itself.  While the Bill makes it clear that the duty of care does not apply to news publishers, and that platforms and search engines do not face any sanction if they do not apply their codes of conduct to news publisher content when it is shared on social media, neither are they under any duty of care not to apply their codes of conduct to it. The impact of this, when combined with high penalties for not taking action against content to which the Bill does apply, is that the OSB could incentivise platforms and search engines to err on the side of caution whenever their algorithms encounter content that might put them at risk.

 

Clauses 39-40 do not stop platforms blocking news publisher content, and, as described more fully below, the protections for journalistic content in Clause 14 are insufficient to do so. News is a perishable commodity, meaning that an appeal to the platform, followed by another to Ofcom, is of limited value.

 

Duty to protect Journalistic Content

 

The Bill aims to create a duty on in-scope Category 1 Service Providers to protect journalistic content on in-scope services via Clause 14 of the Bill. This provision is accompanied by a redress mechanism for the wrongful removal of journalistic content by platforms.[26] This section of the response will address each in turn.

 

Category 1 Services

 

Only Category 1 Services are subject to the proposed duty to protect “journalistic content” defined in Clause 14 as: (i) regulated content or news publisher content which is (ii) “generated for the purposes of journalism”; and (iii) “UK linked”.  The effect of (i) is that the Clause 14 duty will apply to both regulated and exempt journalistic content on in-scope services. It will apply equally to an article shared by a recognised news publisher, a post by a journalist or small publisher which does not meet the Clause 40 definition, and a post by a “citizen journalist” provided, in each case, the content is UK linked and generated for the purposes of journalism. 

 

This raises three primary concerns: (i) there is no meaningful standard for decision making involving journalistic content; (ii) there is no specificity in provisions for handling complaints; and (iii) the proposed system of redress (which stops with the Service Provider), as outlined below, is not fit for purpose. 

 

As we note above, the combination of an unclear definition of harmful content and significant penalties for failing to act against it is likely to incentivise in-scope platforms to take down journalistic content via the blanket application of algorithms that will not be able to distinguish between journalistic and non-journalistic content.  The over-blocking of journalistic content by semantic and keyword-blocking technologies deployed by so-called brand safety vendors demonstrates how blunt such automated tools are in practice[27].  At present, there are no repercussions for an overly zealous approach, and scant redress for reinstatement of that content.  This means there is an absence of much needed tension in the OSB regime as currently drafted, which could exacerbate concerns about the effect of the OSB on free expression and the distribution of journalistic content on in-scope platforms.
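By way of illustration, the minimal sketch below shows the failure mode of naive keyword blocking of the kind some brand safety tools rely on. The blocklist terms, headlines and function name are hypothetical, invented for illustration (real vendors use more elaborate semantic systems), but the underlying problem is the same: matching on words rather than meaning cannot distinguish reporting about a harm from content promoting it.

```python
# Hypothetical sketch of naive keyword blocking. Terms and headlines are
# invented for illustration; real brand-safety systems are more elaborate,
# but share the same failure mode.
BLOCKLIST = {"attack", "death", "terror", "shooting"}

def is_blocked(text: str) -> bool:
    """Flag content if any blocklisted keyword appears, regardless of context."""
    words = {w.strip(".,!?\"'").lower() for w in text.split()}
    return not BLOCKLIST.isdisjoint(words)

# A public-interest news headline is blocked alongside genuinely harmful
# content, because keyword matching cannot tell reporting *about* a harm
# apart from content that promotes it.
print(is_blocked("Inquiry finds police failures before the attack"))  # True
print(is_blocked("Ten ways to cut your energy bills this winter"))    # False
```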

 

Similarly, there is no equivalent of the Clause 14 duty for search engines, which is a significant lacuna that must also be addressed.  The current drafting of Clause 14 of the OSB leaves it to the Service Provider to decide how freedom of expression is taken into account when preparing its terms of service. The requirement is then to apply those terms of service consistently. Again, this means there is an absence of much needed tension in the OSB regime as currently drafted, which could exacerbate concerns about the effect of the OSB on free expression and the distribution of journalistic content on in-scope search engines.

 

The Bill should go further than requiring Category 1 Services to “take into account” the importance of the free expression of journalistic content when designing their systems and processes, and instead set a positive standard to be applied when they make decisions about it. We believe that this can only be achieved by drafting a clear and watertight news publisher exemption, to avoid the conflicting parallel system of regulation created by the journalistic protections; for example, by adding an additional sub-clause to Clause 2, making clear that news publisher content is out of the scope of the Bill when it is distributed by user-to-user or search services, as well as when published on news publishers’ own websites, as set out in paragraph 2.1.9 above.

 

Redress

 

The Bill provides that Category 1 Services must make a dedicated and expedited complaints procedure available to either the user who uploaded the content, or its creator[28], and must ensure that content is “swiftly reinstated” in the event of a complaint being upheld.[29] This, however, is inadequate to protect news content.

 

It is inappropriate for the terms ‘journalistic content’, ‘content of democratic importance’, and what is meant by ‘protect’ and ‘take into account’, to be determined by platforms. The current draft also creates a conflicting parallel system of regulation for journalistic content, ultimately overseen by Ofcom. For any exemption and redress procedures to be fit for purpose, they must be effective in practice, and the swiftness of redress specified must be commensurate with the relatively short shelf life of news.

 

We note concerns, expressed by the Chair of the House of Lords Communications Committee,  that there may need to be a tiered approach to any appeals process, to ensure that redress is prioritised for journalistic content of democratic importance.[30]

 

These issues can only be meaningfully addressed by implementing the government’s objective of a watertight exemption for news publishers, which would mean extending the duty of care so that social media companies are obliged not to apply their codes of conduct to news publisher content.

 

Earlier proposals included content such as misinformation/disinformation that could lead to societal harm in scope of the Bill. These types of content have since been removed. What do you think of this decision?

 

As we note above, judgements as to which individual pieces of content represent disinformation or misinformation would require careful, subjective judgement by a human being, in order to understand the detail and nuance of any speech act.  Yet we know that the scale of sharing of user-generated content across companies owned by just one in-scope platform business, Facebook, means that platforms will most likely use machine learning and algorithms to make these highly subjective decisions. Such machine-driven decisions are unlikely to be able to make nuanced judgements in the way that a human editor can, and could lead to the overblocking of user content.  This has the potential to lead to a significant chilling of free expression.

 

Are there any types of content omitted from the scope of the Bill that you consider significant e.g. commercial pornography or the promotion of financial scams? How should they be covered if so?

 

During the recent Covid-19 crisis, many publishers - including GNM - have seen a rise in the number of attempts by criminals to inject scam ads, or fraudulent advertising (malvertising), into automated programmatic auctions.[31]  We had assumed that the increase in these attacks was due to these actors being able to pick up cheap inventory as a result of reduced market demand for digital advertising inventory through ad exchanges.  Experiments with floor pricing, however, found that setting a higher floor price for the sale of that inventory did not prevent malicious ad impressions.  It did have the effect of blocking a large volume of clean advertising impressions from landing on our site. The amount of money that malvertisers are prepared to pay to appear on trusted sites gives an indication of how potentially lucrative the act of malvertising can be.
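A minimal sketch may help explain why this result is unsurprising: a price floor filters bids on price alone, so a well-funded malicious buyer clears a raised floor while cheaper, clean demand is blocked. The buyer names and CPM values below are invented for illustration; real programmatic auctions are far more complex.

```python
# Hypothetical sketch: a price floor screens bids by price, not by the
# identity or intent of the buyer. All names and CPM values are invented.
from dataclasses import dataclass

@dataclass
class Bid:
    buyer: str
    cpm: float        # price offered per thousand impressions
    malicious: bool   # unknown to the publisher at auction time

bids = [
    Bid("clean_brand_a", cpm=1.20, malicious=False),
    Bid("clean_brand_b", cpm=2.50, malicious=False),
    Bid("scam_buyer",    cpm=6.00, malicious=True),   # malvertising pays well
]

FLOOR_CPM = 3.00  # a raised floor price

# The floor blocks the cheaper, clean bids while the well-funded
# malicious bid sails through.
for bid in bids:
    if bid.cpm >= FLOOR_CPM:
        print(f"accepted: {bid.buyer} at {bid.cpm} CPM"
              + (" (malicious)" if bid.malicious else ""))
    else:
        print(f"blocked:  {bid.buyer} at {bid.cpm} CPM")
```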

 

Premium publishers are the visible end point of the supply chain, and have been the subject of well-publicised malvertising attacks[32].  Premium publishers spend significant sums with security vendors to try to prevent bad actors from buying advertising impressions on their websites[33].  Yet ultimately, premium publishers are not provided with the identity of the buyers of their advertising inventory, meaning that they are reliant on ad tech partners that do have access to that information to prevent those advertising impressions running on their sites.

 

We are concerned that bad actors are currently able to capitalise on the porous nature of the online advertising ecosystem, cloaking or disguising their identity in order to purchase inventory on trusted news sites.  This poses broader questions about the ease with which bad actors are able to trade in the online advertising market, some of which have been addressed in the CMA’s online platforms and digital advertising market study.[34]

 

The disconnected nature of the online advertising ecosystem - particularly the lack of common data standards and the lack of access to market data - enables threat actors to cloak or disguise their identity in order to purchase inventory on a range of websites and services.[35] While the Advertising Standards Authority has worked with the IAB UK to develop a scam ad alerts service, in truth this initiative does not address the ability of threat actors to keep moving from one Demand-Side Platform (DSP) to the next.  It is an initiative that enables consumers to report individual incidences of scam advertising, but it is not systemic in its ambition, nor does it tackle the root cause.[36]

 

Our belief is that it is the absence of consistent market data being generated and passed across key parts of the supply chain - DSPs and Supply-Side Platforms (SSPs) - that enables threat actors to re-use seemingly clean buying profiles over and over again. To be clear, we understand that these are sophisticated actors, using complex technologies to try to evade detection.  The fact is, however, that there is currently no way to consistently identify those threat actors across different services, meaning that those threat actors can jump from DSP to DSP and maintain access to an audience of users even once they are discovered. As soon as they are blocked on one DSP, they shift their campaigns to another DSP that may not have an understanding of the real identity of that buyer.

 

Because publishers and SSPs do not have access to consistent information that would enable them to identify threat actors who pose as legitimate buyers, they are unable to relay the information to DSPs that would allow DSPs to block a known bad entity across all the access points to the online advertising ecosystem.  As a result, publishers and SSPs end up playing whack-a-mole, with the same entity appearing again and again across different DSPs, or through different buying seats within the same DSP. The difficulty that premium publishers and SSPs face in identifying an entity across all DSPs makes it easy for threat actors to keep this game up for weeks or months. Threat actors can burn a profile on one DSP, but stay live on another.

 

There are some emerging voluntary industry solutions that could help to pass basic information in the ad markup displayed to the user.  This could enable much more rapid investigations into scams and malware originating on advertising platforms.[37]  Widespread industry adoption - rather than adoption of these solutions merely as part of so-called industry gold standards[38] - would be key, however, to the goal of minimising opportunities for bad actors to prosper.
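To make the mechanism concrete, the sketch below shows how a publisher or SSP might use a buyers.json-style file published by a DSP (see footnote 37) to resolve an opaque buying seat to a named entity and check it against a blocklist shared across the supply chain. The schema, field names and identifiers here are assumptions for illustration only; the authoritative format is defined by the IAB Tech Lab specification.

```python
# Hypothetical sketch of a buyers.json-style lookup. The schema and field
# names below are assumed for illustration; consult the IAB Tech Lab spec
# for the actual format.
import json

# A DSP's declared list of the buyers it represents (normally fetched from
# the DSP's domain rather than embedded inline as here).
buyers_json = json.loads("""
{
  "buyers": [
    {"id": "seat-1001", "name": "Legitimate Brand Ltd"},
    {"id": "seat-2002", "name": "Acme Offers LLC"}
  ]
}
""")

# Identities already flagged as threat actors elsewhere in the supply chain.
SHARED_BLOCKLIST = {"Acme Offers LLC"}

def resolve_buyer(seat_id: str):
    """Map an opaque buying seat to the declared buyer name, if any."""
    for buyer in buyers_json["buyers"]:
        if buyer["id"] == seat_id:
            return buyer["name"]
    return None

# With consistent identity data, a block applied on one DSP can follow the
# entity to the next, instead of resetting the game of whack-a-mole.
name = resolve_buyer("seat-2002")
print(name, "-> blocked" if name in SHARED_BLOCKLIST else "-> allowed")
```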

 

The Joint Committee should take evidence from independent industry specialists and technology vendors, to understand whether a change to the legal liability of ad tech intermediaries could lead to a tightening of business processes.  This may include the creation of greater friction in the ad purchase process, and the imposition of more onerous ‘know your business customer’ obligations, the effect of which could be to prevent potential malvertisers from accessing and migrating through the online advertising ecosystem.

 

Are the definitions in the draft Bill suitable for service providers to accurately identify and reduce the presence of legal but harmful content, whilst preserving the presence of legitimate content?

 

As we note above, the OSB seeks to regulate not only harms that have been judged illegal by the UK Parliament, but also content that is legal yet deemed harmful by the government.  The lack of clarity on obligations to regulate legal but harmful content, backed by potential fines and other sanctions if in-scope platforms fail to do so, has the potential to chill legitimate freedom of expression on these platforms.

 

Services in Scope

 

Will the regulatory approach in the Bill affect competition between different sizes and types of services?

 

As noted earlier in this submission, we do think it would be preferable for the government to pass legislation that focuses on the underlying business model that is a key driver of the harms facilitated by online platforms.  The danger is that, in seeking to tackle the symptoms of these issues through the regulation of speech, there are unintended consequences not only for individual citizens as they seek to express themselves online, but also for smaller businesses that have to comply with a potentially complex online safety regime[39].  An overly zealous approach to the regulation of speech could have a negative impact not only on the business models of developers that do seek to put the welfare of their users first, but also on the rights of individual citizens to participate in democracy.

 

We believe there is a strong case for the government to expedite implementation of its plans for a new Digital Markets Unit (DMU).  We hope that, through its proposed pro-competitive interventions against platforms with Strategic Market Status, the DMU will empower UK citizens to withdraw their consent for their data to be used for commercial purposes by platforms that do not place the welfare of users at the heart of platform design.

 

Algorithms and user agency

 

What role do algorithms currently play in influencing the presence of certain types of content online and how it is disseminated?

 

The algorithms that underpin many of the in-scope platforms popularly used by citizens in the UK are not designed to further deliberative policy debate, informed decision making, or the objectives of wider society. Those algorithms are often produced by listed businesses, built to meet commercial objectives.  We have previously raised concerns with other Parliamentary Committees about the impact of Facebook’s 2018 algorithm change on the incentives for posting comments on that platform.

 

Recent WSJ reporting notes that “the new algorithm’s heavy weighting of reshared material in its News Feed made the angry voices louder”, and that “[m]isinformation, toxicity, and violent content are inordinately prevalent among reshares”.[40]  The reporting also found that Facebook employees had been told that many political “parties, including those that have shifted to the negative, worry about the long term effects on democracy”.[41]  The reporting further suggests that fixes for the algorithm change were developed, but that Facebook management decided not to implement those changes broadly if there “was a material tradeoff with MSI (Meaningful Social Interactions) impact.”[42]
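The dynamic the WSJ describes can be illustrated with a toy ranking formula. The weights and posts below are entirely invented (Facebook’s actual Meaningful Social Interactions formula is not public), but they show how heavily weighting reshares lets provocative content outrank benign content even with fewer total interactions:

```python
# Toy illustration only: the weights and posts are invented, and Facebook's
# actual MSI ranking formula is not public.
posts = [
    {"title": "Local charity fun run raises funds", "likes": 120, "reshares": 4},
    {"title": "Outrage-bait conspiracy claim",      "likes": 40,  "reshares": 90},
]

LIKE_WEIGHT = 1
RESHARE_WEIGHT = 30  # heavy weighting of reshared material

def score(post: dict) -> int:
    """Rank posts by a crude engagement score dominated by reshares."""
    return post["likes"] * LIKE_WEIGHT + post["reshares"] * RESHARE_WEIGHT

# The provocative post outranks the benign one despite far fewer likes,
# illustrating the incentive to "reorient toward outrage".
for post in sorted(posts, key=score, reverse=True):
    print(score(post), post["title"])
```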

 

The WSJ’s recent reporting suggests that, at the very least, the designated online safety regulator should have the legal powers to request internal documents and analysis from in-scope platforms, enabling the regulator to assess and understand the impact of platform policies and algorithms on individual users and the broader media environment.

 

 

Matt Rogerson

Director of Public Policy

Guardian Media Group

16th September 2021

 



[1] Examples include the Cambridge Analytica investigations, the Facebook Files, the Christchurch shootings and others.

[2] https://www.theverge.com/interface/2019/4/3/18293293/youtube-extremism-criticism-bloomberg

[3] https://www.theguardian.com/technology/2021/aug/14/facebook-research-disinformation-politics

[4] https://www.wsj.com/articles/senators-seek-answers-from-facebook-after-wsj-report-on-instagrams-impact-on-young-users-11631664695

[5] https://www.wsj.com/articles/facebook-files-xcheck-zuckerberg-elite-rules-11631541353

[6] https://www.wsj.com/articles/facebook-knows-instagram-is-toxic-for-teen-girls-company-documents-show-11631620739?mod=hp_lead_pos7&mod=article_inline

[7] https://www.wsj.com/articles/facebook-algorithm-change-zuckerberg-11631654215?mod=hp_lead_pos7

[8] https://t.co/sGXBbif93c?amp=1

[9] https://www.telegraph.co.uk/technology/google/9109239/Google-users-ignore-major-privacy-shakeup.html

[10] https://www.irishexaminer.com/news/arid-40697300.html

[11] https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/1003913/Digital_Competition_Consultation_v2.pdf

[12] https://www.theverge.com/2021/5/27/22456206/instagram-hiding-likes-experiment-results-platformer

[13] https://ico.org.uk/for-organisations/guide-to-data-protection/ico-codes-of-practice/age-appropriate-design-code/

[14] https://www.ofcom.org.uk/consultations-and-statements/category-1/guidance-vsp-harmful-material-measures

[15] https://www.nytimes.com/2020/08/01/business/media/facebook-boycott.html

[16] https://www.theguardian.com/technology/2020/jul/02/mark-zuckerberg-advertisers-boycott-facebook-back-soon-enough

[17] See para 4.156 of the CMA Online platforms and digital advertising market study - https://www.gov.uk/cma-cases/online-platforms-and-digital-advertising-market-study#final-report

[18] https://www.washingtonpost.com/technology/2020/11/19/can-not-quit-facebook/

[19] https://edition.cnn.com/2020/12/18/tech/unilever-facebook-twitter-advertising/index.html

[20] The human rights organisation Liberty, provides a useful summary of speech offences in the UK https://www.libertyhumanrights.org.uk/human-rights/free-speech-and-protest/speech-offences

[21] https://inews.co.uk/news/online-safety-bill-would-give-legal-basis-for-censorship-of-lgbt-people-stephen-fry-and-campaigners-warn-1178176

[22] https://climatefeedback.org/evaluation/guardian-article-on-arctic-methane-emissions-lacks-important-context-jonathan-watts/

[23] https://www.theguardian.com/technology/2019/apr/17/facebook-teams-with-rightwing-daily-caller-in-factchecking-program

[24] See for example, paras 1.10- 1.12 of the government’s response to the White Paper Consultation https://www.gov.uk/government/consultations/online-harms-white-paper/outcome/online-harms-white-paper-full-government-response

[25] For illegal content see Clause 41(2)(a), for content that is harmful to children see Clause 45(2)(a) and for content that is harmful to adults see Clause 46(2)(a).

[26] Clause 14(3)-(6).

[27] https://www.thenewhumanitarian.org/analysis/2021/01/27/brand-safety-ad-tech-crisis-news

[28] Defined in Clause 14(11) as the publisher itself.

[29] Clause 14(3)-(6).

[30] https://committees.parliament.uk/publications/6025/documents/68088/default/

[31] https://blog.confiant.com/fake-celebrity-endorsed-scam-abuses-ad-tech-to-net-1m-in-one-day-ffe330258e3c

[32] https://www.theguardian.com/media/2020/feb/27/martin-lewis-calls-for-publishers-to-act-over-fake-news-ads-mail-online

[33] For example, GNM procures services from Confiant for this purpose https://www.confiant.com/resources/blog/fizzcore-threat-actors

[34] https://www.gov.uk/cma-cases/online-platforms-and-digital-advertising-market-study#final-report

[35] https://ico.org.uk/media/about-the-ico/documents/2615156/adtech-real-time-bidding-report-201906.pdf

[36] https://www.asa.org.uk/make-a-complaint/report-an-online-scam-ad.html

[37] For example, the IAB Tech Lab - a US-based industry standards body within the wider IAB family of organisations - has recently developed two standards that, if adopted widely by the industry, could stimulate much better data sharing across the online advertising supply chain. First, buyers.json is a simple mechanism to allow DSPs to publicly share the names and identifiers of the buyers they represent, facilitating quick identification of threat actors when attacks occur. Second, DemandChain Object is a new feature within the industry real-time bidding protocol (OpenRTB) that would allow sellers to see all parties that were involved in the purchase of a creative.

[38] https://www.iabuk.com/goldstandard

[39] https://techcrunch.com/2020/12/14/uk-online-harms-bill-coming-next-year-will-propose-fines-of-up-to-10-of-annual-turnover-for-breaching-duty-of-care-rules

[40] https://www.wsj.com/articles/facebook-algorithm-change-zuckerberg-11631654215?mod=hp_lead_pos7

[41] https://www.wsj.com/articles/facebook-algorithm-change-zuckerberg-11631654215?mod=hp_lead_pos7

[42] https://www.wsj.com/articles/facebook-algorithm-change-zuckerberg-11631654215?mod=hp_lead_pos7