Written evidence submitted by Snap (OSB0012)

 

Summary


Full response

 

Thank you for the opportunity to respond to the Joint Committee’s call for evidence as part of its pre-legislative scrutiny of the draft Online Safety Bill.

 

The Government’s publication of the draft Bill marks an important step towards the introduction of regulation aimed at improving online safety, following a long journey that began with the Internet Safety Strategy Green Paper in 2017. Throughout this process, Snap has been in constructive dialogue with the Government about how to make regulation work in a way which is both effective in improving online safety and proportionate for the companies that need to put the rules into practice. This remains our aim today. We are grateful for the role that the Joint Committee is playing in evaluating this critically important Bill, and hope that you can help ensure that the legislation meets the Government’s objective of improving online safety, while also being practical given the significant variety in the size, resources and service models of different online platforms.

 

  1. Introduction: Snapchat and our approach to safety

 

Before we turn to our perspective on the draft Bill, it may be helpful to give the Joint Committee a brief introduction to Snap, Snapchat and our approach to safety. Snap Inc. is a camera and technology company that, as well as designing wearable video technology and augmented reality software, owns and operates the visual messaging application, Snapchat. While Snap is still a significantly smaller company than the established tech giants that have dominated online media for the past decade, we are growing, with 293 million people globally now using Snapchat every day.

 

Snapchat has intentionally been designed very differently to traditional social media. At a high level, two principles guide our design process: safety by design, which is about ensuring the safety of our community, and privacy by design, which focuses on data minimisation and protecting user data. Product counsel and privacy counsel are fully involved in the development cycle of every new product and feature at Snap, from the outset through to release. This up-front focus on safety and privacy by design is reflected in the build of Snapchat. Unlike traditional social media, Snapchat does not offer an open news feed where unvetted publishers or individuals have an opportunity to broadcast hate or misinformation, and we don’t offer public comments that may amplify harmful behaviour. Snapchat is at heart a visual messaging application, designed for private communications (either 1:1 or in small groups), with the aim of encouraging users to interact with their real friends, not strangers. Snapchatters’ friends lists are visible only to themselves, by default you cannot receive a message from someone you haven’t accepted as a friend, and you can never share your location with someone who isn’t your friend. Public areas of Snapchat - our Discover page for news and entertainment, and our Spotlight tab for the community’s best Snaps - are curated and pre-moderated, ensuring harmful content is not surfaced to large groups of people. Content on Snapchat is also designed to delete by default: messages and Snaps are deleted from our servers once they have been opened, while Stories are deleted after 24 hours. This limits how widely content can be shared.

 

The approach that we have taken - focusing on encouraging communication between close friends rather than strangers, and limiting virality and the ability of regular Snapchatters to broadcast content - contrasts with the approach taken by traditional social media platforms, which aim to serve as open “town square” style environments, encourage the public broadcast of user-generated content, and rely heavily on artificial intelligence and automated moderation to identify harmful activity.

 

Credible, external organisations can attest to the overall success of Snap’s approach. As an example, Snap is a signatory to the European Commission’s Code of Conduct on countering illegal hate speech online, which involves an annual collation of reports from NGOs specialising in the reporting of online hate speech. In the Commission’s June 2020 report evaluating compliance with the Code, the 39 participating NGOs submitted zero reports of hate speech on Snapchat; the imminent 2021 report will also show zero reports of this content on the platform.

 

  2. Snap’s perspective on online safety regulation

 

We support the case for effective online safety regulation, based around broad principles that companies of all sizes are able to follow and implement proportionately, as relevant to their service and risk profile. Regulation in this area is most effective when it focuses on the principles or outcomes companies should deliver, setting out “what” objectives are to be achieved, without being too prescriptive on “how” companies should achieve these. There is incredible variety in the size, resources and service models of different online platforms. A principles-based approach accommodates this variety and allows for innovative, effective approaches to be developed, while focusing on what is most important: the safety of users.

 

Where regulation goes wrong is where it becomes overly prescriptive and complex, focusing too much on process rather than outcomes, and assuming that there is a uniform “one size fits all” approach which will work for all online services. Ultimately, the companies best served by overly prescriptive, complex regulation are the largest firms, with the largest compliance teams, who can easily absorb the bureaucracy involved; smaller companies, and in particular start-ups and scale-ups, would really struggle to comply. The Competition and Markets Authority’s 2020 Market Study into digital advertising highlighted structural problems in UK digital markets, with certain companies identified as having unassailable market power. Prescriptive regulation risks exacerbating these imbalances by disproportionately harming smaller challenger companies and strengthening the advantages of the largest players.

 

The Government has rightly acknowledged this, and has repeatedly committed to proportionate regulation that supports innovation and competition. Our feedback on the draft Bill aims to ensure that this model works in practice, and to steer the legislation towards a principles-based regulatory framework rather than an overly complex, prescriptive and bureaucratic system.

 

  3. Snap’s overall perspective on the Online Safety Bill

 

Snap has long supported the central planks of policy outlined by the Government in its Online Harms White Paper and subsequent responses: particularly the establishment of a regulatory framework based on a statutory duty of care to ensure that platforms take responsibility for the safety of their users, implemented and enforced by an independent regulator. We believe that this model, if implemented in a principles-based way, has the flexibility to allow for different and innovative approaches to ensuring online safety, while ensuring that the regulator remains focused on the overall priority of keeping people safe - and giving it the power to act if platforms are falling short in their responsibilities. We welcome the proposed role for Ofcom, which has the reputation of a trusted, credible and well-resourced regulator in relevant adjacent markets, and which is rapidly building its knowledge and understanding of online platforms through the implementation of new “video-sharing platform” (VSP) regulation. 

 

We also support the focus that the Government has placed on the overall systems and processes that online platforms have in place to keep users safe, rather than focusing on their responses to individual pieces of content. This is a sensible distinction, in keeping with a principles-based approach to regulation.

 

Overall, the core safety duties set out in the draft Bill - described as “the illegal content duties,” “the duties to protect children’s online safety,” and the “adults’ risk assessment duties” - represent a strong basis for effective regulation. Under these duties, platforms would be required to take proportionate steps to manage and mitigate the risk of harm to users, and to use proportionate systems and processes to counter harmful or illegal content, or prevent children from encountering such content. We consider that these duties, taken together, represent an achievable “duty of care” to hold platforms to.

 

Unfortunately, however, in a departure from the Government’s previously stated policy, the draft Bill seeks to go much further than articulating an overarching statutory duty of care for online platforms focused on online safety. Instead the Bill seeks to establish a complex range of additional duties and requirements. In most cases, these appear more focused on process - producing multiple different types of risk or impact assessment, or implementing new complaints procedures, for example - than on user safety. This approach creates the risk of a complex and burdensome regulatory framework which will disproportionately impact smaller challengers, and thus the competitiveness of the digital marketplace overall, without achieving material additional safety benefits for users.

 

  4. The administrative and compliance burden presented by the Bill

 

The draft Bill defines two main types of service in scope of regulation: “user-to-user services” and search engines. For “user-to-user services,” there are three possible layers of requirements: those which apply to all services, those which apply to services likely to be accessed by children, and those which apply to “Category 1” services, which meet to-be-determined thresholds set by the Secretary of State for DCMS in relation to their size and functionalities. As above, we consider that the core safety duties articulated for each category, if combined, could represent a practical “duty of care” which would stand as a good basis for effective online safety regulation.

 

However, the draft Bill goes much further. The summary below represents our attempt to capture some of the different administrative and compliance requirements set out for different categories of user-to-user service in the draft Bill. By our reading, if a company providing a user-to-user service is determined to be “likely to be accessed by children” and “Category 1” under the proposed framework, it would be required to produce five separate risk and impact assessments, to comply with multiple different codes of practice, to implement and operate two new types of complaints procedure, and to produce transparency data in a format set by Ofcom. The overwhelming impression is of the creation of an enormous amount of process, rather than of anything that ensures users are actually safer.

 


Non-exhaustive summary of the different administrative and compliance requirements for user-to-user services set out in the draft Online Safety Bill

 

Risk assessments:

- All user-to-user services: must produce an illegal content risk assessment

- Services likely to be accessed by children: must produce a children’s risk assessment

- Category 1 services: must produce an adults’ risk assessment

Freedom of expression and privacy impact assessments:

- Category 1 services: must produce impact assessments on the protection of freedom of expression and privacy

Journalistic content complaints process:

- Category 1 services: must offer an expedited complaints process to those who have had content removed, if a person considers their content is journalistic in nature

Reporting and redress (complaints procedure):

- All user-to-user services: must operate a complaints procedure, including allowing complaints from users who have had content taken down or been suspended from using a service

Record keeping and review requirements:

- All user-to-user services: must keep written records of risk assessments and of steps taken to comply with duties, and regularly review compliance

Assessment on access by children:

- All user-to-user services: must carry out an assessment of whether it is possible for children to access the service and, if access is possible, whether the “child user” condition is met

Codes of practice:

- All user-to-user services: must comply with codes of practice on terrorism and child sexual exploitation, and will likely need to comply with additional Ofcom codes on specific harms

Transparency reports:

- All user-to-user services: must produce a transparency report including information set by, and in a format specified by, Ofcom

 

We have set out earlier how overly prescriptive and complex regulation risks entrenching the advantages of the largest, most dominant players. Unfortunately the heavy administrative and compliance burden proposed in the Bill seems designed for and tailored towards the very largest and most profitable companies, with the largest compliance teams, rather than considering the impact on smaller challengers.

 

Snap recommendation: The Government should seek to simplify and rationalise what is expected of platforms, based around a central duty of care to ensure the safety of users. Ofcom’s assessment as to whether a platform is meeting its central duty of care - in ways which are proportionate and achievable given the platform’s size, service model, resources and overall level of risk - should be the key determinant in whether or not a platform is deemed compliant with regulation. 

 

While some supplementary duties, such as producing regular transparency reports, may be deemed a useful indicator as to whether a platform is meeting its responsibilities, the number one criterion should be a platform’s efforts to protect its users, rather than how many assessments it produces or what records it keeps. This outcomes- and principles-based model would be scalable and achievable for a wide range of platforms and services, rather than geared simply towards the largest companies.

 

  5. Setting thresholds to determine “Category 1” platforms

 

In the draft Bill, several requirements apply solely to “Category 1” platforms. Such platforms will be required to take action against content which is legal but deemed harmful to adults, among other duties in relation to freedom of expression and journalistic content. But detail as to what constitutes a Category 1 platform is very limited, with much apparently left to the discretion of the Secretary of State for DCMS, who will specify the “threshold conditions” which will help determine whether a platform is Category 1. There is little information on how these thresholds will be set, although the Government has said they may relate to the size of a service’s audience and the “functionalities” it offers.

 

Given the significance of the demands being placed on Category 1 platforms, providing clarity on these thresholds is extremely important. Understanding whether or not a platform is likely to be considered Category 1 will be vital to long-term planning for compliance with regulatory requirements. We propose that the Bill should make Ofcom responsible for publishing clear thresholds for Category 1. As the organisation responsible for implementing and enforcing the regulatory framework, Ofcom, rather than the Secretary of State, should have this responsibility. Thresholds could be set on a time-limited basis and revisited every two years. 

 

We propose, in line with the Government’s thinking, that determining a threshold should be based both on quantitative data - the number of daily active users that a platform has in the UK - and qualitative criteria. Importantly, the qualitative side should reflect the ability of a platform to limit harmful content, in order to encourage good behaviour and safe design of all platforms. In this area, Ofcom could take inspiration from existing good practice internationally, including the Australian eSafety Commissioner’s mandatory Safety by Design Code. In line with such an approach, we recommend that if a platform takes strong mitigating steps in terms of ensuring user safety, this should be an important determinant as to whether it avoids Category 1 designation. This would incentivise more companies to take good faith steps to make their platforms safer through additional monitoring, curation or privacy- and safety-by-design principles to significantly reduce the volume and severity of illegal and harmful content surfaced to users.

 

Snap recommendation: The Online Safety Bill should make Ofcom responsible for establishing clear thresholds to determine whether platforms should be considered Category 1. Ofcom should develop a dual approach to setting these, combining quantitative and qualitative criteria:


 

  1. Quantitative: there should be a minimum threshold of daily active users, equivalent to 15% of the UK population, for a platform to be deemed Category 1. Ofcom should explain the methodology used to calculate this figure.

 

  2. Qualitative: Ofcom should establish a criterion based on a company’s ability to meet best-in-class standards for embedding safety and privacy in service design, content curation and moderation. If a platform achieves high scores in these areas (as assessed by Ofcom) and is considered to be taking the strongest good faith mitigating steps, it would not be considered Category 1.

 

  6. Freedom of expression and complaints

 

A notable policy change in the Government’s thinking on online harms and safety was the focus on freedom of expression set out in its full response to the Online Harms White Paper, published in December 2020. In the foreword to that paper, the Secretaries of State for DCMS and the Home Office committed that the legislation would “protect freedom of expression”, with the crucial tool for achieving this seemingly being the creation of mechanisms for users to “object if they feel their content has been removed unfairly.”

 

This political focus on freedom of expression is evident throughout the draft Bill. While there are various mooted requirements in relation to freedom of expression (including another impact assessment), the critical requirements relate to complaints. All user-to-user services will be required to operate a complaints procedure, allowing complaints from users who have had content taken down or been suspended from using a service. Category 1 platforms will be required to offer an expedited complaints process to those who have had content removed, if a person considers their content is journalistic in nature.

 

This approach seems focused on traditional social media platforms which are designed as open, town square style environments for people to publicly broadcast their views, rather than being appropriate for all platforms and services. As we have set out, Snapchat is a very different service to traditional social media: we curate and moderate the public areas of the app to prevent harmful or illegal content from being surfaced there. Our Community Guidelines are very clear about what types of content or activity are prohibited on Snapchat. If any individual or publisher is found violating these Guidelines, which are publicly viewable online and which apply to all content on Snapchat, there is no “freedom of expression” argument that would prevent us from taking appropriate action against the offending account.

 

Article 10 of the European Convention on Human Rights, given effect in UK law by the Human Rights Act 1998 and often quoted in discussions about freedom of expression, protects people’s right to express themselves freely without interference from governments. It is not intended to force private sector actors to adopt certain business models or practices over others. While the Government can rightly expect platforms to have clear and transparent terms and Community Guidelines, and to enforce these fairly, we do not consider that the Government should attempt to influence the development of these.

 

More broadly, we are concerned that the Government’s proposed requirements around complaints - which seem designed to prompt a complaint any time someone has content removed or their account suspended for violating Guidelines intended to keep users safe - risk encouraging large numbers of spurious or vexatious complaints. We are not aware of parallels to such systems in other prominent forms of media; TV or radio broadcast channels, for example, do not consistently remind viewers of the process for making complaints to Ofcom over the course of a day’s broadcasting.

 

Nudging users towards making complaints in this manner also risks creating significant amounts of additional work for Trust & Safety teams who would otherwise be focused on keeping platforms safe. The net result would be to bog operational staff down in process and bureaucracy, to the detriment of user safety. If, after consulting the online resources available, users’ questions, queries or concerns still lead them to want to make a complaint, platforms should ensure that there is the functionality for them to do so, and processes to deal with complaints promptly. But there is a difference between this and nudging people towards making complaints as their first action.

 

Snap recommendation: The Online Safety Bill should make clear that while platforms should consider the impact of processes and policies on freedom of expression, their key obligation in this field should be to develop clear and transparent terms of service and Community Guidelines, and to enforce these fairly. While the Bill should require platforms to enable users to get answers to their questions, or to raise pressing concerns to platforms, including in writing where needed, the Government should dispense with the proposed requirement to prompt users to make a complaint any time action is taken against their content or account.

 

  7. Transparency reporting

 

The Bill proposes that in-scope services must produce an annual transparency report, in a format specified by Ofcom. The list of information that Ofcom may seek from platforms, as set out in the Bill, is exhaustive. Rather than the quantitative, comparable data that has formed the basis of transparency reports on major platforms in recent years (e.g. the number of pieces of harmful content and accounts that platforms have received reports of, or taken action on, across different categories of harm), the Bill seems more focused on contextual information. For example, Ofcom may require platforms to set out information on their “functionalities to help users manage risks relating to harmful content,” or “information about the systems and processes a provider has in place to direct users to information about how they can protect themselves from illegal or harmful content.”

 

The combined asks represent significantly more transparency information than even the largest and most profitable online platforms currently provide. Implementing this entire package would simply not be feasible for smaller companies. More broadly, there is a real question about how useful such an approach would actually be for regulators, Government departments and academics. The danger of this context-heavy approach, rather than one which focuses on quantitative information, is that companies will end up producing long and verbose documents, without useful comparable information, which no-one will want to read in any detail.

 

Snap currently goes significantly further than most major platforms in its approach to transparency reporting, by providing country-specific breakdowns of our response to different kinds of harmful activity, including for the UK. Our reports focus on key, comparable quantitative information about our response to different types of harmful content. Getting to this stage has been an iterative process: building the tools, systems and teams that enable users to report harmful content, ensure we act on it quickly, and then set out our response took several years. Many platforms, particularly smaller ones, will be at a much earlier stage in their development. The Government should not let the perfect be the enemy of the good: rather than asking platforms to provide an exhaustive range of information which will be beyond most companies’ capabilities, the draft Bill should focus on encouraging platforms to provide the most critical quantitative information in an accessible and easily comparable manner. Those platforms that are not yet set up to do this should be required to work with Ofcom to set out a road map for doing so.

 

Snap recommendation: The Online Safety Bill should give Ofcom the ability to ensure platforms provide quantitative, comparable information on the prevalence of harmful content on their services and their response to it. The Government should drop proposals to mandate that platforms provide the further categories of information outlined in the draft legislation.

 

  8. Ensuring the independence of Ofcom

 

A critical strength of the Government’s overall online safety policy is empowering a credible, independent regulator that is able to implement a coherent framework, free from the immediate political objectives of the Government of the day. As set out earlier, we very much support the proposed role for Ofcom as a credible and trusted regulator in this space.

 

It is important to ensure that this independence is upheld in legislation. The draft Bill and its explanatory notes contain very few references to Ofcom’s independence, and indeed the draft legislation seeks to confer several key regulatory powers on the Secretary of State for DCMS. As an example, the draft Bill proposes that Ofcom must seek sign-off of its codes of practice from the Secretary of State, who may direct Ofcom to modify them. This risks politicising the regulatory process and directly impinges on Ofcom’s independence. Ofcom must have the ability to establish and implement regulation free from political interference, in order to build a regulatory framework which will have credibility across the internet sector and internationally.

 

Snap recommendation: The Online Safety Bill should explicitly acknowledge Ofcom’s independence from Government in crafting and implementing regulation. The Government should relinquish proposed powers for the Secretary of State to sign off on or direct Ofcom to modify its codes, and any other proposals which impinge on the regulator’s independence.

 

  9. Conclusion

 

Thank you for the opportunity to submit written evidence to the Joint Committee as it conducts its pre-legislative scrutiny of the draft Online Safety Bill. This scrutiny is critical to ensuring the success of online safety regulation in the UK, and we are grateful for the Joint Committee’s work. 

 

We hope that this response has been helpful in making a clear case for a principles-based regulatory framework, aligned around a central, statutory duty of care for platforms to ensure the safety of their users, and implemented and enforced by an independent regulator. We consider that this - the original core of the Government’s online safety policy proposals - represents the best model for regulation that will have a genuine impact in improving online safety, while also being practical and proportionate for a wide range of online platforms. We urge the Government to reduce the range of superfluous, prescriptive requirements proposed in the Bill, which will do little to make users safer and risk creating a complex regulatory environment that will only serve to entrench the dominant positions enjoyed by the largest market players.

 

September 2021

 

 

 
