Written evidence submitted by Glassdoor (OSB0033)
Executive Summary
- Glassdoor is a platform for employees to share their authentic workplace experiences. It provides job seekers with critical and otherwise hard-to-find information about prospective places to work. By hosting these anonymous workplace reviews, Glassdoor also empowers job seekers with detailed information as they consider a critical decision that greatly impacts their lives: where they work. Providing this information supports job seekers in the UK and worldwide in finding jobs and companies they will love. In many cases, users would not be willing or able to post frank, honest and informative reviews without anonymity.
- The ability to post anonymously serves a dual purpose: it a) enables reviews that are authentic, transparent and honest, and b) protects users from being identified and thereby from potential retaliation or reprisal by vengeful employers following critical reviews. Simply put, platforms like Glassdoor work because anonymity gives users sufficient protection to share honest reviews of their workplace experiences. The Bill should neither place new limits on online anonymity nor mandate the use of specific technology to verify the identity of users. Any such attempt risks threatening the safety of the internet’s most vulnerable users (such as workers leaving candid reviews of their employers), hindering the internet as a place for creative and open dialogue, and creating a stricter identity regime for the online world than exists in the offline world.
- Glassdoor fully supports the Government’s commitment to ensure the safety of citizens online. But for the Bill to be enforceable, clarity is needed on exactly which content will fall within its scope, specifically content that the Draft Bill describes as legal but harmful. In its current form, the Draft Bill does not sufficiently define content which is legal but harmful to children and adults, instead placing the responsibility for making this determination onto online platforms and service providers. The Bill should be explicit in its determination of what constitutes acceptable and unacceptable content, particularly around content which may be considered legal but harmful for children and adults. These definitions should be explicit, non-subjective and should avoid banning online what is legal offline.
- The Draft Online Safety Bill is an ambitious legislative proposal. It seeks to regulate many different types of service, each of which poses very specific challenges and does not easily lend itself to horizontal rules or a one-size-fits-all structure for keeping people safe online. However, as it currently stands, the Draft Bill neither specifies threshold conditions for categories of regulated services nor provides clarity on how each of these criteria will be weighted. The final Bill should set out the precise category thresholds that will determine inclusion in the Category 1, 2A and 2B umbrellas.
About Glassdoor
- Glassdoor was founded in June 2007. We are an online platform that enables employees to freely share information describing what it is really like, in their personal experience, to work at an organisation. On an anonymous basis, Glassdoor users share their opinions by posting job reviews and salary information on the Glassdoor platform. These workplace experience reviews and salary insights - of which there are nearly 100 million relating to over 1.7 million companies worldwide - provide an unprecedented, authentic and candid look at what workplace conditions and company cultures are really like from the perspective of those who know a company best: the employees themselves.
The importance of anonymity
- Glassdoor is a platform for employees to share their authentic workplace experiences. It provides job seekers with critical and otherwise hard-to-find information about prospective places to work. By hosting these anonymous workplace reviews, Glassdoor also empowers job seekers with detailed information as they consider a critical decision that greatly impacts their lives: where they work. Providing this information supports job seekers in the UK and worldwide in finding jobs and companies they will love. In many cases, users would not be willing or able to post frank, honest and informative reviews without anonymity.
- While many companies treat their employees well and compensate them fairly, some do not, and too many employees feel they are not paid justly or treated appropriately and respectfully in the workplace. These employees deserve a level playing field on which to voice their opinions, and this is where online anonymity plays a vital role. The ability to post anonymously serves a dual purpose: it a) enables reviews that are authentic, transparent and honest, and b) protects users from being identified and thereby from potential retaliation or reprisal by vengeful employers following critical reviews. Simply put, platforms like Glassdoor work because anonymity gives users sufficient protection to share honest reviews of their experiences at work.
- We are committed to protecting our users’ identities. A loss of anonymity could place users’ livelihoods, reputations and economic wellbeing at risk. In recognition of this, we have elected to stand behind users who leave reviews and, where appropriate, have engaged in legal action to protect their anonymity in more than 100 cases.[1]
- In the offline world, anonymity has long been a means by which individuals can freely enjoy their right to impart and receive information. This is something we take for granted: individuals are free to enter public spaces, walk down the street, enter shops, and engage in conversation and debate, all without having to share or verify their identity.
- The use of pseudonyms, noms de plume and pen names to conceal an author's identity has been common throughout history. There are many examples of female authors, including Mary Ann Evans (George Eliot) and Nelle Harper Lee (Harper Lee), adopting male or gender-ambiguous pseudonyms to ensure their work would be taken seriously. Similarly, the benefits of anonymity for in-person support groups and forums are well accepted. Groups such as Alcoholics Anonymous and Narcotics Anonymous allow individuals to speak freely, without verifying their identity and without being stigmatised, which benefits recovery.[2]
- This extends to the online world. For example, online anonymity has proved a powerful tool for those in LGBTQ+ communities, enabling authentic and honest expression. For those who may not be ‘out’ to their family and friends, anonymity provides them with the freedom to explore their identity and obtain support in a way that is safe and comfortable. Indeed, the Stonewall School Report (2017) found that nine in ten LGBTQ+ young people felt they could be themselves online because of the protection of their anonymity.[3]
- There are strong arguments for the right to remain anonymous in the case of whistleblowers, or of individuals seeking to speak out against authoritarian and repressive regimes, where doing so would put the individual in question at considerable risk. The Electronic Frontier Foundation (EFF) has documented several instances of activists relying on online anonymity to avoid persecution, including the Egyptian activist Wael Ghonim, whose anonymously administered social media page ‘We Are All Khaled Said’ was highly critical of the incumbent regime.[4]
- Beyond safeguarding the identities of vulnerable individuals, the freedom to be anonymous has helped create some of the most positive aspects of the online world: it serves as an outlet for creativity, fosters new communities and facilitates debate. Anonymity can allow users to explore their identity, find support, and challenge authority in a safe environment without fear.
- Debates around online anonymity have traditionally centred on two distinct arguments: one which maintains that anonymity encourages and facilitates abusive behaviour online, and another which asserts that anonymity protects the internet’s most vulnerable and at-risk users. The opposing nature of these two positions ignores a critical middle ground: that anonymity legitimately protects and benefits people even if they are not vulnerable, at-risk or in physical danger. It also misses that anonymity enables some of the most positive aspects of the online world, encouraging creativity and expression. In the case of Glassdoor, anonymity levels the playing field on workplace-related issues: it allows employees to give honest assessments of employers with far greater resources, without suffering undue personal consequences.
- Further, the removal of online anonymity would not be the silver bullet it is often described to be. Following the abuse on its platform in the wake of the UEFA Euro 2020 Final, Twitter conducted an analysis of the offensive content. This analysis concluded that identity verification would have been unlikely to prevent the abuse, as 99 per cent of the accounts ultimately suspended had identifiable owners.[5]
- Given the above, the current form of the Draft Bill is correct neither to place new limits on online anonymity nor to mandate the use of specific technology to verify the identity of users. Online anonymity and the protections it affords are critically important to the internet as a powerful democratic forum characterised by freedom and openness. What is permissible offline should also be permissible online, and this extends to the freedom to be anonymous.
- Any inclusion in the Bill that would mandate user verification and remove the anonymity of users online presents a threat to open, critical discussion. It would reduce the amount of information available, depriving individuals of key information they need to make some of the biggest decisions in their lives. For this reason the Bill should, under no circumstances, require that users be identifiable from the content which they post online.
Clarifying definitions of harmful content
- Glassdoor fully supports the Government’s commitment to ensure the safety of citizens online. The safety of our users is a priority for Glassdoor, and this is reflected in our policies, Community Guidelines and Terms of Use which govern the use of our platform. These policies make clear what activity is acceptable and unacceptable on the Glassdoor platform, specifically prohibiting reviews featuring profanities, threats of violence, discriminatory language targeted at an individual or group, and even the identification of any individual other than C-suite executives (who, by virtue of their senior positions in an organisation, significantly shape workplace culture). Similarly, we reject reviews that do not relate to an employer or a workplace experience, or that are otherwise not relevant to understanding a workplace culture. Our Community Guidelines and moderation policies also take into consideration applicable laws in the different international markets in which Glassdoor operates.[6] [7]
- To ensure that content strictly adheres to our Community Guidelines and Terms of Use, posts on the Glassdoor platform are subject to a multi-tier review and moderation process. As part of this process, content is first analysed using proprietary technology that assesses multiple attributes of the content. If content fails this technological review, a team of human moderators is engaged to determine whether the content is consistent with the Community Guidelines. In other words, no content appears on Glassdoor without going through this moderation process. Additionally, and perhaps most significantly, any content that is flagged by a user or employer as objectionable is assessed anew by human moderators. Overall, if a piece of content meets the Glassdoor Community Guidelines, it is posted on the Glassdoor site; if it does not, it is rejected and never appears. Approximately 5-10 per cent of the content submitted to us is rejected because it does not meet our Community Guidelines. A schematic sketch of this tiered flow follows below.
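The sketch below is illustrative only: the function names, placeholder terms and decision logic are hypothetical and stand in for, rather than describe, our proprietary moderation technology.

```python
from dataclasses import dataclass

@dataclass
class Review:
    text: str
    flagged: bool = False  # set when a user or employer reports published content

def automated_check(review: Review) -> bool:
    """Tier 1: an automated screen standing in for the proprietary
    technology that assesses multiple attributes of submitted content.
    Reduced to a trivial keyword test purely for illustration."""
    placeholder_terms = {"profanity", "threat"}  # hypothetical rules
    return not any(term in review.text.lower() for term in placeholder_terms)

def human_review(review: Review) -> bool:
    """Tier 2: human moderators decide whether the content is consistent
    with the Community Guidelines. Stubbed out here; a real system would
    queue the review for a moderator and return their decision."""
    raise NotImplementedError("queue for a human moderator")

def moderate(review: Review) -> bool:
    """Return True to publish, False to reject. Every submission passes
    through this pipeline before it can appear on the site."""
    if automated_check(review):
        return True
    return human_review(review)  # failed submissions are escalated, not auto-rejected

def handle_flag(review: Review) -> bool:
    """Published content flagged as objectionable is assessed anew
    by human moderators."""
    review.flagged = True
    return human_review(review)
```

The key point the sketch captures is that the automated tier never rejects content on its own: content that fails the technological review is routed to human moderators, and a flag from a user or employer always triggers fresh human review.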
- For the Bill to be enforceable, clarity is needed on exactly which content will fall within its scope, specifically content that the Draft Bill describes as legal but harmful. In its current form, the Draft Bill does not sufficiently define content which is legal but harmful to children and adults, instead placing the responsibility for making this determination onto online platforms and service providers.
- Harmful content is demonstrably difficult to define, the term ‘harm’ itself being contextual, legally ambiguous and potentially culturally subjective. The Draft Bill requires service providers to make a multitude of highly impressionistic judgments in relation to its current definition of harm, including on what constitutes “reasonable grounds”, “material risk”, “direct or indirect” harm, a “significant adverse impact”, and a person “of ordinary sensibilities”.
- Placing the onus on service providers to make these judgments, given these sensitivities, risks producing inconsistent interpretations of harmful content and ineffective action amongst different service providers, each with different functionalities and capabilities. This would undermine the Draft Bill’s ambition to create a consistently safe online environment. It may also create a disparity between legally acceptable content in the offline and online worlds, an approach which has not yet been fully explored in the Law Commission’s work on reforming communications offences.[8]
- The inclusion within the Draft Bill of clauses which recognise the importance of freedom of speech is welcome. The need to protect individuals giving honest accounts is well documented, for example in academic reviews or in the protection of journalistic sources. However, as drafted, these requirements create conflict within the draft framework.
- The Draft Bill places upon platforms the responsibility of screening and approving content, while also asking that they balance duties connected to freedom of expression. This conflict between subjective harms for adults and an enforceable duty to consider freedom of expression was noted by a recent House of Lords inquiry, ‘Free for all? Freedom of expression in the digital age’. The report concluded that this conflict could only be resolved by setting out unacceptable content explicitly in primary legislation.[9]
- Further, this balancing of freedom of expression and safety risks being academic for the majority of companies. The safety obligations of the Draft Bill are expressed to be “duties”, while for Category 2B companies the freedom of expression obligations are merely to “have regard to its importance”. There is a strong risk that process-based safety duties will win out when companies are faced with the task of determining where the greater regulatory risk lies.
- In order for platforms and service providers to enforce rules on their services, and for the Government to ensure freedom of expression online is safeguarded, the Bill should be explicit in its determination of what constitutes acceptable and unacceptable content. This will support companies in their content moderation efforts. These definitions should be free from subjectivity and should avoid banning online what is legal offline. They are key to the proper functioning of the Bill, and for this reason unacceptable content should be defined within the primary legislation as part of the final Bill.
- The most effective way for the Bill to set out, in explicit terms, which content is and is not acceptable is to adopt a focused and accepted definition free from subjectivity. The only workable definition of harmful content is likely to be content which is clearly defined as illegal. This clarity will help businesses to comply and Ofcom to enforce, and will lead to less litigation over the interpretation of the legislation.
Setting category thresholds
- The Draft Online Safety Bill is a welcome and ambitious legislative proposal. It seeks to regulate many different types of service, each of which poses very specific challenges and does not easily lend itself to horizontal rules or a one-size-fits-all structure for keeping people safe online. The inclusion of Category 1, 2A and 2B designations within the Draft Bill acknowledges this.
- However, the Draft Bill does not set out specific thresholds, whether for company size, number of users or functionality, to determine company categorisation. Nor does it indicate the likely weighting of these determining characteristics. The Draft Bill instead stipulates that the threshold conditions will be set out in subsequent regulations prepared by the Secretary of State for Digital, Culture, Media and Sport and Ofcom. This presents two challenges. First, it makes it difficult to judge the appropriateness of the proposed online safety regime. Second, it deprives online services of the clarity they need to prepare for new online safety duties.
- To resolve these issues, the final Bill should set out the precise category thresholds that will determine inclusion in the Category 1, 2A and 2B umbrellas, spelling out which defined characteristics will lead to a business’s categorisation and how those characteristics will be weighted.
- Currently, the Draft Bill places additional obligations on service providers “likely to be accessed by children”. A service is defined as likely to be accessed by children if “it is possible for children to access the service or any part of it”.[10] This definition would benefit from more focused wording, as it will otherwise capture the overwhelming majority of services: a service will only be excluded if it is age-gated. This means that nearly all Category 2B businesses will either need to age-gate or bear a significant and unjustified administrative burden, undermining the Draft Bill’s graduated categorisation approach, which is intended to limit the administrative burden on smaller and lower-risk services.
- The Draft Bill’s attempt to clarify for service providers whether the child user condition is met in relation to their service is equally vague. It currently stipulates that the child user condition is met if “there are a significant number of children who are users of the service or of that part of it”. With the term “significant” left undefined, this guidance is insubstantial. The final Bill would benefit from a clearly defined threshold for the number of child users necessary to meet the condition.
Recommendations
- Glassdoor supports the Government’s intention to create safer online spaces, and the Draft Online Safety Bill is a good first step towards achieving this. However, in order to create a truly workable online safety regime, the final Bill should include the following measures on anonymity, clarifying definitions of harm and setting category thresholds:
- Anonymity: The Bill should neither place new limits on online anonymity nor mandate the use of specific technology to verify the identity of users. Any such attempt risks threatening the safety of the internet’s most vulnerable users and hindering the internet as a place for creative and open dialogue. It also risks harming job seekers by depriving them of valuable information, and creating a stricter identity regime for the online world than exists in the offline world.
- Clarifying definitions of harmful content: The Bill should be explicit in its determination of what constitutes acceptable and unacceptable content, particularly around content which may be considered legal but harmful for children and adults. These definitions should be explicit, non-subjective and should avoid banning online what is legal offline.
- Setting category thresholds: The final Bill should provide clarity to businesses by setting out the precise category thresholds that will determine inclusion in the Category 1, 2A and 2B umbrellas.
- We would be happy to provide further detail and to expand on these points for the Joint Committee, either in writing or in oral evidence.
September 2021
[1] https://www.glassdoor.com/about-us//app/uploads/sites/2/2021/08/Legal-Fact-Sheet-August-2021-1.pdf
[2] https://dl.acm.org/doi/10.1145/3134726
[3] https://www.stonewall.org.uk/school-report-2017
[4] https://www.eff.org/deeplinks/2011/07/case-pseudonyms
[5] https://twitter.com/TwitterUK/status/1425035343708016641
[6] https://www.glassdoor.co.uk/about/terms.htm
[7] https://help.glassdoor.com/s/article/Community-Guidelines?language=en_US
[8] https://www.lawcom.gov.uk/project/reform-of-the-communications-offences/
[9] https://publications.parliament.uk/pa/ld5802/ldselect/ldcomuni/54/5408.htm
[10] https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/985033/Draft_Online_Safety_Bill_Bookmarked.pdf