Written evidence submitted by Reddit (OSB0058)

 

Founded in 2005, Reddit is an online platform whose mission is to bring community and belonging to everyone in the world. Reddit is structured to enable authentic, moderated, community-based conversations in a way that empowers users to take the leading role in platform governance in a system that resembles a distributed digital democracy. Reddit serves more than 52 million users worldwide daily. While about half of these are located in the United States, the UK forms the company’s second largest user base. Headquartered in San Francisco, Reddit opened its first UK office in London in September 2020.

 

  1. Executive Summary

1.1.            Reddit’s safety-by-design approach entails a structure that differs from most social media platforms in that it is organised into communities primarily governed by the users themselves. This community structure informs every way we think about safety and must be accounted for in regulation so as to achieve an outcome that is proportionate and preserves diversity in the digital ecosystem.

1.2.            More clarity and consideration are needed in determining the tiered categorisation of platforms envisioned by the legislation. We suggest taking factors such as employee numbers and financial turnover into consideration in addition to raw user numbers. The definition of a user also needs more clarity to account for the differences between registered and unregistered users, and for how platform design can mitigate the purported risks of high user numbers.

1.3.            Algorithmic content moderation measures have both benefits and drawbacks, and thus should not be seen as a panacea to be mandated.

1.4.            Anonymity is essential to safety and enables vulnerable communities to express themselves and seek support. It should be protected.

 

  2. Introduction

2.1.            Reddit appreciates the UK’s goal of being the safest place in the world to be online. With that in mind, we welcome the opportunity to engage in feedback on the Online Safety Bill from our point of view as a smaller, differently structured service distinct from the larger platforms.

 

2.2.            Reddit is a medium-sized, privately held company with just over a thousand employees, roughly double our staff size a year ago. Our mission is to bring community and belonging to everyone in the world. To do so, Reddit provides a platform for people to create self-governing communities of shared interest, called subreddits, covering an extremely diverse range of subjects, from science and crafts to sports and family. These communities institute distinctive, individualised rules to keep discussions on topic and protect users from harm. This devolved community governance structure is the heart of our safety-by-design moderation approach, which is unique in the way it scales to mitigate risk by empowering the users themselves to play the central role in their own governance. These user efforts are supported by tailored and proportionate processes and systems administered by Reddit employees (known as “admins”) where appropriate. Content moderation thus happens through a layered, community-driven approach akin to a democracy, wherein everyone has the ability to vote and self-organise, follow a set of common rules, establish community-specific norms, and ultimately share some responsibility for how the platform works.

 

2.3.            The first part of this submission details Reddit’s approach to content moderation and presents an assessment of some crucial themes of the UK Draft Online Safety Bill. In the second part, we turn over a portion of this submission to some of our volunteer user moderators from the r/MentalHealthUK[1] subreddit, who will share in their own words how they have approached safety and moderation in a way tailored to their specific community needs.

 

  3. Reddit’s Community Structure and Approach to Content Moderation

3.1.            We believe that the future UK regulatory framework should support a diverse digital ecosystem, with space for alternative approaches to safety and moderation tailored to a platform’s unique design and business model. To this end, we appreciate the flexibility that the Bill envisions, and we hope that the legislative process will remain faithful to this goal and avoid mandating practices that only make sense for platforms of a certain structure. While much of the public conversation on online safety focuses on centralised structures and top-down platform actions, Reddit’s decentralised, community-based moderation structure, described below, is an example of a competing industry paradigm that the proposed regulatory structure should accommodate.

 

3.2.            Subreddits and Volunteer Community Moderators

3.2.1.            Reddit’s approach to content moderation is layered and decentralised, and can be compared to a federal system. In contrast to most well-known social media sites, where the basic unit of engagement is the individual, who interacts with “friends” or “followers,” the main interactive unit on Reddit is the community, or “subreddit.” These communities are topically based and created by the users themselves. Each community on Reddit has its own rules. These are unique to the subject of the particular community, and in many cases can be extensive. For example, the r/science subreddit requires that posts link only to published, peer-reviewed research.

 

3.2.2.            These community-level rules are set and enforced by volunteer users known as moderators. Moderators can enforce rules through both automated and manual means, such as removing a post or comment, preventing the upload of certain types of media, filtering certain words or domains, or banning individual users from posting. These volunteer moderator actions happen without the involvement of Reddit, Inc., and form the vast majority of content moderation decisions on the platform, as noted in our annual Transparency Report[2]. In this way, Reddit’s community structure by its very design encourages users to take ownership of appropriate norms, culture, and behaviour.
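To illustrate the principle of this community-level enforcement, the sketch below (in Python) models a simplified version of the kind of automated rules a volunteer moderator might configure. The names, fields, and actions shown are hypothetical examples chosen for clarity; they do not represent Reddit’s internal systems or the actual configuration format of tools such as Automoderator.

```python
# Illustrative sketch only: a simplified model of the kind of automated,
# community-level rule enforcement a volunteer moderator might configure.
# The names, rules, and actions here are hypothetical and do not represent
# Reddit's internal systems or the real AutoModerator configuration format.

def moderate_post(post, rules):
    """Return an action for a new post: 'remove', 'filter' (hold for human
    review), or 'approve', based on the community's own rules."""
    if post["author"] in rules["banned_users"]:
        return "remove"                       # user previously banned from this community
    if post.get("link_domain") in rules["banned_domains"]:
        return "remove"                       # community disallows links to this domain
    if any(word in post["text"].lower() for word in rules["filtered_words"]):
        return "filter"                       # held back until a human moderator reviews it
    return "approve"

# Example: a community's rules, set entirely by its volunteer moderators.
community_rules = {
    "banned_users": {"spam_account_123"},
    "banned_domains": {"known-spam-site.example"},
    "filtered_words": {"buy now", "free crypto"},
}

print(moderate_post(
    {"author": "new_user", "text": "Free crypto giveaway!", "link_domain": None},
    community_rules,
))  # -> 'filter'
```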

 

3.2.3.            To support moderators, Reddit has a dedicated Community Relations team whose role is to engage with moderators. The team ensures that moderators understand Reddit’s rules and expectations (as set out in the company’s Moderator Guidelines[3]), as well as the tools available to help enforce them. Reddit also maintains Moderator Councils wherein employees can engage moderators directly to hear their concerns and feedback on everything from safety tools to the Content Policy. The Community Relations team also manages a dedicated “Moderator Reserves” programme which can supplement a subreddit’s moderation team temporarily with experienced volunteers in times of special need. For example, we helped the moderators of r/NewZealand secure additional moderators through this programme during the Christchurch attacks, when the subreddit became an important news resource. The volunteer reserve moderators helped provide additional time zone coverage, so that the New Zealand-based mod team could keep up with the sheer pace of events overnight.

 

3.2.4.            While the vast majority of our volunteer moderators perform well and in good faith, we also have a process to intervene in situations where this is not the case. We are able to engage in direct dialogue through the Community Relations team to assess the issue and determine an appropriate solution. This may include ensuring that moderators know how to use the moderation tools available to them, or recommending that they add additional rules to their subreddits. It can also include warnings about the Reddit Content Policy, as well as successive punitive steps that may include removing moderator privileges, either in a specific subreddit or across the entire site. Because of the diverse nature of Reddit communities, our approach to engaging with moderators is highly bespoke and relies on the human touch of our Community Relations team.

 

3.3.            Reddit’s Content Policy and Safety Team

3.3.1.            Overarching this network of subreddit rules is Reddit’s Content Policy,[4] which is set by Reddit at the company level and applies across the entire site. It forms a set of high-level, principles-based rules by which all users and communities must abide. It forbids unwelcome behaviour such as hateful content, harassment, encouraging violence, sharing personal information or intimate imagery without consent, and other behaviour which we feel has no place on Reddit.

 

3.3.2.            While moderators within communities are expected to enforce and uphold these rules in addition to their own, their efforts are backstopped by Reddit’s internal Safety Team, which both responds to user reports and proactively seeks out bad behaviour. This team acts especially in instances that are beyond the scope of what can reasonably be expected of volunteer moderators, such as addressing illegal content or monitoring data signals for evidence of bot activity or sophisticated commercial spammers. The Safety Team’s actions can be against individual users (for example through account suspensions), or against entire communities if they are found to be wholly dedicated to activity that violates our policies.

 

3.3.3.            True to Reddit’s structure, many of the metrics that the team measures are aimed at assessing the health and behaviour of the community as an ecosystem, rather than focusing on individual accounts. For example, one of the most important signals the Safety Team assesses is how communities respond to rule-breaking content. If a community is reporting, removing, and generally disengaging from such content, that is an indicator that community governance is largely working. If, on the other hand, it consistently tolerates, promotes, and engages with bad content, that is an indicator that something may be wrong in the community’s culture, and intervention is likely needed.
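As an illustration of the kind of community-level signal described above, the sketch below computes a simple, hypothetical measure of how a community responds to rule-breaking content. The field names and thresholds are assumptions chosen for clarity; they are not Reddit’s actual metrics.

```python
# Illustrative sketch only: a simple hypothetical measure of how a community
# responds to rule-breaking content. Field names and thresholds are
# assumptions for clarity, not Reddit's actual metrics.

def community_health_signal(violating_posts):
    """Return a score in [0, 1]. Values near 1 suggest the community reports
    and rejects rule-breaking content (governance is working); values near 0
    suggest it tolerates or promotes it, so intervention may be needed."""
    if not violating_posts:
        return 1.0
    healthy = 0
    for post in violating_posts:
        rejected = post["reports"] > 0 or post["removed_by_moderator"]
        promoted = post["upvotes"] > post["downvotes"]
        if rejected and not promoted:
            healthy += 1
    return healthy / len(violating_posts)

# Example with hypothetical data: one post was reported and downvoted, the
# other was left up and upvoted, giving a mixed signal worth a closer look.
sample = [
    {"reports": 4, "removed_by_moderator": True, "upvotes": 2, "downvotes": 30},
    {"reports": 0, "removed_by_moderator": False, "upvotes": 120, "downvotes": 5},
]
print(community_health_signal(sample))  # 0.5
```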

 

3.3.4.            We publish statistics around the interventions of our Safety Team in our annual Transparency Report, which also features our appeal intake and acceptance rate. Additionally, we provide regular updates to users on interesting safety and security case studies on our dedicated r/redditsecurity[5] subreddit. In this way, the conversation is interactive with our users, rather than a static blog post or press release, and we can engage in dialogue about what our safety experts are seeing around the site and what moderators and ordinary users alike should look out for.

3.4.            Upvotes, Downvotes, and Karma

3.4.1.            User voting forms the third and most scaled layer of Reddit’s governance approach. In contrast to other sites, content on Reddit is primarily ranked and curated not by a centralised algorithm, but by the votes of the users themselves. Any registered Reddit user can vote on each individual post and comment on the site. The content can be voted both “up” and “down” within the subreddit. Content that is perceived by the community as high quality receives more upvotes and rises in visibility. Content that is perceived by the community as low quality receives downvotes and becomes less visible; significantly downvoted content is collapsed from view entirely. In this way, every Reddit user collectively acts as a content moderator at scale.
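The following minimal sketch illustrates the principle of vote-driven visibility described above. The collapse threshold and data fields are hypothetical assumptions and do not reflect Reddit’s actual ranking logic.

```python
# Illustrative sketch only: a simplified model of vote-driven visibility.
# The collapse threshold and fields are hypothetical, not Reddit's actual
# ranking logic.

COLLAPSE_THRESHOLD = -5  # hypothetical net score below which content is collapsed

def rank_by_votes(items):
    """Order posts or comments by community votes, marking heavily
    downvoted items as collapsed (hidden from default view)."""
    for item in items:
        item["score"] = item["upvotes"] - item["downvotes"]
        item["collapsed"] = item["score"] <= COLLAPSE_THRESHOLD
    return sorted(items, key=lambda i: i["score"], reverse=True)

# Example with hypothetical vote counts:
print(rank_by_votes([
    {"id": "helpful_answer", "upvotes": 40, "downvotes": 2},
    {"id": "off_topic_rant", "upvotes": 1, "downvotes": 12},
]))  # the rant sorts last and is marked collapsed
```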

3.4.2.            This voting system also plays into the governance of individual users. Users that post or comment on Reddit have a public reputation score known as “karma.” Accumulated downvoting damages that score and is visible to all. Moderators may use karma to help govern their communities; for example, they may set rules that prevent accounts below a certain karma threshold from posting. This helps protect against bot activity, or against users simply creating new accounts to continue abusive behaviour following a ban. The karma score thus incentivises good behaviour, with users keen to maintain social proof and remain constructive members of their community. Every user’s history is visible to others, ensuring full transparency. Because Reddit is a public platform, there is no such thing as a “private” account; this openness is itself an important tool for community norm enforcement.
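To show how a karma or account-age threshold might gate participation in practice, the brief sketch below applies hypothetical thresholds of the kind a moderator could configure; the specific values are illustrative assumptions only.

```python
# Illustrative sketch only: a moderator-configured karma and account-age gate
# of the kind described above. The thresholds are hypothetical assumptions.

from datetime import datetime, timedelta

def may_post(karma, account_created, min_karma=10, min_age_days=7):
    """Return True if an account clears the community's anti-abuse thresholds."""
    old_enough = datetime.utcnow() - account_created >= timedelta(days=min_age_days)
    return karma >= min_karma and old_enough

# A brand-new, zero-karma account (as a ban-evader or bot might create)
# would be prevented from posting until it builds some standing:
print(may_post(karma=0, account_created=datetime.utcnow()))  # False
```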

3.4.3.            Finally, in addition to this community voting and reputation system, individual users themselves can also independently choose how to view and order content in their Reddit experience. Users may easily toggle between sorting content chronologically, by all-time popularity, or by other basic sorting logics, meaning that Reddit users have choice in what they see on Reddit and are not locked into a proprietary algorithmic view.

 

  4. Feedback on Selected Themes of the Draft Online Safety Bill

4.1.            Categories of platforms and thresholds

4.1.1.            We believe that imposing different levels of obligations on different categories of platforms, based on number of users and functionalities, is a sensible approach, but the current criteria are too simplistic and do not provide an adequate basis on which to categorise the diversity of online products and services that the Bill seeks to regulate. Additionally, while we find OFCOM a suitable regulator and have been encouraged by our dialogue with it thus far, leaving these categorisation criteria so vague in the source legislation is particularly burdensome for smaller and medium-sized platforms that are in scope but may not, based on the current text, be able to reasonably anticipate into which category they will fall. Greater legal certainty is essential, particularly for smaller companies, since the burdens of compliance with the most rigorous requirements likely to emerge for Category 1 services are substantial and will take significant planning.

 

4.1.2.            In considering a more fit-for-purpose categorisation scheme, we propose additional inputs in order to achieve a proportionate approach that does not inadvertently favour only the largest companies, well equipped to face the additional obligations of “Category 1” services.

 

4.1.3.            More nuance is required in considering platform and product design when assessing the risk of harm via reach. For example, while a platform like Reddit has millions of users, total user numbers alone do not translate neatly into a directly proportional reach (and any such thresholds risk being arbitrary), since these users do not all interact together, but instead through subreddits, which are much smaller and have moderation and governance built into them by design. Reach across subreddits is also in fact very limited; highly tailored community rules mean that content does not transfer easily from one community to another, limiting virality across subreddits.

 

4.1.4.            Furthermore, not all users are the same when considering potential harm. As an open platform, Reddit allows its content to be accessible to everyone, including those without registered accounts. We philosophically believe that this openness is important to the global internet, rather than creating a system of walled gardens. However, unregistered users may only view content; they cannot create, comment, or vote on it (though they may still report things they suspect of violating our policies, and so are still able to contribute to safety). The potential harm that unregistered users can generate is thus limited. For this reason, we would suggest that registered users should carry more weight than unregistered users in Category 1 categorisation considerations, since many unregistered users arrive from a search engine and spend only a few seconds on the site.

 

4.1.5.            Factors such as financial turnover and number of employees should also be considered, as both significantly influence a platform’s capacity to comply with certain types of resource-intensive obligations. This is an important factor in helping this legislation achieve proportionality and balance. Indeed, taking these overall company size indicators into account is an important way to achieve continuous improvement along the corporate growth cycle. It sets an expectation that, as companies grow, they will continue to invest in harm prevention as an obligation incumbent on that growth. Safety expectations are not a plateau, but should instead continue to grow as part of the corporate lifecycle.

 

 

4.2.            Algorithms and user agency

4.2.1.            In considering the role of algorithmic moderation as a means of compliance with the Bill’s regulatory vision, it is important to note the fundamental role of human review in Reddit’s content moderation efforts, and to avoid prescribing one-size-fits-all algorithmic or automated solutions in regulation. Context is crucial when evaluating reports, as the same piece of content can be acceptable in one context but not another. We have not yet found algorithmic means capable of evaluating this nuance; indeed, many intelligent and well-intentioned humans will debate and disagree on the toughest cases. Given this reality, automated moderation techniques are not a panacea and can even have adverse outcomes, particularly for historically marginalised communities, whose content is more likely to be subject to false positives.

 

4.2.2.            Nevertheless, there is certainly an important place for automated and algorithmic content moderation tools, but we have found these tools perform best when used to augment the judgment of humans, rather than serve as the basis of moderation on their own. For example, given the sheer volume of junk reports that we receive through our reporting mechanisms, it is crucial that we triage safety reports so that those most likely to be valid are prioritised and ordered by potential severity for human review. As noted in our most recent Transparency Report, Reddit received 28,243,095 user reports for potential Content Policy violations in 2020. However, only 3.59% of these were actionable (valid); the remaining 96.41% were either duplicates, already actioned, or did not violate our rules. Rapid sorting of these report queues through algorithms and machine learning is therefore critically important, so that we can focus resources on reports that are most likely to be genuine and not waste time on those likely to be spurious.
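As an illustration of this triage principle, the sketch below orders a hypothetical queue of reports by an assumed model-estimated probability of validity and a severity weight, so that human reviewers see the most urgent, most plausible reports first. The field names and scoring are assumptions for illustration, not Reddit’s production system.

```python
# Illustrative sketch only: ordering a report queue so the reports most
# likely to be valid and most severe reach human reviewers first. The
# 'p_valid' and 'severity' fields are assumed outputs of some classifier,
# not Reddit's production system.

def triage(reports):
    """Sort reports by estimated validity multiplied by severity, descending."""
    return sorted(reports, key=lambda r: r["p_valid"] * r["severity"], reverse=True)

# Example with hypothetical reports: the likely-genuine, high-severity report
# is reviewed first, while probable junk sinks to the bottom of the queue.
queue = [
    {"id": 1, "p_valid": 0.05, "severity": 1},   # likely junk
    {"id": 2, "p_valid": 0.90, "severity": 5},   # likely valid and serious
    {"id": 3, "p_valid": 0.40, "severity": 2},
]
print([r["id"] for r in triage(queue)])  # [2, 3, 1]
```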

 

4.3.            Anonymity as a Right and a Component of Safety

4.3.1.            We built Reddit with the ambition to make it a place for people to be their authentic selves. People come to Reddit to discuss important things that they may not be ready to be open about yet. From people struggling with their sexuality or seeking support for alcoholism to those criticising their governments under authoritarian regimes, they all have a home on Reddit, and we take seriously the trust they put in us to protect their identities. In these instances, anonymity is not at odds with safety; anonymity is safety, and is an essential enabling element in empowering vulnerable people to exercise their free expression. For these reasons, we ask that the right to anonymity be protected in the law.

 

4.3.2.            While well-meaning, efforts to regulate verification through the collection of personal data are, in our view, more likely to put people at risk. At the same time, the evidence does not bear out the idea that people behave better when tied to their real names; abuse is just as frequent on real-name platforms as on anonymous ones. Because of this, we find that a focus on technical and behavioural signals to identify suspicious content and bad actors is more effective. Parsing these technical signals allows us to investigate suspicious accounts while still being able to share useful data with law enforcement when necessary for valid investigations, without needing to collect excessive personal information.

 

  5. In Their Own Words: The r/MentalHealthUK Community Moderators

5.1.            We’ve tried in this submission to illustrate the importance of considering different models of content moderation, how platform structure is an important part of safety-by-design, how the users themselves can and should be participants in their own governance, and how anonymity is essential to enabling important conversations. But we recognise that our corporate voice only goes so far in telling this story.  The voice of the community itself is essential, and to that end we present in this section what we hope you will find the most valuable part of this evidence. The following text (in italics) represents the direct contribution of the moderators of the r/MentalHealthUK[6] subreddit. This community defines itself as “dedicated to providing support, resources, mental health-related news and a space aimed mainly at people in the UK dealing with mental health issues.” Posts touch upon a series of topics such as therapy, NHS resources, depression, and addiction. It has over five thousand members, and some of the specific community rules[7] include being kind, forbidding the promotion of self-harm and illegal drugs, and avoiding speculation around diagnoses. We hope that their contribution, in their own unaltered words, will serve to evidence and support this submission as an illustration of our processes in action.

 

5.2.            As a moderator of a UK-based mental health community on Reddit, I feel that it puts me in a very good position to offer my thoughts and suggestions on what is already helpful, what could be more helpful and also on what could be actively counterproductive with potentially serious consequences in the quest for online safety.

 

5.3.            It must be said first of all that I have been pleasantly surprised with how little abuse I have received over almost 2.5 years as a moderator which has amounted to no more than 5 occasions (excluding the odd sarcastic comment), 3 of which did not include discrimination towards a protected characteristic under the Equality Act 2010. One of the two accounts that did include discrimination towards a protected characteristic under the Equality Act 2010 was banned by Reddit Admins[8], the other account I'm unsure the fate of. Both of these accounts were permanently banned from my community by myself.

 

5.4.            Due to the nature of my community, I tend to take a staged approach by taking the time to explain to people why their content was unacceptable/removed, giving warnings, time limited bans and then permanent bans. As previously suggested, it has seldom ever needed to go beyond providing explanations, and there are already rules laid out clearly in the community. [Reddit facilitates] the option to bring in other moderators to help, and I have one other person who acts as a moderator and helps during hours that I may not be awake. How moderators work together is very much down to individual preferences and for moderators to sort out amongst themselves, which can allow for a lot of nuance.

 

5.5.            It is my belief that there are several reasons as to why there has been little abuse towards myself and within the community, one reason being that I'm someone who actively engages with the community and acts upon feedback, but also there are a lot more community customisation options and specific guidance for moderators on Reddit when compared with social media sites such as Twitter, Facebook and Instagram. Customisation options can include opting out of being able to view NSFW[9] content via settings, allowing the option for moderators to set up an automoderator[10] comment which attaches things such as important information or rules to relevant posts, and troll prevention measurements such as ensuring accounts have to be X amount old before being able to post to a community and by excluding those with a set amount of low 'karma' (karma are points based upon content a user posts and whether it is upvoted or downvoted by communities). If the karma is particularly low, this could indicate a troll.

 

5.6.            It is an understandable concern that there is content which is shared online that would be less commonplace or more readily acted upon in general public spaces, and I believe that an approach of working with social media companies as opposed to against them will incentivise a more healthy online environment. This approach could include providing individual recommendations of features that could be developed to tackle what might be issues more specific to that one particular social media platform, or requesting quarterly anonymised feedback carried out by social media platforms.

 

5.7.            It is important to consider the reasons that attract certain people towards certain social media platforms and to consider what the implications may be if there is not enough of a nuanced approach in this Bill. Statistics have suggested that up to 70% of people referred to Prevent may suffer from mental ill health or other vulnerabilities that leave them prone to falling for propaganda from violent extremists. If these vulnerable people are pushed away from where they can be seen and accessed due to a heavy-handed blanket approach towards social media companies rather than working together to create healthier environments that takes into consideration what attracts certain users to certain platforms, this may potentially lead to bigger problems that could prove to be far more dangerous.

 

5.8.            There is a wide range of users in my community - people who are under 16 and people who are over 50, people who are unpaid carers and people who are paid professionals, people who have conditions such as Autism and people who have severe mental illnesses such as Bipolar and Psychotic Disorders. The flexibility in being able to moderate without heavy-handed or overly preemptive actions needing to be taken for me has allowed this community to attract a variety of people due to the moderating approach being based upon the feedback of the users here, and better yet, I'm someone who many can feel reassured that I'm able to truly empathise with the community due to having specific experience with the mental health system in the UK. This may not be possible if Reddit Admins were required to take heavy-handed blanket approaches towards moderation.

 

5.9.            It has been very moving to receive messages from the users here expressing they're impressed with the moderation here, that they're grateful that there is a community that is understanding and that understands their frustrations, that they've been able to access support they didn't know was available to them and on occasions that either myself or expressing that through reaching out here, it has likely saved their life.

 

5.10.            Both from personal experience and from comments made by the users in my community, the option of somewhat more anonymity is valued here particularly with regards to security and feeling able to be open about sensitive issues, so a threat to this anonymity may likely drive people away to where they are harder to reach or to where they will be in more danger. We know that an estimated 27,000 children in the UK identifies as a gang member (Children's Commissioner, 2019), that around 31% of children here live in poverty (DWP, 2021), and suicide and injury or poisoning of undetermined intent was the leading cause of death for both males and females aged 20 to 34 years in the UK, for all years observed (ONS). I'm unable to provide statistics of what percentages of different groups of people access my community but I do know that myself and others in the community have collectively provided advice and support for hundreds if not thousands of people who fall into these categories or are at a high risk of falling into them, so I'd urge you to strongly and carefully consider the points that I have made in this statement.

 

  6. Conclusion

6.1.            The goals of this process are extremely important, and we are glad to have the opportunity to contribute to the Committee’s thinking. In closing, we urge you to keep in mind the gravity of the task at hand. The internet has facilitated knowledge-sharing and community like no other tool in human history, and billions of people rely on it for their livelihoods and for support of all kinds. While the important issue of online harm should be addressed, it is crucial not to lose sight of the value that online society brings. It is important to ensure that the internet ecosystem remains a diverse and dynamic market for different platforms and companies, yes. But it is absolutely paramount that the voice and the rights of the user be held supreme in this important debate. While it is tempting to think that this debate is about the tech giants of the world, it is really and truly about communities like r/MentalHealthUK, who are simply looking to tend to their own civil space online, and support each other through the power of digital community.

 

20 September 2021



[1] https://www.reddit.com/r/MentalHealthUK/

 

[2] https://www.redditinc.com/policies/transparency-report-2020

 

[3] https://www.redditinc.com/policies/moderator-guidelines

 

[4] https://www.redditinc.com/policies/content-policy

[5] https://www.reddit.com/r/redditsecurity/

[6] https://www.reddit.com/r/MentalHealthUK/

[7] https://www.reddit.com/r/MentalHealthUK/comments/l0dyua/new_and_updated_general_rules_for_this_sub/

 

[8] “Admins” is a term used to refer to Reddit, Inc. employees.

 

[9] “NSFW” stands for “Not Safe for Work,” and refers to a variety of content that may be mature, graphic, or otherwise of the sort one would not want to be seen accessing in a professional setting.

 

[10] “Automoderator” is a simple automated content moderation tool that Reddit makes available to its moderators to configure as they see fit within their subreddit. Moderators may choose to use automoderator to automatically attach a comment to every post reminding users of the community’s rules and expectations.