Written evidence submitted by Clean Up The Internet (OSB0026)


1. Summary

Clean Up The Internet is an independent, UK-based, not-for-profit organisation concerned about the degradation in online discourse and its implications for democracy. We campaign for evidence- based action to increase civility and respect online, to safeguard freedom of expression, and to reduce online bullying, trolling, intimidation, and misinformation.

We are delighted to have the opportunity to submit evidence to your committee. Our submission focuses on your sixth question, “Does the Bill deliver the intention to focus on systems and processes rather than content, and is this an effective approach for moderating content? What role do you see for e.g. safety by design, algorithmic recommendations, minimum standards, default settings?”. We argue that there could be more focus on design-level interventions, in particular interventions to drive improvements in platforms’ approaches to anonymity and user verification.

There is much to welcome in the draft Bill. It contains some important building blocks for a regulatory regime which encourages social media platforms to tackle harm and risk through design improvements or better systems and processes. These include the introduction of duties of care for platforms, and giving Ofcom powers to conduct audits of platforms’ systems and processes, and to produce Codes of Practice.

However, we believe that the draft Bill could be strengthened further if there were more of a focus on measures which would drive improvements at the level of design, systems and processes. At present the draft Bill fails to specify priority areas for improvement in design, systems or processes, or to set out a process for identifying priority risk factors which platforms must address. It also gives Ofcom insufficient powers to challenge platforms’ own risk assessments or the data on which they are based.

Insufficient focus on improving design, systems and processes also leads to an over-reliance, within the regulatory regime envisaged by the draft Bill, on content-focused interventions. In the case of the types of content designated as harmful in the Bill, this will create more regulatory challenges and involve more trade-offs between the freedom of expression of different users. In the case of harms which are not designated within the draft Bill, such as disinformation which causes harm at societal level, or content which is exempted by quite broad and vague carve-outs for “journalistic content” and “content of democratic importance”, this is likely to mean we see no improvement at all.

Much of Clean Up The Internet’s work to date has focused on the role which anonymity plays in harmful online behaviour, including both abuse and disinformation. This is an important example of such a “risk factor” which could be greatly mitigated through improvements to platforms’ design, systems and processes. We are concerned that as currently drafted the Online Safety Bill may not drive any improvements to how anonymity is managed.

This submission therefore makes the case for tackling anonymity, explains how the current draft falls short, and sets out an option for improving the current draft to address this. We suggest that the Online Safety Bill should explicitly define anonymity as a risk factor which platforms are required to act to mitigate, and which Ofcom should be required to address within its Codes of Practice. In addition to an overall duty to manage anonymity as a risk factor, we propose that the Bill could introduce specific requirements for platforms to offer their users a “right to verify” and a “right to block interaction from unverified accounts”, and to provide transparency about the verification status of all users.

2. What’s the problem with anonymity on social media?

The ease with which social media can be used anonymously, or with pseudonyms, combined with a lack of protections and controls for other users, is currently a key driver of harmful behaviour.

It’s a key factor in the spread of disinformation, conspiracy theories and extremism. Organised disinformation networks exploit the ability to create fake accounts, and false identities, at scale. They use networks of these accounts to create false and misleading content, to spread and amplify this content, and to distort and disrupt online conversations. A study by Clean Up The Internet found that anonymous Twitter accounts played a significant role in the spread of conspiracy theories about coronavirus and 5G in March and April 2020, in the early days of the pandemic. A NATO Stratcom study from November 2020 confirmed that it remains extremely easy and cheap to buy fake engagement from fake accounts for the purposes of disinformation on platforms including Facebook, Instagram, Twitter, YouTube, and TikTok.

It’s also a key factor in bullying, harassment and trolling. A recent poll of UK social media users, conducted by Opinium for Compassion in Politics, found that of those who had experienced online abuse, 72% had received abuse from anonymous accounts. When social media users are anonymous, they feel much more able to behave badly and abuse other users – a phenomenon known as the “Online Disinhibition Effect”. Anonymity also makes it much harder to enforce rules against such behaviour – if an anonymous troll does eventually get banned, they can easily create a fresh anonymous account with a new pseudonym and continue their trolling or abuse.

3. How could platforms improve their design, systems and processes to reduce harms associated with anonymity?

It is understandable, given the harms associated with anonymity, that some have proposed simply to ban it and require mandatory account verification for all users. A parliament.uk petition calling for such a ban has received almost 700,000 signatures. However, Clean Up The Internet does not support such a ban. There are many well recognised cases where anonymity is very important for freedom of expression – for example in the case of a whistle-blower, or someone fleeing domestic abuse.

We instead support changes to the operation of social media platforms aimed at restricting misuse of anonymity, whilst continuing to permit its legitimate use. Three practical and deliverable changes to how anonymity is managed by social media platforms would drive a huge reduction in the amount of harm it fuels:

  1. Give all social media users the right to verify their identity if they choose. Every social media user should be given the option of a robust, secure means of verifying that the identity they are using on social media is authentic. Users who wish to continue unverified should be free to continue to do so.
  2. Make it easy for everyone to see whether or not a user is verified. The verification status of an individual user should be clearly visible to all other users. Each user would then be able to bring their own judgement as to what a verification status might say about the credibility and reliability of another user's content.
  3. Give users the option to block interaction with unverified users. Some users will be happy to hear from, and interact with, unverified users. Others will not. This should be a matter of individual user choice. Every social media user should be offered options to manage their level of interaction with unverified users, including an option to block communication, comments and other interaction from all unverified users, as a category and pre-emptively.
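The interplay between the three changes can be illustrated in code. The sketch below is purely illustrative – the class and function names are our own hypothetical labels, not any platform’s actual API – but it shows how optional verification (1), a visible verification status (2), and a pre-emptive block on unverified accounts (3) would fit together:

```python
from dataclasses import dataclass

# Hypothetical, illustrative model of the three proposed controls.
@dataclass
class User:
    handle: str
    verified: bool = False          # (1) verification remains optional
    block_unverified: bool = False  # (3) a per-user choice, off by default

def verification_badge(user: User) -> str:
    """(2) Verification status is visible to all other users."""
    return "verified" if user.verified else "unverified"

def may_interact(sender: User, recipient: User) -> bool:
    """(3) Unverified accounts can be blocked as a category, pre-emptively."""
    if recipient.block_unverified and not sender.verified:
        return False
    return True
```

Note that an unverified user loses nothing by default: interaction is only restricted where another user has actively chosen to exercise their block option.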

In theory it would be perfectly possible for social media platforms to implement changes such as these voluntarily. However, in practice they have consistently failed to do so, and when challenged on the subject (including by the DCMS Select Committee) have expressed a great deal of resistance to changing their current laissez-faire approach, because they clearly prioritise “engagement” and advertising over the security and comfort of UK citizens.

We therefore believe regulation is required to ensure all the major platforms adequately address risk factors such as anonymity. It is reasonable to assume that without regulatory intervention the current approach to anonymity, and the problems associated with it, will endure indefinitely.

4. What would the current draft of the Online Safety Bill do to reduce harms associated with anonymity?

In December 2020, in their full response to the White Paper Consultation, the government stated very clearly that they intended to “not put any new limits on online anonymity”. When challenged in parliament about this position on 15 December 2020, the Secretary of State defended it by saying: “On anonymity, we have not taken powers to remove anonymity because it is very important for some people—for example, victims fleeing domestic violence and children who have questions about their sexuality that they do not want their families to know they are exploring. There are many reasons to protect that anonymity.” Incidentally we agree with those reservations, and have carried very similar wording on our public website since 2019, but a desire to protect anonymity does not need to conflict with measures to prevent its abuse. When pressed further by several other MPs, Mr Dowden then pledged to “genuinely keep an open mind, and if we can find a way of doing this that is proportionate, we will continue to consider whether there are measures we can take as we go through pre-legislative scrutiny.”


The wording of the draft Bill reflects the intention, set out in December 2020, not to put any “new limits” on anonymity, and to leave platforms’ current approach to anonymity untouched. This is consistent with the Bill’s broader silence on specific design-level problems or risk factors. It’s worth noting that this lack of specificity on design-level risk factors stands in contrast with the Bill’s much greater specificity regarding content: it distinguishes between categories of content (“content harmful to children”; “content harmful to adults”), specifies some priority forms of illegal content (terrorist content; CSEA) and a process to designate others, and identifies other specific content which enjoys special exemptions (“journalistic content”; “content of democratic importance”).


The closest the Bill comes to touching on anonymity is to list, in Section 135 (2) (a), “creating a user profile, including an anonymous or pseudonymous profile” as one of 13 “functionalities” which companies should consider when fulfilling their various safety duties, and which Ofcom should consider when conducting risk assessments. It is not clear how Ofcom is able to challenge platforms’ own risk assessments.


In our view this is a very weak provision, which sets no expectation that consideration of anonymity will be prioritised, or that any changes to platforms’ design or processes will be introduced. Such weak and non-specific requirements seem unlikely to drive any significant change any time soon, particularly given that the platforms have repeatedly refused to even acknowledge, let alone address, the problems associated with anonymity. The platforms have a long track record of resisting design changes which could improve user safety, especially when such measures could sit in tension with maximising engagement and minimising friction. In the case of measures to tackle misuse of anonymity, or to extend the availability of verification, platforms have a particularly strong conflict of interest, as such measures could deflate the numbers of users they can claim for advertising purposes.


A hint at how platforms would approach the question of anonymity within the future risk assessments, if the draft Bill is not strengthened, is provided by their responses to concern about racist abuse of England footballers following the 2021 Euros final. Whilst the victims themselves highlighted the role of anonymous accounts, and polling suggested significant public concern about the misuse of anonymity and support for measures to address it, the platforms refused to engage with this issue.



The weak provisions in the current draft Bill are unlikely to suffice to drive significant improvements to platforms’ designs or processes in areas (such as anonymity) where the platforms are resistant to change. The final Bill needs to give Ofcom clear powers to force platforms to address such systemic drivers of online harms, and to audit and challenge platforms’ risk assessments if they fail to acknowledge or address them adequately. Otherwise we can expect that the platforms will continue to adopt strategies of avoidance or denial and produce section 7 risk assessments which gloss over problems associated with anonymity - and that Ofcom will be ill-equipped to audit the data on which such risk assessments are based, or challenge such omissions.


5. Why is the lack of measures to tackle anonymity a key omission in the draft Bill?

Silence on anonymity is a key omission which risks weakening the rest of the Bill. This is for three main reasons:

1. The platforms’ current approach to anonymity is fuelling online harms

There’s a significant body of evidence that the major social media platforms’ approach to anonymity fuels both online abuse and the spread of disinformation. These are two of the principal “harms” which the government’s Online Safety agenda seeks to reduce. Tackling the misuse of anonymity would reduce the amount of harmful activity on the platforms, and give users more options to protect themselves. Leaving such design flaws unaddressed by the Bill risks leading to over-reliance on ex post content moderation, which is much more challenging to get right and poses more difficult trade-offs regarding freedom of expression. We would urge a focus on prevention rather than cure wherever possible.

2. An absence of measures to tackle misuse of anonymity limits the effectiveness of other measures in the Bill, such as the reliance on Platforms’ own Terms & Conditions

The draft Bill envisages more consistent enforcement of the largest (“Category One”) platforms’ own Terms & Conditions as central to reducing content harmful to adults. However, at present users are able to exploit a laissez-faire approach to anonymity and verification to evade T&Cs. The ultimate enforcement sanctions in a platform’s T&Cs are suspending or banning an account. In the absence of any new measures regarding anonymity, it remains extremely simple for a banned user to start a new account and continue their harmful behaviour, including harassing other users. Where behaviour crosses a criminal threshold, or is defamatory, the prevalence of anonymous accounts also makes it much harder for perpetrators to be brought to justice.

3. A lack of measures on anonymity will weaken the credibility of the legislation in the eyes of the general public

Numerous opinion polls have found that the public sees harm from anonymous accounts as a leading problem with social media platforms. Polling conducted by Opinium this June found that 73% of adults “support government action to reduce the number of anonymous accounts on social media platforms”. Polling conducted in 2020 by Demos and BT found that 65% of respondents agreed with the statement that “harmful behaviour conducted by anonymous internet users means that everyone should have to use their real names to access services”. The public’s desire for action is informed by their own experience of using such platforms (in the 2021 Opinium poll, 72% of those who had experienced online abuse said they had experienced abuse from anonymous accounts), and by their awareness of high-profile figures who experience online abuse, such as celebrities or footballers. The public will struggle to understand why no measures have been introduced to reduce the harm from anonymous accounts, and will feel less confidence in the new Online Safety regime as a result. They will see that the legislation has privileged the business model of the platforms over the safety and well-being of UK citizens.

6. How could such measures be added to the draft Online Safety Bill?

The legislation would do more to encourage platforms to consider safety and/or the risk of harm in platform design and the systems and processes that they put in place if it explicitly identified priority risk factors which it expected the platforms to address, and priority safety features which it expected them to introduce. The Bill could both name some priority risk factors, and create a process for further priority risk factors to be identified – as it does in the case of types of content.

Anonymous and unverified accounts would be an obvious candidate to be designated as such a risk factor, given both the clear evidence of the role of anonymity in enabling harm, and the public expectation that it be tackled. The Bill could require platforms to demonstrate to the regulator that through their design, systems and processes they have taken reasonable steps to mitigate the risks associated with anonymous accounts. It could further require that these mitigations included offering users a verification option, a means of seeing the verification status of other users, and a means of restricting or blocking contact from unverified accounts.

Further guidance could then be issued by Ofcom through a Code of Practice. This could include stipulating safeguards to address considerations like ensuring that verification systems are secure, accessible to users in a diverse range of circumstances, and are not exploited by platforms as an additional opportunity to harvest data.

Such an approach would set clear expectations to platforms, whilst still allowing flexibility to both them and the regulator. Different platforms would be free to develop their own designs, processes and systems to mitigate the risks associated with anonymity, and in order to comply with the specific requirements around offering their users a verification option. Ofcom could update its Codes Of Practice as new technologies and platforms emerged, or in the light of fresh evidence.

7. What could this look like in terms of changes to the wording of the draft Bill?

We recognise that the current draft Bill may change, potentially significantly, following pre-legislative scrutiny, so what follows should be seen as seeking to demonstrate a potential approach rather than as our definitive proposal.

Working with the current draft Bill, a new clause could be added as a new section at the end of Chapter 2 (i.e. a new Section 17). We suggest it could look something like this:


(17) A duty to use proportionate systems and processes to mitigate and effectively manage the harm caused by anonymous accounts. This must include the development of systems and processes designed to ensure —

(a) all users of the service are provided with a simple and effective means of verifying their identity

(b) it is readily apparent to all users of the service which other users of the service have verified their identity and

(c) all users of the service are given simple and meaningful controls over the ability of all unverified users, whether individually or as a class, to contact them or interact with them using the service


See also, in relation to duties under this section, section 12(2) (duties about rights to freedom of expression and privacy).


Ch5, s29 regarding “Codes of practice” would then automatically apply to this new duty, as it would be defined as a “relevant duty” as per subsection (9).


An explicit reference to reducing harms from anonymous or unverified users could then be added to s30, which sets out the “Online safety objectives” Ofcom must consider when preparing Codes of Practice. A new s30(2)(ix) could be inserted as follows:

(ix) there are adequate controls over access to, and use of, the service by anonymous users, taking into account the ways in which anonymity may increase the risk and/or impact of harmful behaviour and make it harder to hold perpetrators to account for their actions

An explicit requirement for Ofcom to assess the risks associated with anonymous and unverified users could also be added to Ch3, s61, “Risk assessments by Ofcom”. This could be done by amending Ch3, s61(6) to insert “approach to anonymous accounts and user verification”, so that it reads as follows:

(6) In this section the “characteristics” of a service include the functionalities of the service, its user base, business model, governance, approach to anonymous accounts and user verification, and other systems and processes.

An explicit reference to anonymity could also be added to Part 4 Chapter 8 Section 103 regarding media literacy. This section amends the Communications Act to update Ofcom’s “duty to promote media literacy” for digital media. A new s11(1)(b)(iv) could be inserted regarding Ofcom encouraging the development of “technologies and systems which help improve media literacy”, as follows:


(iv) indicate to the recipient whether or not the identity of an account sharing the material has been verified as genuine


Another additional subclause could be added to the definition of media literacy, as s11(2)(b)(v):

(v) the authenticity and reliability of the account or user sharing such material



8. Addressing legitimate concerns about measures to restrict abuse of anonymity

There is strong evidence that at present anonymity fuels a significant amount of harm, and that the public therefore support regulatory intervention to tackle this problem. However there are also instances of the ability to use social media anonymously protecting an individual’s freedom of expression - for example in the case of a whistle-blower account such as the Secret Barrister, or in the case of an individual fleeing domestic abuse. It’s also important to recognise that verification processes could act as a barrier to participation, particularly to users who may not have standard ID documents (e.g. a homeless person, or a trans person).

Three critical safeguards would ensure that our proposals bear down on anonymous abuse and on the use of anonymous accounts to spread disinformation, without any undue negative impact on those for whom anonymity is important, or on those who simply wish to continue unverified but are not engaged in harmful behaviour.

  1. Ensure that verification systems are developed with due regard to accessibility, diversity, and inclusion

Any verification system would inevitably introduce additional steps which a user would need to take in order to achieve verification, and may require them to have access to specific means of proving their identity such as official documents. The specific needs of different minority groups would need to be considered, for example to ensure people with no fixed abode were not excluded through not having a permanent address, or that there was a straightforward way for trans people to transition their accounts to their new name/gender. Care would need to be taken to avoid over-reliance on a narrow range of national identity documents, such as passports and driving licences, which sizeable minorities of the population do not possess. Options such as using a bank account for identity verification (97% of the UK population have a bank account), or “vouching” by a trusted individual for those without documents, would need to be developed.

Ofcom should be required to consult and involve a diverse range of users in the development of a Code of Practice covering verification, and to monitor and review the inclusivity of those processes on a regular basis. Social media companies should be required to demonstrate that all their terms and conditions, including those relating to verification, are compliant with relevant equalities legislation, and that they have been developed with due regard to diversity and inclusion.

  2. Place strict limits on the use of data collected for purposes of verification

In general, social media companies have deservedly poor reputations for respecting their users’ privacy, and some users will therefore have concerns that identity verification could enable further privacy violations. It will be crucial to address such concerns, both to protect individual users’ rights, and to ensure that a critical mass of users are willing to undergo verification.

Ofcom should work with the ICO to ensure that data gathered for the purposes of identity verification is used only for that purpose, and retained only as long as is necessary for purposes of verification. For example, should a document or image be uploaded as part of a verification process, it should be destroyed once verification status is confirmed. Third-party options from independent providers should be made available which would mean platforms had access only to tokenised identity credentials, rather than e.g. a full scan of a user's official documents. A number of independent identity verification provider companies already exist or are in the process of launching, for example OneID and Yoti.
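A minimal sketch of what tokenised verification could look like follows. It is purely illustrative and makes assumptions throughout: the function names are hypothetical, and the “verifier” stands in for an independent provider such as OneID or Yoti. The key point it demonstrates is data minimisation: the identity document never reaches the platform, which stores only an opaque credential.

```python
import hashlib
import secrets

def verifier_issue_token(document_bytes: bytes) -> str:
    """Hypothetical third-party verifier: checks the identity document,
    then discards it and returns an opaque, single-purpose token."""
    # ...identity checks would happen here...
    del document_bytes  # data minimisation: document destroyed after verification
    return secrets.token_urlsafe(32)

def platform_record(account_id: str, token: str) -> dict:
    """The platform stores only a derived credential, never the document."""
    return {
        "account": account_id,
        "credential": hashlib.sha256(token.encode()).hexdigest(),
        "verified": True,
    }
```

Under this design the platform can attest that an account is verified, but holds no scan, no name and no address beyond what the user has chosen to share publicly.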

Verification processes and outcomes should be communicated in plain English to the user. That should include making it clear to users that they have a genuine choice as to whether or not they verify - i.e. it should be made clear that it is possible to continue as a user of the platform (albeit with some limitations) without verification.

These principles of data minimisation, active consent and user choice are consistent with the principles of the GDPR. Strict enforcement of existing privacy rules as they relate to social media identity verification will therefore be a key pillar of protecting users’ rights and ensuring trust. Ofcom, as the online harms regulator, will need to work closely with the ICO, as the data and privacy regulator, to ensure proper enforcement of privacy rules with regard to verification systems, and to identify (and fill) any regulatory gaps via a Code of Practice.

  3. Make verification optional

Ensuring that verification is made as accessible as possible, and that privacy concerns are addressed, should mean that most social media users are both willing and able to verify their identities. Research conducted by Opinium in January 2021 found that 81% of UK social media users say they would be willing to verify their identity for social media. However, our proposal that verification be optional would significantly mitigate any remaining issues. Retaining the ability to access social media without verification – albeit with certain limitations, but limitations designed to restrict malign rather than legitimate use – would mean no user would be at risk of losing access to a platform through not taking part in verification, whether by choice or because they had some difficulty with the process.

We explore the practicalities of verification, and safeguards to ensure inclusivity and privacy, in more detail in a briefing which is available here.


9. Other possible approaches to tackling anonymity

Our recommendations (explicitly define anonymity as a risk factor which platforms are required to act to mitigate; introduce specific requirements for platforms to offer their users a “right to verify” and a “right to block interaction from unverified accounts”, and for transparency about the verification status of all users) seek to strike a balance that recognises and minimises the trade-offs involved in seeking to tackle the harms associated with anonymous accounts. In this section we briefly explore two alternative approaches of which we are aware.

a)      Make verification compulsory for all social media users

This proposal would require social media platforms to adopt a “know your user” approach for anyone using their platform. This could be coupled with vicarious liability for platforms if they were unable to identify an end user.

The main advantage of this approach would be that it could entirely eliminate the problems associated with anonymous accounts - because it would entirely eliminate anonymity. In the case of criminal or defamatory activity it would ensure full traceability. It would also have the potential to provide a simpler user experience, by avoiding any scope for confusion amongst users about different verification statuses or the need for any user action to manage their level of interaction with unverified accounts (because there wouldn’t be any unverified accounts).

The main risk would be that it would eliminate legitimate uses of anonymity alongside eliminating its misuse. This would have implications for freedom of expression, so careful consideration would need to be given as to whether this was a proportionate intervention.

The negative implications for freedom of expression would be particularly significant were verification processes not fully accessible. Compulsory verification would therefore place a particularly high premium on ensuring that verification worked for vulnerable groups including those with more complex identification needs or lacking access to standard documentation.

b)      Require users to verify their identities to social media platforms, but allow users to remain incognito to other users

This approach would entail users providing identifying information to a platform, which the platform could retain against their account, but not sharing this with other users to whom they could remain anonymous or use a pseudonym. The platform would be obliged to make this identifying information available to law enforcement, and in the case of defamation proceedings. It would also be able to use it to enforce its terms and conditions more effectively by preventing users evading suspensions through the creation of a new account.

The principal advantage of this approach would be that it would potentially lower the bar to verification, by making it palatable to users who had concerns about other users identifying them (e.g. because they didn’t want an employer or their family to be able to identify them) but did not have qualms about being identifiable to a platform. This could potentially increase the uptake of verification - or if made mandatory, maintain scope for certain forms of benign anonymous (to other users) activity.

A major disadvantage would be that it would do less to increase trust, authenticity, or accountability in digital spaces because large numbers of users would remain anonymous to each other. Users would be traceable in criminal cases, but could remain anonymous for purposes of non-criminal trolling or bullying. There would therefore be less of a reduction in the “online disinhibition effect”.

Another disadvantage would be that it would centralise more information in the hands of platforms, rather than distributing more information and accountability amongst users. This data would need to be retained for as long as a user had an account with the platform, creating a new centralised dataset held by platforms, which could have privacy and security implications. An approach which verified publicly declared identity information would avoid such issues as verification of a public profile could easily be provided by a third party, and there would be no need for any extra personal data to be retained by the platform once verification has been completed. And whilst traceability data would exist, law enforcement agencies would remain dependent on platforms to disclose this data, and victims of defamation would continue to rely on Norwich Pharmacal orders to determine the identity of a user sharing defamatory content.

10. Conclusion

Clean Up The Internet sees much to welcome in the government’s draft Online Safety Bill. We welcome an end to failed self-regulation, and the plan to appoint Ofcom as the independent regulator, underpinned by statute. We welcome the introduction of safety duties on platforms, and the associated requirements for platforms and Ofcom to conduct risk assessments which at least require some consideration of the impact of their design, systems and processes.

However, we are concerned that the current draft Bill’s failure to tackle design issues like anonymity head-on will reduce its effectiveness. We fear that the current weak wording in this area is unlikely to deliver big changes to how platforms are designed and operated – and that without such changes, the regulatory regime will be over-reliant on measures focused at the level of individual content.

We do not take issue with the need for there to be regulation which focuses on how platforms treat harmful content, including legal-but-harmful content. We accept that some moderation of content is likely to be required even with the kind of design and systems level changes that we advocate. We certainly agree that such decisions should not be left entirely to private companies.

However, it’s desirable for the need for such content-level interventions to be minimised – both to avoid the regulator being overwhelmed, and to reduce the trade-offs regarding freedom of expression which inevitably arise. In our view the current draft doesn’t yet strike the right balance. Regulating platforms’ approach to design, including their approach to anonymity, should take a bit more of the strain, and regulating platforms’ approach to content would then be able to take a little less.


September 2021