Written Evidence Submitted by Mumsnet (OSB0031)
Mumsnet is a website for parents, with around eight million unique users each month and around 20,000 posts on our forums every day. Based in the UK, we were founded in 2000.
We aspire to be, and believe we are, a responsible host for user-generated content (UGC):
● We have a large and professional moderation team on duty seven days a week, 365 days a year. Our average response time to reports is one hour and 26 minutes, and more than half of all reports receive a response in under 15 minutes.
● Our Talk Guidelines, which all users are expected to adhere to, are prominently flagged on all discussions. We remove reported content that we believe is racist, sexist, homophobic, transphobic or ableist, or that incites prejudice. We will also remove material that is deeply unkind, obscene, or potentially dangerous. (This is in addition to our responsibility to remove reported content that is illegal.)
● Every single flagged post is considered by a trained human being, and sometimes by more than one. Internal discussions about contentious decisions routinely include the most senior members of staff.
● We do not ‘incentivise the posting of outrageous content’, to quote the report published by the House of Lords’ Communications and Digital Committee. We have consciously chosen not to have ‘like’ functionality and we do not use algorithms to encourage ‘clickbait’. There is no mechanism on Mumsnet by which individual users’ popularity can be ranked, or even discovered. Posting divisively or aggressively is not a route to visibility, ‘fame’ or popularity on Mumsnet.
In our initial response to the Government consultation on this Bill, we were broadly supportive of its aims.
However, as the legislation stands, we have serious concerns about the lack of clarity in its definitions; the deferral of crucial detail into the post-legislation long grass; and the potential impacts on freedom of expression.
Assurances that wrinkles will be ironed out further down the line, in statutory regulations and by the regulator, are not sufficient, in our view, given the very weighty issues at stake for our business, not to mention for truth-telling and freedom of expression.
We believe the Bill should provide greater clarity, and Parliament must have the chance to discuss the implications.
‘Legal harm’
The Bill needs to include a clear, written definition of ‘legal harm’ that gives certainty to the organisations that will be expected to adhere to this standard.
Mumsnet provides a platform for parents and carers to support, entertain and advise each other across a huge variety of topics. Our users are deeply interested in politics and culture, and in the fast-developing dialogue around social justice and equalities.
Many of these issues are highly contested in the public sphere. In particular, the use of language and terminology can be highly controversial - to the point where, in some debates, people cannot even agree on the meanings of individual words.
Furthermore, conversations in the UK about highly controversial issues differ substantially from the same conversations in the US. Yet in the absence of any Parliamentary guidance, standards for what is acceptably non-harmful will be set by Silicon Valley companies with no political or cultural roots in the UK.
On Mumsnet, discussions about hot-button political and cultural topics are frequent, popular and lively. In every single one of these discussions, there will be user posts that could, by the loosest definitions, be interpreted as ‘causing harm’:
● A Jewish, American or Muslim user may well be deeply offended by a post describing the routine circumcision of baby boys as ‘child abuse’.
● On reading a post that questions their right to priority spaces on public transport, a wheelchair user may well feel harmed by the implication that their legal rights are up for debate.
● An assertion that men as a category are a threat to women and children, given that almost all sexual and violent crime is committed by men, may well cause a male reader upset and alarm.
● Senior Labour Party figures doubtless feel they have been harmed by assertions that they are anti-Semitic.
Any of these statements could fall within a loosely defined category of ‘likely to cause harm’, depending on the sensitivity and standpoint of the person reading them. If the government or Parliament believes any or all of them should be illegal, it should have the courage of its convictions and introduce legislation to that effect.
In the past five years or so, Mumsnet has come under direct, targeted pressure to stop hosting women’s speech in the deeply sensitive debate around the interplay between the rights of women and the rights of trans people. There are several hundred activists - many of them outside the UK - who are determined that Mumsnet should be closed down because we have continued to allow women to voice concerns about the implications of gender self-identification for women’s rights. Our advertisers have been relentlessly targeted. One activist, Dr Adrian Harrop, publicly stated that his aim was to drive Mumsnet to financial ruin.
If the language in the Bill remains unaltered, it is likely that activists will use the loose definition of ‘likely to cause harm’ to launch a barrage of vexatious complaints and super-complaints against Mumsnet, tying up our limited resources and potentially forcing us to censor entirely legitimate discussion of these topics, for fear of potentially ruinous fines. Whatever category of platform we come under, activist groups will be able to weaponise the threat of large fines to intimidate platforms like ours into removing perfectly legal content to suit their own agenda. We are concerned that the lack of definition places a powerful tool in the hands of those who wish to silence British women.
Speech that is deemed harmful by some can contribute significantly to the development of ideas that go on to be regarded as entirely politically legitimate. The expression of ‘gender critical’ beliefs that we allow on Mumsnet would certainly have drawn complaints under the rubric ‘likely to cause harm’ had it been in place for the past five years. Yet just a few months ago, this same viewpoint was recognised in law as a protected belief under the Equality Act (Employment Appeal Tribunal ruling in Forstater v CGD Europe, June 2021).
A shared understanding of where the lines are is critical. The loudest voices on social media do not necessarily represent the broader wisdom or consensus viewpoint of the British public.
We agree with the House of Lords that the putative adult envisaged by the Bill should be ‘a reasonable person of ordinary sensibilities’. As English PEN has said, “holding a communication to the standard of the most sensitive audience member would be unfair and a disproportionate infringement on freedom of expression.”
Similar issues were considered during the reform of the defamation laws that produced the Defamation Act 2013. There they were resolved by specifying ‘serious harm’, which in practice must be evidenced by direct and material impacts on income, wellbeing or relationships. In this case the House of Lords has proposed that ‘harm’ should be defined as ‘reasonable grounds to believe that content presents a significant risk to an adult’s physical or mental health, such that it would have a substantial adverse effect on usual day-to-day activities’. In addition, defamation law provides defences of truth, honest opinion, and reasonable belief that publication is in the public interest. Might a similar compromise be reached in this Bill?
The risk of incentivising compliance-by-algorithm
In the absence of any clarity about which services will be designated as Category 1, we have to assume that Mumsnet’s large user base (around eight million unique users each month) could mean we are included in Category 1 and so subject to the most onerous obligations to proactively moderate written material. It would be impossible for us to meet those obligations without resorting to blunt-force moderation algorithms at scale, something we have to date deployed only in a very limited way.
The Bill does not pay enough attention to the inherent risks of the algorithmic content moderation systems it will necessitate. Greater reliance on algorithmic moderation will lead to the rollout of algorithms that repeatedly censor women sharing their own experiences and opinions.
Element AI and WIRED estimated that just 12 per cent of machine learning researchers are women. A 2017 study by computer scientist Aylin Caliskan of George Washington University found that machine learning systems absorb sexist biases from training data that reflects gender inequalities. And MIT research has shown that artificial intelligence programmes such as facial recognition applications suffer from being grounded in predominantly white, male datasets.
Just to illustrate the problem, pages on Mumsnet are repeatedly blacklisted as ‘obscene’ by the algorithms used by programmatic advertising agencies. This is not because Mumsnet allows obscene content. It’s because our users post about breasts (in the context of breastfeeding, or in discussions of clothing shapes) and vulvas and vaginas (in the context of discussions of their health and wellbeing). Trained on databases of largely male speech, algorithms are simply unable to interpret non-pornographic discussions of female anatomy.
Consider the case of ‘anti-vaxx’ content, which any regulator would consider important to address in the context of the COVID vaccination roll-out. We are not free of doctrinaire anti-vaccination posters, and our moderation team makes considerable efforts to delete posts from, and ban, users who show a pattern of anti-vaxx posting.
But we’re also aware that our audience of (mostly) women have legitimate reasons to question how far medical researchers have considered impacts on women’s bodies, which are systematically under-researched in medical science. Women have a right to ask for more information about how COVID vaccinations affect menstruation, menopause, pregnancy and fertility, given the acknowledged early state of research into these interactions.
Our solution is to allow our users to ask questions and share opinions and knowledge, while also commissioning expert content from qualified healthcare professionals and research scientists to provide the best available specialist knowledge.
We believe strongly that this nuanced approach, which does not censor women for asking questions, is much more effective at building confidence in the COVID vaccination programme than the auto-deletion of every questioning post, which would drive only mistrust and paranoia.
Mumsnet has built a strong and successful content moderation system that relies on human moderators to ensure our decisions are proportionate, accurate and sensitive. We use AI tools to auto-flag posts that might be troublesome, but we do not use robots to make the final decisions about which posts stay or go. Algorithms are incapable of nuance; they cannot handle rhetorical absurdity, hyperbole, sarcasm or humour; and they are inherently biased against women’s speech.
The need for clarity about how categorisation will be determined
We do not yet have any idea whether Mumsnet will be classified as a Category 1 service, and there are significant differences between the duties imposed on services in each category. This lack of clarity about how services will be categorised creates great uncertainty for businesses such as ours. Ministers should provide clear guidance on how the categories will be assigned, and these criteria should be written into the Bill.
September 2021