Written evidence submitted by the LGBT Foundation (OSB0045)
LGBT Foundation is submitting this response to the DCMS Sub-Committee inquiry into the Online Safety Bill. LGBT Foundation is a national charity that works to support, amplify, and empower the LGBTQ+ community. We directly serve over 40,000 people annually and provide online support and advice to over 600,000 individuals, more than any other organisation of our size in the sector. Our work relies heavily on the ability of individuals to contact us online and express their needs in an open and transparent space, including discussing sensitive information that may be (incorrectly) seen as controversial or harmful.
Is it necessary to have an explicit definition and process for determining harm to children and adults in the Online Safety Bill, and what should it be?
Yes, it is necessary to have an explicit definition and process for determining what constitutes “harm” and what content should be considered “harmful” in the new Bill. Without a clear definition, the Bill will effectively outsource decision-making on what is and is not permitted to tech companies. This will introduce avenues for discrimination against LGBT users and/or creators of LGBT-related content, with little recourse for appeal in the absence of specified standards.
The threat of large fines will create a general commercial incentive to over-censor, which existing evidence indicates will also lead to the disproportionate over-censorship of LGBT content as compared with other content. For example, TikTok repeatedly removed the intersex hashtag, which the company later said was a mistake in its moderation,[1] and LinkedIn removed a mother’s post celebrating her son coming out by wearing a dress to his prom.[2]
In relation to LGBT users and LGBT-related content, it is particularly important that “harm to children” be clearly defined, given significant evidence of non-explicit LGBT-related content being discriminatorily mis-classified as ‘mature’ or ‘adult’ by large platforms such as Tumblr and YouTube.[3] This has resulted in young people browsing YouTube in Restricted Mode being unable to access content about LGBT rights, history, identity, and discrimination, in one case including advice videos produced by an LGBT youth charity; and in some LGBT content and users being permanently banned from Tumblr after its December 2018 decision, prompted by changes to US law, to prohibit ‘adult content’. On YouTube, the restriction and demonetisation of videos mis-classified as ‘mature’ has directly caused financial harm to LGBT users, who in some cases have lost their livelihoods and/or been forced to stop creating content for the platform because of the resulting fall in advertising revenue. Notably, in late 2019 several LGBT YouTubers filed a class action lawsuit against the platform alleging “discrimination, deceptive business practices and unlawful restraint of speech.”[4] Though YouTube maintains a public list of guidelines for content demonetisation, the list is both broad and vague, making it very difficult for users to understand what these standards are in practice and how they are enforced.[5]
These examples show that LGBT content has already been disproportionately censored online when decisions over what might be “harmful” are left in the hands of private tech companies, especially companies with an international reach operating across countries with differing approaches to LGBT identities. While the Bill outlines an intention for in-scope companies to consider freedom of expression and routes for user appeal, in practice these safeguards require a clear definition and process for determining harm to children and adults that actively counteracts the evident existing tendency towards discriminatory over-censorship. Without such a definition, the Bill will both exacerbate this censorship and codify it into law, leaving LGBT users at even greater risk of discrimination.
Does the draft Bill focus enough on the ways tech companies could be encouraged to consider safety and/or the risk of harm in platform design and the systems and processes that they put in place?
No, the draft Bill does not focus enough on the risks in the systems and processes that tech companies put in place. In fact, as it stands, it will incentivise discrimination against marginalised communities.
The draft Bill will necessitate a massive increase in the use of AI moderation programmes, which disproportionately censor LGBT people. Yet the Bill fails to provide guidance or legal parameters to ensure that British anti-discrimination law is taken into account when these AI tools are designed.
We have already seen how AI tools silence LGBT voices. In the YouTube and Tumblr examples above, both companies blamed the discrimination on the design of the algorithms they use to moderate and classify content. Despite this acknowledgement of issues with AI tools, little has changed, and users continue to face a consistent lack of transparency about how these systems are designed.[6]
Algorithmically-driven discrimination is an issue for search engines as well as social media platforms. A recent study found that, from 3 to 7 February 2020, “LGBT” and related search terms in Google News “consistently provided a prominent platform for evangelical Christian and far-right perspectives on LGBTQ issues” rather than supportive perspectives or LGBT-focused content from LGBT people themselves.[7] Meanwhile, a 2019 study by CHEQ, an advertising verification company, found that advertisers’ keyword blacklists inappropriately targeted up to 73% of neutral or positive content from LGBT outlets such as The Advocate and PinkNews.
While escalating ‘complex cases’ to human moderators has been suggested as a potential mitigation, similar issues with discrimination have plagued Facebook’s existing hybrid AI-human moderation and review system. For example, the company has a history of inappropriately blocking LGBT advertisements[8] and of enforcing its ‘hate speech policies’ in a way that negatively targets the very marginalised users those policies were intended to protect.[9] This is due to a combination of factors: insufficient understanding on the part of company leadership of how marginalisation and discrimination operate; inadequate one-size-fits-all policies that are nevertheless inconsistently applied; a lack of transparency and accountability about how the systems operate and how decisions are made, with insufficient mechanisms for individual users to meaningfully appeal decisions; and a consistent failure to commit adequate resources to addressing these issues.
In particular, the Bill should consider in greater detail the mechanisms for individual users to appeal against the censorship of their content, and the ways in which the Bill might create additional incentives for platforms to swiftly remove legitimate content. The LinkedIn incident described above is only one example of complaints and reporting mechanisms being abused in a homophobic or transphobic manner to target LGBT content, creating new means of harassing LGBT individuals. While the Bill places the onus on platforms to swiftly remove potentially “harmful” content under threat of significant fines, without greater consideration of the potential for discriminatory abuse of these systems, users will be left even more vulnerable to such forms of harassment.
In summary, while the Bill is well-intentioned and in a limited sense seeks to anticipate and mitigate the issues that could arise from poor platform design, as it stands it is inadequate to the actual scope, complexity, and potential impact of these issues. The wide-ranging evidence of discrimination against LGBT communities resulting from the existing systems and processes employed by major tech companies shows that significantly more detailed consideration of these issues is needed to ensure that this discrimination is mitigated rather than exacerbated by the Bill.
17 September 2021
[1] Kait Sanchez, “TikTok says the repeat removal of the intersex hashtag was a mistake: The tag’s disappearance caused frustration for activists,” The Verge, June 4, 2021, https://www.theverge.com/2021/6/4/22519433/tiktok-intersex-ban-mistake-moderation-transparency
[2] Tom Williams, “Brave teen came out to classmates by coming out in a dress for his prom,” Metro, July 28, 2021, https://metro.co.uk/2021/07/28/linkedin-removed-mums-proud-post-of-her-son-coming-out-in-prom-dress-15004631/
[3] Claire Southerton, Daniel Marshall, Peter Aggleton, Mary Lou Rasmussen & Rob Cover. 2021. “Restricted Modes: Social Media, Content Classification and LGBTQ Sexual Citizenship.” New Media & Society, 23:5, 920-38. https://doi.org/10.1177/1461444820904362.
[4] Jenny Kleeman, “SNL producer and film-maker are latest to accuse YouTube of anti-LGBT bias: The 12 complainants in the class action lawsuit say an algorithm that restricts content is an attempt to push them off the platform,” The Guardian, November 22, 2019, https://www.theguardian.com/technology/2019/nov/22/youtube-lgbt-content-lawsuit-discrimination-algorithm
[5] Aja Romano, “A group of YouTubers is trying to prove the site systematically demonetizes queer content: They reverse-engineered YouTube’s ad revenue bot to investigate whether it’s penalizing queer content,” Vox, October 10, 2019, https://www.vox.com/culture/2019/10/10/20893258/youtube-lgbtq-censorship-demonetization-nerd-city-algorithm-report
[6] Ibid.
[7] April Anderson & Andy Lee Roth. 2020. “Queer erasure: Internet browsing can be biased against LGBTQ people, new exclusive research shows.” Index on Censorship, 49:1, 75-7. https://doi.org/10.1177/0306422020917088.
[8] Sera Golding-Young, “Facebook’s Discrimination Against the LGBT Community: The company rejected an ad of a same-sex couple, but took no issue with a similar ad of a heterosexual couple,” ACLU, September 24, 2020, https://www.aclu.org/news/lgbtq-rights/facebooks-discrimination-against-the-lgbt-community/
[9] Dottie Lux & Lil Miss Hot Mess, “Facebook’s Hate Speech Policies Censor Marginalized Users,” Wired, August 14, 2017, https://www.wired.com/story/facebooks-hate-speech-policies-censor-marginalized-users/