Written evidence submitted by LGB Alliance
Submission on the Draft Online Safety Bill
This submission is made on behalf of LGB Alliance. https://lgballiance.org.uk/
We welcome the opportunity to respond to the Committee’s call for comments on the Draft Online Safety Bill and hope this submission is of interest. If you have any questions regarding our response, please contact us at email@example.com
LGB Alliance is a group that represents the interests of a rapidly growing number of lesbian, gay and bisexual people. We represent thousands of LGB people who have grave concerns about the loss of our rights, specifically in relation to moves to replace, in law and elsewhere, the category of ‘sex’ with ‘gender identity’, ‘gender expression’ or ‘sex characteristics’.
We are long-time gay and lesbian activists who fought for the rights of people with a same-sex sexual orientation. These hard-won rights are now under serious threat.
How has the shifting focus between ‘online harms’ and ‘online safety’ influenced the development of the new regime and draft Bill?
The Ministerial Foreword to the Government White Paper is thorough, and we think it strikes the right balance; it sets out ambitions that cover the areas with which LGB Alliance is concerned. There are, however, areas that lack definition, which we discuss below.
The ‘key definitions’ do not include definitions of the words ‘safety’ or ‘harms’. Nor is it explained who is to conduct the ‘risk assessments’, which will be based on an assessment of what is harmful and what is safe. This naively assumes that no special mechanism is needed to root out bias from such assessments. In fact, without specified, robust checks and balances, there is a pervasive risk of censorship on political or ideological grounds. None of the words ‘subjective’, ‘censorship’ or ‘bias’ appears in the text. This blind spot creates a lack of clarity and focus that makes it difficult to assess the text.
This point has been raised before: in the 2020 consultation, several respondents expressed concern regarding the definition of harm; see https://commonslibrary.parliament.uk/research-briefings/cbp-8743/
Nor do the individual sections supposedly clarifying the meanings of harmful and safe actually provide clarity.
The text fails to take account of the online social, political and ideological context. Examples abound of issues in which ideologically opposed groups take a completely different view of what is ‘harmful’ and what is ‘safe’.
LGB Alliance advises the Committee to accept that it must come to grips with the problem of subjective bias and the limits of the restrictions it is proposing, with a view to redrafting the parameters of the text. Unless it engages with this online social context and proposes checks and balances, the draft text is doomed to remain mired in confusion and perpetually challenged by those seeking to impose conflicting interpretations.
Is it necessary to have an explicit definition and process for determining harm to children and adults in the Online Safety Bill, and what should it be?
Yes, given the answer to question 1, LGB Alliance believes that arriving at an explicit definition of harm to children and adults is crucial to the further development of this legislation. We believe that this is such a divisive issue that we propose a Citizens’ Assembly to address it. The ideological lines are drawn so sharply, and the hostility between the fiercest proponents of opposing views is so pronounced, that we cannot see any other way of achieving a solution. We see this issue – what is ‘safe’ and what is ‘harmful’ – as one that is as divisive as, say, abortion in Ireland. It urgently needs to be addressed.
In addition, the Committee must consider the statutory requirement on public bodies (the Public Sector Equality Duty) under the Equality Act 2010 to foster good relations between people who share a protected characteristic and those who do not – taking action where rights conflict, protecting free speech, and curbing the production and dissemination of disinformation, such as the false notions that humans can change sex, that everyone has a ‘gender identity’, and that homosexuals are attracted to others of the same ‘gender’ rather than the same sex.
Does the draft Bill focus enough on the ways tech companies could be encouraged to consider safety and/or the risk of harm in platform design and the systems and processes that they put in place?
An issue that will need to be resolved here is the potential conflict between the definitions of safety and harm that are eventually adopted and the definitions that the tech companies themselves regard as axiomatic. The legal framework must be crystal clear.
Many tech companies have adopted their own internal regulations on which characteristics to protect, which in some cases are at odds with UK law. For instance, some companies do not include ‘sex’ in their protected characteristics, even though it is defined as such in the UK Equality Act 2010.
In addition, LGB Alliance believes that the draft text does not take note of the fact that lesbians in particular are banned from social media platforms disproportionately often for expressing gender-critical views. There is a need for an independent regulator that is transparent and can force platforms to comply with UK law.
The use of algorithms is not an unbiased way to manage screening. Algorithms will reflect the biases of those who design and code them and are seldom written by teams characterised by balance between the sexes. Given the marked underrepresentation of women, these algorithms are likely to have unconscious bias embedded within them. The use of AI presents similar problems.
What are the key omissions to the draft Bill, such as a general safety duty or powers to deal with urgent security threats, and (how) could they be practically included without compromising rights such as freedom of expression?
As stated in the answer to question 1, there is a need for definitions of the key terms ‘safe’ and ‘harmful’ that are not imposed by an ideologically biased lobby.
In addition, the section on freedom of expression delegates to services the onerous task of determining where the imposition of restrictions compromises freedom of expression – without government guidance that could help set the parameters. This is far too fraught an issue to expect services to resolve. A robust new mechanism, free from institutional bias, is needed. The limits of what may be blocked or banned could be discussed by the same Citizens’ Assembly that defines the terms ‘safe’ and ‘harmful’.
Are there any contested inclusions, tensions or contradictions in the draft Bill that need to be more carefully considered before the final Bill is put to Parliament?
We believe that the Bill is built on sand and will need to be overhauled. The complacent assumption that it will be possible to achieve a consensus on what is safe and what is harmful, and the inappropriate delegation to services of the task of deciding where the imposition of restrictions compromises freedom of expression, are crippling flaws at the heart of the text.
What are the lessons that the government should learn when directly comparing the draft Bill to existing and proposed legislation around the world?
We do not advocate that the UK Government look elsewhere for inspiration when seeking to understand the extraordinary pressures, conflicts and tribal oppositions in online discourse.