Written evidence submitted by End the Virus of Racism (OSB0173)




End the Virus of Racism has gathered information showing that reported hate crime and racism affecting East and Southeast Asian (ESEA) communities in the UK has increased since the COVID-19 crisis broke out, and that this increase has been sustained. Publicly released data lags behind our findings, so the most recent verifiable figures are from 2020. For instance, in the first quarter of 2020 there appeared to be a 300% increase on previous years (see this October 2020 report from Protection Approaches).


This is the scale of the problem we are facing, yet we see major gaps in how ESEA communities are being consulted on the Online Safety Bill, and we are concerned about the safety implications of these policy developments for our communities.



About End the Virus of Racism


End the Virus of Racism is an anti-racism campaign group working to tackle structural racism and inequalities affecting East and Southeast Asian communities. This is in the context of rising racism and discrimination towards all minoritised groups, with whom we seek to build allyship and solidarity. We are a registered Community Interest Company (13279897). For more information, please see https://www.endthevirusofracism.com.


This submission was prepared by Hau-Yu Tam, Head of Campaigns at End the Virus of Racism. Hau-Yu is a community worker and campaigner who studied for an MA in International Studies and Diplomacy at SOAS. She is a co-founding member of End the Virus of Racism and formerly its Interim Chair. Previously she has worked as a management trainee within local government, as a Students’ Union sabbatical officer, and in various roles in charities and the education sector.


For further details about this submission please contact Hau-Yu Tam at hau-yu.tam@evresea.com



Our responses to the Joint pre-legislative scrutiny Committee:


Committee Question 14: Are the definitions in the draft Bill suitable for service providers to accurately identify and reduce the presence of legal but harmful content, whilst preserving the presence of legitimate content?


       No, the Bill fails to properly define the concept of ‘harm’. Without a clear definition, the Bill will effectively outsource decision-making on what is and is not permitted online to tech companies.


       The threat of large fines will create a commercial incentive to over-censor, which will disproportionately affect communities such as ours that have historically been censored.


       The vague definition of harm, combined with the threat of massive financial sanctions, gives hate groups a powerful weapon with which to lobby platforms to censor the speech of those they disagree with. It is not hard to imagine hate groups pressuring tech companies to remove Black Lives Matter content in the midst of the past year’s protests. Even if such pressure were unsuccessful, the lobbying itself would cause harm, by giving rise to the notion that whether Black lives matter should be up for debate at all.


       There has been intensified racism toward East and Southeast Asian (ESEA) communities, in the context of rising racism towards all minoritised communities across the UK and globally. The Runnymede Trust’s ‘England Civil Society Submission to the United Nations Committee on the Elimination of Racial Discrimination’ - to which End the Virus of Racism contributed - summarises how:


“There has been a rise in incidents of hate crimes against British Chinese and East and South-East Asian (ESEA) communities during the COVID-19 pandemic. At the beginning of the pandemic, Ipsos Mori found that one in seven people in the UK intentionally avoided ‘people of Chinese origin or appearance’. At that time, the UN Special Rapporteur on minority issues raised concerns about the role of politicians in exploiting fears surrounding COVID-19 to scapegoat communities, particularly Chinese and other ESEA communities, leading to a rise in violence against them. Data from the London Metropolitan Police also shows that hate crimes towards ESEA communities tripled in the first quarter of 2020 and doubled in the second quarter compared with previous years”. It is important to note that the Met Police tracks reported hate crime; according to our own research and consultation with organisations working to tackle hate crime and racism, the problem is vastly underreported. Data we have acquired from police forces (which will become available on our website in the coming weeks) shows similar patterns of rising reported hate crime across all regions of the UK, well into the last quarter of 2020.


       This intensified racism is a trend reflected not only on the ground but also online. Research conducted by the AI startup L1ght in April 2020 yielded worrying statistics: a 900% increase in hate speech on Twitter directed towards China and the Chinese as the coronavirus crisis broke, a 200% increase in traffic to hate sites and specific posts against Asians, a 70% increase in hate speech between children and teenagers in online chats, and a 40% increase in toxicity on popular gaming platforms such as Discord.


       We believe that the communities and community organisations most at risk of harm from online abuse and censorship should be widely engaged in this definition-shaping process and in the process for determining harm to children and adults. End the Virus of Racism is one of very few organisations serving ESEA communities in the UK. We have established ourselves as a recognised advocacy group, and our network is well connected to ESEA communities, especially in London and other major cities across England and Scotland. Our primary mission is to tackle structural racism and inequalities affecting our communities. We are therefore well placed to contribute to this process, and should be given the platform to do so.



Committee Question 20: Are there any foreseeable problems that could arise if service providers increased their use of algorithms to fulfil their safety duties? How might the draft Bill address them?


       The Online Safety Bill will give further power to algorithms that suffer from racial biases and disproportionately censor minoritised and marginalised voices.


       Discriminatory bias can be found in the unequal way content is moderated across different languages. For example, content is twice as likely to be deleted if it is in Arabic or Urdu as if it is in English (House of Lords Communications and Digital Committee 2021, page 15).


       Artificial intelligence programmes such as facial recognition applications suffer from being trained on predominantly white, male datasets, with errors occurring up to 20% of the time for people with darker skin (MIT News 2021).



Committee Question 21: Does the draft Bill give sufficient consideration to the role of user agency in promoting online safety?


       An online safety bill that does not take into consideration ESEA voices and the experiences of our communities is not a bill that prioritises our communities’ online safety. In our jointly written ‘Response to the Call for Evidence on Ethnic Disparities and Inequality in the UK’, from January 2021, we wrote:

“The power and reach of social media should be harnessed to highlight effective initiatives directed at improving ESEA representation and to coordinate responses from the ESEA community when direct action needs to be taken. For example, much more needs to be done to protect the ESEA community from online abuse and harms (companies like Instagram and Facebook do not treat racist language towards ESEA people with the same gravity as racism towards people of other ethnic backgrounds). We need zero-tolerance of direct racism and discrimination on social media platforms. Simple action along these lines will go a long way to reduce the level of discrimination and direct racism experienced by ESEA people and to further the UK’s mission to be an ‘inclusive country of equal opportunity and representation’.”



Committee Question 28: How will Ofcom interact with the police in relation to illegal content, and do the police have the necessary resources (including knowledge and skills) for enforcement online?


       Racist abuse is a crime, but online racist abuse is not being prosecuted by the police. Instead of outsourcing online policing to Ofcom and Silicon Valley, the police need to be given the resources to tackle content that is already illegal under current legislation.


       At the same time, End the Virus of Racism also works to hold institutions such as police forces to account. The aforementioned Runnymede CERD submission made this its main recommendation for redressing racism toward ESEA communities: “The UK government should undertake an inquiry into the response of the police to hate crime against Chinese, East and South-East Asian communities in England and Wales, including investigation of the offences involved, prosecutions and outcomes of prosecutions, quality of support for victims, and effectiveness of any preventative measures undertaken.” At least there are mechanisms and precedents for holding the police to account; this will be far more difficult once Ofcom and Silicon Valley are involved.


       The Bill makes it harder to prosecute criminals and protect victims of racial abuse because it forces platforms to delete valuable evidence before the victims of targeted harassment or threats can report it to the police. Hate speech is a significant concern for our communities, which is why we want to see abusers prosecuted by the police rather than their abuse simply deleted. As an organisation serving underserved communities, we would above all prefer to see government resources used to strengthen community-based interventions and approaches, rather than punitive approaches that may not tackle the roots of racism.


       Between 2017 and 2019, fewer than 1% of cases reported to the police’s online hate crime unit resulted in charges (Independent 2019).


       31% of Asian people in the UK have observed cruel or hateful content online (The Alan Turing Institute 2019).



28 September 2021