Written evidence submitted by Centenary Action Group (OSB0047)

 

This joint submission is written by Centenary Action Group, a coalition of organisations campaigning to remove the barriers to women’s political representation, alongside Glitch, Antisemitism Policy Trust, Inclusion London, Stonewall, Compassion in Politics, Women’s Equality Party, The Traveller Movement, Women’s Aid Federation of England (Women’s Aid), #NotYourPorn, Imkaan, Girlguiding, The Jo Cox Foundation, End Violence Against Women Coalition (EVAW), Womankind Worldwide, Equality Now and Julia Slupska, Oxford Internet Institute.

Introduction

The Online Safety Bill is a pivotal opportunity to tackle online abuse against women and girls. When announced, this Bill set out its aim to make the UK the ‘safest place in the world to be online’[1]. If the Bill continues to ignore the huge impact that gender-based online abuse has on women and girls in the UK, the Government will not only fail in its aim of creating world-leading legislation in this area; it will also miss the opportunity to bring an end to a serious and widely prevalent form of violence against women and girls.

Summary of Recommendations:

1. Include specific measures to address online harms against women and girls and those with multiple protected characteristics, including women in political and public life, and treat online harms as seriously as in-person harms.

2. Ensure joined-up working between the Online Safety Bill and other relevant UK Government workstreams, including the Online Media Literacy Strategy, the Violence Against Women Strategy, the Domestic Abuse Strategy, and the recommendations from the Law Commission’s review of hate crime, including misogyny, and its reforms to protect victims of online abuse and safeguard freedom of expression.

3. Retain ‘legal but harmful’ as a category of harm, requiring platforms to use systems and processes to reduce the amplification of such content.

 

Online Abuse Against Women

Definition

 

The Council of Europe Convention on Preventing and Combating Violence against Women and Domestic Violence 2011 (Istanbul Convention), to which the UK Government is signatory, defines Violence Against Women and Girls (VAWG) as “a violation of human rights and a form of discrimination against women and shall mean all acts of gender-based violence that result in, or are likely to result in, physical, sexual, psychological or economic harm or suffering to women, including threats of such acts, coercion or arbitrary deprivation of liberty, whether occurring in public or in private life.” Women and girls are subject to disproportionately high volumes of violence, sexualised abuse and hate online. This is known as ‘online VAWG’: a wide and growing set of harms that includes, but is not limited to, intimate image abuse, online harassment, the sending of unsolicited explicit images, coercive ‘sexting’, and the creation and sharing of ‘deepfake’ pornography.

Prevalence and disproportionate impact of online VAWG

 

VAWG is increasingly perpetrated online, both through specific online crimes and through the use of technology to perpetrate ‘traditional’ crimes, and it is compounded by multiple forms of discrimination, intersecting with racism, homophobia, ableism and other forms of oppression. One in five women in the UK have suffered online abuse or harassment[2], and Black women are 84% more likely than white women to be mentioned in abusive or problematic tweets[3]. This has a damaging effect: girls and young women of colour are more likely not to use social media (33% of those aged 11 to 21, compared with 24% of those who are white)[4]. Trans women are particularly vulnerable to online abuse; Galop found that nearly 60% of respondents had experienced transphobia online[5]. Similarly, Gypsy, Roma and Traveller (GRT) women experience online harm on the basis of both their gender and their ethnicity. Internal shaming also occurs online within the Irish Traveller community, through social media ‘shame pages’ that similarly rely on discrimination on the basis of gender, gender identity and sexual orientation[6]. Going forward, there must be meaningful engagement with the specialist ‘led by and for’ sector about the wider impact of online harms on women who experience the material and lived reality of structural inequality and discrimination. In addition, we are concerned more widely about the development of a sanctioned surveillance culture that could profile and target Black and minoritised communities and could put survivors at further risk of harm, as highlighted by EVAW, Imkaan and the Angelou Centre earlier this year[7].

The relationship between online and offline VAWG

 

Online VAWG takes place in the context of gendered norms of popular culture that can reinforce harmful stereotypes and gender inequality. While data analysis in this area is still fairly underdeveloped, what is known is that women are 27 times more likely to be harassed online than men[8]. Online VAWG should be understood as part of a continuum of abuse that often takes place offline too, and it presents a real and present danger. Online abuse and harassment can form part of a pattern of coercive and controlling behaviour, which can encompass physical, emotional and psychological, financial, and sexual abuse. Research by Women’s Aid found that 85% of women who experienced online abuse from a partner or ex-partner said that it was part of a pattern of abuse they also experienced offline[9]. The latest data on intimate image abuse (also known as image-based sexual abuse) showed that 82% of prosecutions were flagged as domestic abuse-related, further emphasising how online abuse operates as part of the continuum of VAWG[10].

To take but one example, so-called ‘incels’ believe in a conspiracy theory that they are being prevented from having sexual relationships with women. Misogyny abounds in these spaces and morphs into violent chatter, including celebrating the murder of women and calling for women’s rights to be curtailed. This spills over into offline violence: at least seven mass killings have been carried out by members of the online incel community. The incel internet subculture is believed to have motivated the Plymouth gunman who killed five people in August 2021[11], demonstrating the increasingly high level of danger presented by this online harm.

Digital Threats to Democracy

Women in public life are disproportionately targeted by online abuse. No female MP who was active on Twitter during the 2017 General Election was free from online intimidation. During the election, Black and Asian women MPs, despite representing only 11% of all women in Westminster at the time, received 35% more abusive tweets than white women MPs[12]. Online violence can hamper women parliamentarians’ ability to fulfil their mandates offline, and can even prevent them from running for further terms in office by forcing a ‘choice’ between their mental health and safety and their political career[13].

Impact of the pandemic

The use of digital spaces has increased significantly in light of COVID-19, and with it have come reports of an increase in abuse and harassment online. Glitch and EVAW’s nationwide survey showed that 41.5% of 480 respondents had faced online abuse since the beginning of the pandemic[14]. This proportion increased to 50% for women and non-binary people of colour. One respondent stated that “the line between online and offline abuse is virtually non-existent”, and online experiences lead to offline behavioural changes. For example, someone experiencing online abuse may take measures to increase their personal safety both online and offline, such as seeking police assistance, changing travel plans or cancelling events[15]. The increased anxiety, stress and hurt caused by online abuse has behavioural and psychological impacts for survivors both online and offline. In 2019, for example, a woman took her own life after her ex-boyfriend shared an intimate video of her with his friend and threatened to share it with her family[16].

‘Whole System Response’ Required

Following the UK Government’s recently published VAWG Strategy[17], we continue to call for a ‘whole system response’ to women’s experiences of violence and abuse. All parts of society must be held accountable for understanding, identifying and bringing an end to all forms of violence against women and girls: from schools to health services, the police to transport, business, housing and, critically, online spaces too. That is why the Online Safety Bill is a pivotal opportunity to take action against online abuse against women. However, online VAWG is currently missing from the Bill, and this must be urgently addressed.

Online Safety Bill: Recommendations

 

Below are the specific aspects of the Bill that we recommend be reformed to ensure online VAWG is addressed.

 

Definition of Harm

 

Online VAWG should be recognised as a wide and growing set of harms, including intimate image abuse, online harassment, the sending of unsolicited explicit images, coercive ‘sexting’, and the creation and sharing of ‘deepfake’ pornography. The harms caused particularly to young women and girls by appearance pressures and body image issues, including bullying around appearance and harmful advertising around beauty, fitness and weight loss, must also be recognised.

 

It is imperative that definitions of harms are responsive to the ever-changing tactics and behaviours relating to emerging online VAWG, and are not so prescriptive as to become quickly outdated. Any measures or definition within the Bill to address online VAWG must also take account of the intersectional harm of online VAWG against, for example, Black and minoritised women, Disabled women and LGBTQIA+ people. While recognition of intersectionality within the Bill is welcomed, a clear explanation of how this must be addressed in practice is also needed. The recognition of online VAWG as a harm within the Bill must be part of a wider, holistic response to online VAWG that includes sufficient funding of specialist VAWG support services, with ring-fenced funding for ‘by and for’ Black and minoritised women’s VAWG organisations.

 

There are concerns around the proposal that companies designated as ‘category one’ in the draft Bill will be under a duty to protect ‘content of democratic importance’, potentially at the expense of reducing hateful content online[18]. It is vital that this duty is not used as a pass for misogynist abuse, and future guidance from Ofcom must make this clear. The current prevalence of online abuse is itself a threat to the freedom of expression of those whom it deliberately targets; it is therefore in the interest of freedom of expression that online abuse and violence against women are effectively mitigated and ended, in order to secure women’s rights to express themselves freely both online and offline. It is unhelpful to frame preventing online abuse and protecting freedom of expression as a binary choice.

 

Intimate Image Abuse

 

In a recent report on intimate image abuse based on interviews with victim-survivors, Prof. Clare McGlynn and colleagues[19] describe the ‘profound social rupture’ victim-survivors experience following intimate image abuse: a devastation that drastically changes victim-survivors’ lives. The Bill must introduce measures to tackle intimate image abuse by recognising intimate image abuse as a sexual offence. This should be a straightforward offence, to strengthen clarity and understanding. There must be no requirement to prove motivation as part of the offence, as this does not accurately reflect the overlapping, shifting motivations of intimate image abuse perpetrators and would be inconsistent with most criminal law offences.

Evidence suggests there is rarely a single, clearly identifiable motive for perpetrating intimate image abuse. Motives are overlapping, interconnected and also closely linked to overarching cultural attitudes of entitlement, dominant masculinity and power. Seeking to separate motives does not reflect the reality of these abusive practices and risks undermining our developing understanding of motives. It also risks minimising the reality of cultural attitudes around dominant masculinity and entitlement which underpin much intimate image abuse. It risks individualising motives and does not recognise that motives can (and do) vary and change over time. Introducing specific motive requirements risks the law becoming dated and ineffective as new harmful cultures and motivations emerge.

 

Imkaan and the Angelou Centre have highlighted that Black and minoritised women and children have distinct and intersecting experiences of image-based harm. These are often perpetrated through numerous cultural and social lenses as part of a spectrum of VAWG, and should not be positioned within culturally aligned stereotypes. We do not believe that this has been adequately acknowledged in the proposals of the Bill, and we raise concerns as to how far the lived experiences of Black and minoritised women and children survivors of intimate image-based harm have been considered.

 

Alongside introducing a specific intimate image abuse sexual offence, there must be automatic anonymity for all those reporting any form of intimate image abuse, and further measures to protect complainants during any trial process. Automatic anonymity is vital to encourage victim-survivors to report their abuse and to pursue cases. Additionally, civil legal aid covering legal advice should be extended to support survivors with intimate image abuse cases.

 

Cyberflashing

 

We welcome the UK Government’s consideration of the Law Commission’s recommendation that cyberflashing be made an offence in the Bill, which recognises that cyberflashing must be addressed in law. However, the Law Commission’s proposals centre the offence on the perpetrator’s motivation to cause distress, alarm or humiliation, rather than on the core wrong of cyberflashing. Motivations for cyberflashing are varied and overlapping: men may wish not only to cause distress but also to ‘be funny’ or to boost their status among friends. Furthermore, motivation is ultimately hard to prove in court. The Law Commission’s proposals would therefore result in very few prosecutions, further deepening the lack of trust survivors may have in the criminal justice system as a result of dismally low sexual offence prosecution rates. Instead, legislation should recognise cyberflashing as a harmful intrusion into and encroachment on women’s personal space, rather than focus on the motivation of perpetrators[20].

 

User Safety

 

The Bill must include greater recognition that online forms of abuse and VAWG do not exist in the ‘virtual world’ alone: they are part of a continuum of violence against women, and are among the tactics used by perpetrators of abuse both online and offline, for example as a form of intimate partner violence.

 

The online world must no longer be seen as less ‘real’ than the offline world. Nor is it acceptable to suggest that people who are subjected to abuse simply remove themselves from the online space as a way of mitigating harm against themselves. For example, Disabled women are often advised not to engage online because of the abuse aimed at them both for being women and for being Disabled. This victim-blaming approach is not an acceptable solution. More needs to be done to create a safe and accountable space in which social media companies have a legal responsibility to remove and report criminal activity in a timely manner and to put support in place for any woman who is targeted while using their platforms. This support needs to be accessible and available in person as well as online.

 

User Redress and Advocacy

 

In the current proposals, enforcement powers are very weak and Ofcom cannot do anything substantial if companies fail to comply with their duties on legal harms. Prof. Clare McGlynn and Prof. Erika Rackley recommend that the regulator be given the power to order the take-down of images, to offer advice and assistance to victim-survivors, and to provide specialist support services such as counselling and legal advocacy. It could also lead public information and educational initiatives to challenge cultural attitudes. An e-safety commission, or equivalent, would create a powerful supportive pathway for victim-survivors to take back control[21].

 

Online providers must have clear obligations on preventing, prohibiting and responding to online VAWG, clear expectations for responding to civil and criminal investigations and proceedings in VAWG cases, and clear expectations for how evidence is handled, stored and shared. Providers should also ensure that users are informed of these obligations and clearly informed of the steps to seek redress.

 

During the COVID-19 pandemic, ‘Trusted Flaggers’ have been relied on to monitor content on platforms such as Facebook and YouTube. While this is more time-efficient than ordinary users’ reporting mechanisms, it outsources the companies’ duty of care to charities and non-profits without providing the financial resources needed for the approach to be truly effective. For example, just one member of the Traveller Movement holds Trusted Flagger status, with responsibility for flagging a plethora of harmful pages and posts online. Because of the limited number of Trusted Flaggers and the lack of resources, harmful pages remain online for longer than they should. There are accompanying questions around who is trusted, and how victim-survivors can be in control of their own circumstances if they wish to be. A survivor-centred approach would empower survivors to remove harmful content themselves, without having to meet Trusted Flagger requirements to “flag a large volume of videos with a high rate of accuracy to join the Trusted Flagger program”[22].

 

To improve user redress and advocacy, funding should be given to organisations supporting survivors of online abuse and working on prevention, with 50% of this funding going to ‘by and for’ led Black and minoritised specialist organisations. Funding can be provided by ring-fencing 10% of the Digital Services Tax which, according to the Office for National Statistics, raised £29 million in its first month of operation alone. The ‘polluter pays’ principle, endorsed by the OECD for almost 50 years, suggests that the companies enabling these harms to society should pay to help rectify the damage. By ring-fencing at least 10% of this new tax annually for ending online abuse, the UK Government can commit at least £3.5 million to further establishing online standards which are fair and necessary to the growing digital economy. To combat online abuse and violence efficiently and effectively, this 10% should be pledged to organisations to help fund their vital work to end online abuse, such as training on online safety, flagging harmful content, providing policy advice and specialist support for survivors.
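
To illustrate the arithmetic behind the £3.5 million figure (our reading of the numbers above; the £35 million annual floor is an assumption for illustration, not an official forecast): with the tax having raised £29 million in its first month alone, annual receipts can conservatively be expected to exceed £35 million, so ring-fencing 10% yields

$$
0.10 \times £35\text{ million} = £3.5\text{ million per year.}
$$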

Media Literacy

 

The UK Government must effectively deliver the newly published Online Media Literacy Strategy and promote ‘Digital Citizenship’ education as a key priority. More investment in impactful digital citizenship across society (and across age groups) is needed to make the Internet a safer space, tackle mis- and disinformation, and remove the onus from individual victims. Digital Citizenship means respecting and championing the human rights of all individuals online, and encompasses three key elements: individual, social and institutional responsibilities. Glitch identifies four main pillars of digital citizenship:

●        Digital Self-Defence: Using online tools to protect ourselves and others in online spaces

●        Digital Self-Care: Creating boundaries in digital spaces to look after our wellbeing

●        Online Active Bystander: What to do when you see someone else experiencing online abuse

●        Tech Accountability: Understanding how to hold tech companies accountable

 

Governmental institutions and tech companies both have a role to play in digital citizenship, ensuring that individuals can exercise their online rights whilst protecting the rights of those with multiple and intersecting identities. The UK Government must also prioritise digital citizenship education for all, using a public health approach that examines the wider impact of online abuse on a community, rather than treating online abuse as a series of single incidents. For example, more education is needed on misinformation and disinformation and how they operate to harm marginalised groups, including the LGBTQIA+ community, at an even higher rate. We know that misinformation causes many LGBTQIA+ people to retreat from public spaces, both online and offline; any citizenship education should therefore be informed by a protected characteristics approach.

 

Gypsy, Roma and Traveller (GRT) voices echo the significance of ‘Digital Self-Defence’ and ‘Tech Accountability’ as crucial avenues for tackling online harm[23]. Certain minority groups have limited literacy and education levels that make them vulnerable to harmful misinformation, conspiracy theories and abuse. Gypsy and Traveller women and girls, for example, have been misinformed online about the health impacts of certain diet pills and nasal tanning sprays that cause severe harm[24]. Digital exclusion on the basis of literacy, digital skills and education levels is also highly relevant in the context of reporting: one’s ability to read and comprehend content must be at a very high level before one can report abusive content. This requirement manifests as digital exclusion for certain minority groups. If a victim-survivor or advocate has limited literacy and digital skills, they have little chance of challenging online abuse or getting anything removed. Removing this option from minority groups is disempowering. Targeted digital citizenship education is therefore also beneficial in ensuring the empowerment of all minoritised communities.

 

Anonymity

 

Debates around anonymity and online abuse must acknowledge that online abuse occurs both on platforms that allow anonymous accounts (such as Twitter) and on those that do not (such as Facebook). Nevertheless, there is a link between anonymity and accountability that needs to be addressed, without infringing on the right to be anonymous online when not perpetrating abuse and online harms. Certain communities, such as the LGBTQIA+ community and activists, use anonymity for legitimate reasons, including safety, particularly when their activities or identities are seen as challenging authority. Much online abuse is both harmful and illegal, yet many perpetrators are not held accountable by law enforcement because of the difficulty of tracing them. Given the prevalent use of anonymous accounts in online abuse aimed at politicians and at the high-profile footballers who campaign on this issue, a balance must be struck between improving the traceability and accountability of perpetrators of online harm and not undermining the anonymity of legitimate online actors who use pseudonyms, whether for safety reasons or otherwise. One possibility could be a twin-track approach: giving all users the option of “verified” account status, together with the ability to filter out of their news feed and DMs (direct messages) any account that is “unverified”.

 

Tackling legal but harmful content

 

As it stands, social media platforms will be required to minimise the presence of illegal content on their sites and prevent the circulation of content that is harmful to children. However, no such requirements exist in the Bill for content that is harmful to adults (“legal but harmful”). ‘Category One’ platforms will need to produce terms and conditions for their users to follow, but this shifts the duty of responsibility off the shoulders of the tech platforms and runs counter to the stated intention of the Bill: to tackle abuse through the design of the platforms. Companies should be expected to reduce abuse by design, rewriting algorithms to stop harmful material being promoted into people’s feeds and catching abusive material before it can be posted.

 

Set minimum standards

 

The Bill needs to set minimum standards for the terms and conditions social media platforms must produce. Otherwise, there will be a perverse incentive for platforms to produce weak T&Cs: their profit margins depend on enabling as many people as possible to share as much content as possible, whatever the repercussions. We need to draw a very clear line in the sand about the type of material that should not be shared online and the actions platforms must take to remove it. The Government should empower Ofcom to establish and enforce these minimum standards.

 

20 September 2021


[1] https://www.gov.uk/government/news/making-britain-the-safest-place-in-the-world-to-be-online

[2] Amnesty International 2017 https://www.amnesty.org/en/latest/press-release/2017/11/amnesty-reveals-alarming-impact-of-online-abuse-against-women/

[3] Amnesty International UK 2018 https://www.amnesty.org.uk/press-releases/women-abused-twitter-every-30-seconds-new-study

[4] Girls’ Attitudes Survey, Girlguiding 2020 https://www.girlguiding.org.uk/globalassets/docs-and-resources/research-and-campaigns/girls-attitudes-survey-2020.pdf

[5] Galop, Trans Hate Crime Report 2020: https://galop.org.uk/wp-content/uploads/2021/06/Trans-Hate-Crime-Report-2020.pdf 

[6] Traveller Movement, Shaming Report (forthcoming)

[7] End Violence Against Women Coalition and Faith and VAWG Coalition, Response to the Law Commission Consultation on Intimate Image Abuse, May 2021 https://www.endviolenceagainstwomen.org.uk/wp-content/uploads/FINAL-EVAW-and-Faith-and-VAWG-Coalition-Intimate-Image-Abuse-Consultation-response-May-2021-1.pdf

[8] Her Net Her Rights – Mapping the state of online violence against women and girls in Europe

[9] Women’s Aid, Online and Digital Abuse

[10] ONS (25 November 2020) Domestic abuse and the criminal justice system, England and Wales: November 2020

[11] https://www.thetimes.co.uk/article/plymouth-shooting-incel-groups-radicalising-boys-as-young-as-13-8jzffqzn0

[12] https://www.amnesty.org.uk/online-violence-women-mps

[13] Inter-Parliamentary Union, Sexism, harassment and violence against women in parliaments in Europe, 2018 https://www.ipu.org/resources/publications/issue-briefs/2018-10/sexism-harassment-and-violence-against-women-in-parliaments-in-europe

[14] Glitch and EVAW, Ripple Effect Report 2020: https://glitchcharity.co.uk/wp-content/uploads/2021/04/Glitch-The-Ripple-Effect-Report-COVID-19-online-abuse.pdf

[15] A staff member for Dawn Butler MP bought a stab-proof vest because of online abuse https://www.theguardian.com/society/2021/jul/01/social-networks-facebook-google-twitter-tiktok-pledge-to-tackle-abuse-of-women-online

[16] ‘Broken by revenge porn, my beautiful girl killed herself’, The Times, August 2019 https://www.thetimes.co.uk/article/broken-by-revenge-porn-my-beautiful-girl-killed-herself-70dsff92h

[17] https://www.gov.uk/government/publications/tackling-violence-against-women-and-girls-strategy

[18] https://www.hopenothate.org.uk/2021/06/07/hope-not-hates-response-to-the-draft-online-safety-bill/

[19] McGlynn et al. (2019) Shattering Lives and Myths: A Report on Image-Based Sexual Abuse

[20] https://www.independent.co.uk/voices/cyberflashing-new-law-online-safety-bill-b1888633.html

[21] Intimate Image Abuse – Policy Briefing on Law Commission Consultation Professor Clare McGlynn and Professor Erika Rackley: https://claremcglynn.files.wordpress.com/2021/05/mcglynnrackley-stakeholder-briefing-5-may-2021-final-1.pdf

[22] https://support.google.com/youtube/answer/7554338?hl=en#zippy=%2Cngos-and-government-agencies

[23] Traveller Movement, Shaming Report (forthcoming)

[24] Traveller Movement, ongoing casework