Dr Kim Barker[1] and Dr Olga Jurasz[2] – written evidence (FEO0099)

 

 

House of Lords Communications and Digital Committee inquiry into Freedom of Expression Online

 

Summary and Key Points:

 

 

Public Policy Recommendations:

 

1.       We advocate for an Independent UK Adjudicatory Body to address grievances between online platforms and users.

 

a.     This should operate to allow users to take their disputes – resolved or otherwise – for independent adjudication.

 

b.     It should capture within its remit any platform operating within the UK.

 

c.      Importantly, it must allow users to self-refer, rather than wait for a platform referral.

 

d.     A diverse range of appointments must be made to the independent adjudicatory body, including experts in violence against women and girls (VAWG).

 

2.       Greater conceptual coverage of online harms is a prerequisite for any regulatory regime. This must include explicit recognition of a broad range of harms.

 

3.       There is a pressing need for greater cohesion across numerous policy areas and national strategies. For example, the VAWG Strategy should be a factor in discussions of online regulation.

 

4.       States have human rights obligations; these should be factored into any legislative action and Government policy.

 

5.       The public health agenda needs to address the impact of online violence against women, especially in the context of mental health. A rights-based approach to this issue is paramount, as recognised by the UN.[3]

 

 

Opening Comments

 

1.            We welcome the Committee’s inquiry into freedom of expression online and the opportunity to share our expertise through the submission of written evidence.

 

2.            We are responding to the call for evidence in our capacity as experts on social media abuse, online violence against women, online misogyny, and internet regulation. We have been working on issues relating to the harassment of women and girls in online spaces since 2013. We have in the past made significant contributions to the UN calls for evidence on online violence against women, to the Bracadale Review on Hate Crime in Scotland, the UK House of Commons Women and Equalities Committee inquiry into sexual harassment of women and girls in public spaces, the Scottish Government’s ‘One Scotland: Hate Has No Home Here’ consultation on amending Scottish hate crime legislation, and to the Online Harms White Paper consultation. In addition, we have made representations to the Scottish Government as to the need to amend legislation to cover a wider range of harassing and abusive behaviours online.

 

3.            We are authors of Online Misogyny as a Hate Crime: A Challenge for Legal Regulation (Routledge 2019) which is the first academic book to address the phenomenon of online misogyny and the merits of a legal response to it. Our research takes a holistic approach to the legal problems posed by online violence, merging criminal law, gender, human rights, and internet law expertise. Our research has been quoted by the Women & Equalities Committee[4] and also relied upon by Leanne Wood AM when calling for greater regulation of online abuse in November 2018.[5] We have also acted as consultants to the Council of Europe’s GREVIO (Group of Experts on Action against Violence against Women and Domestic Violence) regarding their first recommendation on online and technologically-facilitated violence against women and girls.

 

4.            We would add that we are happy to provide further expertise or evidence if this would be of use, and to answer questions orally or in writing.

 

5.            We are only commenting on the questions posed from the perspective of our research, which focuses on online abusive behaviours, offensive content, internet regulation (broadly conceived) and the impact abusive content has on the participation and safety of women online. This expertise is placed within considerations of legal responses to addressing the challenges posed by illegal, harmful and abusive online content, with a specific focus on the gender dimension of these pressing issues.

 

 

Question 1: Is freedom of expression under threat online? If so, how does this impact individuals differently, and why? Are there differences between exercising the freedom of expression online versus offline?

 

6.            Online violence and abuse threaten freedom of speech online, and this growing, everyday problem has a particularly strong gender dimension. An emerging body of evidence suggests that online violence and abuse have significant implications for women’s and girls’ freedom of expression online. For instance, the UK Girlguiding Girls’ Attitudes Survey (2016) showed that 49% of the 1,600 surveyed girls aged 11-16, and 44% of young women aged 17-21, felt unable to express their views in an online environment (pp 17-19). This has a silencing effect on women and girls participating online, which manifests itself in withdrawal from online public spheres and contributes to the creation of hostile spaces for women, thereby undermining their participatory rights.

 

7.            This is a significant concern from the perspective of ensuring non-discrimination and equality of participation, as well as freedom of expression online.[6] Online abuse and violence lead many women and girls to effectively withdraw from participating online and in public life. This impact was also evidenced in the responses of women politicians subjected to online abuse in the lead-up to the 2017 UK General Election,[7] which pointed towards an extremely high volume of online abuse affecting women’s participation and their ability to express themselves online. Nor has this been confined to the 2017 General Election: in the 2019 General Election, 18 of the 50 MPs standing down were female, and several attributed their departure to the abuse and harassment suffered while in elected office.[8]

 

8.            Behaviours that undermine women’s freedom of expression need to be taken seriously, with laws and regulations that adequately capture the reality of online abuse and its impact on victims. Alongside freedom of expression, which is one of the key human rights enshrined in the UN treaties and in the ECHR, ensuring women’s right to full and equal participation in public and political life is also an international obligation of states that are parties to the UN Convention on the Elimination of All Forms of Discrimination Against Women 1979 (CEDAW). This includes the United Kingdom, which ratified CEDAW in 1986. Under Article 7 of CEDAW, states are obliged to take steps to prevent and eliminate discrimination against women in public and political life and to ensure their equal participation. Addressing online violence against women and the online harms arising from gender-based abuse online would be a significant first step towards fulfilling this CEDAW obligation.

 

9.            Thus far, there has been limited accountability in the UK for acts of online violence against women – especially where these acts encompass text-based abuse (TBA) rather than image-based sexual abuse (IBSA). There have been only a few cases dealing with text-based abuse of women, and they have been limited to instances of abuse of prominent women, e.g. Gina Miller (R v Viscount St Davids), and Caroline Criado-Perez and Stella Creasy (R v Nimmo and Sorley). As a result, there has been only limited judicial recognition of the various types of online harm arising from such abuse – including participatory and democratic harms, which directly relate to freedom of expression and which we identify and conceptualise further in our research.[9]

 

 

Question 3: Is online user-generated content covered adequately by existing law and, if so, is the law adequately enforced? Should ‘lawful but harmful’ online content also be regulated?

 

10.       There are a number of legal provisions which, in principle, can apply to online content.[10] These include criminal law provisions such as the Protection from Harassment Act 1997, the Offences Against the Person Act 1861, and s33 of the Criminal Justice and Courts Act 2015, as well as the communications offences (Malicious Communications Act 1988, Communications Act 2003). Although there is no shortage of potentially applicable legal provisions, the vast majority of them (with the regulation of IBSA being the exception) predate the rise of social media. Furthermore, there is a lack of cohesion within the existing legal landscape concerning online abusive communications and online abusive behaviours. The many different elements of these offences, as well as the differing tests and thresholds they apply, contribute to what we refer to in our research as legal fragmentation.[11] Such fragmentation has tangible consequences for victims of online abuse, who are left with limited avenues for redress.

 

11.       However, the core issue is one of enforcement. Even where platform-specific reporting mechanisms exist, they exclude gender as a basis for the abuse suffered. For instance, neither Twitter nor Facebook allows users to categorise the incident or abuse being reported as ‘gender-based’ or ‘gender-motivated’. This is in itself reflective of social attitudes towards gender-based violence – both offline and online – which disproportionately targets women and girls, and reflects an underappreciation of its impact and consequences.

 

12.       Consequently, the social harm that arises from the silencing of women and girls, as well as from the normalisation of online violence against women and girls resulting from legislative and regulatory inaction, ought to be urgently addressed.

 

 

Question 4: Should online platforms be under a legal duty to protect freedom of expression?

 

13.       The protection of freedom of expression is paramount in a democratic society. However, it must stand on a par with other rights, especially gender equality and the prohibition of discrimination, which is the underpinning principle of international human rights law. As such, efforts to protect freedom of expression must involve due consideration of factors limiting the realisation of this right for some individuals. This involves consideration of the impact of gender-based abuse online on women’s participatory rights and their freedom of expression. An analysis of the underpinning power relations is essential in making an assessment of this crucial issue.

 

14.       Freedom of expression is a human right; as such, the key obligation for its protection and realisation lies with the state. Whilst platforms are powerful players in this context, they do not per se owe human rights obligations. Nonetheless, states and platform providers should work together to ensure that the human rights of individuals are protected online. Furthermore, platforms are businesses and, as such, may use the corporate social responsibility framework to ensure that human rights – including, but not limited to, freedom of expression – are respected.

 

Question 5: What model of legal liability for content is most appropriate for online platforms?

 

15.       Debates surrounding models of legal liability for online content have taken a different approach in the UK[12] to that in the EU,[13] and differ again from the US approach of addressing liability through the s230 model.[14] While any discussion of a legal model of liability should draw on all of these debates – some of which are long-established – none of them addresses the pervasive challenge of categorising content according to the appropriate legal response. The prevailing approach has been to focus on (i) where liability could (and should) fall, i.e. the platform, the posting user/account, or a combination, and (ii) the timescales that should be in place to ensure that content is taken down within an ‘appropriate’ period.[15] Such an approach fails to focus on the harm perpetrated and imposed on the victim(s)/target(s) of gender-based abuse online (GBAO), online harassment, and online violence against women (OVAW).[16]

 

16.       Notice and takedown (NTD) liability models are not perfect, given the scale and volume of content – especially user-generated content – and the need for human oversight in moderation decisions. Moreover, pre-emptive and over-zealous takedowns of online content are a direct interference with, and threat to, freedom of expression online. This must be avoided, especially now that the volume of content makes NTD more difficult to manage without curtailing freedom of expression.

 

17.       The current situation in the UK is somewhat confused by the Government’s announcement[17] that it will depart from the previously promised approach of maintaining consistency with Article 15 of the eCommerce Directive, which establishes the basic principle that no general monitoring obligation can be imposed on online intermediaries – such as online platforms – in respect of what is said online. The announcement from the UK Government post-Brexit is not surprising given the overwhelming emphasis on monitoring in the Online Harms Full Response.[18] That said, we believe it is a worrying indicator of the likelihood of monitoring obligations for online platforms, and of the inevitable interferences with freedom of expression that may follow. Such a shift in policy and in legal model continues to overlook OVAW and GBAO.

 

18.       While we believe that there should be a model of legal liability for online content, we do not believe that the current model is appropriate; it is especially unsuited to the protection of women’s rights and freedoms online. The current confused model of patchwork legal provisions[19] means that victims of certain categories of online abuse – particularly GBAO and OVAW – fall through the gaps.

 

19.       We favour an alternative model of legal regulation which encompasses both criminal and civil liability, with two strands, each including a graduated scale of penalties – and proactive obligations where there is evidence indicating action is required – strongest for illegal content, decreasing through to lawful but harmful content:

 

(a)        a liability regime for platforms, with exceptions for hosting but not for editorialising;

(b)        a liability regime for users/account operators on online platforms.

 

 

Question 7: How can technology be used to help protect the freedom of expression?

 

20.       Given how widespread the online abuse of women is,[20] much more needs to be done to ensure that women’s freedom of expression is protected online, but also that their online expression does not lead to offline consequences and offline harms.[21] Technology is not always bad, but nor should it be seen as the sole ‘solution’. The fragilities of automated moderation are an example of the perils that arise where technology is presumed to be the answer.

 

21.       Technology – and especially online platforms – has a place, particularly for protest, awareness-raising and whistleblowing. For example, the #MeToo, #EverydaySexism and #TimesUp movements are indicators of the power of online platforms to draw attention to issues affecting women,[22] and in turn to trigger bigger movements. Technology also allows people to engage and participate online where they may otherwise be unable to do so, and it is therefore an important element in upholding participatory rights – especially for women.[23] However, this must be balanced with an appreciation of the flipside that anonymity and online posting present for women – namely the risks of GBAO and OVAW, especially where these amount to threats to rape and/or kill.[24]

 

22.       Freedom of expression is not just about moderation and censorship; the two are not mutually exclusive. A technological solution must maintain an appropriate balance and ought to include specific consideration of the gendered elements of free expression.[25]

 

 

Question 8: How do the design and norms of platforms influence the freedom of expression? How can platforms create environments that reduce the propensity for online harms?

 

23.       Platform norms mean that content gaining attention is amplified, even when that content undermines democratic principles and values. It is often not in a platform’s interests to act to prevent or curtail the activities of accounts generating ‘trending’ content. The ‘banning’ of Trump is one of the clearest examples: platforms effectively paralysed themselves over potentially banning a serving President, despite the accusations of inciting riots and the subsequent use of social media to amplify similar messages.[26] This is just one instance where platform design hindered the freedom of expression of others – particularly those opposed to Trump – both on and offline. If the design of platforms placed freedom of expression at its core, Trump could – and perhaps should – have been banned much earlier, especially where his account posted comments targeting women and fostered demeaning and harassing comments.[27] The platform design, and the failure of moderators to intervene, actively contributed to harms against women by leaving the damaging content unaddressed.

 

24.       Online platforms – particularly social media platforms – are algorithmically designed to act as echo chambers. This can and does operate to the detriment of women, especially politically active or prominent women in public life.[28] Not only do such women receive a barrage of abusive content, they are also regularly subjected to torrents of threatening and sexually violent messages.[29] Even where reports concerning these abusive messages are made to platforms, platforms are notoriously slow to respond.[30]

 

25.       The design and business model of platforms is targeted at revenue generation.[31] This is particularly problematic when freedom of expression is pitted against it. On one level, platforms encourage and promote expression by encouraging users to share content and opinions, enshrining mechanisms for exercising participatory rights. Yet they are also private spaces, run and maintained by private corporations.

 

26.       However, when content becomes problematic and is reported for violating terms and conditions, or because it is illegal, the platform design poses barriers to tackling it, not least because the mechanisms of response almost always lead to takedowns or removals or, in more extreme instances, the banning of users from the platform. This, while desirable from the perspective of addressing the behaviour, causes a conflict of interest for the platform, not least because it is being asked to act against one of its own users and to remove content which is attracting attention and increasing page views. The interests of the platform imply that action needs to be taken, but this can be pitted against the vested financial interest in not removing users and/or content. Platforms – especially social platforms – are revenue-driven and accountable to their investors; their design can therefore dictate how invested they are in tackling content. To combat this conflict, an independent adjudicatory body could operate to reduce the damage to freedom of expression.

 

 

Question 10: How can content moderation systems be improved? Are users of online platforms sufficiently able to appeal moderation decisions with which they disagree? What role should regulators play?

 

27.       Reporting and appeals are not well handled – especially for women and those subjected to GBAO/OVAW. There is a distinct lack of clarity surrounding the distinction between content which is illegal and that which is lawful but harmful, and the need to act on each. Not all content which is harmful will be illegal[32] – and this is something that platforms reliant upon automated content moderation struggle with.[33] For instance, incitement to commit gender-based abuse online does not feature on the same scale of seriousness as, for example, incitement towards religious harassment.[34]

 

28.       In improving moderation systems, there is a significant need to broaden the conceptions of ‘harm’, and more specifically, for moderators (and regulators) to understand the range of harms and the impact such harms can have.[35]

 

29.       A national adjudicatory body – rather than platform-specific variants of oversight – could assist users in appealing moderation decisions. Such a body must be independent, specialised, staffed by experts, and not funded directly by online platforms.

 

30.       There is an increasing need for language and cultural competencies to be factored into the moderator role, particularly where women are concerned. Linguistic, cultural[36] and contextual nuance is required in moderation processes. Failing to account for these nuances is likely to lead to erroneous interferences with freedom of expression, as evidenced by the Babylon Bee’s Monty Python-inspired satire, which Facebook wrongly removed in the misplaced belief that it ‘incited violence’.[37] Regulators should ensure that there is an obligation on platforms to facilitate these perspectives in their moderation systems and infrastructures. Action must be taken to protect freedom of expression and not to undermine participatory rights online by overlooking nuance.

 

 

 

February 2021

 


 


[1]              Senior Lecturer in Law, The Open University Law School

[2]              Senior Lecturer in Law, The Open University Law School

[3]               UN HRC, Report of the Special Rapporteur on the right of everyone to the enjoyment of the highest attainable standard of physical and mental health (15 April 2020) http://undocs.org/A/HRC/44/48

[4]               House of Commons, Women & Equalities Committee, ‘Sexual harassment of women and girls in public spaces’ (23 October 2018) https://publications.parliament.uk/pa/cm201719/cmselect/cmwomeq/701/701.pdf

[5]               ‘Assembly Member Backs New Calls to Regulate Online Abuse Against Women Following Research’ (2 November 2018) https://www.leannerhondda.wales/online_abuse; Record of Proceedings, Welsh Assembly, 26 March 2019, paragraphs 119-124: https://record.assembly.wales/Plenary/5571

[6]              Kim Barker & Olga Jurasz, ‘Online Misogyny: A Challenge for Global Feminism’ Journal of International Affairs 2019 72(2), 95-113: https://www.jstor.org/stable/26760834; Kim Barker & Olga Jurasz, ‘Online violence against women: addressing the responsibility gap?’ LSE WPS (2019) https://blogs.lse.ac.uk/wps/2019/08/23/online-violence-against-women-addressing-the-responsibility-gap/

[7]               Azmina Dhrodia, ‘Unsocial Media: Tracking Twitter Abuse Against Women MPs’ Medium (4 September 2017): https://medium.com/@AmnestyInsights/unsocial-media-tracking-twitter-abuse-against-women-mps-fc28aeca498a

[8]               Kim Barker & Olga Jurasz, ‘Gendered Misinformation & Online Violence Against Women in Politics: Capturing legal responsibility?’ CoInform (2020): https://coinform.eu/gendered-misinformation-online-violence-against-women-in-politics-capturing-legal-responsibility/

[9]               Kim Barker & Olga Jurasz, ‘Text-based (sexual) abuse and online violence against women: towards law reform?’ in Jane Bailey, Asher Flynn and Nicola Henry (eds), Technology-Facilitated Violence and Abuse: International Perspectives and Experience (Emerald Publishing, 2021) pp 247-264.

[10]               For an in-depth and critical overview of the applicable provisions, see: Kim Barker & Olga Jurasz, Online Misogyny as a Hate Crime: A Challenge for Legal Regulation? Routledge (2019).

[11]               Kim Barker & Olga Jurasz, ‘Online violence against women as an obstacle to gender equality: a critical view from Europe’ (2020) 1 European Equality Law Review 47-60: https://www.equalitylaw.eu/downloads/5182-european-equality-law-review-1-2020-pdf-1-057-kb

[12]              HM Government, ‘Digital Charter’ (25 January 2018) https://www.gov.uk/government/publications/digital-charter; HM Government, ‘Online Harms White Paper’ (8 April 2019) https://www.gov.uk/government/consultations/online-harms-white-paper.

[13]              European Commission, ‘Proposal for a Regulation of the European Parliament and of the Council on a Single Market for Digital Services (Digital Services Act) and amending Directive 2000/31/EC’ COM (2020) 825 (15 December 2020).

[14]              Communications Decency Act s230.

[15]              See for example: European Commission Recommendation of 1 March 2018 on measures to tackle effectively illegal content online C (2018) 1177 (1 March 2018) which requires that service providers assess, remove or disable content within one hour of receipt of referrals (para 35).

[16]              Kim Barker & Olga Jurasz, Online Misogyny as a Hate Crime: A Challenge for Legal Regulation? Routledge (2019), xiv.

[17]              HM Government, ‘The eCommerce Directive and the UK’ (18 January 2021) https://www.gov.uk/guidance/the-ecommerce-directive-and-the-uk.

[18]              HM Government, ‘Online Harms White Paper: Full Government Response to the consultation’ CP 354 (December 2020).

[19]              Kim Barker & Olga Jurasz, ‘Online violence against women as an obstacle to gender equality: a critical view from Europe’ EELR 2020(1) 47-60 https://www.equalitylaw.eu/downloads/5182-european-equality-law-review-1-2020-pdf-1-057-kb.

[20]              Kim Barker & Olga Jurasz, ‘Gender-based abuse online: An assessment of law, policy and reform in England & Wales’ in Anastasia Powell, Asher Flynn & Lisa Sugiura (eds) Handbook on Gender, Violence and Technology (Palgrave, forthcoming 2021); Amnesty International UK, ‘Online abuse of women widespread in UK’ (2019) https://www.amnesty.org.uk/online-abuse-women-widespread.

[21]              Kim Barker & Olga Jurasz, ‘Text-based (sexual) abuse and online violence against women: towards law reform?’ in Jane Bailey, Asher Flynn and Nicola Henry (eds), Technology-Facilitated Violence and Abuse: International Perspectives and Experience (Emerald Publishing, 2021) pp 247-264, at 256-257.

[22]              Kim Barker & Olga Jurasz, ‘Online Misogyny: A Challenge for Global Feminism’ Journal of International Affairs 2019 72(2), 95-113 https://www.jstor.org/stable/26760834.

[23]              Ibid.

[24]              See e.g. threats sent to Caroline Criado-Perez, and Stella Creasy. See above at Question 1.

[25]              Kim Barker & Olga Jurasz, ‘Online violence against women as an obstacle to gender equality: a critical view from Europe’ EELR 2020(1) 47-60 https://www.equalitylaw.eu/downloads/5182-european-equality-law-review-1-2020-pdf-1-057-kb, 56.

[26]              Jessica Guynn, ‘’Burn down DC’: Violence that erupted at Capitol was incited by pro-Trump mob on social media’ USA Today (6 January 2021) https://eu.usatoday.com/story/tech/2021/01/06/trump-riot-twitter-parler-proud-boys-boogaloos-antifa-qanon/6570794002/.

[27]              Michael D Shear & Eileen Sullivan, ‘’Horseface’, ‘Lowlife’, ‘Fat, Ugly’: How the President Demeans Women’ New York Times (16 October 2018) https://www.nytimes.com/2018/10/16/us/politics/trump-women-insults.html.

[28]              Otito Greg-Obi, ‘Effective Protection from Online Violence Against Women in Politics’ IFES (16 April 2019) https://www.ifes.org/news/effective-protection-online-violence-against-women-politics; Kim Barker & Olga Jurasz, ‘Text-based (sexual) abuse and online violence against women: towards law reform?’ in Jane Bailey, Asher Flynn and Nicola Henry (eds), Technology-Facilitated Violence and Abuse: International Perspectives and Experience (Emerald Publishing, 2021) pp 247-264.

[29]              Owen Bowcott, ‘Twitter Intimidation not taken seriously enough by police says Stella Creasy’ The Guardian (29 September 2014) https://www.theguardian.com/politics/2014/sep/29/twitter-online-intimidation-police-stella-creasy-peter-nunn; Jessica Elgot, ‘Diane Abbott more abused than any other female MP during election’ The Guardian (5 September 2017) https://www.theguardian.com/politics/2017/sep/05/diane-abbott-more-abused-than-any-other-mps-during-election.

[30]              Kim Barker & Olga Jurasz, ‘Online violence against women as an obstacle to gender equality: a critical view from Europe’ EELR 2020(1) 47-60 https://www.equalitylaw.eu/downloads/5182-european-equality-law-review-1-2020-pdf-1-057-kb, 50; Jessica Elgot, ‘Twitter failing to act on graphic images and abusive messages, says MP’ The Guardian (22 August 2017) https://www.theguardian.com/technology/2017/aug/22/twitter-failing-to-act-on-graphic-images-and-abusive-messages-says-mp.

[31]              Marietje Schaake, ‘Algorithms have become so powerful we need a robust, Europe wide response’ The Guardian (4 April 2018) https://www.theguardian.com/commentisfree/2018/apr/04/algorithms-powerful-europe-response-social-media.

[32]              Kim Barker & Olga Jurasz, ‘Online Harms White Paper Consultation Response’ (2019) http://oro.open.ac.uk/69840/, 4.

[33]              See above at Question 3.

[34]              Kim Barker & Olga Jurasz, ‘Online violence against women as an obstacle to gender equality: a critical view from Europe’ EELR 2020(1) 47-60 https://www.equalitylaw.eu/downloads/5182-european-equality-law-review-1-2020-pdf-1-057-kb, 52.

[35]              We have outlined our typology of online harms particularly affecting women elsewhere: Kim Barker & Olga Jurasz, ‘Text-based (sexual) abuse and online violence against women: towards law reform?’ in Jane Bailey, Asher Flynn and Nicola Henry (eds), Technology-Facilitated Violence and Abuse: International Perspectives and Experience (Emerald Publishing, 2021) pp 247-264, at 256-257.

[36]              Kalev Leetaru, ‘The Problem with AI-Powered Content Moderation is Incentives Not Technology’ Forbes (19 March 2019) https://www.forbes.com/sites/kalevleetaru/2019/03/19/the-problem-with-ai-powered-content-moderation-is-incentives-not-technology/.

[37]              The Copia Institute, ‘Understanding cultural context to detect satire’ (9 November 2020) https://www.tsf.foundation/blog/understanding-cultural-context-to-detect-satire-2020.