{"HashCode":-849872376,"Height":841.0,"Width":595.0,"Placement":"Header","Index":"Primary","Section":1,"Top":0.0,"Left":0.0}

 

Dr Garfield Benjamin, Solent University – written evidence (FEO0028)

 

House of Lords Communications and Digital Committee inquiry into Freedom of Expression Online

 

Please find below evidence submitted in response to the call for evidence on freedom of expression online. Responses to specific questions are preceded by general comments.

 

General comments

 

Freedom of expression cannot be considered in isolation from the deeply entwined issues of harmful content, abuse, privacy, security, inequality, exclusion, bias and the regulation of privately owned and operated platforms. My recent Digital Society report[1] outlines a seven-step roadmap for connecting the regulation of online platforms, starting with the creation of an Office for Digital Society: not a separate new regulator, but a mouthpiece bringing together the many existing powers of current regulators in order to tackle systemic issues with platforms, content and related issues online. Surveys conducted for the report found public support for greater fines for platforms, better complaints procedures for users, and banning platforms that do not comply with regulation. 67% of people surveyed tended to or strongly supported more cohesive regulation across issues like online content and privacy. 67% tended to or strongly agreed that online content recommendations can cause harm, while 69% tended to or strongly agreed that this affects different groups in different ways. The public surveyed supported platforms having to check any content they serve to users (at least 74% tended to or strongly agreed across Twitter, Google and Facebook). There is support from the public, from rights groups and from the research community for greater regulation of expression online, and for a greater role for regulators in holding platforms to account.

 

New norms are needed for communication online[2] to tackle the current inequalities in access to freedom of expression. This is likely to require radical changes, and much greater support for offline aspects of tackling hateful and harmful speech. Freedom of expression should not be used as an excuse for a lack of regulation of online platforms. Freedom of expression, and freedom more generally, is not an adequate framework for thinking about these issues.[2] Policy should focus on a shift from freedom towards support for equitable access and representation, as well as issues of identity, power, community and responsibility.

 

  1. Is freedom of expression under threat online? If so, how does this impact individuals differently, and why? Are there differences between exercising the freedom of expression online versus offline?

 

As it currently stands, freedom of expression is much wider online than offline. This is largely due to the dominance of one particular political standpoint and interpretation of the US constitution in establishing the frameworks and ethos of the early Internet. But this form of freedom of expression - commonly touted by Mark Zuckerberg and others as a smokescreen to justify inaction and to conceal the tremendous role platforms already play in defining what content we see online - fails to capture the fact that in most legislative systems, including the UK and even the US, freedom of expression is always balanced against the harms that such expression can cause. Online speech would benefit from hewing more closely to offline principles, rights, laws and regulations. In my recent Digital Society report, 70% of the UK public surveyed felt that current UK legislation was fairly or very inadequate at tackling hate speech online, and 71% felt hate speech online should be more tightly regulated. This shows clear public support for greater action against the negative impacts of blanket freedom of expression, and similar levels of support were shown for a more active approach to tackling misinformation.

 

The threat to freedom of expression online is not that it will be curbed, but that it will be curbed according to the whims (and bottom line) of (largely US-based) corporations. These effects, both towards and against blanket freedom of expression, are seldom felt equally. Those with existing platforms and privilege are disproportionately able to exercise their freedom of expression. This is often ignored when discussing the banning of prominent figures from platforms for hateful or harmful comments that, offline, might easily be considered outside the scope of the protections afforded by freedom of expression. Powerful individuals being held to account for what they say is long overdue, not an affront to freedoms. Those from marginalised groups (particularly women, BAME and LGBTQ+ communities), by contrast, bear the brunt of horrendous abuse online without the protections they should be afforded under law. An equitable future policy for expression online should be contextually aware and responsive to existing power asymmetries, both in the design of online systems and in how they are regulated.

 

  2. How should good digital citizenship be promoted? How can education help?

 

Surveys from my Digital Society report[1] found that people are confident in their own abilities to discern what information is false or biased (74% and 86% tend to or strongly agree for false and biased information respectively), but believe that other people are less able to do so (only 23% and 26% tend to or strongly agree for false and biased information respectively). There are many important initiatives, like the Me and My Big Data project[3], which provide wider frameworks not only for data ‘doing’ skills but also data ‘thinking’ and data ‘participating’. Citizens need awareness not only of digital literacies, but of the broader regulatory, business and social contexts that create the dominant norms and narratives of online spaces. This leads on to an important point - education is an essential part of improving digital citizenship, but the burden should not be placed solely on individuals. Yes, digital literacy is needed to improve digital citizenship, but it must also be supported by regulation and by the redesign of online platforms, so that users are empowered to apply these literacies and engage more productively and collectively in the public spaces that online platforms claim to be.

 

  3. Is online user-generated content covered adequately by existing law and, if so, is the law adequately enforced? Should ‘lawful but harmful’ online content also be regulated?

 

User-generated content online - including how it is created by users and how it is distributed by platforms - is neither adequately covered by existing law, nor adequately enforced under the laws that do currently exist. For example, the ICO has done very little to directly tackle the inequities and harms caused by online platforms, despite itself identifying ‘systemic vulnerabilities in our democracy’[4] brought on by, for example, Facebook. And as I outline in my Digital Society report[1], even measures such as the Online Harms Bill will only further spread regulation across entities, exacerbating an already fragmentary regulatory landscape that is full of loopholes for platforms to exploit in avoiding regulation. These loopholes include ‘lawful but harmful’ content, which ought to be regulated, and often can be using existing offline laws such as those on hate speech. Far more comprehensive regulation of online platforms in their entirety is needed - bringing together privacy, freedom of expression, prevention of harms, integrity of information, competition, advertising standards, transparency, justice and many other concerns.

 

  4. Should online platforms be under a legal duty to protect freedom of expression?

 

Platforms should be held to a more comprehensive set of legal duties of care towards their users, befitting their role as online public spaces. This should include freedom of expression, but also prevention of harms, preventing the spread of misinformation, and providing transparency about how recommendation algorithms work to promote certain content. Recent developments in the US relating to public officials, as well as a long history of inequitable application of rules between mainstream and marginalised groups, or between different geographical contexts, highlight on the one hand how important it is to monitor and restrict certain expression online, but on the other hand that this should not be left to the decisions of private companies. These actions marked the breaking point of platforms’ claims of neutrality, and have brought to a head an editorial role that requires regulating as such. To avoid platforms playing different roles to their advantage, they should be regulated according to the strictest guidance for each of their constituent roles, whether that be media provider, data controller, advertising agency, or any other role that falls under existing or future regulation. The Digital Society report[1] outlines a seven-step roadmap towards this.

 

  5. What model of legal liability for content is most appropriate for online platforms?

 

Current models of legal liability are inadequate[5]. There are valid reasons for reduced platform liability, particularly in support of enabling freedom of expression. However, it has also enabled platforms to position themselves in between media and communications regulation, and to use the conflicts between, for example, intellectual property and privacy regulations to their own advantage. Placing greater liability on platforms, in their editorial role, is not without risk. However, when we consider the algorithmic editorial role that has long been occurring, it is entirely inappropriate to continue these liability exceptions. Freedom of expression may remain even in obscurity, but visibility is managed by platforms and their content algorithms. This may lead to more fundamental changes in how platforms and communication online operate, but given the scale of current harms and manipulations, such changes are already necessary. Regulators and policy-makers should work across stakeholders (including platforms, researchers, rights and advocacy groups, but centring those most affected)[6] to develop new models that change how liability for content creation and content distribution is spread across users and platforms.

 

  6. To what extent should users be allowed anonymity online?

 

Anonymity is an essential part of life online, particularly for those from marginalised communities (for example LGBTQ+ people), enabling individuals and communities to develop safe spaces in which to, for example, explore issues of identity or find support for issues that may be sensitive (such as mental or sexual health). Anonymity carries some risks, emboldening some users towards more abusive behaviour, but much abuse online is not anonymous. It would be better to direct focus onto the underlying issues of platforms, communities and communication rather than misdirect efforts onto the false cause of anonymity.

 

  7. How can technology be used to help protect the freedom of expression?

 

Complicating the freedom of expression debate is the use of algorithms to determine the spread of content online. This is a key reason that those with existing privilege and status are able to abuse their freedoms. De-prioritising hateful comments or creators online does not in itself harm their freedoms. Being able to express oneself is separate from having that content actively inserted into the feeds of others. Freedom of expression is not a right to be heard – that is a matter of representation, which needs greater consideration to support marginalised communities. There should be no hiding or escaping the role that recommendation algorithms play in defining the visibility of some content over others online. These systems are designed with certain intentions (generating more clicks, more engagement, more advertising revenue) and this tends to have an extremising effect. If platforms claim to operate public spaces, then they must be required to design content recommendation algorithms that are equitable and in the public interest for all members of the public who are online.
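
To illustrate the distinction drawn above between hosting content and actively amplifying it, the following is a minimal illustrative sketch of a feed-ranking step in which flagged content remains available but is no longer pushed into other users’ feeds. It is not a description of any platform’s actual system; the names, fields and scores are assumptions for the example only.

```python
# Illustrative sketch only: a hypothetical feed-ranking step separating
# "hosting" content from actively amplifying it. The Post fields and the
# rank_feed function are assumptions for this example, not a real API.
from dataclasses import dataclass
from typing import List

@dataclass
class Post:
    post_id: str
    engagement_score: float  # predicted clicks/engagement (the commercial incentive)
    harm_flagged: bool       # e.g. identified as hateful by moderation processes

def rank_feed(posts: List[Post]) -> List[Post]:
    """Order candidate posts for insertion into a user's feed.

    Flagged posts are not removed (the author can still express themselves
    and the content remains reachable), but the recommender no longer
    actively pushes them into other users' feeds.
    """
    def score(post: Post) -> float:
        if post.harm_flagged:
            return 0.0  # de-prioritised: hosted, but not amplified
        return post.engagement_score

    return sorted(posts, key=score, reverse=True)

# Example: the flagged post drops to the bottom of the feed rather than
# being deleted.
feed = rank_feed([
    Post("a", engagement_score=0.9, harm_flagged=True),
    Post("b", engagement_score=0.4, harm_flagged=False),
])
print([p.post_id for p in feed])  # ['b', 'a']
```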

 

Platforms could also use their systems to provide space for restorative justice[7], provided all participants are willing to engage in good faith. But this must be closely monitored and backed up by penalties for those causing harms or abusing the process.

 

  8. How do the design and norms of platforms influence the freedom of expression? How can platforms create environments that reduce the propensity for online harms?

 

Norms are an inherent part of any social space, community or interaction. They define the accepted limits of behaviour, constructing and reinforcing the power relations of those operating within the given social context. Online, norms are created and reinforced by every post, share or like. They are socially constructed and may differ greatly between platforms and contexts. But some people (particularly public figures or celebrities) and certainly some organisations (especially governments and platforms) hold disproportionate levels of power over the shaping of these norms. We may all contribute towards norms of accepted expression, sharing and privacy, and related issues, but we are not all able to contribute with equal weight and influence.

 

Platforms combine the power of design and norms to establish and enforce the boundaries - but we should not be misled. There is little community about the 'Community Guidelines' on platforms like Facebook, Twitter and others. This language evokes a public space, but it is fundamentally privately constructed by private companies. We need to change how we think about these platforms, and if platforms are to continue their claim of being a public space then they need to take that role seriously. As the rallying cry of disability and other activism has been for decades now, “nothing about us without us”. Regulators and user communities (particularly marginalised groups and those most affected by potential harms or censorship), supported by research into social issues in technology, should play a more significant role in establishing positive norms. These norms should also have an enforceable influence on design, to allow the development of more positive online environments. Establishing appropriate design practices and norms for platforms is essential for setting the social context of expression online and the integrity of both information and social interaction.

 

  9. How could the transparency of algorithms used to censor or promote content, and the training and accountability of their creators, be improved? Should regulators play a role?

 

Structural power over public discourse is currently owned and operated by private entities with no democratic accountability. Transparency of content censorship and promotion algorithms is essential both to supporting freedom of expression and to reducing online harms. In-depth details of the algorithms should be made available - under non-disclosure agreements where proprietary - to regulators and any independent analysts appointed by regulators (such as academics). Data literacy initiatives - particularly the critical and participatory skills - should also include those engineering online systems, covering the skills needed to consider the social and ethical issues and implications of technical systems. Again, this is not a fix-all, but one additional component of the broader improvement and regulation of the ways online platforms create systemic changes to public discourse and democracy. Roles for regulators span assessing the algorithms themselves, facilitating independent expertise where appropriate, engaging with industry to improve the social literacy of engineers, and building enforceable best practices for algorithm design.
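
As a purely illustrative sketch of what algorithmic transparency to regulators could look like in practice, the example below shows a hypothetical structured record of a promotion or demotion decision that a platform could retain and disclose to a regulator or its appointed independent analysts. The field names and values are assumptions for the example, not a description of any existing system.

```python
# Illustrative sketch only: a hypothetical structured record of an
# algorithmic promotion/demotion decision that a platform could retain and
# disclose for independent audit. All field names and values are
# assumptions for this example.
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
from typing import Dict

@dataclass
class DecisionRecord:
    item_id: str
    model_version: str             # which algorithm/version made the decision
    action: str                    # e.g. "promoted", "demoted", "removed"
    top_signals: Dict[str, float]  # main signals behind the decision
    timestamp: str

def log_decision(item_id: str, model_version: str, action: str,
                 top_signals: Dict[str, float]) -> str:
    """Serialise a decision so that it can be independently audited later."""
    record = DecisionRecord(
        item_id=item_id,
        model_version=model_version,
        action=action,
        top_signals=top_signals,
        timestamp=datetime.now(timezone.utc).isoformat(),
    )
    return json.dumps(asdict(record))

# Example audit entry for a demoted item.
print(log_decision("post-123", "ranker-v42", "demoted",
                   {"predicted_engagement": 0.81, "harm_score": 0.67}))
```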

 

  10. How can content moderation systems be improved? Are users of online platforms sufficiently able to appeal moderation decisions with which they disagree? What role should regulators play?

 

Current systems use a mix of automated and human content moderation, but the balance is often inappropriate and disconnected from users. Greater support is needed for human content moderators - many of whom are contractors with few benefits and inadequate mental health support, despite being required to review harmful and traumatic content on a daily basis. They also need greater support in deciding what content should be allowed or blocked - often this is left to them to work out for themselves. Establishing clear guidance for content moderation is an area where regulator involvement is urgently needed. This includes both the practice of moderating content and the support needed for content moderators. Additional research and development into automated systems can also be beneficial - for example, police forces have used these types of systems to limit the amount of harmful material officers are exposed to when searching for explicit content on suspect devices - but this also needs to be better integrated into user interfaces. Users often have little explanation of the routes of recourse available to them, and, as with most aspects of this issue, this tends to exacerbate existing biases and inequities. Guidance and enforceable best practice, as well as routes for recourse, are important roles for regulators moving forwards.
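
The following is a minimal illustrative sketch of the kind of triage described above, in which high-confidence cases are handled automatically (with a route of appeal) and uncertain or potentially traumatic material is routed to supported human moderators with reduced exposure. The thresholds, names and the exposure-reduction step are assumptions for illustration only, not a description of any platform’s actual system.

```python
# Illustrative sketch only: a hypothetical moderation triage step combining
# an automated classifier with human review. Thresholds, names and the
# "reduce exposure for graphic material" step are assumptions for this
# example.
from dataclasses import dataclass

@dataclass
class ModerationItem:
    item_id: str
    harm_score: float  # automated classifier output, 0 (benign) to 1 (harmful)
    graphic: bool      # likely to be traumatic for a human reviewer

def triage(item: ModerationItem,
           remove_threshold: float = 0.95,
           allow_threshold: float = 0.20) -> str:
    """Decide whether an item is auto-actioned or routed to a human reviewer."""
    if item.harm_score >= remove_threshold:
        # High-confidence removal: the user is told why and given a route of appeal.
        return "auto_remove_with_appeal_route"
    if item.harm_score <= allow_threshold:
        return "allow"
    # Uncertain cases go to trained, supported human moderators; graphic
    # material is blurred or limited to reduce reviewer exposure.
    return "human_review_reduced_exposure" if item.graphic else "human_review"

# Example: an ambiguous, graphic item is routed to a human with reduced exposure.
print(triage(ModerationItem("x1", harm_score=0.5, graphic=True)))
```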

 

  11. To what extent would strengthening competition regulation of dominant online platforms help to make them more responsive to their users’ views about content and its moderation?

 

Strengthening competition - such as through the anti-trust measures currently taking place in various jurisdictions around the world - is insufficient on its own to effect real change. The recent changes to WhatsApp's privacy policy show how flimsy competition regulations are, as notice and consent mechanisms can be used to manipulate users into continuing to share data between platforms. Wider competition from alternative platforms can help - as seen in the rise in Signal users after the WhatsApp privacy policy change - but existing norms (particularly, for example, across age groups) and market or cultural dominance can present barriers to migrating users fully onto more rights-promoting platforms. The market alone is an inadequate protector of rights, as other issues such as functionality, social pressure, public image and existing usage all play roles in potentially negating the free movement of users between platforms.

 

  12. Are there examples of successful public policy on freedom of expression online in other countries from which the UK could learn? What scope is there for further international collaboration?

 

Various attempts have been made, particularly across Europe and in the US, but each approach has been severely limited, largely by focusing only on illegal content and by failing to define what constitutes online harms. The UK should take a leading role in developing more robust and just public policy on freedom of expression and related issues online. Particularly in the context of leaving the EU, the UK risks being left out, caught between the collective regulation of Europe and the US-centric policies of the jurisdictions where the major platforms are based. The UK could instead act as a broker across issues such as human rights, cybersecurity and democracy[8] to promote wider collaboration that acknowledges the different legal and cultural international contexts that impact on policy online. These issues cross borders, and only a collaborative approach will suffice. Building more concrete and cohesive regulation in the UK offers a starting point for leading these developments.

 

 

January 2021


 


[1] G. Benjamin (2020) Digital Society: Regulating privacy and content online. Solent University. https://digitalcultu.re/policy/digitalsociety

[2] G. Benjamin (2020) From protecting to performing privacy. Journal of Sociotechnical Critique. https://digitalcommons.odu.edu/sociotechnicalcritique/vol1/iss1/1/

[3] Me and My Big Data (2020) Me and My Big Data Report 2020: Understanding citizens' data literacies: thinking, doing & participating with our data. University of Liverpool. https://www.liverpool.ac.uk/media/livacuk/research/heroimages/Me-and-My-Big-Data-Report-1.pdf

[4] ICO (2020) Letter from the Information Commissioner to Julian Knight MP, ICO/O/ED/L/RTL/0181. ICO. https://ico.org.uk/media/action-weve-taken/2618383/20201002_ico-o-ed-l-rtl-0181_to-julian-knight-mp.pdf

[5] J. Cobbe and J. Singh (2019) Regulating Recommending: Motivations, Considerations, and Principles. European Journal of Law and Technology. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3371830

[6] G. Benjamin (2020) “Put it in the bin”: Mapping AI as a framework for refusal. ResistanceAI Workshop, NeurIPS 2020. https://drive.google.com/file/d/1qs7_SIebVDt742_hj6x_f2eTnTGT1LjC/view

[7] A. A. Hasinoff, A. D. Gibson and N. Salehi (2020) The promise of restorative justice in addressing online harm. Brookings Institute. https://www.brookings.edu/techstream/the-promise-of-restorative-justice-in-addressing-online-harm/

[8] R. Niblett (2021) Global Britain, Global Broker. Chatham House. https://www.chathamhouse.org/2021/01/global-britain-global-broker