Dr Edina Harbinja[1] – supplementary written evidence (FEO0098)

 

House of Lords Communications and Digital Committee inquiry into Freedom of Expression Online

 

I am grateful for the invitation to give oral evidence at the Committee’s session on 19 January 2021. Following the session, I welcome the invitation to submit further written evidence expanding on certain questions in this important and timely inquiry. My contribution is therefore limited to the questions the Committee has kindly asked me to address in more detail.

 

 

  1. First, I would like to note that I agree with the Committee’s conclusion in paragraph 232 of the Report:[2] ‘However, inaction causes problems of its own. As we saw in chapter 4, competition law is slow and retroactive, and does not take account of noneconomic problems associated with digital dominance. Once the damage is done, it is often too late to remedy. Preventative action is needed.’

 

  2. I also believe that greater cooperation and regulatory coherence is required, as noted in the Report and reinforced in my oral evidence. In addition to extending the powers of the ICO, the CMA and Ofcom in areas where there is a pressing need for regulation, there needs to be a clear cooperation mechanism aimed at establishing and maintaining an effective and coherent regulatory framework, something that has been absent from the current, rather patchy and piecemeal approach. This new, more coherent framework would address the implications of AI, in addition to platform regulation of content and harms. This need for regulatory coherence has also been highlighted in the AI Council’s recent AI Roadmap.[3]

 

  3. As discussed below, there are many issues with the Government’s proposal that Ofcom should take on the power to regulate ‘online harms’. The Committee’s suggestion to establish a new body, the Digital Authority, is therefore a better option. The aim of co-ordinating regulators in the digital world, together with the other functions recommended in the Committee’s Report, is reasonable and far more helpful than tasking an ill-suited regulator with these powers.

 

  4. The Committee’s Report, in paragraph 241, notes the importance of the Digital Authority’s cooperation with the respective European regulators. The EU is currently undertaking reform in the area of platform regulation as well: the Proposal for the Digital Services Act[4] includes the establishment of a Digital Services Coordinator in each member state and the European Board for Digital Services, tasked with advising the Digital Services Coordinators and the Commission. In my view, the Digital Authority would be a more suitable counterpart to these European partners.

 

 

  5. As I noted in the oral evidence session, there are already quite a few mechanisms that need to be enforced with greater capacity on the regulatory side, and there is a need for greater co-operation between the regulators (e.g. in data protection, competition, advertising and consumer protection law).

 

  6. Generally, I would recommend the following principles to underpin any forthcoming regulation of online expression:

 

1)           Greater free speech and privacy safeguards built into the future regulation, including the forthcoming Online Safety Bill. These safeguards have not been articulated and developed very clearly in the Government’s documents so far, including the Response from December 2020.[5]

 

2)           Human rights impact assessments for emerging technology (ideally at the development or design stage): a mechanism that assesses risks not only to individuals’ privacy or free speech, but against the entire human rights framework. Other examples of useful impact assessments include child protection impact assessments and ethical/societal impact assessments for all products and services, applied retrospectively where necessary.

 

3)           Interoperability principles for large platforms built into the framework and enforced. Platforms need to be able to communicate with one another so that users can easily switch between them or avoid being locked in.

 

4)           Independent audits of platforms, performed annually for example. The EU’s Digital Services Act proposal includes such a requirement.

 

5)           Judicial oversight is crucial, and any regulator should be independent with clear pathways for judicial remedies and reviews.

 

6)           Transparency reports: as laid out in the Internet Safety Strategy Green Paper, a transparency report is the first step towards accountability.[6] Beyond this, there should be separate accountability and reporting requirements covering human rights compliance, in particular the protections under Articles 8 and 10 ECHR. This could be achieved through human rights auditing by an independent body to ensure compliance with privacy and free expression rights.

 

7)           A route of redress needs to be in place for users to challenge decisions regarding the removal or takedown of content which is found not to be illegal or harmful.

 

8)           Human oversight should be a requirement wherever there are takedown procedures in place. This is essential to ensure that fundamental rights, including the freedom of expression, are upheld and are not unjustifiably interfered with. It is also essential to have human oversight where automated systems are not reliable enough to be used in isolation.

 

9)           In addition to principle-based rules in the regulatory framework, there should be a blacklist of certain practices that should not be permitted in any circumstances. A blacklist of practices can also evolve with the fast-paced changes in digital technologies.

 

  7. The new framework should also differentiate between platforms generally and large or very large platforms, which serve as online gatekeepers. For instance, the EU’s Digital Services Act Proposal defines very large platforms as those having ‘average monthly active recipients of the service in the Union equal to or higher than 45 million’. New obligations for these platforms include mandatory risk assessments, risk mitigation, independent audits, online advertising transparency, obligations related to recommender systems, and data access for the Digital Services Coordinator. With some refinement, however, the Proposal retains the principles of the existing intermediary liability regime, including the prohibition of general monitoring (a practice which could violate Article 8 of the ECHR), something that the Government’s Response does not clarify.

 

  8. Regarding the Government’s proposal to introduce a duty of care as a guiding principle of the new Online Safety Bill, many, including myself, have expressed reservations about the concept during the online harms consultations and elsewhere.[7] Dr Leiser and I have emphasised the difficulty of such a duty of care satisfying the tests set by the courts, inter alia because of the difficulty of drawing an analogy with existing examples of a duty of care (e.g. that owed by owners of physical property, where the potential harm is specific, well defined and owed to identifiable individuals).

 

  9. It seems likely that the Government’s proposal will be adopted in some form quite soon. However, there are still serious concerns about proportionality and the effects the new regulation may have on human rights and freedom of expression. The key issue is the vague and undefined nature of many of the harms that the Government proposes to regulate. While some of these harms are already regulated and clearly illegal (e.g. terrorist-related content, content related to child abuse, extreme pornography etc.), many of the harms that the White Paper identifies as ‘legal harms’ (e.g. disinformation, trolling or intimidation) could potentially fall within the protection afforded by Article 10 of the European Convention on Human Rights (the right to freedom of expression). Offensive content may be harmful yet not rise to the threshold of illegality, and may even be protected speech. The vague definition of this group of harms, which are not per se illegal, could be challenged under the principles of the rule of law, proportionality and legal certainty. Speech should never be judged on its subjective effects on an individual user; indeed, none of our existing laws regulating speech are applied via such a subjective test. Judging speech in this way would also contravene the longstanding principle from Handyside v UK[8]:

 

"Freedom of expression...is applicable not only to 'information' or 'ideas' that are favourably received or regarded as inoffensive or as a matter of indifference, but also to those that offend, shock or disturb the State or any sector of the population”.[9]

 

  10. In terms of the harms that online speech may cause, any speech assessment would need to include qualitative questions about whether content online should be treated differently from equivalent information offline for every individual user (the parity principle). For this to happen, the platform would need to understand the context of exchanges between every user on the platform and how those people communicate with one another offline, which is virtually impossible.[10] Different platforms have different social norms and communication practices, and this should be respected (e.g. it is not realistic to expect the same language on 4Chan, Reddit, TikTok and Mumsnet).

 

  11. In terms of content moderation, the ‘relevance’ criterion is particularly tricky, as it requires platforms to decide whether the content in question may also cause harm at some point in the future. Platforms should not be placed in a position where they are forced to judge content on the effects it may have on some users at some point. Due to the volume of user-generated content, it would be impossible to comply without deploying automation and technical measures. No filtering service is perfect: none will catch all undesirable content, and all run the risk of over-blocking. When filtering is used under the threat of sanctions, platforms will err on the side of caution and may block content that could conceivably be harmful to some users but is also in the ‘public interest’ to display, for example unpopular political opinions.

 

  12. The Government expresses a strong preference for a co-regulatory model for platforms, enforced by Ofcom. Social media platforms would be particularly affected by the new regulatory framework. While this model has its benefits (e.g. stronger legitimacy than self-regulation, as it is grounded in powers conferred by Parliament, expertise, principle-based regulation, flexibility, and cooperation with industry), there is a danger of uncritically replicating the model of broadcast regulation in the online environment. Broadcast regulation has a very different historical rationale and justification (regulating entities that have access to scarce resources, i.e. spectrum, that produce and distribute content at a large scale, and that exercise editorial control with little or no freely user-created content), whereas the rationale for regulating the Internet is largely different (there are no scarce resources of the same sort, but there is user-generated content, with implications for individual speech and privacy and for an open and free Internet). While it is evident that self-regulation has failed in various instances, given the scandals we have witnessed, companies have started improving their self-regulatory mechanisms (e.g. Facebook’s Oversight Board). As noted above, the key here is making sure that users have a right to redress under those procedures, that there is no general obligation to monitor users, and that the current liability regime is overseen more efficiently by the regulator. Thus, it is not so much about new powers or a duty of care, but about enforcement powers and the necessary oversight. If the Government wishes to introduce a regulator, this should ideally be a new public body, as suggested by the Committee, with expertise in Internet regulation, cybercrime and online offences, and human rights law. This would provide balanced and proportionate oversight and protect the fundamental rights and freedoms of Internet users.

 

 

  13. As noted by Baroness Buscombe in the oral evidence session, my particular expertise in post-mortem privacy is also relevant, albeit not the focus of this particular inquiry. I would therefore like to bring the issue to the Committee’s attention as something that will certainly require legislative and regulatory action in the UK soon. The questions relevant to this inquiry include: the effects of online speech on the image, privacy and dignity of the deceased; the balance between the privacy interests of the deceased and their heirs/families and the freedom of expression of others, including the public interest in archiving and maintaining accurate historical records; and preservation and collective memory versus individual control over personal data and digital assets (including the speech one leaves behind). I am happy to elaborate on these further should the opportunity arise.

 

 

15 February 2021


 


[1]              Senior lecturer in media/privacy law, Aston Law School, Aston University, https://research.aston.ac.uk/en/persons/edina-harbinja. The evidence provided reflects the views of the author.

[2]              https://publications.parliament.uk/pa/ld201719/ldselect/ldcomuni/299/299.pdf 

[3]              https://www.gov.uk/government/publications/ai-roadmap

[4]              https://ec.europa.eu/digital-single-market/en/news/proposal-regulation-european-parliament-and-council-single-market-digital-services-digital

[5]              https://www.gov.uk/government/consultations/online-harms-white-paper/outcome/online-harms-white-paper-full-government-response

[6]              https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/708873/Government_Response_to_the_Internet_Safety_Strategy_Green_Paper_-_Final.pdf

[7]              https://techreg.org/index.php/techreg/article/view/53

[8]              ECtHR (1976) Handyside v UK (5493/72).

[9]              At para 49.

[10]              Banerjee, S., Chua, A. Y., & Kim, J. J. (2017). Don’t be deceived: Using linguistic analysis to learn how to discern online review authenticity. Journal of the Association for Information Science and Technology, 68(6), 1525-1538.