{"HashCode":-849872376,"Height":841.0,"Width":595.0,"Placement":"Header","Index":"Primary","Section":1,"Top":0.0,"Left":0.0}

 

Chara Bakalis – written evidence (FEO0034)

 

House of Lords Communications and Digital Committee inquiry into Freedom of Expression Online

 

My name is Chara Bakalis, and I am a Principal Lecturer in Law at Oxford Brookes University. I am also Head of Social Sciences Research at the Institute for Ethical Artificial Intelligence based at Oxford Brookes.

 

My expertise is in online hate speech and hate crime law. I have published widely in this area and can send the committee my key publications should they be useful to you. I have been involved in a number of law reform projects over the last 18 months. I was a member of the Core Expert Group led by Judge Marrinan who undertook an independent review of hate crime in Northern Ireland, and have been involved in two Law Commission projects on Hate Crime and on Online Communications. I was also recently commissioned by the Council of Europe to write a comparative report on hate speech laws in Europe for the Armenian Ministry of Justice, and to recommend proposals for reform of their hate speech laws.

 

 

Question 1: Is freedom of expression under threat online? If so, how does this impact individuals differently, and why? Are there differences between exercising the freedom of expression online versus offline?

 

Yes, there is evidence that freedom of expression is under threat online. Whilst other contributors to this committee will no doubt highlight evidence that certain views or groups are more likely to be censored than others, I would like to focus on evidence, such as that produced by Amnesty International,[1] which demonstrates that it can be the untrammelled exercise of freedom of speech that can, in fact, make it harder for some groups to express their opinions online. For example, this particular report found evidence that many women, and particularly women of colour, did not feel able to express their views freely online because of the abuse they suffer if they do. As a liberal society, it should concern us that this inequality exists in how citizens can exercise their basic civil liberties and human rights.

 

There are some significant differences between the exercise of freedom of expression online versus offline that require us to treat the two differently, particularly with regards to hate speech. In my work, I have identified some key features of online speech - such as its permanency and reach - which mean that the harm it causes is different from the harm caused by offline speech. Owing to these unique features, the harm caused by online hate speech goes beyond the words themselves, and lies in the very fact that this speech is publicly and permanently available on the internet. These features of online speech are not currently taken into account by our existing laws, which were formulated at a time when individuals’ ability to air their views to such large audiences was more limited. Given the growing evidence that untrammelled free speech online can be harmful, we have to reconsider the balance that is struck by our existing provisions.

 

 

Question 3: Is online user-generated content covered adequately by existing law and, if so, is the law adequately enforced? Should ‘lawful’ but harmful online content also be regulated?

 

In terms of whether user-generated content is adequately covered by existing law, in my academic publications, I have highlighted where there are gaps in existing legislation. These articles can be provided upon request.

 

In terms of whether ‘lawful’ but ‘harmful’ online content should be regulated, I am assuming that this question stems from the wording used in the Online Harms White Paper which referred to the regulation of ‘harmful’ but ‘legal’ material. I think what this question is asking is whether current laws draw the correct line between illegal and legal speech. For the reasons outlined above, I think we need to rethink this boundary, particularly when it comes to the regulation of hate speech.

 

It is also important to bear in mind that there is a crucial distinction between, on the one hand, holding an individual responsible under the law for something they have said online, and, on the other hand, regulating the online sphere by requiring internet companies to remove certain types of speech. The former, particularly in relation to the criminal law, must be done in a very restrictive manner, and only where absolutely necessary. However, when considering the issue of holding platforms responsible for the material that appears on their sites, the law can take a broader approach to where it draws the line between permissible and impermissible speech, because no legal responsibility is attached to the individual speaker. This is something that we already do in other spheres such as TV and radio. Freedom of speech is still paramount, but it is recognised that when views are expressed publicly on platforms that reach millions of people, including the young or vulnerable, there are other interests at stake that must be balanced against freedom of expression. Whilst it was originally hoped that the internet would provide everyone with a platform to voice their opinions and express their views, there is now clear evidence that this has come at a cost not only to freedom of speech itself, but also to other broader values such as equality, and so it is important that we reconsider the role that law and regulation should play in this area.

 

 

Question 4: Should online platforms be under a legal duty to protect freedom of expression?

 

Yes, given the power and influence that these platforms now exert, and given how they have become a crucial arena for public debate, it is important that they are under a legal duty to ensure all users can exercise their rights to freedom of expression equally. However, this should also mean that these platforms have a concomitant duty to protect groups from hate speech. The two must be seen to go hand in hand.

 

Question 5: What model of legal liability for content is most appropriate for online platforms?

 

I am broadly in favour of the recommendations under the Online Harms White Paper (with some reservations). It has become increasingly clear that voluntary codes of conduct have limited effect, and that the time has come for an independent regulator to be set up. Arguably, the current position leaves internet companies with too much power to decide what can be posted online. The benefit of a regulator is that we can ensure that the rules governing speech online reflect the values of our society, and not simply the internal rules of any particular internet company. Greater transparency enforced by a regulator would allow us to have oversight over what material is removed, and we could put in place processes to ensure that internet companies do not act over-zealously or over-cautiously when removing material. However, this does mean that the detail of how the regulator will operate will be crucial, and as yet we have very little information about how it would work.

 

 

Question 6: To what extent should users be allowed anonymity online?

 

Whilst I can see the arguments in favour of removing anonymity online, I believe that doing so would not strike the right balance between protecting people from harm on the one hand and, on the other, ensuring the benefits of the internet can be shared by all. For example, whilst anonymity can mean young people are at risk from paedophiles befriending them by posing as someone else, anonymity also provides safety to those same young people when they wish to connect with others around the world without fear of being recognised or tracked down. In relation to hate speech in particular, removing anonymity would essentially shift responsibility for such speech onto individuals, whereas responsibility should lie with the platforms. Thus, anonymity should be retained, and other ways of solving the problem should be found.

 

 

Question 7: How can technology be used to help protect freedom of expression?

 

Given the sheer amount of online speech and the problems this causes from a monitoring or policing point of view, it is difficult to see how speech can be regulated or protected in an effective manner without the use of technology. We do of course need to ensure that we have oversight of the use of any technology, and that AI is subject to strict scrutiny.

 

 

Question 9: How could the transparency of algorithms used to censor or promote content, and the training and accountability of their creators be improved? Should regulators play a role?

 

Regulators should play a role in having oversight over the algorithms used to censor or promote content on social media and other platforms. Without this, we cannot be certain that freedom of expression is being protected, or that hate speech is being removed.

 

Question 10: How can content moderation systems be improved? Are users of online platforms sufficiently able to appeal moderation decisions with which they disagree? What role should regulators play?

 

These need to be streamlined and made easier for users to navigate. Regulators should have oversight of these, and some consistency should be imposed on internet companies to ensure that content moderation systems are fit for purpose.

 

 

15 January 2021

 

 

 


 


[1] https://www.amnesty.org/en/latest/research/2018/03/online-violence-against-women-chapter-5/