Select Committee on Communications and Digital

Corrected oral evidence: Freedom of expression online

Tuesday 24 November 2020

3 pm

 

Watch the meeting

Members present: Lord Gilbert of Panteg (The Chair); Lord Allen of Kensington; Baroness Bull; Baroness Buscombe; Viscount Colville of Culross; Baroness Grender; Lord McInnes of Kilwinning; Baroness McIntosh of Hudnall; Baroness Quin; Baroness Rebuck; Lord Storey; Lord Vaizey of Didcot; The Lord Bishop of Worcester.

Evidence Session No. 1              Virtual Proceeding              Questions 1 - 13

 

Witnesses

I: Ayishat Akanbi, cultural commentator; Dr Jeffrey Howard, Associate Professor of Political Theory, University College London.

 

USE OF THE TRANSCRIPT

This is a corrected transcript of evidence taken in public and webcast on www.parliamentlive.tv.

 



 

Examination of witnesses

Ayishat Akanbi and Dr Jeffrey Howard.

Q1                  The Chair: I welcome our witnesses to the first evidence session of our inquiry into freedom of expression online. Let me give a bit of background for our witnesses, whom I will introduce in a moment. The Committee has investigated digital regulation in the past. We have produced reports on the protection of children online and on the future of regulation and, most recently, we have examined the future of journalism. One of the recurring themes has been getting the balance right between freedom of expression, prevention of online harms and other aspects of regulation, which is what we want to explore in this inquiry.

In this first session, we welcome witnesses who can help us to start to explore those issues. Let me introduce Ayishat Akanbi, who is a cultural commentator—you are very welcome—and Dr Jeffrey Howard, Associate Professor of Political Theory at University College London. Ayishat and Jeffrey, thank you very much. So that you know, today’s session is broadcast online and a transcript will be taken.

Thank you very much again for joining us. It is really good of you to give up your time to help the Committee to explore these issues. Rather than give you each a lengthy introduction, I invite you to introduce yourselves and give us a brief overview of your personal perspective on the issue of freedom of expression online. I will then invite other Committee members to ask some questions and explore the issues further. Ayishat, welcome. Do you want to go first?

Ayishat Akanbi: Yes. Thank you for having me. My name is Ayishat Akanbi. No worries about pronouncing it wrong; it happens. I am a fashion stylist and a writer.

The way that I look at freedom of expression online is that it seems to be a very fraught issue at the moment, which is what has drawn my interest to it. There are a lot of hot topics surrounding identity, race, gender and all of those types of hot-button issues. It seems to be in these areas that the potential threat to free speech is most often spoken about. There seems to be fear surrounding saying what you want to express, in case of damage to your reputation and professional and personal injury. There is a lot of demonisation and defamation. Having cultural and political discussions has become a lot more strained than I, at least, have ever known it to be.

The Chair: Thank you very much. We look forward to talking to you further about that. Dr Jeffrey Howard, could you introduce yourself and give us your initial personal perspective on the issue?

Dr Jeffrey Howard: Thank you so much, Lord Gilbert, and thanks for inviting me to give evidence. I am an Associate Professor at University College London, where I teach ethics and political philosophy in the School of Public Policy.

In my work, I think about a range of moral dilemmas that arise in policy-making. I have written on the ethics of policing, on criminal punishment, on counterterrorism, and most recently on freedom of expression. In my work, I defend the idea that free speech is among the most important rights that people must be afforded if they are to be free and equal citizens. In my view, the right to express one’s views and to communicate with others is paramount to the development and exercise of our personal autonomy and our political agency. That includes the right to engage in robust and heated debate with others.

However, as with all of our rights, I think that the right to free speech has limits. In particular, I have argued that the right to free speech is limited by a moral responsibility we owe to others not to endanger them or otherwise wrongfully harm them. Specifically, in my work, I have argued that dangerous speech that risks inspiring violence morally falls outside the protective umbrella of free speech. Here, I have in mind especially terrorist propaganda, forms of hate speech and dangerous disinformation. Accordingly, I believe it is no violation of free speech, properly understood, to demand that social media companies take action to combat this problematic content, and I look forward to discussing all this with you.

The Chair: Before we move on, all those illustrations of harmful speech you described sound to me as if they would, in fact, be illegal. They sound as if they were illegal speech anyway, or does your definition go beyond that?

Dr Jeffrey Howard: I certainly think it includes crucial categories of speech that are already illegal. In this country it is a crime to incite or glorify terrorist violence. It is a crime to incite racial or religious hatred. It is not a crime to disseminate disinformation or misinformation. It is certainly not a crime to engage in cyber harassment or abuse. As we discuss this, I will certainly defend the idea that we should encourage and perhaps even require social media companies to suppress more speech than what is already independently illegal, but certainly that is the starting point in my view.

The Chair: They were very good openers. Thank you very much.

Q2                  Lord McInnes of Kilwinning: Something that we want to look at is the fundamentals around online expression. This is a two-part question. How do you both feel that online expression differs from offline expression? What do you think are the implications for expression of freedom of speech online?

Ayishat Akanbi: Online speech is quite distinct from offline speech. People are a lot more confident online and are more combative. I think that people release a lot of frustrations and maybe resentments that they cannot in the public sphere. The problem is that because so many people spend their time online this colours the way they have now started to look at the world. We look at social media as an accurate representation of reality and, while it is similar, it is not quite the same.

Sorry, was the second part of your question about the implication?

Lord McInnes of Kilwinning: Yes, for freedom of expression online.

Ayishat Akanbi: People behave in ways that they would not offline, so if we curtail freedom of speech too much online, it makes public discussion a lot more stagnant, I think. It stiffens things. People already feel this sense of fear and an inability to express what they want to say without perhaps unnecessary consequences. As much as I agree with Jeffrey that social media companies have a responsibility, especially when it comes to children, to limit harmful ideas or any ideas that can incite danger, even to themselves, I think there is a marked difference between speech online and speech offline.

Lord McInnes of Kilwinning: Would you see that as a different category as opposed to written expression and spoken expression? Would you see online expression as separate from those two forms of offline expression?

Ayishat Akanbi: Yes, I think so. People do not necessarily approach social media with the same care that they would in writing something. It is a lot more throwaway for a lot of people. A lot of people forget that what happens on social media can last for ever, whereas you are more conscious of that when you are writing.

Dr Jeffrey Howard: What makes online expression special, in my view, is the continual possibility of easy amplification and connection. That can, of course, have enormous benefits—think of social movements where inspiring and righteous messages go viral online, as in the case of the #MeToo or Black Lives Matter movements. Ordinary citizens whose speech in an earlier era would not have travelled very far can find their words retweeted to hundreds of thousands. From the perspective of our free speech interest in communicating and connecting with others, we have to recognise that that brings undeniable benefits.

As I am sure you are well aware, these mechanisms of amplification and connection clearly have a dark side to them. Those who seek nefariously to cause harm have clearly weaponised social media platforms. Dangerous rhetoric and disinformation are ubiquitous, and it seems to me that the amplification of this content within hateful echo chambers endangers the safety of individuals and erodes crucial norms that underpin democracy. All this, as I am sure you know, is empirically well understood. The crucial moral question, of course, is what we should do about it. How do we protect the power of the internet to be used for legitimate social purposes while limiting its abuse? I see that as the central task of content moderation.

The Chair: Ayishat, you referred to harmful ideas and acknowledged that there are harmful ideas on the internet, and, by definition, they harm some people. How far does your definition go in defining harmful ideas?

Ayishat Akanbi: It is probably just towards the incitement of violence and the encouragement of harming yourself, which I think is particularly rife on social media. That is particularly troublesome when a lot of young people are vulnerable for all different types of reasons. That is probably as far as I would go: danger to yourself or inciting danger towards other people.

The Chair: You are not thinking so much about the ideas as about provoking or inciting harm in itself, rather than a set of ideas being harmful?

Ayishat Akanbi: Yes, exactly. It is any sort of promotion or glorification of harming yourself or other people; it is not necessarily the ideas themselves, as I think you are saying. What is potentially harmful in an idea another person might find to be quite useful, so I am not going down that road. Yes, it is just violence.

The Chair: Thank you, that is useful.

Q3                  Baroness Bull: Thank you very much to both our witnesses for being with us today. I wanted to start by picking up on your reference, Jeffrey, to #MeToo and Black Lives Matter. You are right that those messages were amplified in positive and extraordinary ways. However, you have only to look below various articles at the comment threads to see how dangerous it was for some people to say the things they said, or how much kickback there was for the things they said. We think about the world wide web as a democratic space where everybody can speak, but in reality is that the case? Are there categories of people, or even sectors of the community, who are less able to express themselves online? I have mentioned one reason why that might be. There may be others. Could you comment on that?

Dr Jeffrey Howard: Even if everyone formally enjoys the opportunity to access and express themselves on social media networks, some individuals clearly feel more comfortable expressing their views than others. Following the logic in your question, we probably should distinguish between different kinds of cases here. In some cases, people are put off by the kind of heated disagreement that can take place on these platforms, but tough, heated disagreement is an essential feature of democracy, so part of this is about developing a thick skin.

In other cases, I worry that the pervasive culture of publicly shaming and engaging in ad hominem attacks against those with whom we have good-faith, legitimate disagreements is counterproductive to a healthy, democratic conversation. Here the solution is not a thicker skin but just better norms of how we talk to one another online, in particular, by trying to talk to one another online a bit more like we do offline. By all means, disagree with ideas that you reject, but try not to demonise the proponents of those ideas as vile or irredeemable traitors, as can often happen.

There is a third set of cases, though, where the reason people are reluctant to speak is that they reasonably fear for their safety. Vulnerable citizens who are continually exposed to racist screeds or misogynistic harassment or threats of violence online might reasonably prefer to keep quiet rather than risk enduring those attacks. In my view, violent intimidation is itself anathema to a free speech culture. Here the remedy is definitely not to tell vulnerable citizens to develop a thicker skin but to try to tackle some of that abusive content by getting it off the platform or at least demoting it in people’s feeds.

Baroness Bull: Thanks. Ayishat, do you want to add anything to that?

Ayishat Akanbi: Yes. Your question was whether I feel as though there are some groups who may feel less confident to speak online. Is that what you are asking?

Baroness Bull: Less able. Confidence may be a reason but there may be other reasons. I am interested in the range of reasons why some people may be excluded from this “democratic” space in which we express ourselves.

Ayishat Akanbi: Yes, I think so. At the moment—and this might just be because of the position that I sit on the internet and the conversations that I am focused on or interested in—I feel that for anyone considered to be privileged, whether that is by race, class or gender, a cynical motive is often assumed when those people take a counterview on something popular. I do not notice this when it comes to vulnerable or so-called vulnerable people in the same way. That does not mean, of course, that vulnerable people, maybe more so than others, are not exposed to potentially higher rates of harassment and bullying, but I think there are spaces for people from vulnerable communities to criticise with more ease than others. That creates this tension online where people think that there is a double standard, which makes the discourse very fraught and needlessly tense in places where it does not need to be. Again, there are a lot of accusations relating to people’s motivations, whether of racism or any type of phobia, and that generally is levelled only at certain groups. I speak to people all the time and a lot of them tell me they are not able to express things, especially if they have a counterview to the prevalent one.

Baroness Bull: That is really interesting. Is it pushing too hard to ask you to give an example? Is there an example you would feel comfortable giving, just to give us a bit more of a handle on what you have just described?

Ayishat Akanbi: I guess this is a classic trope now, but you can say anything about the cis straight white male, for instance, in that way. The things that you can say about the cis straight white male I doubt you would be able comfortably to get away with saying about any other group on the internet. To me, while there may be some justified criticisms levelled at members of some of those groups, there is a double standard in the ways in which people express them, and that is definitely going to have a consequence.

Baroness Bull: Chair, may I ask a quick follow-up of Jeffrey? This is really fascinating. I am very interested in how legislation might tend to circle around protected characteristics. As we know, straight white male is not a protected characteristic; neither is socioeconomic class, and neither is body size and shape. But we know, particularly with the latter, that that is an area where much harm is done. Do you have a view on how protection of freedom of expression online might relate or not to protected characteristics as we understand them now?

Dr Jeffrey Howard: I certainly think that it is a fraught and not straightforward matter to identify what exactly the protected characteristics are that belong on our list. Different definitions of hate speech, for example, rely on widely divergent specifications of what the protected characteristics are.

If I could pinpoint a unifying principle that would be the starting point for thinking through what belongs on the list, it would be something like a vulnerable social identity, a social identity that involves greater risk of subjection to violence, harassment, abuse, and discrimination. That will vary contextually. In different societies it will be a different list.

That poses problems, of course, when we are thinking about coming up with some master specification of the protected characteristics once and for all, but I think that just shows that no master list is available and we need to be much more contextual about it, especially in content moderation. One thing that I think is beneficial in the context of content moderation is that the social media platforms can enjoy a bit more flexibility in specifying what exactly counts as a protected characteristic in contrast with a mechanism like the criminal law, where it is just not feasible constantly to alter what characteristics belong on this list or not.

Q4                  The Lord Bishop of Worcester: Thank you both very much for being willing to give evidence today and for what you have already said. We have touched on the fact that there has, of course, been a huge proliferation in the opportunity for freedom of expression online and that that can bring empowerment and exacerbate vulnerability. A fair number of polls have been taken about people’s attitude to freedom of speech; I am thinking of one by YouGov in 2017 and one by Policy Exchange in 2019. What is your view? As a result of the things about which we have been talking, do you think there has been a change in public attitudes to freedom of expression? Is the public mood different now or not?

Ayishat Akanbi: Again, this might be specific to the areas and communities that I frequent, but whenever I hear the term “free expression” or “free speech” in particular, people assume a nefarious intention around it. Free speech translates to a lot of people as the freedom to be prejudiced, and I do not know if we have always made that connection with the term “free speech” or “free expression”. In that regard, I see that the meaning of freedom of speech has changed. I do not meet or see as many people who think of free speech as the freedom to explore different perspectives, which is my interest in free speech. I perceive conversations to be quite dogmatic on certain subjects, and I see free speech and free expression as the exploration or interrogation of some of these topics. Yes, I think it is viewed with a sort of blanket negativity now.

Dr Jeffrey Howard: I am not a social scientist or a pollster so I am not confident taking any strong stand on the exact way public attitudes might have changed about freedom of speech. The nice thing about being a philosopher is that my concern is not necessarily with what people do think about a topic but what they should think about the topic.

The Lord Bishop of Worcester: Life generally, yes.

Dr Jeffrey Howard: That is right. In my view, of course, freedom of speech remains just as important a right as ever. The online context has intensified our sense of the possibilities of doing good with speech, just as it has intensified our sense of how people might do bad with speech. I do not think it has fundamentally changed what the arguments are that underwrite the right to free speech. It is still my sense that in democratic societies free speech both is and is viewed as indispensable, notwithstanding its limits.

Where there might have been some change over time is in exactly where those limits lie, and one thing the online context has illuminated is the sheer number of ways in which people can endanger or harm others with their speech. That question about where exactly the limits of the right to free speech lie is vital, but it is a conversation that we have been having since long before the internet and we will continue to have it through the rest of this century and beyond, of course.

The Lord Bishop of Worcester: I am really interested in what you said, Ayishat, in comparison with what Jeffrey has just said. A young person I know, a bit of a firebrand, said to me, “I quite like free speech, but I do not tend to like the people who bang on about it”. There was a ComRes poll of Scots that tended to suggest that older people and men thought freedom of speech really important but younger people did not. That seems to be coming out in what you are saying, Ayishat.

Ayishat Akanbi: Yes, that is exactly how I perceive it. The people who I notice talk about free speech do skew male. It is definitely not coming from young people. In fact, most young people who are engaged in politics in some way, from what I see and even looking at polls, think of free speech as a euphemism for wanting to be racist and homophobic. There does not seem to be an imaginative view on what free speech is and how it can be used to progress society.

The Chair: Ayishat, is that partly because free speech is still pretty much uneroded? The right to free speech has never been challenged in reality. Young people have concerns about harm and about, as the Lord Bishop has said, the way in which some people characterise and defend free speech. Do you think people understand what the erosion of free speech would mean for them as individuals?

Ayishat Akanbi: No, I do not think so. Without trying to diminish people’s intelligence or anything like that, when you have a strong conviction about something, I guess you never quite imagine that you might one day have an opinion that is counter to a mainstream one. They do not imagine that they could ever be in a position where something that they want to say has been unfairly seen as aggressive or hostile. It is typical to say, but it is the youthful hubris of thinking you are right about a lot of things and never thinking that this will affect you one day. When I hear people making the same argument that you made, that if we are too adamant about curtailing the free speech of a certain group it will one day come for you, that does not seem to do much for people. I do not think they ever believe that it will.

The Chair: I am very interested in the way you look at this in terms of unfashionable counterpoints, and in the fact that a variety of people hold a variety of different unfashionable counterpoints. By definition, as the world moves on, those will come around.

Ayishat Akanbi: Yes, definitely, which is why I am one of the only people I know, in my circles at least, who does think that we have to fight for freedom of speech and free expression online and distinguish between hate speech and speech we hate, which sometimes is hard to see.

The Chair: That is the challenge. That is really interesting. Thank you.

Q5                  Lord Vaizey of Didcot: I like that expression: hate speech or speech we hate. I have a lot of sympathy with what Ayishat said about people using freedom of expression as a euphemism for being racist, but I am not going to go down the rabbit hole of cancel culture as well, and how young people sometimes use or do not use free speech, because I have a different question.

Let me quickly explain that I work with a not-for-profit based in San Francisco called Common Sense Media, which provides digital citizenship material for schools. I am also on the advisory board, along with people like Jimmy Wales, of a company called NewsGuard, which rates news websites for their veracity. It is more of a transparency website looking at how those news sites are funded and so on. That is my rather long-winded declaration of interests.

I wanted to ask Ayishat and Jeffrey about digital citizenship. What we are talking about, as we live in a digital world, is that we now all engage online. Ayishat, you started this hearing talking about how people express themselves in very different ways online from how they would offline or in person. It is a potentially enormous topic, but do you have thoughts on how to provide digital citizenship for our kids, obviously in the classroom, but also for adults as we learn to engage in this forum?

Ayishat Akanbi: The problem with social media in particular is that there is no established social etiquette. We have an etiquette for using the Tube. We do not have one for social media. In many ways, I think of social media as a big, open-plan house. We all live there together but we cannot stand each other. We need to work out what etiquette we want to use if, for example, we are using Twitter for political dialogue. Not everybody uses Twitter and social media for the same reasons, but I think we need to establish what we are using social media for. Some of us are using it for our personal brands. Some of us are using it for political discussion. Some of us are using it just for entertainment. Our etiquette should encompass what will be the most useful way to maximise our entertainment there or maximise fruitful conversation.

We need to establish an etiquette. If social media, and the negative aspects of social media, are important to you, you have a responsibility to embody your ideals in how you use the platform. That is a personal choice. I do not know how we can necessarily get people to do that, but it is something as simple as not speaking to people on the internet in a way you would not speak to them offline. Thinking of an etiquette that is distributed through schools or something like that would be really helpful, because people forget. They forget that even if you delete something on the internet, it does not mean that it has gone from the internet full stop. It may be gone from your platform but not gone from the internet.

When we are using social media, especially because we often use it when we are in the grip of some heated emotion, whether that is anger, frustration or your own personal grievances, it is all too easy to use it without thinking. We should promote critical thinking on these sites. As Jeffrey said, there is a lot of misinformation, and we need to find a way to use it that protects ourselves and other people.

Lord Vaizey of Didcot: I will not say what I am in the grip of when I go online. Jeffrey, what are your thoughts on this?

Dr Jeffrey Howard: I completely agree with what Ayishat said. When I think about promoting good digital citizenship, the first question I ask is: what is good digital citizenship? What is this thing we are trying to promote? The best way to think about it is as a package of civic virtues. I will mention just two.

One is what we might call epistemic resilience. The idea here is that we want citizens who are not easily duped by misinformation, people who stand back and think and employ their critical faculties, to reflect on the source of the content they are seeing, to ask whether it is trustworthy and to inquire into the agenda of the person who is putting it out there. That is not a call for scepticism toward everything and anything—one of the more dangerous possibilities is a world in which everyone distrusts everything; I think that would be a really bad world. What we instead want is the development of a more critical, detached stance towards the information one encounters. That certainly needs to be brought to the centre of a civic education curriculum at all levels of the education system, primary, secondary and higher.

That is not going to stop the problem of older folk sharing misinformation, so what can we do there? I think social media platforms can and should flag harmful misinformation as disputed and perhaps provide those who are exposed to harmful disinformation with the requisite links to trusted third-party fact checkers. That is not going to solve the problem, but it will be part of an effort to engineer a more critical civic culture. Certainly, Lord Vaizey, it sounds like the initiatives you are involved with aim for the development of that kind of culture.

The other civic virtue I might mention is about how to talk to people with whom one disagrees. I have this sense that there is some proportion of our society who are quite intolerant of disagreement and who are reluctant to recognise that many of our biggest disagreements in public life are reasonable disagreements, where there are good arguments on both sides, and that we really need to understand one another’s perspectives and understand that some of these public policy questions about which we disagree so fiercely really are difficult. The idea that those with whom we disagree are obviously stupid and in the grip of some evil or false view is a huge mistake. We need to develop a citizenry that is better able to engage in respectful disagreement with others.

This connects to the free speech issue directly, because one of the oldest ideas in the free speech tradition is that a great way of combating bad ideas is not necessarily to ban them but to argue back against them. If that is going to work, we need citizens who are able to do the work of that argument, of that counter speech, and do it in a way that is respectful and stands some chance of reaching people and persuading them, rather than simply antagonising them.

Lord Vaizey of Didcot: We should probably put all politicians on a digital etiquette course.

Dr Jeffrey Howard: Not a bad idea.

Q6                  Viscount Colville of Culross: Thank you again for coming to the Committee. I would like to declare an interest. I am a series producer making a television series for Netflix and the Smithsonian, which will also obviously appear online.

I would like to move the discussion on to the platforms and how they look at freedom of expression. Recently they have been opening up their moderation policies much more. Many of the moderation rules seemed very subjective and the takedown rules seemed to be very varied; you only have to look at the different ways in which Facebook and Twitter dealt with the New York Post’s allegations of corruption concerning Hunter Biden to see that. Is it possible for the platforms to strike a fair balance between reducing online harms and maintaining freedom of speech?

Ayishat Akanbi: I will let Jeffrey answer that first, and then if something comes to me I will come in. Jeffrey might have more to say here.

Dr Jeffrey Howard: I think that the platforms have made enormous progress in this area compared to just a few years ago, and that we should all be encouraged by the progress that has been made. Are the platforms doing the maximum that we could reasonably expect of them? I think the answer is no; there is more work to be done. I do not think, at least, that a lot of people working on content moderation within the platforms would disagree with that.

My own view is that social media platforms have a moral obligation to reduce the prevalence of harmful content on their platforms. That is partly because they are the best placed entities to do so. Just as someone walking by a pond is at the right place at the right time to rescue someone who might be drowning in the pond, social media companies are in a unique position to do something about the harms that occur on their platforms.

A supplemental argument to that, which is a bit more philosophically controversial, is that an organisation can be morally complicit in wrongs committed by other people, simply by providing them with the space in which to commit those wrongs, even if the organisation does not intend for the space to be used for that purpose. Even though the principal responsibility for harmful speech online is, of course, vested in the users who engage in the harmful speech, I do think that platforms have a duty to reduce their own complicity in it.

I am sure we will talk more about the details of how content moderation should work, but I want to make a quick point about how we should frame this debate. I want to resist the framing of the question as a balance between respecting free speech on the one hand and reducing harm on the other. That framing suggests that those who seek greater action against harmful content are somehow opposed to free speech or are comfortable with violating it for the sake of other values.

That is unhelpful framing for two reasons. First, there are some kinds of speech that just do not deserve protection at all, such as intentional incitement to violence. It is not like we say, “Oh, wouldn’t it be great if people could express their support for terrorism? Oh, but look, it risks inspiring harm”, so that claim gets outweighed. In contrast, I just do not think there is a claim there at all.

Secondly, part of the point of restricting harmful content is to empower the speech of others by eliminating an atmosphere of abuse or violence. When we try to create an atmosphere without intimidation and threats of violence, we support free speech; we do not undermine it.

I wanted to make that point about framing, because it is very easy for people who support greater content moderation to find themselves on the back foot in the argument, appearing to be opposed to free speech when that is not a good description of their view.

Ayishat Akanbi: I agree with Jeffrey’s last point. I do think that moderating harmful speech or harassing speech empowers free speech. It allows people to feel that they can use social media in a way that will not be personally costly to them. That is all I would say there.

Viscount Colville of Culross: I listened to the questioning of Mark Zuckerberg before the US Senate committee last month, and I was absolutely amazed—I do not know why—that Facebook users apparently post 115 billion messages every day. Zuckerberg said in his evidence that 89% of the hate speech posts are taken down, but that still leaves billions of messages that are not taken down. Do you think that these social media platforms are just too big to be governed? Is it possible to do that?

Dr Jeffrey Howard: Part of the public debate on this topic has involved the proposal that these companies be broken up, appealing to various kinds of anti-trust arguments. It is, of course, worth pointing out that if the companies are broken up, the resources that they have at their disposal to engage in extremely expensive content moderation are reduced. It is not immediately obvious that that would be an effective solution.

It occurs to me that we lambast the social media companies whenever they fail to take down enough content, but the moment they take down too much content there is a scandal on the other side. I do not say that to let the companies off the hook by any stretch of the imagination, but simply to remind ourselves that this stuff is pretty difficult, and engaging in content moderation at scale across countless cultures and countless languages is a difficult business. We need to have patience, given that we are only at the inception of this process in the history of social media platforms.

I definitely think we need to demand that platforms do more and that they do their work better, but it is crucial to remember that we do not necessarily expect the police to drop the crime rate to zero. We expect them to make a reasonable effort, given the resources they have. What that means in the context of social media regulation is a very interesting question, but if perfection is the goal, it will remain one that we are unlikely ever to reach. We want to get better at this, but we will never be perfect at it.

Ayishat Akanbi: I agree again. I do not think that we could ever eradicate all the speech that is potentially harmful, and we should be realistic about that. What is personally important for me is young people, especially content around suicidal ideation and things like that. There is so much content of that kind that is really damaging to young people, and we have seen lots of evidence of that, so at least the platforms can work on that front.

As for harmful ideas, it is very hard to catch them all. Some ideas do not start out harmful but later turn into something else. We need to be realistic about it and have more education on how we as users can protect ourselves when using the internet, because there is only so much that these companies can do.

Q7                  Baroness Rebuck: First, I have a declaration to make. I am a director of the Guardian Media Group, which has a very strong online presence.

The question I wanted to ask of both of you builds a bit on the question of the design of platforms and the extent to which that facilitates or affects how users express themselves online. I am thinking obviously of the algorithms that determine what you see. Jeffrey, was it you who talked about the dark amplification effect? Ayishat, you have talked about certain really difficult sites—you mentioned self-harm—where, if you have visited once, you might get an avalanche of pushes towards other sites. There was one stat that really shocked me. On one platform, 70% of what you saw was personalised; in other words, it was not something you had actively searched for.

I would love you to comment on other areas, apart from taking down material, where you think that these platforms could help get us to a position of better discourse online.

Dr Jeffrey Howard: I completely agree that we do not want the platforms to simply be playing endless Whack-A-Mole with harmful content. Rather, we want them to try to address it further upstream. That is not necessarily to say that one is addressing it at the source. There is a temptation to think that social media companies can solve all society’s problems. In so far as hatred between various social groups is a problem in society, that will manifest on the social media networks, but the social media networks will not be able to eliminate that problem, even though they can put a dent in it.

I take the point that we want the platforms to focus further upstream rather than simply deal with content that is problematic as it arises. There is a persistent worry that in so far as the business model of some platforms hinges on user engagement, which increases advertising revenue, there is certainly an incentive to provide users with content that they appear to favour, even if it is hateful, even if it advocates terrorist propaganda.

I do think that as part of an effective regulation of the platforms—in response to some of the comments on the Online Harms White Paper, it looks like the Government are moving further in this direction—it cannot just be about taking down particular categories of harmful content. It also has to be about engineering the system in such a way as to reduce the demand for that content and to try to open up some of these noxious echo chambers that form online where people are exposed only to points of view with which they already agree, which we know from the social science has the effect of radicalising and polarising their views. Those more systemic issues are absolutely crucial, and are a part of any adequate regulation in this area.

Baroness Rebuck: Ayishat, is there anything in the design of the platforms themselves, in what they control in the algorithms, the amplification effect, the format that information takes and the personalisation, that tips you into the dark amplification effect, the aspect of online harm which you talked about a bit earlier?

Another aspect you have talked about is the need for more kindness in how people deal with each other. Is there a way in which the actual design can help get us to better challenge and debate that is not confrontational in the way you have described, with all the harms that ensue from that, particularly for young people?

Ayishat Akanbi: There is, definitely. The design is very addictive, and anything that we spend a lot of time doing has the propensity to radicalise us. If we think of social media as an accurate representation of the world, it is no wonder that things have become more polarised.

I am not sure if this speaks directly to your question, but there are some add-ons or plug-ins that enable you to get rid of likes and things like that. You can hide certain things. It gives users more scope over what they control and what they see, if they do not want to see anyone else’s likes or retweets or anything like that. When we do not see these things, it changes the way we engage with them. An opinion becomes particularly noxious to us if we see that thousands of people have liked it, and that makes us want to react. If we were not to see that, maybe we could easily move past it without it ruining our day and potentially someone else’s as a result. It is things like that.

The Chair: Dr Howard, can I come back to something you said? Ayishat had quite a narrow and clear definition of harmful content: things that directly cause harm or encourage people to self-harm. I suspect most people could get to a point of clarity about what came into that category and a variety of other harms. I think you could get consensus on that.

It seems to me that misinformation/disinformation is the truly difficult one, and the Covid episode perhaps is an illustration of it. It could be argued that anyone dissenting from the science behind government guidance is being harmful, because it undermines the Government’s guidance and leads to less compliance and more deaths, which is harmful, is it not? On the other hand, there is a genuine debate among scientists and many scientists do not agree with aspects of the World Health Organization advice. It is an emerging area of science. None the less, by dissenting they are undermining compliance, and that is harmful.

How on earth can a social media company or anybody else take a view even on the science of the arguments, let alone that sort of balance, which truly is a balance, is it not? You disputed the framing of it, but that is an illustration of the framing, is it not?

Dr Jeffrey Howard: The disinformation category is certainly one of the more fraught issues. It would be a clear mistake for social media platforms to categorically ban disinformation in the same way they ban terrorist propaganda or hate speech.

In this category, we need a much more targeted approach whereby, when there is a particular kind of disinformation whose falsehood is incontrovertible, such that there is an overwhelming consensus among the relevant experts that it truly is false and truly dangerous, it becomes plausible to start talking about appending fact checks to speech that circulates that disinformation, or hiding the speech behind what is called an interstitial: the user has to click through it and is informed that there is intense dispute about the truth of the speech. I certainly would not favour a categorical prohibition on disinformation. In targeted cases where particular kinds of disinformation are shown to lead to real-world physical harm, those are the cases in which I think content moderation can be justified.

The Chair: Yes, but disputing the science behind the Government’s advice to citizens, leading to a reduction in compliance, would lead to harm.

Dr Jeffrey Howard: That is very helpful. I think it is necessary that speech leads to harm in order for it to be moderated, but it is not sufficient. Speech might lead to harm and yet be protected.

Here is an example I might give to my students in class. Someone is giving a passionate defence of their preferred religion before a crowd of spectators, and a bunch of unreasonable fanatics in the audience respond violently to this profession of the speaker’s faith. In that situation, we would say that the speech is protected by free speech, and the right response to the fact that it will cause harm is to provide police protection for the victims of that harm.

We need to look at the speaker’s reasons for engaging in the speech. In so far as there is legitimate reasonable disagreement about any number of empirical claims, you cannot label those claims disinformation and then take them down. It has to be only instances where the claims are incontrovertibly false and clearly linked to real-world harm.

Q8                  Baroness McIntosh of Hudnall: I am sorry. I wanted to come in a bit earlier with this question, but I think it is still relevant to everything that you have been saying in the last 10 minutes.

Dr Howard, right at the beginning you framed this in what I think you described as a moral context: that there are moral questions here, which on the whole we tend to be very wary of now for reasons, I suspect, to do with their contestability. There are two points to make about digital citizenship and the way we empower a new generation of users of the internet to respect the idea of free speech and to use their freedoms responsibly.

Ayishat, I would really like to hear what you have to say about this, too. To whom should we look to set the moral context for that learning and that—sorry to put it in a rather highfalutin way—moral development of a young person’s intellectual ability? You said that people behave differently online, that they are more confident, which ties into Dr Howard’s point about disagreeing in an agreeable way.

How can we encourage people to understand that there is no legitimate distinction between what you say online and what you say offline, particularly when it is directed towards harming other people, but not only then?

Ayishat Akanbi: How can we encourage people to realise there is no distinction between online and offline speech?

Baroness McIntosh of Hudnall: I am sorry to use this word again, but I mean its moral weight. If I say something vile to you or about you online—this is my personal view—it has no different moral weight than if I said it about you or to you offline. Clearly, for a lot of people there is a distinction. How do we try to break that down?

Ayishat Akanbi: That is a good question. I would love to know myself. For younger people, it is about IT education. In IT lessons, perhaps now there will be modules on social media and how to use it. I think that would be beneficial, because if young people were given the information early enough to know that their social media accounts are essentially their public records, and if people were to approach these sites in that way, as opposed to as just frivolous entertainment, which is how many people perceive social media, I think we would reconsider what we post, how we say it and why we say what we say. For adults, it is about becoming more savvy on the internet and realising that what they say online and what is said offline have equal weight.

I am not sure what more it will take. We have had many examples of people becoming very ill and in the worst cases committing suicide over things that have happened on the internet or on social media. That does not seem to have made political and social discourse any easier.

I do not know. I guess it is just up to people who care about these things to put out more messages encouraging people to disagree respectfully. I really agreed with Jeffrey when he mentioned that. It is scary to me that people now often see disagreement as a form of hatred, a form of denying their existence or some murderous intent. We really do need to learn how to disagree again and remember that disagreement is healthy. Yes, it may be about having more conversations about disagreement and learning more effective ways to disagree.

Dr Jeffrey Howard: In response to Baroness McIntosh’s points about the reluctance to use moral language, I would simply say that all policy questions have this ineliminable, inescapable moral element. Whose morality? Who gets to decide? That has to be the result of the democratic community’s conversation, appealing to moral values like freedom, equality, respect and tolerance, which are part of an overlapping consensus among all the traditions across society.

Q9              Lord Storey: I do not have a question linked to this. I just want to make a couple of points that I want you to consider. We talk about freedom of expression but, of course, we do not have freedom of expression where the platforms agree with totalitarian regimes not to allow certain things to happen. That is my first point.

My second observation is that there is a generational problem. By the way, I love the expression that we need to establish an etiquette, but there is a generational issue, is there not? Older people are often very nervous about using social media in the sense that they were born and grew up in a different time, so their use of language is different. They are frightened to offend. They are frightened to use the wrong word in the wrong context. When they occasionally do, suddenly they get absolute bucketloads of abuse. Maybe that also links into your point, Ayishat, about an etiquette for the internet.

Dr Jeffrey Howard: I think it would be morally wrong for platforms to do the bidding of totalitarian regimes. This raises fraught ethical questions about platforms’ operation in countries where compliance with the local law requires them to be complicit in some seriously nefarious practices. That point, while important, does not necessarily bear on what the UK’s approach to appropriate regulation should be, but it does raise an important ethical question for these companies operating in non-democracies.

I second your point about the counterproductivity of the pile-on shaming and abuse that takes place online. Language evolves. Norms of respect and etiquette evolve. If people use an outdated term, it is perfectly appropriate to remind them that it is outdated and discourage them from using it, but the kind of vitriolic public shaming that we see is, as I have said, deeply counterproductive.

Ayishat Akanbi: On the intergenerational question and the differences between our generations, the problem with social media, and the reason older people also fear misspeaking on the internet, is that in the social media arena shaming other people is celebrated in some ways, if not directly. You can get a lot of social capital from it, so people have an incentive if they want more followers or want to be seen as morally superior to another person. Shaming is a way in which people easily do that.

Again, as you mentioned, this is why there is an important need for social media etiquette. Of course, we cannot necessarily enforce that, but there needs to be some literature on the best ways to use it for your aims. We all have aims with social media, so if you truly want to expand your political consciousness, attacking people for using outdated terminology will probably not be the most helpful way to do that.

Q10              Baroness Quin: Thanks to both of you for being with us today and for your contributions. I want to talk about the role of the state in regulating in this area. Obviously, we are a parliamentary committee, so our reports make recommendations to government. If you would both like to put yourselves in the place of the Cabinet Minister responsible for regulation, are there things that you would like to bring in, changes to the current system, that you think would be helpful?

Ayishat, something you said on YouTube struck me when you talked about the divide between something that merely makes you uncomfortable and something that is hateful. Do you think that, in legislation and regulation at the moment, the balance between those things is about right, or does it need to be changed? My concern, thinking of things such as the right to be forgotten, particularly when it comes to young people, is that you can put something online that is unwise and you get saddled with it for ever. Do you think that changes need to be made in regulation and legislation in that area as well?

Ayishat Akanbi: I am not sure what the current laws are on having things that stay on your record. There needs to be something for young people and, if there is not already, we should look to legislation not to allow these things to stay on your public record or your internet presence. That would be helpful, and not just for young people. I do not know, but maybe there should be some kind of timeframe. It is not helpful for our individual psyches and our professional lives that an unthinking moment stains us for ever. If there is nothing in that area, it might be something to look at.

A lot of the social media issues are, to me, more individual. There are things that users have to take up. Definitely for social media companies—I am not sure about the Government’s involvement—there could be things that are beneficial, but personally I see this as us getting the internet that we create collectively together. I am much more interested in what we as users can do and what we can encourage.

Baroness Quin: Thank you. Jeffrey, what is your wish list, if you have one?

Dr Jeffrey Howard: I favour the view on this side of the Atlantic that social media platforms constitute such a profound and pervasive site of democratic discourse that they have to be subjected to some kind of democratic oversight. In the immediate period, the most effective way to provide this oversight is to continue to work directly with the companies and apply direct public pressure. A lot has changed in the last few years, and companies now accept the premise that content moderation is a necessity. They are directly engaging in it and they have established elaborate community standards and guidelines. In the short term, we want to work with them to continue this work.

In the medium to long term, proper official regulation is essential. I do not have a strong, settled view yet on exactly what form this regulation should take, in part because I think the right regulation depends not merely on philosophical insights but on difficult empirical questions about what regulations are likely to be effective and technologically feasible. I cautiously support many of the elements in the Online Harms White Paper: namely, the degree of flexibility it offers in working directly with platforms to devise the best technological response to the problem of online harm. The consultation process on that White Paper has yielded a lot of valuable insights, as reflected in the Government’s official response to it.

There was considerable concern about the Online Harms White Paper from a free speech perspective, given that it appeared in its initial incarnation that the Government intended platforms to take action not only against illegal speech such as terrorist incitement and child sexual abuse material, but also against legal speech that causes harm. In the response, the Government have clarified that the regulator, which it seems will be Ofcom, would not force tech platforms to remove specific pieces of legal content.

With respect to legal but harmful content, the aim instead is to require companies to enforce, consistently and transparently, the terms of service that they pledge to their customers. I am inclined to think that that is the right approach moving forward.

I look forward to the Government expediting legislative discussion about whatever legislation emerges on this, and I hope it does not take too long.

Q11              Lord Allen of Kensington: Thank you very much for your contributions this afternoon. It has been fantastic. I have to declare an interest. I am chair of a company called Global Media & Entertainment, which runs UK radio stations. We have one station in particular called Leading Britain’s Conversation, which basically has people from very different areas so you hear different views. A lot of our listeners listen to different perspectives. I was interested in Ayishat’s point that having challenge is quite important in the whole process.

One of the issues we have identified previously in various reviews is the algorithms of the platforms. That is not freedom of speech; it narrows conversations because your views keep being reinforced. Jeffrey mentioned it earlier: the algorithms narrow the conversation rather than broaden it. As we think about legislation and regulation—I particularly like the word “etiquette”—we should also think about how we open people’s minds up rather than close them down. That is very important. Everything that we have discovered over the three years we have been involved shows that it narrows the conversation rather than broadening it.

Dr Jeffrey Howard: I agree that thinking about algorithmic design is absolutely crucial here, although it requires engagement with the companies’ technological expertise in how exactly their algorithms operate. There is scope for real innovation here. I am thinking, for example, of Twitter trialling a mechanism whereby one has to open the article that one wants to retweet before being permitted to retweet it.

The kinds of mechanisms, the hacks, that try to slow down the rapid pace of discourse on social media, to throw some sand in the gears and to introduce a bit more friction so that people have to stop, think and reflect, are the innovations that we should be looking for here. We can probably all agree that social media moves at far too fast a clip. When I teach my classes, we have slow, hour-long deliberative conversations. If we want social media to work like that, or at least a bit more like that, its pace needs to slow down considerably.

Ayishat Akanbi: That is a great point. Putting in mechanisms that allow us to respond rather than react as quickly as we do would be very valuable. The idea about articles is an excellent one: you cannot retweet an article unless you have actually read it. More measures like that would be very helpful.

Q12              Baroness Grender: Thank you for that lovely thought about slow social media. I am sure we will want to chew that over at length.

I have a question about a particular cis straight white male who has been in the news quite a lot over the last few weeks: Donald Trump. His social media platform of choice is Twitter, which has made various decisions to suppress or stop his comments. In the context of free speech, do you think it was right or wrong, given that part of its argument, I guess, was that he was inciting problems in the US when he claimed that there was profound corruption in the election process?

Ayishat Akanbi: I feel like a lot was riding on this election. Everyone kept talking about it being the most important one. I guess they always do that. Donald Trump’s presence, with phrases like “fake news” and all that kind of stuff, definitely made political conversations a lot more tense than they were in previous years. With a lot riding on this election, maybe Twitter was right.

I see it now not just with Donald Trump’s rhetoric but even with Covid. The content is not suppressed, from what I see; there is a sort of wall before you see it, so it has not completely gone, but readers know that what is being said might not be factual. When someone holds such a position, that approach makes sense, or at least I understand the reasoning behind it, and at the moment it does not trouble me.

Dr Jeffrey Howard: I agree with Ayishat. I think Twitter’s guidelines on election integrity are defensible. The fact that it did not remove his posts but simply tagged them enabled people to see them and make up their own minds.

As an aside, it would be really useful to have more empirical research on the effectiveness of these tags and whether they actually make a difference. If we genuinely think that spreading disinformation about the legitimacy of an election causes serious harm, and empirical research shows that tags are largely ineffective at changing people’s minds, the next question is: would taking down some of this content be justified?

I tend to be a bit hawkish about that and say that in principle it would not be a problem from a free speech perspective if you took down direct disinformation about the reliability of elections. That does not necessarily mean that you should do it, because it might be deeply counterproductive. It might backfire horribly, and those are also consequences we need to attend to. Generally, I am supportive of the platforms’ policies on election integrity.

Q13              Baroness Buscombe: Thank you both. You have been terrific. I have a quick question about all the things that you have been talking through. Ayishat, you started by talking about people online being more confident and tending to be more combative. Online, they can vent their frustrations and feel more comfortable, whereas offline they fear for their safety.

Surely the ability to be anonymous allows so much harm. It is human nature: if we have to say who we are, as we do in every other form of publishing, broadcasting or radio, we say who we are; but online people are free, behind this screen, this mask, to be cruel and abusive and to say what they like, with nothing coming back at them. Surely one way of dealing with this would be to say, “Okay, say what you like, but we actually want to know who you are”. In so many instances, anonymity allows real harm and real abuse. What do you think?

Dr Jeffrey Howard: I completely share your concerns about anonymity. This is a tough one. I was a sceptic earlier about balancing, but this is a context where I think balancing is appropriate. Clearly, anonymity can bring important benefits. Think of the gay teenager who is not prepared to come out of the closet but who benefits from engaging with like-minded others. Think of the dissident in an authoritarian society expressing legitimate criticism of the government, who will face repression if her identity is exposed. Yet, as you have said, this shield seems to protect both the righteous and the sinister. People are more likely to be hateful and vitriolic, and to break valuable social taboos, when their identities are unlikely to be exposed.

It is also important to point out that we all have a powerful interest in identifying the source of speech that is communicated to us, so that we can assess its credibility. It looks like there are good reasons to protect anonymous speech, especially in non-ideal situations, and there are good reasons not to protect anonymous speech. It is not clear to me that either set of reasons is obviously weightier than the other. What do we do in that case? I think we opt for a mixed model in which some platforms protect anonymity and others do not; that way, we get some of the benefits without having to incur all the costs that full anonymity would involve.

I should note that the issue of anonymity is, importantly, different from the issue of inauthentic or fake accounts that might be run out of bot farms somewhere. A platform should be able to confirm that a user’s account is linked to a real human and still permit that person to have an anonymous handle, as Twitter does but Facebook, which takes a harder line on anonymity, does not. That is an important point, because even if we decide to protect anonymity, it would still be important to take action against inauthentic accounts, which I think do enormous harm.

Baroness Buscombe: Ayishat, I love your reference to etiquette. This is really what I am getting at: this feeling that people feel free to hurt others because they do not have to say who they are.

Ayishat Akanbi: Exactly. Inauthentic accounts and bots might be riskier than anonymous accounts. I worry about restricting anonymity only because the landscape of politics and social issues is so fraught. A lot of the time, people only feel comfortable saying what they truly mean, being direct without sugar-coating it, through anonymity. There are a few anonymous accounts that I have come across that are really insightful, and I understand why they would not express those ideas under their own names.

As much as I see that anonymous accounts can cause a lot of harm and can get away with saying anything they want, as Jeffrey said, as long as some platforms allow this function and others do not, maybe it is for users who are particularly sensitive to the abuse and cruelty that happens on social media to make more informed choices about whether they want to be on platforms where there is very little restriction. I see the benefits of anonymity as well as, obviously, the harms.

The Chair: Thank you to both of our witnesses. This has been a fascinating opening session. You have given us a lot to think about. I shall think about the difference between speech that I hate and hate speech, among other things. You have genuinely provided us with a number of areas to start thinking about, and thank you for considering these issues so deeply before coming to us. The session was very useful indeed. Ayishat and Jeffrey, thank you both very much for your time.