
 

Select Committee on Communications and Digital

Corrected oral evidence: Freedom of expression online

Tuesday 8 December 2020

4.00 pm

 


Members present: Lord Gilbert of Panteg (The Chair); Lord Allen of Kensington; Baroness Bull; Baroness Buscombe; Viscount Colville of Culross; Lord McInnes of Kilwinning; Baroness McIntosh of Hudnall; Baroness Quin; Baroness Rebuck; Lord Storey; The Lord Bishop of Worcester; Lord Vaizey of Didcot.

Evidence Session No. 4              Virtual Proceeding              Questions 38 - 45

 

Witnesses

I: Ruth Smeeth, Chief Executive Officer, Index on Censorship; Richard Wingfield, Head of Legal, Global Partners Digital.

 

USE OF THE TRANSCRIPT

This is a corrected transcript of evidence taken in public and webcast on www.parliamentlive.tv.

 



 

Examination of witnesses

Ruth Smeeth and Richard Wingfield.

Q38              The Chair: In our second evidence session today, we are joined by Ruth Smeeth and Richard Wingfield. Ruth and Richard, welcome, and thank you for joining us. The session is being broadcast online and a transcript will be taken.

Ruth is the chief executive of Index on Censorship and Richard is a legal and policy expert. Would you introduce yourselves and tell us a bit about the organisations that you represent? Would you also give us your opening perspective on the broad issue of freedom of expression online? We will then take questions from members of the panel.

Ruth Smeeth: Good afternoon, everybody. I have been chief executive of Index on Censorship for six months, so if there are questions I do not quite know the answer to, please bear with me and I will resubmit. It is a very strange experience for me to be on this side of a committee hearing in Parliament, and not on that side. It is a nerve-racking experience on this side, as it turns out.

Some of you may be aware of Index on Censorship’s incredible heritage. We are coming to our 50th anniversary next year. We were established to be a voice for the persecuted; for people living behind the Iron Curtain who were being persecuted because of their writings. Index was established to be a vehicle for writers and scholars.

We have a quarterly magazine that we have published consistently for 50 years. It acts as a platform for people who cannot be published in their own country, and shines a light on what is happening in oppressive regimes. Stephen Spender, who launched us in The Times in 1971, made it clear that we were not to be a grievance sheet but rather that we were to expose and to fight for free speech in the UK, France and the US, because the rest of the world looks to us. The guidance, how we choose to legislate on free expression and how we use the Human Rights Act are key. The rest of the world will look to us and repressive regimes will hide behind what we do.

As regards free expression online, my position is clear. Online activity is incredible because it has given so many more people an immediate voice. Those of you who know my political context will know that I, as much as anyone else, appreciate the fact that there is a problem with online harm, bullying and harassment. I do not start from the position that there is no problem. There is a problem. It is whether legislation is the right way to deal with it and what the unintended consequences of that legislation may be.

We should be very clear that, in the context of history, online activity is a very new tool, but, especially in the last year, it is not about our online lives versus our offline lives. It is now our lives and we should not seek to regulate our offline lives differently from our online lives. That is why I have some concerns about the Government’s White Paper on online harms and the likely legislation, which I am sure we will touch on. In an international context, we need to be prepared to recognise that people are looking to us.

The Chair: We will definitely touch on that and many other aspects. Richard Wingfield, welcome.

Richard Wingfield: Thank you very much for inviting me today. I am the Head of Legal at Global Partners Digital. It is an organisation based in the UK, but we work globally. We were established in 2004 by Andrew Puddephatt, who is now the chair of the Internet Watch Foundation, so he is someone who has been involved in online content for a number of years. Our broad mission is to try to ensure that the development, the use and, increasingly, the regulation of the internet and its technology support and enhance human rights rather than undermine them or put them at risk. We work across the world with partners in about 40 countries to try to ensure that these kinds of debates and conversations, whether with companies or with Governments, lead to rights-respecting outcomes.

My broad approach is very similar to Ruth’s, in that I recognise the tremendous value that internet technology has brought, including for the enjoyment of human rights: not just freedom of expression but also the rights to freedom of association and peaceful assembly, and the ability for activists to communicate, to assemble, to share information and to whistle-blow. Of course, the darker parts of society are reflected on the internet as well. The question is how we address those challenges online as well as offline without losing any of the value or the benefits the internet has brought to free expression and other human rights.

Q39              Baroness Bull: Thank you both for being here. I want to probe with both of you what you believe is the scope of the right to freedom of expression. We know that it is protected under the Human Rights Act and we know that it is subject to some caveats. It is generally agreed that anything giving rise to physical harm is covered, but it is less clear on things that give rise to other types of harm—social, cultural, harm to one’s dignity or self-confidence, for instance. We have heard from one witness that the scope should only include physical harm. We have heard from other witnesses that that is too restrictive. I am keen to have your view on where the edges of the umbrella are.

Ruth Smeeth: I am quite clear that it is about incitement to violence; anything that incites violence. In addition, it is anything that promotes criminal activity in a British framework, a UK context for the purposes of our definition. We have to be very clear. The more we make it complicated and add caveats, the more there is a chilling effect on how people use the internet. From my perspective, it should be as all-encompassing a definition as possible, which allows people to engage within the framework of British law.

Baroness Bull: To be clear, in your definition, violence is physical.

Ruth Smeeth: Incitement to violence.

Baroness Bull: Do you mean physical violence, or is that a legal definition of violence? I do not know, to be honest.

Ruth Smeeth: I think it is probably the legal definition of violence. The wonderful thing about all these issues, and one of my concerns about the White Paper, is that language is ever evolving. Using key words means that people will just use alternative words and definitions. We have to be very clear about what we mean and what we do not mean. For me, it is about the British legal framework on illegal activity, because that is the framework within which we want to operate regardless. There has to be consistency online and offline.

One of the issues we have is the concept of legal but harmful material. That is absurd because the conversation that I might have with one of you over a cup of coffee, as and when we are allowed to do that again, should not be regarded as different from what I put in an email to you or what I put on social media, yet under the Government’s proposals on language it would be. We need a consistent approach and, from my perspective, in a UK legal framework.

Baroness Bull: What I am getting at is that between violence and anything that promotes criminality there is a world of abuse, some of which has become criminal; for instance, coercive control, which was not criminal until relatively recently, would now fall under criminal activity. There are still grey areas between violence and criminality where harm lives, but in your version it would fall outside the umbrella.

Ruth Smeeth: Yes.

Baroness Bull: Richard, may I ask you to tackle that, please?

Richard Wingfield: From a human rights perspective, there are very clear standards on permissible restrictions on the right to freedom of expression under Article 19 of the ICCPR.

The Chair: We will move on to another question for Ruth, and we will come back to Baroness Bull to finish her thoughts with you.

Q40              Baroness Rebuck: This is a very different question, Ruth. I want to ask you about behaviour online and how we might be able to influence it. One aim might be to reduce the amount of offensive content that is posted online before we try to take it down. We have heard quite a bit from witnesses about whether there are ways in which we can influence online behaviour and promote better digital citizenship and better online etiquette. You have spoken before about the importance of education, young people and schools, but I am equally interested in civic education for adults and what that might look like and who might provide it.

Ruth Smeeth: This is where there is an opportunity. We are talking about regulating culture, which is an incredibly difficult thing to do. For me, there is no carrot; it is all about the stick of how we can regulate. There is a great deal of discussion about codes of conduct for the companies. We need a code of conduct for us as consumers, as part of the platform, for how we engage and what we do. Digital citizenship in schools is key, but you are right that it is a civic responsibility, and I think there is scope.

I will start with young people and work my way up. The Scout Association should be applauded. It has introduced a new badge called the digital citizenship badge. That is an incredible thing to have done to make it normalised and integrate online and offline culture. It was a really positive move.

For a multitude of reasons, we are about to go through a recession. Unemployment is likely to spike. I say that, because I think that in the next 18 months we will see a lot of people reskill and upskill, and the Government have issued a commitment to further education and lifelong learning. As part of that, undoubtedly the Government will want to give people online skills for employability. We should incorporate at that point an element of digital citizenship.

As a former Member of Parliament, I can say that my typical abusers are not children. You are absolutely right. They are typically men in their 40s. Making sure that people are given the skills as they touch different parts of the education system as they progress is key.

There is an opportunity. We had a national programme of civic engagement for silver surfers, to get older people online. It was done through our library networks, through the WI, and through every community group. They were given a programme that was easy and accessible. We could do exactly the same, and ask the digital platforms to work collaboratively to develop a programme that could easily be rolled out as part of the ongoing engagement activities we are seeing. As we continue to have an online world as opposed to a real world, at least for the next seven months, that is an immediate opportunity that could be exploited, and could be an easy recommendation for the Committee.

Baroness Rebuck: I like the idea of giving people online skills and including digital citizenship as part of that, but to whom do we look to define what public morality or good citizenship could be? I like the idea of it, but how do we get to that point? Do you have any thoughts on that?

Ruth Smeeth: It is at the heart of this whole conversation. When we talk about algorithms or about machine learning, at the core of it is who determines what is or is not right. Obviously, I would say that it is civil society coming together as a collaborative effort to agree, but one of the things I am nervous about in this context is that there has been no national conversation about what our online lives should be, what free speech should look like and, candidly, where the red lines are. It brings me back to Baroness Bull’s question about what should or should not be in scope. It is not for me or you to determine. It is nationally to be agreed, so that people have buy-in. One of the issues for us is that it looks like a free-for-all, so there is no buy-in.

To do that, we have to start the process. You start the process as parliamentarians, in my opinion, with civil society. You have a forum to look at where we think the line should be, and then ask the social media platforms to fund it and help us develop platforms. They have the tools available. All of them run education programmes already. We should be working with them to see what they are doing, and what works and what does not work.

The Chair: Baroness Bull, could you pick up with Richard on the fundamental question of definition?

Baroness Bull: Perhaps I could add a second thought that I was going to give both of you, which leads on exactly from Baroness Rebuck’s question. It is about the question of cultural differences and how much one can be absolute about what might be harmful, indeed what might be moral, given that cultural differences may come into play.

Richard Wingfield: Purely from a human rights perspective, the instruments that guarantee freedom of expression contain exceptions. It is not an absolute right like some other rights. Those are clearly set out in Article 19 of the ICCPR and its equivalent in the UK Act. They relate to protecting the rights of others, preventing crime and disorder, protecting public health and morals and so forth. We have a starting point there. We then need to work out within society more specifically where restrictions might be necessary. I absolutely agree with Ruth that the criminality of certain forms of behaviour is the right place to start, because we have agreed through a democratic process what is and is not allowed.

There might be situations where we have harms in society that we take action against even if we do not criminalise them. One example is bullying in schools. We might intervene to stop children saying words to each other in the playground. We would not do the same if it was adults in the pub. The response is not criminalisation. It is another kind of intervention. We have the same thing with age ratings on films and television programmes. The unfortunate answer to your question is that there is no single neat line between acceptable and unacceptable. It depends so much on the context, the speaker, the intention and all the rest of it.

To your second question, which is about cultural differences, not only do we have significant differences in different parts of the world over what is socially acceptable or not, but also significant differences within different parts of the UK, within age groups, within families. All that leads me to believe that it is incredibly important that we allow as much nuance and discretion as possible, rather than trying to draw clear lines about what is acceptable and unacceptable because, unfortunately, that is not the way societies work.

Baroness Bull: That is really helpful, Richard. Thank you very much.

The Chair: We have got to a point where we have had some clarity about your positions on some of the key definitions and we have started to explore the issue of behaviour. Baroness Rebuck discussed that with Ruth. Baroness Quin wants to pick up on that point.

Baroness Quin: It follows from Baroness Rebuck’s question. I was quite struck with what you said, Ruth, about the Scouts’ digital citizenship badge, which I did not know about. It made me wonder how we can tap into examples of good or even best practice in this field. In your international work, Richard, are people looking at successful international examples of digital citizenship as it affects different age groups?

Ruth Smeeth: At the moment, there is no clear framework showing best practice. Part of that, I believe, is because the social media platforms themselves provide the training, and they are commercial endeavours and do not necessarily talk to each other about what they are doing.

The Scout Association badge was developed in partnership with Nominet. It had a corporate sponsor to ensure that it worked effectively, and that it was worthwhile. Some of these things will have to be done as a collective effort. DCMS is probably the most sensible vehicle for collecting information about best practice, and what is working and what is not. We all have anecdotal examples of where we think it is working.

As a former Member of Parliament, I experienced Google’s training session. It was extraordinary. It was also extremely expensive, and schools needed to commit to a full day outside the curriculum. I think we all know from our own experiences at school that one day does not necessarily change a culture. It may spark interest. There should be a steady stream all the way through. I think you are right. My godchildren are more IT literate at the age of eight than I am ever likely to be. Starting some of the citizenship education much earlier than we would otherwise have started it means that we have to look at ongoing programmes. As I said earlier, some of the main perpetrators of problems online are not of school age.

Baroness Quin: Richard, would you like to add anything on the international dimension?

Richard Wingfield: I have no great examples of international programmes. It is important to recognise the very different harms that young people and adults face online and that there are differences in how resilience is built. How one develops the ability to resist financial fraud is very different from how one builds resilience to disinformation, or to screen addiction. Just as we teach children and adults in different ways about things such as road safety, drugs and alcohol and other potential risks to life and other harms, when it comes to the online environment we must recognise the differences as well. Different programmes are probably needed to deal with different harms.

Q41              Lord Storey: My question is about the balance between people having freedom of expression and the harm they potentially can do by having that freedom of expression. It links to the other questions about citizenship and etiquette and young people.

I am of the view that there are three things. First, we have to make sure that we are determined and organised in how we provide that knowledge and understanding, particularly to young people. If we just do it in an ad hoc way, it will make no difference at all. Equally, in each society we have to decide what our freedoms should be, and we should guarantee and protect those freedoms as much as possible.

Thirdly, I have seen for myself as a teacher how children have been traumatised by name calling and cyberbullying on the internet. I have always done two things. One is talking to them about that, but making it clear that some of them are too young to be on Facebook. We should make sure that age verification is correct. Secondly, I say, “Do you know what, you can unplug and not look at the nasty comments?” Sticks and stones may break my bones but words will never harm me. Words do harm you and, if you do not like it, come away from it. It is quite a long convoluted question, but is there anything I say that resonates with you, or is it not as easy as that?

Richard Wingfield: There are a couple of elements to your suggestion that perhaps the person should simply withdraw from the platform and stop using it. Is it feasible to simply ask people not to use platforms, whether young or old? Often for a young person that would be very difficult. For an adult, it means withdrawing from quite fundamental parts of society if they are no longer able to use social media platforms and so on. I am not sure that is an entire solution.

Your question about making sure that children are safe and secure online is critically important. There are ways that platforms could do better to try to make sure that is the case. It is also critical to remember that we cannot lump everyone under the age of 18 into one box. What is inappropriate for a four or five year-old to watch on YouTube may be quite important for a teenager to watch, if they are looking at information about sexual identity, for example, or information about alcohol and drugs. If there are restrictions, we should make sure that it is not under-18s versus over-18s but that they are appropriate for the young people accessing the platform. That raises huge questions about how the platforms know the age of the person, which we might come to later.

Lord Storey: Could you also address the issue of legal liability and what is appropriate for online platforms?

Richard Wingfield: The principle we have at the moment, whereby platforms have complete immunity from liability unless they are made aware of the content, which is what we have in the e-commerce directive and our current law, will probably have to be reviewed, simply because platforms are more active now in looking at the content on those platforms than perhaps they were when the legislation was passed. The danger of going to the other extreme, which is to make them liable if there is any illegal or harmful content on their platforms at all, is that it would require them to monitor everything that everybody says, and to intervene if they think something illegal or harmful has happened.

That raises a number of questions that I would love to explore, but I shall introduce them briefly and then go to Ruth. There are questions about whether it is appropriate for platforms to decide on the legality of things online instead of, for example, the police and the courts, and what are the means of redress for individuals if their content is removed or filtered in some way.

At the moment, the processes are not very transparent. Are we simply going to encourage platforms to use algorithms and automation to proactively remove content with no human ever looking at it, when we know that those processes are incredibly inaccurate and often disproportionately target minority groups due to biases in the algorithms? The right liability framework encourages good content moderation through transparency and accountability, but does not make platforms ultimately liable, if, despite doing that, something bad remains on the platform.

Ruth Smeeth: As regards your comments about cyberbullying and being traumatised, Lord Storey, I empathise both with the comments and the experiences of the young people you are talking about. From a personal perspective, I would advocate removing oneself from social media, especially if you are in the middle of a Twitter storm, which would be more relevant for some of you, or in the middle of an experience of online harassment. The way I coped with it, and this is a personal experience, is that I removed myself from all social media, which is not helpful when you are in the public eye.

I now have social media only on my desktop computer. I do not have it on my phone. If I want to see what is happening on Twitter, I have to physically go to it. That separates you from bad experiences. My Twitter settings are extraordinary. There have been numerous cases, even in the last couple of months, when I have only found out I was in the middle of a new Twitter storm because of positive comments that came up on my notifications when people were defending me, not because of what had been said about me in the first place. That is a much nicer place to be than my personal experiences four years ago. One of the reasons why it is important that people know how to use the technology is that they can set up the settings they need. From a personal perspective, that gave me some power over what was happening to me. Regardless of age, that is important.

There are clear elements of the legal framework that would be helpful as regards legal liabilities, one of which is definitely the transparency of algorithms, the AI and the learning. We have no idea what is really happening. One example is a piece of research that Index on Censorship did earlier this year on news aggregators. We compared how two news aggregator sites were presenting information and news stories about LGBTQ issues. On one very prominent platform, 46% of what it was putting into its news content came from right-wing or alternative right news stories, and the other, over exactly the same period, gave only 6% from those same sites. News aggregators choose by their algorithms what we see in our news priorities. Most people just type news into a search engine, and get the stories that come up, so that is what they think is the news. There is a huge amount of control without any transparency. A legal framework would help.

The other area where the piece of legislation could be incredibly important is that no legal framework exists in the world at the moment for deleted data to be stored. One of the pieces of work I am doing is with the Syrian Archive, where people collect data on social media evidence of war crimes in Syria; 23% of what they have collated and downloaded had already been deleted by the platforms. Once the platforms have deleted it, it is permanently deleted, so it cannot be used as evidence in war crimes tribunals. Understandably, platforms do not want to hold that data because they would be holding illegal images of terrorist activity or incitement, and they have no legal shield to do so.

A lot of the conversation is about potentially removing evidence of rape or of war crimes. Once it is deleted, our security and police agencies can no longer access that material. We could have a legal framework—this is the opportunity—that allows a digital evidence locker to be established. I know I have veered slightly off the legal liability framework, but I think those two points could be quite powerful for the legislation.

The Chair: Let us stick with UK legislation for a bit.

Q42              The Lord Bishop of Worcester: Thank you both for all that you have said so far. Thank you for your useful tips, Ruth, on dealing with social media. I think you should offer seminars to parliamentarians with that sort of advice.

I want to take us on from some of the questions that have already been raised but drill down a bit on the legalities. The publication of the Online Harms Bill is imminent. It follows the fuss over the White Paper about whether the Bill would include cyberbullying, trolling, intimidation, disinformation, et cetera. We have heard from a previous witness that there is still a lot of worry that the harms in the Bill might not be clearly defined, and a question about how much discretion Ofcom would have. I would like to press you more about legal liability. What would you do with the Bill? Where would the line be drawn for you?

Ruth Smeeth: Richard should definitely start that.

Richard Wingfield: Thank you, Ruth. The proposals that we have seen in relation to legal but harmful content concern me very much for a couple of reasons. The first is quite a principled one, which is that if Parliament does not want something to exist, if it does not want people to do a certain thing, it should prohibit it clearly and explicitly, whether it takes place offline or online. Otherwise, our Parliament would be saying to people, “We don’t like you doing this thing. We’re not going to make it illegal, but you won’t be able to do it online, and anything you do will be taken down”. If the Government have concerns about some of the types of harms in the White Paper, such as disinformation, they should create, in either criminal or civil legislation, clear criminal offences or some equivalent civil law, to make clear that that is prohibited, and that the user is prohibited from doing it, whether they do it in real life or online. Trying to address the problem by simply having it deleted when it is online, but with no consequence for the user, for the speaker, or for its offline equivalent, seems to me deeply problematic.

My second concern is that even with illegal content now we are transferring law enforcement, and questions about deciding what is legal and illegal in society, from the police and the courts to content moderators at Facebook, Google and YouTube. I was a lawyer for many years. Making decisions about what is illegal speech is incredibly difficult. It takes time to gather evidence and to talk to witnesses, and it is most likely there will be a trial at the end of it, yet we are asking content moderators to understand our legal system and make decisions in minutes about whether somebody’s speech is illegal or not. That means that platforms will introduce automated processes, which we know they are doing already. It will not even be humans; algorithms will start deciding whether what people in the UK do is legal or illegal. From a rule of law perspective, as much as anything else, that makes me feel quite uncomfortable.

The solution, therefore, is not to create new kinds of prohibited speech online, but to look at our existing legal frameworks and what is prohibited in society, and make sure that is enforced online through proper resourcing of the police, and better upskilling of law enforcement agencies to collect the evidence they need, so that people can be prosecuted, or in other ways dealt with, when they do something illegal online. That would be my approach.

Ruth Smeeth: Unsurprisingly, I agree with Richard, but I want to expand slightly. We have to look at what social media are and the fact that each platform is different. Our expectations of Google would be very different from our expectations of Facebook, Parler or TikTok. Each of those has a different commercial offering and probably requires a more nuanced approach to regulation than one size fits all. Given what we are talking about, comment pages on TripAdvisor and Mumsnet would be covered under the same piece of legislation as YouTube and Facebook. None of these issues is clear-cut because it is about culture. We need to separate users and platforms. From the users’ perspective, we have to be very clear about their expectations. That comes back to the original question I was asked about a code of conduct for users. We also need to ensure, as Richard said, that the current penalties are being used.

I will give you a real-life example. We are coming up to the anniversary of the general election last year, when I received a handwritten death threat. At the same time, I received from a different but local source vile racist abuse that was very threatening in nature. The physical death threat has been dealt with and gone through the courts, because that is traditional policing. There were fingerprints, it was normal and the police knew what to do.

Even with my experience and their experience of having looked after me for five years, one of my former staff was interviewed on Saturday night to give yet another witness statement about something that happened over a year ago, because the police are still, a year later, investigating the online abuse. The powers exist, but the resources are not there for online abuse. Before we come up with a new legal framework, let us make sure that the resources are there for the current legal framework to work appropriately.

There is also an issue about the nature of the companies. Some of them are commercial enterprises and make profit. Others have alternative reasons for their existence, whether they are political projects or nation state projects. The latter two do not make a profit. They are not designed to make a profit. If we are to have a legal framework that imposes financial penalties on the companies, we are only talking about the former, yet the latter are where some of the more toxic and dangerous speech that incites violence is being used. We have to be very clear and careful about making sure that, candidly, we legislate with a view to tomorrow and to all platforms, not just, with the greatest of respect, the platforms that people in this group use, or that the people writing the legislation use.

I had not heard of TikTok until the pandemic. There are new platforms developing every day. I assume that none of you is on Parler, for example. We have no experience of the many different platforms that the legislation would seek to govern. We need to be very careful when we are developing a legal framework and legal liability that we do not just do it for Facebook and Google, but that we think about some of the more alternative platforms.

The Lord Bishop of Worcester: The teenagers in my family keep me in touch with TikTok. I had never heard of Parler until I saw one of President Trump’s tweets recommending it.

You said earlier, Ruth, and you just implied, that there should be no difference between what one can say online and what one can say offline, as it were. As a previous witness remarked, a right to freedom of speech is not a right to amplification. Is there a complication with online because of the possibility and reality of huge amplification?

Ruth Smeeth: The difference is that now immediate amplification is possible, and something can go viral overnight. Historically, you could have seen exactly the same thing. Thirty years ago, it would have been letters and a media story, and a slow burn that took several days, weeks or months but might have had the same effect. We have to be very careful when we talk about not wanting to amplify people’s voices.

Richard, I am completely stealing your TikTok example. TikTok was seeking to protect disabled voices on its programmes because they were getting abuse, so they were deprioritised. That meant that disabled voices were then not heard on TikTok. You have to be very careful about where the balance is. That is one of the problems with the AI and algorithms that are being used to manage programmes.

There are understandable reasons for the algorithms. Facebook has 3 billion monthly users, and there is no way we have the human capacity to monitor that number of comments. There is a balance, but how the algorithms work and the transparency of those algorithms are key in amplification. I agree that everyone has the right to speak, but they do not necessarily have the right to be heard. We need a conversation to decide how we do that. It is not for me to decide as the arbiter of free speech who has or does not have the right to be heard. We just need to be careful about how we do it. Remember that it is all nuanced, so there are consequences if you de-amplify as well as if you amplify.

The Lord Bishop of Worcester: Thank you. That is very helpful.

The Chair: And very interesting, too.

Q43              Viscount Colville of Culross: I would like to carry on talking about what you were saying, Ruth. You made a distinction between for-profit companies and not-for-profit companies. Generally, what has brought this on is the behaviour of the for-profit companies. You talked about the need for transparency of algorithms to deal with amplification.

Witnesses have talked about the business models of platforms that hinge on user engagement, encouraging and incentivising users to provide outrage and hate. Is there anything we could do to encourage the tech platforms to engineer their systems so that there is less demand for outrageous content? Could we open up what one of the witnesses called the noxious echo chambers that form online and are supposed to radicalise and polarise users?

Ruth Smeeth: Unsurprisingly, both as a former politician and in my current job, I am a true believer in speech, and more speech rather than less speech. I say that having sat in the Chamber in the other place and listened to far too much speech. Anything that has moderating voices is incredibly important. How you do that when as users we self-select social media platforms is hard.

It is for the platforms to talk about how they operate. It is interesting with YouTube—there is no point in not naming the platform—that, if you type “Holocaust denial”, it will show you the Holocaust denial film you want to watch, but the next film on autoplay will be about the Holocaust and what really happened. There is a countermessaging element that is key. I probably should not say this with my work hat on, but my concern would be about what happens if someone puts in “the Holocaust”. Do they get a Holocaust denial film next? It is about who decides whether they should or should not, which brings me back to the transparency element.

I would argue that my moral judgments are quite mainstream, but so would everybody, so who gets to decide what is and is not acceptable as regards how it works and how we should make interventions? That is incredibly important with the anti-vaccine movement, which we are about to see more of in coming days. It is a test for us all and for the platforms. I am not in favour, unsurprisingly, of banning the anti-vaxxers. We should challenge every single argument they put up, and give more information, not less information. We will need the platforms’ help to insert that into the anti-vaxxer groups. The platforms will have to help us, because the alternative is that that speech gets banned, and that will not help to build confidence in a vaccine. We have to be far more mature about the conversations we want to have. We need help from the platforms to get into the right spaces to have some of those conversations.

Viscount Colville of Culross: What is your feeling about how willing the platforms are to help in that conversation?

Ruth Smeeth: Every platform I have spoken to recognises that there is a problem online. None of them has a single solution for how to fix it. Obviously, they do not want a fining regime. That is understandable from their commercial perspective. It is about how we work together to make it real and put in a level of protection for free speech. Most platforms have been built on and emerged through free expression. It is about how that protection can be squared with tackling racist content. I do not think many people would want to say that they make their money through the promotion of hate speech. I think they are amenable to working with us, but I am sure you will have them in front of you to have that conversation.

Viscount Colville of Culross: Richard, do you have anything to add?

Richard Wingfield: Only a couple of things. I endorse everything that Ruth said. Amplification, as Ruth mentioned, is an area ripe for intervention. I would love to see proper transparency of the algorithms so that we understand the data and terms that are being used by platforms to decide whether to promote or deprioritise content. There needs to be some kind of accountability or pressure mechanism to make sure that it is done appropriately and without causing harm.

The issue of echo chambers is challenging. We do not want to demand that people follow certain other people on a social media platform to get a diversity of voices. We tend to choose who we listen to in the same way that we choose which newspapers we read and the television programmes we watch. Again, it goes back to the question of algorithmic transparency and a better understanding of why certain voices, adverts or messages are pushed towards certain groups. That is critical to identifying what harms this is leading to and what interventions are appropriate.

One of the discussions that seems to have come up within government is about whether there should be a duty of impartiality, and a duty to ensure that a plurality and diversity of voices is heard on platforms. I have some concerns with that idea. Often we choose platforms because we want to hear certain things. If I want to go to a platform that is, for example, full of conservative voices, I do not think that platform should be under an obligation to promote liberal voices just so that I get a balanced picture. We have to give individuals some responsibility to listen to different sides and not force them to do so. I have some concerns about how that would be achieved in practice, but it is critically important to address transparency of algorithms and the processing of data to prioritise content.

Q44              Lord McInnes of Kilwinning: From what you have said, legal liability cannot be applied at the moment, which just means that a lot more needs to come from the platforms as regards policy and its transparency. Up to this point that has come about due to pressure from advertisers, as did next year’s meeting to try to harmonise different policies. How do you both feel about the balance between online freedom of expression and the individual policies of the platforms? Indeed, if there are agreed policies across platforms, what impact could that have on freedom of expression?

Richard Wingfield: If you ask that question of platforms, they will tell you that they are all different and there is no one-size-fits-all policy. We have seen a number of areas where they are working closely together. Child sexual abuse imagery, for example, is illegal and relatively easy to define legally. We have the Internet Watch Foundation and certain hash databases, so there is consistency as regards the platforms removing that. There is the Global Internet Forum to Counter Terrorism when it comes to terrorist content as well. To an extent, platforms are working together on some of the more clearly delineated types of illegal and harmful content to ensure consistency of removal, so that it does not appear on any of them.

Once you get into other, less clear forms of harm, for example things such as bullying or disinformation, it is difficult, because you will have platforms that want to ensure that they have a very safe space and so will be very interventionist on aggressive language, and others that encourage rough and tumble and political debate, so there will be different levels of intervention.

It becomes very difficult when you want consistent policies across platforms. Promoting the kinds of initiatives that we have seen already, such as the GIFCT, the IWF and the recently announced Forum on Information and Democracy, is the best way of trying to achieve as much harmonisation as possible. Government mandating a single content policy for all platforms would be unfeasible and undesirable.

Lord McInnes of Kilwinning: Sure.

Ruth Smeeth: There is always inherent tension on lots of these issues. The joy of our core human rights is that there is tension between them all. How we figure out as a society what we are going to prioritise and where the line should be is what the conversation should always be about. There is a clear issue about how we frame freedom of expression in a positive way against the general principles that restrict it.

One of my concerns, especially about a legal liability framework, is that no one will be penalised for deleting too much content. That should be a real concern for us all. Because of the way it will be deleted, it will be permanently deleted and we will never be able to see what was or was not deleted. Facebook will become all about dancing cats. There is a place for dancing cats, but many more of us would want a broader discussion. We have to give a clear framework to the platforms about where the red lines are so that they do not overdelete.

One of the most ludicrous things that has been deleted recently under counterterrorism—this is my personal opinion—was video of counter-government protests in Lebanon. There were chants that were anti-Hezbollah. The algorithm picked up the word “Hezbollah”, and, even though it was messaging we would probably all want to see, it was permanently deleted. That is of no help to us, either as a society or as regards free speech and free expression. Even in a British context, that is relevant for some of our discussions. Content is twice as likely to be deleted if it is in Arabic or Urdu as it is if it is in English, because of the context and the lack of understanding of what is up there and why. Setting out a clear framework with the providers would be one step, but it will require ongoing monitoring. Someone needs to be able to see what they are deleting. Arguably, that should include academics, politicians and journalists.

Q45              Lord Allen of Kensington: This has been very helpful, Ruth and Richard. Thank you so much. I want to take us back to harmful content. The questions have probably all been answered, but I would like to try to summarise some of the issues I have heard.

There are three things. The first is reducing reach online and how we manage that. Ruth, you talked about algorithms and red lines. Then there is removing content and how you do that. As you said, it is quite complicated, so we need to think about that. The third thing is cultural difference. We have heard from other witnesses that what is acceptable in the US might not be acceptable in the UK. I know we are short of time, but it might be helpful to get very brief responses on those three points.

Ruth Smeeth: They are at the heart of what we are discussing. As regards reducing reach, there is a difference in a traditional media setting between a comment article and the letters page. We have to decide on the general public versus commentators, and whose voices should or should not be heard and whose should or should not be amplified. That is an incredibly difficult conversation, as we have seen in relation to Donald Trump throughout his presidency, but definitely since the election, and whether some of the voices of people supporting him should be amplified or not.

It is a very difficult area. I do not have an answer because it is so difficult. Everyone has the right to speak. Everyone has the right to free expression. How you manage who is heard among everyone’s voices is a conversation we need to have. To take a step back, it goes to the algorithms. Somebody is already making that decision for us, but we do not know about it. That is where I worry about the transparency and accountability element.

As regards reducing content, I am genuinely nervous. A politician in the other House asked for every comment that used the word “rape” to be removed from a platform. That assumes that the only people who would use the word rape are perpetrators and not victims. You are silencing voices if you reduce or delete content in that way. That is simply unacceptable. There is an element about self-harm, for example. People are doing self-care, especially during the pandemic when they cannot access services they would historically have accessed, and they are using forums online to talk about what they have done to themselves and how they are coming through it. Those forums would be automatically deleted under this legislation because they use key words. We have to be very careful about what language is deleted and why, because, with an algorithm, context and intent are impossible to judge.

That leads me to cultural differences. We can all agree that, as we see with the Brexit negotiations, even between us and our French allies, there may be a slightly different cultural focus or challenge. That will always be the case. We do not want the internet to standardise culture; that is not what it is meant to do. It is meant to expose culture to the world, not limit it. I do not think any of us would want to see one cultural narrative dominate how things are moderated and what we get to see. That brings me back to transparency. We need to know who is cutting what, why, and on what basis. We do not even know the first principles of what is being removed.

Lord Allen of Kensington: That is very helpful.

Richard Wingfield: I have a couple of things to add to what Ruth said. On the first point, and it has come up a lot, not only is transparency critically important, but an area of ground-breaking legislation would be to give users more control over the algorithms that platforms use. At the moment, you have only two options. You can look at things chronologically or you can use the platform’s own algorithm, which decides what content is promoted and what is not.

If that was opened up, users could have more control to determine which data points were relevant to the algorithm, or even to bring in third-party applications to do the curation or the moderation of that content. For example, I might want to see more political content or more content related to religion, or more recipes. Greater control over what content was prioritised and deprioritised would go a huge way to addressing the problems we have at the moment.

Lord Allen of Kensington: May I push back on that? I am not sure how you would do that technically. You can do it with preferences but not algorithms. I am not sure how we could encourage people to do that.

Richard Wingfield: Platforms would need to be required to make more of their content moderation process open source, so that people could see how technically the coding was taking place that determined how content was prioritised or deprioritised. If you made that more openly available, third parties could develop software or other tools that could be integrated in a way that allowed the user to decide what algorithms were used to moderate their content, what preferences, and all the rest of it. We have not seen that at all at the moment because all platforms keep it closed and you can only use their algorithms and their content moderation practices.

Lord Allen of Kensington: It is horribly complicated. We are doing it in some of our companies. How do you get the general public to understand how the algorithms bring you to the right place? But you make a good point, and it is something to think about.

Richard Wingfield: It is certainly an area that deserves greater exploration. Twitter has talked about it a little, which has been interesting.

On the question of content moderation, I would like to see content platforms take the same approach that we expect of state actors when it comes to restricting people’s speech. That means having clear policies in place that are easily accessible and understood by the user so that they know in advance what can and cannot be permitted on those platforms.

I would like transparency over the removal process, so that people are aware of what is being removed and why, and they can appeal those decisions. Whether it is content removed or content left up, it can be challenged and there should be some kind of process that is clear and open to users. I worry that we will simply see automation used in all instances because of pressure and the risk of fines or other kinds of sanctions if not enough content is taken down. Once you start using automated processes, it becomes a lot harder for the individual to know what is happening, and why, and to challenge that decision. That is something I would like to see. On the points about culture, in the interests of time, I fully agree with Ruth.

Lord Allen of Kensington: Thank you very much.

The Chair: Richard and Ruth, it has been a really interesting session. Thank you both.

Please keep in touch with the Committee during the inquiry, and, if you see anything that you think might be useful, draw it to our attention. In the meantime, thank you for joining us today. That brings the session to an end.