
 

Select Committee on Communications and Digital

Corrected oral evidence: Freedom of expression online

Tuesday 1 December 2020

4.30 pm

 

Watch the meeting

Members present: Lord Gilbert of Panteg (The Chair); Lord Allen of Kensington; Baroness Bull; Baroness Buscombe; Viscount Colville of Culross; Baroness Grender; Lord McInnes of Kilwinning; Baroness McIntosh of Hudnall; Baroness Quin; Baroness Rebuck; Lord Storey; The Lord Bishop of Worcester; Lord Vaizey of Didcot.

 

Evidence Session No. 2              Virtual Proceeding              Questions 14 - 26

 

Witnesses

I: Benedict Evans, Independent Analyst; Mark Scott, Chief Technology Correspondent, POLITICO.

 

USE OF THE TRANSCRIPT

This is a corrected transcript of evidence taken in public and webcast on www.parliamentlive.tv.

 

 



 

Examination of witnesses

Benedict Evans and Mark Scott.

Q14              The Chair: Welcome to our witnesses, Benedict Evans and Mark Scott, who are going to give us evidence today in our continuing inquiry into freedom of expression online. Benedict is an analyst in the area of mobile digital media and technology, and Mark is chief technology correspondent at POLITICO, writing on a range of relevant subjects. The session is being broadcast online and a transcript will be taken. I will ask you to say a few words to introduce yourselves, and then to give us a brief overview of your perspective on freedom of expression online. I will then invite other members of the Committee to ask questions.

Benedict Evans: I started my career 20 years ago as an equity analyst, writing about technology and mobile companies. I have worked in strategy in media and telecoms, and in consulting. I spent the last five or six years in Silicon Valley working for Andreessen Horowitz, which is a venture capital firm, and I moved back to London at the beginning of this year.

My perspective on freedom of expression is that we have spent 250 years working out how we think about free speech. We have built up a very complex layer of mostly implicit norms, cultural expectations and presumptions about how a newspaper works, how book publishing works, when somebody who owns a venue might or might not rent that space to different kinds of people, what can appear on television and in a cinema and who might be able to go and see it. A small portion of that is in law, but most of it is in professional standards, expectations and norms. The internet has come along, and it is all those things and none of them. It is not really telephones or newspapers or books but something else, in the same way that television and radio were something new, and trying to apply standards from newspapers to radio would not have made a huge amount of sense.

This has happened very quickly. Technology has gone from being interesting and exciting but not really an important part of most people’s lives to being pretty much central to most people’s lives in the last five or 10 years. My favourite statistic here is that, in 2017, 40% of all new relationships in the USA began online. This stuff has become central. It has become systemically important to society. That has implications for freedom of expression, politics, political advertising and all sorts of different questions, because we have connected everybody. That means that we connected all the bad people and, probably more importantly, we connected all our own worst instincts: we connected all our cognitive biases, our prejudices, our rush to judgment, our recency bias and all those kinds of things.

Now we are struggling to work out what we think about that. Policymakers are struggling to work out what they think about that, and a bunch of 30-something product managers in London and Silicon Valley are thinking, “Okay, so how does this work, and why are you telling me I have to censor a politician in Malaysia? How do I work out what I think about that?” As I said, we are trying to work this out much more quickly than we did in those 250 years working out what we thought about newspapers or libel law.

Mark Scott: Good afternoon. I am POLITICO’s chief tech correspondent, basically sitting here in London but primarily focusing on what is going on on the continent, here and in the US. That is my job. I have been doing this for about three and a half years. I also spent seven-plus years at the New York Times before that and previously as a reporter for Bloomberg Businessweek.

Specifically, my career has focused primarily on how policy and digital work together. Over the last two years or so, across a variety of elections, countries, jurisdictions, policy groups and so on, I have been following the discussion on how freedom of expression works, particularly when you are dealing, as Benedict mentioned, with an internet that is global, has no rules, and where there is not really any international agreement on what freedom of expression is or what the rules are for platforms. It has all been made up on the go.

When it comes to freedom of expression, I am a journalist (I think many of you are too, or have been) and it goes without saying that I am a big fan. What I am very interested in, and I spend a lot of my time reporting on now, is that friction, that grey area, where you have some pretty nasty content being put up by a variety of people, but it does not fit into an illegal category or into the category of implicit violence, and yet there is something going on there.

How do you deal with that? What I am struggling with, as someone who is an advocate for the First Amendment and freedom of expression, is what limitations or impositions we as a society, and more importantly as politicians, should place on content that is a bit icky but is not illegal. That is the friction that comes with things like the NetzDG law in Germany, which obviously fits the German context, but cross the border into France and it does not apply. What do you do there?

When it comes to freedom of expression and what that means for me, it is that grey area, that ickiness, where we need greater international co-operation. We will get on to online harms later in the session. How that fits in with the broader discussion going on even in western alliances is fundamental to my reporting, and obviously to how this plays out. That is mostly because, if there is a coalition of the willing on policy-making in setting minimum thresholds or pushing specific definitions of what is and is not allowed online, it progressively gets picked up by others, so there is a knock-on effect. Hopefully that does not create some sort of ministry of truth but at least mitigates some of the pretty nasty stuff that we are seeing, even in the recent US election.

The Chair: Great. What a very good summary of the issues, which we will now start to unpack.

Q15              Baroness Bull: Thank you to both of you for being here today. We are very much at the start of this inquiry and your perspectives will be very helpful in ensuring that we turn our attention and our focus in the right direction.

We know that technology has created various platforms on which people can express themselves and make themselves heard, and that proliferation of platforms is increasing all the time. I am interested in understanding how the technological developments are impacting on individual expression or how they might be impacting on the expression of some rather than others. Are there inbuilt obstacles or preferences? Do different platforms preference different groups? Crucially, where is this going next? What is coming down the pike and what effect will that future development have?

Mark Scott: Benedict is probably better on what comes next in the technology, but I will talk to some of the preferencing issues. I do not think that even 12 to 18 months ago any of us knew what TikTok was. There is that progressive, inbuilt choice of where to go next which the next generation is picking up on. That is not a preference; it is basically that you just do not want to be where your parents are. Therefore, you have people going to new areas where they feel less encumbered. I spend a lot of my time in some very nasty areas of the web, on encrypted messaging groups and so on.

That also plays a role in preferencing, where some of the nastiness gets pushed out to the fringes because there is less regulatory oversight either by the companies or by the officials, because basically no one cares what a Telegram channel is. In the end, if you want to go somewhere and talk to people about some nasty things, you will find something on the internet that allows you to do that. I think it is self-preferencing more than bias from specific platforms.

Baroness Bull: We have heard evidence of Google ranking racist or sexist content highly and therefore effectively creating an unsafe space for women or for people of colour. I am interested to explore how platforms are preferencing, and therefore excluding, and therefore limiting freedom of expression.

Mark Scott: Some of this comes down to algorithmic bias and how those choices are being made. I struggle to think that there is a Google engineer promoting that content for any other reason. Maybe we will come to that later, but I think some of it is down to better data access, figuring out how the algorithms work and de-amplifying specific content to create those safe spaces. You are right that, in the end, the algorithm—and it is not just Google’s—allows that to happen. The thing I am cautious about is what is pushing that, and I think it is mostly a monetisation issue rather than any sort of innate bias built into the platforms.

Benedict Evans: I agree. There are several things to think about. Five years ago, if you had said to people in Silicon Valley, “You should control what people talk about”, they would say, “First, we can’t, and, secondly, we shouldn’t. We can’t in that we have hundreds of millions, billions, of pieces of content. We can’t build automated systems that can scan those in any reliable way, except at a very crude level such as no Nazi content in Germany. But anything else is basically impossible, just technically. We physically can’t build it. There’s far too much content to hire people to look at it all. We can have it or we can turn it off, but we can’t filter it. Secondly, we shouldn’t, because for us to decide what you can talk about would be like Vodafone deciding what you can talk about. It is just not their place. It’s not our place to do that”.

In the last five years, both pieces of that attitude have changed. Machine learning means that it is possible, at least theoretically, to try to look at everything, bearing in mind that Facebook has also hired 30,000 to 40,000 human moderators who are the backstop to that machine learning. Secondly, everyone has seen what has happened in the last five years and thought, “Yes, okay, we do need to try to do something”. You still need to remember that these are automated systems, by and large. If you ask Google when Tom Hanks landed on the moon, it says “1970”. It has inferred that from seeing the “Apollo 13” movie. There is a tendency to look at these systems and think that they are intelligent, but they are generally about as intelligent as a washing machine. You can put clothes in it and they get clean. It is amazing, but if you put dishes in it and press go, it will wash them as well. It does not know what it is looking at.

A lot of the questions of machine-learning bias and of what Google or Facebook promote come from the fact that what they are really doing is reflecting the data. There was a time when, if you searched for “Is the Holocaust a hoax?”, the pages with the words “Holocaust” and “hoax” were, guess what, pages that said that it was a hoax. That is not somebody at Google wanting to promote that; it is just how it is indexed in that context. They can go in and put their fingers on the scales with some of those things, but ultimately what you are seeing is a reflection of the data. In a sense, you have to be conscious of how it works, and they need to be conscious of looking for where those kinds of problems can arise.

As a classic example, if you give a computer vision system an image of a man in a white coat, it says “doctor”. If you give it a woman in a white coat, it says “lab technician” or something. Again, it is not because Google is biased; it is the labels on most of the women in white coats. What are all the pictures of men in white coats labelled? That is what the machine is thinking. You have a lot of those sorts of challenges, which all reflect the fact that this is an automated system, as I have said several times. It does not really know what it is looking at in any meaningful sense. There always needs to be a conscious effort to go in and change what the automated system inferred from going out and looking at the world.

Q16              Baroness Rebuck: I know that there will be many questions on legislation and technology, but I want to look at the human factor and something Benedict said about where the norms are. You said before that pretty much everything we worried about has now happened online. It has been amplified and channelled in new ways. We heard from one of our witnesses the other week about standards of digital citizenship and etiquette and whether they can be improved—as I said, the human factor.

People are much less inhibited online, but how can we promote digital etiquette and how can we have a disagreement that is civilised rather than hurling abuse? One of the suggestions was to concentrate on young people and teach them digital citizenship in schools. That is fine, but it kisses goodbye to several generations of adults. I would love your perspective on this and on possibilities outside the education system.

Benedict Evans: A lot of this is what people who make consumer internet products refer to as the mechanics you are creating. Where do you channel behaviour? What things do you make frictionless? What things do you make difficult? For example, Twitter has a very easy mechanic to quote somebody else and talk about what they are saying, and very often that is used to attack people. On Instagram, you cannot re-post something that somebody else has said at all. You can comment, but then you can flag a post that cannot have comments underneath. There are lots of product decisions you can take that will shape how people use things that will create friction on certain behaviours and remove friction on other kinds of behaviours. Some of that is about the incentive of how you create the product.

For some of it you have to presume the worst about the world. You have to design your product thinking, “What will happen once Beavis and Butt-Head arrive?” This is what happened to Zoom. Zoom built the product to be used inside big companies. It did not occur to them that people would dial into random calls and flash their genitals, because if you are using it inside IBM, why would you do that? You would get fired. Then suddenly you have hundreds of millions of normal consumers using it and they had to hire a trust and safety team and go, “Oh, dear, we have Beavis and Butt-Head on the platform now. What do we do about this?” You can take lots of those kinds of product decisions that shape how people will use it or how they will not use it. The other side of it is that you get an awful lot of weird cultural norms emerging on some platforms.

There is a challenge of working out what this new sphere is. If I say something to you, if I reply to one of your tweets as a politician, I almost do not think you see it. For many people that is like shouting at the TV in the pub. I do not want to generalise that example, but objectively the same action can seem very different to different people and very different in different contexts. Some of this is because we are just working out how it works. Some of it is also about product design, and people shout a lot at Twitter about product design for these kinds of issues.

I have one other example and then I will pass over to Mark. Three or four years ago, there were three or four hot social networks whose premise was that they were anonymous. I forget what they were called, but the idea was that it was anonymous and it was just in an area. Basically, if you were at this school you could post into the group for your school and it was anonymous. That seemed like a brilliant idea for about 30 seconds, and then basically they became bullying networks. Those networks do not exist any more; they ended up being shut down. First, you could not make any money out of it but, secondly, there was no way to make it not toxic.

That is an extreme example of where you realise that you have created a mechanic only for bullies. That is a lot of what one thinks about. I would be really sceptical about the idea that you can just persuade people not to be unpleasant online or that you could do a course for that. I think it has more to do with how you design the products and think about them.

Baroness Rebuck: You mentioned the New York Times. I just read a really interesting article about a data scientist at one of the big platforms who asked users to rate posts as either good for the world or bad for the world. They then produced an algorithm that predicted which posts were bad for the world and de-emphasised them on the feed. That was great. That was a result that was good for the world but bad for business, because it reduced users’ engagement. To Benedict’s point: what are the levers? Is it human? That would seem to indicate that people knew the difference, but clearly it was not good for business, so they did not continue with it. What is your perspective?

Benedict Evans: That is great until people start jumping on people they disagree with: “Everybody mark this one down, because I disagree with it”. An awful lot of these things do not survive contact with the enemy.

Mark Scott: Yik Yak was one of the platforms that Benedict mentioned that was shut down. I am going to sound very cynical, so forgive me, but these are both public spaces and the commons, but they are also advertising models. There is a monetisation question here. I am very much in favour of greater digital citizenship and etiquette and so on, but is that the purpose of Facebook? Again, forgive me for being cynical. I am not sure it is, at least from a commercial, investor perspective, which is the underlying point, to a degree.

There has been a push to create online IDs and to link people’s offline personas with their online personas in an effort to create greater friction. I would not be a fan of that. Maybe if someone like Mark Scott, the actual human being, is linked to a post, I will be more reluctant to start slagging people off. That is a really bad idea, not because I think that anonymity to a degree is bad, but because there are places in this world where platforms provide a service for people getting messages out that the Government would not like. As much as I have been a critic of the platforms and their activity, sometimes the digital citizenship plays both ways, and the platforms provide a voice in places where people do not have a free press or the Government are not on their side. That is also worth keeping in mind.

Q17              The Lord Bishop of Worcester: Thank you both very much indeed for being with us this afternoon. I want to build on what you were saying at the beginning, Benedict, about the question of freedom of speech being something we have grappled with ethically for centuries, rather than decades, but there is something new here.

Also, thinking about what you described as the “icky” content, Mark—I like that technical phraseology—given that it is something new, and that potentially the harms are much greater, that must be one of the things that characterises this really new phenomenon. How should platforms balance protecting freedom of expression, which we all recognise as being important, with that of reducing harms? What new policy should we enact and what challenges might these policies face? We want policies that will survive the first line of attack, as you put it, Benedict. Mark, you are unmuted, so will you go first?

Mark Scott: Yes, I think we will do a tag team here again. Some of the easy, low-hanging-fruit options are mostly there to be taken. The platforms have got together and created something called the GIFCT, which tackles terrorist content, and they have done a relatively good job of stopping Christchurch-massacre-type content from spreading really quickly.

We are now in the grey area where it is harmful but not illegal. The Online Harms Bill, the European Union’s Digital Services Act and Section 230 in the US will have a hard time dealing with that. Just because I do not like something does not mean it is illegal, so how do you define that? I am not a lawyer, so how do you define the limitations? That is what the platforms—and you guys are doing this too, so the policymakers—are struggling with. Something has to happen.

There is something to be done about not getting ahead of ourselves. At the moment, it would be very useful to know, per the previous question, how the algorithms work to amplify content. Before we start banning or limiting harmful content, let us figure out where the content is, how it is amplified and how the machinery works first before we decide to jump to the final step. As much as it is important to hit the harmful content, which obviously needs to happen, we still do not know how the sausages are made. Until that happens, it is very difficult to come up with real policies—internally and externally—that will have a meaningful impact.

Benedict Evans: I almost go back to what I said earlier: that a lot of this is about product design and how you think about what kind of behaviours you are encouraging. I tend to disagree a bit about the idea that toxic content is good for business. By and large, toxic content tends to push people off the platform and draws in the kinds of people who advertisers do not like and are not interested in. It is a little bit like the parallel fallacy that Google is making lots of money from news. Actually, the advertising is not on stories about terrorist attacks. The advertising is on links to holidays.

It is not that anybody likes this stuff. It is more that, going back two or three years, people at Facebook or Twitter would say, “We don’t like people claiming that vaccines don’t work, but should we stop people from saying that? At what point should we say, ‘You’re not allowed to say this on our platform’?” Twitter, for example, does not really have an algorithm. It has one, but the vast majority of what you see is from the people you have chosen to follow. In a sense, you are seeing what you asked for.

Part of the challenge here is when you are on Facebook, in particular. If Instagram bans you, that is your loss but it is not the end of the world. If you are banned from Facebook, it is a little bit more like being told that you will not be allowed a telephone any more. It has that quasi-utility character that makes it more difficult for Facebook to say, “We’re going to kick you off”. It gets into the core free-speech challenges. Are we really sure that we want to say, “Youre not allowed to say that”?

This gets you to a phrase we have not used yet, the amplification question, which is what I think Mark is talking about when he says “algorithm”. If I say something in a private chat, that is okay, mostly. If I say it on my newsfeed and I have 10 followers, that is probably okay. What if I have 500 followers? What if I have 10,000? What if I have 10,000 and 50,000 people like that? Should it then be recommended to people who do not follow me? If I say that in a group that has 100,000 members, should that group be recommended to other people? Should this YouTube video be recommended to other people?

That question of amplification is very interesting, because it takes you through a sliding scale on two axes. One axis asks how bad it is, and there is stuff at one end that we would all agree should be illegal and stuff at the other end that almost nobody thinks should be illegal. Then there is another axis that asks what it would mean to say, “Take it down”. Do you mean block it, remove it, and make it impossible for anyone to see, which is kind of what happens in China? You use the words “Falun Gong” and your account just disappears instantly.

I do not think we mean that for very many pieces of content. But we do mean that nobody who has not explicitly chosen to see it should get to see it; it should not be amplified. But even that leaves a question about the anti-vax group of 10,000 mothers who have chosen to join it to talk about how vaccines cause cancer. Should Facebook close down that group? This is a group of people who have decided that they believe this and that they want to talk about it.

I do not know the answer to that. I know that it is complicated, and it gets you to the point that I made earlier about the 35-year-old product manager wondering, “Should I make this decision? I have an opinion, but should that be my decision to take?” That is why you get things like the Facebook governance board trying to find somebody else to take it, and why Facebook asks for regulation: because it wants someone else to tell it.

The Lord Bishop of Worcester: That is what I was getting at when I talked about the potential for much more harm, which I suppose is because of amplification. That is one way in which the whole technology is really new, is it not?

Benedict Evans: Yes, it is a completely different question. In the past, the newspaper editor could put you in the paper or not, and there were five or 10 newspapers. If all the newspapers say, “You’re not getting into print”, you are just not getting into print, and that is it. That had just as much force as state-controlled censorship, except it was 10 censors instead of one and it was bound up in that 250-year consensus of what a newspaper editor or a book publisher should do. Facebook does not have any of that legitimacy. It has not come out of 300 years of newspaper tradition. Nor did it get elected. It was historically quite uncomfortable about making those kinds of decisions.

Q18              Lord Vaizey of Didcot: I was going to ask about how product design could affect how people express themselves online, but to a certain extent Benedict has answered that question quite comprehensively. So I will ask Mark what he thinks about product design.

I have noticed that I now retweet far less than I used to since I found I had to comment on every tweet that I was retweeting. That has probably saved people who follow me an enormous amount of time. Also, if the Committee feels that product design has been covered by the answer to Gail Rebuck’s question, another question is: where is the balance to be struck between global regulation and country regulation? I want to ask about that, because I know that Baroness Grender is going to ask about online harms. Can country regulation work with these platforms?

Also, as an aside, Benedict mentioned that Facebook wants to be regulated. It wants somebody to give it the answer. A lot of people who do not want Facebook to be regulated object on competition grounds: if you regulate Facebook, the next Facebook that is currently in somebody’s back bedroom will not get off the ground. It is GDPR on steroids. Mark, let us start with you for my completely chaotic question.

Mark Scott: I am trying not to agree with Benedict every time, because that is a bit boring. Let me take my experience with the US election. I covered it quite extensively and spent a lot of very long hours from London looking at it. What struck me about Facebook labelling content from CGTN and RT, for example, as content from state-backed actors (there was also the friction it put in place with “Do you really want to post this?”, as well as some of the de-amplification of the QAnon stuff and the stuff about voter fraud) was that it still had an impact. It was not perfect, and some of the state-backed content and misinformation was still spread widely, even when Facebook labelled it as false. It did not take it down, to Benedict’s point, for some legitimate free speech reasons.

So the friction was there. It is a start in the right direction, but it is not enough. I am not a lawyer, nor am I a product designer, and I do not know what the answer is, but it is part of the solution.

On your second question about global regulations versus domestic, this is literally my day-to-day job, because I cover so many jurisdictions. In a perfect world, we would have (maybe bar China, Russia, Turkey and some of the authoritarian countries) a western alliance of platform regulation that would make sense. To a degree, the European Union will announce an EU 27 version of that next week. The online harms proposal has been a really good opportunity for the UK Government and others who want to do this.

My concern about global versus domestic is that global will take too long, just like digital services tax and online competition rules, so you will get domestic responses. The German NetzDG is an example of something that has been passed. It is focused on specific hateful, harmful content in a very limited legal context. Has it really moved the needle? It has not. Therefore, as much as it has 24-hour takedown with significant fines behind it, it has done nothing to prevent Germans from seeing some nasty stuff.

Finally, on competition, you can have content regulation alongside revisions to competition rules and they can be run in parallel. I do not think it needs to be one or the other.

Q19              Lord Vaizey of Didcot: Benedict, I am conscious of time, but you have done a lot on production regulation and I would be fascinated to hear your views on competition, on which you are an expert.

Benedict Evans: A manufacturing one. A lot of regulatory objectives in this field are in conflict. A competition regulator would make it very easy to export all your Facebook friends and everything they are interested in, “so that I can build another social network”. The privacy regulator would say, “Don’t you dare do that”. That is how policy works. I want my parents’ house to go up in value and houses in Highgate to go down in value, but unfortunately you cannot have both. Sometimes you need to pick those trade-offs.

The challenge is that generally the new thing does not look like the one that came before. If you had drawn up a system of regulation that covered Facebook, WhatsApp and Instagram, TikTok would be a completely new thing and might sit in a completely different place. If you then draw up a rule that is aimed just at TikTok, that will get you yet another thing.

I used to work in telecoms, and regulation tends to be good for incumbents. It caps their margin but keeps other people out. That is partly because it is expensive to have all those compliance people and partly because it tends to define new things into old channels. You see this in bitcoin. The SEC keeps trying to define bitcoin as a security, but if you define bitcoin as a security it is not bitcoin any more. You have just got rid of the whole point.

One needs to have a degree of flexibility and understand that there will be some completely different thing in a year, two years, three years. You have to be cautious about trying to channel everything into existing paths. You also have to be conscious of the fact that there are competitive questions here. The former CSO of Facebook, Alex Stamos, has suggested that there will be moderation as a service, so there will be five or six companies that provide moderation as an outsourced service the way AWS provides storage. Your legal shield is, “I was using that moderation service and it meets this standard, so it is on them”. Who is providing that outsourced moderation? Probably Facebook, Google and Amazon. Now, all of a sudden, basically Facebook, Google and Amazon decide what the moderation system is.

You can imagine going down that path and ending up in a situation where the first thing any new social network has to do is sign up with Amazon for storage and Facebook for content moderation. Is that good for competition? Maybe. That is a weird new set of problems.

Lord Vaizey of Didcot: That is a fascinating point about the direction of travel. I will leave it there, because I know others have questions and somebody else has a question on global regulation.

Q20            Baroness Grender: Mark, you have said that global is too slow and, hey presto, as if by magic, having taken a very long time, it is possible that we will get an online harms Bill next year. It was originally supposed to be at the beginning of next year, but let us just say it will be next year now. What do you think of the proposals so far? This all tips into the ickiness issue, as you described it, which is presumably why there must be a delay. But you also have organisations like the Law Commission saying that a new offence should be based on likely emotional or psychological harm.

Benedict, what do you think of this? Is it possible for Governments to impose this kind of thing, or are we simply going to be regulators of the platform’s own terms and conditions, as some fear?

Mark Scott: On the online harms proposals, the initial idea is a good concept. The duty of care idea has some meat to it, and I know that others elsewhere in the world have grasped on to this. My issue is definitions. What is harmful content? What does that mean? How do you define that? That is critical for this to have teeth.

The other thing is Ofcom. How much power is it going to have? Is it going to be able to regulate and change terms and conditions itself, or is this basically something that has been outsourced to the platforms and Ofcom is sitting on top? That needs to happen soon, and again you will know this. It will be very easy for it to fall back and just become an illegal-content-only structure, because that is an easier way of looking at it. Without specific legislative definitions of what is harmful content, it is a non-starter.

Benedict Evans: Something that one sees a lot in the first wave of global regulation is the highest common denominator problem: that one jurisdiction passes a particularly restrictive law and, because of the way it is drawn up, if you are going to operate in that jurisdiction you have to apply it everywhere. Basically, every American company has to think about GDPR and either just block Europe or work out how to comply with it. California passed a privacy law called the CCPA, which basically every American publisher follows, even though notionally it only applies in America.[1]

Most notoriously, there was a libel case in Austria where somebody had called a politician a fascist and they sued. The Austrian court said that Facebook had to take it down everywhere, not just in Austria but globally. What does that mean? If you were to get a British court saying that Facebook had to take down something that is mean and nasty and might cause psychological harm, you would struggle to fill a room with people who would support it; you would not find a single American politician in any part of any party who would support that. That cuts to the core of how Americans think about freedom of speech. You are allowed to be as unpleasant as you like in America. That is not a crime. Many people in Britain would be politically very uncomfortable with the idea of criminalising people for just being unpleasant or being nasty online.

Here we get to the question of real philosophical differences in different countries in what is thought to be okay, combined with laws that have the potential to get applied everywhere. If you draw up your laws in such a way that Facebook has to apply those to everything on Facebook, it will ask, “How do we do that? How do we apply that globally just for Britain, and what’s that going to mean?”

There is a real tension between having your own national standards for some things and diverging too far away from what other people think, as well as from setting a precedent for what Hungary, Malaysia, Turkey or the Philippines might say. It is difficult for Britain or America to say that the Philippines should not be locking up journalists for writing this story if you have also passed a law that says, “You’re not allowed to say X or Y. You could upset somebody”.

Baroness Grender: Benedict, do you think we will get there with this online harms paper? Do you think it is possible for the UK Government to deliver it? Given especially that you have given us a useful historical context and that we also had a duty of care in the Communications Act 2003, do you believe that it is possible to get to a position where we have online harms legislation in this country, or do you think it is impossible?

Benedict Evans: Zoom tells me that there are 20 people on this call and 19 other people who know more about what is going to happen in UK legislation this year, next year, than I do. I do not want to offer an opinion on what the Government will or will not get through Parliament. I echo Mark’s point about the sense of trying to produce something that is defined, that is not trying to boil the whole ocean but is instead focusing on one problem—harmful content—rather than trying to solve every problem to do with technology and then having some sort of reasonableness test that allows you to flex around that.

In principle, that is the right approach. The challenge becomes what happens if that applies everywhere and if it gets drawn up in such a way that it becomes impossible to comply with in California or sets a bad precedent in the Philippines.

Q21              Lord McInnes of Kilwinning: One of the issues that this Committee has looked at over the past three inquiries has been the difference between publishers and platforms. The inevitable outcome of an online harms Bill is one of responsibility. I want to get your views on the current models of legal liability for platforms in an environment where there is a need to work within a legal setup globally or nationally.

Mark Scott: The liability question is a tough one. Something needs to change. The idea that the platform is not a publisher no longer makes sense, but it is not a newspaper. This is what I struggle with a lot. As much as greater liability is required, I am still not sure I would go as far as saying that they need to be responsible for everything from cat videos to Nazi content, because we are not there yet. What needs to happen is greater responsibility, definitely on some of the more harmful stuff. But the idea of disturbing some of the existing rules such as the e-commerce directive and Section 230 right now is a tough one. That may be getting ahead of ourselves.

Lord McInnes of Kilwinning: Benedict, do you agree with this gradation approach?

Benedict Evans: I mentioned the history of media earlier. Arguing about whether these things are platforms or publishers is a bit like looking at radio and saying, “Is that a book or a newspaper?” No, it is not; it is radio. Facebook does not decide what you see when you open your page. What you see is based on who your friends are and what they posted, so it is not a newspaper. On the other hand, it is not a telco, because it does make decisions about what a billion people see each day. We do not expect a telco to be liable if you organise a bank robbery over the telephone.

Facebook does not have the same degree of control. It sits in a slightly different place. That is why I am a fan of the tests of reasonableness and degree that you get in Section 230 and in the online harms Bill: that you should try, but we cannot expect you to find everything instantly without hesitation, because you have 4 billion people using this stuff.

Viscount Colville of Culross: Mark, you said that you thought that the NetzDG law had not moved the dial at all, yet you wrote that Europe had established itself as the world’s regulatory trendsetter for big tech outside China. What is there in Europe that the UK Government could take away in regulatory terms or avoid, or other international examples that we ought to be looking at?

Mark Scott: I will be more specific about NetzDG. Some of the transparency things that it asks the platforms to do have been useful. The issue is that just publishing a quarterly update about what has been taken down does not tell me much. That is where I was coming from when I talked about moving the needle. I am not a Europhile, but it feels, at least right now, like the EU is doing the majority of the heavy lifting on this stuff. Some things from NetzDG are useful. What the French do with their fake news law for a judicial review within a six-week window before an election is useful, but that is predicated on no fake news coming before the six weeks, which is silly.

Having some sort of mechanism forces greater transparency. I will come back to this a lot—transparency, transparency, transparency. If I know where the data are coming from and whether they are being amplified through an algorithm, I can then figure out if things are going bad or not. We may see a move in that direction with the Digital Services Act from the Europeans next week. I would not like to see the Russian right-to-be-forgotten law that imposed very strict limitations on what can be published domestically, or the Singaporean fake news law that enforced a Singaporean view on Facebook domestically, and similarly in Vietnam. These things are quite over the top.

The UK can fit in here. Not to champion Brexit too much, but it provides some sort of quasi-sandpit to try out new things. I know that the EU and the US are very interested to see what happens, particularly with the new digital markets unit in the CMA. That can play a role with regard to online harms, because you can try things out that can have an impact elsewhere in a large market that is not the US or China.

Q22              Viscount Colville of Culross: The NetzDG law was one of the very first to be put into action and it has been very controversial; it is supposed to have made the AfD into political martyrs. What is in that law that might be useful for UK lawmakers to take away?

Mark Scott: It is very easy to focus on the money and fines. That is a mistake; the companies do not care. They care about the fines, but €50 million is not going to move the needle. What they are interested in is the reporting transparency component to the NetzDG and the timeframe in which they have to take content down. I agree with Benedict that you should not be asking them for a one-hour takedown, like the French are doing in their new law, but having a 24-hour turnaround or some sort of structure like that would be quite useful, because it sets a time limit so you are not dealing with a long tail of content that has not been touched. That can be quite useful for focusing minds.

Viscount Colville of Culross: Benedict, you said that the US has a very different view of free speech and that there is a very strong view that people should be allowed to be as unpleasant as they like, which is enshrined in the First Amendment. Do you think that that determination by US lawmakers that they prefer self-regulation for the tech giants would have an effect on us in the UK trying to regulate this sphere?

Benedict Evans: Implicit in a lot of what we have talked about is that there are trade questions here. We have regulations for all sorts of things that apply across countries, and those get rolled up into trade talks. You harmonise your regulations and you agree to recognise their certification in your country and vice versa. That stuff is complicated, but when the industry is not changing too fast it is relatively achievable within a bureaucratic process.

If you get into a situation where the EU says, “We’re going to shut down this new product from Facebook, because we don’t like it”, that gets into a trade war conversation quite quickly. Americans tend to overstate how much protectionism there is in EU regulation. I tend to point out that the EU does that to European companies as well. They do not just do it to Americans. Certainly the tendency in the US is to see this as protectionism by sore losers, like, “Why are you doing this only to American companies? Why don’t you have any big internet companies of your own? You’re just doing this because it is sour grapes and you hate Facebook because it is American. You should just shut up and build your own social network”. There is a degree of Faragism to that, but there is also some truth to it. It is an easy narrative to sell.

One does need to be conscious of the challenge of drawing up laws that do not just apply here but apply to what you are allowed to post in California and to be seen by Californians.

Q23              Baroness Quin: Some of the aspects I was going to deal with have partly been covered, particularly in the discussion about international co-operation.

To pick up on Mark’s point about the EU, that outside the EU we might try some things on our own, at the same time do you feel it is important to have ongoing dialogue in some structured form with the EU on these issues? Benedict, you might like to comment on that, too. Wider international co-operation, whether on technical issues or on values, is also important, but I can see the limitations on that and the challenges that make it a particularly difficult area to explore.

Mark Scott: You are completely right: the UK needs to continue having a dialogue. The question here is that the Digital Services Act and the Digital Markets Act, which are coming from Brussels next week, will be a three to five-year process, even if everything goes to plan. That is a long time. Something from the Online Harms Bill or the CMA digital markets unit on competition and so on could be quicker than that, and that would provide a test bed that the European Union cannot do.

Co-operation is key. On working with the Americans, I do not think much will change under the Biden Administration on Section 230, the digital services tax or transatlantic digital issues, but they are open to discussions. The UK is well positioned to be able to work with the States and Europe.

Benedict Evans: I agree. You see, particularly living in the US, that the US has profoundly different philosophical attitudes to regulation and when it may be necessary, but also a profoundly different attitude to how you do it. It tends to move forward in laws and court cases as opposed to five-year regulatory processes. The more you can mediate between those two different ways of thinking about what laws should be and how regulators should work, the better.

A lot of the concern from the US, particularly about EU regulation, is that it quite often tends to be the equivalent of going to General Motors and saying, “You’ve got to make a car that can’t crash and we’re going to fine you if you don’t”. The collision of the colossal arrogance of the tech industry with the colossal arrogance of regulators can be a problem there. There is a role to be played in trying to understand how that would work and to move faster.

Q24              Baroness Quin: On a rather different matter, what are your thoughts on how important personal individual responsibility is here when it comes to being a citizen, media literacy and initiatives such as that in tackling some of the issues that we have been talking about? Before you answer that, can I thank you both very warmly, because I have certainly found it a very fascinating and informative session.

Mark Scott: Media literacy is a medium- to long-term solution. I did a story out of Stockholm pre-Covid on teaching four to five-year-olds there how to use social media by creating offline social media profiles. They could work with each other in the classroom to figure out what was and was not acceptable. That is great, but it does not fix the problem short term.

On personal responsibility—I agree 100% with this—it is not Facebook’s or a regulator’s responsibility for me not to post nasty things on the web. The problem, to Benedict’s point, is that the nastiness comes out even from the nicest people sometimes. Yes, greater personal accountability in what we do is paramount.

Benedict Evans: I agree with that entirely. It is incumbent on us not to be horrible to people, but it is also incumbent on people not to design products that encourage you to do that.

Q25              Lord Allen of Kensington: Thank you both very much. It has been an informative session. I want to go back to what you said, Mark, about what has been happening in Germany with NetzDG, US Section 230, Singapore and France. We have seen a number of regimes. To what extent do you think platforms are complicit in censorship by authoritarian regimes? That may be outside some of the countries you have made reference to.

Mark Scott: That is a tough one. The current legal system—says the non-lawyer—is that you have to comply with national law if you want to work in those countries. Do I think that the platforms want to follow the data localisation laws of Russia? No, I do not, but that is what the rule says domestically. Is that complicit? It is tough. An example of things going awry—more than awry; I am trying to be polite—is what happened with the Rohingya minority group in Burma. The platforms were not looking at that issue. Is that them being complicit, or is it not looking at something that is incredibly important for Myanmar but not something which someone in the Valley cares much about? Unless you can prove that, the word “complicit” seems quite far for me.

Benedict Evans: I tend to agree with that. This is a challenge for any multinational company. I remember the Egyptian Government ordering Vodafone to shut down its mobile network in Egypt. Some people got very upset that Vodafone had said yes, but what is the alternative: “Multinational company defies the rule of law in Egypt. Multinational company tells its employees in Egypt to ignore the law”? As soon as you start asking, “What did you want them to do?”, it becomes a little bit more difficult. It is the same thing here: should Facebook say, “We’re just not going to be in Russia”? At a certain point, you get to those questions.

Clearly there is a whole debate about Facebook desperately wanting to get into China and not being able to, or Google trying to get into China and ultimately deciding that the compromises it would make would have to be too great and not getting into China. On the other hand, Apple is in China, but Apple does not run a social network.[2] The degree to which Apple is obliged to give the secret police information about you looks different to the degree to which Google is obliged to give the secret police information about you. For Ikea to run stores in China, what is it doing other than selling people furniture? If they were to take slave labour from Chinese concentration camps, that might be a different conversation, but I do not think anyone would say you should not do business in China at all in any circumstances.

There is always that gradation for any company, and here it is a question of which country it is, what you are being asked to do, what your product is, and how it relates to how you have to change your product.

The Chair: You have been very succinct, which allows us to get in one final question, from Lord Vaizey.

Q26              Lord Vaizey of Didcot: I thought we had run out of time, so I get the final question, which obviously has to be a weird and odd one. Benedict, you make the point in your latest newsletter that Susan Wojcicki has avoided all these hearings and that YouTube has not been part of the Section 230 show trials. When I watch these evidence sessions and see Mark Zuckerberg and co giving evidence, I genuinely wonder what they think. They are in control of companies the like of which the world has never seen before, and we, as punters, constantly moan about this video or that anti-vax propaganda.

This is a slightly odd question, because it is total speculation, but what do you think they genuinely think? Do they think, “We’re trying our very best and people just don’t understand what we’re doing” or the caricature, “We don’t give a blank, blank, blank”?

Benedict Evans: I mentioned earlier that five years ago people would have said, “Complaining to me about what is on Facebook is like complaining to the chief executive of Vodafone about what somebody said to you by SMS”. It is like, “Well, that’s horrible, but what do you want me to do about it?” That would have been the universal attitude five years ago.

The challenge now is that this is all hard and complicated. There is a combination of: “We’re working on it and we know we’ve not finished. We completely missed that and probably screwed up and were careless. We aren’t at that bit yet and we hope to. We disagree with you about this”. There is also, and this is a phrase we have not used yet, a degree of moral panic, particularly around Facebook. You sometimes have the experience, when you read a news story about Facebook, of needing to read it twice to work out whether they have anything to back up the headline.

Some people in Silicon Valley have the attitude, “They’re out to get us and it’s all nonsense. I’m just going to ignore everything the New York Times says”. That is not a healthy attitude at all, but you do encounter that attitude. To make that slightly more coherent, there is a mix of, “Okay, this is a huge problem. We’re doing stuff about it. We completely dropped the ball on that last week and didn’t realise or were not thinking about that. We disagree with you about this being an issue. We’re basically scrambling to get to a point where we know what we’re supposed to be doing”.

Nobody in Facebook looked at Myanmar and said, “Yeah, whatever, it’s a distant country. We don’t care if they kill each other”. It was literally nobody in Facebook’s job to worry about that, so it just went completely off the radar. It did not see it at all. Nobody in Downing Street looked at organised child abuse in the Midlands and said, “Yeah, who cares? It’s the Midlands”. It is that, somehow, the whole system failed completely to catch it. That would be the right way to think about it. They are people with good intentions, some of them are careless, none of them has ever been to Myanmar and they missed it completely.

Lord Vaizey of Didcot: Benedict has helped me to refine my question, which, put more succinctly, could be: do you think they have a moral responsibility or a commercial responsibility to try to fix this?

Mark Scott: I cannot speak to Mark Zuckerberg’s inner thinking, but from my discussions with people at the company, it is both. There are more commercially minded people than those who are doing it for other reasons.

The one thing I would focus on is their focus. From covering the US election in November, the UK election in December, the Canadian election in the spring, and the Italian, the Swedish, the European parliamentary elections, I would say that the amount of time and effort spent on this issue focused at the US dwarfs everything else. There is the idea that the US tech companies strangely think they are American still when all their users are outside of the US. You see executives—although, as in Benedict’s last newsletter, not the YouTube CEO—going to Capitol Hill and giving as much evidence as is required, but nothing will change in the US. That is where the focus is missing. They believe that if they talk to a domestic US audience this will go away, when it will not because all their users are overseas. That mentality still has not hit home.

Benedict Evans: As a small anecdote, somebody told me that there were a lot of Syrian resistance people who are using fake names and that Facebook was trying to shut this down because you should not use fake names. Then a bunch of drag queens in San Francisco wanted to use their drag names, Facebook stopped them and that became a huge fuss, so Facebook said, “Okay, you can use another name”. That gives you a sense sometimes of how provincial the tech industry can be. It is a small town six hours away from New York.

In a presentation I gave at the beginning of the year, I made a joke around the line that war is how God teaches Americans geography. You could make that point now about regulation. There is no indifference or callousness or cruelty. It is just, “Where is Myanmar?”

The Chair: Thank you both very much. That was a very interesting session and we have enjoyed your evidence. I think the questioners found it extremely useful. It was good of you both to give up time to be here. We may well be in touch with you to follow up on some of the issues that you have addressed today but, Benedict and Mark, on behalf of the Committee may I thank you very much for coming along today?

 


[1]              Note by witness: This should read “it only applies in California.”

[2]              Note by witness: Partial correction – however, Apple does run a messaging system, iMessage.