
 

Select Committee on Communications and Digital

Corrected oral evidence: Freedom of expression online

Tuesday 23 March 2021

4 pm

 

Watch the meeting

Members present: Lord Gilbert of Panteg; Baroness Bull; Baroness Buscombe; Viscount Colville of Culross; Baroness Featherstone; Baroness Grender; Lord Griffiths of Burry Port; Lord Lipsey; Baroness Rebuck; Lord Stevenson of Balmacara; Lord Vaizey of Didcot; The Lord Bishop of Worcester.

Evidence Session No. 20              Virtual Proceeding              Questions 166 - 173

 

Witnesses

I: Ross LaJeunesse, former Global Head of International Relations, Google; David Kaye, Clinical Professor of Law, University of California, Irvine.

 

USE OF THE TRANSCRIPT

This is a corrected transcript of evidence taken in public and webcast on www.parliamentlive.tv.

 

 



 

Examination of witnesses

Ross LaJeunesse and David Kaye.

Q166         The Chair: I welcome our second set of witnesses today in our inquiry into freedom of expression online.

Ross LaJeunesse is a public policy and human rights advocate. He has 30 years of leadership experience as an insider in government, business and the non-profit sector.

David Kaye is a clinical professor of law at the University of California, Irvine.

Thank you both very much indeed for joining us and giving up your time. I think you have been following our inquiry. Today’s session will be broadcast online and a transcript will be produced.

We have a number of questions for you. May I start by asking you to give a very quick overview or any further words of introduction and your perspective on freedom of expression, particularly as it relates to the online environment? Ross LaJeunesse, do you want to start?

Ross LaJeunesse: Gladly. Thank you for inviting me to appear today. It is a special privilege to be here with my friend and colleague Professor Kaye, with whom I worked for many years when he was UN special rapporteur.

Given David’s presence today, I feel that what I am best able to share with you is my own experience as a former Google executive, someone who has worked on tech policy for over 20 years, a former government official, and someone who has long advocated for the large tech companies to do more, to put it plainly. When I say “do more”, I am talking about simply recognising and acting upon their responsibilities as the most powerful communications platforms the world has ever seen and, quite frankly, the most powerful and richest corporate actors the world has ever seen; to do more to protect and advance human rights, to produce true digital citizens, to protect our children, to combat tech addiction and to protect free expression while addressing the hate and violence that we know is very easily found on their platforms.

I want to talk later in our Q&A, hopefully, about what dangerous content versus hateful content is, but I would like to emphasise a few things to provide context. First, please remember that technology and the internet are overwhelmingly positive tools for humanity. Whatever regulations or laws are put in place to address the harms must also preserve and promote the ability of people to speak and understand the world, which we know is central to what the internet and technology are all about.

Secondly, please remember that not all technology companies are the same. Even among the big tech platforms, their business models, products, leadership and, therefore, their approaches to the issues of online content are very different. Any policy reforms should take that into account by providing some measure of flexibility to enforcement agencies, for example. Remember that these companies are, after all, innovative, so that sort of relationship works best.

Thirdly, although you have a difficult job to do, government must act. Although the companies are different, they have one thing in common: they have failed time and again to show us that they are willing to step up and address the problems that have been repeatedly pointed out to them. These are the richest, most innovative corporations in the history of the world, with unparalleled resources and market capitalisation sometimes in the trillions of US dollars. Yet we are told that they cannot figure out how to enforce their own content policies on their platforms and products. Where are the resources for that? Where is the ingenuity and innovation that they apply to the execution of their own business priorities?

Instead, we are expected to believe that they are unable to find even child sexual abuse imagery on their own platforms. A Bloomberg story just last week cited the example of an American teenager whose sexual videos, which had been obtained and uploaded against his will by paedophiles, nevertheless remained on Twitter for nine days, despite his parents pleading with the company to block the imagery and the videos drawing 167,000 views and comments, many of which asked Twitter to take down the imagery—nine days. There have been too many apologies and promises to do better.

Experience from my time at Google tells me that the companies will truly do better only when their bottom line is at stake and significant fines are involved, as there are in the online harms proposals. These issues are difficult and the answers are not apparent, but we know that the platforms can do better if they are properly incentivised. I want to thank you for tackling the issues that you have in front of you.

Q167         The Chair: Thank you. We want to tackle the issues that you have raised and have questions about a number of those subjects. Thank you very much indeed. Professor Kaye, welcome.

David Kaye: Thank you very much. I begin by saying that it is a real privilege and honour to be here to speak with you today.

I want to say a couple of things before we get into the Q&A. The first is about my background. Much of what I will be discussing derives from my work as the UN special rapporteur on freedom of opinion and expression. To give a little bit of background, I am sure that you are familiar with the treaty-monitoring role and monitoring role more generally of special rapporteurs in the UN system. In my particular role, I tended to focus on digital rights: digital security, encryption and anonymity, content moderation, hate speech and the rise of the private surveillance industry. I am very willing to speak about all those things, but it is important to note at the outset just how interrelated things that might not seem to be related are. Digital security is very much related to content moderation; privacy and data protection are related to content moderation.

I will be very brief and highlight three specific points that I tend to bring to these debates. The first is that government regulation in this area must, first and foremost, focus on breaking down the opacity of this industry, if we can call it a single industry. I take Ross’s point, which is very important, that many of the companies are different and operate in different spaces, but all of them are operating in the space of dealing with user-generated content, and even researchers know very little about how they operate. Therefore, I would put up as a priority the transparency obligations of the companies, because that enables you as legislators to understand what exactly the harms are that you are dealing with.

The second perspective is that rules related to content should be approached with the utmost caution. You already know that, so I will not belabour the point. Drawing from experience in other environments, I would note that we do not want to see Governments become the censor of speech—I know that the tradition in the UK is to avoid government becoming the censor of speech—or, at the same time, to incentivise companies to take down fully legitimate content. Those are competing issues and priorities, but I just want to highlight them.

Finally, because this is in the context of the democratic traditions of the UK, there is a modelling role that some countries need to be mindful of—I would put the UK in that category—which is that many Governments around the world are, as we say in the United States, chomping at the bit to regulate this space. I would encourage thought about what UK regulation might mean globally.

The Chair: Thank you very much. That is very useful; that point is quite well made. We have a lot of territory to cover and a lot of questions to ask you, so, if you confine your answers to the questions, I think that one way or the other we will cover everything that you can help us with.

Q168         Baroness Rebuck: Thank you both for those really interesting introductions.

David, in your report you state that hateful content spreads online spurred on by a business model that values attention and virality. We have heard that from many of our witnesses—one even went as far as to say that there was a dependency on a systematic mass violation of rights—whereas, interestingly, other witnesses have said that it is in the tech companies’ interests to have decent, thorough moderation because advertisers do not want their products next to toxic material, which puts off users as well. The question to both of you is how you feel platforms’ business models shape their approach to freedom of expression.

David Kaye: I definitely believe that one thing that we have seen over the past several years from research and from observing the nature of online hate and disinformation is that the business model of attention and virality, and the incentive structure for users, is to create content that will get the most eyeballs. That is also connected to the business model, which is to attract advertisers to the pages, posts or users who are generating the most eyeballs, so those things are very much connected.

I would first go back to the issue of transparency. We need to have much more clarity about how the business model is interacting with content moderation policies. Very few of us understand that connection. Government regulation to encourage that kind of transparency could be very helpful, and other tools—competition and so forth—are also relevant.

The other point is that it is unclear whether the hardest content moderation decisions are made by the content policy teams—those responsible for content—or are raised to the highest level of the companies, the CEOs and those responsible for business but not necessarily content. That is problematic because it raises the question of whether the companies themselves are observing their responsibilities under, for example, the UN guiding principles on business and human rights to protect everyone’s human rights, or are basing their decisions on either a political or business-oriented framework. I think that transparency can go some way to dealing with those problematic issues, but consumer protection and other forms of government regulation could also play a role.

Baroness Rebuck: Ross, you talk interestingly about how you felt that Google drifted away from its original principles, certainly once its founders moved on, and how you left the company precisely because it would not put a human rights impact assessment on top of its entry into market. Would you like to comment on this question?

Ross LaJeunesse: Quite frankly, the business drives everything. YouTube’s singular mission for many years has been simply to increase the amount of watch time. That was the singular directive from YouTube leadership and everything has been designed to do that. We now know—there have been numerous studies—that that has led to viewers often being driven to “watch next”. That feature often sends them to even more extremist views. As David said, it is all about eyeballs on screens because that means higher advertising revenue.

David makes a great point about where these decisions are now being made. Many of them are being made by AI and there is no human involved at all. That in and of itself can be extremely problematic but, as I mentioned earlier, where are the resources when questions or flags are raised with the companies and nine days go by while a teenager’s illicit material is viewed by several hundred thousand people? I can tell you that the decisions are not being raised at the highest levels, and that is indeed problematic.

Baroness Rebuck: That leads me to my second question, which is more about the design of platforms and possible regulation. The Twitter example that you gave is a very chilling one, as was what happened to the Rohingya community in Myanmar when incitement to violence went unchecked. Conversely, David, you have argued against a blunt AI algorithmically led tool because that might silence legitimate underrepresented voices. I ask you both: what practical design measures should we look at as a committee to protect human rights and encourage good digital citizenship, and where specifically should legislation play a role in relation to the design of platforms?

Ross LaJeunesse: One of the tools that is now incorporated into the proposals is a meaningful user redress and flagging system that is given an appropriate amount of resources so that nine days do not go by before a flag is responded to. I know that is included. It is absolutely necessary because the platforms themselves rely on these tools in order to police their platforms but say, “There is simply so much content”—and they are right; 24 days of content is uploaded every minute on YouTube. “How can you possibly expect us to police it? We rely on the community and on the system of flagging and then we respond”. They do rely on it but then do not resource it adequately for it to be an effective mechanism. Including some sort of requirement about a time by which every complaint or flag will be responded to might be a way forward.

Baroness Rebuck: You make the point very powerfully about resource. David, you have written about a whole menu of possible design interventions that could ameliorate matters. Do you want to tell us about those?

David Kaye: It is a really important question. First, I want to problematise it a little bit because—this goes back to something Ross said in his introduction—the internet more generally offers this broad space for freedom of expression and access to information. That is also a freedom that we want private actors and companies to enjoy, so in addressing these issues we are also mindful of the rights that the companies might enjoy in designing their platforms.

For me, one of the biggest design problems, which is a large-scale design issue, is how far removed companies are from the communities that they are essentially dominating or regulating. While I totally agree with Ross’s points about making the user interface for reporting more accessible to individuals, the companies themselves need to have better access to communities around the world. You raise the issue of the Rohingya in Myanmar. That is a perfect example—one that is replicated time and again in the most hyperlocal circumstances—where the companies, simply because of their design and structure, have very limited insight.

I emphasise that this is not something that one country can do alone, but there needs to be pressure on the companies to devote more resources, as Ross was suggesting, to get at the local issues, because context means everything when it comes to hate speech, misogyny and disinformation, and the companies are just too removed from it. Therefore, a big design question is: how do they get closer to that?

Baroness Rebuck: That is really interesting. I could go on, but I am mindful of time. Thank you very much for those answers.

Q169         Baroness Grender: Thank you so much for a fascinating insight so far. Help us a little further on that insight with regard to the issue of competition. A potential solution to this is to open up greater competition but, boy oh boy, is it difficult. From April the Government are establishing a digital markets unit, although legislation to give it real power will come later, and that may be a problem. That is in the wake of the Furman review. Talk to us about how competition can improve this, but please bear in mind that one of our witnesses, Jimmy Wales, suggested that greater competition might suit the larger platforms, because they will be able to amend and adjust to meet those greater competition imperatives and thresholds, but will make it harder for new entrants.

Ross, could you give us a little insight into that, given where you come from and Google’s total domination in this area, which has such a chilling effect on so many businesses? In particular, if any representative of the media were here, they would be hollering at you about that right now. Talk us through how we can open up competition and improve it.

Ross LaJeunesse: I think the most effective way of addressing this issue is happening right now in the United States Congress. We have seen two people brought into the Biden Administration who are very focused on this specific issue. One is likely to be appointed to the Federal Trade Commission and one has been appointed to the White House.

The US Congress is now in the process of examining how we redefine what effective competition looks like and what anti-trust law in the United States is designed to protect. It has always been about short-term consumer financial harm. There is now a recognition that this simply is not sufficient and that the focus of harm is being broadened to include harm to competitors, to the market itself and to society. I believe that the US process will be successful in that redefinition, although the companies will fight it with everything that they have.

As you said, 92% of the search market in the UK and globally is controlled by one company. This means that, in effect, Google determines what 92% of the world sees and does not see. It is just astounding and it needs to change. I do not quite understand the argument of Jimmy Wales, but it is fascinating to me that he would see competitive elements to the big platforms as something that they would welcome because those were not the conversations that were happening internally.

Baroness Grender: To be fair to Jimmy, he is referring to the online harms Bill more than the DMU. Another colleague will ask you about the online harms Bill.

Ross LaJeunesse: There is a very real concern when you impose regulations on these platforms. The platforms most easily capable of dealing with them are the major, well-resourced ones, and those likely to get stuck, or never come into existence at all, are the small startups. It is true that the big platforms might see government regulation in this space as a competitive advantage, because they have trillions of dollars to spend on it and deal with it and the smaller ones do not.

David Kaye: I share those views. It is a really important question. I want to highlight one or two problems that are unusual in this space, because we are talking about global companies and the impact of competition, or anti-trust, in the United States.

Typically, we in the United States do not really think about the rest of the world in our domestic regulation. That needs to be changed here. I raise it because of this scenario. The general thinking about breaking up Facebook is to disentangle Facebook from Instagram and WhatsApp. WhatsApp has become a massively important tool for private communication in the global south in particular.

One question that we want to ask in competition policy is: what would be the impact of breaking up Facebook for the use of that tool worldwide? In other words, will it force WhatsApp to adopt a new model that might involve more of what we all think of as surveillance capitalism these days—more ads—which may be fine, or will it put a cost on the platform itself?

I do not know the answer to those things. My only plea is that competition policy thinks beyond the domestic. That is an idea that I am sure your process can also encourage in transatlantic discussions because of the interconnections here.

The other point that I make on competition policy is how important it is to think about competition in the context of other regulatory moves. To be clear, I favour competition as a tool here, but we should not think of it as a piecemeal approach. If we are to engage in a competition or break-up policy for the companies, how do we also think about addressing the questions of incentivising innovation and creating space for innovation by new entrants? Is that a role for government? Government can create space for that without subsidising it. How does it do that?

Those things are very much connected, because if we go down only the route of competition we might miss the broader picture of how the companies operate. It goes back to Baroness Rebuck’s question about business models. All these different things are connected.

The Chair: I think that you are raising an issue that this Committee has looked at from the perspective of a number of areas that we have studied—the interconnection between public policy areas. In answering these questions you have been talking about public policy in terms of general economic resilience and about a number of regulatory fields: content regulation, competition regulation and data regulation.

We have been talking about a number of aspects of consumer law and human rights law. It strikes us that in none of the jurisdictions we have looked at have all these aspects of law and regulation been effectively joined up; instead, a societal harm is identified, a tool is dragged from the box and it is said, “This is a competition issue, a data issue, a content issue or a human rights issue”.

We urge the joining up of all the regulators and propose a body called a digital authority. You can call it what you want, but it brings together all the regulatory and public policy tools in one place to see the harms coming down the road and to create comprehensive public policy solutions to them.

Do you see any jurisdiction in the world beginning to do this, or is it still aspirational—or is it a bad idea?

David Kaye: My view is that it is aspirational at this point and that it is a good idea. Whether we are talking about having one authority or simply a task force approach that brings together different elements of this issue, there needs to be a vision for what digital space looks like in democracies.

Pieces of that vision have been expressed by different people and different Governments, but it has not been expressed as a unified or whole-of-government approach. To give one brief example, although the UK is no longer in the European Union, the European Commission last year tabled a set of recommendations for democracy in the digital age and a Digital Services Act and Digital Markets Act. All of those pieces are part of the same puzzle and need to be connected in some way.

I fully subscribe to the idea that you are talking about. Whatever the structure might be, the different people talking about these issues—it gets very specialised—need to be talking to one another in a way that is organised by government.

Ross LaJeunesse: The only place where it is being done—it is not a model for us—is in authoritarian regimes. China has a single source of power and decision-making when it comes to digital platforms, and that results in a fairly unified approach to the platforms, but it is obviously not something that we would want to pursue.

The Chair: We will send to you for your comments our proposal that envisages a role for Parliament in asserting societal priorities. We would welcome your thoughts on it.

Let us move back to the specific issues that we want to follow up with you now.

Q170         Lord Vaizey of Didcot: Ross and David, thank you very much for giving such brilliant evidence. I want to pick up the theme that Lord Gilbert and Baroness Grender have just been talking about. To a certain extent, there is cooperation. There is a coming together at the moment, thanks to the election of President Biden, of US, UK and EU legislation. If we can create some kind of dialogue between the three Administrations to try to move more or less in harmony, that would be a good thing.

To the UK’s credit, to a certain extent I think that we are paving the way with the online harms Bill. I am personally quite supportive of the Bill and what it is seeking to achieve. By the way, my children have just come home from school, so the noise will now just kick in when I am asking my questions.

I would be keen to know whether the online harms Bill has had any impact on US policymakers’ thinking and what your own personal views are on the Bill, if you have any. It is very clear what it can do about illegal content; it is less clear what it can do about harmful content. The Government have said they are very keen to strike a balance between being quick to remove harmful content—picking up what Ross said earlier about if only the sites could just get on with taking stuff down instead of waiting for days and days—and ensuring there is still space for people to create controversial or annoying content online.

Does that provide an unworkable format where platforms go too far in one direction and remove everything that could be mildly controversial, or they get done for removing mildly controversial comments because they are meant to be removing harmful content, or they get done because they do not remove harmful content? I am already confusing myself, if you see what I mean.

Let us stick with the question of what you think of the online harms Bill. Has it had an impact on US policymaking, and can you carve out an exception for controversial comments, even if you are trying to take down harmful content?

Ross LaJeunesse: I will respond, if I may, by recognising some things that I noted in the proposals that I think are excellent.

The first is the substantial financial penalties. They are one of only two things I have ever seen make the large tech platforms pay attention to anything.

The second is their CEOs being hauled before a government entity and personally being asked to explain what is going on. I am not joking. It has been remarkably effective in France. Every French President since Sarkozy has hauled the CEO of Google before him and basically had a tête-à-tête and said, “You need to do more or I will kick you out of this country”. Each time the company has responded. That is why the Cultural Institute is in Paris; that is why it has spent hundreds of millions of euros on building up staff there and doing other things.

Lord Vaizey of Didcot: I smile only because I thought that you were going to say that the only thing to make them respond would be to put their chief executives in jail.

Ross LaJeunesse: I will say a little bit about that later.

First and foremost, this is the time to act. This is slightly provocative, but might government adopt a policy similar to the design ethos of the tech companies, which is launch and iterate? Put it out there—there might be mistakes—and maintain flexibility to fix it later. You will be spinning your wheels for ever if you think you will get everything right a priori. Launch and iterate; do it and fix the problems later.

Another thing that I want to emphasise is that it is critical to recognise that a key problem is a failure of the platforms to provide meaningful user reporting and redress mechanisms. The proposals do that, which I applaud.

Lord Vaizey of Didcot: Ross, does anyone apart from you know about these proposals in the US?

Ross LaJeunesse: Washington, where I am sitting now, is a very insular place. As a former policymaker myself, I know that there is not enough done in Washington to look beyond the United States for examples of how things might be done. That is the plain truth of it. Both hubris and insularity are involved.

David Kaye: I agree with Ross on his last point, although, interestingly enough, there have been efforts at transatlantic discussion on these issues. A growing coterie of experts are familiar not only with the proposals around Section 230 but also with online harms, the DSA in Brussels and so forth. I think that community is one that could be very helpful to your committee.

As I look at the most recent draft of the online harms Bill, which I think is from December, one thing that is very positive is that it is much more nuanced and narrowly tailored and drafted than the original White Paper. That is a very positive step. I also want to applaud the process, which has involved an open government approach: for example, “Here are our ideas. We welcome public comment. There will be hearings”. Overall, I think that it has been a very welcome democratic process that we do not always see in other democratic spaces.

As for specific ideas at the moment, it is perhaps more helpful to highlight some of the things that raise some concern, one of which is the definition of harm. This is a regular process that Lord Vaizey and others know well. There are issues such as psychological harm. My hope is that as the process moves forward it will just be narrowed so that there will be a lot of clarity, whoever is the ultimate decision-maker, about what these terms mean.

A second issue is around penalties. Ross and I might diverge here a little bit, but I guess that in principle we do not. Maybe penalties should not be prison, but financial penalties should be part of the process. One problem is that the companies have enormously deep pockets. It takes quite a penalty to have an impact on these companies. Given that, I want to raise a concern about something we have seen in NetzDG in the German process. Penalties can also provide perverse incentives for the companies to take down a lot of legitimate content. It could be content that we might not like to see, but it is lawful. That is a particular area that needs real caution.

I would add to that the issue of time limits, because often penalties are connected to time limits. We do not want to see a move towards greater and greater use of algorithmic tools and automation because, when it comes to speech, context means so much and we could end up in a situation that we see regularly on YouTube, Facebook and Twitter where journalists are commenting on, or even detailing, bad hate speech that they have uncovered and that material gets taken down because of algorithmic tools. I would be careful about how we instantiate those kinds of tools in a legislative process.

The UK has a real advantage in the existence of Ofcom. Involving democratic public institutions in the most difficult issues around speech and democracy should be open and transparent. I see those elements in what has been proposed, so I applaud them.

I would also encourage increased transparency requirements on the companies with respect to their actual content moderation practices. I wrote about this in one of my reports to the UN. There should be something like case law of content decisions by the companies, which we do not have. We have very little information. I think that regulation can have a role in that in combination with tools such as Ofcom and its role in encouraging transparency.

Baroness Featherstone: My understanding of America’s insularity began when I first went there and discovered that the World Series included only Americans.

David Kaye: And Canadians.

Q171         Baroness Featherstone: Forgive me; it was a long time ago.

Everyone is grappling with how to regulate platforms. We have been looking at other countries to see what they are doing. We looked at the German NetzDG system, but that comes in for criticism because it potentially damages the right to freedom of the press and freedom of expression.

Then we looked at Australia, which works through a reporting and take-down system. That is criticised for being too broad and vague. If you look at Poland, which is proposing a system of penalising overcensorship and oversight of individual cases, that has been criticised on the basis of who is chosen as the arbiter of what is right and what is wrong.

My question to both of you is: which international examples should we be looking at to emulate or learn from, and which should we avoid?

David Kaye: This is a really important question. I start with those to avoid. Ross highlighted this previously. Over the past several years there has been a real rise in regulation by authoritarian Governments who get very intricate and granular in what they demand of the companies. More than that, they often do not use their public institutions—for example, judicial ones—to impose requirements on the companies.

I will give an example without mentioning a country. There are many examples where a Government—it could be their ministry of information and technology—have the authority to order a company to take down content, and they often do so without going through their court system to make those orders. That puts the companies in a difficult position in so far as they are responsible for maintaining the space for freedom of expression of their users. Oftentimes this also involves demands for the sharing of user data. Ross knows these issues very well. They are very complicated and difficult issues. In the authoritarian world we see this repeatedly.

One of the problems is that laws such as NetzDG or the new legislation tabled in Poland are often used as models by authoritarian Governments. They say, “Look, this democratic country has adopted this rule. What’s so bad if we adopt it?” They do not necessarily have the same rule of law environment that exists in Germany, which ameliorates some of the worst problems of NetzDG.

There is a very significant problem of authoritarian regulation or imposition around the world. To go to your direct question, I cannot point to any model of regulation, certainly in the democratic world, that exists today that achieves all the kinds of goals that we have been talking about. That is not to say that those proposals have not been raised. To give one example, in France, not much more than a year ago, the Government commissioned an inquiry to examine how to regulate social media. The commission came up with some very interesting ideas about multi-stakeholder oversight of the platforms. For example, ARTICLE 19, an organisation based in London, has promoted social media councils. That proposal was rejected by the Government.

The ideas are out there in democratic societies, but they have not been fully embraced so far. The Digital Services Act in the EU is making some strides in putting transparency at the centre. That is very positive, but that is not yet law and it is unlikely to be for another couple of years.

Ross LaJeunesse: I have been impressed by Australia’s focus on user empowerment and safety. It is very easy for government to look at a problem and regulate, but we also need to prepare our children for a world in which they are digital citizens, almost first. Very few governments are taking that approach. Australia seems to prioritise that.

Our kids today interact with technology in everything that they do. As we see the rise of artificial intelligence, the future will be almost entirely digital, yet so little time is spent in our educational programming teaching children about what is harmful, what is bullying, what to do when you see bullying online and what is the appropriate reaction. We do not spend the time doing that. I think that is just as important as the other side of the coin, which is the regulation of the platforms themselves.

The Chair: Before we move on, Baroness Grender has a brief question on redress.

Q172         Baroness Grender: One witness from Facebook’s oversight board, Alan Rusbridger, talked us through a bit of the process. He made it very clear that the board should be looking at algorithms in the future. Do you have any views on that attempt by one of the larger platforms to have some of that redress and take-down oversight?

Ross LaJeunesse: I see the oversight board at Facebook primarily as a marketing tool. I am not saying in any way that the participants on that board feel that way. I think that they have joined it in good faith and are trying to make a difference, but I am a bit cynical when it comes to Facebook’s attempts to deal with these issues, quite frankly.

When I refer to effective redress, that is not an example of what I am talking about. I am talking about something that would allow the 13-year-old to have his explicit videos taken down within 24 hours. The oversight board is supposed to be weighing the mighty ponderous issues after the fact, which is fine; it needs to be done, but that does not address the issue or problem that we currently face.

David Kaye: There are aspects of the oversight board that your committee might want to look at as examples of elements that you might extract to think about what industrywide oversight would look like. There are elements in the oversight board that are quite appealing. Unfortunately, only Facebook and perhaps Google could do it, because it is extremely expensive; it is a trust of over $150 million, I believe. It is not replicable, except by government intervention or government support. I think that some of those elements should be taken into account: independence and the possibility of including international human rights standards as a guide to decision-making. Those kinds of things could be very useful. Regardless of the motivation of the oversight board, there might be something that we could extract from it that is positive.

Baroness Grender: That is very helpful.

The Chair: We may come back to you in 12 months for your view on its first year of operation, but that is useful.

Q173         Lord Lipsey: Thank you for disproving the theory that Washington is the most insular place on earth; you have given us a wonderful global perspective.

I want to return to human rights and perhaps put it in rather crude terms that anybody can understand. There is a country in Africa where Google does very good business. It is doing its own online harms Bill, which is very similar to those of other countries, except it contains a clause that says, “Any reference to LGBT rights or practices is forbidden”. It says that this is very harmful to society because it will convert all the young men to being gay and then there will not be any children for the next generation, so it is a real harm. If that happens, what is the responsibility of the platform involved relative to that Government? Should it be taking down the service? Should it be arguing with them, or should it just say, “If that’s what they want, they have every right to have it”?

David Kaye: That is an excellent question. Ross and I are smiling at each other through Zoom because this is a persistent problem. I chair an organisation called the Global Network Initiative, which Ross knows very well. It is a multi-stakeholder initiative of companies, civil society organisations and academics that tries to address exactly these kinds of questions and assess how the companies do on this.

This could be a dissertation-level question, so I will say two things very briefly. First, companies need to have human rights policies so that when they go into a country, or their service is available in a country, they have done due diligence to know exactly how they will respond in those situations where demands are put on them that force them to be at odds with the rights of their users. If they do not have that policy going in, they will be faced with very difficult problems of appearing to be complicit in the human rights violations of that Government.

The second part of it is complicated. It is very difficult, if not impossible, for any company to engage in a market and say that it will not adhere to local law. No Government in Europe would like it if the company said, “We will abide by first amendment standards, which are different from Article 10 European convention standards”.

Those problems need to be worked through in a transparent way, and they need tools for mitigation. If that order comes from that Government, what will they say? Will they say, “This has come from your judicial institution”? What are their responses? Do they have tools to limit the impact? Some will say they do, but you are highlighting an extraordinarily important problem. It really needs government support for this particular aspect of company behaviour worldwide.

Ross LaJeunesse: As David said, your example could be a doctoral thesis. That is precisely why these companies need to have real human rights analyses and programmes in place and be guided by global human rights principles. A very real concern of mine is that, of all the big tech platforms, only Microsoft has, I believe, a real human rights programme and a senior person in charge of it; that is it. Maybe David will disagree with me, but the others have recently hired heads of human rights programmes and said they have human rights programmes. When a company says it has a human rights programme, I always advise people to ask, “Who is the person leading it? How big a staff does it have? To whom do they report? Is it a real decision-maker, or is this just marketing fluff?” From my own experience, one of the reasons I left Google, and why it tried to sack me, was my advocacy over many years that the company should adopt a formal human rights programme, which it refused to do.

The Chair: Sadly, we need to leave it there. I apologise to both witnesses for running out of time; there is much that we would have liked to explore with you. I am sorry for cutting you short on a number of occasions just so that we could get all the questions in and let you go on time.

Professor Kaye and Mr LaJeunesse, thank you very much indeed for giving up your time to be with us today. We will follow up one or two things with you—for example, the coordination of regulation.

We may also be back in touch with you some time in the future for a review of the first year of Facebook’s oversight board.

Again, I thank both of you very much indeed for giving us your time.