
 

Select Committee on Democracy and Digital Technologies

Corrected oral evidence: Democracy and Digital Technologies

Monday 9 March 2020

12.30 pm

 


Members present: Lord Puttnam (The Chair); Lord Black of Brentwood; Lord German; Lord Harris of Haringey; Lord Holmes of Richmond; Baroness Kidron; Lord Lipsey; Lord Lucas; Baroness McGregor-Smith; Lord Mitchell.

Evidence Session No. 19              Heard in Public              Questions 240 - 257

 

Witnesses

I: Vint Cerf, Vice-President, Google; Katie O’Donovan, Head of UK Government Affairs and Public Policy, Google.

 

 



 

Examination of Witnesses

Vint Cerf and Katie O’Donovan.

Q240       The Chair: Thank you very much for being with us. We are very grateful. I have to read out the police warning. As you know, this session is open to the public. A webcast of the session goes out live and is subsequently accessible via the parliamentary website. A verbatim transcript will be taken of your evidence and put on the parliamentary website. You will have an opportunity to make minor corrections for the purpose of clarification or accuracy. For the record, would you introduce yourselves? Mr Cerf, you wanted to make a short introductory statement.

Vint Cerf: Yes, I appreciate that very much, my Lord. Is it “your Lordship”? I will never get this right. I am American; I watch too much “Downton Abbey”. I am vice-president and chief internet evangelist at Google. You will appreciate that I did not ask for this title. The last time I met a committee of the House of Lords, I was asked, “What title did you ask for?” and my answer was “Archduke”. However, they did not find a way to assign that title in the HR system, so I am your chief internet evangelist.

The Chair: If it makes you feel better, no one around this table asked for their titles either.

Vint Cerf: I want to say thank you. First, thank you so much for accommodating my hearing impairment. The device seems to be working beautifully; I am very relieved. Secondly, I want to thank you for doing this. As an American, I have an uncertain understanding of the processes in Parliament, but I have been given to understand that laws passed in the House of Commons come to the House of Lords for improvement, and the process requires due diligence. My impression is that this meeting is about due diligence on the topic. I am really grateful and impressed by your determination to learn more. I hope my colleague and I will be able to help. Thank you.

The Chair: Thank you very much. Incidentally, it is not just legislation that comes to the House of Lords for improvement; it is also people, particularly parliamentarians.

This is a slightly personal remark, but I am one of a number of people who are somewhat in awe of you. You are an extraordinarily influential figure in the history of the digital world. We have connected through MIT; I am a huge fan. The other day, in a different situation, we heard from David Attenborough. Not to be embarrassing, but I see you as being the David Attenborough of the digital world.

Vint Cerf: Oh my.

The Chair: I am sorry; no pressure, but there are people who look to you, in exactly the way people look to David, as signifying truth. With that comes huge responsibility. All I want to suggest is that, when you say that something is the way it is, no one I know in the digital world will question that. As I say, that is a huge responsibility. It is one of the reasons why we were desperately keen to meet with and hear from you: because we have heard from people who have their own, sometimes quite prejudiced, version of where we have got to and where we are going.

Q241       Lord Harris of Haringey: I apologise for missing the first part of your explanation of the archduke title. Your written evidence states that Google uses algorithms to evaluate “high-quality information”, as determined by user testing, rather than subjective determination as to the truthfulness of particular websites. Is there any version of objective truth?

Vint Cerf: That is a really good question. As a scientist, I have to tell you that, at best, we have only an approximation to truth or to the way the real world works. I am sure you have had the experience of being told, “The way things work is X”, and 10 years later you get the advice: “We have studied the problem and have now discovered that it is really Y, not X”. Scientists, if they are honest, have to give up their theories when evidence shows that they are not correct.

We are looking not so much for absolute truth, since I am not sure we can attain that, but for good quality and good sources. Our algorithms find what we believe is good-quality content, whether in Google Search or in YouTube videos, to offer good-quality information. The recipients of that information have the challenge of figuring out what to accept and what to reject. We do not try to tell them that; we try to give them the best-quality information we can.

Lord Harris of Haringey: There are some things that even I, as a recovering scientist, or you, as a scientist, might say are pretty true. There are very few adherents of the view that the earth is flat, for example.

Vint Cerf: I hope you are not one of them.

Lord Harris of Haringey: I am certainly not one of them. Can you give us any evidence that the high-quality information, as you describe it, that you promote is more likely to be true or in the category, “The earth is not flat”, rather than the category, “The earth is flat”? People can say the earth is flat with great authority and even produce all sorts of material to cite as evidence.

Vint Cerf: I will respond on the search side and, Katie, perhaps you could respond on the YouTube side. Those are two different sources of information that come to our users. First, let me draw your attention to a book, which we would be happy to make available to you, if you like, on how search works. It is a useful high-level reference for the mechanics of the search system, including the mechanisms by which we try to determine good quality in the content of the world wide web.

I am sure every one of you who has experienced searching the world wide web, through our search engines or others, has discovered a vast and dynamic range of quality. You are confronted with trying to sort through what is useful and what is not. The amount of information on the world wide web is extraordinarily large. There are billions of pages. We have no ability to manually evaluate all that content, but we have about 10,000 people, as part of our Google family, who evaluate websites. We have perhaps as many as nine opinions of selected pages. In the case of search, we have a 168-page document given over to how you determine the quality of a website.

Imagine for a moment that these people are going through webpages and evaluating them according to the criteria found here. By the way, this is publicly available. The people who make webpages have access to this as well, so they have guidance on how to construct a website that we would consider to be of good quality. Once we have samples of webpages that have been evaluated by those evaluators, we can take what they have done and the webpages their evaluations apply to, and make a machine-learning neural network that reflects the quality they have been able to assert for the webpages. Those webpages become the training set for a machine-learning system. The machine-learning system is then applied to all the webpages we index in the world wide web. Once that application has been done, we use that information and other indicators to rank-order the responses that come back from a web search.

There is a two-step process. There is a manual process to establish criteria and a good-quality training set, and then a machine-learning system to scale up to the size of the world wide web, which we index. That is how we do it for the search system. Maybe Katie could say a bit about how we do it for YouTube, which is different.
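To make the two-step process Vint Cerf describes more concrete, here is a minimal sketch in Python. It is an illustration under stated assumptions, not Google's implementation: the example pages, labels and the simple logistic-regression model stand in for the human-rated training set and the much larger machine-learning system he refers to.

```python
# Minimal sketch of the two-step process described above (illustrative only):
# Step 1 - human raters score a small sample of pages against published criteria;
# Step 2 - a model trained on those ratings scores every indexed page, and the
# score is used as one signal among many when rank-ordering search results.
# The pages, labels and model choice here are assumptions, not Google's.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

rated_pages = [
    "well sourced article with citations and a named expert author",
    "thin page of copied text stuffed with keywords and no sources",
]
rater_labels = [1, 0]  # aggregated human judgments: 1 = good quality, 0 = poor

vectorizer = TfidfVectorizer()
X_train = vectorizer.fit_transform(rated_pages)

model = LogisticRegression().fit(X_train, rater_labels)

# Step 2: scale up by scoring pages from the full crawl that no human has rated.
unrated_pages = ["a newly crawled page about the same topic"]
quality_scores = model.predict_proba(vectorizer.transform(unrated_pages))[:, 1]
print(quality_scores)  # one quality signal, used alongside other ranking signals
```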

Katie O’Donovan: I work at Google with Vint but am based in the UK. I head up our government affairs and public policy team. I am very happy to be here today, having looked at the Committee’s work so far.

As Vint described on search, for a query about flat earth, we can give a clear summary of information on a search page that reflects scientific consensus on the shape of the earth. On YouTube, we have a similar challenge. YouTube is our video-hosting platform. About 400 hours of content is uploaded to the platform every single minute. That covers everything from DIY videos to music and entertainment, House of Lords Select Committee content and theories about scientific questions.

This challenge was captured very well in evidence from the chair of the CDEI, who said that there is a difference between freedom of speech and freedom of reach. On YouTube, we look carefully at how to make sure that, while we host the information, it is hosted within the right framework for users and viewers to understand the consequences of that.

To begin with, we have very firm community guidelines on what is and is not allowed on YouTube. That includes things that you would not expect to be allowed: content that incites hate or violence, or content that, although it is not illegal, most people would agree does not need to be on YouTube. We have made significant shifts in our policies over the last three years, introducing more and more policies to define what is and is not allowed. Then there is content that, as in the flat earth scenario, is not illegal. If you wander into a bookshop or even a library, you might find content supporting those theories. But we want our users to understand that this is one opinion out there and it may not be the consensus, mainstream or scientifically preferred view.

We work to raise up authoritative content. If somebody has hosted a video on YouTube, there is an information panel relating to the agreed scientific consensus, which directs you to a third-party source. You might see a piece of content up there, but you will understand that different scientific organisations with weight, experience and expertise do not agree with that view.

If you are searching for this content, we want to make sure it is not necessarily returned, and to raise up more authoritative content. If you have a question on the shape of the earth, instead of pointing you to a video that has been uploaded, has a limited number of views and does not reflect the scientific consensus, we work with external experts to help our machine-learning tools to determine what is an authoritative response, and make sure it is prominent and easy to find on the platform.

We reward content that meets those “trusted and informative” criteria and make sure there is no incentive for producing more controversial content. We make sure that not everyone has the opportunity to monetise on YouTube, which impacts the likelihood that that content will be created.

Vint Cerf: I wonder whether I could ask Katie a question. Am I allowed to ask questions?

The Chair: You are probably a more informed questioner.

Vint Cerf: In Senate hearings, it is not always the case. Katie, you mentioned information panels. I understand that those are curated in a manual way. For content that we can tell is controversial, we go to the trouble of pulling up those information panels and making them available. We curate that as opposed to doing it automatically. Am I correct?

Katie O’Donovan: We look at which categories of videos and content need those information panels, yes.

Lord Harris of Haringey: I understand what you are saying; I am just thinking of my experience. If I Google anything about 9/11, I get far more conspiracy theories about it not really happening or it being a false-flag exercise, so I am not quite sure how this kicks in. Admittedly, I did not check that last night, but that is the experience.

Fundamentally, why should anyone believe that your algorithm successfully identifies a trusted source of information? I am bearing in mind your evidence that, if you give too much detail in public about your processes, it provides opportunities for malicious actors to play the system.

Vint Cerf: The last point is correct. It is a challenge to lay out in detail how the algorithms work, because the algorithms involved, the machine learning and neural networks, are less explainable now than we would like them to be. That is a research problem. Let me set that aside for a moment.

I am going to try the experiment with 9/11. Thank you for drawing my attention to that. We might try the same experiment with the coronavirus and find out how much conspiracy theory comes up versus helpful information. I did that this morning, and my impression, which might differ from yours, was that we put up more useful information than conspiracy theories about that. I do not know what to say about 9/11, but I can do the test.

Katie O’Donovan: I think 9/11 is a very important example of the challenge out there. On both systems, you are curating information that is populated from a very diverse foundation of understanding and political starting points. Of course, it is the most important information for our users to receive. The rater guidelines share information on the intent of quality behind the search algorithm. On YouTube, a similar challenge is to make sure that, while the content searched for might not be illegal or violate our community guidelines, we have the right rules in place on conspiracy theories relating to harmful events. We have recently changed YouTube’s policy to reflect that.

How you evaluate is a really important question. By sharing information like the rater guidelines, we hope to explain the intent behind our algorithms. We are also subject to academic research and scrutiny by many others. A piece in the New York Times earlier this month went deep into research undertaken by the University of California, looking at eight million recommendations of YouTube’s “watch next” algorithm over 15 months. It is a really important piece of work, because it showed that, while the view time for conspiratorial recommendations had decreased by 50 per cent and then 70 per cent, there were other challenges that were cyclical and different.

That shows that this problem is not easy for us to solve. It is a dynamic flow of information being put on the internet, us changing our policies and the way our tools work to reduce the availability of that content, and the people who create it responding to the way we change the rules. It will not be solved in one day, but we hope and feel that we see improvements over time.

Q242       Lord Lipsey: I want to probe this question of high quality and user-defined high quality, because it is not unambiguous. “High quality” could mean “more truthful”. It could mean that more people want to watch that particular piece of content for longer. A high-quality story in the British Daily Mail is not the same as a high-quality story in the Financial Times.

I wonder how, within the criteria which the panels are looking at and feeding into the algorithm, you make distinctions and make sure that the really desirable kind of quality comes to the fore.

Vint Cerf: I completely agree with your characterisation of quality and source. I do not necessarily urge you to read all 168 pages of the rating criteria for web search, but they get to sources. The people doing the evaluation are aware of quality sources, as you point out, like the Financial Times, the New York Times and scientific publications. They are considered much higher quality than some of the popular press. That goes into the evaluations of the webpages they are looking at. A similar evaluation takes place for YouTube.

Sticking with search for a moment, since their evaluations take into account quality of source in the same way that you just did, the training sets they produce as a result reflect that recognition. It is implicit in the training sets that are generated. In the evaluation of the webpages used for training, we see a reflection of quality from source.

Q243       The Chair: Katie, could I ask you a very specific question? This is about sensitivity, because not all information is equal, as you would agree. You live in the UK and you know there are racial tensions in this country. Therefore, there are places you just do not go (I do not mean physically) without unbelievably great care.

Full Fact told us that your algorithm took inaccurate information, the claim that Muslims do not pay council tax, which went straight to the top of your search results and was echoed by your voice assistant. That is catastrophic; a thing like that can set off a riot. Obviously, 99 per cent of what you do is not likely to do that. How sensitised are your algorithms to that type of error?

Vint Cerf: I am glad you brought this up, because it is an opportunity to say something about machine learning and neural networks. I do not know about this specific incident.

The Chair: It is unfair to cite one; it is just an example.

Vint Cerf: I understand. I want to point out what I consider to be a continuing research area with neural networks; this might take three minutes or so, if that is okay. We know that they can be brittle. Has anyone been briefed on what are called generative adversarial networks? Does that ring a bell for anyone? Machine-learning systems have been used for identification, for example distinguishing cats, dogs, kangaroos and so on, by telling the neural network whether it got the answer right. If it gets the wrong answer in response to an image, you say, “That’s the wrong answer”, which backpropagates into the network, and weights are changed to get closer to the correct answer. When you train this system, it gets to be really, really good at distinguishing pictures of animals, fire engines and such things. Then you present it with something it has never seen before, and if it is successful it will say that it is a cat, a dog or what have you.

Generative adversarial networks are also neural networks, and they are set up against the ones that are supposed to correctly recognise things. Their job is to change a few pixels in the image to try to fool the recognition and classification system. There are cases where the change of just two or three pixels, which a human being would not recognise as a change, causes the system to say, “It is not a cat, it is a fire engine”.

Your reaction to this is, “WTF? How could that possibly happen?” The answer is that these systems do not recognise things in the same way we do. We abstract from images. We recognise cats as having little triangular ears, fur and a tail, and we are pretty sure that fire engines do not. But the mechanical system of recognition in machine-learning systems does not work in the same way our brains do. We know they can be brittle, and you just cited a very good example of that kind of brittleness. We are working to remove those problems or identify where they could occur, but it is still an area of significant research.
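As an illustration of the brittleness Vint Cerf describes, the sketch below applies a tiny adversarial perturbation to a toy linear classifier. The weights, the "image" and the epsilon value are invented for the example; real attacks work against trained neural networks in the same spirit, using their gradients.

```python
# Toy illustration of adversarial brittleness (assumptions throughout):
# a tiny per-pixel nudge, chosen using the model's own weights, can flip the
# predicted class even though a person would not notice the change.
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=784)            # weights of a toy "cat vs fire engine" scorer
image = rng.uniform(0.1, 0.9, 784)  # a flattened 28x28 "photograph"

def classify(x):
    return "cat" if x @ w > 0 else "fire engine"

epsilon = 0.02                       # maximum change applied to any one pixel
direction = -np.sign(image @ w)      # push the score toward the other class
adversarial = image + epsilon * direction * np.sign(w)

print(classify(image))        # the original label
print(classify(adversarial))  # often flips, despite no pixel moving more than 0.02
```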

To your primary question, are we conscious of the sensitivity and the potential failure modes? Yes. Do we know how to prevent all those failure modes? No, not yet. But, when they get reported, you can be sure there is a reaction: “How do we adjust the system? Can we adjust the system to eliminate that particular brittleness?”

Q244       Baroness Kidron: When you described the quality information that then trains the system, you mentioned a whole series of things that were behind paywalls. Does it worry you that a lot of the very high-quality material is currently behind paywalls? That means two kinds of user access have developed.

Vint Cerf: That is a very good question. My first reaction, to be quite honest, is to ask, “Why wouldn’t we pay for access to that so we can index it?” What do we actually do? Do you know, Katie?

Katie O’Donovan: We work with news publishers, both those that use paywalls and those that do not. We have built particular tools that enable them to find a subscription model that supports their business the best, and we have changed the way we index that content. We used to set a requirement that a certain proportion not be behind a paywall, because we wanted the information to be openly available, but we have responded to publishers telling us, “In this instance, the viable business model we have decided on needs subscription”.

We have worked hard with companies to make sure we can index that information and reflect its importance. Then it is for the user to decide, “This is the content I want to discover, and therefore I am going to subscribe or use it in a different way”.

Vint Cerf: There are two pieces to this. We subscribe if we have to in order to do the training and the evaluation. However, that does not necessarily give that same information to the users unless they subscribe.

Q245       Baroness Kidron: In the spirit of due diligence, as you put it, how do you ensure that your algorithmic choices do not disadvantage certain sectors of the population? To be fair to you, I want to point out that Google’s own evidence talks about marginalised groups and the importance of voices that may not have been represented. It particularly points to young people in the LGBTQ+ community. A number of court cases are going on at the moment, with the same community saying, “Hang on a minute. We’re considered less beneficial for advertising”. There seems to be perhaps a value-driven issue for certain communities. I wonder how you consider those sensitivities in your algorithms.

Vint Cerf: Let me give you two things. Katie, I am sure you will have other things to say as well. First, we have gone to a fair amount of trouble to make available more positive information about the LGBTQIA+ community (I am sure I got that right) than has historically been the case. We go out of our way to make sure there are positive stories about that community to reinforce the validity of those minority groups.

With regard to monetisation and advertising, two things are going on. First, we do not allow advertisers to use as criteria for advertising things like sex, gender and skin colour. Those are not criteria which they can apply for the purposes of advertising.

Katie O’Donovan: To clarify that slightly, we allow advertisers on search to use the gender of the user but not the sexuality.

Vint Cerf: That helps a little. On the other hand, that means that advertisers cannot tell us not to show an ad, for example, on the basis of sex. That is an attempt to defend against exactly the problem you are describing. Those court cases will presumably sort out the extent to which we have been successful.

Baroness Kidron: The last few questions have been about your criteria on the one hand, and the ranking of different varieties on the other. Both positively, in some of your examples, and negatively, in some of our examples, you have immense power as the mediator. I am interested in whether you are comfortable with your level of power in the mediation.

Vint Cerf: Are you asking me personally as opposed to corporately?

Baroness Kidron: If there are two answers, please give both.

Vint Cerf: They are not the same. My biggest concern is for users and their ability to think critically about what they see and hear. The wetware up here is more important than anything we could do, as hard as we try, and will continue to try, to give good-quality information and reduce the visibility of poor-quality information, and to teach people to think critically about what they see and hear. Where did this information come from? Is there any corroborating evidence for an assertion? These are the things that we should be teaching our children and practising ourselves. I do not mean to suggest that we have no responsibilities. I am a huge fan of helping people learn how to do that, because in the end it is the person who has to decide what information to take on board and what to reject.

Katie O’Donovan: To your specific question, our answers sound a little like we are saying, “This is our intention. This is the work we have done. There are good examples and challenging examples”. That reflects the dynamic nature of all this work, and I hope it reflects the efforts and the changes we are making as a company to respond to what people expect from our services.

Taking the example you give, one reason why YouTube, in its broadest sense, is so popular is because it allows diverse voices and communities to come together, create content, share content, form and thrive in communities in a way that has been difficult in the past. If you are a teenager in a north Wales village, you can create a YouTube video and connect with people across the country who might be going through other teenage issues similar to yours. We think that is absolutely an additive and positive thing.

In recent years, we have had challenges relating to why we have allowed different kinds of content to be monetised and supported by companies that do not understand what it is or that it should not be monetised in the first place. We have taken action to address that, so people understand that if you want to monetise on YouTube, you have to have a certain number of viewers and subscribers, but you also have to be of high quality. Sometimes there has been tension in the way we have implemented that.

Your question about the power we have is really important, because we give everybody the right to appeal. We also respond and listen very carefully to what our users are saying, and make sure that the systems we are using work in the way we intended, which is absolutely not to exclude some of our key communities.

We have done many things over the years I have been talking to members of this Committee or, indeed, talking at other committees. We did not have a transparency report for YouTube three years ago. We have introduced one. In that, we have included information on flags we get from users, flags we get from our machines, the categories and types of content where we get them and the relevant countries. We have just added to that information about appeals. Every user can appeal a decision that we make, which obviously is really important if you are making revenue from your channel; in some cases, it is the livelihood of the creator. You have always been able to appeal those decisions, and we are very serious in how we deal with those, but we also make that information public so that people can scrutinise whether a high number of appeals are being upheld or we are not making those decisions in the way that would be expected.

Without getting into the particulars of live cases, we understand very gravely the responsibility for getting this right and enabling these diverse individuals, who have not been able either to find their community or to make a living from that, to continue to thrive on our platforms.

Baroness Kidron: Finally, I want to lock down on this question about corporate comfort, if you like. In one way, I hear exactly why you do not want to say what the criteria are so they are not gamed, but that means that they are not transparent to us, the users. Is there not therefore an argument for a regulatory intermediary to say, “Google doesn’t get to have all that power. We’re going to share it with a broader set of people, i.e. the regulators, as our representatives”?

Vint Cerf: I have one immediate thought. The criteria on the search side and on the YouTube side are public and available to creators of webpages and on YouTube. At least you have that, so you know our intent. You can certainly decide whether we are achieving our intent, and that is what some of your questions are exactly about.

Katie O’Donovan: There is real granular detail in our monetisation policies, and how advertisers can use the platform and choose which content they want to display against. Different advertisers will target different users depending on interest.

In terms of regulators, the Government have published their plans for the Online Harms White Paper and have said they are minded to appoint Ofcom. As those plans develop, Ofcom is exactly the right sort of organisation to review what information is in the public domain and what is not, and to reach a credible and detailed understanding of that. While we are making concerted efforts to put more and more information into the public domain so that it can be scrutinised and tested, we understand that people often want an expert to do that too.

Q246       Lord Holmes of Richmond: I echo the words of the Chairman; it is great to have a tech legend in the House with us today. In what ways would Google Search and YouTube experience be affected by increased algorithmic transparency? What work has Google done on allowing external audit of your algorithms and avoiding any negative impact?

Vint Cerf: Thank you very much for that question. Let me start by observing that, because our criteria for quality, for example, and other ranking purposes are public, at least researchers know what our intent is, because that is clearly stated in the evaluation criteria and in the terms of service and community standards that show up in the YouTube case.

Secondly, our properties are open, in the sense that researchers can do experiments. They can run any number of different queries they want against either the world wide web search or the recommendations that come out of YouTube. Between those two, they can construct neural networks, as we have, and compare whether their neural network structures work better than or comparably to ours. We have given them enough information to perform experiments and determine whether our intent is being realised in the actual results that come back. You mentioned the UCLA study, which was related to that. That is one way of getting there.

Trying to describe what the algorithms are, from my point of view as a computer scientist, is rather difficult, because of the introduction of the machine-learning and neural network structure. I noticed you nodding when I said that they do not explain themselves well at this point, and the brittleness shows up in somewhat unexpected ways. This is still an area for deep research.

I am very proud that one of our researchers, Geoff Hinton, was the recipient of the Turing Award recently for his work on machine learning and artificial intelligence. In the UK, DeepMind, which I am sure you are all well aware of, has garnered great visibility for its depth of understanding. We rely on them to help us move ahead.

Katie O’Donovan: We have seen examples where we have put information in the public domain. In Google’s early days, a paper was published on the importance of linking to our algorithm. Quite quickly and very systematically, websites were exchanging links, paying for links and trying to game that system. That results only in a detrimental outcome for our users. It skews results and means that they are less good.

It is important that we explain our intent, show our systems and make information public, but it is a real concern. I know it raises an eyebrow, as it sounds like we are saying, “If we tell you too much, it will make the systems not work”, but we have had real evidence of this in the past.

Vint Cerf: It occurs to me that there is another analogy, with the US tax situation. The Internal Revenue Service has criteria for determining whether to audit a return, with a set of values, thresholds and things like that. It keeps that information tightly held so that people do not carefully craft their tax returns to avoid audit. There is a similarity to what we are concerned with.

On the other hand, we are very interested in improving the state of the art, so we fund research at universities on these various topics. That information is always made public, by the way; that funded research is not kept purely in-house. We are part of the community that wants to make progress in this space, and are happy to have the research community help us do that.

Lord Holmes of Richmond: You helpfully identify the IRS example. In that sense, what key differences would you see in what would be expected from a particular government agency and a private entity?

Vint Cerf: I am not sure I understand the question. Could you say that again?

Lord Holmes of Richmond: You correctly identified, in the IRS example, why it would not want to give up its methodology, but, on that specific, should there not be a different philosophical or actual approach from a private entity and from a state entity?

Vint Cerf: I understand. We are back to the question of what we can share that is useful information to evaluate. The most useful information we can show, honestly, would be the training sets used for establishing machine-learning programs. The machine-learning structure itself is fairly complex, so it is not clear whether that is the best piece of information to share. I am much happier having someone see the criteria in the training sets and perhaps come back with alternative machine-learning mechanisms so that we can compare the results. That would be quite helpful.

Q247       Lord Lucas: We know you can tilt results. “Jews are” used to have a number of unfortunate suggestions after it, but no longer does. You have not adjusted “Tories are”, I noticed.

Vint Cerf: Note to self.

Lord Lucas: Since we know you can bias results to deal with things in the real world, would it not be a good idea if someone like Ofcom, under your supervision, had total access to what is going on, to reassure us that some underlying bias is not applied to the way Google reports politically sensitive information?

Katie O’Donovan: It is really important to understand the difference between autocomplete and the results you receive. This is fundamentally important to our engineers. Sometimes, sitting where I sit on the public policy side of the building, I see the results when people type something of clear intent, and it is very unsavoury and I wish somebody had not taken the time to create it and put it on their blog post or elsewhere.

Where we change the autocomplete to make sure that it cannot be used in offensive ways as you described, it does not mean we necessarily change the search results that come after that. If you type something in about the Holocaust, using all the criteria that Vint described earlier, making sure we are returning expert, authoritative and trusted results, we will give you meaningful and reliable information against that. But if you type in something that is very discriminatory, or seek hate-fuelled content, you are still able to do that in Google Search.

It is important to have that clear understanding, because we are reflecting what is on the internet rather than curating the content we hope would be on the internet. If it is viewed, as you suggested, that we bias results in one way or another, that can lead to challenging misconceptions that mean we are not dealing with the issue we are hoping to deal with as a society.

Q248       Baroness Kidron: A number of people have given evidence (I was particularly struck by Mozilla’s evidence about this) on exactly what data they would like to have access to in order for research access to be meaningful. We keep hearing that the researchers are not a proper part of this conversation, because the companies are keeping data to themselves for commercial reasons. I am not sure you can answer it right now, but I would be very interested to know what the problem is with the access they are asking for, the list of things they suggested should be in the public arena.

Vint Cerf: I would be interested to see what that list is; I have not seen it. We should recognise that this is an extremely open space, where there is lots of opportunity for trying new things out. No one is prevented from indexing the world wide web. We do it; other people do it. Moreover, people in the research community are free to establish whatever neural network structures they want. We have given them all the criteria that we use for ranking. They can build their own training sets, if they want to. We should come back to that.

Baroness Kidron: I would be grateful if you would, perhaps in writing.

Katie O’Donovan: We are an organisation with a mission to make the world’s information universally and usefully accessible. Within that is a commitment to open data. I watched the evidence, but I do not remember the specific requirements they had.

However, we do make available the very powerful datasets that we use in our systems, whether that is YouTube content, images or the Natural Questions database. We have also built a search engine, from Google Search, to help data scientists analyse all the open datasets on the internet, so there is an organised way of representing those. Android and Chrome, our browser, are open-source. Whether you are a data science company or an academic, there are great opportunities to access that information. It is not content that is exclusively held by us.

Vint Cerf: One other service we offer in Google Cloud is called What-If. I do not know whether you have bumped into it, but it is intended to let people who design and build multi-layer neural networks make small changes to figure out what happens. The whole idea is about exploring alternative ways.

Q249       Lord German: You told us you used two sets of human intervention. One was the human evaluators, who evaluate the people wanting to post. Then you said, “We work with external experts”. Could you give us some idea of the quantum of external experts: the size, but also paid and unpaid, or supported and unsupported? How much notice do you take of them? What methodology do you have for understanding what they are saying to you?

Vint Cerf: I have less information for you than I hope my colleague does, because this is not an area of expertise for me. I know that we have some 10,000 people engaged in trust and safety, but I do not know how to split them up. Do you have better information than I do, Katie?

Katie O’Donovan: Yes, the 10,000 number refers to people within Google and YouTube working on the safety and security of our products so that, where people are trying to host content on YouTube that breaks our community guidelines, we can recognise it and take it down, to help formulate those community guidelines. That is one area where we work very closely with external experts. Sometimes we work with a very narrow number of external experts, perhaps in one country where we have a particular issue, or it might be an issue where we want to understand the broad consensus of the community. There might be different ways to do that.

As an example to bring that to life, on YouTube in the UK over the last couple of years we have had an issue with drill music. Drill is a particular form of rap that is very aggressive. It can seem very angry; it also gives young people an opportunity to reflect the communities they have grown up in and to have creative outlets. It is a real point of tension for a platform like YouTube, which wants to enable young people to tell their story, but at the same time some of this content had been linked very clearly, both by the police and by the courts in some instances, to real-life examples of violence. This was something that we as a country team were very concerned about. We wanted to be a home for this music, but we did not want to be a home for inciting violence.

If you or I were to upload a video to YouTube and say, “I’m going to go and see this person this evening, and beat them up”, it would be very easy for us to determine the intent behind that: that it was not an artistic expression and that it did not have a home on our platform. You can have a rap song by one person that is just a telling of a story versus a rap song by another that includes a threat. We needed to understand how we distinguish between the two. This was not expertise that we had in-house. We were able, in-house, to understand the parameters of the problem, the potential for impact on real life and the detriment if we were to take all drill music off.

We were able to work with the Metropolitan Police, the mayor’s office, the musicians themselves, some of the people who host this content on channels on YouTube, young community members and community members who had been victims of crime. We built an understanding of not just the language and some of the slang in that content, but also where this was related to offline violence and where the content needed to come down, because it was not an artistic expression or telling of a story but content that really should not have a place on our platform.

That is an example where having one global policy for this music would not have been fit for purpose for this very specific instance in the UK, and we worked with experts on that. We then replicate that in different ways on different issues. Around the world, we have a system called trusted flaggers, where, if you have particular knowledge of an issue, we will invite you to become a trusted flagger. That might include, in the UK, the Institute for Strategic Dialogue, which has deep knowledge about counterterrorism issues; it might include a child-safety organisation for its relevant expertise. We also have an intelligence desk that looks at emerging threats or risks about which we have not yet formulated policy. Again, they will work with organisations around the globe.

When we are looking at the really authoritative and newsworthy content that we need to raise up on YouTube, perhaps from broadcasters that we know to be trusted, we might take specific members of a research community, for example the medical profession, to determine the sensible view on this.

Vint Cerf: It occurs to me that we should appreciate that this has to happen in about 100 different languages. We have a real problem, for example, with the vocabularies of hate speech. We need to understand what those vocabularies are, and a lot of them use code words that we would not normally interpret in the way intended by someone who wants to incite violence or otherwise cause a problem. I am always astonished at the amount of language that we have to understand in some mechanical way in order to detect these problems.

The Chair: You mean language across cultures and subcultures.

Vint Cerf: Yes, indeed.

The Chair: That is very interesting.

Q250       Lord Black of Brentwood: Can I take you back to an answer you gave to Baroness Kidron’s earlier question? You were talking about the importance of teaching people to think critically about the information they see, and about your role in effect as a curator of that information.

To what extent should one be able to go a step further than that and enable users to decide what content they see? In other words, should users be given more tools to restrict the pool of content that is shown to them, including the possibility of removing information from unverified sources?

Vint Cerf: There are two things going on with that interesting question. One might be filtering and the other might be blocking. The question is whether the users can do that on their own. They have the ability to change what we think they are interested in. You know about that, presumably. You can go to your account and edit the things we have tracked, to the extent that you allow us to do that, either to alter the search response to provide things that we think are of interest, or to decide which advertising they might be interested in.

I do not think we have ever done anything along the lines of what you are describing, although from time to time I have thought about tools that would let people create this sort of filter for themselves and maybe even for others. Subscribing to the Lord Black view of the world wide web, for example, might be of interest to other people in addition to you. I do not believe we have tools that let you do that, but it is an interesting notion.

I am not even sure what the practical implementation implications would be, to be honest with you. I am not in a position to say, “That is a great idea; we should go do it”, but it is an interesting question. Could it be done? Could it be done at scale? I do not know the answer to that yet.

Lord Black of Brentwood: With that approach, presumably the challenge is to do it at scale.

Vint Cerf: Yes, exactly.

Katie O’Donovan: It is an interesting question if you think about it in terms of search and/or YouTube. In search, remember, you do not need to be logged in; you do not need to have a Google account. You can just arrive at the home page and your intent and what you are looking for is clear from what you type into the search engine bar. On YouTube, we have the ability to create playlists on different themes. We are looking at recommendations like you being able to say that you would rather not have a recommendation in there, but, again, a lot of it is driven by the intent of the user and what they are looking for rather than being served up.

Q251       The Chair: I do not think you can answer this today, but it would be very helpful if you could write to us. A lot of the questions we are asking come down to research, the quality of research and your relationship with the research community. Could you set out your criteria for deciding which research institutions and universities do and do not engage with you? It feels sometimes like cherry-picking; that you are both judge and jury: “Yes, we will work with them and we rather like that idea”.

This is what occurs to me. I happened to look recently at the Volkswagen scandal. It was fascinating that in the end it was a university in Arizona that found a difference between on-road and off-road emissions from Volkswagen diesel cars. Had Volkswagen decided who to use to research its emissions, there might have been a very, very different result.

At what point do the Government need to step in and take a very objective view? At what point are you prepared, frankly, to take a deep breath and say, “We’ll go with whoever has a great idea of how to improve this”? To what extent do you have tighter criteria for who you are prepared to license and give information to?

Vint Cerf: Let me try to respond, first, with our university relations activity, with which I have had a connection ever since I joined Google, 15 years ago now. We publish areas of interest which we are looking for researchers to engage in, and we receive proposals from them. We do an internal evaluation or peer review within the company for all the various proposals that come in. Basically we fund the ones that we find to be most interesting or effective for our purposes. Again, I remind you that any research that we fund by that means is open research. It is published freely. We do not constrain any of that.

The criteria there have to do with how the engineers and researchers at Google see the barriers to solving problems and finding people who have ideas that will get us further along. That is not driven by corporate financial interest or anything; it is driven by the basic research driver. Google is a data-driven company. It may even be over the top in being data-driven, because if you make an argument, express an opinion and have no data to back it up, your opinion does not count for very much.

We choose to do research or support research that advances the state of the art. It is a little like my years in the Defence Advanced Research Projects Agency. You were confronted with a problem, and the question was how to find the best people in the country to solve that problem, and we had the freedom to go and find them. That is true at Google, too.

The Chair: Katie, broadly speaking, what are the criteria? How frustrated could a research team from X university feel at never being able to penetrate your juice?

Katie O’Donovan: It is worth remembering that lots of research teams at different universities have been able to study our content, algorithms and recommendations without permission from us. To do that, they can use various APIs and other pieces of open information. We have academic partnerships in the work we do that is relevant to us as a company, but there is a whole ecosystem of people who do that research independently of us.

Vint Cerf: It is an implicit contract in the sense that all the information on the web is accessible to them. They can use our search engine; they can use our platforms. We offer free use of a lot of our research technology, TensorFlow for example, for machine learning and the like. There is a lot of enabling going on from Google to the research community, in addition to specific funded research.

Q252       Lord Mitchell: I have been listening to this for nearly an hour now, and one of the feelings I get from the responses is this: “We are Google. You should trust us”. The Chair made a point about you being judge and jury on these issues. I must say that I feel uneasy about all this. You have the resources, but in the end these are your decisions that you make. I wonder whether there should be a countervailing force to level the playing field.

Vint Cerf: There are countervailing forces. One of them is called competition. We have competition. There are other search engines.

Lord Mitchell: I must have missed that one.

Vint Cerf: There is competition in the space, which helps because people can go to other search engines if they want to. They do not have to accept our results exclusively; they can even compare the results from our search engines, Microsoft search engines and others, even experimental search engines.

Lord Mitchell: But in this country or your country, what percentage of searches are on Google’s platform?

Vint Cerf: It is probably in the region of 80 per cent.

Lord Mitchell: I have heard 90 per cent. That does not sound like a very competitive situation.

Vint Cerf: You should keep in mind that there is some choice involved. In other words, we do not force people to use our search engine. It is their decision to use the search engine, and it might be because they get better results, from their point of view, than they get from others. To be fair, Google is not forcing anyone to use its search engine, if that is the implication that you intended.

A different question might be whether the choice of using Google is in the interest of the users. In a way, you have to give some credibility to the users’ ability to make that determination. They can go elsewhere if they want to. I sense that that is not satisfying you. Can we probe a little deeper about what is making you uncomfortable?

Lord Mitchell: What is making me uncomfortable is what I would feel is a monopolistic position over the product. You tell me that you have competition, but I do not see that competition. It is you who take a lot of the decisions to do with content. It just feels as though you have all the aces in this situation. I would like to feel that there are moderating forces.

Vint Cerf: It might be worth doing additional research, because other competitors have comparable platforms with comparable capacity to index and search the world wide web. The fact that we have a lot more users, perhaps, is not the same as having no competitors. It is the choice of the users that has spoken, I think.

Q253       Baroness McGregor-Smith: Could you explain to us the role of human moderation in improving online experiences? I am interested in knowing how you could make the processes more transparent and a little more consistent. Have you considered having a public database that explains to the public how you moderate and the system of precedence to ensure that everyone can understand how you have greater equity and fairness in your decision-making?

Vint Cerf: That is a nice portmanteau question with many parts, and we have touched on some of them already.

Baroness McGregor-Smith: Yes, we have.

Vint Cerf: We have a publication coming out in April that describes how we handle YouTube. We have this big 168-page document with the criteria for evaluators, which is very public. We have the terms of service for YouTube, and we have additional information about YouTube coming out in April, which again is intended to be released to the public. That responds, in part, to your question about public access to how things work.

In terms of moderation and content evaluation, I would be repeating myself if I went back to say what I said before about the techniques by which we transform the criteria into a machine-learning model, which is then applied in scaling up the evaluation of content on search, and the similar mechanisms in the case of YouTube and its recommendations.

Baroness McGregor-Smith: But there will be human intervention at the beginning.

Vint Cerf: Yes.

Baroness McGregor-Smith: My question is how a human makes those decisions. Before it gets into the algorithms, et cetera, there is human intervention. I know you have your 168 pages.

Vint Cerf: Yes, right, and we would be happy to give you them.

Baroness McGregor-Smith: But it is also about real simplicity and explaining, say on YouTube, how you moderate. Where could I click that says, in very simple, non-IT language, “This is how we have moderated on this particular case”? I am talking about which ones you have moderated on and why you have made the decisions you have made.

Katie O’Donovan: Whenever we change our community guidelines, we make sure it is very clear for the creator community. If you break our community guidelines, we will often impose a sanction; we do not want to do that if people have made an innocent mistake. Our community guidelines are set out in detail in plain English. We also use video to represent what they are, and we give examples of content that may or may not be within the allowance.

If you flag a video, you are now able to look at the dashboard to see what has happened to that flagged video. If you are the host of the video, if that content is yours, you will be able to see why we have taken it down. If we take a piece of content you had on your channel down because of a copyright violation, we will let you know the detail of that. If you have made a 10-minute video, and within one minute there happens to be a song on the radio and that has been spotted by our copyright mechanisms, we will explain that to you and you will be able to repost the video, editing out that bit of copyrighted material.

In the transparency report I talked about we include that information about adjudicators. We also have content that explains what happens when people flag content, who reviews it and how it is reviewed.

Baroness McGregor-Smith: How do you ensure that you have consistency across your decisionmaking?

Katie O’Donovan: We have very clear guidelines. Most of our review is done by machine learning. As Vint has described, we will use humans to set the community guidelines, what we do and do not allow, and then we will use machine learning to identify that content. About 70 per cent of the content flagged on YouTube is now flagged by machine learning. About 60 per cent can be removed without a single view.

Vint Cerf: That is a single view by a human.

Katie O’Donovan: About 90 per cent of the content we remove on YouTube is detected by a machine.

Vint Cerf: Consistency is very important. I am glad you brought that up. In the case of search, for example, we do not just take one reviewer’s opinion about things and apply it. We have at least nine reviews of a webpage before we try to aggregate the rating for the quality of that webpage. I like the point Katie makes: that consistency is imposed by the fact that it is automated for the bulk of the content.
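As a small worked example of the aggregation step Vint Cerf mentions (at least nine reviews of a webpage before a rating is settled), the sketch below combines hypothetical rater scores with a median; the 1-to-5 scale and the choice of median are assumptions for illustration only, not the documented procedure.

```python
# Hypothetical aggregation of independent rater opinions for one webpage.
# Nine raters score the page from 1 to 5; taking the median keeps a single
# outlying rater from dominating the final quality label.
from statistics import median

ratings = [4, 4, 5, 3, 4, 4, 5, 1, 4]  # nine hypothetical rater scores
page_quality = median(ratings)
print(page_quality)                     # 4
```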

Katie O’Donovan: If there is content that requires a human review, a human will look at that using our community guidelines and the detailed guidance. There are processes, again, to have a second opinion and to consolidate on something. In very particular cases, it will require not just a reviewer but judgment from other colleagues.

Baroness McGregor-Smith: Would you be prepared to publish how they make a decision?

Katie O’Donovan: When we remove content from a channel, we will share with the channel host why that content has been removed. I am trying to remember the example, which I am afraid I cannot remember (I am happy to update you on this), where we have made detailed cases available for use as a tool to teach our community members who upload that content exactly how we do that.

In our transparency report, we take the million videos we might remove in a quarter and explain what type of content they were and why the content was removed. We explain the granularity in that. In certain cases, perhaps where very high-profile or contentious content is removed, we will communicate that. We do that with every single content creator, and they may choose to share it or make it clear.

Baroness McGregor-Smith: If I am a member of the public trying to understand how you moderate and how content is taken down, the human intervention will follow a process. How do you know that human intervention is sensible? Human beings all do things differently. How can you help, over time, the public understand that your humans are moderating in the right way, not ad hoc or with a tick-box: “Here are 10 things we have to think about”?

Vint Cerf: The transparency report is partly intended to give people a statistical understanding of what types of content have been removed and for what reasons. That data is new.

Baroness McGregor-Smith: I am interested to go deeper on each one.

Katie O’Donovan: That should be read with the community guidelines, which really clearly set out what content is and is not allowed. As someone creating content on YouTube or a member of the public, this would be your first understanding: “Okay, we don’t allow pornography. We don’t allow content that incites hate. We don’t allow content that seeks to discriminate on the grounds of race”. That sort of information is there.

Then there is training and management. Machines automate a lot of it, but for the human reviewers, as in any walk of life, we have systems in place to train, manage, review, quality-control check and all those things. Humans will make different decisions on different days for different reasons, but we have the processes in place to eliminate and minimise that, just as any other organisation trying to make routine decisions will.

Vint Cerf: I have personal experience with our YouTube system. I was here, as some of you may know, to receive the Queen Elizabeth Prize for Engineering in 2013. We were asked collectively if we would meet with schoolchildren, so we did for several hours. A video was made of this, and after the video was produced I was asked whether I could arrange to have it put up on YouTube, so I did.

It was about a 90-minute video, and about two minutes after I put it up on YouTube I got a note saying, “Your video has been taken down for copyright violation by the International Olympic Committee”. It turned out that our fingerprinting mechanism for identifying videos that have been registered as copyrighted picked up the 15-second segment of some bicycle riders who were operating at an angle. The person who was making the presentation to the students used it to explain how gravity works and how you do not fall over on your bicycle even when you are at a funny angle.

I was allowed to lodge an appeal, and in not very many hours my video got back up again. But I was very impressed by how quickly the fingerprinting system worked. Plainly, we have at least done something that works pretty effectively.

Q254       The Chair: Katie, the common denominator of every session we have had, one way or another, is the building of trust. You would totally agree with that. We are in the trust-building business in a sector that has big question marks over it constantly, rightly or wrongly.

That being the case, what would be your resistance to having your moderators televised (I know there is a lot of stuff around the Channel 4 programme), talking about the way they see the world and the kind of decisions they have to make? That is how you build trust. It is by having people on the ground fairly consistently and openly talking about how difficult their job is. You do not build trust by building a wall around those people and saying that they must not talk to the public.

Katie O’Donovan: I was struck by the evidence of several witnesses who said, “This is not a punitive process to punish technology companies. The online harms process can be beneficial for companies.” We very much agree. Many of us have been having conversations about the UK Government’s response for many years, and we have a clear intent from the UK Government, but we have not had legislation for the past few years. As a company, we have not waited for that legislation. We have listened very carefully to what people are saying to us, sometimes constructively and sometimes very critically, and responded to that in what we feel is the best way possible.

Nobody who works at Google or YouTube thinks, “The job is done. Turn the page”. This is a series of incremental steps, which starts with us making sure we have the right policies in place and can build the right technology, employ the right people, communicate in the right way, be scrutinised in the right way and keep the trust of our users.

To refer to that earlier conversation, it is very, very easy to move from one search engine to another. If you are not getting trusted information or seeing information you believe in on YouTube, you are not going to use that. We need to make sure that people continue to get a good service through our platforms, but we understand that it is really important to have third-party input, whether from a regulator, government or systems like this, and share our workings. The improvements we have made to the transparency process, which is not settled, will help achieve that.

Vint Cerf: Transparency is our friend here. I take your point. There is no question in my mind about that. The kind of transparency and how deep it goes is somewhat of an open question. I want to come back to something that Lord Mitchell raised. I now understand another potential concern that might be motivating your comments. If people choose to use Google in lieu of anything else, some of you are looking for assurance that they are getting the best-quality information that could reasonably be provided to them. Am I getting the message?

The Chair: That is not a bad way of typifying it. We are hoping that Google will reach a standard that we can sign off on and stop having this rather weird series of unfortunate questions. Around this table, we are definitely believers in the digital world. There are no sceptics here. What we want is a better digital world and a trusted digital world.

Vint Cerf: So do I.

The Chair: I figured.

Q255       Lord Lucas: The work you are doing on borderline content on YouTube raises the same question. You were talking about hate speech and racism; you will see in the newspaper today that Trevor Phillips is in the middle of a controversy. I would hate the idea that some group of woke Californians dictated what I could see on YouTube, or that somebody else should define the borders there. Why not be open? Why not let us share in understanding how this is working? We would all love to see goop less.

Vint Cerf: The community guidelines are available to you. I do not know whether you have seen them.

Lord Lucas: But it is not about the guidelines. It is about what happens in practice.

Vint Cerf: What you see in practice is a consequence of applying the guidelines through a group of people.

Lord Lucas: It is a consequence of you applying the guidelines.

Vint Cerf: Yes.

Lord Lucas: It is not a process that is visible to us. The Chair asked how we build trust in that. There are mechanisms for it, perhaps by having a regulator with oversight who could look at these things, if you do not want to let information out. But trust is important.

Vint Cerf: I would take away from that question a challenge to Google, which is to figure out how much more transparent we can be, to respond to your concerns. However, you can see the results of what we do, and you could compare the results of what we do with what other people do. In fact, this might be a perfectly reasonable research thing.

Lord Lucas: Is there a way in which I can see the results, on a YouTube search, that you are downplaying? Can I ask you to show me all the videos you are not showing me?

Vint Cerf: I do not think we can show you what we are not showing you. That is a very interesting way of formulating it. Everyone who is not here, raise your hand. We do not have the ability to do that, but you could see the differences in different search engine responses to compare them.

Katie O’Donovan: Different organisations will look at different search engines or video-hosting websites with particular reference to their area of specialism. The Internet Watch Foundation, for example, looks at different open and social media platforms where it has identified or had flagged to it child abuse material. It publishes that data in its annual report.

We publish our community guidelines and the implementation of those in the transparency reports. That is available to be compared on the open system. Take the research I cited at the beginning, done by the University of California. I understand that was not done in conjunction with us; it was done by using YouTube APIs to understand what content was on there and what content was used in our recommendations.

As an enthused amateur, you can look at the content we return when we do search results on YouTube. The content that is recommended may have thousands, tens of thousands, hundreds of thousands or even millions of views. If you are looking for a particular type of niche content, you might find it, but you might see that it has been on there for a month and had 100 views. That will be because we are trying to raise up authoritative content, which is either established in the scientific community or created by a trusted news provider. Where the community guidelines or the law do not prohibit us from having the other content on the platform, it is okay for it to be there, but we are not raising it up or rewarding it.
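
The outside-in analysis described here can be approximated with the public YouTube Data API. The sketch below is a minimal illustration of the method, not a reproduction of the cited research: it assumes a valid API key and uses the v3 `search.list` and `videos.list` endpoints to pull the top results for a query and their view counts.

```python
from googleapiclient.discovery import build

def top_results_with_views(query, api_key, max_results=10):
    """Return (title, view_count) pairs for the top YouTube results on a query."""
    youtube = build("youtube", "v3", developerKey=api_key)

    # Search for videos matching the query.
    search = youtube.search().list(
        q=query, part="snippet", type="video", maxResults=max_results
    ).execute()
    video_ids = [item["id"]["videoId"] for item in search.get("items", [])]

    # Fetch view counts for those videos.
    stats = youtube.videos().list(
        part="statistics,snippet", id=",".join(video_ids)
    ).execute()
    return [
        (v["snippet"]["title"], int(v["statistics"].get("viewCount", 0)))
        for v in stats.get("items", [])
    ]

# Hypothetical usage, with API_KEY standing in for your own credentials:
# for title, views in top_results_with_views("climate change", API_KEY):
#     print(views, title)
```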

Lord Lucas: You do not think this is a proper role for government rather than a private company.

Katie O’Donovan: The Government have made it very, very clear that they want to bring regulation forward in this space. Ofcom is their suggested regulator, and we will work very closely with them. We have said that we welcome regulation where it can be constructive, and where it will help our users and people who are concerned about how the internet impacts our everyday lives understand those issues more. That absolutely is the case in the UK.

Vint Cerf: Could I just explore something with Lord Lucas for a moment? I do not know exactly the model you have in your head about the mechanics of what is actually going on, but let me explain the pieces we have talked about several times now, although I am sure you are tired of this.

We have the criteria and the training sets against which we place the criteria, and humans do that. The net result is this neural network structure, which is a bunch of numbers, basically. It is a complex interconnection of weights that take input in and pop out with something at the bottom to tell us what the quality of this particular webpage is. I am not sure that showing you the neural network would be helpful. It is not like a recipe that you would normally think of when you write a computer program. It is not an if-then-else kind of structure. It is a much more complex mathematical structure.

That particular manifestation of the decision-making may not be particularly helpful to look at. The real question is what else would be useful in order to establish the trust that we have been talking about and the utility of that trust. I am still struggling to know what would be really helpful for you or Ofcom to see in order to evaluate how well or how poorly we are doing on quality.
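
A toy sketch of the weights-versus-recipe point: a tiny two-layer network that turns made-up page features into a single 0-to-1 score. Every feature, size and weight below is invented purely for illustration; it is emphatically not Google's model, only a demonstration of why printing the raw numbers is not a readable decision rule in the way an if-then-else program would be.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy two-layer "page quality" network: three made-up input features,
# four hidden units, one output. All weights are random, for illustration only.
W1, b1 = rng.normal(size=(3, 4)), rng.normal(size=4)
w2, b2 = rng.normal(size=4), rng.normal()

def quality_score(features):
    hidden = np.tanh(features @ W1 + b1)               # weighted sums, not if-then-else rules
    return 1.0 / (1.0 + np.exp(-(hidden @ w2 + b2)))   # squashed to a 0..1 score

print(quality_score(np.array([0.2, 0.9, 0.1])))  # a single score between 0 and 1
print(W1)  # the "decision logic" is just these numbers - not a readable recipe
```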

Lord Lucas: I have a similar mechanism up here. It develops a lot of inherent biases. I want Ofcom to look at the end results you are producing and the raw material (in other words, what is available on YouTube and what actually gets shown in the search content) and to ask, “Are we seeing biases here? Are we seeing people being directed in a particular direction because of the way in which the system is built and trained?” We do not want to know the details of that, but is the end result something we are happy with? Should we trust the process?

Vint Cerf: We need to inject into this conversation once again the scale issue. When you do a Google search, you will have noticed it comes back and says, “I found 10 million hits in 30 milliseconds”. Some people say, “Why don’t you show all of them on the first page?” We say that we do not have any fonts small enough to put them all up.

Scaling is a problem here. You cannot show all 10 million of those hits in any way other than some serial process. It may be hard for anyone to analyse all 10 million, or whatever it is, in order to answer your question. “Show me all the stuff you did not show me. Show me everything that was not in the first 10 pages”. If you want to, you can force the system to keep going and see what we turned up that we considered not worthy of your attention. There is a way to do that, but I do not know whether doing all of it and handing you all 10 million pages would be helpful.
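
A hedged illustration of “forcing the system to keep going”: paging deeper into results by offset, here using the public Programmable Search (Custom Search JSON) API. The API key and engine ID are placeholders you would supply yourself, and, to the best of my knowledge, this public API only exposes roughly the first hundred results, so it cannot literally enumerate all 10 million hits.

```python
import requests

API_KEY = "YOUR_API_KEY"      # placeholder: your own Custom Search credentials
ENGINE_ID = "YOUR_ENGINE_CX"  # placeholder: your programmable search engine ID

def page_through(query, pages=5):
    """Collect result links from successive pages, not just the first one."""
    results = []
    for page in range(pages):
        resp = requests.get(
            "https://www.googleapis.com/customsearch/v1",
            params={"key": API_KEY, "cx": ENGINE_ID, "q": query,
                    "start": 1 + page * 10},   # offsets 1, 11, 21, ... (pages of 10)
            timeout=30,
        )
        resp.raise_for_status()
        results.extend(item["link"] for item in resp.json().get("items", []))
    return results

# print(page_through("parliamentary select committee evidence"))
```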

The Chair: The good news is that you are both incredibly popular on the streaming at the moment. The bad news is that your officers are very anxious to get you away at around 2 pm. Could I jump to the final question? It is utterly central to our purposes as a Committee. Perhaps we can write to you on the other two questions that we have unfortunately not got to.

Vint Cerf: Yes.

Q256       Lord German: The question is about political advertising. You, as Google, imposed restrictions in November 2019. Have you monitored the effect of the restrictions on political advertising that you have made? What conclusions have you reached?

Vint Cerf: I do not have an answer for that. Do you?

Katie O’Donovan: Yes, I do.

Vint Cerf: I am just the engineer.

Katie O’Donovan: You are right that we allow political advertising on Google, but with restrictions. Those are restrictions that we determined both on a global basis and ones that respect local law. When we have enabled political advertising historically in the US, it has been different from the UK. Obviously, the UK and US political systems have lots of similarities, but the way political campaigns are run is very different.

We allow political advertising, but we allow it with a register so that we can enable transparency. The first thing we want to do, regardless of advertising, is to make sure that people have trusted information about political elections. It is super-important, if you are looking for a candidate or a manifesto policy, that you have information. We find it to be most trusted when it is from the source and we can display that information.

We also want to make sure that there are secure elections. In the UK, that is perhaps not as front of mind as it is in some other countries, but we work with candidates and political parties to ensure, where they need support, that their sites and their emails cannot be compromised by technology attacks. We work to make sure that, where we have allowed political advertising, there is full transparency.

The criterion we have set is that, if you want to run a political advert, which we say includes a political party, elected officeholder or a candidate for the UK Parliament elections, you need to be registered with us. We will verify that information and then allow you to run a political advert. Obviously, the advert has to comply with the law; the spending has to comply with the legal limits here. We make that information publicly available.

If any of the political parties wants to run an advert, we will provide information on who that advert is looking to reach, how many people saw that advert and the cost of that advert. We collect this data and publish it in our digital ads library and we make sure that it is there with a copy of the ad that you can see, but also that it is available as an API or in rich-data form so you can conduct analysis of it.
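
One hedged sketch of working with that rich-data form: Google's political advertising transparency data has been published as a public BigQuery dataset. The dataset and table names below (`bigquery-public-data.google_political_ads`, `creative_stats`) are recalled from that public dataset and should be verified against its current schema; Google Cloud credentials and billing are assumed to be configured.

```python
from google.cloud import bigquery

# Assumes Google Cloud credentials are already set up in the environment.
client = bigquery.Client()

# Dataset and table names are assumptions to verify; SELECT * avoids guessing columns.
query = """
    SELECT *
    FROM `bigquery-public-data.google_political_ads.creative_stats`
    LIMIT 10
"""

for row in client.query(query).result():
    print(dict(row))  # inspect a handful of ad records and their reported fields
```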

Lord German: In your definition, you have only allowed adverts to be targeted by age, gender and postcode. You did exactly the same thing in the States, and you were criticised by the Democrats and the Republicans for not engaging in free speech. How would you respond to the Democrats, the Republicans and the political parties in this country?

Vint Cerf: That is my territory.

Katie O’Donovan: I am definitely not a specialist on US elections. We look to get the balance right. The US restrictions are slightly different to the UK restrictions prior to the general election. We do not allow advertising on sensitive topics anyway. We feel it is useful for people to understand the location particularly. But microtargeting is not something that we use on our platform more generally. We felt this was about getting the balance right. Different parties choose to campaign in different ways, and they seek the systems that will be most effective for themselves.

Lord German: Could I just pose the South Bend question? If you wanted to target a particular avenue in South Bend, would you be able to do that? Would you be able to target all the people in South Bend who had views about a particular issue that was current in the political spectrum?

Vint Cerf: That criterion is not allowed in our advertising system. We are not that refined, particularly in the political space. As you said, we have a small number of criteria that are permitted for targeting of political ads.

Lord German: Your definition is different from the ones used by Facebook, for example. Ought we to have a common definition? If so, who should make that definition and how should it be approached? What is your complaint about Facebook?

Vint Cerf: I do not have a specific complaint about Facebook that I could surface, other than my personal reactions to some of the things on Facebook. Your idea that there might be common criteria for political advertising has a certain merit, because then we would see consistency of treatment. That is important, because there are so many different platforms available for purposes of political speech, if I can use that word, not just political advertising.

In the US, we have already experienced the serious side effects of the abuse of some platforms and the ability to target specific audiences for the purposes of inciting disagreement. I do not know whether you have experienced similar things here in the UK, but they have been well documented in the United States. We should make it difficult for our platforms to be abused in that way.

Q257       The Chair: You have been very, very patient with us, and I apologise that we have overrun. You started life in the defence industry. We are dealing with this coronavirus issue at the moment. Do you feel, on balance, that technology is making it more likely or less likely that we will reach a satisfactory situation, particularly thinking about the amount of misinformation or disinformation that is already emerging on platforms? Is that very troubling to you or do you feel, on balance, we are on the right side of the argument?

Vint Cerf: I use our tools every single day. I would not survive without the ability to search through the world wide web, get information and get answers. I exercise critical thinking as much as I can about sources and content. I am a very optimistic person with regard to the value of what has been done so far. I am very concerned about the abuse of the system and looking for ways to counter that. Those ways may be mechanical, but they also involve the wetware up here.

My position is that this is all positive stuff, but we need to preserve the value while we defend against the abuse. Some of the abuse is motivated by the same thing that causes us to read Shakespeare 400 years later, because people have not changed a bit. He lays out all the good, the bad and the ugly. But we are human beings, and we should try very hard to make our tools serve us and our society in a positive way.

The Chair: That is a lovely ending. Thank you very, very much indeed.

Lord Harris of Haringey: I am terribly sorry, Chair.

The Chair: It was nearly a lovely ending.

Lord Harris of Haringey: I asserted that if you googled “9/11” you would end up with all sorts of conspiracy theories. I have just done it and discovered that your algorithms now put them so far down I got bored looking for them. If you google “twin towers” you end up quite quickly with a news article from the Sun newspaper, which starts off by calling it a wacky conspiracy theory but then sets out the conspiracy theory without a rebuttal. All I am saying is that things have clearly got better, and I apologise.

Vint Cerf: No, you should not apologise. You have demonstrated exactly the scientific method, and for this you should be congratulated.

The Chair: You have also demonstrated the value of inquiries. Thank you very much.

Vint Cerf: Thank you very much.