Select Committee on Communications and Digital
Corrected oral evidence: Freedom of expression online
Tuesday 20 April 2021
Members present: Lord Gilbert of Panteg (The Chair); Baroness Bull; Baroness Buscombe; Viscount Colville of Culross; Baroness Featherstone; Baroness Grender; Lord Griffiths of Burry Port; Lord Lipsey; Baroness Rebuck; Lord Stevenson of Balmacara; Lord Vaizey of Didcot; The Lord Bishop of Worcester.
Evidence Session No. 25 Virtual Proceeding Questions 199 - 206
I: Chi Onwurah MP, Shadow Minister for Digital, Science and Technology, Department for Digital, Culture, Media and Sport; Robert Colvile, Director, Centre for Policy Studies.
USE OF THE TRANSCRIPT
This is a corrected transcript of evidence taken in public and webcast on www.parliamentlive.tv.
Examination of witnesses
Chi Onwurah and Robert Colvile.
Q199 The Chair: Our next witness is Robert Colvile. Robert is director of the CPS, editor-in-chief of CapX and a columnist for the Sunday Times. He is the author of the Conservative Party election manifesto from the last general election and has commentated in a number of newspapers over the years. He wrote a critically acclaimed book, The Great Acceleration: How the World is Getting Faster, Faster, and was previously a research fellow at the Centre for Policy Studies. Robert, thank you very much indeed for joining us. Today’s session is going to be broadcast online and a transcript will be taken.
We will be joined at some stage during this session by Chi Onwurah MP, who is the shadow Minister for Digital, Science and Technology, as well as holding the role of Opposition Front-Bench spokesman on those issues and being Member of Parliament for Newcastle upon Tyne. She has a professional background in tech regulation, which will make her a very useful and valuable witness. She will be joining us shortly.
Let us kick off this session. In this session, we want to explore the role of the big tech platforms, huge businesses as they are. We have explored a whole variety of issues about tech platforms, in terms of their regulation and competition policy. That is where we will start with Robert Colvile. Robert, do you want to kick off by giving us any brief further words of introduction to yourself that are useful? Give us your perspective on this inquiry, focused as it is on freedom of expression online and, today, as it relates to the role of the big tech companies. Robert, welcome.
Robert Colvile: Thank you. I have one very small clarification. I am definitely the co-author of the Conservative Party manifesto. I refer to myself as the interior decorator, versus the architects of it. I would not like to claim exclusive credit.
I am speaking here with two hats on, as it were. First, as you mentioned, I have been writing about and thinking about technology for quite a while. I wrote a paper for the CPS called Politics, Policy and the Internet, back in the days when the people who knew about the internet and politics were Steve Webb, Tom Watson and a couple of guys in George Osborne’s office. Since then, I have covered it as a journalist. Secondly, with my CPS hat on, we produced a report quite recently called Safety without Censorship, which focused specifically on the online harms Bill, but covering these issues more widely.
There are two points I would make at the start. First, this is a fantastically messy area, with no easy answers. Anyone who says that there is an easy solution is talking nonsense. I suspect I will be saying “on the one hand” and “on the other hand” quite a lot as we go. Secondly, I really welcome this inquiry and the focus on freedom of speech. It is an issue that has been lost from the debate slightly.
One of the really interesting things about tech policy is that, if you look back at the last Tory manifesto and the one before that, you can essentially tell exactly which department does each bit of the tech stuff. There is a bit about start-ups, which obviously comes from BEIS or the Treasury. There is a bit about access to talent, likewise. Then there is a very large bit about the dangers of people saying or doing anything on the internet ever, and the fact that there are lots of bad people out there and they need to be stopped, which has been very obviously written by the Home Office.
The Home Office voice tends to predominate. When the online harms Bill was being pushed through, the sense was of an auction within Whitehall to see who could say the meanest, toughest things about the big tech companies. As many members of this committee will know, when things go through Parliament, the odds are that MPs standing up and telling stories about horrible things that have happened to their constituents online and the need for this to be even tougher are, I am sure, going to outweigh the people saying, “Hang on. What about freedom of speech? What about the economic impact?” This is quite emotive terrain. Within Whitehall and Westminster, it tends to appeal to the inner authoritarian.
There is a danger that, because we are reluctant, as politicians, policymakers, legislators—or you are reluctant rather; I do not have any power in this—to get involved in some of this messy terrain, Government kind of ends up outsourcing stuff to the tech companies and then beating them with a stick when they do not do what it wants. One of the really obvious things on this is this very weird distinction we are trying to draw between legal speech and harmful but legal speech, which we do not have offline really.
We have rules about hate speech. We have rules about things that are illegal to say, but we seem to be, very clumsily, awkwardly and haphazardly, inventing a middle ground as we go about things that we would prefer people not to say and would like them to stop being able to say, even if it is not actually illegal. That is quite a dangerous tendency.
The Chair: On that, before we move on, a number of our witnesses said, “Look, politicians, you have to resolve this. It is either legal or not. If you so thoroughly disapprove and think that this kind of content is bad and seriously harmful, make it illegal”.
Robert Colvile: I would go along with that. It has been apparent for a while that the internet and the real world are basically the same place now. What happened in the Capitol a few months ago showed that really explicitly. The idea that there should be separate rules for what you can say online and what you can say offline does not make any sense. Intuitively, legally and morally, it does not feel right any more.
The Chair: In some way, people behave differently online, do they not? I do not use social media. I use emails, so therefore I am not particularly online in the sense of the issues we are talking about. I certainly say things in emails that I would never say to somebody’s face. I am much sharper. In a much more extreme way, some people say really horrible, hurtful things online that cause people real psychological harm, which they would never say to people face to face. That is not to say that that kind of horrible discourse does not take place offline, in the non-virtual world, but there is a difference in behaviour, is there not?
Robert Colvile: Yes, there are three aspects of that. One is the fact that—and we may come on to this as a separate issue—I do not think we should ban anonymous speech. We should not make everyone sign their name to everything they say online. But there is definitely a case that, when you cannot see people and you are just saying things to anonymous strangers on the internet, you do not have that empathy, in many cases. It is easier to be horrible.
Then there is the fact that online communities, as I am sure you will have heard ad nauseam, tend to radicalise and intensify themselves. You prove yourself to be the best in the community by being the most passionate about whatever that community is about. That is fine when you are talking about Taylor Swift albums and less fine when you are talking about your worries that there are too many people with non-white skin coming into the country, for example. I am sure we will get on to the fact that algorithms can play a role in pointing people in those bad directions.
There is also the Skinner mechanism or the reward mechanism. I have noticed this myself. You get likes and retweets for being forceful, provocative, punchy and rude. The easiest way to rack up 50,000 followers on Twitter is to turn yourself into a caricature of a human being. There is this really interesting phenomenon. I am not going to mention any names; I am sure you can think of them. There are people in politics who are utterly charming and pleasant to be with, perfectly educated, sensible and sane people, who turn into complete raving nutcases on the internet. I think that is partly because they have worked out that that is what it can demand.
One of the themes of my book was that online dynamics pull you in two directions. It is a very polarising thing. It emotionally rewards short, shouty and punchy. It also rewards thoughtful, considered, incredibly complicated, in‑depth and niche stuff. It is now easier to get access to better information from any number of sources than it ever has been in the world, but there is also this huge volume of frothy, shouty anger out there as well.
The Chair: Let us not forget, given all we have been through in the last year, the facilitation of discussion and engagement that online has brought us. You touched on aspects around design, which is where we will move the questioning on to.
Q200 Baroness Rebuck: Thanks, Robert. My question is about design and how it influences the quality of debate online. We have had witnesses talking about algorithms developing psychological insight to manipulate user behaviour. We know that algorithms can either augment or limit how a post is shared. One thing every single witness agrees on is that algorithms are opaque, not open to research and not open to scrutiny. We are at the mercy of the tech companies to decide when and where to modify their design and to impose frictions.
We had one witness who was quite pleased about some of the frictions introduced by Twitter over the past year, such as having to think twice before you retweet somebody else’s article without reading it. It was only, I think, last week or the week before that Facebook started allowing users some control over who could comment on their posts. Are there design changes that, in your opinion, should be encouraged to allow better debate online, ultimately so people can feel safer expressing their opinion?
Robert Colvile: Part of the problem is that people feel extremely safe to express their opinion.
Baroness Rebuck: It is not everybody. Apparently, over 50% of women do not express their opinion online precisely because they might get attacked. On one level they do and somewhere else they do not.
Robert Colvile: It is really striking whenever I get into a conversation with a woman who is in a similar job or role to mine. I get horrible things said to me, but the gender-based aspect of it is appalling. There are things you can do. I am reluctant to see them prescribed centrally, because I think that is a bit of a red herring. Friction is a good word for it, trying to create space. The problem is that, when Facebook got started, it explicitly redesigned everything around reducing friction because it found that that promoted engagement. There is a fundamental tension between metrics there, between what the commercial imperatives point you towards, what the technological imperatives point you towards and what is good for debate.
One of the themes of my book is that, as a species, we always choose speed, convenience and ease. Equally, there is a sense that it is not just Facebook and Twitter out there. There are hundreds of different forums in which people engage. The ones in which people generally have civilised, rational discourse are quite limited.
Baroness Rebuck: Which ones are there, for example?
Robert Colvile: This is a personal reply, but I subscribe to quite a lot of newsletters, in which clever people say clever things. Quite often they will have comments, Discords, forums or WhatsApp groups attached to them, which people can chat in. I worked at a newspaper for 10 years and policing the comment threads was appalling. Whatever the format is, whether it is 280 characters, Facebook, Reddit, 4chan or WhatsApp, at some point we might have to accept that the problem is the people, not the technology, if you see what I mean.
You can shape how people interact, absolutely. You mentioned Twitter. One of the interesting things I have seen them talk about is almost splintering Twitter and letting people set up their own Twitters, as it were. You could have the core technology and then splinter it into things. On the one hand, that might create space for people to coalesce around interest groups, with more civilised behaviour. It might also cause precisely the kind of division and polarisation that we are already seeing.
Reddit is one example that is quite interesting. It took a much firmer hand, empowered the moderators and booted off some of the more toxic subreddits. It is not perfect by any means.
Baroness Rebuck: What about something such as Clubhouse? It started to great acclaim in Silicon Valley, but recently there has been a huge amount of criticism about the lack of moderation in certain of its chatrooms.
Robert Colvile: This is something that we might come on to later. The issue of moderation is a really awkward one, because moderation is extremely expensive. You now have Facebook talking about safe harbour provisions: “Maybe we should get rid of safe harbour provisions”. That is an entirely self-interested, cynical move in many ways. It has now got to the scale where, if you do that, it is the only one that can moderate. In the German restrictions on social media, you have a similar system. You have basically created a system that rewards incumbency and makes it impossible to ever dislodge these companies.
One of the things we say in our report, which is a key principle, is that you should not punish these platforms for individual acts. There is such a massive swathe of content being created. To backtrack a bit, if you are going to be using AI, machine learning and other systems to help you moderate, that is even more expensive than hiring human beings to do it. It is the same issue.
You should look for and punish patterns of behaviour. If there are platforms that have repeatedly failed to act, when they were told, warned and shown that that stuff was happening on their platforms, that is what you should punish, rather than the fact that some people were saying something. Fundamentally, the people who are saying it are the problem with that, not the existence of a technology platform that allowed that to happen.
Baroness Rebuck: I do not want to stray too much into some of the other questions, so I will hand back to the Chair. Thank you for those answers.
The Chair: Thank you, Baroness Rebuck. The next question is from Lord Colville, I think.
Robert Colvile: No relation, by the way.
Q201 Viscount Colville of Culross: No relation, although I will not let the lack of a third “L” stand between us. Thanks very much for those interesting comments. We were talking just now to Senator Blackburn, who has talked about us being the product when it comes to the tech platform world. We have had a lot of evidence about the commodification of people’s data online by the tech companies. Some people have called this “the systematic mass violation of rights”. Do we need legislation to modify platforms’ behaviours, so they can guarantee to keep personal data safe and use it for the benefit of the data subjects?
Robert Colvile: That quote does not make any sense to me at all. People have signed up to these platforms willingly. They have not read the disclaimers, but they have signed the disclaimers. These platforms have existed for many years. If they were based on the systematic violation of rights, there would have been a very large class action lawsuit and there has not been.
There are three basic things you can do with data. You can sell the data of individual people. You can say, “We have noticed that Lord Gilbert was googling for antique furniture. Therefore, we are going to sell his name and information to someone who can try to put him on a marketing list for that”.
Much more interestingly, and probably much more lucratively, you have the aggregation of mass amounts of data. That will tell you that handsome men with glasses who live in certain parts of London tend to like antique furniture and they tend to be looking to buy at around 5 pm or 6 pm. This is when you should probably market to them.
Then you have the thing that has always happened, which is companies trying to use what data they have about their clients and customers to serve them better. If a company notices that you keep buying one product, it will list that in the set of things it knows about you.
Some of the scaremongering about this is quite excessive. It can feel quite creepy when adverts follow you around from site to site, but it tends to be fairly dumb and the anonymisation tends to be pretty good. There are some alarming things. If you go into Facebook’s lookalike audiences, for example, and get the things just right, you can narrowcast to very specific audiences.
Part of the problem I have is that, in the quest to protect people’s data, we have impacted on competition. I know this is about free speech rather than competition. GDPR is the absolutely classic example of this. It has basically made it impossible for anyone to displace sites such as Facebook and Twitter, because it has banned data portability, by trying to give everyone sovereignty over their own data, which no one is using. There are all those things we have to tick about cookies. No one is actually taking back control of their data in the way that the legislation theoretically allows you to.
It means you can move your data to another site, but you cannot move other people’s data. You cannot move your social graph and your friendship network with you, because that impinges on their own privacy rights. That means that, even if you could develop a better product than Facebook, it would be virtually impossible to populate it with people, because you would have to build it again from the ground up.
Viscount Colville of Culross: How important is interoperability in order to deal with that particular problem?
Robert Colvile: It is fantastically important, as are open standards and open access. One thing I wanted to mention is the App Store and the fact that Apple is now taking a 30% cut of every transaction that happens. People are having to pay it for the privilege of being on that phone. It is trying to prevent other people selling things via its phones, basically. That kind of thing is really interesting and important. The further we can get towards openness, transparency and interoperability, the better. I have a soft, romantic spot in my heart for the first version of the web, where we all thought it was going to be open and lovely, rather than building these very pretty but quite restrictive walled gardens.
Viscount Colville of Culross: One other thought you said earlier, which you are quite right about, is that an awful lot of us cannot be bothered to press “do you accept all cookies?” or “do you want to manage your cookies?” What about the suggestion that we have to opt in to be able to accept the cookies, in order for our data to be used? Would that change the dynamic at all?
Robert Colvile: The online advertising industry is quite hideous and messy. If you look at those lists of all the things you are being served when you go on a webpage, my God, it is awful. At the same time, do we want a world where the only advertising is channelled exclusively through the two big companies?
Q202 Baroness Bull: Thank you very much for all the answers so far. In the last hour, we heard Senator Marsha Blackburn describe social media platforms as today’s public square. It is not an original claim of course. We have heard the same from the founders of Facebook and Twitter, and we know that the US Supreme Court agreed with them. Senator Blackburn also noted the obvious problem that access to that square is through a very small number of very powerful keyholders. The difference is that the rules of engagement are not set by a relevant public body that has democratic accountability and whose motives include the public good. If these platforms are indeed our new public squares, what obligations come with that status?
Robert Colvile: It is a really interesting question. The obvious point to make is that there is not just one public square now. There are hundreds and thousands, and people can wander between them, so it is an imperfect analogy. As Donald Trump has shown, after being booted out of one square, you can still get your message heard, even if it becomes harder.
The thing that defines the public square, if you go back to the Jürgen Habermas concept, is freedom of speech, which is what this inquiry is about. What was new and radical about the public square was that it was a place where you could say stuff without government or bishops sticking your head on a pike for saying it. It had quite an extraordinary, very tumultuous and turbulent, but ultimately positive, effect on humanity. That goes back to my point about legal and harmful. Governments are trying to use the tech companies almost as private contractors, like a private security force for Canary Wharf, and turn it into a pristine space with its own rules.
The problem we have is partly that the big tech companies will say to Government, “Give us rules. Tell us”. They do not want too many, but, if there is a system of rules in place, they are happy to enforce it. That risks a situation, as I have said, where government is regulating by proxy. When they started out, these companies were quite often animated by a very libertarian west coast absolutist free speech position. Over time, that has shifted to a position where they will probably err on the side of caution. If there is stuff that is going to hit their share price, get them in trouble with the regulators or whatever, they will probably take precautionary measures to get rid of it. Over time, that tends to have a chilling effect on freedom of speech.
Baroness Bull: We heard from witnesses that in fact there are not hundreds of public squares. Of course there are small providers, but, in fact, because of the scale of Facebook and Twitter, most of the users on those smaller platforms are also on Facebook. That is the place where they get their message heard.
Robert Colvile: I would not put Twitter in that. Twitter has 500 million active users, I think. Facebook is in the billions.
Baroness Bull: It was a Facebook comment. You are absolutely right; it was in relation to Facebook.
Robert Colvile: I see that my co-panellist has joined us, so I should probably let her catch up.
Baroness Bull: Hello, Chi. Welcome. While she settles in, you made the point that the platforms are looking for Governments to give them advice. How does that work in an international environment? Not all Governments will want the same regulations to be in place and these are global platforms.
Robert Colvile: This is something that we in the UK need to be aware of a lot more than we have been. We are quite frequently providing excuses for much worse countries than ours to do much worse things and then look back at us and say, “But they are doing it in the UK”. If we say, “You can no longer comment anonymously on the internet”, that might make sense in saying, “Let us make everyone nicer on Twitter”. We can all think of the countries where that would be an absolutely horrible idea. One reason I do not like the original online harms proposals is that there was an awful lot in it that very bad regimes have already been pointing to and saying, “If the UK is doing it, we can do it too”.
The international nature is obviously an issue. Not a lot of what is being said by British people is being said on platforms that are owned by British companies and within the control of the British state. At a macro scale on all this, we face an interesting choice. The US has traditionally provided leadership on digital regulation. In the Trump years, that went away because they did not really believe in regulation that much, until Trump then decided that it should be illegal to say rude things about him and you got into the stuff about safe harbour. Europe has also played a leading role in setting standards, which, as I said, with GDPR, I personally think was a really bad idea.
It is not just us. There is a tier of countries, including the UK, Japan, Singapore and Korea, that are trying to work out how they do this and whether you need something international. This is not just about your specific question, sorry. This is a wider thing. They are trying to work out how you do this and whether you can come up with your own rules or whether you need to tilt towards the American model, the European model or, God help us, the Chinese model.
Baroness Bull: Can you give any specific examples of other countries that are looking to the UK and saying, “If they are doing it, so can we”?
Robert Colvile: Off the top of my head, I cannot remember. I think I remember something about Turkey. There was some bad stuff in Turkey. I know it is happening; I just cannot remember off the top of my head. I can send you some stuff afterwards if that is helpful.
Baroness Bull: Lord Gilbert, may I bring in Chi on that question?
The Chair: Chi, welcome. Thank you very much for joining us. I think you have been delayed by a vote.
Chi Onwurah: Yes. I present my apologies. We had two votes in the Commons just as my Bill Committee was coming to its end, on time, so that delayed me. I apologise for being late.
The Chair: No, not at all. Thank you very much for joining us. I introduced you at the top of the session, for the record. Chi, as well as speaking for the Official Opposition on these issues, has a background in tech regulation. It is very kind of you to give your time. We will backtrack over some of the questions we have already put to Robert. We have covered a number of areas, such as platform design, the violation of rights by platforms and the obligations of platforms as public spaces. We will go back over those points. Then we will come back to both of you for some final questions around competition. Let us pick up with the question that Baroness Bull was asking Robert first.
Baroness Bull: I will not ask the entire question again, Chi, but it was really to note that the claim is made that digital platforms are our new public squares. A Supreme Court case agreed with that. But they are not regulated by public bodies and the keys to those public squares are held by a very small number of individuals. What obligations come with that status of being today’s public square?
Chi Onwurah: As well as being the shadow Minister for Digital and having had experience of regulating these emerging platforms when I worked for the Office of Communications as head of technology from 2004 to 2010, I spent 20 years before that as a chartered engineer, building out the networks that are now the internet and the web. My position is both as a creator of technology and a fundamental tech evangelist who believes in the progressive power of technology, and, as Lord Gilbert mentioned, as a previous regulator and now a shadow Minister.
The idea that these are public squares has much to recommend it, in that they are used by the public for the exchange of thoughts, ideas, content, et cetera. As you said, we have yet to regulate them or bring them into the civic domain. One key difference between a public square and Facebook is that the public square is within the regulation and the requirements of existing legislation. Facebook is also within existing legislation, but that legislation has not been adapted or evolved to reflect the power of Facebook as a public square. Nor does Facebook have any of the legitimacy that comes from a public square being controlled and regulated by some civil authority.
You are right to suggest that obligations come with that. I believe those obligations should be set by legislation because I do not believe they can be chosen by the platforms, as they would then be allowed to mark their own homework. Some of that must come from a basic understanding of what our digital rights should be. One thing that has been hugely neglected is the evolution of digital rights. When you understand what your digital rights are, you can understand what the obligations of the platforms should be.
If we had not had a Magna Carta or the establishment of people’s rights, how would you know what your rights were as you walked through the public square or Hyde Park? We need that basic establishment of people’s rights in the digital age. The online harms legislation we are talking about reflects a duty of care approach and that is one of the obligations. There should also be an obligation to protect free speech, to protect people from harm and—this is where it becomes challenging—to ensure that different voices are heard. Without that, it is the strongest and loudest who dominate the public square, in a way that is not allowed in my local park, for example.
Q203 Baroness Rebuck: Welcome, Chi. My first question was really about the design of platforms, how that influences the quality of debates around algorithms and the extent to which algorithms influence user behaviour. We all know that algorithms are opaque and not open to research or scrutiny. As you have intimated, the tech companies, such as Twitter and Facebook, decide where and when to impose any frictions. Are there design changes that, in your opinion, should be encouraged or, for that matter, legislated for that might allow better debate online and therefore everybody, women included, can feel safe expressing their opinions? We, of course, know that many women do not express their opinions online because it is such a toxic place for a lot of them.
Chi Onwurah: Absolutely, research shows that women and minorities in general are targeted. It is an excellent question. I always say that, if it is not diverse by design, it will be unequal by outcome. One of the key underlying factors to this technology and these platforms is that they have not been designed for the public interest or diversity. Effectively, algorithms industrialise bias. They industrialise other things too, such as reach and network interaction. The issue here is that the industrialised bias is that of their designers and of the training data, which is critical because, as we know, data for everything generally tends to skew to men and against minorities. There is Caroline Criado Perez’s book, Invisible Women, about the lack of equality within data.
Any algorithm, such as one for facial recognition, will be trained on biased data and designed by people who are not representative of society and, crucially, are not thinking about the opportunities to have diversity in there. It will mean that certain voices and positions are unheard. How could that design be changed? I want to champion the principle of ethical design for equality. People from my previous profession as an engineer are supporting it now. Unfortunately, it is not everywhere and it is not a requirement.
For example, platforms could minimise the exaggerated effect of pile‑ins by ensuring that people can choose to see only those who are not anonymous or who are trusted. That is one of the things we are looking at. It is not to take away anonymity, which I know is something that you are looking at. We want to ensure that you can choose whose comments you can see by whether they have chosen to be anonymous. We have research on the algorithms that tend to ensure that you go to more and more extreme views. Those algorithms could be changed so that we have a greater diversity of views. I am a real believer in technology. It can do just about anything that we want it to do. The real question here is determining what we want it to do and what the rights should be of those involved and impacted.
Baroness Rebuck: Can I ask you about anonymity? I am really interested in your views on this. It is a difficult subject. If you are somewhere else in the world and under a repressive regime, your anonymity will be very important online, but in the UK at the moment, people get away with a huge amount anonymously. A lot of unpleasant things are said. Would you still argue for people’s right to be anonymous online in the UK?
Chi Onwurah: I would. I want to touch briefly on a really important point that Robert Colvile made about what other countries see the UK doing and how that influences what other countries and more repressive regimes do. I do not think we should design our public regulation of the internet on the basis of what other countries and regimes may do. This is really important. We should be much more engaged and involved in where the standards are being set. As a country, we have almost left standard‑setting to private companies and others; we need to be there in order to ensure that the web and the internet remain as free as they have been.
When it comes to anonymity, it is very complex and nuanced. I do not think it is right to take away the right of users to be anonymous on platforms. People may choose to be anonymous for many very good reasons, whether because of their sexuality or because they live under a repressive regime. People should, though, have a choice about whose comments they see and with whom they interact. The platform needs a way to guarantee that, if you decide to be trusted and to share your identity, you can choose to interact only with people who have also done that.
Anonymity and identity form a spectrum, with different levels for different environments and requirements. Just as you do not have to show six months of salary statements to buy a yoghurt in your supermarket, but you do for a mortgage, there should be levels of anonymity. I would not remove it altogether, because it is very important.
Baroness Rebuck: That was fascinating. Thank you very much for your answers.
Q204 The Chair: This is not the principal thing we are looking at, but the committee has been interested in it for a long time, and you touched on it when you were talking about bias in algorithms and tech. How do we get more women, girls and people from minority-ethnic and other minority backgrounds to go into tech careers? It seems to me that tech companies are not the old‑fashioned macho environment that banking and other industries are, although there may well be issues within the organisations themselves. They seem very cognisant of the fact that they need a broader workforce in order to better represent society and produce better products. Is there anything in public policy terms that we should be doing to encourage more girls, women and people from a diverse range of backgrounds to enter the industry?
Chi Onwurah: This is an issue I have spent so long thinking about.
The Chair: You have done it.
Chi Onwurah: The fact is that, unfortunately, little has changed. There is a greater recognition of the importance of diversity, but the figures do not reflect that, particularly in engineering and tech. The life sciences have done a lot better. There is much that can be done through public policy, such as celebrating women in tech and placing reporting requirements on all technology companies. I was very struck when I visited either Google or Facebook in Silicon Valley. I sat in the canteen, and those working in the canteen were incredibly diverse. In fact, they were almost all from ethnic minorities. We went into one of the engineering meetings and there was a total absence of diversity.
My point here is that we need to offer learning and skills. I would place a requirement on companies to offer lifelong learning opportunities to the employees they already have, who may come from different areas of the business. More generally, the Government is trying to do this through digital skills training, but that is very basic. Not everybody makes, at 14 or 16, the choices they will later wish they had made, so those skills opportunities are needed.
Google or Facebook, whichever it was, looked at me astounded when I suggested that, if they worked with some of the people in their canteens, they might be able to improve their diversity in tech. That was not what they did. It was very elitist: “engineers here”. You had to have gone to X university and got a first, and everybody else was somewhere else. We need to break down the boundaries, because innovation, technology and engineering should be for everyone. They are part of everyone’s future.
The Chair: In the future, we might come back to talk to you about this, given your expertise and experience in the area. It is something that the committee revisits time and time again in the sorts of areas that we look at.
Q205 Viscount Colville of Culross: Chi, thank you very much for joining us. You talked about establishing digital rights and obligations for tech platforms. We have heard evidence that the commodifying of people’s data online by tech companies can be seen as a systematic mass violation of rights. Of course, we have the GDPR, but Robert Colvile, when I asked him earlier, said that he thought that that sort of compliance was a weight on new entrants coming into this market. Do we have to be very careful about how we extend the protection of data online? Should we do more to guarantee that personal data is kept safe and used for the benefit of the data subjects?
Chi Onwurah: I feel really strongly that people need to have rights over their data. The idea of a systematic mass violation of rights falls down, inasmuch as the rights are not defined. We have not defined our data rights. GDPR does a lot, but it was seven years in the making and it does not address the forward‑looking issues. I would like to see people have rights over their own data, because we leave such a trail of data. That data, which defines our digital presence, is of so much value to others. We will never succeed in keeping people safe and enabling people to keep themselves safe and secure, which is the important point, without giving people clear rights over their own data and combining that with the ability to share that data. I hope that there will be the growth of a whole ecosystem of data-sharing applications and requirements.
This was very amusingly set out by someone on Twitter just the other day. There is an idea that, because compliance is a burden for small companies, we should not have regulation, rules or protections. But what would that mean for food safety, or for health and safety when you go to a soft play area, for example? We should have a set of rights and requirements, and then help small businesses to meet them. Often, an ecosystem grows up to help small businesses and support a more level playing field. Right now, we are at a point of huge consolidation of data—I know that is one of the things that the digital markets unit is going to look at—to which many small companies cannot get access.
When many of the small businesses in Newcastle moved online because of Covid‑19, it was very noticeable that they suffered from the fact that they did not get digital footfall. Public spaces are designed so that real footfall comes past businesses; once those businesses went online, their digital footfall was controlled by Google and Facebook. We need digital rights for citizens that also support and help small businesses, make it a more level playing field and give people’s data real value. There is huge opportunity to provide real value through data sharing, but that value should be in the control of, and accrue to, citizens.
Viscount Colville of Culross: I am not a technologist, so I do not completely understand what a data-sharing application would be and how that would work within the present data privacy legislation that we have. Can you talk to us a little more about what that would mean?
Chi Onwurah: One of the arguments against giving people rights over their data is that individuals would never have the time or energy to keep control of all of it. We know how few of us read the 20‑page terms and conditions when we join a new app. We see a model for this in open banking, where you can have an application into which you feed, once and once only, how you want to share your data. You might want to do one thing with health data, something else with finance data and something else again with the data about how you use Netflix. An app would then act as your data sharer on those terms.
There are also data trusts, which would be on a more communal basis. People will sign up for a data trust and say, “You can use the data on my daily temperature in order to support the NHS but not in other ways”. These are different ways of mediating data sharing, which is complex. Right now, that data sharing is entirely mediated by GDPR, which is about data protection, and then by large companies, such as Facebook, Google, Instagram and Amazon, that have access to your data.
Robert Colvile: As another example of that, maybe you would pay 10 quid a year to use Facebook if you did not give it any of your data, and you would get it free if you did. You could also flip it, so that you find some mechanism for people to be rewarded every time their data is used. The problem is that an individual’s data tends not to be very valuable on its own. I have seen some ridiculous claims that, if we charged big tech companies to use our data, we would be able to pay tens of billions of pounds into public services. The numbers do not really add up on that.
The Chair: We have covered quite a lot of ground. We will come back and bring this to a close with a discussion with both of our panellists about competition policy.
Q206 Baroness Grender: Thank you so much, both. The digital markets unit is the new thing on the block, so to speak, in trying to regulate these issues— in particular, as you mentioned, Chi, the overwhelming dominance of Google in searches, both on desktops and on smartphones. What is top of your hitlist for the DMU at present in order to create stronger competition? To get your thought juices flowing, if that is helpful, could it be, as a priority, looking at restricting Google’s ability to pay to be the default search engine? Could it mandate interoperability in the social media market? Could it perhaps have some separation remedies to address Google’s dominance, particularly in online advertising? Finally, we have looked quite a lot as a committee at the Australian‑style bargaining code. What would be top of your list?
Chi Onwurah: That is a fantastic question and I wish I had thought of it myself. I am very pleased to see the digital markets unit. When I worked at Ofcom, it was clear then, and is increasingly clear now, that existing market regulation does not reflect the importance of data. That brings me to my first priority, which would be digital advertising, because there is concentration there. It is the business model of Google, Facebook, Twitter and everyone apart from Amazon, and it is vast and inconsistently regulated. My second priority would be enforcement powers: right now, the digital markets unit has lots of recommendations and no powers of enforcement. It should be looking at and understanding these new markets.
To the point about interoperability, we know how rarely people change banks, for example, or switch phone providers. If we are to break some of the network effects that drive the power of these platforms, we need to look very closely and rigorously at interoperability and at what kinds of requirements to place on these network giants. There is an argument that these platform giants are simply so big that we cannot make their markets competitive; we then need to think about what we do in that case. So: look at advertising as the driving force, at interoperability as something to help people switch, and then give the unit enforcement powers.
Robert Colvile: It is a fascinating question and we could go for another hour on it. This is, in many ways, at the heart of the issue for me. There is a tension: we are all now working out that traditional monopolies regulation does not work or make sense in these areas. There may have been cases of these companies exerting their market power at some point in their rise; you can think of Google pulling reviews of restaurants on to Google Maps and essentially pushing other companies out of business. But these companies have got to where they are because they have offered a really good product in markets where network effects tended to push toward, and reward, monopoly.
It is really hard to see how anything you can do is going to have more than a marginal effect. Microsoft can spend billions of pounds trying to develop a rival search engine; if anyone has ever used Bing, I congratulate them. It is going to be really hard to pull people away from Google. At the same time, many of our traditional definitions of consumer harm do not really apply. Facebook and Google absolutely dominate the online advertising market and that is a big issue. I share Chi’s concerns but, at the same time, it is much harder than with traditional old‑fashioned monopolies to show that people are being harmed. Facebook and Google provide us with fantastic services without us paying anything directly. My Amazon Prime subscription gives me enough videos to entertain my kids for 30 years, plus free packages. It is a wonderful deal. These markets are not susceptible to setting up rivals in the same way. Interoperability is a huge part of it.
I am worried about sliding into a world where we accept this and, in effect, turn these companies into utilities: we regulate them in such a way that we accept that they are there, big and dominant, and try to force them to do certain things or to skim off some of the money. You get quite an ossified system. I mentioned the App Store earlier. I am very annoyed because I cannot remember where I read it, but there was a really good essay recently about its indirect effect on innovation. Anyone can put something in the App Store as long as it meets Apple’s guidelines; there is no barrier there. At the same time, compare it to a world in which the web had remained open and you can see that there is a very big difference.
As far as I am aware, the DMU is uncharted territory. I do not think anyone else has yet set up anything like it, although others are bound to follow. The initial CMA review was a good first step and its focus was good. We should not regulate as though Google, Facebook and Microsoft are the only companies that exist or have ever existed. Microsoft, by the way, gets off fantastically lightly. It is making just as much money as any of them, but it does not attract nearly the same amount of attention, because it is not putting any newspapers out of business.
We need to think very carefully about new entrants and incumbents, about the fact that tech is overwhelmingly economically beneficial and Britain wants a thriving tech sector, and about the ability of new companies to come in from overseas in due course. I was talking to someone about this, and he said that, in 10 years’ time, we might be more worried about giant Indian tech companies than giant American tech companies. That is slightly rambling, but there is just so much to get into here.
Baroness Grender: Just so I am really clear on what your hitlist would be for the DMU, is it therefore that it would be about new entrants rather than anything else?
Robert Colvile: New entrants, interoperability, making sure these companies do not abuse their market power, and working out where that is happening are the core issues. Treating it as though there were a world in which you could set up another search engine is probably not going to be very useful, if you see what I mean. You should absolutely punish abuses of market power. There are some other interesting ideas. Ben Thompson on Stratechery made the point about whether Facebook, a social network with a billion users, should be able to buy WhatsApp and Instagram. When Facebook bought Instagram, Instagram had 12 employees. There was nothing illegal about it; by conventional monopolies law, there was absolutely nothing wrong with it, but it basically meant that Facebook ended up owning all the alternatives to Facebook.
Chi Onwurah: Can I come back on this question of harms? It is important to recognise that consolidated market power is, necessarily, a harm. It leads to consumer harm because it reduces innovation and competition. While I agree entirely with Robert that these applications provide huge benefits, they are also, by definition, doing harm if they are in a position of market power.
Those of us who have been in tech for a while, as unfortunately I have, know that there has been a history of what we call evil empires: IBM was the first, followed by Microsoft, Apple and Google. I fundamentally believe that competition will bring challengers to the existing giants, if we put in place the appropriate regulation and rights. The difference with Microsoft is that you pay it directly for its goods, so it is subject to the existing understanding of competition. Robert’s point about ensuring competition is really important, but it has to be recognised that one of the harms of market power is that it reduces competition and the opportunity for the next great competitive battle.
The Chair: Sadly, we have run out of time. The session has gone very quickly and has been very interesting. We have had two very knowledgeable witnesses. We are getting towards the end of our evidence. We have had a lot of evidence and will be producing a report in due course. The discussion today has illustrated some of the very complex issues that we will be considering. Your evidence, Chi and Robert, has been very useful to us. Thank you very much indeed for taking the time out this afternoon to be with us.