Select Committee on Democracy and Digital Technologies
Corrected oral evidence: Democracy and Digital Technologies
Tuesday 16 July 2019
Members present: Lord Puttnam (The Chairman); Lord Black of Brentwood; Lord German; Lord Harris of Haringey; Lord Holmes of Richmond; Baroness Kidron; Lord Lipsey; Lord Lucas; Lord Mitchell; Baroness Morris of Yardley; Lord Scriven.
Evidence Session No. 1 Heard in Public Questions 1 - 11
I: Baroness O'Neill of Bengarve.
Q1 The Chairman: As you know, the session is open to the public. A webcast of it goes out live and will subsequently be accessible via the parliamentary website. A verbatim transcript will be taken of your evidence and put on the parliamentary website. You will have the opportunity to make minor corrections for the purpose of clarification or accuracy. Perhaps, Baroness O’Neill, you would like to introduce yourself, although everyone here knows you.
Baroness O'Neill of Bengarve: For present purposes, as I am a witness in this session, I am not considering myself a Member of the House. I am by trade a philosopher. I have for some 20 years been writing on ethical issues that arise from communication, and what you might even think of as pseudo-communication. Inevitably, 20 years ago, that did not engage with digital technologies. Latterly it does, and many of the things that I am currently engaged in involve people who are far more expert than I on digital technologies. That probably tells you what focus I have on this.
Q2 The Chairman: In fairness, we have you and Lord Lipsey to thank for the fact that we exist at all. Over the next few months, we may thank you or curse you. I am not sure which, but for the moment we definitely thank you.
I will kick off with the first question, which is slightly long. What do you see as the current and possible future relationship between democracy and digital technology? To what extent are democratic problems caused by changes in technology, or to what extent are wider problems in democracy manifesting through online social platforms? That is a bit of a mouthful.
Baroness O'Neill of Bengarve: Let’s try to pull that apart in several directions. Yes, I think digital technology creates problems for democracy as we have hitherto had it, but the technological changes are not the only changes that are relevant. The most relevant changes are probably not those that people are familiar with and which are to do with social media, which are only a subset of uses of digital technologies. They are to a very great extent to do with the business models by which the tech companies are organised and the market relations that exist between the companies that use these technologies, their users—note: not customers—who supply the data, and their relationship to those to whom they sell those data for the purpose of providing advertising.
So, I do not think it is the technologies themselves that are the problem, although they may be in some respects. It is also a question of the control of those technologies. Who is controlling them? For what purpose? For what profit? Above all, who is paying to use them, not of course for social media purposes, though social media provide the distribution network, but sometimes for political purposes that are quite damaging to democracy?
The Chairman: Where do you see the future possibilities lying whereby we could utilise technology and limit the damage?
Baroness O'Neill of Bengarve: We have seen a very big change in outlook of the tech companies in the last year. I noticed it first with something that Sir Tim Berners-Lee, the father of the World Wide Web, said in September. He said that he had always fought to keep the web open, but he realised that it had become an engine for many dreadful things. He has a project, which I do not understand but you may wish to come to understand, for dealing with that.
I find it quite notable that, since then, the tech companies have changed their tune. They were very much against regulation until a few months ago. I was startled, when I chaired a session of the Westminster forum about four months ago, to find a very different view. Of course, we know now that even Facebook has come on board with this. You may say that I am not being sufficiently cynical and that what they mean by regulation is “regulation that suits us”, although they are explicit that it is no longer self-regulation. I do not know what their thinking is, but it seems to me that there has been a very big change in climate in recent months.
Lord Harris of Haringey: You talked about the business model of the service providers. Could more be done to regulate how their business model operates? Would that be by an extension of the GDPR to cover some of the data that they hold about service users, which clearly informs those who wish to target advertising or messages to particular people? Could or should more be done about those business models?
Baroness O'Neill of Bengarve: I am sure that more could be done, but I am less sure about what the effective combination of interventions would be. One starts at one end by considering the regulators that we already have in the UK and noting that each of them has quite limited powers. The current powers of the Electoral Commission or the Information Commissioner are rather specific, and there are things that they cannot do.
The question of the adequacy of the powers of the existing regulators is important, and the suggestion that has already been made by other committees about having a generic regulator or meta-regulator may be relevant there. There is a lot of detail in there that we might want to come back to. I do not regard regulation as ipso facto wrong.
I do think, however, that our debates about communication have been stymied by a very narrow focus on issues of privacy, which is then exacerbated by the way privacy is interpreted in data protection legislation, both the previous directive and the current regulation, which I believe is nearly out of time and will not serve us well in future.
I am even more struck by the narrowness of the ethical standards that people talk about when they talk about technology and data. For example, there is enormous emphasis on privacy, quite apart from data protection, and the assumption that the generic right is what is now called “freedom of expression”. One has to note that that term is only as old as the human rights documents of the 1940s, although it may date to the 1930s, and I believe that it is frequently confused with rights to self-expression.
They are quite different things. When John Stuart Mill wrote about rights to self-expression, he had in mind individuals expressing themselves and said, I think very plausibly, that that should be regulated only when the speech—that is, the speech act of the individual—causes harm.
In the 20th century, we have seen the inflation into this generic “freedom of expression”, which of course can seem a nice way to do it because you cover material that is printed, spoken, broadcast, put online and so on. But in the process we have seen an enormous switch of focus to consider the rights of the speaker/writer/broadcaster while ignoring the rights and requirements of the listener, reader and so on.
Lord Harris of Haringey: I am sorry. I had forgotten the Chairman’s instructions to recite my interests when I first spoke. I am a board member of the Cyber Security Challenge and chair the Independent Reference Group of the National Crime Agency.
Lord Lucas: I have no interests to declare. I am concerned about the algorithms that the big companies use and how we should keep our eye on them, not only the search but the recommendation algorithms.
Is it practical that we should actively take an interest in how those algorithms are working and trying to pull the companies up for biasing the spread of opinion one way or the other, or do we have to rely on some other mechanism for trying to make sure that the picture presented to us by, say, Google Search is an accurate representation of the information that is out there, rather than one that is biased according to their commercial or political objectives?
Baroness O'Neill of Bengarve: At present, we do not know which algorithms have been involved in the material made available to us or indeed foisted on us. One has to ask in particular to whose benefit anonymity about the process is. This affects the public interest in many, many ways, but at present it is precisely the side of the process that we do not know.
That is why I began by emphasising the importance of not confusing the questions with which this Committee is dealing with questions about social media. Social media take a recipient perspective, and the innocent recipient does not know which algorithms have provided him or her with whose information, misinformation or indeed disinformation. The algorithms are key, but at present they are regarded as proprietary.
The Chairman: We will come on to anonymity a little later. I think Baroness Kidron has a supplementary question and a main question.
Q3 Baroness Kidron: I also have a lot of interests to declare, I am afraid. I am a commissioner on the UN Broadband Commission for Sustainable Development, I am on the technical board of the WePROTECT Global Alliance, I am a member of the Council on Extended Intelligence, I am a member of the UNICEF artificial intelligence and child rights policy guidance group, I am chair of the 5Rights Foundation, and I am a regular contributor on issues relating to digital technologies and childhood.
I want to go back to something that you have already touched on, freedom of expression, because so much of this is in that arena. I would particularly like your view on how an individual’s right to freedom of expression extends into the privately-owned spaces of the platforms.
Baroness O'Neill of Bengarve: I would approach it this way: although we constantly emphasise both freedom of expression and, as you have narrowed it, individuals’ rights of expression, I do not think we suppose that those principles or norms stand alone. There are many norms other than freedom of expression that are relevant to speech acts. We are entirely willing, indeed keen, to see freedom of expression curtailed when another norm is violated; I give you the law on defamation as a simple example.
We have multiple standards, some of them dating back to antiquity. It has been very curious to see in the last century the degree to which freedom of expression has seemed to many people to be the generic speech right, perhaps supplemented by things that we find on freedom of religion in human rights documents. Freedom of expression is far from being the only relevant standard; one is always dealing with a multiplicity of standards. Some of them are epistemic standards—for example, accuracy, reliability and respect for evidence—and some are ethical standards, among them honesty, truthfulness and so on.
Baroness Kidron: Private companies have gatekeeping policies but then cite freedom of speech as a reason for not upholding their own policies. Do you see that as a potential place to look for broadening out the balancing forces?
Baroness O'Neill of Bengarve: It is extremely appealing to the companies to say that what they take most seriously is freedom of expression—they do not really use the term “freedom of speech”—because it is above all a way of marginalising recipients. Others have rights to freedom of expression that mean that whatever information they choose can be put into the public domain, sent to people or disclosed.
However, when push comes to shove, it turns out that people are keen on some other standards. I mentioned, not entirely favourably, questions about privacy. In my view, there has been a serious attempt to deal with the privacy of users of digital technology. It is serious, but at least as we have developed it in the European Union—we have developed it further here than anywhere else as far as I can make out—I suspect that it will just not work. I will try to explain why.
Data protection depends on being able to segregate personal from non-personal data. However, there is unfortunately no criterion by which to identify personal data. In the last few years, I have read an increasing number of publications where people have inferred what are thought to be personal matters from public domain datasets. Once you can do that, you see that the days on which we can rely on data protection as a way of securing privacy are probably numbered, because AI will make such inferences more possible.
It is not that everybody can be identified in all respects using these technologies, but many studies show that 12 per cent or 15 per cent of the individuals whose personal data are supposedly protected are identifiable; for example, hospital patients. I do not believe that it is just a technical deficiency; it is in the nature of the case. Once there are more artificial intelligence approaches, it will be easier.
I have spent the past few days meeting some Chinese AI people and philosophers. It occurred to me that they probably have an easier road than we do, because they are less hung up about privacy.
Baroness Kidron: That leads beautifully to my last question, which is on the relationship between freedom of expression and the idea of a private company moderating content. Do you have some thoughts on that relationship?
Baroness O'Neill of Bengarve: We have always had moderation in the world of print. The world in which we have publishers, editors, serious journalists, the law on defamation and intellectual property law is one in which moderation is constantly taking place and is nothing new—if you like, we could call it mediation, which is what the media do.
However, who does it is very important. The present suggestion that it should be the tech companies themselves may be insufficient. We know that in recent months they have taken to employing a large number of fact checkers, which is all to the good. On the other hand, fact checking is a difficult business, because speech content does not immediately give everything away. We have only to think of the speech acts with which we are all familiar, like parody—think of Private Eye; think of the things that can be put into the public domain. Although they do not apparently defame anybody or breach privacy, actually they have done it.
There is nothing new here either. Montesquieu did it in Persian Letters in the 18th century. Censorship forbade him to talk about the politics of the Ancien Régime, so he set it in a fictitious ancient Persia and everybody understood.
We must be sophisticated about what hides what from whom. Companies alone are therefore unlikely to be able to police this.
Q4 Baroness Morris of Yardley: You have begun to answer the next question, but I will put it anyway. I have no relevant interests to declare for this inquiry.
The Online Harms White Paper requires social platforms to moderate content. Perhaps you might say a little more about how that would affect the right to expression, taking a bit further the reply that you have just given to Baroness Kidron but concentrating on how it affects the individual’s right to expression.
Baroness O'Neill of Bengarve: The individual’s rights to expression are qualified rights under the European Convention and our Human Rights Act. Human rights are not absolute rights. We already have, and it is essential that we do, a number of rights other than freedom of expression, such as rights to privacy and the right not to be defamed.
There are, of course, detailed questions to be asked about how two rights, or n rights, are to be accommodated in particular situations, but this is not a question of the particular vulnerability of rights to freedom of expression. These rights have always been qualified. It has always been possible to be arrested for defamation, breach of copyright and countless other speech offences.
Baroness Morris of Yardley: The White Paper requires a duty of care, and I suppose there might be a tendency for the companies to be more cautious than they would otherwise be if they did not have that duty. I imagine that that is quite difficult to define. Might we end up in a situation where, because of that duty of care, something might become illegal in this context that would not be illegal if you said it in another context? Does that cause a problem?
Baroness O'Neill of Bengarve: Context is all when we are talking about how rights are mutually qualifying. They are not unconditional, so it is rather difficult to answer in the abstract. The thought that this can be done by identifying online harms has its natural home in a discussion of social media. We have had a number of debates, including last Thursday, borne of this set of issues, and we have Uncle Tom Cobley and all producing codes for online communication. It goes all the way from the OECD to the Church of England, as we heard last week, and everywhere else in between. The Chinese have very nice codes, too.
I have to say that I am pretty unimpressed by the thinking that publishing a code will get us very far, for two reasons. First, there are multiple codes; this weekend, I heard someone say that she thought that there are probably 70 or 80 out there now. Secondly, they are not all identical. They are very nice and very bien pensant, but they are issued without any attempt at justification or clarity about their implementation.
We must be duly sceptical about codification as a remedy. I can think of lots of nice things in this space but thinking of effective things is seriously difficult.
Q5 Lord Holmes of Richmond: I have a declaratory interest as deputy chairman of the Channel Four Television Corporation.
Is preserving the option of anonymity online desirable?
Baroness O'Neill of Bengarve: That is a very good question. I would put it this way. To many of us in liberal western democracies, which may survive, anonymity has seemed desirable because we have had in mind particular things like investigative journalists in oppressive regimes. However, precisely because human rights are qualified rights, one cannot generalise out from the need to protect the anonymity of a journalist reporting on, say, the way in which women in Afghanistan are treated. Because one is concerned about that, it does not follow that anonymity is a generic right.
I will give you a bit of philosophy; I am sorry, I cannot resist this one. I quote Plato reporting on why Socrates did not like writing. You will find it in Phaedrus. Socrates is said to have disliked writing because, as he put it, your words go everywhere and there is nobody there to interpret and protect them. You do not know to whom they will speak, and so on. I imagine to myself that in an ancient world—it could be China or Greece—you might see an inscription and think, “Who put that there? What does it mean? Does it mean anything? Is it a curse? Is it for me? Who is it for?”
Decontextualised writing raised enormous problems more than 2,000 years ago. The question of when anonymity is needed is highly contextual. It is sometimes needed, but in my view one of the places where it is not needed is in exercising civic rights. As a citizen, I do not stand behind a hedge and throw stones; I stand in the public square and speak.
Q6 Lord Black of Brentwood: I have interests to declare. I am deputy chairman of the Telegraph Media Group, director of both the Regulatory Funding Company and the Advertising Standards Board of Finance, chairman of the Commonwealth Press Union Media Trust and vice-chairman of the APPG on media freedom.
Over the past 10 years, political campaigning has been transformed, not least because of the intervention of outside parties—that is, people who are not directly involved in the election and who are often from abroad—in the contests. What rights should an individual in those circumstances have to communicate their speech through paid online advertising, programmatic advertising and automated accounts?
Baroness O'Neill of Bengarve: I am tempted to say that, as they are exercising a civic function—a citizen’s function—they should have the same rights as citizens, but that these should not include anonymity or hiding who paid whom for what. It seems to me that we do not have that at present.
As I am sure members of the Committee have, I have read a great deal about the difficulty in knowing where the content comes from, with the consolatory rider that we do not know whether it has been influential. I do not think that is the problem. It may or may not be influential, but anonymous interventions of this sort are extremely destabilising for democracy.
That is why we need to think about the borderline. The sorts of things which I imagine the Committee might want to look at include the imprint to show who placed an ad and how much was paid for it. We do as much for physical print—for example, the Advertising Standards Authority has standards—but we do not do it for online. Just saying, “We don’t know whether this is influential”, seems a weak response to me. We may not know. On the other hand, if it is, we will have subverted democracy, which we must try to sustain above all.
Lord Black of Brentwood: Given that these issues are heightened during election campaigns, are changes to electoral law in this area one likely route that we could look at, as opposed to direct regulation of the platforms, which would be complicated?
Baroness O'Neill of Bengarve: It is very important to look at electoral law. At present, the Electoral Commission does not have adequate powers. I and others have asked questions about this recently, and Lord Young of Cookham has replied, “Oh, we have consultations going on”. I do not know what is going on, but I fear that if we suddenly find ourselves in an election or a referendum without any reform of electoral law, we might be in considerable trouble.
One thing that is clear is that the Commission’s powers need to extend beyond the immediate period of an election campaign and to reach actors other than political parties. It is a quaint and obsolete assumption that all electioneering will be done by UK political parties during campaigning periods. That is the world that we have left.
Lord German: Following on from that last point, the Electoral Commission obviously has the power to know how much is spent. The amount of money that can be spent by non-political parties in election campaigns is very small. Are you suggesting that that sort of power should be extended backwards to cover not just the election period itself but a long campaign period before it?
Baroness O'Neill of Bengarve: I do not have a solution. I have talked with the commissioner, and I think it is an extremely difficult question to resolve adequately.
The idea that one can restrict it to periods of a campaign has gone; we know that this sort of influencing has been done in other periods. To me, the deeper problem is that nobody has a secure definition of what constitutes a political advertisement. People say, “Oh no, that wasn’t a political advertisement. That was this individual expressing his views”. The boundary line is very hard to discern, and we have to rethink the powers of the commissioner from the ground up.
I am struck that the Advertising Standards Authority, which of course has quite different powers—non-statutory and so on—has some useful powers. That question about the imprint seems very important. It goes right back to when printing was invented: the imprint tells you who is responsible for what has been done. A further question that one could deal with is declaring who paid for it, which is not quite the same as asking, “Did you stay within the expenditure limits?” These all seem very important things if we are to protect democracy.
Lord Lucas: Would you be happy with the Electoral Commission having powers that were quite broad in principle so that they could catch a modern Montesquieu—a supermarket running an advertising campaign on the glories of European cheeses in the middle of the referendum, for instance?
Baroness O'Neill of Bengarve: A good idea. Look, we have a plurality of regulators, all of them—because we have always been very concerned about excessive intervention—with rather confined powers. At this stage, I am agnostic about whether a super-regulator is a good idea. I know it has been proposed that we should have a sort of meta-regulator, but it is very important to work out exactly which powers are covered by the combination of regulators that exists. At present none of us can be confident, even with Ofcom, the Information Commissioner, the Advertising Standards Authority, the Electoral Commissioner and others, that everything is covered.
Baroness Morris of Yardley: Just a little question. On the notion of giving the Electoral Commissioner powers to look at advertising before the designated election campaign, I was not quite sure whether you were saying that there would be a time limit, or whether it would be for ever and a day, if you could manage to find a definition of what electoral advertising was. Would you extend the time limit or just make it general?
Baroness O'Neill of Bengarve: I do not see that extending the time limit would resolve the problem. The problem is not that the Electoral Commissioner does not quite have long enough, because, as we note out there in the world that we inhabit, campaigning goes on all the time. It is very difficult to say that a campaign began on day X and ended on day Y. “Ended” may be easier than “began”, but it is difficult to know how that is to be regulated.
Lord Scriven: My question comes back to what you have just said and what you have said previously: that it depends on the definition of political advertising. Advertising as we know it will probably become less and less influential in this sphere, particularly when you have a platform where there is vying for power and many individuals have organised themselves, through a network, to be able to influence and not necessarily pay.
I seek to understand whether, in your desire for some form of regulation, whatever it may be, you see it as advertising or in a much broader sense as networks of people who are trying to influence the outcome of an election, which may have nothing to do with advertising as we know it.
Baroness O'Neill of Bengarve: The lines are blurred, are they not? We live in a world in which some people go under the title of bloggers and influencers. Are these people advertising or not? Traditionally we have thought of advertising rather narrowly: either there is a product or there is a cause, a political party, in an election.
We are talking now about a world in which influencing is what is going on, some of it for honourable civic purposes, some of it for entirely legitimate commercial purposes, and some of it much more shady. My biggest fear is that if we do not get this straight, we will have another election or another referendum that produces a result that is decisive but which our fellow citizens do not believe was fairly produced. We have to think not merely about it being clear that it is fair but about people being able to judge that it is fair.
Q7 Lord Lipsey: I declare an interest as having been deputy chair of Full Fact, the fact-checking organisation, until quite recently. I am also a member of the Advertising Standards Authority’s parliamentary network group.
There is a lot of experience of regulation around this table; in fact, we could form a super-regulator from our own ranks. I have been on both the ASA and the financial regulating authority, and on the Tote board, so I have had some experience myself.
The truth is that regulators are incredibly variable; some are very effective while some are pretty ineffective and bureaucratic. How effective could a super-regulator or a number of individual regulators be in the field that we are considering, and how effective do you think it would be by the time it had gone through the grinding machines of the parliamentary, governmental and commercial networks to get it cut down to size?
Baroness O'Neill of Bengarve: That is the $64 million question, is it not: how effective could it be? There is a parallel question, however, which is: what are the consequences if we do not manage it effectively? They have both to be answered.
I will not try to sketch an answer as to how effective it could be. Lots of things are happening, and you yourself at Full Fact have been very aware of one of them, which is an attempt to stem the amount of misinformation and disinformation that is being produced, and which has led to the emergence of the oxymoronic phrase “fake news”, last year’s neologism of the year. That is all going on, but whether it will be effective is the really difficult question. I think that is what everyone hopes this Committee will manage to address.
I also believe it would be very valuable to say candidly that this is a much-touted remedy to the problems that we face, but it would not work for the following reasons. You know far more about fact-checking than I do, but it seems to me that fact-checking is not proving to be as effective as the tech companies themselves hoped when they began to employ fact-checkers.
There is, first, the very big question of how you tell that something is a factual statement. It is the Montesquieu or Private Eye problem again, when someone says of a given speech act, “Oh, that was just a joke”. How often have we seen people who have been caught out saying something that others believe is offensive, it has got into the public domain and they effectively say, like schoolchildren, “Can’t you take a joke?”
It is not simple to judge speech acts. The Online Harms White Paper mainly addresses the social media end of the spectrum, not what you are addressing. It is very difficult to know whether a given speech act will be a source of harm or not. Furthermore, I do not really understand how we are to move from thinking, “If someone says that, so what? It’s just noise. It gets lost”. When it is repeated—how many times, we do not know; n thousand times—it becomes something that one has to take account of.
I was much struck—I may have mentioned this to you before—by the voter deterrence campaign that formed part of the Trump campaign for election and targeted black women voters, most of whom had voted for Obama. It simply targeted them with messages like, “It’s not really worth voting this time, is it? It’s not like the last presidential election”. It is very hard to tell by isolating a particular speech act how it is going to contribute to or reduce harm. It is much easier when one is dealing with evidently private harms like child pornography, defamation or fraud.
The Chairman: That is obviously a very important area that is worth looking at. Without putting words into your mouth, and using Lord Lipsey’s line, irrespective of how effective each individual regulator might be, would you agree that the problem we have is how permeable the line-up is? The bad actor that we might be trying to deal with has access to gaps. It is the permeability of the regulatory system that troubles me more than the effectiveness of any one component of it.
Baroness O'Neill of Bengarve: You are right that the gaps are very important. One of the gaps that we have not mentioned yet is that this is not confined within national boundaries. It is entirely possible that extraterritorial actors, like the Macedonian villagers who became rich by inserting material into the last American presidential election on behalf of the Trump campaign, will escape the jurisdiction of our regulators. We should always have the thought that if we drove it overseas it might be even worse, but if we do not do it, it will be very bad. I do not envy you.
Q8 Lord Harris of Haringey: I think your answers mean that I have to declare a further interest, in that I chair National Trading Standards and one of the teams that we fund is responsible for delivering the statutory backup for enforcement of the Advertising Standards Authority. There we are.
Earlier, in answer to Lord Black, you said that you did not think it was a good enough defence to say that these interventions by legitimate actors probably did not make a decisive difference. May I blur the boundaries even more? We should not just say, “It doesn’t matter, because they weren’t decisive. It didn’t deliver the election for President Trump”, or, “It didn’t deliver the referendum”. We do not know, and it does not really matter. It undermines the legitimacy, or may undermine the legitimacy, in the eyes of the electors if they feel that there is this interference all these actions going on.
May I blur the line a little more and see your reaction? Is there activity that in itself is not necessarily intended to produce result A or result B but is about delegitimising people’s faith in democracy itself or in democratic institutions? Is that an issue, and should we be worried about it?
Baroness O'Neill of Bengarve: I fear that it is, and that we should be. The disrespect and distrust of our politicians, us and the media are nothing new. If you look at the polls of 20 years ago, you see that politicians and journalists came at the bottom then as they do now, while judges and nurses were at the top as they are now. That is in the UK. Although the rankings may be the same, the actual reported levels of trust and mistrust have polarised.
Lord Harris of Haringey: Okay. Is this definable enough to take action about it? I appreciate that this is not recent—you can look at paintings of people on the other side of the river cheering the burning down of Parliament to see that this is not a new phenomenon—but is there a danger that the simple repetition of, “You shouldn’t trust any politician”, or, “You shouldn’t believe anything you’re told”, which you could argue generates a healthy cynicism that is part of understanding political discourse, can go too far? Can one define what goes over the line of suggesting healthy cynicism to the point at which you undermine democracy and democratic institutions?
Baroness O'Neill of Bengarve: What matters most for democracy is that our politicians are trustworthy rather than that they are actually trusted. I also think that members of the public are not stupid. Very often, having said, “Oh, you can’t trust any politician”, they will then point to their own local MP or their own local council and say, “But of course I know so-and-so, and when I want to find something out or get something done, I talk to him, or her.”
Although the polls naturally give us evidence of generic attitudes only, which is what they are designed to do, the evidence is that people’s judgments are more discriminating than the attitudes that they say they hold. I do not think that people have become stupider or even despairing, but I do think that we have made it very hard for people to judge matters. They find themselves in a cloud of misinformation and disinformation, and it is difficult. It is not like the village life of yore.
Lord Harris of Haringey: Okay. I am concerned that there is a desire on the part of some foreign powers to corrode democracy in countries like ours, and that that is a consistent process. It does not actually matter to those powers which party wins an election as long as people have less and less faith in that process and believe that some other system of government may be better. Is that something that governments, particularly as they have a vested interest in this, have a legitimate interest in trying to do something about? If so, what might that look like?
Baroness O'Neill of Bengarve: Yes, they have a legitimate interest. Ultimately, if those who see other people win an election are certain in their bones that it was done by dishonest means, they will not accept the system. That is the risk for all of us: that we discredit the very system if we do not ensure that people have a reasonable conviction—I do not mean that no electoral fraud ever takes place anywhere—that it is not being orchestrated by “them”, whoever they may be. I do not think people can easily sustain that conviction if we allow anonymous online interventions to proceed unregulated or uncurtailed.
To go back to the question of whether it is a good idea to have regulators, I can see why one might worry. Lots of people in the last 30 or 40 years have gone for exceptionally weak forms of discipline—for example, transparency or openness. We have to be extremely careful about relying too heavily on those. Transparency is a matter of putting content into the public domain, where of course it might not be found by the relevant people, may not be understood by those who found it and may be unassessable by those who at least can understand it.
We have to be extremely careful about relying heavily on transparency. It is a modish remedy that has been touted, although not always respected, since the development of online technologies. In the right context, it is valuable — who would be against the financial audit of companies and the publishing of audited accounts? — but it is not as generic a remedy as some people like to think it is.
Q9 Lord German: I apologise; I did not declare my interest at the outset of my earlier question. I am treasurer of the Liberal Democrats.
I would like to ask you about who has the responsibility for identifying and reducing what you call misinformation and disinformation. Should it be the individuals who read it, the platforms that host it, the Government via regulators or whichever mechanism, or all three perhaps? Perhaps you might explain the difference, if there is one, between misinformation and disinformation, because I am not sure that I understand.
Baroness O'Neill of Bengarve: First, on misinformation and disinformation, if I make a mistake and tell you that the moon is made of blue cheese, but I honestly believe it, that is misinformation. If I know perfectly well that it is not made of blue cheese but tell you so, that is disinformation.
Lord German: Thank you.
Baroness O'Neill of Bengarve: Now for the complicated bit: who has responsibility? We are making the individuals unidentifiable so long as we permit online anonymity, so we can hardly say that the individuals are responsible; who they? One of the big debates in this area is whether the platform type of organisation is tolerable in the context in which it is being used.
A couple of years ago, I still thought that the remedy would lie ultimately in insisting that a platform was not a tolerable organisation because it had too few responsibilities for what it did, and that we should require online companies to be treated legally as publishers. They would then fall under defamation law, to give one example, and would be responsible for breaches of privacy, breaches of copyright—lots of other things.
I am sorry to say that that remedy is not likely to be available. A publisher, as all of us who publish stuff know, has to take considerable responsibility, including reading and assessing the content that he/she/it publishes to make sure that it is not defamatory. This is just not possible where every Tom, Jane and Harry posts content.
So I understand why the platform organisation has been used, but it has been mighty convenient and harmful. I am afraid that saying, “Make the platforms responsible”, is only a gesture. The platforms are designed not to be responsible. That is what they are for, so it is not the individuals or the platforms.
Time was when we might have said, “Aha. What we need is a professional body at this point”. I see you smile, Lord German. I do not think that anyone is saying that now, but we have spent 40 years on the whole diminishing the powers of professional bodies on the grounds that they were abusing their position or whatever else we thought, although we were probably wrong. So it is not professional bodies.
When we come to talking about public bodies, which public bodies do we mean? We have resolved this up to a point by having a plurality of regulators with rather limited powers for different domains. It does not answer the question of globalisation—that is to say, the offshoring of content, which is spewed at citizens, from we know not where sometimes, but anyhow not from somewhere which your jurisdiction can cover.
There is a really knotty problem here. I cannot produce an answer to this question, a question on which I paused the longest in thinking about it. The start will be to think about what is feasible for each sort of information. We may have to invent some sort of hybrid organisation, but on the whole I have tended, in thinking about this, to come back to thinking about who is paying—that is to say, not who is writing or assembling the content but who is commissioning it. In a sense, that is where the power lies. In the end, the crucial thing we need to think about is the political power exercised by the tech companies and their customers.
I suppose I should mention the one other regulatory method that people mention frequently, although again I do not think that it is feasible: anti-trust legislation. When we had this situation a century ago in the United States, anti-trust legislation was the way in which the power of the mighty was curbed.
That is not likely to be feasible with these technologies, partly because they enable connectivity. Saying, “We’ve got a completely local version of Instagram whereby you can communicate with people in your own constituency”, or whatever, will not be extremely popular. The people who do this and do it well—this is the melancholy reflection that I have been driven to—are the Chinese. They do it because the party controls the tech companies. It is a deep irony of the past decade that a group of technologies that we once thought would be so marvellous for communication and democracy—remember the days of the cyber-romantics—are now just lovely for autocracies.
The Chairman: You mentioned trust-busting a hundred years ago. I am a big fan of Louis Brandeis. My understanding is that when he went for the trusts, what was brilliant was that he went for the combinations that the trusts created. It was tough for him to take on Standard Oil, but it was not so tough for him to take it on where it combined with the railways to prevent trade. That, in a way, was the trick. I wonder whether there is some possibility there.
Baroness O'Neill of Bengarve: That seems a very apposite question at a moment when Facebook says that it might issue a currency; I am very inexpert in that area, but of course it raises that sort of question. I listened to Mark Carney the other day, and I do think that central bankers are right to be extremely worried about unregulated currency. Of course, Facebook says that it would be regulated. Well, we have heard that.
The combination of these mega-companies would, of course, be even more of a risk should they decide to do things that are more damaging to some, or many, groups of citizens than the companies on their own. But we have to remember that the tech companies are already extraordinary in their reach and penetration into private life, as well as in their imperviousness to jurisdictional boundaries, and the rest of it.
Another thing that I only recently became fully aware of but that struck me as a useful piece of evidence is the number of start-ups that have been taken over by the big companies. Something like 1,200 start-ups have been bought up. If we think that monopoly is a bad idea for commercial life, for international relations and, by the way, for democracy, we probably need to think about the structure of the major players, the immunities they have and the immunities that they should not have.
Q10 Lord Mitchell: I declare my interests as a trustee of a US-based not-for-profit organisation called Living Online Lab, which is dedicated to developing, teaching and promoting internet studies, particularly in the US.
The Chairman may slap me down for wading too far away from the subject, but I want to talk about tax. We have been talking about trust-busting. To me, this is just a follow-on issue. The big tech companies are as proficient at mitigating their tax are they are at designing algorithms for all sorts of purposes. It is part of the culture within these companies: thou shalt not pay tax and thou shalt not do evil, or whatever it happens to be. It is a whole circumstance.
To me, it reflects the philosophy of the issues that you have discussed today in being almost like a city state—removed, as it were, from the world. In paying no, or a minimal amount of, tax, they have massive resources available to them that they otherwise would not have had, and it just enhances their power. Do you have any thoughts on that?
Baroness O'Neill of Bengarve: I think of Amazon in Luxembourg and Google in Dublin, and I do not have an answer. It is an utterly serious problem, but it goes with globalisation and online business. One might say that it should be where you sell, not where you locate your headquarters, that is decisive. For the UK, I suspect that would make a considerable difference, because Amazon and Google do very well in the UK but do not pay their tax here. There are many examples of this type.
It will require a radical rethink of what sort of tax is effective in an online world. If we can find a tax system that works for the online world, I believe that it would tame the mighty to some degree without in any way impeding freedom of expression.
Lord Mitchell: President Macron is moving quite fast in this direction. Do you have any thoughts on that?
Baroness O'Neill of Bengarve: I have not looked at the detail of Macron’s proposals, but I think that others will join in, too. It is not tolerable that businesses refuse to pay tax where they make their money. Equally, it is not a good idea for jurisdictions that are not those in which businesses make that money to get a big windfall—it is not of course a very high amount of tax, but for small countries it is a big windfall—because it creates an incentive by going easy on regulation of such companies. Tax reform is probably one of the levers that could work.
The Chairman: I suspect that things will move quickly. I am convinced that during our investigations both the French and the Germans will have taken a position that we will be able to comment on in our report, one way or the other. It is a live issue, but it is a moving target.
Q11 Lord Scriven: You have painted quite a turbulent picture in which the digital world and democracy as we know it are colliding and systematic changes are needed. If the Government could choose to do one thing to ensure that democracy is supported rather than undermined in the digital world, what should it be?
Baroness O'Neill of Bengarve: In the short run, it should be sorting out the powers of the Electoral Commission. I say the short run, because, as I said in the Chamber the other day, we may have either a general election or a referendum before many people would choose to have it and we are absolutely not prepared for it. That is my short-term suggestion.
My mid-range suggestion would be closer to the tax issues. Meanwhile, I think that the tech companies will take some steps. One has had a sense in the past six months of some of them being a bit appalled at what is going on. Of course, they would not admit it, but you can catch it in some of their statements.
I think they will try to clear up the degree to which their platforms are used to cause private harms, such as promoting anorexia, fraud or running around with knives. That does not do a commercial reputation much good, and every effort should be made to help them clean up practices that promote the private harms.
However, this Committee is dealing with the public harms to democracy and, more extensively, to a democratic culture and media, so I put that to one side.
Lord Scriven: The picture that you have painted over the last hour or so has been quite negative, and I understand why. Do you see any positivity in the digital world and democracy? If so, what principles should underlie what you see as good.
Baroness O'Neill of Bengarve: Something that I have come to admire considerably is, of course, not for profit: that is, Wikipedia. The fact that anybody with access to the technology can look up a pretty good summary of the relevant information is wonderful for democracy. There are, of course, topics where Wikipedia fails; they tend to be politically hot topics, where somebody goes in and edits information as soon as someone else has edited it in a way that they do not like. One would not use it for discovering about the Israel-Palestine dispute, for example, but one could use it for other things. There is a great source of optimism.
The consumer end of Facebook, with families able to keep in touch with one another across distance, can be terrific. I know that they may have to take the cost that they do not know who is using their data for what or targeting them, but there are undoubted pluses. The curious thing is that, for the first decade, we all said, “Hurrah, hurrah. Look at the lovely things that are happening, and it’s free”, not looking at what the business model really was.
Lord Scriven: We have talked a lot about influencing elections and their outcome as though elections are the only important part of democracy. I am trying to get at whether you see beyond the role of technology a more direct approach to how citizens could be involved in the democratic process rather than just at election times, for example.
Baroness O'Neill of Bengarve: You are thinking about citizens’ juries on a scale or referenda.
Lord Scriven: Kenya, for example, rewrote its constitution in a much more open way because of technology. Some organisations, local and national, allow participatory budgeting, where people can make informed choices about decisions.
Do you see technology in its broadest sense having a positive role in democracy rather than just in influencing elections? That needs to be dealt with, but democracy is much wider than just people casting their vote.
Baroness O'Neill of Bengarve: It could, but to do it we have to be able to address the contagion of influencing, misinforming and disinforming. None of these measures will be valuable for democracy if they get contaminated by the prevalence of disinformation and misinformation.
Baroness Kidron: I want to go back to something in your very first answer when you said that Facebook was welcoming regulation. In the same period when it started saying, “Bring on the regulation”, it also announced a move to privacy and greater encryption of its services. That is often characterised as the individual’s privacy, but actually it allows a broad spectrum of people to discuss things in private. Would you say a little about encryption?
Baroness O'Neill of Bengarve: I am not sure that I dare talk about encryption with you and certain others in this room, because I do not understand encryption technologies well enough. What I do understand is that encryption enables anonymity in certain ways, so I go back to the question: for whom should anonymity be secured and for whom not? I would take a very differentiated view of anonymity. The image of throwing a stone from behind a hedge recurs to me constantly when thinking about this problem.
I am aware that encryption is likely to come into many areas of online communication, partly because blockchain technologies will make it easier to secure the integrity of that which is encrypted. However, encryption sits ill with democratic process.
Lord Holmes of Richmond: This has been a fascinating session, as we knew it would be. What, if anything, purely from a philosopher’s perspective, is different about this stuff?
Baroness O'Neill of Bengarve: What is different is that it permits anonymisation on a scale that is very foreign to the world of print in which we all grew up. That is not of necessity, but it is reality, and it is probably one of the reasons why so many people talking about this area rush to talk about privacy but then fail to distinguish the quite different issues that arise. Someone’s medical treatment and information about it being private to them is not the same as someone’s purchasing and orchestrating an online campaign of one sort or another; in my view, the latter is not the sort of thing that we should deal with as falling under the right to privacy — which covers the individual’s home, his correspondence and his family, to go back to the European Convention formulation of the right.
In reflecting for this meeting on the topics that I thought had been neglected, I decided that it was anonymity that was curiously neglected in the discussions of the political and cultural importance of these matters, as though anonymity were merely incidental. It has been stretched way beyond the demands of a right to privacy.
The Chairman: You have just mentioned something that you have been reflecting on. Is there anything else that we have not been smart enough to ask you that you would like to get across?
Baroness O'Neill of Bengarve: You have asked me a lot, and I am sure that one or other of you can nudge me if you think I know something else. As you could tell, I am not an expert on the technologies. What I have thought about is the ethics of communication, but I think that is the central issue here. If we are to continue to have democracy—occasionally one despairs—people have to be able to assess that which is communicated to them.
The Chairman: Thank you very much. I cannot tell you how grateful I am. It was very interesting sitting here watching people nodding as you spoke. You have probably laid out our terms of reference for us better than we had previously. Thank you very much indeed.