Digital, Culture, Media and Sport Sub-Committee on Online Harms and Disinformation
Oral evidence: Online safety and online harms, HC 620.
Thursday 23 September 2021
Ordered by the House of Commons to be published on 23 September 2021.
Members present: Julian Knight MP (Chair); Kevin Brennan; Alex Davies-Jones; Clive Efford; Damian Green; Giles Watling.
Questions 1 - 33
Witnesses
I: Lord Puttnam CBE, Chair of the Democracy and Digital Technologies Committee; Professor Alan Renwick, Professor of Democratic Politics and Deputy Director of the Constitution Unit, University College London; and Dr Talita de Souza Dias, Shaw Foundation Junior Research Fellow in Law, Jesus College, University of Oxford.
Witnesses: Lord Puttnam CBE, Professor Alan Renwick and Dr Talita de Souza Dias.
[This evidence was taken by video conference]
Q1 Chair: This is the Digital, Culture, Media and Sport Committee and this is our Sub-Committee on online harms and disinformation. We have a hybrid session today. We are joined in the room by Talita Dias, the Shaw Foundation junior research fellow in law, Jesus College, University of Oxford. We are joined by Lord Puttnam, Chair of the Democracy and Digital Technologies Committee, joining us via Zoom, and Professor Alan Renwick, professor of democratic politics and deputy director of the Constitution Unit, University College London.
Talita, Lord Puttnam and Professor Renwick, thank you very much for joining us today. Dr Dias, would you please outline for the Committee what you perceive are the strengths and weaknesses of the online harms legislation as it stands?
Dr Dias: I think that the Bill is quite detailed and comprehensive and that is a strength of the Bill. It covers a lot of ground. Duties to establish, for example, mechanisms of redress for users—the whole idea of a duty of care is a positive one. This type of regime is much better than the intermediary-liability approach to regulation of online platforms.
By intermediary liability I mean that a certain platform can be held responsible or liable for the exact content that is posted on the platform, whereas a duty of care entails a responsibility that is not a duty to achieve something—a certain result—but a duty to exert best efforts to achieve a certain result by adopting a series of measures. In my opinion this is a positive thing. There has been a lot of debate about that, but I think the Bill is on the right path. It is the right way forward.
However, there are some weaknesses. The main weaknesses are the vagueness of the definitions. In my view, the definitions of harm and the definitions of illegal content are the main weaknesses, and also the fact that the Bill does not outline the measures that the regulator might impose on platforms or the measures that the platforms themselves might or must adopt when exercising their duties of care. Those are the weaknesses in my view and, also, the fact that there is no provision for judicial remedies.
That is an important weakness because in the social media context, and the online space more generally, there will be mistakes. That is inevitable, because these are important decisions about content and they are inherently subjective, so there will be mistakes and content will be taken down erroneously. People will be upset. People will complain, and it is important to have that safeguard. An internal mechanism of redress is not sufficient to address this because it is just a private mechanism. It is not public, it is not fully independent, and it lacks legitimacy. There has to be access to courts, and the Bill needs to lay that down explicitly.
Q2 Chair: Thank you. Following up on a couple of points there, I completely agree when it comes to the regulator and there not being enough definitions. In our first session in private we were discussing the words “recommend” and “might”. It all sounds a bit woolly. Frankly, it sounds as if we are still letting them decide what is the best course of action in that respect. Secondly, you mentioned the definition of harms. Could you expand on that? What is it in particular about that area that concerns you?
Dr Dias: First, the definition of harm is far too vague. It is a definition that includes indirect harms—for example, content that influences or drives somebody to do something that might cause some harm. That is an indirect definition of harm, and the evidentiary threshold for assessing harm is also quite low.
The definition includes material or physical harms as well as psychological harms, and that is very subjective. It is not necessarily a bad thing to include a subjective definition of harm, but other criteria have to be included in this definition to make it clear to users and platforms what kinds of content are considered harmful to adults and to children. Other criteria must be included, not just harm.
Harm is an important element of harmful content but there are other things at play there. If you take this definition as it is now, somebody might easily say, “I was personally affected by this kind of content”, when for the general audience—in the context in which that content was published—it might not be as harmful as the particular victim claims it to be or the platform perceives it to be. So context is important, because language is contextual, and there is no provision for any contextual analysis.
Q3 Chair: How do you resolve that?
Dr Dias: In my opinion, the best way forward is, at the very least, to include context as an element of the definition of harm so that platforms and the regulator can assess the content in context. Even terrorist content might not be as clear-cut as one might think. For example, in the context of the Israel-Palestine conflict earlier this year, posts that featured the name of a mosque, and which were not terrorist content, were taken down erroneously because there was no assessment of the context of that content. That is important.
Of course these decisions are subjective. It is not easy to lay down exactly what kinds of content are going to be harmful, but a mention of the relevance of context is important so that it is clear that the regulator and in-scope services need to take that into account whenever deciding on harms.
Q4 Chair: Professor Renwick, you have heard what Dr Dias has outlined. Do you have anything to add or any thoughts on what she said?
Professor Renwick: I very much agree with Dr Dias regarding those strengths of the Bill. Regarding the weaknesses of the Bill as it stands, the draft Bill, I come at that very much as a student of democracy and someone who thinks about how democracy can work effectively. The omission of harm to democracy from the Bill is a serious concern. It was very clear in the original White Paper that harm to democracy was a matter that the Government cared about, were concerned about and wanted to act upon, but that has disappeared from the Bill in the course of its development.
When we are thinking about democracy, there are two key aspects that matter. One is freedom of expression, which is an absolutely central tenet of democracy and that is clearly very strong in the Bill and strongly protected by the Bill. The other is the dangers caused by misinformation, which again were very strong in the original White Paper but have disappeared almost entirely from the Bill.
I think it is essential for this Bill to have regard to the danger to democracy caused by misinformation. Were it to do so, there are three broad elements to an approach to seeking to overcome those dangers. One is measures that seek directly to address misinformation, in the most extreme cases by taking down material but also by adjusting algorithms and by ensuring that there is fact checking and flagging of material that has been found by reputable fact-checkers to be problematic. There is that set of measures.
Secondly, there are measures to promote media literacy, which are in the Bill but it is rather difficult to see quite what those measures are going to do and what their extent might be. Thirdly, we need to care about ensuring that good quality, accurate, accessible information is readily available for citizens in a democracy. That is not present in the Bill at all.
If I might add one further thought: when we are dealing with these matters of democracy, we clearly need to be very concerned about process and ensuring that the democratic process cannot be unduly skewed by someone powerful who wants to skew the pitch in their favour. In that regard, I am somewhat concerned by some of the powers for Ministers that are proposed by the Bill. We ought to have a system where the broad rules are set out by Parliament and are subject to detailed parliamentary scrutiny, which means primary legislation.
Q5 Chair: To cut across you there, Professor, what specific part of the powers of Ministers? For example, is it where they can move companies from one tier to another? What is the area that you are really concerned about there?
Professor Renwick: That is one point. Another point is the ability of Ministers to direct changes in codes of practice in order that they should fit with government policy, for example, which is in Clause 33 of the Bill. There are a number of aspects there. The principle should be that Parliament sets out a framework in primary legislation, which can be properly scrutinised, rather than simply in secondary legislation. The regulator should operate within that framework and should be independent of political direction that might favour one side of the debate.
Q6 Chair: Your concern is that a Minister could dictate to Ofcom a change of direction when it comes to codes of practice and that, in turn, could lead to potential censorship or perhaps a favouring of one particular viewpoint over another?
Professor Renwick: Exactly so, yes, that is the danger.
Q7 Chair: Thank you. Could I ask the witnesses to move closer to the microphones, speak up or adjust their volume? One or two members are having difficulty hearing you.
Lord Puttnam, could you add to this? You have heard what Dr Dias and the professor have said. What are your thoughts, from a parliamentary viewpoint and a societal viewpoint, on exactly where we go with this, and which areas should we be focusing on if we want to improve the Bill?
Lord Puttnam: If I may, Chair, I will pick up on two points: one that Alan made and one that Dr Dias made. Alan’s point about the removal of the issues of harm to democracy is a very, very serious one. Interestingly, I felt that our report—and we did look hard at this—anticipated the events that occurred on 6 January. What is weird about the Government’s response is that it is as though the opposite had happened—as though there had been a certain amount of alarm at what misinformation could do to undermine democracy, and then the event simply did not occur. It is a very, very peculiar response to a very serious event, and it leads me to think that Governments are extraordinarily complacent about harms to democracy.
Dr Dias also touched on the issue of duty of care, which is something that I am pretty obsessed by. What I am sure she knows better than I do is that duty of care applies not so much to what is done but a failure to act, or not acting. Duty of care is trying to anticipate what might happen. I would suggest, as a parliamentarian, that every single Member of Parliament will have, within their own constituency in the next five years, a Molly Russell case. It will happen. Every single Member of Parliament is going to have to answer to their constituency as to what they did, what protections they laid in and how they attempted to influence the events that could have led to a Molly Russell situation. Not to do so is not to exercise a duty of care. That is where I come down on the duty of care issue.
To give it another tiny bit of context, I was impressed by the Prime Minister’s speech to the UN yesterday where he basically said it was time for humanity to grow up, and he used the phrase, “To understand who we are and what we’re doing”. He said the adolescence of humanity is coming to an end. I would argue that the adolescence of social media is coming to an end. The flaw in the Bill is that it is treating social media as a mischievous adolescent, not as a very, very serious component of all of our futures. That is an argument I can expand on, obviously, but those would be my broad contextual thoughts.
Q8 Damian Green: Can I continue with the democratic thought and stay with Lord Puttnam? One of the things the Bill does, as we have agreed, is give quite significant powers to not just a regulator but to the Secretary of State. Do you think it provides for sufficient parliamentary oversight of how those powers are exercised?
Lord Puttnam: Simply stated, no, I do not think it does at all. One of the things I would like to say is that the secret of our report lies in the title. I chose to call it “The Resurrection of Trust”, because all the evidence that we took suggested that trust had collapsed, so that in my view we are dealing with a much more serious situation.
This was a unanimous report, it was a cross-party report. Can I say, Damian, I found the Government’s response to be lamentable, absolutely lamentable, in that it does not address the evidence or the arguments we made? It addressed our recommendations, which is fine, but it did not address the evidence lying behind the recommendations. I would be much more upset than I am but for the fact that I have some experience of pre-legislative scrutiny, so that the 45 recommendations that we made will be trawled by the pre-legislative Scrutiny Committee that is sitting right now. It will be very interesting to see how many of those get resurrected and how much of the evidence that we offered does get taken seriously.
It is an occasion of great sadness to me that the Government blew that opportunity and I still do not understand why. It was a very, very poor, ill thought through response to a report that had taken a year to compile and that had taken evidence from all around the world.
Q9 Damian Green: In that context, I am fascinated by your analogy that the tech companies are not adolescents. From my own experience when I was a Minister, sitting across the table from the big tech companies, they were not that interested in the British Government because, effectively, they were bigger than we were. They are no longer hippies playing in a garage; they are adults. They are very, very powerful adults who know how powerful they are and are quite happy to use that power. Do you think that the underlying problem is that the Government do not get that, do not acknowledge that you are dealing with hugely powerful and potentially quite dangerous people in the tech giants?
Lord Puttnam: Maybe when I say “adolescents”, I am referring to a mob of adolescents, because that is certainly what would make me fearful if I was in a football crowd and they turned on me. It is a little more nuanced than that. I do not think the people who founded these companies were bad people and I do not think they were ill-intentioned people. I think that the power that accrued to them was almost accidental, in a sense.
They then fell into the hands of shareholders. What they know—and I am going to say “know” and I really mean “know”—is that the changes they could make to their algorithms and to their business practices would reduce their revenues. They cannot confront their shareholders with the notion of a reduction in their revenues. So the argument that lies out there—and I suspect Alan certainly agrees with this—is that they know exactly what they can do to tweak their algorithms and make them safer. They know how to protect the Molly Russells, but it would cost them money, cost them revenues, cost them reach.
One of the most powerful points we made was that it is not a question of freedom of speech; it is a question of freedom of reach. There is a moment at which they have an obligation to challenge reach. We have suggested it is roughly 5,000. They have the algorithmic ability to do this, incidentally. Once a piece of misinformation or disinformation has reached 5,000 voices, it should be challenged by the organisations themselves and either taken down or they should be required to justify it. All the evidence we took, and all the work that we did on that, is largely ignored in the Government’s response, and I am still genuinely puzzled as to why.
Q10 Damian Green: Professor Renwick, do you broadly share that analysis—that there are practical steps that could be taken in this legislation and regulation which are in danger of not being taken? Effectively, this country is going to get one shot at this type of regulation and we are in danger of misfiring.
Professor Renwick: Yes, I agree with everything that Lord Puttnam has just said. If I can go back to your original question there about the powers of Ministers, there have been concerns expressed by the Constitution Unit, by the Hansard Society and by others for some time about delegated powers and the powers that Ministers are accruing in many areas. It would seem that this Bill proposes to do that again.
There was an interesting point made in the written evidence submitted by Full Fact. It pointed out that we get legislation on immigration on average every two and a half years, whereas this is the first piece of legislation in this area for 18 years. The Government’s argument for having extensive ministerial powers seems to be that it is very difficult to find parliamentary time and, therefore, we need to build flexibility into this legislation to future-proof it against developments.
But as Lord Puttnam has pointed out, these matters are incredibly important. They are at the heart of our democracy today and our democracy is fundamentally important to all of us, so perhaps we should be thinking that these matters deserve a bit more parliamentary time than they have been getting in recent years and that, therefore, Parliament should have a greater role in setting these rules.
Q11 Damian Green: Thank you. Dr Dias, you set out at the outset some of your fears, or you identified gaps in what is there in the way that our other witnesses have as well. What do you think the implications would be if the Bill, more or less in its current form, came into force? What would happen, what would go wrong?
Dr Dias: It can go either way. As I said, the Bill as it stands is quite vague. What I think is most likely to occur is that, because the fines are really high, if the regulator decides to enforce the codes of conduct thoroughly and applies these high, prohibitive fines, companies are going to err on the side of censorship. They are going to take down everything, especially as regards illegal content, because takedown is the only measure that the Bill lays out for that kind of content.
When we think about content takedowns, probably most of that content is not going to be taken down by a human. It is an algorithm, and algorithms are bound to fail because they are just code. They do not understand human language. What is going to happen is that the majority of content—whether it is disinformation or alleged disinformation, offensive content, satire or nudity—is going to be taken down, and in most cases it will be by an algorithm.
That is the problem. The Bill is setting the stage for enhanced censorship, whether by the companies themselves or by the regulator, because we do not know what is going to be in this code of practice. Apart from very general objectives, the Bill does not spell out the measures or the exact steps—not necessarily in great detail, but it does not say, “What are the options? What are the measures? How is freedom of expression going to be limited, and in what ways?” We do not know as users, and companies do not know either, so they will tend to censor because they want to protect themselves from the fines. That is the implication.
I agree with what Lord Puttnam said before: the root of the problem is algorithms. The Bill does not say much about them—about measures to tweak those algorithms, or at least to review them, whether by a third party or the companies themselves.
I like to think of this as an analogy between a fan and a broom, because that is what these companies are doing. They have different teams, which are purposefully separated so that they do not know what each other is doing. For example, Facebook’s integrity team is charged with chasing bad content—misinformation, electoral interference, hate speech or whatever. That is what the integrity teams are doing. At the same time, the algorithms are promoting that content. It is almost as if you have a fan that is spreading out the dirt while the broom is trying to sweep it up.
Q12 Damian Green: Is the underlying solution—because it is obviously hellishly difficult to legislate in this field—to allow the regulator to get into the algorithm, look at the algorithm and recommend quickly how that algorithm should be changed? Would that be the solution?
Dr Dias: No, that would provoke a lot of backlash from companies, because they say that these algorithms are proprietary, so they will not let that happen. In my view, the solution is to have a third-party auditor—an independent auditor that is going to look at the algorithm. Or, if that is not feasible, if there is reluctance to do that on the part of companies, at the very least there should be specific transparency reports about how they train the algorithms. What type of content is being used as training data? These are machine-learning algorithms; we do not understand how they work. What we can understand is the kind of data that are fed to them.
In these transparency reports there has to be, absolutely, an explanation of what kinds of data are being fed to these algorithms. Are we using content that is representative of different cultural groups in this country or in other countries? If we do not have a representative dataset, we run the risk of bias. All of these things need to be laid down in a transparency report. As the Bill stands right now, there is nothing about that. Of course, the Bill cannot regulate to the very last detail, but it can specify that this kind of review process needs to be part of the transparency report, for example.
Q13 Damian Green: Lord Puttnam, you were nodding very energetically about the point about algorithms. Do you feel that very strongly?
Lord Puttnam: I do, for two reasons. First, when we took evidence, all the companies made a great deal of how they were working ever more closely with research organisations—Alan can expand on this—and of that being their route to finding out more about their own algorithms. That position, according to The New York Times last Sunday, has now been reversed. They are closing down research because the research that is emerging does not support their position.
I absolutely agree with what Dr Dias said. In our report we do set out a structure for an impartial ombudsman, for want of a better word—I hate the word—who will look at these things from an objective and public-focused point of view on how we reach a satisfactory balance between freedom of speech and responsibility. At the moment I am afraid the Bill comes down too hard on freedom of speech, much as I treasure freedom of speech, and has backed away from putting enough responsibility on companies.
To finish on that, I do not happen to think that financial fines are all the answer. I have sat on more than a dozen boards in my life. You get a board’s attention when the board understands it has personal responsibility. That is the moment that it grabs you. So long as you think that you can deal with the fines because you are a huge organisation—yes, that is unfortunate, but that is something for the auditors. It will be chargeable against tax and maybe even insurable. It is personal responsibility that I would like to see added to this Bill.
Q14 Chair: Dr Dias, in a meeting with officials earlier, I suggested the idea of compliance officers in firms, a little bit like the financial services sector. They stated that the Bill did not rule that out, that Ofcom could potentially have the power to do this. Is that a fair interpretation?
Dr Dias: It depends on what Ofcom decides to do. Because the Bill is vague as to the powers of Ofcom and gives Ofcom wide powers, all of these things are possible but we just don’t know if it is going to do that. That is the problem. If Parliament thinks that this is what should happen, it should lay that down clearly rather than relying on the goodwill of Ofcom to just set that in place.
Q15 Chair: Do you think that is a potential solution in terms of oversight of algorithms? Having someone on the ground, paid for by the companies themselves, not by the taxpayer, but at the same time independent of those companies. They may not have all the specialist expertise to look precisely at algorithmic code and understand it, but they can at least ask, “What does this algorithm do, what is its purpose and how is it enabling a better ecosystem?”
Dr Dias: That is one way forward: having an independent body that is constantly reviewing how the algorithm works. As you said, it is impossible to know exactly what the algorithm decides and why, but there has to be somebody—an independent body—that is going to oversee the data that are fed to the algorithm and the results: how the algorithm is working and whether it is working effectively. So far we rely on companies to do that voluntarily. We do not have an independent body that is charged with doing that, and that is one way.
Q16 Kevin Brennan: Dr Dias, something you said a moment ago slightly depressed me. You seemed to suggest that because the companies do not like the idea of someone looking inside the magic box, it therefore cannot happen. Isn’t that just an example of the power that tech companies have, even over you as an academic in this field—that people assume they have the power to prevent national Governments from regulating their activities appropriately?
Dr Dias: Yes, it is depressing and the law is on their side, I would say, because they can always claim that it is an issue of intellectual property.
Q17 Kevin Brennan: Why do you need another body, though? You seem to suggest we need a third body. Why can we not simply legislate to say that the regulator, while respecting commercial confidentiality, should have the right to look into how algorithms and the machine learning associated with them are affecting people’s lives in the content that is being pushed out into the public sphere?
Dr Dias: Absolutely, if there is a safeguard of confidentiality that is feasible and that addresses the intellectual property concerns. Many NGOs have proposed that kind of review and they have proposed a way to balance the need to review and assess algorithms with the need to preserve confidentiality and intellectual property. There is a way to do that. It has to be done well and it has to be done with the right safeguards but it is possible.
Kevin Brennan: So we don’t have to be slaves to the algorithm?
Dr Dias: No, we don’t.
Q18 Kevin Brennan: I am glad we have established that. You mentioned earlier on that you thought that the draft Bill as proposed was vague in detail in relation to illegal content. In what way can you be vague about illegal content? Either it is illegal or it is legal.
Dr Dias: The problem here is that, first of all, illegal content includes things that might be criminal offences or other types of wrongs, like civil wrongs. There is no differentiation there. This is important because, depending on the severity of the content, different measures might be applicable. That is a requirement of necessity and proportionality, and these requirements are necessary to preserve freedom of expression effectively. That is my main issue with the definition of illegal content.
The second issue is that illegal content corresponds to content that is already illegal under existing law—the law already says, “This conduct is criminalised; this conduct is a civil wrong”—and the law already provides penalties for those kinds of conduct. The problem is that the Bill is imposing additional limitations on some of these kinds of acts.
Here we are imposing new limitations—limitations on freedom of expression, because we are talking about speech acts—and we do not know exactly which of these illegal acts, criminal or not, are going to fall within the scope of the Bill. There is a range of illegal content out there, but we do not know exactly what applies in the context of the Bill. That is the problem. For example, we have those schedules for terrorist content and for pornography. That is my concern.
Q19 Kevin Brennan: It may not be possible to be as specific as that but it may be possible to test all of that during the process of scrutiny of the Bill as we go through with those who are putting it together?
Dr Dias: Yes.
Q20 Kevin Brennan: Thank you, Dr Dias. David Puttnam, you were saying earlier on that social media companies are adolescents, or not naughty boys any more, and that they should be expected to act in a grown-up fashion. Google’s original slogan used to be, “Don’t be evil”. Isn’t it the case that what we have known for over 2,000 years, via the parable of the good Samaritan, is that it is not enough not to be evil? You have to cross the street to be good and to do good. Is it unrealistic and hopelessly idealistic to expect us to be able to put together a piece of legislation here that would mean that social media companies were not only encouraged but compelled to do the right thing?
Lord Puttnam: That is an important question, Kevin. First, I have met the two original founders of Google, who are extremely nice guys, and I don’t think it is insignificant that they have distanced themselves very greatly from the business. I don’t think they want the sort of heat that Mark Zuckerberg gets. So I do not think there is inherent evilness there and I think that their original slogan was well intentioned. But they no longer run the business and the business is perpetuated by the marketplace.
To touch on something you said about algorithms just now: I am a very old man now and, during my lifetime, we have managed to struggle through the whole issue of nuclear verification. I would argue that algorithmic verification should be child’s play compared with nuclear verification. We can do it. As legislators we can do it. It is having the will to do it.
The other point I would like to make in all of this I think is very important; Dr Dias will no doubt have her views, as will Alan. I think the Bill as it stands is an invitation to judicial review. It will put Ofcom in an almost impossible position. I would particularly highlight the issue here of personal versus group harms. A personal harm, if it is a severe harm, will inevitably go to some form of class-action suit; the claimant will be supported by a group who share their concerns.
On the other side of the equation you have these very, very powerful companies. Ofcom will then make a judgment. Either party will then have the ability to appeal, and I am not a fan—and I do not think you are—of judge-made law. So it is very important—and this is subject to what Dr Dias is saying—that we straighten these things out now, or they will be straightened out over a period of 10 or 20 years by the courts. That could bring conflict between Parliament and the courts. We have flirted with that in the past. I would not want Britain to go there. So it is imperative that we sort it out.
Personally, I would bet a pound to a penny that we will end up addressing group harms, not just personal harms—certainly when the Bill comes to the House of Lords, and I would like to think in the Commons too. I have said this to officials. I think that it is a major flaw in the Bill. The sooner they straighten it out the better, because they will be avoiding the absolute inevitability of judicial review.
Q21 Kevin Brennan: That point leads me on to the question I was going to ask you, which is: why do you think the Government have chosen to emphasise harm to the individual rather than the generic harm to our democracy that you recommended in your Committee’s report?
Lord Puttnam: I think there is a history to this. I sent the Department a clip. If you have one minute 40 seconds to spare at any point, go on to YouTube. You will see the evidence given in 1994 by the so-called tobacco barons—it is known as “The Seven Dwarfs”—claiming that nicotine was not harmful and that there was no relationship between nicotine and poisoning. Three years later that was settled in court and a $385 billion fine was levied, but there was no acknowledgement of personal responsibility. To me, we are going down exactly that road. When that congressional hearing occurred they were all swearing on oath, and every one of those men had had on their desks, for 15 years, all the evidence they needed to know that nicotine was deeply harmful.
I have spent a lot of time looking at Road Traffic Acts and the way that those developed over the years. We have to get smarter, I would suggest, and look at the history of these things and understand that it is possible to head them off before you are dealing with seriously dramatic harms to many, many thousands, maybe millions, of people. Just look at history. We have made these mistakes before and we have been trapped into them, if you like. The corporate resolution has been a long time in coming and, even then, is only based on fines. Fines are not enough, for all the reasons I was giving Damian a few minutes ago.
Q22 Kevin Brennan: A last question for Professor Renwick. You spoke earlier with the Chair about the issue of content of democratic importance. If you were going to draft for us or the Joint Committee a few amendments to this Bill to rectify that, in a nutshell what would they say?
Professor Renwick: It would emphasise that misinformation and disinformation constitute harms to democracy and, therefore, need to be taken into account alongside all the other various considerations when codes of practice are being drafted and subsequently implemented.
The Bill appears to protect, through its protection of democratically important material, deliberate misinformation in the realm of democracy against any kind of intervention by these companies. So it seems to make things worse rather than better in respect of misinformation and disinformation. At the very least, there should be an element within the Bill that counters that and ensures that both of these considerations—freedom of expression and the dangers associated with misinformation—are taken into account and balanced appropriately. That would help.
Kevin Brennan: You could perhaps get some of the super-clever law students at UCL to draft a few suitable amendments that would give effect to what you have just said, with a bit of luck.
Chair: Thank you, Kevin, I think you are just getting the Committee’s job done for it now, aren’t you?
Q23 Giles Watling: In a previous incarnation of this Committee we held a grand panel in Washington. It was interesting that we had representatives from Twitter and Instagram and so on there, and it seemed at the time that these were undergrads who had created something wonderful in a garden shed and had suddenly been brought blinking into the sunlight, only then understanding the power and the importance of the platforms they had created. So I understand, Lord Puttnam, exactly what you are talking about with this adolescent thing, but it is incredibly powerful.
What I want to focus on initially is a comment in Politico a couple of weeks ago, where it said that the UK wants to protect journalists from its plans to regulate big tech: “It just doesn’t seem to know how”. Oliver Dowden said that he wanted to place “a protective bubble around journalistic and democratically important content” in the upcoming Online Safety Bill. As Dr Dias has said, there is a lot of vagueness about this. Alan Renwick, do you think that the carve-out for journalistic content is sufficiently clear?
Professor Renwick: One point is that if freedom of expression matters it matters for everyone, so I am not quite clear why we should have a specific carve-out for journalistic content and content of democratic importance. Freedom of expression matters generally but it does need to be balanced against the need to also protect society against deliberate misinformation and disinformation. That applies across the piece. So, I am not sure why there should be specific provisions for journalistic content.
Q24 Giles Watling: You would rather it be a general expression rather than focusing on freedom of the press, which is the much-lauded phrase, of course.
Professor Renwick: Of course, but free expression is a right for all of us in a democracy. As we have moved into the digital age, the distinction between journalists and the rest of us has rather diminished. All of us have very important rights of free expression and all of us, if we are saying things in public, have duties to have regard to the effects of what we say upon society as a whole.
Q25 Giles Watling: I turn to Dr Dias on this one, because there is a particular carve-out for journalistic content within the Bill. Do you think that should still be there?
Dr Dias: No, I completely agree with what Professor Renwick said before. Freedom of expression is a right of everyone, and let us not forget equality and non-discrimination, which apply side by side with freedom of expression. As the Bill stands at the moment, sections 13 and 14 and these carve-outs seem a little bit out of place. You can see this in particular by looking at section 13(7) and section 14(1) for journalistic content. There are some more temperate actions for dealing with these kinds of content: rather than simply having their content taken down, these users are entitled to a warning before being suspended or having their ability to post restricted.
These measures are good, but they only apply to journalistic content and content of democratic importance, however the platforms choose to define what is democratic content. These privileges, if you will, should apply to everyone, rather than everyone else simply having their content taken down.
Q26 Giles Watling: As Alan Renwick said, yes. Thank you very much. Last week The Wall Street Journal—I do not know if you have seen it—published a five-part series entitled “The Facebook Files”. It was an investigation and I will quickly run through it. Facebook had a system in place called “whitelisting”, which “subjects high profile users to a different review process than regular users”, and “Facebook-commissioned studies have repeatedly found that Instagram can have harmful mental health impacts on users”. It came up with other points. Have the practices at Instagram and Facebook described by The Wall Street Journal in any way changed your perspective on how we should approach regulating big tech? Lord Puttnam.
Lord Puttnam: I read the piece and I certainly agree with it. I do not think it should alter our attitudes, but it should be a warning shot across the bows of the legislation. Instagram was directly related, as you know, to the Molly Russell case. So the very thing I am saying about each constituency having its own crisis makes sense there.
Can I go back for one second to your earlier point? I do not blame him, but the then Secretary of State, I am sure, was just playing to the crowd with his remark about journalism and the freedom of journalists. What we have suggested in our report is to take far more seriously the recommendations of the Cairncross review, which is a very, very good and very detailed review and answered many of the questions implied in your questions to Alan and Talita. Yes, that report was a good one. I would also definitely recommend the report in this week’s New York Times regarding Facebook’s backing away from academic inquiry and academic review. In a sense, that may be even more significant. It has reached a point where it does not want to understand, or to look at what it might be able to do, to reduce these harms.
Q27 Giles Watling: Fundamentally, you are saying the will is not there.
Lord Puttnam: Yes.
Q28 Giles Watling: Thank you. Alan Renwick, do you have any comment on that?
Professor Renwick: Nothing to add. I don’t think the stories were surprising. I don’t think they told us anything we did not already know. They just confirm that a large multinational corporation will need to be regulated. We cannot expect a company on its own to do all the good stuff, from society’s point of view, so we need to regulate.
Q29 Giles Watling: The fact that anti-vaccine activists have used Facebook to sow doubt and spread fear about the Covid-19 vaccine deployment is very serious and damaging stuff. This is part of the reason we are looking at regulation, because people’s lives are at stake here. So we have to regulate, but we also have to take into account freedom of expression and of speech. You think that we are moving too far towards regulation and not paying enough attention to freedom of speech. Would that be fair?
Professor Renwick: No. The Bill places too much emphasis, at least in the democratic sphere, upon freedom of expression. Freedom of expression is fundamentally important. I am not suggesting that we should downplay it, but it needs to be balanced against the need to protect society from harmful discourse, harmful information and misinformation. At present that latter part is simply absent from the Bill.
It was not absent from the White Paper. It was very clear in the original White Paper that Ministers understood and were concerned about the damage to trust and confidence in the democratic system that is being caused at present. I completely agree with what Lord Puttnam said: that the events of 6 January on Capitol Hill illustrated where we can go if we fail to take seriously the threats caused by misinformation. It does look like complacency on the part of the Government not to take those threats seriously and incorporate them in this Bill.
Giles Watling: Thank you, a point well made.
Q30 Alex Davies-Jones: Thank you to our witnesses for joining us today; we do appreciate it. Dr Dias, I will come to you first. We know that Germany was the first country in the world to try to take a stand on this: it introduced landmark legislation in 2017, a unique hate speech law, to try to establish online accountability. It has recently been heavily criticised for not working, particularly by women and public officials who feel that nothing has changed and that it has made the situation worse by pushing the abuse on to unregulated websites and social media accounts. What can we learn from Germany to stop that happening with this legislation, and how does our legislation differ from that and from other jurisdictions in the world?
Dr Dias: The German approach is one that comes closer to the idea of intermediary liability that I mentioned earlier, as opposed to a duty of care. In particular, there is a duty to take down illegal content swiftly, including hate speech and terrorist content, within a matter of hours, and this is subject to penalties, as in our legislation.
The problem is, of course, as I said earlier, that that will drive companies to self-censor, to over-censor. That content moderation happens through algorithms, because there is no way a human being can do it at scale. They have to rely on algorithms, and these algorithms are going to delete content that is sometimes denouncing what is wrong on social media, because the technology does not have contextual knowledge. This is going to jeopardise vulnerable populations rather than protect them. That is the main problem with the German legislation.
Also, one main point of criticism from many NGOs was that the law did not have access-to-justice provisions. If I am not mistaken, it was amended last year to include something to cover that, but it was still far from ideal, because what was included was basically out-of-court proceedings. There is an arbitral tribunal that is meant to settle these disputes, but that does not solve the problem because there is still no access to public justice. These are some of the main problems with the German approach. The French Bill initially followed the German one, but the Constitutional Council struck the Bill down because of those concerns about censorship, freedom of expression and so on and so forth.
What lessons can we learn from the German model? We should not require companies to just take down content immediately in one hour or in two hours. There has to be a little bit more time and there have to be alternative measures to just content takedowns. As I said earlier, the emphasis should be on the duty of care rather than the specific types of content that are published online and that are left online.
There has to be an emphasis on whether the companies are doing their best rather than on whether they are removing particular pieces of content—a more holistic approach to what the company is doing overall, together with the transparency reports they are required to produce about their algorithms: how they are enhancing them, how they are tweaking them. That should be the focus of the Bill, not intermediary liability. Intermediary liability is not an option.
As it stands at the moment, although the Bill purports to be a duty-of-care model, some aspects of it do come close to intermediary liability, and that is what worries me the most. That is the penalty aspect—the penalties that are enforced by the regulator, and we do not know how, as I said earlier.
Q31 Alex Davies-Jones: Lord Puttnam, you attended the International Grand Committee in Dublin that occurred during the 2019 general election. How do you think the UK Government’s approach differs from international partners to this issue?
Lord Puttnam: I would say that we are somewhat behind the consensus that existed on that Committee. Thanks to Damian Collins, I have been able to feed in and stay familiar with what is going on. We tend to be back markers.
Can I touch on the German issue? A long time ago, in 1973, I did two films on Germany from 1918 to 1945, looking at the rise of fascism. It does not surprise me at all that there is a reflex response within Germany to anything that could disturb the, I think, brilliantly built democracy that we helped to put together.
When we took evidence from around the world during our Committee, the most interesting was from Estonia. Estonia sees misinformation and disinformation as an existential threat. It sits right on the border with Russia and 20% of its population is of Russian origin. Its evidence, and the way it deals with this, sensitively and sensibly, was exemplary. I would recommend to the Committee that it is well worth reading that evidence. They were brilliant and thoroughly impressed our Committee.
So, yes, there are lessons to be learnt from everywhere. Singapore is quite interesting, in our terms probably slightly draconian. Canada has a very good grip on these issues. It is a very stimulating group to be among. Of course, what is important is that David Cicilline, who was the American representative, is now the ranking congressional figure on the investigations into regulation in the United States. So, it is not a limp Committee. It is a Committee with quite a lot of oomph to it, which is quite promising. If I can, I intend to attend further meetings.
Q32 Kevin Brennan: My colleague Giles Watling raised The Wall Street Journal article earlier on and it is only fair that we should give Facebook a right of reply here. I don’t know if anyone has read the recent blog post by the vice-president for global affairs and communications of Facebook, someone called Nick Clegg. Can I put this to David Puttnam? What he says in his blog post, “What The Wall Street Journal Got Wrong”, was, “These stories have contained deliberate mischaracterizations of what we are trying to do, and conferred egregiously false motives to Facebook’s leadership and employees”. He went on to say, “We fundamentally reject this mischaracterization of our work and impugning of the company’s motives”. What is your reaction to that robust response by the vice-president of Facebook?
Lord Puttnam: I think the vice-president is very, very eager to keep his job. I am sure it is well remunerated. I would reject his rejection. Interestingly, they themselves have appointed an ombudsman of sorts—a very, very senior group. Alan Rusbridger is on it; the former Prime Minister of Denmark is on it. One of the charges made is that they have deliberately not supplied that group with information that would be quite crucial when it is making decisions.
My argument about that group is that its members are themselves very highly paid. If you create a group of people for whom sitting on that ombudsman-type committee provides a very significant proportion of their total income, you are not necessarily going to get a wholly objective response. So there is something essentially flawed in all of Facebook’s response to criticism. It does not like it. It believes it can ride it out, and only legislators such as yourselves can do anything to stop it. This is where the buck stops. I do not think there is a cosy compromise to be reached. Basically, either it is a regulated company, just as you regulate energy companies and anything else, or it is unregulated because it is too big to deal with.
Q33 Kevin Brennan: To follow up on that, he goes on in his blog post to say, “At the heart of this series is an allegation that is just plain false: that Facebook conducts research and then systematically and wilfully ignores it if the findings are inconvenient for the company”. In the next paragraph he says, “The fact that not every idea that a researcher raises is acted upon doesn’t mean Facebook teams are not continually considering a range of different improvements”. That rather undermines, I thought, the previous statement he made. What is your reaction to that, Professor Renwick?
Lord Puttnam: Can I go first and then hand over to Alan, because he will probably give you a better, more balanced view? The answer is very clear. Follow the evidence. Ask the academic community whether they are getting increasing or decreasing access to the algorithmic systems that Facebook uses. If the consensus from the academic community is that Facebook has been very helpful and very open, everything I am saying is nonsense. If, on the other hand, as I suspect, the consensus from the academic community is that they are being blocked from getting the access they would like, I am afraid that what I am saying is absolutely correct and very frightening.
Professor Renwick: I cannot address that specific point because I have not sought such access. What I would say is that I am entirely open to the idea that the people running Facebook are very good, very well-intentioned people who are trying to do a good job, but the fact is that they have interests at stake and they are subject to the laws of human nature. We all are, and we all have to think about how we may be biased; they are among the people who may be biased. So, for any company such as this, it is necessary to have a proper regulatory system in order to ensure that those biases—even if they are wonderfully well-intentioned people—do not lead to harmful outcomes.
Dr Dias: I agree with what has been said before. One thing to bear in mind is that we cannot think of these companies as amorphous entities. They are different people and different teams, and sometimes these teams are purposefully isolated from each other. I have spoken to people working at Facebook, and also at Twitter, who are really well intentioned, really professional and really keen to make a difference. I am friends, for example, with Twitter’s human rights counsel.
These people are well intentioned but the problem is the higher-ranking leadership. That is a problem and there is a huge separation between what happens there at the top and what happens at the bottom. That is something that is important to bear in mind: that sometimes the big decisions don’t come from Facebook as a whole but they come from the top leadership. That is the main problem.
On the point that was raised earlier about regulation and freedom of expression: it is important to remember that regulation is not the antithesis or the opposite of freedom of expression. It is a necessary safeguard, because people need to have clear notice of how their freedom of expression is going to be limited. The more specific and the clearer the regulation, the better it will be for freedom of expression.
Chair: That concludes our session today. Lord Puttnam, Professor Renwick and Dr Dias, thank you very much for joining us today.