
 

Communications and Digital Committee

Corrected oral evidence: The future of news: impartiality, trust and technology

Tuesday 16 April 2024

3.40 pm

 


Members present: Baroness Stowell of Beeston (The Chair); Lord Dunlop; Lord Hall of Birkenhead; Baroness Harding of Winscombe; Baroness Healy of Primrose Hill; Lord Kamall; Lord Knight of Weymouth; The Lord Bishop of Leeds; Lord McNally; Baroness Primarolo; Baroness Wheatcroft; Lord Young of Norwood Green.

Evidence Session No. 11              Heard in Public              Questions 109 - 114

 

Witnesses

I: Freddie Sayers, Editor-in-Chief and CEO, UnHerd; Katie Harbath, Founder and CEO, Anchor Change; Professor William Dutton, Emeritus Professor, University of Southern California.

 

USE OF THE TRANSCRIPT

This is a corrected transcript of evidence taken in public and webcast on www.parliamentlive.tv.

 



 

Examination of witnesses

Freddie Sayers, Katie Harbath and Professor William Dutton.

Q109         The Chair: Thank you very much. We are now moving on to our second panel, where we are joined by three witnesses. I will ask each to introduce themselves very briefly, because we are running a little behind time. Let me start with our colleague who is joining us from the US, I believe. Would you like to state your name and your organisation?

Katie Harbath: Thank you for having me. I am the global affairs officer at Duco Experts, a technology consulting firm. Prior to that, I spent 10 years at Facebook on its public policy team.

The Chair: Thank you very much for joining us.

Freddie Sayers: I am CEO and editor-in-chief at UnHerd, which is a news and commentary platform that has somewhat taken off in recent years.

Professor William Dutton: I am emeritus professor at the University of Southern California. I was the founding director of the Oxford Internet Institute and was the first professor of internet studies at Oxford. Right now, I am an Oxford Martin fellow, a senior fellow at the OII and director of the Portulans Institute in Washington DC.

The Chair: We are very grateful to all three of you for being here. If our last session focused on operational issues, in this session we expect to get into more of a discussion about what I might describe as the political debate and discourse around the topic of disinformation. We will go through a range of topics: how that discourse has changed and been politicised, some of which you touched on a moment ago; how the media are responding to this; and, at the end, what the role of government is. Lord Knight is going to kick us off.

Q110         Lord Knight of Weymouth: Thank you very much for coming and giving us a bit of time. I am interested, just fairly simply, in getting your various takes on how the discourse around disinformation has changed over the last 10 years or so. I am a recovering politician. I am well aware of attempts to manipulate media. They have been around for a long time with propaganda, spin doctors, and so on and so forth. Yet it feels like something has changed. I will start with Bill, given your history. Maybe you are the right person to start with the history.

Professor William Dutton: I will try. I listened to the last session, so I will make two points. First, it is very seldom said but there is nothing new about disinformation. Truly, in the early days of the internet, in the late 1990s, as many of you may remember, it was usually referred to as a garbage heap. It was just full of trash. The job was to find a kernel of truth or an interesting titbit like a needle in a haystack. How do you do that? Eventually, people started creating lists of a few sites that were decent. Those became some of the early foundations of search engines. Then we invented Google and other search engines. Search engines enabled us to find good information among a dross of bad information.

Today, we somehow have this image of a totally edited internet that occasionally has a misspelled word, a wrong fact or a piece of disinformation. It is exactly the opposite. We need tools, technologies and mindsets to find the good information among much worthless information, whether irrelevant or factually flawed. Changing our mindset means really trying to focus on improving search, not demonising search.

The second point I would make after listening to the last session is about what has been the biggest change in terms of the politics and the notion of disinformation. I am now studying the impact of the Ukraine war. It is a game changer in dramatic ways. Some of what is going on in Russia now was typified in China before this, but was not paid attention to, because 90% of the filtering of information between China and the rest of the world is because of the language. We just thought, “What goes on in China will stay in China. It’s not really affecting matters in the rest of the world as much”.

With the Ukraine war, there is tremendous censorship of all media and complete elimination of any opposition statements. Even today, when “war” is used by some elliptically to talk about the Russian special military operation, I am told that you can still get arrested for using the term or for referring to the special military operation as a war. This is government censorship and government control of information, and its use in the name of cybersecurity. That has changed dramatically.

Just to touch on this briefly (maybe we can get back to it later): once national cybersecurity becomes an issue for the flow of information, we are no longer talking about an open global internet. We are talking about national sovereignty over information. This is some of the discussion that you are involved in: “What can we permit? How do we control this?” The day of the open global internet now seems to be a distant memory. More and more Governments are moving in to control information flows and security. In the same process, this is affecting the control of information.

It is a return to the premodern propaganda of World War II, almost. Just as you are going back into trenches, you are going back into old propaganda about Russia, but it is in a different form. Instead of trying to influence opinions, we are changing beliefs. Cognitive politics is arising, in which it is all about asking, “What is the truth?” What is the border of Ukraine? Is Ukraine Russian or a sovereign nation? These are cognitive politics that are going on. One country’s disinformation is another country’s truth. It is a very big game changer. Anyway, I will stop there.

Lord Knight of Weymouth: There are a couple of things I will probably come back on. Let us work through everyone’s initial views.

Freddie Sayers: Thank you very much. The question was whether it has changed in the past 10 years. I think the past eight years is the most relevant timeframe. It seems to me it is inextricably linked to the populist revolutions of 2016, both Brexit and Trump, and everything that has happened since, particularly the Covid era. You can chart this if you look at Google Trends, which is quite a useful tool for seeing how much people are searching. Nobody was really searching for “disinformation” prior to 2016. It was not a talked-about topic. Searches quadrupled during 2016 and increased more than 30 times between the middle part of 2016 and 2022. It is a relatively new talking point.

I have been in charge of UnHerd for the past five years. I see our role as a publication as facing both ways: towards the establishment, in having very high standards, meticulously fact-checking and making sure our work is high quality, but also towards the anti-establishment world, YouTube and the open internet. What I have really observed in the disinformation movement, if we can call it that, particularly during the Covid era, is that attempts to counter disinformation have themselves been very damaging. They have, I would say, exacerbated losses in public trust and fast-forwarded the collapse in trust in the media and in government. The backfiring, negative effects of every new initiative to counter disinformation are not talked about nearly enough.

There was talk in the earlier session about international actors and state actors. We should think about those in a separate category. During a war, or if a foreign country is attacking us, we would agree that that needs to be responded to very vigorously. But this concept that it is somehow the Government’s role to point out so-called disinformation from their own people, and that the term should be used in a political context between political opponents, is a new one. I am worried that the attempt to fix the problem is actually more dangerous than the problem itself.

Lord Knight of Weymouth: Could you give any examples? I am interested in whether anti-vax content during Covid is an example of that.

Freddie Sayers: The fusion of government messaging with the mainstream media during Covid and how strictly that was policed was important for us at UnHerd, because we were actually conducting interviews with experts who had all sorts of different views about lockdowns, vaccine mandates and the rest of it. It took off. There was a huge, underserved audience for responsible questioning of those issues. You were not finding it in the mainstream media.

I am certain, having witnessed it up close, that many thousands of people, who prior to the Covid experience were not especially politicised and did not really think about these things, have been radicalised during that process, because there was this sense of a single message coming out of all the television screens. It felt very dystopian and unnerving to people. It did not feel like a free media.

Big mistakes were made during Covid. I was pleased to see the House of Commons Culture, Media and Sport Committee, just a couple of days ago, request a report into the activity of the so-called government disinformation unit in relation to Covid. The creation and proliferation of those entities fans the flames of the conspiratorial worldview, because it gives legitimacy to the idea that there are faraway government-backed censors watching your every move and policing normal political discourse.

My point of view is definitely that, while recognising that the internet is open, and that disinformation or inaccurate ideas can travel and we should be observing that, we should be very careful about measures to start censorship, particularly if they involve the Government, because they do more harm than good.

Lord Knight of Weymouth: There is so much that I would like to come back to you on there.

The Chair: Let us resist the temptation to get into a discussion about the topics, because we have a lot of ground to cover.

Katie Harbath: I can just add a couple of points to what my fellow witnesses have said. First, a definite change in the last 10 years is the commercialisation of disinformation. This is now something that is available for hire. Even in 2016, we saw the story coming out of the US election about Macedonian teenagers running fake news sites to make money. It is important to make sure we have a broad lens when looking at the disinformation problem.

Secondly, what has also changed in the last 10 years is the number of online platforms in which this information can be spread. In addition to the big ones, your Facebooks, your Instagrams and your YouTubes, you now have TikTok. You have gaming platforms such as Discord and Twitch. A lot of this is happening over private messaging services as well. That can make it very difficult to understand the overall information ecosystem and how this information is being spread.

In addition, as my fellow witnesses mentioned, particularly here in the United States, the researchers and those who study this environment have been attacked politically. They have been brought in front of congressional hearings; they have been buried under freedom of information requests. This has had a chilling effect on the work to even understand what is happening. On top of that, the platforms have pulled back some of the access to data in order to understand what is happening on those platforms. Legislation such as the Online Safety Act and the Digital Services Act in the EU will potentially change that, but that remains a challenge for many of these folks.

I would add a plus-one to the challenges of understanding foreign versus domestic actors. At Facebook after 2016, we looked at this as coordinated inauthentic behaviour. We were looking at the behaviour of these actors, not necessarily at the content they were pushing out. I would recommend that as another angle on how to combat some of these problems.

Lord Knight of Weymouth: I will just come back on one thing, because we are a bit behind on time, and it is probably more for Bill and Katie, tempted as I am to talk to Freddie about stuff that is probably less relevant. Bill, you talked about that trend towards national control of information. How much, though, at the same time, do we have algorithmic control of information flows by big tech?

Professor William Dutton: I see Katie nodding her head, but I disagree. I think this is widely exaggerated. It is displacing focus on governmental controls, such as in Russia, in China and so forth. I did a study in 2017, after the 2016 election when I was based in the US, on the impact of filter bubbles and echo chambers, and to what degree they are shifting what people see online. The idea is that Google or other search engines would hide information from you, and you see only what you like.

Lord Knight of Weymouth: It did on anti-vax content and Covid, for example. It suppressed some content, which I approved of, but it did.

Professor William Dutton: Even if it did, that does not matter. We surveyed 2,000 people in seven countries, the largest countries in the EU and the US. Most people go to at least four different sources for information if they are interested in health or in politics, one of those being online. They will read the newspapers, they will watch television news, they will talk to family and friends, and they will get online. When they go online, if they are interested in politics or in their health, they go to four or more sites. That means that it is highly unlikely that they would be caught in a filter bubble, because they are going across media and across sources.

They are caught in their own confirmatory biases. This is the problem. I do not deny the problem of politicisation and confirmatory biases. It is not a problem of algorithms. It is not a problem of the technology. The problem is that we are the algorithm: we decide that we are going to look only at what we believe is the case. If you are a conspiracy theorist, you are the hardest person of all to change the mind of, because you will not look at alternative facts. It is not a problem of algorithms. In my opinion, and in terms of our research, it is a problem of the confirmatory biases of individuals.

As Katie was saying, we have many platforms and so forth. We have more diversity of information globally than in the history of the world, but we can choose what we believe in and what reinforces our own beliefs.

Lord Knight of Weymouth: Let us quickly hear from Katie, because I saw you nodding in almost a contradictory way.

Katie Harbath: I did not mean to, no. I actually agree with Bill on the lack of filter bubbles; I thought he was going to go in a different direction. I would say, though, that there are decisions being made by tech companies right now, particularly Meta, to reduce the amount of political and news content on their platforms. I do not know whether Bill or others have done research into this yet, because it is still somewhat new, but I worry about what that could mean for the overall information ecosystem. Something we need to pay attention to is less the filter bubbles and more who people are getting their information from, how we think about that broader ecosystem of news influencers and others, and what that might look like. It is an ever-changing situation. The decisions the companies make about what they prioritise, what is in their algorithms and what is being surfaced are still things to be paid attention to, but the lens should be much broader. It is not the filter bubble problem. It is much bigger than that.

Q111         Baroness Harding of Winscombe: I just want to go back to the topic that we finished the last session with Professor Martin on, which is how you have a discussion about misinformation and disinformation without it being immediately politicised. To start with you, Freddie, how do you respond to the concern that the concept of misinformation or disinformation has become completely politicised? If it has, how do we have a sensible conversation about it?

Freddie Sayers: It has become politicised, and that is very worrying. The fear on one side of that argument is that, under the banner of the word “disinformation”, essentially political censorship is allowed to take place. Both that and the ability of big monopolistic tech companies to control the conversation are the dangers that the Government should be worried about, finding measures to try to break those up and return to a more organic, pluralistic conversation.

I would love to provide one specific example, if I could, of just how embedded the danger of politicising this movement has become. We at UnHerd decided to take ads last year for the first time. We have a subscription business, and we did not have ads, but we decided to take them. We went to three successive ad agencies, each of which was really excited about our product. We have a large audience in the UK and the US. They are very influential people. The numbers they were expecting us to get were very significant but, each of the three successive times, we got only a tiny fraction of what was expected. This was a real mystery. The ad agencies themselves were confused by it.

Eventually, with the third agency, we uncovered that, in the machinery of delivering online ads, there exist gatekeepers, which call themselves ratings agencies, that rate websites for disinformation, security and risk. One of those agencies, called the Global Disinformation Index, had given our website a “dangerous” rating. I should say that another of them, NewsGuard, gives us 92.5%, which is more than the New York Times. That shows how subjective this is.

The Global Disinformation Index, by the way, is supported by money from this Government, as well as the European Union and the US Government via the Department of State. These kinds of bodies have become detached and unaccountable actors that can literally turn off the business model of news websites by turning off their ad supply.

We managed to get through eventually and asked why this had happened. “On what grounds are you putting us on this danger list?” They came back and said, “The rating will not change as it continues to have anti-LGBTQI+ narratives … The site authors have been called out for being anti-trans. Kathleen Stock is acknowledged as a ‘prominent gender-critical’ feminist”. Their rationale was quite simply the fact that we published Professor Kathleen Stock, an esteemed philosophy professor, whose work has been really important and played no small part in the important discoveries of the Cass review that we learned about last week. The view of whichever anonymous researcher worked for this unaccountable body was enough to classify us as disinformation. That entirely destroys one of our core revenue streams.

I offer that as just one example, which is current, because it is happening right now. The Government are paying for this organisation right now. Under the umbrella term of “disinformation”, there has been this huge blossoming of not-for-profits, companies and, indeed, government agencies that can take very politicised views on things without proper accountability. We should try to stop that.

Baroness Harding of Winscombe: That is a hugely powerful example you have just given. In your earlier answer, you acknowledged that there are, as we heard from Professor Martin, nation states acting in this space. How do you focus on the very real nation state issues while at the same time protecting free speech, as highlighted by the example that you have just given? How do you square that circle?

Freddie Sayers: I know we do not have very much time, so I shall give a short answer. Acting against your own people, whether it is observing them, defunding them, censoring them or fiddling with their business models, should be avoided in almost all scenarios. Governments can make a very clear distinction. Yes, if it concerns foreign actors, let us get involved, but we should be extremely careful about anything regarding our own citizens, because it will magnify paranoia and distrust.

Baroness Harding of Winscombe: Professor Dutton, how do you square that circle of protecting free speech but at the same time being able to have a fact-based discussion about misinformation and disinformation?

Professor William Dutton: Let me give you a different answer, which is that you should focus more on media literacy. By that I mean getting away from a discussion of who knows the truth or who is going to determine the truth, whether it is a platform, a Government or an agency, and instead educating people, from children to senior citizens, about fundamental aspects of media literacy, such as the importance of avoiding confirmatory biases, of being open-minded and of civility online. What is the etiquette of civil behaviour online? How do you use search effectively to fact check and to make sure that you look at alternative, contrary opinions? How can you empower yourself online? This has a very positive role.

The whole focus of my last book is that the internet is empowering individuals as never before, and they can do positive things. Think of Greta Thunberg. Think of Martha Payne, a nine-year-old Scottish girl who changed the way school lunches are delivered across England and across the UK as a whole. Schools do not teach children that. Schools teach them only to be afraid of the internet and to help their parents keep them away from it. Few schools try to use the fifth estate as an approach, meaning children cannot actually see that they can get involved in a civic activity online or that they can make a statement in support of an issue that they are concerned about.

There is also the rise of cognitive politics and being aware that people are trying to influence your beliefs. A lot of countries are pursuing this. I was at an online conference in Bangkok recently. That is exactly one of its foci (I was plotting that), and that is a way of somehow depoliticising it. It is about trying to enable individuals to use the internet more effectively and for the good of their own information content.

Baroness Harding of Winscombe: Katie, would you have anything else to add on how we manage this trade-off between free speech and tackling potential real misinformation and disinformation risks?

Katie Harbath: I would just add to what Professor Dutton said. These two things are inextricably linked. Nation state actors will exploit chaos and polarisation inside another country and use its own citizens to help spread their information. It is often not clean, because sometimes you do not know right away who the actor behind that information is. In addition to media literacy, we need more transparency about what is happening in our information ecosystem, and then a series of checks and balances, so that people can hold the tech companies, the Government, the news organisations and civil society accountable for their decisions. That is the only way for us to hold one another accountable and have these debates about the impossible trade-offs that come from having these conversations.

Baroness Harding of Winscombe: What would you want to be more out in the open? What would holding those tech companies to account look like?

Katie Harbath: It is twofold. It is about understanding more about their decision-making processes: how they determine what their policies and values are, and how they choose what to prioritise or deprioritise in their algorithms, as Meta is doing with news and politics. It is also about understanding how information spreads across platforms. Facebook knows what happens on the Facebook platforms and X knows what happens on its platforms, but you do not necessarily see how things travel across all of them. By having more of that information available, and funding for researchers to analyse it, we can have a better understanding of what loopholes and other things these actors might be abusing, in order to try to close them off.

The Chair: That is probably a neat segue to the Lord Bishop, who wants to pick up on the way media organisations are responding to this.

Q112         The Lord Bishop of Leeds: I am quite taken by what Freddie Sayers said earlier about the unintended consequences of focusing on disinformation. I would be interested, first of all, to hear what you would do, if you are not going to run that risk of creating what you are trying to oppose or trying to deal with. Secondly, what responsibility do media organisations carry for addressing misinformation, disinformation, checking and so on?

Freddie Sayers: It should be almost entirely the responsibility of media organisations. It used to be considered an intrinsic part of journalism to do fact checking, and it still should be. I do not know why it is now considered a separate skill set. Every journalist should be a good fact checker. It is absolutely the responsibility of a publication to check the quality of the information it is putting out. If it puts out shoddy information, it will become less trusted by its audience. We must credit people with the ability to be more sophisticated. Already, people are becoming so.

As for what we should do, less is more in terms of creating new units. This goes for big corporations as well as Governments. Every time you create an entity with an Orwellian-sounding name, you create a new risk that everyone is going to suspect you of censorship. First of all, ask, “Do we really need to do this?”

Secondly, can it be done in a more organic way? Community notes were mentioned in the previous hour. That is an interesting example. It is a system within the new Musk-controlled Twitter. It is by no means perfect, and I am sure it frustrates a lot of people, because when you put out a tweet, if enough people think it is inaccurate, you get a note under it saying, “This has been questioned”. But that is a more realistic and honest way to talk about disputes about facts, because when you see one of those, you think, “This is the claim and I understand that it’s contentious. I do not know whether it’s definitely wrong”. Crucially, it avoids the taint of officialdom. There is no sense that it comes from the centre. In fact, Elon Musk himself is subject to fact checking on his own platform, which is good to see. The more organic and bottom-up that process can be, the more trust it will have.

Thirdly, if any kind of measures need to be instituted to censor things or take things down, make sure there is a transparent process attached, because a lot of the alienation and paranoia has come from the sense that there is never an explanation given, things are mysteriously removed and you have no recourse. The liberal instinct is that, if you are judged to have committed disinformation, you should be able to appeal, but it is very rarely possible. These things happen at a remove. If you are going to act as a court on people’s pronouncements, you need to offer a clear process through which people can appeal. Those would be my suggestions for how better to treat the disinformation problem.

Professor William Dutton: I disagree with you on the idea that the media are going to fact check social media, the internet and so forth. It is often the opposite: networked individuals are fact checking the media. This is a problem that crosses all media: the internet and the networked individual, but broadcasters and newspapers as well. As they are stressed financially, they are moving to a model that is not about quality but about engagement. How do I engage my audience? To engage my audience, I know that I have to feed them what they want to hear; back to the confirmatory bias.

Fox News is the poster child for this kind of behaviour, but almost everyone is adopting a Fox News model. Now we have the data to do it. We have big data that allows us to profile our audiences. Then we can decide what to feed them and what content, whether liberal, conservative, conspiracy theories or whatever, will enable us to engage our audience, to get them to buy our paper, subscribe to our website or blog, and so forth. We have to make sure that we continue to have a diversity of information so that we can fact check each other, but we also have to try to reinstate quality.

I do not want to disagree with Katie Harbath, but I will. If you have transparency in search engines, you basically allow bad actors to figure out how to game your search engine. The whole idea of search worked because it was about trying to figure out which sites are of the highest quality, so that we can point people to them. Transparency taken to the extent that I thought you were suggesting would probably undermine search, because it would basically give everyone the information they need to optimise for it. It would just feed into the engagement strategy. We should move away from engagement, focus on quality, and get individuals literate in how to use the internet and what is going on in their world, so that they can grapple with it.

The Lord Bishop of Leeds: Can I ask Katie Harbath just to respond to that? Then I have one very brief follow-up.

Katie Harbath: Professor Dutton makes a very valid point. Again, I call it an impossible trade-off: finding the right balance between having that transparency and not providing a roadmap, because we know bad actors will go right up to the line. Oftentimes, what we at Facebook called borderline content is the most difficult to know what to do with. Finding the balance and having these conversations is really important to understanding where you might draw those lines.

To add two quick points, one is about training for journalists and news organisations in how they might be targeted by actors trying to spread disinformation, to get them to write stories on it, to make them think that they are somebody they are not, to seed them with stories and stuff like that. We saw that in 2016 and continue to see it, so that is important. Secondly, I have been quite frustrated lately at seeing some sloppiness, particularly in headlines and other things, mixing up the words “misinformation” and “disinformation”. They are two separate problems, so it is very important to think about that.

Especially in the age of AI, a lot of apocalyptic terms are making it seem like anything that people see on the internet could be false. That contributes to people’s distrust in the overall information ecosystem. We need to think about how we separate the signal from the noise and give people the right perspectives around what is happening and what those impacts are, which goes back to the importance of media literacy too.

The Lord Bishop of Leeds: Just very briefly, an old friend whom I had not seen for a long time, who works for one of our major media organisations here, told me that it is good to work for an organisation that does good in the world rather than one that, contrary to journalists’ protestations that they are all interested in the truth, tells lies, and lies repeatedly. I hear what you are saying, Freddie, about journalists needing to check their own facts, but they are in a different environment now, where the prioritising of clicks over truth seems to have won out. How do you deal with that? You could do it in your particular organisation, but what about across the board?

Freddie Sayers: I definitely would not deny that it is a very rapidly changing and fraught time. Many media business models, frankly, are not working. We know, as all our colleagues in the industry do, that you have to get attention to ultimately convert people to subscribers, or whatever your model is. What people call chasing clicks ultimately has to be part of the recipe.

I would say two things. First of all, I do not think that it is for the Government to come in and try to fix it, basically because, if you start to try to centralise the narrative, you are going to create a whole load of secondary problems. Secondly, I take a more optimistic long-term view. Yes, at the moment, people are sucked into echo chambers and are having their opinions reinforced but, as Professor Dutton said, people increasingly look to multiple sources. The very young people I speak to are quite sophisticated about the information that they receive. They are very sceptical, already anticipating questions of AI and so on.

In the long term, there must be a value to truth, and you are going to have an advantage over your friends and colleagues if you have accurate information. There will be a demand for that in the long term, and the business model will surface, but that is not to say that it is not going to be a turbulent time in the meantime.

The Lord Bishop of Leeds: So it is about preventing scepticism, which is good, from becoming cynicism, which is destructive.

The Chair: We will take a brief supplementary question from Lord Kamall, and then I am very conscious of time and votes expected, so I will ask you to be brief and direct it to perhaps just one of the witnesses.

Q113         Lord Kamall: This is to Professor Dutton. If the question justifies a longer answer, maybe you could write to us afterwards with evidence. I was very interested when you mentioned quality in response to Freddie Sayers’ suggestion that journalists should be checking the quality of what they do. What do you mean by “quality”? Is that not a subjective term? Do you mean “highbrow” as opposed to “tabloid”? People will have different views of what “quality” is.

Professor William Dutton: I was referring to it in the context of, for example, search. Over 5 billion people are online. There are not 5 billion people posting every day, but billions do, and there are multiple websites. If you have a search engine, finding good information is about studying the links within those sites. There are algorithms, which are very valuable, for identifying a site that other people use as a reference. Such a site draws more people, for example, but it is not just an engagement algorithm. It identifies the sites that are probably the most valuable, and that is “quality” in the search context.

Search is also looking for more money, so you have a huge list of sponsored sites, and I wonder whether some people ever get to the organic search results, which are the highest-quality sites of the 80,000 to 100,000 that they give you to look at.

Q114         The Chair: I will wrap this up with a final question. Mr Sayers, you have been really clear in your evidence about there not being a role for government here, but you did touch in one of your answers on something called the Global Disinformation Index, the fact that it is receiving government funding at the moment, and the impact of an organisation such as that on the viability of your own business model. I have heard loud and clear that you do not feel that there is a role for government in tackling the problem, as it were, of people’s confidence in news. Let us put it that way, but tell me if I have misunderstood what you have argued today.

What I have heard from you is a clear distinction. While government has a valid role in dealing with foreign actor-type activity, which we heard about from Professor Martin, because that is a threat to us as a nation state, when it comes to the issue of disinformation that arises from discourse among our own citizens, you do not see that there is a role for government in that. I get your general argument, but is there anything specific that you would identify that the Government ought to be doing and that we might not have touched on? Is there something specific that could make a difference if the Government were to intervene in some way?

Freddie Sayers: The Government could stop funding these kinds of organisations. That is an easy win.

The Chair: I am sorry to interrupt, but you talked earlier about a disinformation movement. Clearly, that organisation would qualify for that category. How well known are the different organisations that are involved in this sort of thing?

Freddie Sayers: They are very obscure and very few people know about them, but the field has really blossomed over this period. I would like to see those organisations, in so far as any of them need to exist, held to a much higher standard: putting an emphasis on bipartisanship and making sure that they are not pushing a particular ideology, which happens all too frequently. Generally, I would be sceptical of the attempt by what I would call the disinformation movement to draw government in.

We saw the effect of it, for example, with Twitter during the Covid era, which was recently exposed when Elon Musk bought it. Once you have a cosy, direct channel between government and big tech or big media sites under the guise of safety, dealing with disinformation and things that sound like very responsible, appropriate government activities, it very quickly escalates to asking, “Will you please arrange a more helpful message for us?” That is very damaging. We should try to re-establish the principle of separation between the media and these big tech companies.

The Chair: We know about the Twitter files, as you say, because there was that exposure in the US. There are other examples in the US that have exposed some kind of involvement of agencies in ways that perhaps were not known about before. Is there any example like that in the UK that you have come across?

Freddie Sayers: The example that I began with is the clearest, which was the Covid era. We should all look back at that era with sobriety, because, in the atmosphere of fear, which was completely understandable, government messaging and media narratives merged to a very close degree, not only at the BBC but among the mainstream media generally. We saw WhatsApp messages from Hancock and people inside the Government saying, “We want this on the front page tomorrow”. It was all under the guise of safety—“We need to communicate this safety message”—but it ended up being that the Government were just dictating to the media, which was really damaging. That, to me, is the most salient example.

The Chair: I am conscious that the Minister is now on his feet, and we are about to get a series of votes. Ms Harbath and Professor Dutton, is there any final comment that you wanted to make on the topic that I have just been covering in terms of government role?

Katie Harbath: I have just one, if I may. We need to be very careful about having extreme separation between the government and these other entities, especially when it comes to understanding the threats that come from nation state actors. I wrote a piece about jawboning, if folks would like to dig into this a little more. Information sharing is still extremely important as we go into all of these election cycles, so I would caution against an extreme separation.

Professor William Dutton: Very briefly, if I can characterise it, the Government are going to get really involved, much more so than ever before.

The Chair: Do you mean they have been or they should be?

Professor William Dutton: They are and will be far more involved, because platformisation of the internet is a gift to regulators. When your constituents say, “Do something about what is going on online”, and you tell the regulators to do something about it, now they can, because of the duty of care notion. These platforms can be sued and lose a lot of money. They can face criminal indictments. Therefore, they will over-regulate. They have already started to over-censor and over-surveil users so that they will not get fined or get a criminal record.

The Government are not directly involved, but they are, basically, leaning on the tech platforms to regulate for them. They can now, because of platformisation, so there is a train wreck coming in terms of domestic control of information. We do not want to get derailed but, because of the international nature of this, we need to focus on what the UK in the Online Safety Act, and the EU in the Digital Services Act, have created, which is a system that is likely to lead to over-regulation, over-surveillance and over-censorship of the public.

Freddie Sayers: I agree with that. Could I make one very quick final remark? Just to put it on the record, I do not think that there is no role whatsoever for government. In an example such as a massive NHS data breach, or some great big international hack that the security services became aware of, it would be entirely appropriate for a Minister to pick up the phone to the editors and say, “Do not touch this. We must get rid of it”. But those communications should be rare. There should not be a revolving circle of people moving between them and a constant discussion about what the Government want to see in the media.

The Chair: It is about being clear on what is a legitimate, proper dialogue for genuine public benefit as opposed to something that starts to become controlling. Thank you, all three of you, very much for joining us today and for giving up so much of your time. I am sorry that we have had this pressure of a vote expected any time at the end of the session, but I am hugely grateful to all three of you. Thank you very much indeed.