Select Committee on Communications and Digital
Corrected oral evidence: Freedom of expression online
Tuesday 16 March 2021
Members present: Lord Gilbert of Panteg (The Chair); Baroness Bull; Baroness Buscombe; Viscount Colville of Culross; Baroness Featherstone; Baroness Grender; Lord Griffiths of Burry Port; Lord Lipsey; Lord McInnes of Kilwinning; Baroness Rebuck; Lord Stevenson of Balmacara; Lord Vaizey of Didcot; The Lord Bishop of Worcester.
Evidence Session No. 18 Virtual Proceeding Questions 153 - 159
I: Peter Wright, Editor Emeritus, DMG Media; Lizzie Greene, Legal Adviser, DMG Media; Matt Rogerson, Director of Public Policy, Guardian Media Group; Gill Phillips, Director of Editorial Legal Services, Guardian News & Media.
USE OF THE TRANSCRIPT
This is a corrected transcript of evidence taken in public and webcast on www.parliamentlive.tv.
Peter Wright, Lizzie Greene, Matt Rogerson and Gill Phillips.
Q153 The Chair: Welcome to our second set of witnesses today. Peter Wright is editor emeritus at DMG Media and Lizzie Greene is the legal adviser. Matt Rogerson is director of public policy for the Guardian Media Group and Gill Phillips is director of editorial legal services for Guardian News & Media. Thank you all very much indeed for joining us. As there are four witnesses from two organisations, can I ask you to determine for yourselves who is best placed to answer each question, so that we have one voice from each organisation on most subjects, unless your colleague wishes to add anything?
Today’s session is broadcast online and a transcript will be taken. You have been following our inquiry into freedom of expression online, which follows on from our most recent inquiry on journalism and the news media. You will have a perspective on a wide variety of issues. We are going to focus today on the issues around freedom of expression. Can I kick off by asking each of you to very briefly introduce yourselves and your respective roles and give us a brief overview of your perspective, from where you sit in your respective organisations, on freedom of expression online? Then we will move on to some specific question areas from members of the committee.
Peter Wright: I am editor emeritus at DMG Media. We have three large online news titles. Clearly, for the purposes of this committee, the focus is on freedom of expression as it is likely to be affected by forthcoming online safety legislation. In a very tight nutshell, my view is that I understand the need to police the internet. There are dreadful things there and it is right that the Government and the law should take steps to deal with that.
However, I noticed that the previous witness was talking entirely about illegal content. The thing that worries me and other journalists about online safety legislation is that it also covers content that is lawful but deemed harmful. In my view, that raises very difficult questions. We have not yet seen any attempt to define or give examples of what content is going to be considered legal but harmful.
I am also very concerned that this task is going to be delegated to commercial monopolies. I am not sure that Google and Facebook have either the skills or the right incentives to make wise decisions on this matter. For that reason, we have been pressing very hard with the DCMS for an exemption from online safety legislation for news publishers. We would define that in the widest possible way to include any journalistic organisation that meets a set of simple requirements about having a named editor, a complaints process and taking legal responsibility for what it publishes.
The Chair: There is plenty there for us to unpack and come on to in a moment.
Lizzie Greene: I am a legal adviser, so I work within our editorial legal team. My role is to ensure that as much of our journalists’ work gets to the public as possible, within the limitations set by criminal law, civil law and the regulation we have signed up to through IPSO. My primary concern would be to ensure that that process is not affected by additional layers of regulation ostensibly aimed at online platforms. I share the concerns that Peter expressed about asking private companies to police legal but harmful communications between individuals.
Matt Rogerson: Good afternoon. Thanks for having me again. As for my view on freedom of expression online and our view more broadly, we all know that the internet has fundamentally changed the way that people express themselves and interact online. News organisations, which were once gatekeepers of news information, are no longer. The global tech companies are the gateways and their decisions impact on freedom of expression in society. Balancing freedom of expression with the capabilities of those global networks is clearly a huge source of tension for societies across the world.
At present, the legal framework that governs free speech online does not encourage consistency in decision-making by the platforms. At the same time, it has enabled a business model that allows the platforms to publish a vast amount of content, collect vast tracts of consumer data, inside and outside their walled gardens, and generate huge amounts of targeted advertising inventory, from which they profit massively. They take minimal responsibility for both the content and the advertising that they publish. They use largely automated systems to determine what content is shown to whom, what ads are served, where articles appear and what price is achieved for inventory, all of which is done in a largely opaque way.
I was struck by some evidence to one of your sessions a couple of weeks ago, I think, where one of the academics highlighted the degree to which fairness, justice and equality in this world are enabled by the ability to compare outcomes with those of others. In a world of personalised news feeds, personalised product offers and redacted commercial data, such comparisons have become increasingly tricky. That has reduced trust and transparency in the digital economy.
Businesses and consumers have been left largely in the dark about how our content and posts are treated and how that content is monetised. The opportunity with online harms and digital markets unit legislation is to shine a light on how and why content is served, and to understand why and how commercial outcomes have occurred. Once we have that information, we are better able, as businesses and consumers, to establish whether platform behaviour is fair, just or equal. I hope it came through in our response that these two bits of legislation could work together for consumers and businesses to put in place standards and expectations against which dominant online platforms are judged and held to account.
Gill Phillips: Thank you very much for having me here. I will try to be short, in the sense that my job is very similar to Lizzie’s. I have been at the Guardian since 2009. I am a qualified solicitor. Before I came to the Guardian, I had done stints at the BBC, the Times and the Sunday Times, and the Sun and the News of the World. I am a bit long in the tooth now in this game.
I echo most of what other people have said. On areas of difficulty, we have moved on from the time where we were okay with Facebook, the big social media companies and online companies generally being private companies that were controlling their own spaces by their own terms of reference. Clearly, that does not work now. It causes problems, as Matt and others have said, of consistency. It causes jurisdictional comity problems in all sorts of ways as to who is in control, following what set of laws, where, when and how.
There are other difficulties. Peter has highlighted one: definitions. What do we mean by offensive speech or hate speech? What is harm? All those things are really tricky. Anonymity is a really difficult area online. There are lots of old things that one can say about the benefits of anonymity for people being able to speak, but it comes at a price. The other big difficult area is opinion. A lot of what we do not like is opinion, and there is a lot of latitude legally around opinion. Those are just some of the difficult areas, to which I am not necessarily going to offer solutions.
Q154 The Chair: Let us try to solve as many of the issues as we can. We will be seeing a range of witnesses, so do not feel the burden to have definitive answers to all our questions.
The first question is about the role of social media companies, and the way in which they present and determine the reach of news reports online. What is your view of Facebook and Twitter’s response to the New York Post story about Hunter Biden? In restricting access to that story, did the major platforms threaten media freedom?
Gill Phillips: We published quite a lot of articles about the story at the time. In our view, it was a not very well-sourced and possibly quite inaccurate story. That puts a burden on platforms. I understand that, in that case, Facebook took the article down because it sent it off to its third-party fact-checkers. On one level, one would commend it for that, because that is trying to avoid disinformation. On the other hand, it plays into a political argument about it taking one side or another at a particularly sensitive time in the election. Twitter, likewise, took the story down because it said it had personal information in it.
It is a very good example to put before us of an article that, I would very much hope, neither the Guardian nor the Mail would have put out, because they would not have felt it was properly researched in the first place. Irrespective of that, once something arrives on the social media platforms, the speed and scope with which things go, and the complete lack of any boundaries around that, are problematic. It highlights in a way the difficulties that the platforms have. They can ignore it altogether and leave it; they can do what they did, which is dial down and try to manage it a bit; or they can completely purge it and take it off. This seems to highlight those three stark choices.
Matt Rogerson: I have sympathy for their predicament. It was a story published very deliberately at a very acutely sensitive time in the US election cycle. They had a very difficult decision to make. Where it relates to the UK environment, where there are laws and levels of self-regulation in place that would mean such a story was not published, is in the fact-checking element of this story. At the moment, the fact-checking element of the social platforms is not in a formal place. It all feels fairly ad hoc. If an article is subject to fact-checking, it is very difficult to get that check removed and know whom to speak to about that process. There is work to be done to formalise that process and make it much more rapid and efficient.
There is another point, which we mentioned in our submission: the content was not taken down; it was decelerated. There is a difference between that and the decision by some hosting providers to just cut services to Parler. I am not a user of Parler, which will not surprise you, but the decision-making by the hosting provider was inconsistent. We understand that it took it down because Parler was involved in the 6 January riots in the Capitol. But news reports suggest that Facebook was also used as a vector for the riots on 6 January, and yet the hosting services continued to be provided to Facebook but were taken down for Parler. That lack of consistency in decision-making is really tough for businesses, especially start-ups, to get used to.
Peter Wright: I take a slightly different view, although I would start at the same point as Gill. As a former editor, it is the sort of story that sometimes arrives during an election campaign and, frankly, your heart sinks. The sourcing was, in the first place, very obscure. It could have been a hoax. It could have been a hack. It could well have been stolen property. Not only that, but you do not know where this laptop first got into the public domain. It is handed to you by the personal lawyer of the incumbent president, who clearly has an axe to grind. There are a hell of a lot of questions that the New York Post would have to ask. It would have to take responsibility for getting the right answers to those questions.
I do not think that it is the position of Twitter or Facebook to attempt to answer those questions on behalf of the New York Post. If the New York Post did not ask those questions or got them wrong, it has editorial control over the story and it takes the consequences for the decisions it has made, which could mean the job and career of the editor in some circumstances.
In the event, the whole thing ended up as a bit of a damp squib. Twitter backed down after getting something of a roasting from a congressional committee. I do not think the story made any difference to the outcome of the election, and possibly gained far greater credence by Twitter and Facebook trying to restrict it than it would have done if it had just been allowed to run its normal course.
Lizzie Greene: I agree with the concerns that the others have raised about the story itself. On the broader principle of whether it is up to the platforms to decide that, I would observe that there is a lack of transparency on how those decisions are made by the platforms, if you compare it to, say, a court ruling or an IPSO ruling, where you would get clear explanations. It is rightly very difficult to get a ruling pre-emptively preventing publication. For example, it would be very difficult to get an injunction preventing publication of an article on the basis that it is defamatory. It would be very concerning if the platforms emerged as an alternative route to try to prevent publication of news stories, where you do not get the transparency and consistency that you would through the courts.
The Chair: Peter, Matt mentioned the role of fact-checkers. How comfortable are you with the role of fact-checkers? We saw fact-checkers in the inquiry. In most cases, their argument was that they are not about right and wrong. They produce wider context around a story to enable readers to make up their minds about the veracity of the story. They do not always see it as their role to absolutely determine whether or not a story is completely accurate. None the less, if this story was taken down on the basis of the finding of a fact-checker, is that a worry for you?
Peter Wright: I am afraid that fact-checkers do not necessarily have all the facts, do they? Every controversial story I have published has come with a great deal of information that you cannot publish, either because there simply is not space or because it is provided to you in confidence. I have not read an account of the New York Post story that actually gives the full reasons why it felt confident to publish it. But they are professional journalists and they would not just have published it because it had been handed to them. They must have had discussions, both about the sourcing and the public interest. A fact-checker does not necessarily have access to that.
Gill Phillips: There is a big issue around transparency: if they are going to do something, why are they doing it and on what basis? We rarely get to know any of that.
Second-guessing what the publisher has decided to do is very tricky. If the publisher has decided to publish this piece and has taken its editorial decision to do so, it is somewhat worrying that someone else, for reasons you do not know, who does not know anything about the background to it or the decision-making process, and definitely has a different take on what public interest is, takes these things down. There is a big problem of censorship there, if you are not very careful.
The Chair: That was very interesting. Let us move on to the online safety Bill and its impacts in particular on the media.
Q155 Baroness Bull: We have heard in your opening statements some concerns about online safety legislation and how it can possibly navigate some tricky and still contested terrain, particularly around what is harmful and what is harmful but still legal. The Government’s response to the online harms White Paper noted the concerns raised during the consultation about the possible impact of legislation on journalistic content and media freedom. They committed to include “robust protections for journalistic content” in the online safety Bill. I would like to ask each organisation, from your perspective, what form these robust protections should take.
Matt Rogerson: I think there is fairly broad agreement in the industry about what would be a good outcome, and Peter will correct me if I get it wrong on that basis. Where journalism is from trusted news sources, which would be defined in legislation in some way, and is distributed through search and social platforms, there should be a presumption that the platforms will not be in the position of determining whether it should be taken down. Those should not be subject to the obligations that were put in place through the wider legislation.
If platforms were notified of a breach in relation to a particular piece of journalism or to user-generated comments connected to it, there should be a process in place to notify the publication in question. Then there should be clear and transparent timelines and processes in place to enable the publication to appeal against any decision by the platform. That identification of news sources is not really done on platforms at the moment. There is a great reluctance to identify what are known as trusted news sources. We think it can be done, and that should then establish the presumption against blocking.
Peter Wright: I would be a bit more direct than that. I know there are various proposals floating around at the moment, but the only way to achieve this effectively is for the legislation to include a complete exemption for trusted journalism, backed by penalties if it is breached.
I am afraid that the only way the platforms can discharge their duty of care is by using their algorithms. There are very large problems surrounding algorithms. First, they are very ineffective, blunt instruments; they largely work off key words. Secondly, they are used by the platforms to further their own commercial purposes and even, we suspect, sometimes their political purposes.
I suspect that it will not be a matter of content being challenged after it has been published, otherwise the whole purpose of online harms is being subverted. In order to avoid the draconian penalties that they face, the platforms will set their algorithms to prevent content being published. In that context, I am afraid that an appeals process is not any good. The shelf life of news is incredibly short. If a big story breaks, and 20 to 30 different news outlets are covering it and the platforms decide to block three of them, the story may have moved on within an hour or two. There is no appeals process you could put in place that would repair the damage that had been done.
On top of that, we already know that our own news websites are out of scope, because we are not platforms. What possible justification can there be for platforms blocking a piece of content that is freely available to read on our own website? It just does not stand up to logic, I am afraid.
Baroness Bull: Can I follow up on a couple of things? One is the concept of trusted journalism at a time when we know that journalism, like politicians, is not very trusted. How would we get to that definition? The second is that tricky “lawful yet harmful” concept. You said in your introduction, Peter, that you have not seen any attempt to define this or give examples. Is one of the problems that it is always going to be living on shifting sands? Among different cultures and different generations, different things are considered seriously and emotionally harmful. That terrain is going to be shifting all the time. I appreciate there are two questions there, one about trusted journalism and one as to whether we will ever be able to draw that line to say what is legal but harmful.
Peter Wright: To deal with the second question first, we know what is legal and what is not. That is simple. What is harmful is a subjective judgment. The most often quoted example of legal but harmful content is anti-vaxxer content. There is of course an argument that discouraging people from having Covid vaccinations threatens harm to the whole community. On the other hand, you have the dreadful possibility that members of the public or a journalist, or a journalist quoting members of the public, might have detected a pattern of dangerous side effects to a Covid vaccine that the medical profession has not detected, or that it does not want to detect because of the vested interests in the vaccine programme working, which we all have to acknowledge.
I am in the fortunate position of not having to define “harmful”, but the danger is that it will mean what is harmful in the view of either the Government or whatever the current orthodoxy is. That will deny voices to people who might challenge it. One of our duties as journalists is to look for the voices that are not being heard. We sometimes take risks and get berated for doing just that.
On the definition of “trusted journalism”, there is a debate going on about this. You have to look to a series of indicators. Are they, either individually or working for a publication, taking legal responsibility for what they publish? Does it have a code of practice that it requires its journalists to follow? Does it have a named editor? Do you know where its ownership lies? There are a whole series of tests that can be applied. I agree it needs very careful thinking through. You have to be awfully careful that it does not become a licensing system, but there are means to do that. At the moment, there is a long-standing system for issuing press cards to identify individual journalists that works on just that basis.
Matt Rogerson: It is not about whether I, as an individual, trust one publication over another. I suppose the problem at the moment is that organisations are able to register as news providers on search and social platforms without any checks being put in place as to whether they are, or whether, as Peter says, they have a complaints process, an address in the UK or an address anywhere. I am talking about the trusted markers that show that this is a proper organisation that does journalism, has a code of practice, takes complaints and makes changes to stories where complaints are made. Those are the indicators I am talking about in relation to trusted news sources.
Q156 Baroness Featherstone: The Government have acknowledged now that media freedom will be under threat if journalistic content is not given special protection. If they did the same for politicians, there would be uproar. Can any regime from which journalists need to be protected be compatible with users’ freedom of speech?
Peter Wright: The answer is not really, truthfully. I am cognisant that there is some very unpleasant content online, which I am sure causes harm to individuals. I acknowledge the attempts of the Government to do something about that. My own preference would be that those harms were properly defined in law and that, where content is harmful, it is also illegal, so everybody knows for sure what we are talking about. The law, whether criminal or civil, should have proper resources to deal with it.
Baroness Featherstone: Judging from the current rate of prosecution, it does not.
Gill Phillips: From my perspective, I am with Peter on concerns about lawful but harmful speech. I worry about the precedent we set. You see already around the world what look like perfectly balanced systems being used and exploited. The definition of a terrorist used is anyone who opposes a Government. A great deal of care and caution has to be exercised in that whole area. As Baroness Bull said, a lot of it is very subjective. That is why it is so difficult.
I see what is happening to our journalists. An article that Jay Rayner had written about a Jewish cookery writer that we posted on Facebook had the most terrible comments underneath it, really quickly. While we can manage that on our own site, we do not have the ability to turn off comments below our articles if we post them on Facebook. It is part of the devil’s pact: you put it on there, and comments go on it because that drives traffic and commerce for them. There are two sides to it.
It is also very difficult to get them to take it down. Even we do not really have a very easy route in. Heaven only knows what anyone else out there who is not in our privileged position does when they try to get things down. Have you ever tried to fill in one of those online forms they have to report abuse? It is not straightforward. One thing that comes out of this discussion is that, on accountability, they should at least have a route that people can take and that they put resources behind. Part of the problem is that they say, “We cannot moderate all this comment”. That is a resource issue. They are making billions and billions. There is something there that does not balance out for me.
Baroness Featherstone: But then you would have to standardise moderation, and to what standard? The Guido Fawkes site is going to be very different to the Guardian site, for example.
Gill Phillips: I entirely agree. We were having this discussion the other day. We have our community guidelines. Where do they fit into the Facebook world, for example?
Baroness Featherstone: It is about the interoperability of moderation, in some way.
Lizzie Greene: We have touched on the subjectivity of the idea of “harmful”. If we were trying to draft criminal legislation, it would need to meet the standards of the rule of law, which would mean that you need to know in advance, reasonably, what the consequences of your actions will be. You cannot reasonably predict whether what you are writing is likely to cause psychological harm to someone, somewhere, to some extent.
There needs to be some more thought about the level of harm. Can we define that in some way? And harm to whom? Is it to the person your message is directed to, to people who are likely to see it or to anyone who may possibly come across it? Is there going to be some sort of standard of reasonably tolerant, robust people or do you need to take into account the person whose personal experience or background may mean that they are particularly sensitive to the topic you are writing about? There are a lot of issues there that mean it is shifting sands. That needs to be thought through.
Q157 Baroness Grender: Thank you so much so far. It is really fascinating. Matt, you mentioned right at the beginning that the platforms, in effect, are gatekeepers for publishers. We would really like to get both your perspectives. One of you cut a deal with Facebook at the end of January, and the Guardian cut a deal with Facebook much earlier. That is all within the context of the threat, potentially, of the news media bargaining code in Australia. It would be great to hear both your perspectives on how much you think that bargaining code is playing a part in driving platforms such as Facebook towards ensuring that there is proper payment to publishers such as you.
Peter Wright: I agree with you 100%. Only three months ago, both Google and Facebook were saying that they could not pay for content at all. Google was saying that it would have to withdraw Search from Australia and Facebook was saying it would have to withdraw news from its service. As far as I am aware, Search is still functioning perfectly well, and people are being paid. Facebook tried withdrawing news, but the international outcry was so great that it had to restore it. You are right that both platforms are offering terms for payment in any jurisdiction where they think they are under threat.
Where I am less happy is that, as I speak, we have not done any deals with Google because, frankly, the money is not adequate and the terms are too restrictive. We have done a deal in the UK with Facebook, but, while the money is helpful it is not particularly significant—it certainly will not shift the market. More importantly, in both cases, they are not deals across the whole spectrum of the company’s services. They are specific deals on one specific service that has been set up in response to payment for content and can be switched on and off as the platform chooses.
My great concern is, I am afraid, that the Australian Government, at the last minute, stopped behaving in the spirit of Gallipoli and allowed a deal to be done which may mean that, in the end, neither platform is designated under the mandatory bargaining code. The compulsion to pay for content is removed and those who sign deals may find that, in three years’ time, they have nothing.
Baroness Grender: Can I check something? When the news tab was launched, Facebook claimed that it would earn publishers millions and that the deals were seven-figure sums. That is not the case.
Peter Wright: They will earn publishers millions if you aggregate all the publishers in the world.
Baroness Grender: I think they were saying it about the UK deal.
Peter Wright: I do not know what other people have got and commercial sensitivity does not allow me to tell you here what we have got, but it is not a massive amount of money. I was talking the other day to my opposite number at one of the big regional publishers. It has deals with Google and Facebook. The company has had to take them because it is so short of money, but they are certainly not going to save it from the possibility of eventual collapse. In any case, they run across only about 15% to 20% of its bigger titles. Facebook and Google are not interested in doing deals that cover weekly newspapers.
Matt Rogerson: We are coming up to 10 years since the Hargreaves review was launched in the UK, which seems like a tangent, but it is important. That was at the peak of platform power, when we were going to get rid of the UK’s copyright regime, allow fair usage and not worry about payments for the use of copyright content by online platforms. In the 10 years since, there has been a realisation that the use of high-quality content to build your products, to gain attention and to build brand equity is quite valuable. When we talk about deals, we need to reflect that this is payment for content in a way that has been done for many years, on many platforms—including Microsoft, by the way, which has made payments for many years to use news content in its services.
Then you look to Australia. The Australian code has definitely changed the dynamic and the mood around payment for content. There were agreements made in the UK for the news tab, on which the Guardian and the Daily Mail were probably negotiating at about the same time but announced slightly separately. That was definitely a step forward in the money that was on the table to license content for that news tab. Similarly in Australia, the payments now coming from Google to publishers are meaningful and significantly higher than the payments that they would otherwise make in the UK, where there is not a mandatory code in place.
The code itself has a differential impact on those two platforms. The combination of the final offer scheme, which takes the power out of the platforms’ hands and puts it into a third party’s hands to decide what is fair value, works where the platform actually values news as an integral part of its product. Google values news and gets a lot of value from news; it is an integral part of the product. To see that, you only have to look at the experiment it ran in Australia, where it stripped popular news brands out of news searches. It looked terrible, because you could not see any identifiable brands; it was flotsam and jetsam of news.
For Facebook, the position is quite different. It has partnerships teams and it talks to publishers. Ultimately, news is a pain for some of the reasons we have discussed. What is a quality publisher? What is a quality news article? How should it rank that content amid all the other content on the platform? The Australian code does not have the same effect on Facebook, because it could live quite happily without news being part of the corpus of content it has.
I know News Corp has announced a deal with Facebook this morning, and negotiations are still ongoing with other publishers. If the Australian code is not going to lead to barnstorming negotiations between Facebook and publishers, it brings you back to that question about what the role of news on Facebook is. Should it be obliged to carry news as part of its service to the public, given the fact that so much time is spent on those platforms?
The way I would describe it to colleagues is that the Australian code takes a big step forward, but, ultimately, it is a baton that will be passed to the next regulator that looks at it and then the next regulator. I think there will be a constant iteration on what the Australians have done to find a better outcome.
Baroness Grender: Do we need a code to sit there and to exist almost as a kind of stick? Robert Thomson is quoted on the deal that has been done with News Corp, announced today: “Rupert and Lachlan Murdoch led a global debate while others in our industry were silent or supine as digital dysfunctionality threatened to turn journalism into a mendicant order”. I must declare I had to look that one up. Is there merit in what he has said about that? Do you think that he could have cut the deal that he has just described without the code existing in Australia?
Peter Wright: No, and I do not agree with his description of the rest of us as supine either. We have been fighting alongside Robert Thomson. He may be in New York and not have seen it, but I can assure you that Matt and I have sat in the same room with David Dinsmore and have been fighting the same battle in Australia. Without question, without an enormous amount of pressure from publishers and some really decisive action from Australian politicians and the Australian Competition and Consumer Commission, none of this would ever have happened. We need a code here. Google and Facebook have unlimited money. Without a code, where they see a possible regulatory threat, they will throw it around. If that means that the regulatory threat goes away again, they will take the money back and use it on something else.
Matt Rogerson: I agree with much of what Peter said. It is quite dangerous to see one aspect of that code as being what will drive the sustainability of the news media. The ACCC’s work is far broader than just payment for content. It is all about resetting the relations between platforms and publishers. Copyright and licensing is one aspect of it, but there are a whole bunch of other measures that need to be put in place to enable sustainable business models. That is across conflicts in advertising and ad tech. It is about transparency of ad transactions. It is about updating privacy laws, so that platforms do not get a free pass when independent publishers have to offer consumers the ability to opt out from uses of their data. It is about tackling things such as app store fees from Apple, which takes 30% of subscriptions.
It is no one single thing; it is the accretion of actions by regulators to intervene across the different functions of the platforms that will drive a more sustainable business model and new entrants into the market, who see that there is a sustainable market here and that investment in journalism will get a return.
Baroness Grender: I am so pleased that you mentioned new entrants. I am sorry that we do not have time to get on to the smaller independents and public interest news. I wondered if Gill or Lizzie have anything to add to that.
Gill Phillips: No, this is definitely Matt’s area from my end.
Lizzie Greene: It is not my area either. It is worth looking at related issues playing out in Europe as well, following the introduction of the press publishers’ right in the copyright directive. France was the first country to implement it, and it required the publishers and platforms to negotiate. The outcome of that negotiation was that the publishers agreed, “You can have it for free”, because that is the nature of the bargaining power that the platforms have. It was only when the competition authority got involved that more meaningful deals began to be struck. It is another illustration of the need for some teeth behind any sort of bargaining code. There will be other examples, as other EU countries implement those requirements. I am sure we will be watching to see how they play out.
Q158 The Lord Bishop of Worcester: Thank you so much for your evidence so far. I want to ask you about the digital markets unit. As you will know, the Government have said that it will be set up in April, although it will have no powers until legislation is passed. Have the Government moved fast enough in setting that up? That is a yes or no question, so it will not take very long. Subsequently, I wonder if you could say a bit about what measures the unit should take to increase competition in social media, search and online advertising markets.
Peter Wright: Yes, we are pretty impressed with the effort that has gone into setting up the DMU. It will not have no powers; it will be able to use the existing powers of the CMA. We have been told by the CMA that it is intended to hit the ground running. I am looking forward to 1 April, hoping that they have a bargaining code already drawn up for us, but perhaps they will not.
As far as what it can do, it has clearly been set up early in order to start drawing up the codes recommended by the CMA in its final report on the digital advertising market study, which we pretty much endorse 100%. The Guardian may have different priorities, but we are particularly keen to see much more transparency in algorithms, warnings being given of algorithm changes, and a form of redress and remedy when those algorithm changes cause great commercial damage, which they often do. We would like to see more interoperability.
We would also like to see an end to “take it or leave it” contracts, which we all have to deal with at the moment. We would like to see an end to hidden fees. We would like to see an end to ties, where you can access one Google service only if you do it via another Google service.
Matt mentioned moves being made by Apple that would discriminate between Apple’s own apps and third-party apps. We get a lot of user traffic through our apps on the Apple App Store. It would be extremely damaging and would pretty much prevent us from selling advertising on our Apple app. This is being done in order to forestall regulation, we believe, and to increase Apple’s dominant position.
Google is doing the same thing, with a complicated set of measures called the privacy sandbox. It is called the privacy sandbox because it uses concerns about privacy but, effectively, transfers all the data associated with advertising and news websites to Google’s own control. This will starve out third parties through which we are currently able to offer advertising at better prices than we get from Google. This is going on now. There is an application to the CMA for interim measures to stop the privacy sandbox until it has been properly investigated and a proper, fair and transparent set of rules can be put in place. We would very much welcome some action on that.
Matt Rogerson: The whole CMA project, from the digital platforms report through to its advice to the Government, is really welcome. It has led the way globally, as has the ACCC, on aspects of thinking about platform regulation. The adoption of the advice pretty much in full by the Government is also really welcome.
There is not much I can add to Peter’s list of reasons why the DMU should come into force, other than to say that the Government could prioritise the publication of a White Paper to finalise the detail that is set out in the advice. That is one thing the Government could do to speed it up. They could begin the process of designating online platforms with strategic market status that will be in scope of the DMU today. They could start that process, which I understand is expected to take about a year to complete, now. The other big question is whether they should be using parliamentary time to pass the digital markets unit legislation potentially at the same time as online harms legislation.
As I tried to say in my introductory remarks, we see the online harms legislation and the DMU as the yin and yang of how we restore some competition and order back in the digital economy. Personally, my preference would be to attack that dysfunction as an economic issue, and attack the incentives that are misaligned at the moment in the digital economy, and then come to the speech aspects, which, as we have discussed, are very complicated, after we have resolved those economic aspects. Many of the negative externalities we see from some of these companies, because of that business model, would disappear if we had a better-regulated market, where people could switch to other services.
The Lord Bishop of Worcester: Thank you very much. That is very helpful.
Q159 Lord Lipsey: I wanted to ask about competition more broadly. I am all in favour of competition, particularly economic competition. If you are facing the issues that our committee is facing, between maximum freedom of expression and avoiding the abuse of freedom of expression so that it does not really exist, as with Twitter storms, I wonder if competition actually helps. When he was thrown off Twitter, President Trump was able to go to Parler. If Parler had been 10 times as big and Twitter half the size, he would have been in just the same powerful position as he was when he could twitter away like mad. How do you see competition assisting in finding this very difficult balance between free expression and preventing abuse of expression?
Peter Wright: Freedom of expression is summed up in the phrase “let a thousand flowers bloom”. You are never going to be able to give every voice an equally sized trumpet. Some people will express their view with more skill, and some will attack or give views on subjects that are of greater interest to people.
President Trump had a big voice on Twitter because he tweeted a lot and he was prepared to be outrageous—and because he was the President. The reason he went on Twitter was that he judged that he would not get a hearing in the established American media, which was hugely sceptical of him from the very beginning and very rapidly became hostile to him. In a sense, he was using Twitter to get round the dominant players in the media. That is probably not a very good answer for you, but I am not sure I have a simple one.
Matt Rogerson: Without sounding like a broken record, the business model has quite a lot to answer for in terms of the toxic political environment we find ourselves in. That is because division sells. Divisive comment, tweets or bits of video from a mainstream broadcast channel are redistributed on channels in order to get clicks. That gets advertising dollars through the door, and many times advertisers do not know they are funding that stuff. If you add more transparency, so that advertisers know what they are funding, and start to inhibit the clicks-for-outrage model, you will probably get less of the toxic stuff circulating round on these platforms.
Gill Phillips: I would add to what Matt and Lizzie have said. Over the years, it has been almost impossible for individual publishers to negotiate anything meaningful with any of the platforms because we have no clout. Competition has to play some part if that balance is to be shifted in the negotiations we have with them over putting our content on their platforms. That has to change the dynamic somehow. For years, that conversation just could not take place. It did not matter what you wanted to ask for, because you knew you were not going to get it.
There are some little signs that that is changing, from all these different perspectives. Online harms legislation is coming in, there is the digital markets unit and there is what is going on in Australia. All that feels to me, looking at it over the last 30 years, like there is a slight tipping point. I am sort of hopeful about that. It will change things, maybe only in small ways, but I hope meaningful ways.
Peter Wright: I would endorse that. For a long time, we have had a fruitful and commercially quite good contract with Snapchat, which is an independent player in social media. In the last few months, Microsoft, which is a very big company but a minority player in search, has been talking about striking payment-for-content deals with publishers. It is doing that because it wants to break into the market dominated by Google. Anything that can be done to help competition will give us the opportunity to secure better terms. The reason our terms are so awful is that there are so many of us. It is quite right that the press is pluralistic, and that should be protected at all costs. But it is very difficult when you are a plural industry trying to extract fair terms from monopolies. They are always able to say, “Either you sign here or you can go away”. And if you go away, you do not get distributed.
Matt Rogerson: This is a point I missed, but it is really important. It is important that different business models are allowed to function and grow, and that they are not subject to aggressive takeovers or attempts at destruction. Think about Facebook and Instagram. The founders of Instagram had a very different view about what their business model would be and potentially had very different views about how they would deal with moderation on their platform. Sadly, we never got to see what that looked like because the platform was bought by Facebook and is now consumed within that larger organisation. Where there are divergent businesses that are trying to do things differently, we need a framework in place to stop them being either obliterated or bought.
The Chair: Thank you very much. Sadly, we have run out of time, after a very interesting and quite wide-ranging session. Thank you to both groups for the written evidence that you have submitted and your oral evidence today. Matt and Peter are regular witnesses to our inquiries. We see their faces often. Thank you also to Lizzie Greene and Gill Phillips for your contribution today, which has been extremely helpful. Do keep in touch with us if you have any further thoughts on the road to the online harms legislation. Thank you again for your time today. With that, the meeting is now concluded.