Petitions Committee

Oral evidence: Tackling Online Abuse, HC 766

Wednesday 1 December 2021

Ordered by the House of Commons to be published on 1 December 2021.

Members present: Catherine McKinnell (Chair); Martyn Day; Nick Fletcher; Christina Rees.

Questions 131-164


I: Chris Philp MP, Minister for Tech and the Digital Economy, Department for Digital, Culture, Media and Sport, and Orla MacRae, Deputy Director for Online Harms Regulation, DCMS.

Examination of witnesses

Witnesses: Chris Philp MP and Orla MacRae.


Chair: Thank you very much for coming to talk to us today about tackling online abuse. This is our Committee’s fourth session on the issue. So far, we have heard from stakeholders, including petitioners, who have been directly affected by online abuse; organisations representing people who are disproportionately likely to be victims of this behaviour online; and experts in technological and regulatory solutions to online harms, the Law Commission and social media companies. So we are really looking forward to questioning you today, Minister, about the Government’s efforts to tackle online abuse, including through the Online Safety Bill. Before we launch into our questions, could I ask you both to introduce yourselves, please?

Chris Philp: Certainly. My name is Chris Philp. I am the Member of Parliament for Croydon South, and for the last three months I have been Minister for Technology and the Digital Economy. Before that, I was a Minister in the Home Office and the Ministry of Justice.

Orla MacRae: Good afternoon. My name is Orla MacRae. I am the deputy director for online harms regulation in the Department for Digital, Culture, Media and Sport.

Q131       Chair: Thank you. We spoke to a number of young people in schools across the country and the message was very clear: they feel that facing abuse is just a normal part of the online experience. What do you think are the most important things that the Online Safety Bill will bring about, change, or do that the online platforms are not already doing? What will the Online Safety Bill achieve in order to tackle the abuse that is on these sites?

Chris Philp: Thank you for the question and for the opportunity to appear today. It is probably worth saying a word about where we are in the process of the Online Safety Bill, and then talking about some of the provisions that may assist in dealing with some of the terrible problems you touched on in your introductory remarks. In terms of the process, we published a draft Bill earlier this year. That has been going through a process of prelegislative scrutiny, including by a Committee of the Commons and the Lords sitting together, chaired by Damian Collins, which I believe intends to report around Friday 10 December, give or take a few days perhaps. Once that report has been received, we may make changes to the draft Bill to improve it and strengthen it in some areas prior to its introduction a little later in this Session. In process terms, that is where we are.

The Government are very open to listening to various stakeholders and of course to parliamentarians as well, including this Committee. We are very open to the prospect of making changes to strengthen the Bill even further compared with where it is today, because the Prime Minister, the Secretary of State, other Ministers and I feel very strongly, as I think most parliamentarians do, that there are problems with the online world affecting children and adults and a lot more needs to be done, so we are keen to make sure that the Bill is as effective as it can be prior to its introduction and then, I am sure, very extensive debate in Parliament.

In terms of the measures as drafted, the Bill is structured around three pillars. It is probably helpful to go through each of those. The first pillar covers content that is illegal. Where content is illegal, there is a duty on in-scope social media companies to prevent that illegal content. For the priority illegal content, which will include things such as child sexual exploitation, terrorism and other things to be specified, there will be a duty not just to take it down when they spot it, but proactively to seek to identify it and prevent it. That is an area on which I am sure everyone will agree: illegal content just should not be there at all.

The second area in which there will be duties on social media firms is the area of content that is legal but harmful to children. You might have touched on that in your question. Where there is content that is likely to be accessed by children and that is harmful to children, there will also be a duty on social media platforms to prevent it, regardless of what their terms and conditions may say. That will be an absolute obligation sitting on them.

There is then a third area, which is the more challenging one in policy and principle terms, and that is content that is legal—Parliament has passed no law to say that it is illegal—but potentially harmful to adults. That might be, for example, content that is racist, but not to the point of crossing the threshold of inciting racial hatred, or that breaches the current law—so it is offensive, but not unlawful as the law stands. In such areas, there is a balance to strike: we want to respect free speech, but at the same time to ensure that hateful content is properly dealt with.

The Bill addresses that by requiring the largest companies, the so-called category 1 companies—the Facebook-sized companies, TikTok, Snapchat, Instagram and those sort of companies—first, to do a full risk assessment, so that they understand what risks are posed. Secondly, they must be really clear about what they are doing to deal with that harmful content through their terms and conditions—forcing transparency, effectively. Thirdly, they are required to stick to their terms and conditions. Often, the terms and conditions prohibit racist or misogynist content, for example, but those terms and conditions are not properly or consistently enforced, so there will be a duty to enforce their own terms and conditions.

All those obligations and duties that I have described will be policed by Ofcom, acting as regulator. If a company breaches those duties, it will be liable to a fine of up to 10% of global revenue.

I have tried to summarise briefly a big bit of legislation, but those are the key measures. It is a groundbreaking piece of legislation. I do not think that any country around the world has attempted to do this so systemically and so comprehensively. It represents an enormous step forward from where we are today, or from where any other country is today, but as I said at the beginning, if there are ways in which we can improve it, we are really open to that. I am not sure whether your Committee will write a report but, if it does, we will certainly be reading it with great interest to see if we can learn from its recommendations.

Q132       Chair: We will be producing a report at the end of this inquiry. Obviously, we are looking very much to reflect the concerns expressed by petitioners—hundreds of thousands of members of the public who have signed petitions because they want to see change on this front. There are also the young people we have spoken to as part of the inquiry.

Chris Philp: On timing, we want to introduce the Bill as quickly as we practically can after the Joint Committee reports. Obviously, there is a bit of lead time in developing policy, drafting and so on. Do you have a rough timeframe in mind? I do not want to put you under any pressure, but the sooner we get your comments, the more chance we have of incorporating them.

Q133       Chair: If we are talking about timing, we will be timing from the point at which you produce the draft legislation. There is obviously the prelegislative scrutiny you mentioned. Can you be clearer on your timing?

Chris Philp: The Bill is published already in draft form. The prelegislative scrutiny Committee, the Joint Committee chaired by Damian Collins, will report on approximately Friday next week. The next step will be for us to publish an updated Bill, which we will introduce later this Session—we can guess roughly when the Session might end. We will not publish anything else before that. As we update the Bill ready for introduction, we will look at the Joint Committee’s report, but also at everything that other parliamentarians have said, including your Committee. I have had loads of meetings, as you can imagine, with all kinds of colleagues—Margaret Hodge and many others—so we will incorporate their comments as well, as part of the process before introducing the updated Bill.

Q134       Chair: Great. We can keep in touch on the timing, but we were thinking mid-January—we were hoping that would be in time to be considered as part of your revised drafting.

Chris Philp: That will likely be before we introduce the updated Bill. I do not want to trespass on your independence, but I would say that the earlier you can publish your report, the more chance we have of incorporating its recommendations. There is quite a long process of policy development, policy instructions, legal instructions and Office of Parliamentary Counsel drafting. It is a surprisingly long process, so the sooner you can do it, the more we can incorporate your comments.

Q135       Chair: That is helpful, Minister. I know that there is a balancing act here between maximising the drafting process and getting it right and speeding up the process of getting this legislation in place, which is really what our petitioners want, as do we all.

To pick up on a couple of the points that you highlighted in your opening comments, are there any specific features of how social media or other online platforms are designed, or the functionalities that they offer to users, that you are particularly concerned may be enabling or amplifying abusive content, and that you are seeking to address as part of the legislation?

Chris Philp: A number of things concern us, which is why this legislation is so necessary. One thing is that, as I said, platforms do not consistently enforce their own terms and conditions. If, as a first step, we could get them to properly enforce their own terms and conditions, that would be a big step forward. That is definitely an area of concern.

There is a concern about the inherent nature of social media platforms because they can propagate content at a scale that other forms of communication cannot: what starts off as a single tweet, Facebook post or Instagram post can rapidly be disseminated to millions or even tens of millions of people in a very short space of time, in a way that no other media does, certainly not in so uncontrolled a way. It is an inherent feature of social media that means that these things get amplified.

However, there is a particular concern that we are very conscious of, which I think Frances Haugen—the Facebook whistleblower—referred to in her evidence to the US Senate and to parliamentary Committees here, and in the Facebook internal documents that she quite rightly leaked to The Wall Street Journal. Those documents illustrated the way in which algorithms can pick up hateful content and amplify and propagate it, not because the platforms have chosen to be deliberately hateful but because the algorithm notices that hateful content gets lots of what they call user engagement, which then drives profit. By running algorithms that are optimised for user engagement and therefore profit, the platforms end up propagating, amplifying, disseminating, upranking—whatever word you want to use—often quite unpleasant content, and that is a concern too.

Q136       Chair: I guess the question is what will the legislation do about that? You highlight that online media platforms are not actually enforcing their own terms and conditions. I was also going to ask you about the categories that you highlighted in terms of the legal but harmful content in relation to adults in particular. There are some alternative platforms—Telegram and BitChute, for example—that would fall outside of the scope of being a category 1 platform, but my understanding is that they are specifically designed to promote hateful content. They would fall within the “harmful” category, but not within the category 1 requirements. Have the Government considered including the risk rating of a platform, as well as its size, in order to categorise it in terms of the approach?

Chris Philp: There are a couple of points there. First, when it comes to category 1 companies not abiding by their own terms and conditions in relation to legal but harmful content, doing so will become a requirement now, so they cannot duck out of that.

On the algorithms letting rip and propagating material, if the content is illegal or harmful to children, that will be generally prohibited. In relation to content that is legal but harmful to adults, category 1 companies, the big companies—I will come on to the definition point in a second—have to do the risk assessment. If they are going to let their algorithms promote harmful content, which at the moment they say they don’t but in fact they do, they will have to be completely explicit about that, and it will be visible to the whole world.

On the question about the definition of category 1, the illegal stuff applies to all firms, and the stuff that is legal but harmful to children applies to any in-scope firm where there is a danger that children will see the material, so there is no category 1 or not distinction in those two areas. The category 1 distinction bites in relation to the stuff that is legal but harmful to adults, and the definition of category 1 is both high reach, meaning that it reaches lots of people, like Facebook, and high harm, or high potential to commit harm, so there is an “and” in there. One of the suggestions that has been made—I am not making any commitment, but we are thinking about it—is that you could change that “and” to an “or”, so it would be high reach or high harm, as opposed to high reach and high harm.

Q137       Chair: That is helpful, thank you. One final question from me, and then I will hand over to colleagues. Our inquiry has also heard about specific frustrations that some users have in engaging with platforms. You might report harm, but it does not get dealt with, or you are not fed back how it has been dealt with if it has. Presumably it is a priority to address that issue of enforcing the terms and conditions. How do you believe the regulations will address it?

Chris Philp: The intention is that the regulator, Ofcom, will make sure that platforms have proper mechanisms for user redress: taking complaints, and then acting upon them if they are meritorious. Ofcom will not adjudicate individual complaints; if person X has a complaint, Ofcom will not rule on that individual case. However, if it turns out that a particular platform does not have a mechanism to deal with complaints, or that legitimate complaints get ignored on a systemic and routine basis, Ofcom will take regulatory action, followed by enforcement action, in relation to that systemic failure.

Q138       Chair: I guess one of the challenges is the transparency around that, because young people have reported to us that they do not actually know how to report abuse. It is not made that obvious on some platforms. Others are doing a better job of actively encouraging that engagement so that they can address these issues. Are you confident that the legislation is drafted in a way that will encourage users to report abuse, which will become a community enforcement of terms and conditions?

Orla MacRae: Yes, I think we are. I think two things are relevant there. One is the provisions in the Bill about user redress—specifically that the mechanisms have to be accessible to users, including children but also parents and carers of children. The second thing that might be worth highlighting is our broader work on media literacy. That is helping to ensure that everyone is aware of how to report issues and to use the tools available to them, which does not necessarily need to wait for the Bill. A great deal of work is under way already on that.

Q139       Christina Rees: What is the latest data that the Government have on the scale and nature of online abuse, and how has that affected the approach taken in the Bill to tackle this issue?

Orla MacRae: We have been working on this in Government for several years now. Obviously, a lot of the technologies involved are quite novel, and how users interact with them is also changing all the time. I think it is fair to say that the evidence base on things like online abuse is still emerging. We do quite a lot of work in the Department to improve that evidence base, and Ofcom already carries out a programme of work every year to look at what is going on online. Some things are relatively clear at this point. The scale of online abuse is huge, and it particularly affects certain groups of people. We know, for example, that women are more likely to suffer online abuse than men, and there have been reports by the previous Committee about the experience of disabled users online. So we have quite a lot of evidence about online abuse. I do not think that it is comprehensive yet, and part of that will rely on the provisions in the Bill on transparency and information gathering. It is really important that there is an independent regulator with the power to access the information needed to really understand what is going on.

Chris Philp: I endorse those comments. It is clearly a widespread problem. These platforms are ubiquitous. Huge numbers of people access them, including children, some of whom are under 13, which is typically the age at which social media platforms say that it is safe for children to use them. We are under no illusions about the scale of this problem. It is huge.

Q140       Christina Rees: How will the impact of the new regulations on the amount of abusive content online and the harm that it causes to users be measured?

Chris Philp: That is a good question. Ofcom will obviously be producing regular reports on its activity to assess the impact its work is having and trends in the industry. Under the Bill, Ofcom will also have very extensive information-gathering powers. It will be able to ask companies to reveal their internal data and documents. Through that process, I imagine that Ofcom will be able to access quite a lot of interesting statistics, data and information that is currently hidden, or that comes out only if there is a whistleblower, as in the recent Frances Haugen case.

There are also powers in the Bill that allow Ofcom to not just compel that disclosure, but severely punish any company that does not comply with disclosure requests, including through fines of up to 10% of global revenue. In addition to that, on this question about disclosure, there is also personal liability for named executives; if a company didn’t produce the information, the relevant named executive would be criminally liable for that failure to disclose, as in financial services regulation.

Q141       Christina Rees: As far as named executives are concerned, where we are putting the blame on a person, do you think it is possible for that to be avoided in any way?

Chris Philp: The clauses are drafted in a way that is designed to be precise—it has to be a named person or a relevant senior director. Obviously, that draws on precedent in financial services regulation. The concept of personal liability in financial services has existed since the credit crunch just over 10 years ago. Interestingly, in the financial services landscape, it seems to have had a very good, persuasive effect on senior executives, whose behaviour seems to have improved quite a bit compared with 15 years ago. We hope it is one of those powers that never has to be used, because its very presence will compel good behaviour, but it has been drafted carefully to make it as watertight as any piece of legislation can be.

Q142       Chair: On the financial services breach, the maximum penalty is seven years in prison. As you say, evidence shows that that is having the desired preventive effect. My understanding is that the proposal in the Online Safety Bill is two years, and the criminal sanctions very much tie to the transparency duty, rather than the wider duty of preventing harm. Have you given any consideration to whether that needs to be broadened in scope, and whether the potential penalty needs to be increased?

Chris Philp: Do you want to talk about how the drafting handles scope? Then I will talk about the other scope questions, and the length of sentence.

Orla MacRae: Yes. The current scope is focused on information requests from Ofcom, which is slightly distinct from the transparency powers. The offence kicks in when there is a failure to provide information. It is focused on that because that is a really critical part of the regime. A critical part of the problem is that there just isn’t the information that people can have confidence in. That is fundamental to the regime, and that is why the offences are focused there.

One point on the sentences: the proposals are based on Ofcom’s powers in relation to telecoms, where there is the same level of sentencing. That was our benchmark.

Chris Philp: On scope, you might ask, “Should the criminal liability extend to other duties under the Bill?” As always, there is a balance to strike. We do not want to deter companies from operating in the UK or have an unduly chilling effect on the market. I think policy makers felt, in developing these proposals, that it was right to focus on information disclosure, because that is completely within the companies’ control. If Ofcom asks for information and it is not provided, there is no excuse for that at all—it is open and shut. That clarity, and a desire not to have a chilling effect on the UK as a place in which to do business, was why the balance was struck in that way.

Q143       Martyn Day: We have heard some quite powerful testimony from witnesses on the impact of online abuse, and the effect on those who receive it. What provisions are there in the regulatory framework that the Bill creates to ensure victims of online abuse are consulted and represented when tackling the issue?

Chris Philp: In framing the legislation, we have engaged, and we continue to engage, very extensively with groups who represent victims or people who have suffered abuse. I think the Secretary of State is meeting the Children’s Commissioner today—possibly as we speak. We have had a huge engagement exercise. The legislation sets out a framework, but the implementation details—precisely how these duties bite on social media firms—will be set out by Ofcom in a series of codes of practice that will be developed and published after the passage of the Bill. The intention is that Ofcom will go through a very extensive consultation process in drawing up those codes of practice; it might take six or even 12 months. Those consultation processes will provide a ready opportunity for people who are affected by this to have their voice heard and their views clearly taken on board. This is more a matter for Ofcom, but I think that they will be in listening mode on an ongoing basis, particularly as this is a very new area of regulation.

I am sure that Ofcom will want to make a number of changes as time goes by, as they learn how this works in practice, and as new harms emerge that we have not thought of. It is likely that in three years’ time, there will be problems arising that we have not thought of today, and Ofcom will have to respond to those. Listening to victims, and to people who are affected, will be a critical part of that.

Orla MacRae: We announced when we published the draft Bill that we intended to include specific duties on Ofcom to establish user advocacy mechanisms. That is one of the things that is not included in the Bill as published, but we have committed to including it.

Q144       Martyn Day: That is very helpful, and you have answered what I was going to ask you next. I have one final point that ties in with that. How would you monitor the impact on children and young people specifically?

Orla MacRae: Obviously, Ofcom will have a role in evaluating the impact of its interventions. The transparency reports are a really important part of this; many companies already publish transparency reports, but there is no independent judgment as to whether they are an accurate representation. It is difficult for users to compare across the board. Once the regime is up and running, and those transparency reports are overseen by Ofcom, there will be a much better understanding of the impact on children and how the regime is delivering for them.

Q145       Nick Fletcher: I want to go over a few of the answers that you have given. The draft Bill is going through all this scrutiny—is it becoming too big? It sounds to me as though if it becomes too large, it might miss the point—and I am concerned.

Chris Philp: You are raising a reasonable question, Nick. However, we have one shot at getting this Bill right, and it has gone through very extensive prelegislative scrutiny. Any changes that get made will have been very carefully considered over a period of almost a year. In fact, the Bill was two or three years in gestation before publication. An unusually large amount of thought has gone into this Bill compared with others that we may encounter in Parliament.

This Committee has heard from victims and people affected, so I think it is important that we do not leave loopholes in here, because this topic is so important. Some of these social media firms are very well resourced financially—they have lots of money—and can mount quite challenging legal action if the legislative basis for what we are trying to do is not sound. For those three reasons, it is important that we ensure that the Bill does everything we need it to.

Obviously, there will be things that are not addressed through this Bill, but that will be addressed through other Bills and other programmes. However, we want to take this opportunity to ensure that the Bill closes loopholes and does everything it needs to in order to protect our fellow citizens, particularly children.

Q146       Nick Fletcher: You talk about the terms and conditions; those are the terms and conditions for the platforms, which I as a user would have to abide by. I don’t know about you, Minister, but often on these websites you can just tick the box without reading all those terms and conditions; a lot of them tend to be pages and pages. Will there be anything in the Bill to ensure that the end user can understand the terms and conditions, and that they are put as succinctly as possible?

Chris Philp: I will turn to that in a second. I talked about enforcing terms and conditions; obviously, even if some users do not understand or have not read the terms and conditions, the platforms certainly do. One problem we have is that if somebody puts a piece of racist content on Twitter, Facebook or whatever that violates their terms and conditions—as racist content would, for almost every platform—the social media platform does not always take it down. They ignore their own terms and conditions. That is what we are talking about policing. Even if the user, through ignorance or indifference, does not know or care that the racist content violates the terms and conditions, the platform should. We will be saying that the category 1 platforms should take action on the “legal but harmful” categories; at the moment, they do not always do so.

Orla MacRae: There are provisions in the Bill stating that the terms and conditions will have to be very clear and accessible to users.

Q147       Nick Fletcher: So it is actually in there? What people think is abuse can vary, so there need to be lines there that people can clearly understand, and the legislation must be fit for purpose. We do not want any blurred lines. I think there can be some grey areas between “illegal” and “harmful”. Who sets that? Do you understand where I am coming from? It is difficult for Ministers and civil servants to write this legislation, so this will be extremely difficult for your average user, unless you set it out clearly.

Chris Philp: The definition of what is illegal has a very hard edge: it is contrary to statute, and there is a load of case law that essentially gives the court’s interpretation of statute as applied to particular cases. That is pretty clear. In relation to the “legal but harmful to adults” category, there are a lot of shades of grey, as you say.

It will be up to Ofcom to ensure that the terms and conditions are clear—clearer than they are now—so that they are as obvious as possible, but in terms of applying those terms and conditions to real-world cases, in the first instance it will be up to the social media company to interpret their own terms and conditions and apply them. However, if they are doing that in a way that is systematically or systemically flawed, Ofcom can pull them up on that, and there will be an evolving body of regulatory case law, for want of a better term, to provide clarity.

There are certainly areas of interpretation, but the alternative is to do nothing, and taking this action is better than doing nothing, even if there may be some areas of interpretation that need to be clarified over time, either by the social media companies or, if they do not do it properly, by Ofcom.

Q148       Chair: I want to follow up on the point about terms and conditions. We made a report on the impact on disabled people of online harms, and we have a lot of petitioners who are very focused on that issue. Just to be clear, will the Government ensure that the legislation requires easy-read versions of terms and conditions, accessible to everybody, including disabled people?

Orla MacRae: The legislation works by setting a high-level requirement that the terms and conditions be clear and accessible, and it includes things such as the user redress provisions. How the framework functions is that there is a duty on Ofcom to produce codes of practice that set out in more detail the exact standards that we expect from platforms. Things like easy-read versions and making things accessible to different groups of users would be a matter for Ofcom’s codes.

Q149       Nick Fletcher: On the categories, when do you become a category 1 platform, and who will be watching platforms come up through the ranks, as it were? As you said, in three years, things can completely change, and TikTok might be No. 1.

Chris Philp: Determination of who qualifies as a category 1 company will be done by Ofcom on an ongoing basis. It is not that it will be done once at the beginning, and will be like that forever; it is an ongoing risk assessment undertaken by Ofcom, which will review that periodically to ensure that it is up to date. You mentioned TikTok, which is a good example. When preparation work started on the Bill, TikTok barely existed, but now there is little doubt that it would qualify as a category 1 company. That needs to be agile and kept updated.

Q150       Nick Fletcher: When abuse crosses the criminal threshold, how will you hold these companies to account? You will obviously want them to remove the content immediately. Is there a timespan on that? Will you set standards or key performance indicators for companies to comply with?

Chris Philp: That is the sort of thing that Ofcom will set out in the codes of practice, but for illegal content and for content that is harmful to children, we would expect it to be extremely quick—no messing around and no delaying. Moreover, if law enforcement or the police want to investigate a criminal offence, we would expect the platforms to co-operate fully with that investigation.

Q151       Nick Fletcher: There has been concern that the line on ensuring that platforms make sure that their terms of service “deal with” legal but harmful content is too weak. Will Ofcom have the power to intervene if a platform chooses to set terms of service that “deal with” legal abuse in a way that Ofcom feels leaves users at a high risk of harm from this content?

Chris Philp: The legislation as drafted does not give a power to Ofcom to essentially rewrite the terms and conditions—to say, “We think you’re being too permissive. Be tougher. Ban more content.” However, if the terms and conditions are unclear, Ofcom can require clarity, and if the risk assessment is inadequate and does not properly assess the risks, it can make a platform do it again, or even take enforcement action. But Ofcom cannot set new standards. It cannot come crashing in and say, “We think x, y and z content should not be allowed” if it is legal but harmful. That is essentially because it is open to Parliament to make things illegal, and if Parliament has not made something illegal, it would seem excessive if we gave Ofcom the power magically to make something illegal online. You are describing what is right on the edge of where we have drawn the line.

Q152       Nick Fletcher: That is where the issues will be—on the edge of whether it is legal or illegal.

Chris Philp: If it tips into illegal, it is very clear, but even though Ofcom does not have the power to impose new terms and conditions, it does have the power to ensure that those terms and conditions are clear, that there is a proper risk assessment, and that the companies police them properly. Actually, at the moment, most major companies’ terms and conditions—in fact, probably all of them—prohibit the things that, while legal, you, I and probably everyone in Parliament would find offensive and inappropriate. Ofcom will have the power to make them enforce those existing terms and conditions, and that is a huge step forward compared to where we are today.

Q153       Chair: You said at the beginning that you were looking at categorising according to reach “and/or” risk, but that you are looking to make that into “or”.

Chris Philp: We are considering it. I do not want to put it any more strongly than that.

Q154       Chair: Presumably Ofcom would be taking that assessment of risk level. Would the Government prescribe how that risk should be assessed? How would that work in practice?

Orla MacRae: There is a process set out in the Bill where, first, there will be the criteria for what constitutes high risk or high reach, and then the setting of thresholds, which will be done through secondary legislation with advice taken from Ofcom. Once that process has gone through, and there are those criteria and thresholds, it will be for Ofcom to decide which platforms meet those thresholds, using things such as its information-gathering powers to make sure it understands. As the Minister says, it will have that risk-assessment and horizon-scanning function to make sure that it is looking at platforms that might be coming up the track and might meet those thresholds in the future.

Q155       Chair: To clarify, I understand that at present search is exempt from category 1 designation. However, we have seen examples of harmful search. For example, Amazon Alexa was giving people Islamophobic search returns, and I have seen Google providing antisemitic suggestions to people. While search returns might not necessarily be provided by search platforms themselves, there do appear to be algorithms within them that can be harmful. I am just wondering why Google, Bing and others are not being included in the assessment of risk and also reach, and being included in category 1 assessment.

Orla MacRae: Search engines do have their own set of duties in the Bill which recognise that they are fundamentally different to user-to-user services—as you say, they do not host content directly, but index it and make it available to users. They will have duties to tackle illegal content and to protect children, so they will need to make a judgment about whether their auto-complete on things like antisemitic content might lead users towards illegal content and, if so, they need to take steps to mitigate and manage that risk. It is correct that they cannot fall into category 1 because their role is to provide access to information. If you put duties on search companies in relation to legal content, the tools they have available to them are quite blunt, so that could lead to quite a significant restriction on people’s freedom to access information online that is legal to access. It is the balance that the Minister described earlier.

Q156       Chair: So they are not going to have a duty, really, not to direct people specifically to harmful content, as I understand it.

Orla MacRae: Well, they will in relation to illegal content and content that is harmful to children. They will not have a duty to stop adults accessing legal content online.

Q157       Chair: I guess I am more wondering whether there will be a way within the legislation to actually provide some kind of penalty for directing people to harmful content. It is this algorithm issue that we have seen with Facebook and other platforms. As you set out, Minister, they do not necessarily deliberately want to promote harm, but the harm is what generates income for them, and I think search engines have fallen foul of that too. So I just wonder whether the legislation will address that.

Orla MacRae: They will have to do the risk assessment of their functionality. They do obviously use algorithms to rank their content. It is quite different from the user-to-user algorithms because they do not have the same incentives to drive engagement on particular posts. They are obviously looking to provide the most relevant posts that users are looking for. There is a different context there in terms of the use of algorithms, but they will have to do the risk assessment as part of their duties on illegal content and content harmful to children.

Q158       Chair: What will be the penalty for not mitigating against those risks?

Orla MacRae: They are subject to the same enforcement sanctions in the Bill—the fines of up to 10% of global turnover for information offences and, in the most extreme examples, there are business disruption measures as well.

Q159       Christina Rees: The Bill allows you to set out priority categories of harmful content in secondary legislation. The Government have stated that those will likely include forms of online abuse, such as racist abuse. If you are willing to commit to that now, why not include these forms of abuse as priorities in the Bill itself?

              Chris Philp: That is a very good question. One of the changes being considered—nothing is decided—is to include on the face of the Bill a more comprehensive list of priority offences. We have already said publicly that child sexual exploitation and terrorism are examples of such priority offences, but, for the sake of early clarity, there may be a case for including more than just those on the face of the Bill. That is under very active consideration.

Q160       Christina Rees: Are you satisfied that the Bill does enough more generally to reflect the disproportionate online abuse faced by people from specific communities or with certain characteristics?

              Chris Philp: Thank you for that question. I draw the Committee’s attention to clause 46(3) and (4) of the draft Bill, which defines the concept of content that is harmful to adults. Subsection (3) says it is when there are “reasonable grounds to believe that the nature of the content is such that there is a material risk of the content having, or indirectly having, a significant adverse physical or psychological impact on an adult of ordinary sensibilities.” However, subsection (4) goes on to say that if content might have a particularly bad impact on people with certain characteristics—for example, race, gender, ethnicity, disability, et cetera—that has to be taken into account in the definition of “ordinary sensibilities”. That is a really good point, which I think is encapsulated in clause 46(3) and (4)—[Interruption.]

Chair: Order. We are required to vote, so I will suspend the Committee for 10 minutes.

              Chris Philp: It is Committee stage of the Finance Bill, so it might be a group of Divisions—three or four. We have several minutes. Do you want to try to conclude, or would you rather suspend and come back?

Chair: Let’s finish this question, then go and vote.

Q161       Christina Rees: I was just going to ask for a definition of “particularly bad”, in clause 46(4).

              Chris Philp: It was “a material risk of the content having, or indirectly having, a significant adverse physical or psychological impact on an adult of ordinary sensibilities” and in assessing the question of “ordinary sensibilities” certain characteristics people have must be taken into account. That is the precise wording.

Q162       Chair: I have some more questions for you, Minister, but if we have to go in a few minutes and you think it might be difficult to reconvene after the votes, I will put them to you very quickly. Many of the petitioners asked about anonymity online, and we wanted to get your view. To what extent do the Government consider anonymity to be an enabler of online abuse and a factor to tackle as part of this legislation?

              Chris Philp: It certainly is an enabler of abuse, although it also provides protection for some people—for example, whistleblowers, people suffering from domestic abuse, or people who are worried about what their employer might think if they say a particular thing. Anonymity is both a threat and a protection. We have heard various proposals from stakeholders, one of which is to give users choice in whether they see content posted by anonymous unverified accounts. You could give people the choice of whether to be identified or anonymous, and separately give people the choice about seeing content posted by anonymous accounts. We also heard evidence and received submissions about the concept of traceability, so that even if someone’s profile or their public face on Facebook, for example, is anonymous—“Person X”—so you don’t know who they are, the platform should know who they are, so that if the police come with a warrant saying, “We want to find out who is behind that account,” the platform should have the information available. Those are very powerful submissions that we took and we are thinking about them very carefully.

Q163       Chair: Broadly speaking, what the petitioners—particularly Bobby Norris, who started the petition relating to homophobia online—would like to know is that there will be accountability for abuse that is perpetrated online. You said that where it is illegal, the police are able to trace that person; but where it is anonymous, many of our petitioners find it difficult to be confident that it can be addressed. Are the Government proposing to be able to challenge that—to do what the petitioners are looking for?

              Chris Philp: Yes. He raises a really good point. I cannot make a firm commitment, but we are considering the concept of traceability. If something is illegal—if it crosses the criminal threshold—there is a case for saying that the police should be able to find the person behind the anonymous account.

Q164       Chair: It is where the abuse is harmful but not illegal, and is perpetrated anonymously, that many of the petitioners are particularly concerned about, and they want to know how that is going to be addressed.

              Chris Philp: If it is legal, it is hard to see what basis the police would have to identify the person behind it.

Orla MacRae: It might be worth going back to the buckets in the Bill. The big platforms will have to tackle abuse that is legal but harmful to adults, whether or not it is anonymous, so I think it will have a big impact on that.

Chair: Thank you. My understanding is that we have two votes, so I think we need to call an end to this session. Thank you very much for giving us evidence today.