Select Committee on Communications and Digital
Corrected oral evidence: Freedom of expression online
Tuesday 23 March 2021
Members present: Lord Gilbert of Panteg (The Chair); Baroness Bull; Baroness Buscombe; Viscount Colville of Culross; Baroness Featherstone; Baroness Grender; Lord Griffiths of Burry Port; Lord Lipsey; Baroness Rebuck; Lord Stevenson of Balmacara; Lord Vaizey of Didcot; The Lord Bishop of Worcester.
Evidence Session No. 19 Virtual Proceeding Questions 160 - 165
I: Professor Penney Lewis, Commissioner for Criminal Law, Law Commission; Dr Nicholas Hoggard, Lead Lawyer (Protection of Official Data and Online Communications), Law Commission.
USE OF THE TRANSCRIPT
This is a corrected transcript of evidence taken in public and webcast on www.parliamentlive.tv.
Professor Penney Lewis and Dr Nicholas Hoggard.
Q160 The Chair: Welcome to our first panel—Professor Penney Lewis and Dr Nicholas Hoggard from the Law Commission. Professor Lewis was appointed as a law commissioner for criminal law on 1 January 2020, and Dr Hoggard is lead lawyer for the protection of official data project and an expert on a number of the issues that we are discussing in our inquiry into freedom of expression online. Professor Lewis and Dr Hoggard, thank you very much indeed for your time and for joining us today.
Today’s session will be broadcast online and a transcript will be produced. Members of the committee have a number of quite dry and technical questions on your work as it relates to our inquiry into freedom of expression online.
Before we get to those questions, may I ask you briefly to add any further words of introduction and give us a very brief overview of your work in this area? Once you have done that, we will move on and take those questions.
Professor Penney Lewis: The Law Commission is currently working on two projects that have significant freedom of expression implications. One is a project on hate crime and the other is on harmful online communications. The hate crime project is sponsored jointly by the Home Office and the Ministry of Justice; the harmful online communications project is part of DCMS’s online harms work and is sponsored by DCMS.
Both projects were in consultation in the autumn of last year and both are now at the stage of analysing consultation responses. The harmful online communications project is hoping to report in the summer and lay recommendations before Parliament. The hate crime project had an enormous consultation response and we think that we will report in the autumn.
The hate crime project is not concerned primarily with speech. A lot of hate crime involves violence. There are a few very rarely prosecuted but quite serious offences of stirring up hatred, which I understand we will come to in due course, but most communications that are criminally prosecuted are not prosecuted using those offences. They are prosecuted using either public order offences or the subject matter of the second project that I mentioned, the harmful online communications project. They are prosecuted as communications offences.
DCMS has asked us to look at the existing communications offences and reform them. We have done so from a position of trying to ensure better protection for freedom of expression, while ensuring that individuals are adequately protected by the criminal law from harmful behaviour online.
We have tried to look at the existing offences and identify where they may not comply with the protection of freedom of expression under Article 10. For example, they contain very vague terms such as “grossly offensive”, “indecent” and “obscene”, and those types of communications are not necessarily harmful. A conversation between two consenting adults that may be indecent, or could be described as such, should not really be within the criminal law.
Similarly, being offended is not enough to engage the criminal law. In our view, even “grossly offensive” is not the right threshold. Behaviour that meets that threshold is not sufficiently wrongful or harmful to warrant the intervention of the criminal law.
We have tried to design a new communications offence that not only deals with these vagueness and over-breadth problems but responds to harms that are not currently dealt with by these communications offences—for example, cyberflashing, the encouragement of self-harm and the sending of flashing images to people with photo-sensitive epilepsy, all of which cause tremendous harm but are not currently dealt with coherently by existing criminal offences. We think that by focusing on the potential for harm we can ensure that we have only a justifiable and proportionate interference with Article 10 and better protection of freedom of expression.
Our provisionally proposed offence requires the prosecution to prove that the defendant sends or posts a communication—that could be a letter or an electronic communication—that is likely to cause harm to a likely audience. The type of harm that we are interested in is emotional or psychological harm, and the threshold of harm is that it must amount at least to serious emotional distress. The likely audience is someone who is likely to see, hear or otherwise encounter the communication.
As for a fault or mental element, the defendant must either intend to cause harm or be aware of the risk of causing harm, and the prosecution must prove that the defendant did not have a reasonable excuse for the communication. This is designed to ensure that freedom of expression is adequately and properly protected. In assessing whether the defendant had a reasonable excuse, the court would have to have regard to whether the communication was, or was meant to be, a contribution to a matter of public interest.
Briefly, that is the provisionally proposed harm-based offence. As I said earlier, we consulted on that offence and we are currently looking at the consultation responses so that we can refine our proposal into recommendations.
The Chair: Thank you for that very useful introduction. I think that we have lots of questions arising from that. Dr Hoggard, welcome.
Dr Nicholas Hoggard: Thank you very much. Penney has said everything that needs to be said as an introductory matter on the project itself. I should say for the record that I am also the lawyer working on this project as well as having been a lawyer for the protection of official data work that we did on official secrets and espionage. I have nothing to add at this stage.
Q161 The Lord Bishop of Worcester: Thank you both very much for being with us today and for all the very good work that you have done on this project. Penney, thank you for outlining so clearly what is proposed.
May I press you on a couple of things that have been raised, as you will be well aware, during the consultation? The first is the question of how it is possible to determine the likely audience, particularly of a social media post. If I send a private message to someone, I know exactly, I hope, who the recipient will be, but given the way Twitter followers share posts I have no idea who the likely audience is. As you will know, some questions have been asked around that. I would be grateful if you said more about how it might be possible to determine the likely audience of a social media post not directed at an individual. Penney, as you introduced this, perhaps Nicholas wants to start.
Dr Nicholas Hoggard: I am happy to do so. I am sure that Penney will want to add something once I have finished. The first thing to say about it is that it is a factual question. How you determine the likely audience will depend in large part on exactly how the post appears online, so we would necessarily be looking at the number of followers somebody had and where the post appeared. Was it, for example, a post on a very prominent article or a very well-followed tweet?
There will be a series of factual questions. This is not new to the law. It is important to stress that we are not introducing any new consideration that the law has not had to grapple with before. For example, we already do this in respect of advertising standards and broadcasting. “Likely audience” appears in consumer standards; it appears in CPS hate crime guidance. We also see the language in the Obscene Publications Act, so this is not anything new to the law. I am sure Penney will have something to add to that.
Professor Penney Lewis: This is a question of fact that will be decided case by case. At this point it is worth stepping back and recognising that the existing offences do not have such a requirement; in other words, the offence is complete no matter who sees it. It could be that one person sees it or that 1 million people see it, but there is nothing constraining the assessment to those who were likely to encounter it. We think that the defendant should be able to tell at the point of sending whether they have committed a criminal offence or are about to do so. Therefore, they should be required to have in their consideration only those who are likely to see, hear or otherwise encounter the post.
This requirement is designed not to open up the scope of the offence but to narrow it in comparison with the existing offences. It is about constraining the assessment of harm to those who are likely to come across the communication. In that way, the defendant is not criminalised simply because their communication was seen by people they could not have foreseen would see it. Let us say one has a couple of hundred Twitter followers who are basically one’s friends and family, but something happens to a tweet and it goes viral and is seen by all sorts of people the defendant could not possibly have contemplated. We do not think the defendant should be criminally responsible for the likely harm to those people who were not likely to come across the communication, so it is a drawing back of liability rather than an expansion of it.
The Lord Bishop of Worcester: That is reassuring, because one never knows who will share posts put online, as you intimated.
May I take you to the proposal to adopt the “emotional or psychological harm” standard? As you know, it has been recommended—for example, by English PEN—that it would be better to go for a recognised medical condition standard. Why do you go for the former?
Professor Penney Lewis: The category of harm with which we are concerned is emotional or psychological harm. In the paper, we make an argument that we think that is the common denominator. We know that there are many other kinds of harm and we detail those in the paper, but we think that emotional or psychological harm is the common denominator. It is not the threshold. That is just the kind of harm we are looking at. The threshold will be serious emotional distress. To put it more simply, the criminal law currently uses “serious distress”; in other words, the harm has to be as serious as “serious distress” before the offence is triggered, so to speak.
A recognised medical condition is a standard that comes from diminished responsibility. It applies to defendants who have an abnormality of mind so severe that Parliament has decided that a deliberate act of killing by that defendant should be treated as manslaughter rather than murder. That is a very high bar. We are talking about a much less serious set of criminal offences.
We do not think that “recognised medical condition” is the right standard, in part because it is defendant-focused rather than victim-focused, but also because it would allow a considerable amount of harm without ever reaching that threshold. While we are extremely concerned to ensure that these offences do not interfere with freedom of expression any more than is necessary in a democratic society, we think that protecting individuals from harm is an objective that the criminal law is and should be concerned with. We think that a recognised medical condition will simply put the bar too high in order to do that.
“Serious distress” is used in the controlling or coercive behaviour offences under the Serious Crime Act 2015. It is also part of the more serious stalking offence under the Protection from Harassment Act 1997. It is a recognised threshold for perhaps less serious criminal offences than the distinction between murder and manslaughter.
I should add that “distress” on its own, without any qualifying adjective, is the threshold for many of the Public Order Act offences that criminalise in-person behaviour, so we are proposing a higher threshold for online behaviour than for, for example, someone who is yelling at you in the street. In those circumstances, “likely to cause distress”—I do mean “likely to cause distress”—would be sufficient for a relatively minor criminal offence, but it is still a criminal offence. Therefore, we are proposing a higher standard than that, but we do not think that it should be as high as “recognised medical condition”.
The Lord Bishop of Worcester: You have chosen not to use the “reasonable person” test in that.
Professor Penney Lewis: Yes. I mentioned in the introduction that we are trying to move away from very vague terms—“grossly offensive”, “indecent” and “obscene”—in part because we think that they criminalise the wrong kinds of speech and do not necessarily criminalise speech that is harmful, and that offensive speech should be protected, but also because they are so vague as not to allow a defendant to know whether they are committing a criminal offence.
We think that the “reasonable person” approach will have the same effect; in other words, we would be getting rid of one kind of vagueness and replacing it with another. One person might find something grossly offensive; another person might not. One person might think that something is funny; another might not. It is difficult to know which of those people are “reasonable”.
The other concern that we have about a “reasonable person” approach is that it risks both overcriminalising and undercriminalising. To take the first one, if I send a communication to someone who shares my dark sense of humour, why should I be found guilty of a criminal offence just because a reasonable person would have been harmed by that, or likely to be harmed by that communication? I sent it to someone who I was confident would not be harmed by the communication. I do not think that “reasonable person” should be used to criminalise that behaviour.
The contrast is where a “reasonable person” approach would undercriminalise. Let us say that I know that the recipient of my communication is particularly vulnerable in a certain way—perhaps because I have had a previous relationship with them or they are a friend—and I choose to capitalise on that vulnerability by sending them a communication that I am aware is likely to cause them harm. I might even intend to cause them harm, but a reasonable person receiving that communication would not be likely to be harmed. Why should I be able to escape criminal liability in those circumstances simply by saying, “She was harmed, but a reasonable person would not have been”?
We think that the “reasonable person” approach is the wrong one because it is not focused enough on the inquiry about what the defendant’s conduct was likely to do in the circumstances in which the defendant was.
The Lord Bishop of Worcester: That is helpful, but I am still puzzled about why the “reasonable person” test is thought to introduce vagueness, whereas “serious distress” is not. Who quantifies what serious distress is, and how, so that it is not vague? Nicholas, you might like to answer that.
Dr Nicholas Hoggard: I can do little other than develop a couple of the areas that Penney has already highlighted. As she correctly pointed out, “serious distress” is used in the law already—for example, in stalking and under the Serious Crime Act concerning controlling and coercive behaviour. The wording used is “substantial adverse effect on usual day-to-day activities”. A body of case law already exists and is still developing, in which we can point to specific types of harm and results of action that will help courts to work out what is likely to meet that serious distress threshold.
The other point to make is that we have not defined what we mean by “serious distress”. We talked about serious emotional distress. Penney rightly pointed out that “serious distress” might be clearer, and currently we are considering that point. We are not yet convinced that the word “emotional” adds anything. None the less, the existing law may well help to define that threshold.
The other point that we are considering is whether it will be helpful to include within the statute a list of indicative factors to assist the court to determine what we mean by serious distress.
To develop one point Penney made about the “reasonable person” standard, it is important to stress that “reasonable” is not an assessment of the majority opinion. It is a more abstract notion than that. The simple fact that the majority of people might be offended by something, whereas a minority might not, is not that revealing about whether something is none the less reasonable. The majority of people may well be unreasonable.
More than that, the fact that a small group of people might find something to be inoffensive and the majority of people might find it to be offensive does not really tell us anything about which reaction is more reasonable. We can think of extreme examples where, in a grotesque or egregious situation, somebody might find it funny. We can say that reaction was unreasonable, but there is a huge amount of grey where people will have totally legitimate but different reactions to something: some people are caused offence such that they are harmed and some are not.
That type of analysis is not amenable to the “reasonable” standard, where we say it was reasonable that you were offended or it was reasonable that you were not. Therefore, reasonableness there does not really make sense; it is the wrong question.
That is to say nothing of the points that Penney made, which are quite important. When you start to import these objective notions of reasonableness you end up very quickly back at the situation in which we already find ourselves, which is that you have to find an objective notion of what makes something wrongful or harmful. We end up saying, “This was offensive. Therefore, it must be harmful”. That is a situation in which we find ourselves now and we do not think it is satisfactory, because those objective standards that attempt to find a kind of majoritarian view do not necessarily track harm. I do not think that there is any avenue for pursuing reasonableness without undoing the context specificity that makes this a justifiable interference in Article 10.
The Lord Bishop of Worcester: Thank you both very much for your answers, which—how can I put it?—were more than reasonable. I leave my colleagues to follow them up.
Q162 Baroness Buscombe: This is quite complex, which in itself is quite concerning in terms of the criminal law. I am worried about subjectivity. Dr Hoggard, you referenced the similarity with, for example, advertising standards, but that is self-regulation, which is an entirely different bar. One cannot necessarily equate that with the criminal law. Do you feel comfortable that the term “likely” rather than “actual” harm will not encourage vexatious complaints?
Professor Penney Lewis: The first thing to say is that requiring actual harm for a relatively low-level criminal offence is not that common. If you think about offences against the person, for assault and battery there is no need for “actual harm”. Then there are more serious offences that have harm elements: assault occasioning actual bodily harm, causing grievous bodily harm, et cetera.
I suppose that the other theoretical problem with actual harm is that it means the defendant does not know whether he is committing an offence at the point at which he sends the communication, because whether he is committing an offence will depend on whether someone is actually harmed. We think that with these offences it is important that the defendant is able to foresee whether he is committing a criminal offence at that point.
Baroness Buscombe: I am sorry to interrupt. Do you mean to show criminal intent, or mens rea to commit a criminal act?
Professor Penney Lewis: No. We mean that the defendant should be able to predict, looking at the communication they are about to send, whether it is a criminal offence. That will obviously include, “Am I intending to cause harm, or am I aware of a risk of harm?” Let us assume that they do not intend to do that, but they are aware of the risk of harm. If there was an actual harm requirement, the point at which you know whether an offence has been committed is when someone is harmed. We think that it would be better if you could tell whether this offence was committed before the message was sent.
That does not mean actual harm will not be relevant. The likelihood is that the prosecution will use actual harm in some cases to prove that harm was likely. It will be evidence that harm was likely, and it will be harder to prove that harm was likely if there is no actual harm. That is certainly the case, but there are other ways of proving that harm was likely, and it will simply be part of the prosecution’s burden of proof.
If we look at existing offences, vexatious complainants do not have to prove either likely or actual harm; they just have to say, “I think this is grossly offensive”, “I think this is indecent”, “I think this is obscene”. That does not mean it was grossly offensive, et cetera, but it will be evidence of it being grossly offensive if some people found it to be grossly offensive.
Therefore, we think that it is important to include a harm-based element, but we also think that it is important that the defendant is able to predict at the point of sending whether they have committed the offence.
The other elements of the offence—the fault elements, which mean either an intention to cause harm or awareness of a risk of harm, the likely audience, which I spoke about earlier, and whether it was sent with reasonable excuse—will also help to constrain the scope of the offence. While it is completely understandable to look at the elements in isolation—indeed, we do that in the paper—one also has to consider them in combination to get a holistic picture of how this offence will work.
Baroness Buscombe: Dr Hoggard, do you want to add to that?
Dr Nicholas Hoggard: An actual harm element would not preclude the risk of the vexatious complaint. The mere fact that somebody was harmed, even to the required standard, does not in and of itself mean that harm was likely. Indeed, one of the reasons we decided that the “likely” criterion was important is that it imports the notion of foreseeability and builds on what Penney said about the ability of the defendant to know, at the point they send the communication, that harm is likely—that they are doing something that might be culpable.
We are saying that culpability rests on the fact that harm was likely and not the mere fact that through some chain of causation somebody was actually harmed. Our submission is that the “likely harm” criterion is more restrictive and appropriate than the offence being complete at the point that actual harm can be proven.
Baroness Buscombe: My second question is this: how would a court judge emotional or psychological harm to a victim who does not necessarily exist—for example, in relation to a public message that goes out on social media that is not directed at any individual, but somebody decides it is creating some sort of psychological harm? Or are we not approaching this in the right way?
Professor Penney Lewis: I am not sure that it helps to consider a fictional victim. One needs to consider the likely audience. This is an inquiry like “serious distress”, which is not foreign to the criminal law. I mentioned in my introduction offences of stirring up hatred. Since 1976, one limb of the offence of stirring up racial hatred can be committed where words or behaviour are likely to stir up racial hatred. Courts have to assess whether this communication of these words was likely to stir up racial hatred. Racial hatred does not have to have been stirred up for this offence to be complete.
Two of the public order offences I mentioned earlier involve either likely harassment, alarm or distress or an inquiry as to whether a person was likely to believe that violence would be used. It might seem as though these hypothetical inquiries are unusual or even fictional, but they are the bread and butter of the magistrates’ court and the Crown Court. We do not think it will be quite as challenging as you fear, although it will be context-dependent. The court will have to assess this in the light of the evidence it has about actual harm and, where there is no actual harm, it will be more difficult to establish that harm was likely.
Baroness Buscombe: May I ask you both for a quick response to a question that might interest those listening? How many responses did you get to the consultation? Were they predominantly lawyers, was there a mix, or are you not at liberty to say?
Professor Penney Lewis: I think we can talk in general terms. We had 133 responses to the consultation from a very wide spectrum: for example, groups concerned with freedom of expression; what we call operational stakeholders—such as the CPS, the police, the Magistrates’ Association, the Justices’ Clerks’ Society; lawyers in the form of the Bar Council, the Criminal Bar Association, the Law Society and certainly individual lawyers; and quite a number of members of the public.
Most of the responses were organisational or representative—they were representing a number of people. We also had some very helpful contributions from charities—for example, anti-bullying charities and those who work with victims. One of the things that we are looking at in this project is the encouragement or glorification of self-harm. We had very helpful responses, for example, from organisations which work with victims of self-harm and people contemplating suicide.
Baroness Buscombe: That is really helpful to know and it also helps with the context.
Viscount Colville of Culross: I would like to ask an extra question that is not in the list. In the law on harassment, the course-of-conduct component is taken into consideration when prosecuting. Do you anticipate a similar component in the online safety Bill? For instance, is it possible for a single harmful tweet to set off a prosecution, or would it have to be part of a campaign?
What are the issues surrounding the specific offence of inciting or encouraging pile‑on harassment, which we have heard so much about?
Professor Penney Lewis: You are right that in the Protection from Harassment Act there are some offences that require a course of conduct. The offences that we have been asked to review are those that can be committed by one single communication. We were also asked to look at pile‑on harassment, both co‑ordinated and unco‑ordinated. It is possible to prosecute co‑ordinated pile‑on harassment through the Protection from Harassment Act, so in that sense there might well be a course of conduct prosecution, but the general-type communications offences can currently be committed by one communication and our provisional proposals could also be committed by one communication.
We are considering whether new offences are needed to deal with pile‑on harassment or possibly whether amendment might be necessary of the Protection from Harassment Act offence that currently can be used to deal with group harassment.
That is a broad outline of where we are. I do not know whether Nick would like to say something more about pile‑on harassment.
Viscount Colville of Culross: Nick, perhaps you could talk us through some of the amendments you are thinking about.
Dr Nicholas Hoggard: I will begin by noting that pile‑on harassment is very difficult for the criminal law to deal with, in part because of the scale of the problem. It is slightly odd that the more widespread the problem, the less able the criminal law is to deal with it. None the less, the data we have seen are astonishing. In the most extreme instances, we are aware of well-known public figures who have received harassing messages at rates of between 30 and 200 a minute from people all over the world, so as a broad problem the criminal law is not well equipped to deal with this. We just do not have the capacity to do it.
Therefore, we thought that the best way to address it, if indeed we can address it at all outside the existing harassment law and its course-of-conduct provisions, is not to look at everyone who happens to contribute to a pile‑on, because they are numerous, but instead to ask whether we can focus the attention of the criminal law on attempts to create a pile‑on; that is, those early instances that may happen not on Twitter, for example, where the pile‑on itself might take place, but in separate and perhaps more private groups where people are attempting to co‑ordinate a pile‑on.
One of the things we are looking at, as Penney correctly mentioned, is the extent to which the existing law on group harassment—that is, where the course of conduct is committed not by one person doing something at least twice but by two people agreeing to a course of conduct, at which point each of their acts can be imputed to the other so that it is then a course of conduct for the purposes of harassment—addresses what we need it to address, or whether we need something that looks a bit more like a law of incitement, or a law of criminal attempt, where you are doing something with the intention that others carry something out. Therefore, even though you have sent only one message—so it is not harassment in the strict sense—you have none the less tried to incite harassment in the form of a pile‑on.
That was why we ended up considering that. We have not yet come to a firm view on whether the existing law can adequately deal with that, and we are talking to parliamentary counsel about it at the moment. None the less, that was why we took the view we did in the consultation paper. Does that help?
Q163 Viscount Colville of Culross: That is very helpful.
Turning to a different subject, I am very pleased that the commission recognises the importance of free speech for journalism, but is there not a danger that the narrow definition of the carve‑out to cover regulated media, such as news media, broadcast media and cinema, will chill the freedom of speech of users of those media sites? For instance, you specifically do not include readers’ responses, yet they are now very much seen as an integral part of the news story online.
Dr Nicholas Hoggard: The first thing to say is that the law does treat the press differently from individuals, certainly in terms of Article 10 of the ECHR on freedom of expression. The law does draw these distinctions.
There is a whole line of case law in the European Court of Human Rights that makes it clear that when you are interfering with the freedom of expression of the press you have to justify that interference to particularly stringent standards. That is because when you are interfering with somebody’s Article 10 rights that interference needs to be necessary in a democratic society. That is the legal justification or requirement for the interference, and the phrase “in a democratic society” imports very particular values. It is very clear that the court takes seriously the notion that the free press is, if you like, the sine qua non of a democratic society.
To deal with your specific point about how we carve out the press, the first question is: why do we have to carve out the press? That obviously flows from the concern that, if we were to impose laws that governed all forms of communication, the press might be caught up, if you like, in that regulation. The press can obviously commit offences—for example, disclosing damaging classified information. You could do that in a published article, in the same way as you could, for example, incite an act of grievous bodily harm.
That would be another offence that you could commit in a published article, but the problem with communications offences is that they apply broadly to communications. Therefore, rather than looking at a specific act targeted at a specific problem, such as glorifying terrorism—although there are concerns around that standard, so it might not be the best example—none the less the problem with trying to apply the communication offences to articles is that it would be a very broad application of a criminal law that would in effect, we submit, operate as a form of press regulation.
The reason that we decided to propose a carve-out focused on regulated press was the problem of how you define “the press”. This is not particularly easy to do when you are talking about the role of the press, if you are trying to use that as the standard, because, as you rightly point out, that becomes broad very quickly. Would a journalist tweeting in a private capacity be able to avail themselves of this carve-out?
Concerned though we were to ensure that we were not regulating the press through a side wind—we had been asked to consider abusive communications, not new regulatory standards for the press—we had to have some kind of carve‑out. What that carve-out looks like is a difficult question. We have, luckily, had a number of helpful, considered responses from consultees, not just lawyers but representatives from press organisations and across the spectrum of NGOs and academics, on how we might tackle this problem. None the less, there is a very real concern that if we did not have some kind of carve-out we would be regulating more broadly, but if you do not tightly define what that carve‑out is you could almost have a runaway situation where the law becomes even more vague simply because it is not clear on what basis you can avail yourself of this carve‑out.
Viscount Colville of Culross: Penney, do you have anything to add about the narrowness of the carve-out?
Professor Penney Lewis: Not on the carve-out specifically, but the broader point about comments below articles is where we would expect “reasonable excuse” to feature heavily. One reason we did not construct it as a defence that the defendant would have to prove is that we think it is so important to ensure the best protection possible for freedom of expression compatible with the harm prevention rationale for interfering with it. That is why we would impose a requirement on the prosecution to prove beyond reasonable doubt that the defendant did not have a reasonable excuse. For comments below the line of a newspaper article, that would be very difficult.
The other element that we think protects the writers of those comments in most circumstances is that the defendant will be guilty only if they either intended to cause harm or were aware of a risk of harm. We are certainly aware that there was some concern among consultees about the latter mental element, so we are looking at that to see whether we need to refine it. We are certainly paying close attention to all the responses that deal with that particular aspect to make sure that we have it exactly right.
Viscount Colville of Culross: That is very helpful.
The Chair: We still have two quite substantial areas to discuss with you and only about 10 or 15 minutes. Perhaps I may ask for reasonably brief questions and answers. We might have to write to you if we have anything outstanding at the end.
Q164 Lord Griffiths of Burry Port: Thank you for being here. On harmful online communications, the process in which you are involved, among other things, is an attempt to give us a contemporary provision to update the Malicious Communications Act 1988 and the Communications Act 2003. My goodness me, what communications have developed into in the intervening years is extraordinary. I am sure that in 10 years’ time we will be looking at what we are doing now and asking about the online safety Act and whether it adequately covers what we will then be experiencing.
We have been talking thus far about the criminal law and what happens in courts in certain instances. Of course, it is not just the courts or the criminal law that are the active players in what is happening right now. The major active players are the platforms. They are effectively being asked to be judge and jury in a quasi-legal setting and to know the difference between legal and illegal, and the whole concept of “legal” and “harmful” comes into question at that point. For example, Facebook—we have talked to it—has established what feels like a court with an oversight board to judge in certain instances. The volume of stuff that comes across the platforms is such that algorithms are turned to again and again to make judgments in many areas where subjectivity and nuance make it almost impossible for human beings, let alone machines and artificial intelligence, to make reasonable judgments.
How do you see what you do in your part of this process working in collaboration with these people who all the time are concerned with making judgments of their own? It is vital to take a holistic view, is it not, as we look at this vital area of our work? How would an algorithm define, for example, the word “likely”? Let us just leave it there. I look forward with enormous interest to your replies.
Professor Penney Lewis: The first part of my reply is that, luckily, it is not my job to define “likely” for an algorithm, but what we would say is that these companies are already having to do this using a criminal law that has very vague terms such as “grossly offensive”, “indecent”, et cetera. What we will be asking them to do is agree a category shift. Instead of looking at a piece of speech or communication and labelling it as grossly offensive, obscene, indecent, threatening, et cetera, we will be asking them to think about whether there is likely harm to someone who is likely to encounter it.
Having said that, this is a task businesses have to undertake. They have to make sure that they comply with the law, including the criminal law. We generally do not design the criminal law in such a way as to make easier the lives of businesses that will have to follow it. We need to make sure that we design the criminal law in a way that reflects the harmfulness and wrongfulness of conduct and criminalises only those defendants who are sufficiently culpable. In this context we also need to make sure we do not interfere with freedom of expression any more than is necessary in a democratic society.
We have spoken to the platforms; we did so during the consultation, and we have also received some helpful consultation responses. While they are certainly interested in how we are going about this and the elements we are proposing, they have not raised serious concerns about how they are going to identify content that falls into the category of unlawful. We are in conversation with them, but we also need to make sure that the tail does not wag the dog. We do not want a criminal law that is designed by Facebook.
The Chair: Dr Hoggard, do you have anything to add? If not, we will move on.
Dr Nicholas Hoggard: No. Penney has said everything I would have said, and more eloquently.
The Chair: Let us go back to freedom of expression, which Professor Lewis touched on earlier.
Q165 Baroness Bull: Thank you for a really fascinating and at times slightly perplexing conversation this afternoon. I have three questions in about as many minutes, so I will be brief. We have talked a lot about the definition of harm and who is harmed. Do you have concerns that the proposed new approach that we have been talking about today would have a negative impact on freedom of expression?
Professor Penney Lewis: Do you mean the harm-based communication offence?
Baroness Bull: Yes.
Professor Penney Lewis: It is a criminal offence; it interferes with freedom of expression. There is no question but that these offences constitute an interference with freedom of expression. The question is whether they are necessary in a democratic society and, therefore, that interference can be justified.
Baroness Bull: That is a very clear answer and also a slightly chilling one. It moves on to a bigger discussion that we probably do not have time for: the balance on that spectrum and who decides where we sit. Dr Hoggard, do you want to add anything to that?
Dr Nicholas Hoggard: Only that it is a balancing exercise that we have to make in all sorts of areas of criminal law. To take an obvious example, a state might have a number of legitimate aims for wanting to constrain free speech, one of which is national security. We have no particular qualms with the notion that to some degree, though we might quibble over the degree, it is right to limit people’s ability to share classified information. There is no doubt that that is an interference in freedom of expression. That is one example. The law has to balance these considerations with the aims of society or other people’s rights all the time.
Another example might be my right to share private images of somebody else, or a photographer’s right to break into somebody’s house, take photographs and publish them online. The law deals with this balance all the time. In a way, this is nothing new. What we want to ensure is that, with this law, we can justify the basis of our interference in freedom of expression. We think that the proposals that we have made do that better than the existing law because we are pointing directly to a harm and directly to a notion that here is a right that has been interfered with, and that is better than simply having these objective standards of offensiveness.
Baroness Bull: That is very helpful. It leads me nicely on to the second question. You talked about it pointing to an absolute harm, but the words are “likely to”. We have touched on this a little bit. Some people have suggested that is too low a bar and that those phrases are not easily susceptible to statutory definition. In cases where intent cannot be shown, how do you envisage that the court will decide whether hatred is likely to be stirred up? What sort of test will it apply? Dr Hoggard, why not carry on with that one?
Professor Penney Lewis: Perhaps I may jump in because Nick does not work on the hate crime project. This is the offence of stirring up hatred. Currently, the offence of stirring up racial hatred can be committed in circumstances where hatred is likely to be stirred up. This element has been in the offence since 1976. It was also in the precursor to this offence, which dates from the 1930s.
The courts interpreted this in the earlier Act. They adopted a relatively high threshold, holding that “likely to” means more than merely “liable to”, and we think that would apply in the context of “stirring up”.
To give you an example of what is prosecuted in the context of these offences, it is communications that call for the rounding up and extermination of a particular religious group. For example, after the Manchester bombing, someone was prosecuted for calling on Facebook for mosques to be burned down, preferably while full. We are talking about very serious offences that have a very high threshold. In the last year for which we have data, there were only 13 such prosecutions, with 11 convictions. The courts are familiar with the “likely to stir up hatred” element and have been able to apply it.
We also propose that additional protections be included in the “likely to stir up hatred” limb that would make the offence narrower. We are proposing the removal of insulting words from this limb. The words would have to be threatening or abusive, and we are proposing two fault elements. The defendant would have had to know, or ought to have known, that the words were likely to stir up hatred, and the defendant would have had to know, or ought to have known, that the words were threatening or abusive. We think that this will be a more focused version of the “likely to stir up hatred” offence that will be narrower and ensure that defendants are sufficiently culpable before they are convicted of what is a much more serious offence than the communications offences that we have been talking about.
Baroness Bull: Again, you have led on to my next, and last, question, which is about the extension of hate crime law offences to philosophical beliefs and sex workers. You will know that concerns have been raised about this. What do you see as the consequences of this inclusion, particularly for freedom of expression?
The Chair: Perhaps you could be quite brief in your response. We may have to come back to you to follow it up in writing.
Professor Penney Lewis: We have not proposed any extension to include these characteristics; we are consulting on them. In particular, we have not proposed the extension of stirring up hatred offences to include any additional characteristics beyond possibly sex or gender.
As for aggravated offences, we are talking about existing criminal offences—criminal damage, assault, arson—where the sentence can be enhanced because there is hostility on the basis of a protected characteristic. I think that the concerns about freedom of expression that we have been talking about today are less present in circumstances where, for instance, someone is assaulting a sex worker while screaming abuse of sex workers at them. Having said that, we have not made any provisional proposals in this regard and we are looking at responses to see whether any additional characteristics should be added.
The Chair: Sadly, we need to draw the session to a conclusion. It has been fascinating. This is incredibly complex both technically and in terms of getting the public policy balance right. The session today has been enormously helpful to the committee. Professor Lewis and Dr Hoggard, thank you very much for your time. We may well be back in touch with you for some further evidence in writing, but it was good of you to join us.