
 

Select Committee on Communications and Digital

Corrected oral evidence: Freedom of expression online

Tuesday 8 December 2020

3 pm

 


Members present: Lord Gilbert of Panteg (The Chair); Lord Allen of Kensington; Baroness Bull; Baroness Buscombe; Viscount Colville of Culross; Lord McInnes of Kilwinning; Baroness McIntosh of Hudnall; Baroness Quin; Baroness Rebuck; Lord Storey; The Lord Bishop of Worcester; Lord Vaizey of Didcot.

Evidence Session No. 3              Virtual Proceeding              Questions 27 - 37

 

Witness

I: Mehwish Ansari, Head of Digital, ARTICLE 19.

 

USE OF THE TRANSCRIPT

This is a corrected transcript of evidence taken in public and webcast on www.parliamentlive.tv.

 



 

Examination of witness

Mehwish Ansari.

Q27              The Chair: Welcome to our witness for our inquiry on freedom of expression online. We have two evidence sessions today. Our first witness is Mehwish Ansari, who leads ARTICLE 19's global digital team. Today's session is broadcast online and a transcript will be taken.

Thank you very much indeed, Mehwish, for giving your time in joining us to give us your expert evidence. Could you start by briefly introducing yourself and telling us a bit about ARTICLE 19? Could you give us an introductory perspective on freedom of expression online, what it means for you and what you think the main issues are? We will then go around the Committee and take questions.

Mehwish Ansari: Good afternoon. I am head of digital at ARTICLE 19, which is an international human rights organisation with a mandate to protect and promote freedom of expression and information. ARTICLE 19 works on freedom of expression issues related to the applications and platforms that come to mind when we think about the internet in our daily lives: our email providers, social media, VoIP providers and workplace messaging apps.

As an organisation, we go below the content layer of the internet. Specifically, my team and I are tasked with building human rights considerations into the design of the internet's infrastructure. At ARTICLE 19, we define infrastructure as both the physical technologies that make up the network of networks (cables, routers, cell towers, satellites) and the technical rules, or protocols, that govern how data moves across those heterogeneous technologies.

My perspective on freedom of expression online is rooted in the international human rights framework. Freedom of expression is a human right guaranteed in the 19th article of the Universal Declaration of Human Rights. On a very basic level, that means that you, I, or anyone, have the right to seek out information, to share information with others, as I am doing today, and to disagree with those in positions of power, and hold them accountable without fear. Freedom of expression underpins our very ability to think for ourselves. At the same time, it enables other rights under the international human rights framework, such as freedom of religion and belief, or freedom of association and freedom of assembly. As consistently affirmed by the UN Human Rights Council, the same freedom of expression that people have offline must be protected online.

The Chair: Thank you. Let us start to get into some of the detail.

Q28              Baroness McIntosh of Hudnall: Thank you very much, Mehwish, for being with us this afternoon. My question is both very short and, in my mind, immensely complex.

What should not be protected by the right to freedom of expression online? Very specifically, would you talk to us about the difference between definable harms that we can all see and understand and which, broadly speaking, you might see as covered by legal frameworks, such as incitement to violence, and other kinds of expression that are harmful, in the sense that they harm people's mental health or perhaps impede their own right to free expression, but are not illegal?

What I would like to hear you talk about is, first, how we agree the limits provided for under the "protection of health or morals" part of the convention, and, secondly, how we police their observance, because that seems problematic.

Mehwish Ansari: As you note in your question, the scope of the right to freedom of expression is broad. It requires states to guarantee to all people the freedom to seek, receive and impart information or ideas of any kind, regardless of borders, and through any media of a person's choice. In the past, the UN Human Rights Council has affirmed that the scope of the right extends to expression of opinions and ideas that others may find deeply offensive, which you allude to in your question. However, that does not mean that all kinds of speech are protected. Under international law, any advocacy of national, religious or racial hatred that constitutes incitement to discrimination, hostility or violence must be prohibited by law.

States may place other limitations on the right. In any case, any limitation must be subject to a three-part test. This might help answer some parts of your question. First, any limitation must be provided for by law. That means that the law or regulation must be formulated with sufficient precision as not to be overly broad or vague. Secondly, it must be in pursuit of a legitimate aim. The legitimate aims form an exhaustive list: respect for the rights of others, and the protection of national security, public order, or public health or morals, again, as you mention in your question. The third element of the three-part test is that the limitation must be necessary in a democratic society. The state must demonstrate the precise nature of the threat, and establish that the limitation is necessary and proportionate in relation to that threat.

When answering the fairly normative question of what should not be protected by the right to freedom of expression versus what limitations can be placed on that right, we often talk about hate speech or hateful content. There is no universally accepted definition of hate speech in international human rights law. Different regional human rights instruments or national instruments, national legislation or platform community guidelines reflect variation in the standards for defining and limiting that type of speech.

International legal instruments such as the International Covenant on Civil and Political Rights (the ICCPR), the Genocide Convention and the Rome Statute provide some guidance on what types of speech are lawful and must therefore be protected, what types of speech may be restricted depending on circumstance, which goes back to the three-part test, and what types of speech must be restricted. That relates to what I mentioned earlier: the kinds of speech that freedom of expression does not protect. That is what is in the law for states.

Companies have a corporate responsibility to respect human rights. That idea is endorsed in the UN Guiding Principles on Business and Human Rights. In so far as those human rights are set out in international law, companies have a responsibility as well, even though the primary duty bearers are states.

Baroness McIntosh of Hudnall: You made it very clear that this is necessarily enshrined by some means or another in international law, if it is to be effective, but, increasingly, individual nations are creating law in this area. Within that, could you give us your view of the rather difficult-to-understand principle of "legal but harmful", or, to put it another way, "harmful but legal"? That is where the law apparently cannot extend, yet we seem to expect as a community, as it were, that there will be some kind of restriction on things that are harmful, even if they are not illegal.

Mehwish Ansari: I can speak a bit about the kind of speech that you mentioned: speech that is protected but still harmful. I can speak to why that grey area exists in the law. Often, censorship of contentious issues or viewpoints fails to address the underlying social roots of the kind of prejudice that undermines the right to equality. For example, that was noted by the UN special rapporteur on freedom of religion and belief, who identified that intolerance is fuelled by populist political movements that scapegoat marginalised and minority groups, and by the hatred incited in their name against persons on the basis of religion.

You note that it is a very complex and difficult issue. The complexity becomes evident when we analyse the restrictions placed on freedom of expression and information by states. Those restrictions have failed to provide space for people to debate and have constructive dialogue. The focus should be on pushing for online environments that redress the inequities that prioritise oppressive speech over the voices of those who are systematically marginalised, rather than on overcensoring speech that is lawful, and therefore protected, but may still be deeply offensive, intolerant and even hateful.

The Chair: Let us move on and talk about the role of platforms and companies, which you have alluded to.

Q29              Viscount Colville of Culross: Good afternoon, Mehwish, thank you for coming. In Side-stepping Rights, ARTICLE 19 says: "the vast majority of speech online is now regulated by the contractual terms of a handful of companies". It goes on: "Terms of service usually include lower standards for restrictions on freedom of expression than those permitted under international human rights law". How can platforms develop policies and definitions so that they fall within human rights law on free expression?

Mehwish Ansari: To pick up on the line that ARTICLE 19 has taken as a general understanding of the way that dominant platforms have approached freedom of expression: their terms of service usually include lower standards for restrictions on freedom of expression than those permitted under international human rights law, as you mentioned. While low free speech standards generally enable companies to grow their user bases by creating safer online environments, they also turn those spaces into much more sanitised environments, in which freedom of expression is limited not by the principles of necessity and proportionality but rather by propriety.

To take content removal as an example: while the mechanisms put in place by dominant social media platforms to remove content generally feature some procedural safeguards, none contains all the appropriate safeguards, so they all fall short of international standards on freedom of expression and due process in some way.

Viscount Colville of Culross: What can they do to try to address those problems?

Mehwish Ansari: One piece is transparency. Although we have seen positive improvement in companies' transparency reporting, it should be further improved. In particular, social media companies should specify whether they remove content on the basis of their terms of service or at the request of Governments or trusted third parties, such as local NGOs or other associations. Moreover, companies should provide information about the number of complaints they receive about alleged wrongful removals of content, and the outcome of such complaints, whether or not content was restored after the removal took place. In general, we recommend that transparency reports should contain the types of information listed by our partner Ranking Digital Rights, another NGO, in its indicators on freedom of expression. That is one concrete way in which we need to see improvement.

For models of how to improve that at a more systemic level, we need to hold companies to the UN Guiding Principles on Business and Human Rights, so that their community standards and terms of service are in line with international standards and applied in a manner consistent with them. Those standards need not only to be designed with the international human rights framework in mind but to be applied consistently with it. At the same time, states must provide an effective remedy for free speech violations by private actors, particularly in situations where companies unduly interfere with the right to freedom of expression by arbitrarily removing content or imposing other restrictions.

Viscount Colville of Culross: In Side-stepping Rights, you looked at how the different platforms have defined various things such as hate speech, and you criticised those definitions mainly as being too vague. What would you like to see the platforms do to address that concern? After all, they will get together in the middle of next year to harmonise definitions and procedures.

Mehwish Ansari: As part of what we are looking at for models of regulation, we often see that, where there is quite heavy intervention from the state, regulation ends up having a disproportionately negative impact on freedom of expression. We are looking towards models of self-regulation. That is different from the current practice of regulation by contract through terms of service, which is a framework that relies entirely on voluntary compliance, with legislation playing no role in enforcing the relevant standards. Such models rely first and foremost on members' common understanding of the values and ethics that underpin their professional conduct, usually via codes of conduct or ethical codes. Those kinds of voluntary initiatives certainly need to be taken with a grain of salt.

We need to properly scrutinise those models, because they can be used to circumvent the rule of law, and they can certainly lack transparency and fail to hold companies and Governments accountable for wrongful removal of content. However, when we take that in line with what I said earlier about holding companies accountable to the UN guiding principles, and when it comes to states providing effective remedy for free speech violations, we see that this could be an opportunity to help companies move forward such that the application of their terms of service is in compliance with the international human rights framework.

Viscount Colville of Culross: Thank you very much.

Q30              Lord McInnes of Kilwinning: Mehwish, thank you very much for coming along today. I was very interested that you raised as a principle of freedom of expression the ability to seek out and receive content. The Hunter Biden New York Post article was an example of platforms using algorithms to suppress the reach of particular content that they viewed to be harmful. I guess we will see a lot more of that with anti-vax stuff over the coming months. How does it sit with freedom of expression if algorithmic control of reach is deployed, perhaps to suppress the audience and stop something that could be judged as harmful but is legal, as Baroness McIntosh said, from reaching that audience?

Mehwish Ansari: In my personal understanding, free speech does not necessarily mean free reach. There is no right to algorithmic amplification that would disproportionately increase the reach of certain voices over others. In many ways, platforms are very much tilted because of their monetisation and the ad economy.

Other forms of reducing reach, such as placing fact-checking labels on content, as Twitter did during the US elections on Donald Trump's tweets regarding postal voting, constitute, in ARTICLE 19's view, a more positive approach to protecting freedom of expression when it comes to regulating the speech of political leaders, rather than simply removing content. Placing a notification that the tweets violated Twitter's rules, while nevertheless recognising the public interest in not removing them because it is the speech of a political leader, enables human rights such as the right to political participation. Removing the posts would have completely restricted the ability of people in the US to hold President Trump to account in that kind of situation.

Lord McInnes of Kilwinning: Going back to your previous answer, I guess the important thing is clarity and transparency about the management of reach, as done by the platforms. Would that be fair?

Mehwish Ansari: Absolutely, yes.

Q31              Baroness Rebuck: Thank you, Mehwish, for your very interesting answers to date. I want to look at amplification and how technology can deploy user insight to manipulate behaviour. You mentioned a couple of initiatives that you think are positive, but how could the design of platforms better balance freedom of expression and reduce online harm? What measures have you seen that attempt to address that issue, and what further changes, if any, would it be helpful for us to note through this inquiry?

Mehwish Ansari: In answering your question, I want first to take the opportunity, if I may, to push back a little on the idea of balancing freedom of expression and harm reduction. It is a framing that we hear a lot, but it creates a false dichotomy, implying that freedom of expression comes at the cost of people's safety and security online. It is important to remember that freedom of expression is not an absolute right that permits all manner of hate speech. At the same time, freedom of expression enables a discursive environment where oppressive, intolerant and even hateful speech can be directly challenged, as I mentioned before. It is the kind of environment that enables us to identify the root causes of harm and hold those in power accountable for their responsibilities in addressing those root causes. It is about the framing.

To your question about the design of platforms, we have seen that more recently during the US elections, again going back to Twitter, which rolled out a feature to slow down retweets without comments. While this seems like an interesting design measure that specifically speaks to your question, we do not have the data to determine whether it was effective in reducing harm, however we choose to define what harm is, or however Twitter takes that definition and gives us that data. We do not have that insight.

That speaks to a larger and more fundamental issue in being able to answer your question well. While it is possible that different design choices could better enable human rights, we need better data generally across all the platforms to be able to answer that question. There are two big obstacles to getting better information to be able to answer that question.

The first is that whether the measures are considered effective very much depends on the definition of effectiveness, who is defining effectiveness, and how it is assessed. We have seen that with major platforms such as YouTube, which have used measures such as flagging, content removal and reporting mechanisms. By and large, the effectiveness of those design features, those mechanisms, has been assessed by reference to the volume of content being taken down. But a significant volume of takedowns is not in and of itself a measure of effectiveness; it is partly a function of how companies write their terms of service. If the definition of harmful content is expanded, it is more likely that the volume of removed content will go up. At the same time, it is more likely that legitimate content will be removed. It is very important to analyse closely the definitions being used and how the information is being framed.
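A toy sketch of that point (my illustration, not the witness's; the posts and terms are invented): with a keyword-based definition of harmful content, widening the definition mechanically raises the takedown count while sweeping in legitimate speech, so raw volume says little about effectiveness.

```python
posts = [
    "attack on the minister's record",  # legitimate political criticism
    "violent threats against a user",   # plausibly harmful
    "fight for your right to vote",     # legitimate campaigning
]

def takedowns(posts, banned_terms):
    """Remove any post containing a banned term; return the removed posts."""
    return [p for p in posts if any(term in p for term in banned_terms)]

narrow = {"violent"}
broad = {"violent", "attack", "fight"}  # an expanded definition of "harmful"

print(len(takedowns(posts, narrow)))  # 1 removal
print(len(takedowns(posts, broad)))   # 3 removals, two of them legitimate speech
```

Takedown volume here rises with the breadth of the banned terms, not with any gain in accuracy, which is why the definitions themselves need close scrutiny.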

The second is that many of those same platforms actively thwart researchers conducting independent research from making inquiries into design features, even down to the fundamentals of the platforms and what makes them work. Recently, Facebook shut down a research project conducted by the Ad Observatory at NYU. The research aimed to uncover how politicians use the social network to target its users with ads, which is pretty fundamental to understanding how that platform works. It is a move that doubles down on the lack of information that Facebook itself makes publicly available. Even though Facebook ultimately reversed the decision, just last week I believe, it is an example of how big tech platforms shut down critical research that could better inform design but that risks hurting profits. If we do not have that information, we cannot meaningfully assess what design features would work in reducing harm, however we define it, and enabling freedom of expression.

Baroness Rebuck: You have sort of answered my second question, which was about the extent to which a platform's business model, of which a key part is to maximise user engagement, is an inhibitor to discovering the truth about what design modifications could be effective and how we could make the online world more equitable, with better etiquette. I do not want to answer the question for you, but do you want to say anything else about the business models of platforms and how they affect the key issue that we are talking about at the moment?

Mehwish Ansari: From my personal understanding, it goes back to the idea of trying to address the root causes when we are talking about amplification, and what drives the tilt that we are seeing with these platforms. Some voices, particularly those that are more oppressive and more hateful, are disproportionately amplified, which creates a chilling effect on the voices of others, particularly those who are most marginalised and the recipients of that kind of speech. It is very difficult for us to be able to address meaningfully what that means or how to stop it without looking at the underlying profit models and how the platforms run in the ad economy. I think that is all I will say in reiteration of your question, but thank you.

Baroness Rebuck: That is helpful. Thank you very much.

The Chair: There is a bit more on platforms.

Q32              Baroness Buscombe: My question continues from where we are in this interesting discussion. What about a form of intervention vis-à-vis the monopolistic powers of, for example, Google and Facebook? Would that help to balance freedom of expression and generally benefit freedom of expression? If that were possible, what kind of intervention would you suggest could make a difference in benefiting freedom of expression?

Mehwish Ansari: Competition can be a critical tool for protecting and promoting freedom of expression on platforms. ARTICLE 19 has been addressing that in its work in recent years. In the social media sector, there are strong network effects, which mean that the more people join a platform, the more value (or, to use a less charged term, utility) it has for its users.
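As an illustrative aside, not part of the witness's evidence: network effects are often formalised by Metcalfe's law, which values a network of n users by the number of possible pairwise connections among them,

```latex
V(n) \propto \binom{n}{2} = \frac{n(n-1)}{2} \approx \frac{n^{2}}{2},
```

so doubling the user base roughly quadruples the possible connections, which is one way of seeing why a dominant platform's utility advantage compounds.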

The market for social media platforms is currently dominated by a very limited number of companies, as you note in your question, which enjoy a dominant or a super-dominant position. Such a privileged position allows social media companies to impose on individuals terms of service that they might not otherwise accept and that do not sufficiently protect their human rights. In a competitive market, individuals would be able to shift to other platforms that offer better terms of service, which may better protect their freedom of expression, in the same way that consumers of any other product might shift to a competing product that offers higher quality. In our case, the quality is marked by the degree to which terms of service protect human rights.

The current social media market concentration creates indirect effects that serve to exacerbate the direct threat to freedom of expression that you noted in your question. It also creates a single point of failure. If only one or a handful of companies holds that kind of power, it can make it easier for Governments to pressure gatekeepers into changing their terms of service in a way that is not compliant with human rights. The structure does not just have direct effects on freedom of expression; it creates an environment where there can be entrenched and continued threats to freedom of expression.

The second part of your question was about recommendations, or how to address the issue. Fundamentally, there is a need for deeper consideration of the goals of competition. The traditional consumer welfare standard, when talking about competition, is strictly focused on economic values such as prices and quantity. Those values are exactly what have allowed big tech platforms to grow and concentrate, as you can imagine. ARTICLE 19 has been calling fundamentally for regulators to consider a shift in approach and, when considering consumer welfare standards, to consider quality, choice, innovation and of course users' human rights. That is one thing we can do to begin to redress the issue.

Baroness Buscombe: It is such an interesting area. I absolutely agree that competition has to be the way forward. Would it be better if in some ways it was driven by the users? How can we encourage users, particularly young users who perhaps go in certain directions because that is where everybody else they know goes, to help others to flourish in the market? It is a tough one.

Could there be a way in which the infrastructure of the internet can affect freedom of expression? You touched on that. Could a competitor develop something a bit different that would serve the customer better, in a sense? Is that part of your role at ARTICLE 19? For example, does automated content moderation really work? Is it sincere, or is there a need to try to get more people into the marketplace so that there can be better accountability and transparency? Certainly, from my perspective, this is one of the hardest challenges we face. What do you think?

Mehwish Ansari: Absolutely. I agree that it is a very difficult question. Trying to bring users to create user pressure to push towards more competition by bolstering potential competitors is certainly an interesting idea; however, there are certain structural considerations for users, particularly with social media platforms. First, as I mentioned, are the network effects. If social media is the key way in which you connect to your friends and your family online, you want to be where all your friends and family are, and it can be very difficult for you to move if everyone else is there. That is the power of network effects and exactly one of the reasons why platforms have become so dominant and super-dominant in their position.

Some of my colleagues at ARTICLE 19, and researchers such as Ian Brown in the UK, have been looking at interoperability for platforms. It is a concept that is gaining traction. To be honest, I do not have a lot of expertise on it, but it could help to address the network effects by allowing users to be able to connect to a particular network, but not to be boxed in by that network if they want to talk to others who are part of another network. The technical idea, and one of the models where it could work, is similar to the way email works.

My email client is probably different from your email client, yet if I were to send you an email, you would be able to receive it. There is a level of standardisation at the network layer, through protocols and design, such that we are able to send data in a consistent and standardised format; and, through APIs and other kinds of infrastructure we can add, platforms are able to interoperate. It is a pretty nascent idea and there is a lot of work that we would still need to do to move forward on it, but researchers are already looking at it and thinking about it.
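To make the email analogy concrete, here is a minimal sketch of sending mail over SMTP, the open protocol that lets messages cross provider boundaries (an illustration, not from the evidence; the hosts, addresses and credentials are placeholders):

```python
import smtplib
from email.message import EmailMessage

# Compose a message in the standardised format that any compliant
# mail client can parse, whichever vendor built it.
msg = EmailMessage()
msg["From"] = "alice@provider-a.example"  # hypothetical sender
msg["To"] = "bob@provider-b.example"      # hypothetical recipient on a different provider
msg["Subject"] = "Interoperability demo"
msg.set_content("Delivered across providers via a shared protocol.")

# Hand the message to the sender's own SMTP server, which relays it
# to the recipient's provider using the same open protocol.
with smtplib.SMTP("smtp.provider-a.example", 587) as server:  # placeholder host
    server.starttls()                      # upgrade the connection to an encrypted channel
    server.login("alice", "app-password")  # placeholder credentials
    server.send_message(msg)
```

Interoperability proposals for social media aim at the same property: a common message format and exchange protocol, so that joining one network does not box users in.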

Baroness Buscombe: That is great and a good idea. Thank you very much.

The Chair: Shall we move on to domestic regulation?

Q33              Lord Vaizey of Didcot: This is very interesting and thank you very much, Mehwish. I want to pull together some of the things that Baroness Buscombe and others have been talking about. What is emerging in what we are hearing on the Committee is interesting as regards competition between different providers: WhatsApp versus Signal and DuckDuckGo versus Google. To a certain extent, privacy is becoming a sort of customer option and customers are sort of understanding it.

I am interested in content regulation. We all are. In the UK, the Law Commission is looking at reviewing the kinds of offences that people are prosecuted under, which is long overdue. I say that in a neutral way. These are offences that existed before Twitter, to put it bluntly, and reviewing them to make them more coherent and realistic has to be a good thing. How do you think UK law balances freedom of expression with other rights, such as the right not to be harassed or bullied? The online harms proposals are imminent. You must have had a very good look at those. Do you think that is a good model, potentially?

Mehwish Ansari: As a non-lawyer, I will shy away from giving a full legal analysis of the UKs national framework. However, I can give some broad comments on the online harms proposals. While it is clear that the Government responded to the backlash they faced in response to the initial consultation on the Online Harms White Paper, ARTICLE 19 maintains that the revised stance is still problematic, specifically because the approach taken in the online harms framework expects that internet companies should have a duty of care to their users, overseen by a regulator. That is the core concept.

The harms, however, are undefined. The Government themselves acknowledge that some of the harms they would like to include in scope are "legal but harmful", with unclear definitions. We have touched on this in previous questions in the session, but the lack of clarity extends to other parts of the proposal. It is unclear how much discretion Ofcom would have under that framework to decide what is acceptable and what is not, and what accountability or due process would look like.

Finally, it is unclear what systems and processes companies will need to have in place to address harmful content. Those approaches could very well threaten the privacy of users, undermine the security of their communications, and block lawful content even before it is uploaded. There is still some lack of clarity, and, without a draft Bill, the answers to those questions are still unclear. Those are some of the broad comments that ARTICLE 19 has on the current state of the online harms proposals.

Lord Vaizey of Didcot: I will put this question in a slightly unfair way. If you had carte blanche to do whatever you liked on content regulation, what would you do? Perhaps more specifically, if we pushed ahead with the Online Harms White Paper, would you support people who wanted to abolish Section 230 in the US?

Mehwish Ansari: I would not speak for my organisation on that question. A lot of the questions on the online harms proposals and Section 230 are about legal liability.

Lord Vaizey of Didcot: That is what I am trying to get to. What kind of legal liability is it, and where would you draw the line?

Mehwish Ansari: It is a tricky concept. From a freedom of expression perspective, in general, increasing intermediary liability for platforms presents a danger to freedom of expression. I will give you a couple of examples to illustrate that. The pressure on companies that the liability creates often leads to an environment of overcensorship, where platforms proactively take down an overly broad range of content, which inadvertently includes lawful content, in an effort to keep themselves safe from legal action.

Moreover, the technical mechanisms that platforms may feel compelled to employ to maintain good standing often undermine the security and privacy of users. That kind of approach to liability, and imposing intermediary liability on platforms, does not just have an impact on freedom of expression from a human rights perspective; it also has an impact on privacy. Privacy is an enabler of other rights, not just freedom of expression but freedom of assembly and association, so it can lead to serious ramifications considering what we use platforms to do every day.

If platforms are held liable for the content that users generate, they could create measures to break end-to-end encryption. In fact, in Europe, the US and elsewhere, that has already been on the table as a proposal. Breaking end-to-end encryption can mean creating technical back doors when it comes to encrypting a communications path or locally reviewing content on devices before it is sent over an encrypted communications path to someone else. If you create back doors, such as creating a skeleton key, as some regulators have proposed in the past, you risk undermining encryption altogether; for example, you risk the key being found by malicious actors, or you risk its being used by other platforms in unauthorised ways. By virtue of creating a vulnerability in the encryption, you create an opportunity that can be exploited independently by someone else.
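A deliberately simplified sketch of that single point of failure (my illustration, not the witness's; it uses one symmetric key rather than a real end-to-end messaging protocol): decryption depends only on possession of the key, so an escrowed "skeleton key" reads every message exactly as the intended recipient does.

```python
# pip install cryptography
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # the conversation key shared by sender and recipient
escrow_copy = key            # a mandated "skeleton key" is simply another copy

token = Fernet(key).encrypt(b"private message")

# The intended recipient decrypts with the key...
assert Fernet(key).decrypt(token) == b"private message"

# ...but so does anyone who holds the escrowed copy, whether a regulator
# or a malicious actor who steals it: the ciphertext cannot distinguish
# legitimate from illegitimate key holders.
assert Fernet(escrow_copy).decrypt(token) == b"private message"
```

Real end-to-end systems use per-device asymmetric keys, but the structural point survives: a capability added for one party is available to any party who obtains it.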

Lord Vaizey of Didcot: Most politicians want to abolish encryption at the behest of their security services. Thank you very much. I think you have summed up that this inquiry should be about whether we should overcensor or undercensor and what the implications are for privacy of restricting online expression. That was great. Thank you so much.

Q34              Baroness Quin: My thanks to you, Mehwish, for being with us and for making some complicated issues intelligible, which is a real challenge.

I want to ask about some of the international examples and your view of them. Mention has just been made of Section 230 of the US Communications Act. Other countries in the democratic world are grappling with these issues, more or less successfully. There is the German Network Enforcement Act. I was looking online at some of the arguments around the legislation in Singapore. In France, the Projet de loi Avia has partly been struck down by the constitutional court. Are there some examples that we should try to follow and some examples that we should try to avoid?

Mehwish Ansari: Again, as a non-lawyer, I hesitate to indulge in comparative analysis of other jurisdictions. Forgive me. I will focus on a broader explanation that, hopefully, speaks to your question.

It would be useful for the Committee to consider the range of regulatory models that ARTICLE 19 has identified when considering this question. Currently, as I mentioned before, it seems that platforms are operating on a model of regulation by contract. Companies have conditional immunity from liability, established by statutes such as Section 230, as you mentioned, and set terms of service to which they are held, with varying degrees of success, by their users.

Other models being considered and implemented by states are, first, interventionist regulation, where the state holds disproportionate censorship powers, whether through setting prison terms or fines, or through holding content-blocking powers, chilling free expression in general. In those kinds of models, the underlying content laws that the regulators are required to enforce are generally overly broad. In many countries, the regulator is not an independent body, and the law does not always provide for a right of appeal or judicial review of the regulator's decisions. By putting the state in the position of being the ultimate arbiter of what constitutes permissible expression, or of what measures companies should be adopting to tackle "illegal content", such regulatory models are more likely to undermine minority viewpoints, from what we have seen.

The second model we have looked at is co-regulation, a slightly different regulatory regime involving tighter regulation that is actively encouraged or even supported by the state. Many types of co-regulatory model ultimately present the same flaws, from a human rights perspective, that we see in interventionist regulation, in that they entrust too much power to state institutions to regulate online expression. Just as in a more interventionist regulatory model, that would not only have a chilling effect on freedom of expression but would hamper innovation. There are knock-on effects that go beyond the human rights framework.

The third model we have looked at is the model of self-regulation that I mentioned earlier. It is a framework that relies on voluntary compliance. Legislation plays no role in enforcing the relevant standards, and we are still in a position where companies have conditional immunity from liability. There is still concern that the lack of transparency could lead to a failure to hold companies and Governments accountable for wrongful removal of content even in that situation.

The upshot is that there is no easy answer to the question of better or worse models. It goes back to the more fundamental approach that ARTICLE 19 has proposed, which looks to hold companies to the UN Guiding Principles on Business and Human Rights, centring the international human rights framework as they develop their community standards and terms of service, and to have the state provide an effective remedy for free speech violations by private actors.

In the past, ARTICLE 19 has noted that that could be done in practice through the creation of a new cause of action, which could be derived either from traditional tort law principles or from the application of constitutional theory to the enforcement of contracts between private parties. There is a way in which it could be put into practice.

Q35              Baroness Quin: There is a lot to reflect on in what you have said. Earlier, you referred to the desirability of holding companies to the UN guiding principles. Is there an appetite in the UN for going down that route? Is there any sort of appetite, given the difficulties attached to individual national regulations, for having more international co-operation generally, either on a regional scale, as in the EU, or on a more universal scale?

Mehwish Ansari: I apologise for sounding a bit like a broken record but, as you can tell, I am hesitant to advocate regulation or co-regulation, given the threats to human rights and freedom of expression, which naturally does not put us in a position to offer a clear option with a bow tied on it. As regards international co-operation on digital regulation, there is hesitation based on the concerns about the overbroad censorship powers that have come to bear as a result of such regimes.

One of the opportunities, however, when it comes to greater international co-operation, may well be in competition. States are interested in finding opportunities for competition and are looking to revise the standard. We should think about how we can revise the standard so that we can hold companies accountable through competition as a mechanism to facilitate freedom of expression and human rights online.

Baroness Quin: Is there any concrete initiative in the international community, specifically in the United Nations, to take some of these issues forward at the moment?

Mehwish Ansari: I would not be able to tell you for certain what specific initiatives there are. However, there is interest in the idea of the UN guiding principles and their application through the UN Human Rights Committee, which reviews implementation of the ICCPR for member states, and the UN Human Rights Council, which has made note of it. The former special rapporteur on freedom of expression, David Kaye, has looked extensively, in, I believe, his 2015 report and his 2017 report, at applying the business and human rights principles and looking at that framework specifically for the ICT sector.

Baroness Quin: Thank you very much.

The Chair: Mehwish, you have been very concise in your answers. Perhaps it is because you have been at pains to point out you are not a lawyer that you have been so concise. It is very welcome as it allows us to squeeze in one more question.

Q36              Baroness Bull: I will circle back to the beginning when Baroness McIntosh asked what should fall outside the protective umbrella of freedom of expression. I am keen to hear your view on the extent to which cultural differences need to be taken into account in determining that. If there are cultural differences in what is deemed harmful, what extra challenges does that create in the global environment online?

Mehwish Ansari: Culture does not appear in the exhaustive list of legitimate aims on the basis of which expression can be limited; public morals do. At a general level, it is unclear who gets to define what morals are. The process of definition can easily become an issue where majoritarian or powerful interests further disadvantage religious, cultural and political minorities, so it is an important question. As regards the international human rights framework, while the protection of public morals generally constitutes a permissible ground for restricting freedom of expression online, the UN Human Rights Committee has recognised that, because of the pluralism of what are considered morals, any limitation made on that basis must rest on principles not deriving exclusively from any single tradition.

Baroness Bull: Are you aware of anybody writing on the issue of cultural differences, one man's joke being another man's harm, if you like, whom we could consider talking to for this inquiry? It is a topic that we would find particularly interesting.

Mehwish Ansari: I can confer with my colleagues and follow up with the Committee if that would be useful for you.

Q37              The Chair: I think it would. We have talked quite a bit about what amounts to behaviour and etiquette and the way people treat each other online. What is your view of the approach of Governments and tech to digital citizenship, and indeed to citizenship generally, in promoting decent behaviour and etiquette on their platforms and societally?

Mehwish Ansari: Digital literacy is a very important tool in being able to address some of the root causes that drive offensive or even hateful speech, and Governments can put funding towards providing it. In the approach that ARTICLE 19 has taken to supporting digital literacy programming, it is important to consider local contextual needs and solutions that are driven by communities, as different communities use the internet and platforms in different ways. That also facilitates adoption.

When it comes to hate speech, there are certainly resources at international level. For example, there is an opportunity for the UK Government to develop a comprehensive plan for the implementation of the 2012 Rabat Plan of Action, which sets out how to respond to hate speech and how to consider it in the international human rights context. That would provide a comprehensive plan for training law enforcement authorities, the judiciary and those involved in the administration of justice on issues concerning the prohibition of incitement to hatred and hate speech. It takes a multi-stakeholder approach, so it would involve civil society as well as the tech companies in those kinds of discussion. Those are important considerations for any kind of programming that would lead towards an understanding of good citizenship.

In thinking about what good citizenship is, my only question would be: who defines what good citizenship is? Is it the platforms role to do that? There is the question of personal responsibility. Of course, states are the duty bearers to uphold human rights, and companies have the responsibility to respect human rights, and users fall into that. That is why we think about taking multi-stakeholder approaches and community-driven approaches in that more grass-roots approach.

The Chair: Mehwish, thank you very much indeed. Your evidence has been really interesting, and you have opened up a lot of issues that we will explore further. You offered to send in one or two additional bits of information to Baroness Bull. If you have any other reading for our Christmas reading list, please send it. We are just getting into the subject, and your insights have been extremely useful. Thank you for giving up your time and joining us today.

Mehwish Ansari: Thank you very much.