Select Committee on Communications and Digital
Corrected oral evidence: Freedom of expression online
Tuesday 2 February 2021
Members present: Lord Gilbert of Panteg (The Chair); Baroness Bull; Baroness Buscombe; Viscount Colville of Culross; Baroness Featherstone; Baroness Grender; Lord Griffiths of Burry Port; Lord McInnes of Kilwinning; Baroness Rebuck; Lord Vaizey of Didcot; The Lord Bishop of Worcester.
Evidence Session No. 7 Virtual Proceeding Questions 65–71
I: Dr Roxane Farmanfarmaian, Director, Protecting Freedom of Expression in Religious Context, University of Cambridge; Dr Sharath Srinivasan, Lecturer in Governance and Human Rights, King's College, University of Cambridge.
USE OF THE TRANSCRIPT
This is a corrected transcript of evidence taken in public and webcast on www.parliamentlive.tv.
Dr Roxane Farmanfarmaian and Dr Sharath Srinivasan.
Q65 The Chair: Welcome to our next set of witnesses for our inquiry into freedom of expression online. Today’s focus is going to be perhaps a little more fundamental, looking at cultural perspectives on the issue of freedom of expression in general rather than the more specific technical issues we have been talking about up until now, such as the role of platforms.
For our first session, we have two witnesses. Dr Roxane Farmanfarmaian is director of protecting freedom of expression in the religious context at the University of Cambridge. Dr Sharath Srinivasan is a lecturer in governance and human rights at King's College, University of Cambridge. You are both very welcome. Thank you very much indeed for coming and giving us evidence. The session will be broadcast online and a transcript will be taken.
We would find it very useful if you could start by giving us your perspective, from the point of view of your specialism, on freedom of expression in general, as well as any issues arising in relation to freedom of expression online. In so doing, give us a brief introduction to yourselves and your professional background.
Dr Roxane Farmanfarmaian: I teach modern international relations of the Middle East at the department of politics and international studies at Cambridge, which is also where I received my PhD in 2008. Prior to that I was a journalist, so I have had an opportunity to practise as well as to study some of these issues. I began by covering the revolution in Iran and I went on to cover the demise of the Soviet Union in the 1980s, so in some ways I was also a journalist of revolutions. I went back to New York and covered the women’s movement into business and corporate management, which was another revolution.
Ever since, I have been deeply involved in covering media and the developments that technology has brought it, particularly in the context of the Middle East and its use in conflicts there, as the next new thing in mobilising people and bringing change to those cultures. From the very beginning, my career has been looking at concepts of free expression in a cultural context.
In Cambridge, I ran a five‑year project of $1 million funded by Al Jazeera on media and political transition after the Arab spring. I did fieldwork in Turkey during the attempted coup against Erdoğan, for example. I was in Tunisia right after the overthrow of Ben Ali and I watched the change in the viewpoint of the people, from pride that their revolution had brought them free speech, to gradual disillusionment at its growing toxicity. I also received an ESRC grant to study media, religion and conflict right after the Charlie Hebdo affair.
Culture has many impacts on the understanding of free expression. We each understand free expression on our own terms and as a result of the historical context of our cultures. It is as difficult for the West to understand the cultural limitations put on free expression in the Middle East, or even the United States, as it is for those cultures to understand what we consider universalist principles. Article 19 has the greatest multiplicity of interpretations, as an article, that I have ever encountered.
Dr Sharath Srinivasan: Thanks to everyone for having me today. I am also at the University of Cambridge. I am the David and Elaine Potter lecturer in governance and human rights in the department of politics and international studies, and co‑director of the centre of governance and human rights. There are three elements of my work that are very cognate to the discussion today and that I will be drawing on. The first relates to the work of the centre. With colleagues and students, for the last decade, we have worked on a number of projects that focused on providing research towards policy development of international human rights law, and two of those are especially pertinent here.
The first was a decade ago, working with the special rapporteur on summary and arbitrary executions on the safety of journalists. That led to various policy formulations at the international level. One aspect of that was the emergence of what we call citizen journalists and the kinds of harms they were increasingly facing because of what they were publishing online. Very recently, again with colleagues, we contributed research for a general comment that the UN Human Rights Committee put out on Article 21 of the International Covenant on Civil and Political Rights, on the right to freedom of assembly. We were looking at whether that right can and should extend online, and the implications of gathering online being protected under this right. That is a cognate right of the right to freedom of expression, of course.
The second element of my work is focused on sub‑Saharan Africa, in particular east Africa. For a decade, I have looked at media liberalisation and digital technologies, and their combined impact on democratic politics, looking especially at Kenya, Uganda and Zambia, as well as Sudan and South Sudan, but also at the region as a whole. That has been both applied research into the dynamics of change in these contexts, in which there have been rapid transformations, and looking at innovations in how these new capabilities can enhance citizen engagement in policy‑making and policy processes.
Finally, and more recently, this work has focused increasingly on collaborations with computer scientists in Cambridge, looking at the relationship between technology and democracy. In particular, what kinds of politics get embedded in technical artefacts? We are talking to technologists about how they are embedded and how they scale, but also the possibilities for embedding other kinds of politics in the design of technologies. This is very fruitful research. Perhaps interestingly, our experience of doing this work in the African context has given us some entry points into thinking anew and differently about the relationship between technology and democracy.
I will highlight a couple of points, which I hope to expand on in the discussion. Echoing what Roxane said, first, freedom of expression has never been an absolute right. It has always been qualified and had restrictions, which relate primarily to ideas of state and society, and their relationship, in different contexts. There have always been these restrictions and qualifications, which are historically and culturally encoded. In that sense, there is not a universal right and then an accommodation of culture. There are only these rights as they have emerged within political cultures and histories in slightly different ways. But there is a common core and anchor, which is always important to unearth.
Secondly, from my experience with technologies, talking about online is increasingly anachronistic. It does not help us. It is defined by a binary opposite of offline, yet today we traverse a very complex reality of digital communications that blurs this distinction. In some sense, online/offline is the language of modems and internet service providers. It is 1990s language. It is how we first connected to the internet. We do not do that in the same way. Digital connectivity is ubiquitous now. Our job is to come to grips with what freedom of expression means in a digital age, full stop. That of course encompasses digitally mediated technologies.
The central concern I have is that the “freedom” part of “freedom of expression”, and less so the “expression” part, is at risk at the moment. What do I mean? I am not talking about a libertarian idealisation of freedom as such. Our present worries stem from the way expression is taking pernicious forms, such as hate speech, incitement, polarisation, disinformation, fake news and forms of intimidation, but we have to ask why this expression is such a contagion in our world today.
It is not only an outcome of being more connected. I am increasingly concerned that the structural changes to our public realm in the digital age are somehow damaging human agency, independent judgment and citizens’ capacity to navigate their social and public world of ideas freely, and to decide how to express themselves. I hope to expand on why that concern is so sharp for me in this discussion.
The Chair: We look forward to that. Thank you both very much.
Q66 Baroness Buscombe: Thank you. That is a really helpful introduction from both of you. My question, “To what extent does the right to freedom of expression accommodate cultural differences?”, is an important one. But can we take it a step further? People now, when asked a question, do not think about saying what they believe. They think, “What is the acceptable answer?” We are all afraid that we are going to upset the people listening to us, and that they are going to accuse us of hate speech or racism.
We are feeling inhibited, when we should feel more able to have free expression in the digital age. In fact, because we are so concerned to make sure that we give a careful answer, we can end up being incredibly patronising in terms of thinking about cultural differences.
Dr Roxane Farmanfarmaian: I must begin with the point that Sharath brought up. The question is not specifically on digital, but it is becoming a question of how we communicate in a culture that is rapidly having to incorporate different identity politics and what that means in terms of expression. The internet is only one mechanism through which we are seeing that, although it is an expression of our global society, since it goes across borders.
This issue is wrapped up in the impact that the internet has had on ideas of privacy. That has two sides to it. The fear that you described, the concern about, in a sense, inflicting one’s own culture upon another, is requiring a greater sense of siloing and communicating only in private. That divides people within society. It has multiple impacts. We saw this very early on with students. They had all sorts of things on Facebook, which were picked up by their prospective employers, and that had an impact as well.
We are looking at several issues that have to do with our current culture, but I do not see them as cultural differences per se. I see them as moving into a culture of the digital, the shared and the instantly amplified. In a sense, the new forms of etiquette and ethics have to protect our rights to freedom of expression while also addressing the impact of that expression. That is very difficult. In some ways, that comes from the norms that develop out of social engagement.
In effect, we are finding a systemically new form of engagement, and it has brought out a lot of political identity issues. In some ways, those will play second fiddle to the harms that we are now coming to realise the internet can bring, which we will possibly discuss as the other side of this. In the wake of the attack on Congress on 6 January, that discussion has moved into a new awareness that we are dealing with a sorcerer’s apprentice, if you will.
Baroness Buscombe: That is really helpful. Thank you. I am interested that you used the word “etiquette”. In a way, part of that etiquette is the ability to make another person’s life almost intolerable by divisive or appalling responses to an opinion through anonymity.
Dr Sharath Srinivasan: I echo much of what Roxane said. It is not about culture. For some of us, the concern is about offending others, not misunderstanding them. For young people, with the same technologies, the concern is not culture but how they will be appreciated or valued by people on the other side of the communications medium. The Canadian media theorist Marshall McLuhan said, “The medium is the message”. You can ask, “What is the message of this digital technology?” This is one aspect of the message: very fast, very far, for ever. When you say something, you say it very quickly; it goes very far and wide; and it lasts for ever.
That is quite a challenging thought to get across. That is not the way we have been used to communicating in human society. Oral society in intimate settings is very fast, but it evaporates into the ether. It does not stay. We might put a bit more thought and concern into written text. Published documents take some time to come out, so we can be more assured that, if they are going to go very far, they are more thoughtful and considered. This is a very different medium in the effects it has. It affects not just how we think about culture and identity; a whole range of consequences are created by it. Just as some of us may be more inhibited by that, some may be more disinhibited by it. That speaks to much of the concern that we have here today.
For me, culture is one lens on the same fundamental problem that lies with these technologies in a broader sense. It is an especially sensitive lens, and rightly so. What is at stake is what kind of political culture makes sense of this age of hyper communication. How can we tolerate and admit plurality but allow for judgment and care in how people communicate? We are having to come up with a new political culture that makes sense of these levels of communication and information exchange.
Q67 Baroness Bull: Thank you so much. This has been fascinating already. My question is about the impact of cultural differences on the potential for future international collaboration. We heard from a previous witness, Dr Edina Harbinja, who said that she was not optimistic, given the vast differences in culture, and that she could not see any common ground at the UN.
Sharath, earlier you talked of a common core and anchor; Roxane, you talked about our current culture in the singular, as if there may be a common culture. Sharath, can you say a bit more about the common core and anchor? Is there potential for international collaboration, given the differences within our cultures?
Dr Sharath Srinivasan: I am a little more optimistic here. That is probably because I did not have too high an ambition in the first place about how international human rights such as this work. I am reminded of the debates in the 1990s about Asian values. I do not know whether you remember that, but Prime Minister Lee Kuan Yew from Singapore was at the forefront of a movement that said, “In this moment of liberal western triumph, universal human rights do not quite make sense. There are other values in Asia, around society, family and deference, that operate in different ways from liberal western democracies”.
It became clear then that there are variations. We can see variations between western democracies in how they deal with freedom of expression. The First Amendment in the US is vastly different from how we think about incitement to racial or religious hatred in the UK. This is as much about the political regime of a time. What really ended that argument about Asian values in the 1990s was political change. Indonesia was a different country post Suharto than it was under Suharto, and expression was valued tremendously.
If we take culture as ancient, primordial, fixed and certain, and we think, “How could we ever bridge these divides in universal human rights?”, we overestimate the importance of culture in a thick sense. There is a whole lot more room for what we call the progressive realisation of universal human rights around these common anchors. My concern here is that there is no absolute universal standard that we are progressively realising. We need the space for discussion about what the common anchor and core is that different societies care about.
That is what is being progressively realised. We are learning as we go what is common to all societies about protecting freedom of expression, and how certain elements will be decided differently, depending on time, place and context. In Thailand, there is a lèse-majesté law that applies in very different ways from similar laws in Europe in the past. It has, some argue, a fair bit of democratic support. There are variations here. That is not to say that freedom of expression is lost in this context. There is a core, but certain amounts of variation are admitted in different contexts.
I am optimistic, but we need to protect the space to develop shared understandings on the common core of this right. That is more important than ever, given the interdependence of the world that we live in.
Dr Roxane Farmanfarmaian: It is very apt that you would notice that we use the concept of culture in the singular. I appreciate that very much. I come to that from a very different angle from the one Sharath has just set out. We are, unexpectedly perhaps, all part of a single culture being established around the world by a handful of enormously powerful companies, all operating on a similar agenda, which is, as we know, that our attention is the product.
We all share that. Our response to it at the moment has been multicultural and multinational. We are already looking at this with a system view, because nations are coming out with laws of protection, enablement and embrace in quite different ways. They reflect our different national interpretations of not only what freedom of speech is but where it fits politically into our countries.
In my mind, that is developing a body of new but common law. Personally, I feel there is a whole further level that has to be addressed, which is global. It will take time; it will not happen quickly. In some sense, it has to happen. The main structures of those protections will draw on the possibilities of AI. There will be global constraints on major harms, including violence and the kinds of disruption that misinformation can bring to our societies. That has to happen at the global level, very simply because these are global companies.
You have a company with 4 billion customers. That is nearly three times the size of China; that is China plus India, most of Asia and the United States. This needs to be looked at as a patchwork of national approaches but also as a global protection system, because the protections are imperative. Otherwise we are going to head down the rabbit hole of going from an information age to a perpetual misinformation age, and that to me is frightening.
I am not sure I am quite as optimistic as Sharath. We are in such a period of transition that something like a world information organisation has not taken shape yet, but I would not be surprised if it does.
The Chair: Let us move on, then, to the responsibility and regulation of platforms.
Q68 Lord McInnes of Kilwinning: That brings this topic very nicely forward. Roxane, you mentioned ethics and etiquette; Sharath, you mentioned a common core. The application of both requires, as you have said, the platforms at this time to be understanding of them and to apply them. I guess that the platforms would suggest that their current moderation looked at universal values. Do you believe that the values in their moderation are applied universally, or are they sometimes not taking account of religious and cultural difference when moderation is applied both by AI and by humans?
Dr Sharath Srinivasan: The short answer is that platforms are doing reasonably poorly. It is not just a question of how they are accommodating religious and cultural difference, and we can come to that. It is important to remember the starting point: “This is not our business”. Especially in the context of content on Facebook, but even Google when it came to search and YouTube, there was a catching‑up logic rather than a forward position of how they wanted to deal with this.
The first hire at Facebook is quoted in 2007 as saying that the policy was this: “If something feels wrong in your gut, pull it down”. That was content moderation, and it expanded from there. In the context of Google, there was a sense that the aim was to not break the law and to ensure that the business could keep flourishing. The main concern was that in specific contexts, such as Turkey, there had been some pushback from the state or from courts on particular content, and this was a risk to business. That is not to say there are no values here at all, but the logic has been to use technical and business solutions to address risks faced by the company.
From there, it has moved a long way. There is no doubt that we have seen much more elaborate things now. The implementation standards at Facebook are much more significant. There is a board of oversight, which is increasingly looking at these sorts of concerns. The logic that is operating is one of fundamental catch‑up, and it is driven by certain efficiency logics rather than a comprehensive value‑driven approach. You see that especially when it comes to minorities or groups who have suffered more egregiously from the way in which the logics of search or content work.
If you take the extreme version, as Roxane just said, this is a “sovereign entity”, whose founder has referred to it as being more like a Government than a corporation, and which has, in that sense, over 4 billion constituents. If you go to Kenya, you can look at Kikuyu or Luo content on Facebook. You can be guaranteed that content moderation there is a far thinner affair, in terms of the response. They have focused on it only very recently, in the last couple of years. They miss lots of things, but even more so when you are talking about fast-moving urban slang languages, such as Sheng in Nairobi. Yet these are the languages in which various forms of contentious and sometimes acrimonious politics take place. That can involve the targeting of minorities or other ethnic groups.
It is not to say that this is casually disregarded, but the size of that challenge compared to the business logic that is driving content moderation is vastly different. When I think about the question of content moderation, I just want to remind us of the more pernicious logic that is going on here. The primary aim is to keep people’s attention and to keep them engaged with these platforms. It is from there that their activity produces data that is valuable to productise and sell for advertising, but also for other purposes. In that sense, content moderation is about doing as much as needs to be done to protect this valuable commercial so‑called public square, which is actually a private square of exchange.
It is playing a role that is a necessary minimum, rather than driven by a set of core values about the good society, which another public forum might consider adequate.
Dr Roxane Farmanfarmaian: How much these companies, particularly Facebook and Google, are moderating content across the world is very difficult for us to tell, but it is often considerably less well informed in places where they play a much larger role, in many ways, than they do here. I see, for example, Facebook playing a significant role in elections or in the projection of information across civil wars, such as in Syria or Iraq. There is no sense of a significant monitoring role or capability on the part of Facebook or the other big platforms. As you discussed in a previous session, Facebook has a governance board that has reached out and said, “We cannot do this. We have to have help”. They are right. Trying to find the location of that help is what I hope to contribute to this discussion.
In the same way as one can walk into a conversation where one’s cultural awareness is too low and get into a fix, as I would suggest Salman Rushdie did, paying significant consequences as a result, that is exactly what is happening now to the platforms. They have been driven by a mercantilist agenda, and suddenly they are hitting the consequences of the public square. We are all scrambling at this point to figure out how this works.
In the same framework as what Sharath was saying, it is happening so quickly. A year ago, we did not know about TikTok. How do we include something such as that in there? I was reading yesterday that China now has its own version. That is going to do something else, and next week there is going to be something else. None of those quite fits with the abuse that happens on social media.
All this is hitting, and yet it is clear that every one of the players at this point is saying, “Something has to be understood as part of this; otherwise, our social fabric is at risk”. That is putting it very starkly, I know. In effect, it all comes back down to culture. Our culture is at risk, in terms of our social interaction, and our sense of what is true and what can be manipulated in the world that we understand.
Q69 Viscount Colville of Culross: Sharath, you talked about the need to build a new political culture. We have talked about this internationally, but I am interested in what we are doing in this country. In the Government’s online harms response, they have steered clear of new rules for politicians and political discourse. Our colleagues on the Democracy and Digital Technologies Committee said about the Government’s response that there is no duty of care towards democracy itself.
Bearing in mind that politicians very often represent huge swathes of people, should more be done to ensure that the platforms work to defend democracy and, within it, free speech? How can they do that?
Dr Sharath Srinivasan: I am wary of how much we can expect from platforms. Partly going back to my previous response on content moderation, there are limits to what drives their logics of operation and their desire to get involved here. I was surprised that the head of public affairs for Facebook, who will be well known to many of you, when asked, “What do we do now?”, in terms of the new Biden Administration, said, “We will adjust to this new environment”. Adjusting to a new environment is a survival strategy; this is about risk management.
The question that has been on everyone’s mind is why these platforms responded when they did, given that more egregious things had been said on their platforms in the many months and years leading up to the events for which they banned people for life, including the former President. That is the sober reality of dealing with these sorts of corporations. They are going to play to the best of their ability to manage risk, and that is all.
The bigger question for me, and perhaps we will go further in this direction during the discussion, is understanding what has gone wrong, at more fundamental structural levels, that means these corporations with these logics dominate the public square, and what can be done about that. There are a few things that can be done, but they lie less with the sorts of responsibilities platforms can have and more with understanding how, for example, as individual citizens, we have increasingly lost sovereignty over our data or identity, and how we can reclaim some of that. We have to see the oligopolistic practices and capital power of tech companies. Once you have the ability to grow very large, you can suck up all the plurality coming from below through innovation, because you have the ability to buy it off.
There are lots of things that can speak to containing or correcting the worst excesses that we are seeing with some of these platforms. That is not to say that the platforms or individual corporations are, by dint of who they are, problematic. There is a certain aspect of this about how digital technologies, and how businesses built on them, work. We need to reset, in a sense, the rules that operate for these sorts of businesses.
Viscount Colville of Culross: Roxane, you talked about Facebook’s intervention in the American elections. What should be done to regulate the platforms and to ensure that they encourage free speech and democracy?
Dr Roxane Farmanfarmaian: I go back to something Sharath said. A few years ago, they said, “It is not our problem. We cannot do it anyway. We do not know how”. Now we know that they can do it. Every day, they have huge teams constraining, checking and moderating. They now increasingly feel as though they have a stake in doing so. We saw, pre election and post election, a very interesting set of cases. We are coming to understand that these companies, as has been mentioned, are increasingly developing the characteristic of utilities. Twitter can say, “There will be no more President Trump on our platform”, but we cannot so easily say, “There will be no more Twitter”. The Chinese can, but they usually develop a similar service.
Regulation is still relatively in its infancy, but the cases that are coming up now are suddenly either going to court or coming before parliamentarians and Governments. The Google case in Australia is very interesting. This is a spat over what it carries, as if it were a lorry flatbed, and Google is saying, “We do not have to pay extra fees for the media people read through us, because that is not really us”. It has now reached the point where Google is threatening to withdraw from an entire country.
That is a very interesting dynamic, and luckily it is one we can watch from the outside. As with significant crime in the past, the Australian Government can up their taxes and say, “Okay, at the moment, we will deal with it this way”, but eventually it comes down to a really important point: how fundamental a role do these companies now play in our society and, therefore, what kind of legal control do we have?
In the United States, there is a law that says what they carry is not their affair, but that is a US law. It does not apply anywhere else. In fact, the US is now rethinking that law. Again, it overlaps slightly with the whole idea of privacy but also the culture of where these companies fit in society. All this has to be rethought. These companies should not, in the way they are now, be given a free pass to say, “If I do not like your laws, I am just going to pull”. There should be something in place for that, such as a fine. There has to be a consequence.
We are not even recognising the possibilities that lie ahead until something such as this comes up. We need to look at the courts and these cases before we have the basis of a real regulatory framework. Post election, the US is facing that dilemma, which is similar to this one: how does a liberalised society constrain free speech and still be a liberal society writ large?
Viscount Colville of Culross: It is interesting. You talk about these platforms as utilities; they are crucial to all of us for our existence. We have looked at that in previous inquiries. However, if you get Google Australia saying, “We are going to pull our search from Australia”, because it is a utility, that brings the country to a halt. Does there need to be a very concerted attempt right now, not just to wait for the courts, but for legislators to intervene and face off against these companies?
Dr Roxane Farmanfarmaian: There are two sides to this. We have run into another part of this with Huawei and our electrical and technological grid. We have allowed private enterprise to take over too much of what needs to be construed as essential to our states going forward. That needs an enormous amount of investment, which the private sector simply is not going to produce, to ensure that we have the basis for our society and a multiplicity of options so that, if Google withdraws, we have other smaller utilities that are perhaps Australian.
That needs to be put in place now. We get back, in a sense, to the competition side of it, which we have to encourage. Likewise, we need pots of significant funding for the big picture, for the AI that is required on a global level, to ensure that we start having transparency. We cannot do that without AI, and that takes investment. We can do that, but we need to establish that as part of law.
Viscount Colville of Culross: That was fascinating. Thank you so much.
Q70 Lord Vaizey of Didcot: I wanted to ask about behaviour online and how you might change it. I found very interesting what was said earlier about different cultural approaches to issues such as human rights, and what is acceptable both within the law and culturally. As part of efforts potentially to regulate platforms and content, one of those will be, as it were, through the front door in ensuring that offensive content is taken offline, but people clearly find other avenues to express their views.
I always think about the internet as being in the first flush of people engaging with it. It is a cliché that people will say things online that they would not dream of saying to someone in the pub. Indeed, it may amaze some people, but behaviour in pubs has got a lot better since the 16th century.
Is there any mileage at all in government banging on about digital citizenship lessons and trying to engage with kids in schools? Indeed, is there any room at all to engage with adults about what is acceptable behaviour online? Could we try having a sustained public education campaign?
Dr Roxane Farmanfarmaian: Further education is always a good thing. I was on Twitter the other day—yes, I was—and it basically showed me an advertisement with somebody’s face being replaced with somebody else’s. I thought, “Oh my goodness. We need further guidance, to be aware of what is there and how quickly these capacities are changing”.
On the other hand, etiquette offline and online is the same. If you are more polite at the pub and less polite online, are you less polite at home? It is a form of understanding not only how society works but how relationships work. There are organisations, such as the Ethical Journalism Network, that do extremely good work in ensuring that those who, according to their profession, are responsible for presenting news, opinion and media to a large group of people are more aware of the ethics.
Containing that on a private basis has to start earlier, at school. It is something that comes from the practice of living, rather than something that comes in later as an adult. Sharath, you may have different views.
Dr Sharath Srinivasan: It is an important part of any solution. I would say two things. First, it has to meet, less than half way, the change that comes from more structural efforts; that is to say, the burden should not fall on the individual’s behaviour or etiquette as much as we might like it to, even though that would be easier to regulate. Secondly, it should be less about disciplining that behaviour than about empowering the ability to think sensibly and act well in a digital age. That is a very different thing from trying to police or contain bad behaviour online.
I want to use this moment to stop and ask, when we focus on the individual, what we are missing. We have to start from a different point and assume that people are, in a sense, being manipulated all the time by these hyperdominant digital platforms and cajoled into handing over their attention, which usually comes from the dopamine effects of being adored and loved, or from scandalous information that attracts your attention. Fake news attracts six times more attention than truthful news.
The attention is the economy, but in a more pernicious sense. The attention matters, because whatever you are doing while you are giving attention is data that is valuable not only to run ads, but to predict how you and people like you act in the world. The more data is collected, the more you can predict how people are going to act, and the more you can direct strategic communications towards their action. That can be for advertising, but it does not just have to be for advertising.
What is for sale is the ability to act on a population at a level that is underneath their consciousness, because there are all these small micro‑tactical ways in which you can know, through the evidence of the data from the different experiments you run, that you can affect people’s behaviour and action. The data tells you that this is what makes people tick: if they like this singer, they are probably going to be attracted to something else.
That can happen in the social space, or the advertising and consumer marketing space, but it can also happen in the political space or the movement politics space. While it would be wonderful if the problem was etiquette, if what is going on is the most profound assault on the human physical brain, in its unevolved form over many millennia, due to this rapid change in technology’s ability to act on that capacity for agency and action, we have to be super‑careful about thinking that the etiquette side of it is going to be the answer.
What I am describing is quite dystopian. I felt quite sheepish expressing that kind of dystopianism in this space, but, increasingly, this view is coming from within the industry and the technology space, and from those who have studied it for a long time. The logic of its operation is to undermine the freedom aspect I have talked about, which should then lead to expression of a certain kind. It is undermining the freedom aspect.
If we are going to police and educate on the expression and behavioural aspects, we have to know that the capacity to act independently is still protected. Then we can afford to be concerned with schooling that capacity in the right direction. What disturbs me is how much that is being undermined by the way these technologies and companies work.
Q71 The Lord Bishop of Worcester: Thank you both very, very much indeed for a really helpful session and for helping us to engage with all this. I will begin by saying that, Sharath, you should not feel bad about expressing what you described as a dystopian approach. Coming from the religious tradition I do, I would say that St Augustine would have recognised very well what you are describing. He would have called it original sin, the fact that people are suggestible and that there are all sorts of ways in which we can be affected.
You have mentioned the limits of what etiquette and ethics can provide, so let us turn to what can be done from a statutory point of view. What would you like to see as the outcome of this inquiry, given that you have painted in some ways quite a bleak reality? As you described it, Roxane, the power of these platforms is enormous. They have sovereignty over data and identity, and they have capital power that sucks up the plurality of other providers.
Recognising that what can be done at a national level might be quite limited, what would be the best outcome for this inquiry? What might be done from a regulatory perspective in this country?
Dr Roxane Farmanfarmaian: I like your reference to St Augustine. This discussion has partly been about where a sense of responsibility lies and how we can build that in a new setting. In a sense, it goes back to Baroness Bull’s question about what kind of co‑operation we can expect by coming together with these different cultures.
From a statutory perspective, I would like to see a greater focus on ensuring that the possibility was there for a multiplicity of new companies or new inventions to come up, so that we always have more of a buffer of alternatives to the companies that are out there and the services that are offered. As a media person, I have come to appreciate enormously that one aspect of responsibility comes from knowing what is going on in your own environment and your own community.
One of the things the United States is suffering from, more than here, is the loss of the local news outlet. That has to be recognised as what grounds people in their homes and their home regions. That is the ground upon which we then more securely move into new realms and we can feel the confidence from that. We need to keep an eye on those, and possibly to look more carefully into the older pillars that we understood our society to be built on, to ensure that those go ahead into the newer environment that we are looking at.
Another thing I would like to come out of this is a sense that Britain has a responsibility to contribute to a more global conversation and the eventual development of a system of global statutory rules, which will need to be at one level of analysis in all of this, so that, say 10 years down the line, we have a structure at least of processes that we all agree on, if not the actual definitions of harms or cultural interpretations at the national level.
Dr Sharath Srinivasan: The most important thing that can come out of this inquiry or these processes is a sense of true, bipartisan or inclusive responsibility. As the protector of representative democracy, Parliament has to have in its grip the issue of how to protect a liberal democratic society in a digital age. This is very broad, but it is not about these individual pieces. It is about making sense of a new architecture and political culture for an age in which digital technologies are so central to the way we live.
In that unified, bigger‑picture sense, the penny is finally dropping that that is what is at stake. It is not privacy; it is not surveillance; it is not consumer protection. It is the entirety of the thing that we call democratic life. There is no public square right now that is being protected, and our representatives in their various forms need to take hold of that. That is a prime concern.
More practically, this Committee can hint towards two aspects of that, not specifically on freedom of expression, but close to it. Yes, there is a capital power that needs to be unwound a bit, because it is itself very dangerous. Of course, the break‑up of Standard Oil, as Shoshana Zuboff mentioned in an op‑ed recently, did not lead to wonderful elysian fields; it led to ExxonMobil and a few other entities. She said that the big issue we should have thought about then was climate change. It is not about the breakup of the monopolies; it is the fundamental commodity.
Going to my second point, which is not just about the competition issue: in the past, we have decided that people cannot be commodities. We have changed what business is allowed to be about. Right now, the commodity is again people: their feelings, their actions, everything about them. That is the commodity, and we can change that. Why not change the sovereignty of people’s individual citizenship in a digital age? The state can do that. Then businesses have to work out what the next business is. They will; businesses will be businesses.
But it is not Google and it is not Facebook. The logic of tech capitalism needs a decisive move that shifts it in the right direction, because what is at stake is the equivalent of what climate change was when we did not see it back in 1919. What is at stake is a fundamental transformation in society that is not sustainable. I hope somewhere in there is something helpful for where this inquiry can land.
The Lord Bishop of Worcester: Absolutely, thank you very much indeed for widening the perspective and ending on a hopeful note. Can you point us to any international examples of regulation relating to freedom of expression that we should either emulate or avoid?
Dr Sharath Srinivasan: Unfortunately, the short answer on this issue, and in terms of the digital aspects, is that nothing stands out very strongly. There is a lot of movement in a number of quarters at the same time, as you will know. This is an opportunity to set a standard rather than to follow one, and I hope it will be taken.
Dr Roxane Farmanfarmaian: We have come to think of it as the internet of things. In a way, I would suggest that personal expression is the opposite to that. If we had a new political framework or culture of expression and the personal, that would be a different way to look at it. I agree with Sharath: there is not really anything there, but there are green shoots of awareness. It is a big project, but we are heading in the right direction.
The Chair: Witnesses, thank you very much indeed. It has been a really interesting session and very useful for the Committee. Do continue following us, if you can. Your further thoughts would be appreciated and welcome throughout the inquiry. It is very good of you to give up your time to be with us this afternoon. Thank you both very much indeed.