
 

Home Affairs Committee

Oral evidence: Hate Crime and its violent consequences, HC 683

Tuesday 20 November 2018

Ordered by the House of Commons to be published on 20 November 2018.


Members present: Yvette Cooper (Chair); Rehman Chishti; Sir Christopher Chope; Stephen Doughty; Kate Green; Stuart C. McDonald; Alex Norris; John Woodcock.

 

Questions 824–863

 

Witnesses

I: Chloe Colliver, Research Manager, Institute for Strategic Dialogue, Jacob Davey, Research Manager, Institute for Strategic Dialogue, and Alex Krasodomski-Jones, Research Lead, Demos.

 


Examination of witnesses

Witnesses: Chloe Colliver, Jacob Davey and Alex Krasodomski-Jones.

 

Q824       Chair: I welcome our panel to this session of the Home Affairs Select Committee inquiry into hate crime, in which we are also particularly looking at far-right extremism. Could I ask each of you to introduce yourselves and also the organisation that you represent and tell us the recent work you have been doing in this field?

Alex Krasodomski-Jones: My name is Alex. I head up digital research at the think tank Demos. Recent work has looked at Russian influence operations, primarily through social media, as well as some bits and pieces looking at Islamophobia online and hate crime in London.

Chloe Colliver: My name is Chloe Colliver. I am a research manager at the Institute for Strategic Dialogue here in London. Most of my work covers the work stream we have on disinformation and extremism, especially at the moment around elections. We have just done a couple of pieces, one on the Swedish election, one on the Bavarian election, the German federal election last year, and also broader work looking at digital policy and digital research.

Jacob Davey: My name is Jacob Davey. I am a research manager also at the Institute for Strategic Dialogue. My focus in my research is on the far right, both internationally and in the UK. Recent work has been focusing on the communications tactics of the far right to broadcast their ideology and then also some work on online interventions to counter far-right extremism.

Q825       Chair: Thank you very much. Could you each give us an overview based on your recent work? What is your assessment of the impact of international factors, whether it be the international operation of the far right, whether it be the impact of hostile state activity and so on, on the far right here in the UK? Mr Davey, would you like to start?

Jacob Davey: Yes. Internationalisation of the far right is a key trend that we have seen in the communication strategy of global far-right activists. What we are seeing increasingly is an international group of grassroots activists who co-ordinate their activity to broadcast and amplify far-right talking points. An example of this would be around the “Free Tommy” campaign, which was around Stephen Yaxley-Lennon's arrest. What we saw there was that 47% of tweets amplifying that campaign were coming out of the US.

Chair: What percentage did you say?

Jacob Davey: 47%, and I can send those figures afterwards. What this represents is to some degree the building of an international consensus around far-right activists. We are also seeing the building of a common tactical playbook, for example the use of amplification of certain hashtags across borders to try to broaden the support for particular talking points.

Chloe Colliver: Most of my work recently has been looking at elections, as I mentioned. The international support from quite fringe extreme far-right groups online for more mainstream populist right-wing political parties, and the success of those parties in recent elections across Europe, has given the more extreme far right a greater sense of confidence, potentially given it a greater sense of its own scale and potency, and lent it what it feels is a sense of legitimacy for its viewpoints in the mainstream.

I would say that also we have seen a shift in the narrative strategy of the far right—that includes the UK far right—that focuses on culturalist issues of white identity rather than the traditional nationalist and more racial and racialist narratives we saw maybe 10 or 20 years ago within these groups. I think that is a more unifying idea and it gives this cross-border agency to different groups that are trying to support one another in their political agendas.

Q826       Chair: What is the difference, compared to the kinds of things that they might have been pursuing, say, 10 years ago?

Chloe Colliver: Part of it is a sort of brand cleansing that is happening across borders, which I think is in part a response to both technology company and Government attempts to regulate and moderate content on technology platforms. It is knowing how to play the game of staying on public platforms, preaching to broader and wider audiences by using language that is not violent and is not necessarily directed at racial minorities, but instead holding themselves up as if they are the defenders of the liberal order. They are defending white liberal democracy, and that is a more positive narrative for engaging new audiences than a purely negative, violent or derogatory narrative, building up a momentum and a movement that can co-opt positive narratives and terminology and use them for their more extreme agendas.

Alex Krasodomski-Jones: Yes, I agree very much with what we heard before, the idea that internationally we can come up with a series of messages that apply to a national context. When we look at this stuff online, very much the mechanisms that we see in terms of the narratives that are taking place in one country can be used to inform a view of what is happening domestically. If there is a terror attack in the UK, as we saw in our Russia dataset, that is used as ammunition for far-right groups elsewhere in the world. There is a reflection on the situation within their own countries.

That idea of cross-border, cross-movement, cross-pollination or sharing of narratives applies also to other things. Content is the most obvious one. News is a secondary obvious one in the example I just gave you, memes, ways of getting the message out, even manpower. We saw in some far-right forums after the election of Trump that now we were going to “make France great again”, we were going to “make Germany great again”, we were going to “make Britain great again”, so there is very much cross-border co-operation online, largely facilitated by a huge number of different online platforms.

Q827       Chair: What is your assessment of the reach, the scale of impact, the number of people who might be seeing far-right social media posts at any time?

Chloe Colliver: I think that these groups have learnt how to use amplification tools provided to them by technology platforms very adeptly. For example, in our analysis of the Swedish election we were analysing the international far right's campaign to smear Sweden's reputation and use it as an example of the failure of liberal democracies through increased migration. Within the top 100 public Facebook groups that mentioned the word “Sweden”, the first political post I came across was from Tommy Robinson, which was disinformation about Sweden and supportive of a far-right agenda. So they have adeptly been able to co-opt narratives and to reach more people. They have done that partly also through taking on mainstream political discussions and narratives for themselves, reaching large audiences.

Jacob Davey: I would say from the approaches that we use at ISD in terms of mapping conversation online, it is very difficult to get a comprehensive figure that will be able to say that is the amount of people, but if you can dip into particular case studies and particular examples you can see that there are often tens of thousands of people who are actively engaging in this.

For a couple of examples of work that was done, looking at the free Tommy hashtag, we saw tens of thousands of individuals engaging with this. In another piece of work that we did looking at public Facebook pages leading up to delivery of interventions to people supporting violent, dehumanising, far-right talking points, we found 40,000 individuals who were engaging in these ecosystems. Likewise, you could get hundreds of thousands of tweets over a year-long period that target anti-Muslim sentiment, for example. It is difficult to get a big picture but if you pick a particular example you can see that it is a very large scale.

Alex Krasodomski-Jones: I do not think we have a very good answer here beyond saying it is hard to estimate as a total and it is getting harder. In the wake of data-privacy scandals that have affected the large social media companies, with a few exceptions the response has been to make accessing this data at scale much more difficult, allegedly in the name of protecting the privacy of their user bases. The kinds of work that we found ourselves able to do four years ago we can no longer do.

The second point I would make is around what reach means in this context. If 10 years ago we had talked about 1 million people belonging to a far-right group in the UK we would have been horrified, but that is very different to what it means to be part of a Facebook group with 1 million members in it. I do not know if you can call it armchair activism, but the idea that you can take part and play a role within ultimately quite an extreme organisation from the comfort of your home without ever really committing to marching or going out on the streets is another change in how these things are taking place.

Jacob Davey: Absolutely, yes. There is essentially a lower barrier to entry and lower social risks. Whereas you might have faced some sort of ostracisation and alienation from your family or from your peer network or from your colleagues if you were seen to be getting involved in this on the street or with a traditional party, now you can just activate from your home.

Another tactic, just to raise awareness, is crowdfunding. By donating money to individuals you can become to some degree engaged in the financing of far-right activity with very limited risk to yourself. We saw that with the Defend Europe mission last summer where the identitarians chartered a vessel with the aim of disrupting migrant flow. What we saw was that was a very international effort in terms of raising money for these groups, often quite small donations, £1,000 or £500 here or there, which again lowers that barrier for entry.

Q828       Chair: How far do you see what is happening as being people with extreme or racist views or far-right views talking to themselves, talking to each other in a way they may not have been able to do before but previously held those views, and how far do you see this as a process of radicalisation or grooming other people or spreading extremist views?

Alex Krasodomski-Jones: I will pass over to you in a moment, but through work that we have done, and that I have read ISD itself has done, looking at the tactics of some of these larger Facebook groups and other groups, it is quite clear that there is an attempt here at outreach. Social platforms are built for this kind of thing: building an audience, growing your following there. All these metrics that were traditionally in the hands of advertising companies are now being used as measures of political capital.

At one point Britain First, which I think has now been removed from Facebook, was larger than all the parties in the UK combined because of its Facebook presence. How did it get there? It was sharing content that was much more palatable. Anything around Armistice Day. Animal cruelty was another big topic, stuff that is very easy to access that then perhaps sends you further down the path of this. The fact that this content was out there and the fact, as we have already heard from Jacob and Chloe, that there is an attempt to tone down the language to focus on a more palatable version of extremism, to me can only be evidence for an attempt at outreach.

Chloe Colliver: Yes, I would say it is definitely both and there are two parallel structures, almost, online that are being used for those two different things. In terms of the recruitment side of new audiences, the larger, more traditional social media platforms are still a space in which these activists need to have a presence in order to reach new audiences. There is also a radicalisation ecosystem, if you will. Once people get drawn into these movements or these ideas, they are taken on to other kinds of platforms that often have been purpose built by these movements but sometimes have been co-opted or exploited or hijacked for the purposes of far-right mobilisation and co-ordination. I think those filter bubbles are often where that more extreme radicalisation is taking place. You are only consuming media from certain fringe and often disinforming news media sites without any kind of traditional professional media. Your views become legitimised and you think that there is a sense of scale in terms of the people who agree with you.

I would also say that these things do not happen in isolation from continuing offline radicalisation structures and opportunities. Marches, protests and latching on to other cultural aspects that are now manipulated by the far right, for example football clubs or support clubs, is also an important part of the threat picture that needs to be looked at, especially in terms of offline harms or potential violence.

Jacob Davey: I am in absolute agreement with what has just been said. Perhaps also if you think about the role individuals take as they travel through this constellation of different social media platforms—maybe not just on Facebook, maybe co-ordinating on encrypted chat channels like WhatsApp—people take on different roles. You can then take on the role of a recruiter to go out and find other individuals. You can take on the role of a content creator, so you can sit there and produce hundreds of memes that can then go out and flood these platforms. It is a range of different activity depending on where you are operating from.

What is quite concerning to us is this use of not particularly controversial issues by far-right individuals that can potentially be used to on-board people. If you look at Stephen Lennon's recent communications campaigns, he has been campaigning under free speech, not a particularly controversial issue, or soldiers' welfare with his “I am Soldier X” campaign. These are things that quite a lot of people can get behind. You are not bringing people on immediately with the hateful content or the hateful narratives; you are bringing people on with potentially these softer narratives that can then take them into an ecosystem where they get exposed to more radical points.

Alex Krasodomski-Jones: To add to that, it is important to remember they are not producing these softer narratives. There are plenty of other people doing that for them. A huge part of this tactic is the amplification of sources that are mainstream news. That could be selectively or it could just be the amplification of a given right-leaning media outlet, but very often the content is there for them. By presenting it in a selective way or over-representing one side or another, they have all the material they need out in the public domain, as it stands.

Q829       Stephen Doughty: We have spent a lot of time in this committee looking at the big three or big four in terms of Twitter and Facebook and YouTube. In my view, based on the evidence they gave us, they are grossly underestimating the threat that is posed by this and not putting enough resources in. That is very clear to me.

Given what you said about amplification and other means by which this information is being spread, particularly by the far right, to what extent do you think that we, Government and other regulatory bodies need to pay a lot more attention to those smaller channels in which it is being spread? I have a report here from the Counter Extremism Project and it is talking about the use of Vimeo, VideoPress, archive.org, Bit Che, Dailymotion, VK, Spreadshirt, CafePress, WordPress. I have even been told that TripAdvisor is being used to spread links and signpost to far-right material. I know of a local blog called the Penarth Daily News, the owner of which has some fairly strong views and is allowing content to be shared on there from the BNP: homophobic, far-right hate content, online radio and podcasts, Chatango. To what extent do you think the authorities have grasped that wider, diffuse challenge that exists?

Chloe Colliver: I think they are starting to grasp it now but I think that the questions around response are much harder when it comes to these kinds of platforms. Many of them do not have their own community standards or were set up with the purpose of being able to spread this kind of content, whether for libertarian, free-speech values or for a more extreme agenda. For those kinds of platforms it would be very difficult for a regulator to engage with a platform in order to try to encourage that kind of content removal, especially if it is not illegal content but if it is content that usually would meet the terms of service or the community standards of a larger platform.

What we have seen on Gab recently is an interesting example. In the US, after the Pittsburgh shooting, Gab was pushed into enforcing its terms of service more consistently than it had previously, under political, technological and media pressure. The main prompt for that was its hosting provider removing its services in protest against the content that is allowed on the platform. We have seen anecdotes of users saying, “Gab is now purging all of our accounts, it is worse than Twitter, we should just go back to Twitter, it is not worth it”. There are kinds of pressure, maybe non-regulatory pressure, that would be interesting to explore in terms of these other platforms that are probably not likely to respond in terms of co-operating with Governments.

Q830       Stephen Doughty: In terms of hosting and in terms of providing cloud space or whatever?

Chloe Colliver: Yes, and I think there are serious rights issues to take into consideration with those questions but it has shown in some instances that it can help to encourage some of these smaller platforms to better enforce existing terms of service around this kind of content.

Q831       Stephen Doughty: Do you think that something like TripAdvisor is even aware that their review sections are being used surreptitiously to promote content?

Chloe Colliver: Possibly not. There are a few initiatives that are starting to try to help some of these kinds of companies that were not set up in any way for this kind of communications purpose. Tech Against Terrorism is one of them, which deals a lot more with illegal terrorist content but it supports small platforms like that. It has Just Paste It in its network, it has a number of the platforms that you mentioned. It tries to provide them—they often do not have the resources, obviously, or the staff or the knowledge—with basic-level resources in terms of definitions, in terms of proscription lists, in terms of frameworks for terms of service that they could implement to try to encourage at least a low level of activity to try to prevent these platforms being exploited in that way.

I think there is more that could be done, certainly, to provide small companies with at least a low level of support on this, and potentially technological sharing. We see this hashing technology that was developed with the Global Internet Forum to Counter Terrorism now being used across over eight different companies, including some smaller companies, sharing expertise and technology that enables them, as smaller companies, to remove flagged terrorist content at speed, so I think there might be interesting solutions to explore there for smaller technology platforms.

Q832       Stephen Doughty: To what extent do you think the groups out there and the individuals trying to promote this content are focused on broadcast radicalisation through all these different methods as compared to, for example, signposting individuals into dark-web messaging services, encrypted messaging and so on, where they are able to perhaps share clearly illegal content or much more direct calls to action that they would not post even on some of these sites?

Chloe Colliver: It is very difficult to get a sense of the scale of the two tactics against one another. I do think there will always be a strategic need to do the former for these kinds of groups. Any further activity that the larger social media companies can continue to take to discourage any malicious or malign activity or information campaigns on their platforms will continue to help to stem that recruitment side of the problem, but I think it is very difficult to ask those large technology companies to deal with the entire ecosystem of platforms that are being directed to from that content on their platforms.

Q833       Stephen Doughty: Specifically on the far-right organisations, perhaps this is one for Jacob. To what extent are you seeing what we certainly saw around some of the Islamist groups, essentially rebranding, rebadging to avoid the proscription orders that exist? For example, I was aware of Islamist groups operating locally in my area that were linked to proscribed organisations but had simply rebranded themselves under other names. I raised this with Ministers at the time and we were struggling to keep up. To what extent are far-right organisations doing that as well?

Jacob Davey: Clearly we have seen that with the rebranding of National Action into Scottish Dawn and NS131.

Stephen Doughty: Sorry, can you say that again?

Jacob Davey: We have seen that clearly with National Action and their rebranding into Scottish Dawn and NS131, and they were then proscribed. Something that has come out of the recent discussion was the possibility that individuals who are on the periphery of National Action are now engaging or have been engaging with the identitarians in the UK, so we have seen that. It is worth flagging here, though, that the groups that proscription is focusing on are fairly small in number. It is a few hundred people and in terms of the scale of the issue it probably exists within that slightly less radical description, where individuals and groups will not be particularly calling for violence or calling for militant activity, for example. That street-protest movement is where the greater numbers exist.

Q834       Stephen Doughty: Therefore, they are sailing as close to the wind as they can get without falling into that?

Jacob Davey: We have seen, for example, there is an awareness of what you can say clearly legally but also what you can say within the parameters of particular social media platforms around making sure that you cannot get kicked off. People are actively talking about this, saying, “Okay, you are allowed to go this far” without facing action either from the platforms or from law enforcement, for example. There is an awareness that you can tailor your narratives to avoid that sort of disruption to your activity.

Q835       Stephen Doughty: One last question. You mentioned so-called veterans campaigning that had been going on to try to drag people in. To what extent do you think that our armed forces personnel and even the police, vulnerable individuals within both those, are being targeted by far-right groups? I ask this in light of the case of Mikko Vehvilainen—which obviously reporting restrictions have been lifted on now—a soldier serving in the British Army who is Finnish and was clearly linked to all sorts of Nordic extremist far-right organisations and has now been prosecuted. To what extent do you think they are trying to deliberately target those in the Army and law enforcement community?

Jacob Davey: I think there have been active and concerted efforts to target and recruit individuals in the armed forces, again across that whole spectrum of groups operating on the far right at the fringe, but also some of Stephen Lennon's recent campaigning to try to on-board soldiers. It is also worth situating that: it is a trend we have seen globally as well. In Germany, for example, there are currently 400 active investigations into far-right members. This is something that is a global trend on the far right.

Q836       Stephen Doughty: Are you aware of anything the British Army is doing to, for example, alert particularly new recruits and others, both at the vetting level but also when recruits are in early training to be aware of groups that are trying to influence them or others?

Jacob Davey: I am not aware of anything.

Q837       Rehman Chishti: How do you address online material that demonstrates either sympathy to far-right ideology or extremist ideology that is not regarded as illegal or does not cross a threshold of what is deemed appropriate or not appropriate on social media outlets?

Jacob Davey: That is the £1 million question to some degree. It is difficult. There is a grey area of content and there is a range of different responses that are advocated. Some people suggest a more heavy-handed takedown action from platforms, which I think we have seen potentially has a range of unintended consequences. That could be unintended flagging of non-radical content or driving individuals into these ecosystems where they cannot be engaged with.

I think there are other things that could be done, for example, to limit the impact of these groups. Digital education, for example, among young people, raising their awareness of the impacts and activities of propagandists and building resilience against that. There has been a significant amount of work in the counter-narrative space that is potentially still valuable. Likewise, there is interventions work and online interventions work: active outreach to individuals to start a conversation to try to reclaim some of that ground. These are alternative tactics outside of legislation that we have seen, and I think there needs to be more investment in them.

Q838       Rehman Chishti: When the Assistant Commissioner, Mr Basu, gave evidence here, he said it is a difficult judgment to strike between free speech and the threshold at which one breaks the law. Is the legislation correct where it is at the moment or is it not?

Jacob Davey: Around legislation I would say that I am a researcher on the threat and maybe not the best-placed person to answer that.

Q839       Rehman Chishti: Let me ask you the question on threat again. Earlier you said that looking at open pages on Facebook you noticed a lot of anti-Muslim hatred. I come from a Muslim background and I get material where individuals sometimes do not put their picture up or they put up, “Useless MP, never in the constituency”. All my constituents would know to the contrary. But when you then click on that person's page, it is filled with anti-Muslim hatred, whether it is with the EDL—you look at it all the way through. There is disinformation against people in public office where the message initially may be innocent and free speech that people can say, but you click on that and then see the litter of pure hatred towards people of faith. Do you have data on how much of that is happening?

Jacob Davey: With regards to offline, for example, there is a range of offline statistics around hate crime and hate speech that is religiously motivated. I believe it was around 9%. I can share that. Online, again, as I said earlier, it is very difficult to get a precise sense of the scale, but we, and I think CASM, have done some work looking at anti-Muslim hatred on Twitter, for example, and it is hundreds of thousands of individuals and tweets. Again, if you are researching these people, looking at where they come from can be quite difficult. If you look at the narratives that have proven very effective internationally at smearing Western democracy and attacking individuals, anti-Muslim narratives seemed to come out on top in terms of those that are prioritised and privileged by—

Q840       Rehman Chishti: Let me put it the other way to you. In relation to some of the individuals who I see out there—and we look at it to see who is sending these messages—the person who gets the most of this is the Mayor of London. The anti-Muslim hatred against the Mayor of London is vile and evil. Then you look at other Muslims who are parliamentarians or in public office and we get it at a lesser level than that. Do you have a data system, for Muslim parliamentarians or those in public office, to look at whether the intent behind either direct messages or indirect or innocent messages is linked to pure hatred of somebody's faith? Do you have any of that data available or can you get that data to see the scale of what people have to put up with?

Chloe Colliver: It is something we can look into and I think that is an interesting crossover between the issues of harassment and hate. One issue is that often technology companies look at those two issues in silos and therefore the content, if referred, goes to different reviewers, often for different reasons. Content moderators are sometimes assigned to look just at content that is flagged for hate, or just for harassment, and may not have the expertise to understand that it is hate content as well as harassment content. That content, for example, may not be escalated to the same level.

I think there is also a lack of transparency in terms of the processes of content moderation. We do not know whether a content moderator would look at the entire profile and page of someone whose content was flagged in that kind of example, whereas it might be shielding that content from being removed if they do not have oversight of the fact that the entire profile, as you say, could be filled with vitriolic hate speech. Increased transparency, which is something that Government and civil society organisations have been calling for, for some time, from technology companies around the processes behind content moderation on these issues, would only help both researchers and policymakers to understand where the gaps are.

Q841       Rehman Chishti: Can I ask another question on that? I get material from an individual who then makes certain threats or certain assertions. Click on their page and it is complete hatred. On their own Twitter account it goes, “I was told by Twitter to delete this and I'll have my account back”. Therefore, they delete certain material, which is pure hatred, and they get their account back. Should there be a system where if they continue to put material up there, irrespective of whether they delete it and get back their account, those individuals who put that material up should then be banned forever if they continue? Otherwise you put it on there, you delete it, you put it on there, you delete it, you get it back and you then do it again. There should be a system that means if you cross that threshold, whether it is two strikes and you are out and then no more, that is it, because we should not have to put up with it.

Chloe Colliver: There currently are systems like that but they take a lot longer and it takes a lot more examples. I would have to check the exact number it has to be for Twitter or Facebook, but it is around three instances of content being reported and reviewed within a month or so. It needs to be that many to have the account removed permanently.

Q842       Rehman Chishti: We send it to our parliamentary security and they deal with it, the IP and where this is coming from, because we do not know who these individuals are, because they are faceless cowards. You could be sitting on the train with them, you do not know. That is why that data needs to be checked.

One final point if I may, Mr Davey. It is with regards to the scale of this online material, which may not cross the threshold but is then sucking in individuals to carry out criminal acts. Say, for example, you have individuals who are disillusioned, disturbed; they cross the threshold and they may get sucked in by it. Do you have any data on individuals who have been convicted of hate crime where there were incidents of them being sucked in by poisoned ideology, online or elsewhere? We saw it in the Finsbury Park incident, where the Court said the individual who carried out the attack was sucked in in a very short period of time by Tommy Robinson's vile ideology to then carry out an attack. Do we have wider statistics and data in relation to the number of convictions and how many of those individuals were sucked in by hate material online?

Jacob Davey: As far as I am aware that does not exist currently. I think that is probably to do with how hate crimes are recorded by the police. I do not believe that that is—

Alex Krasodomski-Jones: We recently finished a review of hate crime data as held by the Metropolitan Police system. One of the key recommendations that came out of that report was that they desperately need to update how they record hate crime. One example was around transgender hate crime: the system made no distinction for any gender beyond male or female, which meant that a lot of transgender hate crime was not being captured. I am sure the same thing applies here. Where a hate crime is either encouraged online or individuals are radicalised online, I would not be surprised if that was not being recorded, as a result of the system currently in place in the police system itself.

Q843       Alex Norris: Looking at bots, Alex, Demos did a very interesting report about the activity of Russian bots around the 2017 terrorist attacks in London and Manchester. Could you talk over what you concluded from that?

Alex Krasodomski-Jones: First things first, which is about what we know about bots. I think it was picked up a bit funny. The word “bots” does not appear in the report once. A bot is an automated account that you can have as many of as you wish. You tell it to do something and it will go out and repeat those instructions. It is completely automated. What we saw taking place in the dataset—a very limited dataset, I should say, in my view—that Twitter released to the public was Twitter saying, “We have taken down about 10,000 accounts we believe to have been operated out of a street in St Petersburg, which were looking to disrupt primarily the US elections”, but the dataset also ran for seven or eight years. I think some of that was likely automated through bots, but the majority of it will have been just some bloke behind a keyboard churning this stuff out.

Q844       Alex Norris: Abroad, though, in a troll farm?

Alex Krasodomski-Jones: Yes. Sorry, to be clear, this particular dataset was identified by Twitter to be operating out of Russia, which I suppose made it unique and perhaps one of the first examples where a social media platform had thought, “We are going to get a second or third opinion on this and release a dataset that will obviously result in criticism of our platform but we will hand it over to a third party for analysis”. I suppose they should probably be commended for that to some extent.

How representative what we found can be is a really difficult question. My feeling is this was a small segment of the total number of influence operations that were carried out through Twitter, but I think we saw three things. First, this was a really long-term thing. They had been working on this for years. The accounts that later emerged as trying to run influence operations had been kicking around for five or six years. The second point is what they were attempting to influence. In this particular dataset, the UK is a sideshow. It is used as collateral in an influence operation against the States. This is particularly obvious in the case of the Brexit vote and in the aftermath of the terror attacks last year, where these incidents were used as an attempt to influence domestic policy in the States, in our view.

Finally, the third point is around strategies and tactics, and I think this divides into two. The first is that this was not necessarily an attempt to influence at a political level but to influence at a social and a cultural and a community level: to make people angry, at the most basic level, but also to set communities against each other, most commonly around the subject of Islam. The second half is around the amplification of sources. This is not a boiler room churning out fabricated stories with no grain of truth. This is simply the amplification of a certain set of media sources, a certain set of views, very often grounded in fact or at least with a grain of truth behind them. This is not what we heard a lot about: people creating completely fictitious articles for profit, the “fake news”.

Q845       Alex Norris: It is not bots, it is more troll farming, for want of a much better phrase than that. Is that a purely Russian phenomenon? Is there evidence—looking at the rest of the panel as well—of this happening in other countries? How broadly is it happening?

Alex Krasodomski-Jones: The data was, I would say, about two-thirds Russian, a third Iranian. Everyone is up to it. After the murder of the journalist Khashoggi, there was a huge campaign that we have seen on the Saudi side.

Chloe Colliver: There is an enormous number, in all the elections we have looked at, of not necessarily affiliated bots—it is very difficult to attribute where they are being managed from—that will be built purely to promote one state media outlet, for example. We saw an interesting example in the Swedish election where a Polish state-funded right-wing populist media outlet was being promoted in Sweden by suspicious accounts. The barrier to entry for doing this is low: it is cheap and easy. Though it should, I think, be pointed out that as a researcher I come across fewer obviously bot accounts now than I did a year ago, by quite some way, on the mainstream social media platforms.

Alex Norris: Sorry, just clarify that. Bots or troll farming?

Chloe Colliver: Bots. There are ways you can identify a bot that are much more difficult to apply to a troll account. Obviously a troll account is largely trying to look like a human whereas, at least a year or two ago, bot accounts often were very rudimentary and would be posting hundreds of times a day, at a rate that was not possible for a human account. I would also say that has probably forced people who are still employing automated technology and bots to become better at hiding them. It may be that there are still many out there but it is very difficult for us as researchers to understand what is human and what is not.

Q846       Alex Norris: Looking at the material that is coming out of the troll/bot accounts, we have heard about misinformation but we are looking at online hate at the moment. Is it hateful by definition in UK law, or is it misinformation designed to confound the democratic process and the people within it, or all of the above?

Alex Krasodomski-Jones: Whether it breaches the legal threshold for hate speech I could not say but I would describe it very much as hateful. The content that was most widely circulated in the aftermath of the terror attacks by Russian affiliated accounts was hateful. There is no other word for it.

Q847       Chair: This is Russian-sponsored promotion of Islamophobia in the UK?

Alex Krasodomski-Jones: Yes.

Q848       Alex Norris: To finish, Chloe, you may be better placed to answer this. Looking at your research on the impact on other elections in other parts of Europe, we have heard that Britain is perhaps collateral damage in a wider war. Is there any reason to think we are more susceptible to it in Britain, or that we are more vulnerable?

Chloe Colliver: I would make two points here. One would be to say that there are upcoming moments that I think will be vulnerability points. The fallout from Brexit, I believe, will be one in which these kinds of malicious actors, whether state or non-state, will have an opportunity to try to sow discord, division and disinformation. Something we have seen across every election I have looked at is a pre-planned election-fraud misinformation campaign to try to undermine faith in democratic processes around elections. Obviously we do not have an election coming up here in the UK, but around processes to do with votes and democracy—and Brexit would fit into that category—I think we should be aware of attempts to undermine the credibility of our democratic processes by these kinds of groups, for whatever agenda they may have long term.

The second point would be that maybe the UK, through its first-past-the-post political system, has some barriers in place against fringe groups promoting political agendas in the same way. It is probably strategically more difficult for either state or non-state movements to pursue a political aim or agenda here in the way that maybe they have tried to do, say, in Germany or in France to some extent, so I think there are two sides of that coin.

Q849       John Woodcock: Following on from that, if you take the debate over the scale of Russian influence in the European referendum, there are different strands of thinking as to how significant this was and whether it potentially did shift the needle or was just an irritant around the edges. Acknowledging that there is no way of getting a definitive view, what is your sense of the scale and influence of Russian engagement within online far-right hate speech?

Chloe Colliver: The scale question is incredibly difficult and I do not think there is good enough data accessibility out there for us to understand that. What I would say is that even if the scale of attempted activity is huge, which we do not know, the tactics from research like Alex’s show that they are playing on existing grievances and wedge issues, and that root issue is domestic; it is here. Without knowing the level of foreign interference coming from that kind of thing, there are source issues that we can deal with that will make that vulnerability less.

I would also point out that there are some non-state actors, alongside state actors like the Kremlin, who have expressed clear interest in interfering with democratic processes in Europe, including the UK, in the coming year. One example is Steve Bannon and his explicit attempt to build The Movement here in Europe, both to influence the European parliamentary elections in May and to support the build-out of far-right movements and right-wing populist parties in Europe. We have seen that gain some traction in certain areas of Europe and that is something that we should be extremely aware of, even if it maybe has not garnered as much traction as it had in the US.

Q850       John Woodcock: What you were describing at the beginning and throughout you could characterise as a professionalisation of the communication strategies of the far right. Do you have any sense of how that has occurred and whether it is being driven by a particular source? Has it emerged organically? It really does seem like quite a quantum shift in the way that they communicate, making it much more difficult, potentially, to counter?

Jacob Davey: That probably depends on the activists that you are looking at and where they are engaging. The far-right troll farms that gained prominence in the US election, and that we then saw engaging in disruption activities in the German election and the French election, and attempting it in the Italian election as well, I would characterise as chaos agents. They have essentially seen what has worked before and they are happy to try it again. This activity has grown through basically an iterative definition of success: “This has worked before, let’s pile on top of that”.

However, that is distinct, I would say, from the communications activities of individuals like Stephen Yaxley-Lennon or groups like the Rebel Media. To some degree I think the professionalisation comes down to the funding. The amplification also relates to the celebrity of particular individuals, the fact that you have charismatic leaders emerging throughout Europe and North America and in the UK who are able to broadcast to an established user base.

Alex Krasodomski-Jones: It is also worth not underestimating the power of numbers and metrics in this. Attendance at a far-right street protest would still be reasonably low, but now that all of this online political campaigning, and campaigning in general, is datafied, everything has a number and everything can grow. Finding ways to increase those figures, in the same way that any other marketing platform would, has been very effective in guiding far-right groups, as with any other political group, towards how best to use the platforms that they have been given.

Jacob Davey: It is also worthwhile adding that extremists have always been early adopters of technology. If you think back 10 years to the BNP’s web presence, it was already experimenting with social media when that was probably not something that mainstream parties were engaging with so much. In that sense, necessity breeds innovation. If you are coming at this from the fringe and you do not have access to some of the mainstream infrastructure that has traditionally been used to amplify your views, then you are going to experiment and, by necessity, get good at using a range of alternative communication tactics and techniques.

Q851       John Woodcock: The range of tactics that you set out at the beginning—crowdfunding, online radicalisation and then recruitment—seems to mirror where western jihadi groups have gone over these last 15 years or so. Is there a sense that those two things have grown in parallel, or is there potentially a sense of them learning tactics from each other?

Jacob Davey: Definitely a tactical cross-pollination. This is not data driven but anecdotally driven. If you look at these people talking about it, they will write blogs and say, “This has worked well for us”. You see the phrase “white jihad” starting to be bandied around a little bit. There is an awareness, and you can see this across the board, of the different tactics that are employed and of what has worked before.

Chloe Colliver: It is also perhaps encouraging, given that a lot of the tactics are relatively similar, that the response from major technology companies, while potentially slow at first, has now meant that accessing Islamist terrorist propaganda on their platforms is a lot more difficult than it was, say, a year ago. If the same impetus and resource can be put behind these other kinds of extremist activity, then it may do a good job in stemming some of the public recruitment side of the question.

Q852       John Woodcock: My final question: you mentioned the two strands of thinking as to whether restriction of material can be counterproductive. You mentioned it in the context of further restriction. Do any of you have a view on whether the current balance is right or whether you would change it?

Alex Krasodomski-Jones: It is difficult to say. Proscribing an organisation like National Action was effective in the short term, and what was once a very easy group to research online is now very difficult to research, given that it has been forced off major platforms as a result of being very clearly proscribed. That clarity applies to your example of Islamist fundamentalism as well: if you can provide clarity about exactly the type of content that is illegal, then the major social media platforms are reasonably good—not perfect—at dealing with it, and they are getting better. Where we have a lot more difficulty is in the murky grey area around free speech, around hate speech, around satire, everywhere the lines get blurred. There, not only are we unable to provide extremely clear guidance on what should not be allowed, but the social media companies do not know themselves. The technology itself, the algorithms used to police this kind of content, which we have to remember are often as fallible as we are in determining whether something is okay or not, also struggles. Where things are clear, things are good, or getting better. Where things are murky, it is much more problematic.

Q853       Stuart C. McDonald: A couple of questions on the Government response to the far-right threat. First of all, arising from the discussion with Mr Doughty earlier on, you spoke a little bit about how organisations like National Action are able to reinvent themselves. When they are able to do that, how effective can proscription be? How disruptive is it as a tool?

Jacob Davey: I think if you look at some of the statistics, proscription has been very effective in starting to address the issue. If you look at the number of individuals in 2016 who were in custody for far-right terrorist-related arrests, it was four people, and this year it was 29, so it has been effective in giving police the powers and the opportunity to go out and engage with far-right terrorism.

Likewise, to speak to Alex’s point again, the fact that these groups have been proscribed has made accessing the material and content online a lot more difficult. This time last year we did some work looking at the scale of the problem on YouTube. We found it very easy to find National Action-branded content on YouTube. Now we have not been able to find it. That content has been pushed over to platforms like BitChute. In terms of limiting the accessibility, I think it has been effective. It was particularly promising because it demonstrated a proactive effort to limit membership of far-right groups, so I thought the proscription of National Action was a very positive thing.

Q854       Stuart C. McDonald: Any ways in which we can strengthen proscription as a tool or is it pretty much doing what it needs to do?

Chloe Colliver: The difficulty that will come around proscription is that the lower barrier to entry and the fluidity of some of these movements mean that understanding the organisational structure will be more difficult with new far-right movements. These are not card-carrying, branded movements in the way that National Action is and started out as. Understanding what part of that far-right spectrum you are able to identify as an organisation and proscribe will be more difficult. I do not think that is a reason not to attempt it.

Q855       Stuart C. McDonald: One final question from me. We have heard from Sara Khan, who is the Commissioner for Countering Extremism. She has concerns that the Counter Extremism Strategy from 2015 is now essentially already out of date because the far-right extremist organisations have evolved. Do you agree with that and, more generally, what else should the Government be doing to tackle this threat?

Chloe Colliver: We would be in agreement with that: it needs refreshing and it needs new priorities, for example really making sure to deal with the issue of extremism as a whole and not just focus on violent extremism, which maybe misses the ideological recruitment side of the issue. One of the parts of that strategy is building partnerships. I think that is something that is incredibly necessary, considering the threat of the new far right. Building institutional partnerships and supporting civic actors and grassroots actors to do longer-term, more sustained work, both in understanding the risks on the ground in their communities and in responding to them, is incredibly important.

Part of that strategy is supposed to say there is no safe space for extremism. I think, as we have noted here, there are a number of online spaces not being tackled with any kind of response that are currently safe havens for vile extremism. I do not mean by that that the response should all be regulatory. I think there are communication strategies and prevention strategies that could be trialled that are not just regulation or content moderation.

Jacob Davey: To add on to that, on the broader government strategy around countering extremism and CVE, I still think there remain a number of knowledge gaps and skills gaps. Speaking to local authority Prevent officers, for example, I have hosted a couple of training sessions this year to build up their awareness of the current threat and of the contemporary movements, and that understanding is not there.

Likewise with Channel providers, we have seen a massive increase in Channel referrals for far-right-related activity, but as I understand it the number of trained intervention providers who are capable of delivering those interventions is still relatively small. Expanding that number of professional intervention providers would, I think, be important as well.

Q856       Kate Green: On that point, on Prevent and Channel, how effective do you think they are, in terms of their design and structure, at addressing far-right extremism? Is there anything that needs to change other than the availability of a skilled workforce?

Jacob Davey: I think those two gaps are the two key ones that have been identified, at least through our work at ISD. It is particularly that knowledge gap among frontline practitioners. Frontline workers who are going to be making those referrals to Channel I think still are not necessarily confident about where the line lies, what constitutes far-right extremism and membership of which groups should be a cause for concern; and then, to reiterate, there is the gap in individuals who can deliver interventions.

Q857       Kate Green: Do you think the community feels a confidence in reporting into Prevent in relation to far-right extremism?

Jacob Davey: This is something where there could also be better work in terms of engagement with individuals and communities that are at risk of radicalisation. There probably is a gap for programmes that can do that on the far-right side at a local grassroots level.

Q858       Sir Christopher Chope: Thank you for sharing with us your compelling analysis. We are a group of lawmakers, so may I ask each of you individually: is there any one change in the law that you would like to recommend to us?

Alex Krasodomski-Jones: No, but if you will allow me, I would like to give that some real thought and get back to the Committee, if that is okay.

Chloe Colliver: My personal passion would be to see some regulation around the accountability of algorithmic design and how that affects harm and safety. I do not think that should be any interference in the designs themselves, but some mechanism that encourages technology companies to be transparent about that design.

Jacob Davey: I will mirror Alexs response in that I will get back to the Committee with my response on that.

Q859       Sir Christopher Chope: This is essentially a propaganda war for the battle of ideas in a liberal democracy. Do you have any information as to how effective the Soviet or Russian propaganda machine has been in influencing public opinion in the United Kingdom in relation to the Salisbury incident?

Chloe Colliver: I could not give you anything on impact because I still think that there are no good research mechanisms for understanding attitudinal change or behavioural change. What I could tell you is that we did a little bit of research around Skripal, around the second phase of the incidents, and the communication strategy on Facebook, in particular from Russian-funded media outlets like RT, dominated every single hashtag to do with Skripal that I searched for. I think that maybe there is a better-mobilised communication strategy out there than we have on the counter side, but I could not speak to impact.

Q860       Sir Christopher Chope: Was there any counterstrategy from the United Kingdom authorities or were we just relying on the BBC?

Chloe Colliver: What I noted was in two phases of the Skripal case an attempt from a number of media organisations, including the BBC, in the first instance to respond to particular facts of the case in quite specific ways, which did not seem to meet the media strategy effort from the Kremlin side. In the second phase what I saw was a much more concerted effort to call out the entire Kremlin operation as a disinformation campaign and to really look at it as a whole, and that was the story. Anecdotally, to me, that seemed more effective in undermining the disinformation or the smear campaigns that were coming from the Kremlin side.

Q861       Sir Christopher Chope: My last point is President Trump seems to be the victim of quite a lot of hate propaganda. Do you think that that is just promoting more divisiveness, or do you think it is resonating? What do you think the consequences of that are going to be? Is there going to be an assassination attempt on him, or what do you think is going to happen?

Alex Krasodomski-Jones: What comes through clearly in the data that we analyse is that influence operations play both sides. That even comes down to getting people out on the streets. They will organise a protest and a counter-protest, both organised out of Russia. It would not surprise me if the same influence operations that we felt were supporting one side were also supporting the other. The point is chaos. The point is not knowing what is true and what is not true, and the point is anger above all else.

Jacob Davey: I am in agreement there. There is an active effort to play all sides. Again, something I am aware of anecdotally, although I do not currently have data on it, is that Russia is attempting to play the hard left in the UK and in Europe with its messaging as well. There should be some awareness that this is scattergun. They are chaos agents. They will throw everything at the wall to see what sticks. It is not just the far right who are being targeted by this campaigning.

Q862       Chair: A final question, substituting the word “perpetrator” for the word “victim” in Christopher’s question. Do you think that the way in which President Donald Trump behaves online has an impact on far-right activity in the UK?

Chloe Colliver: The rise of populist language, not just in the US but across European democracies as well, gives these movements a sense that they have been legitimised, even if that is inadvertent, by politicians themselves.

Jacob Davey: In terms of some of those communities of trolls and of activists that came to prominence under Trump’s presidential campaign, we have certainly seen them start to mobilise to harass activists and journalists and academics in the UK, and to engage in disruption and disinformation campaigns. In terms of seeding the ground for what is acceptable on the far right, that has had a huge effect in influencing far-right activism globally.

Alex Krasodomski-Jones: I am in complete agreement. It is a complete legitimisation of a series of viewpoints. I think we are now starting to see that spill out on to the streets. The Proud Boys over there are the latest example: violent, legitimised right-wing extremism. I very much see Donald Trump as being part of that.

Q863       Chair: Thank you very much for your time. If you have any further thoughts for us on whether any legislative change is needed, or on anything further that any of the social media companies should do, that would be really helpful, as would anything on the point you raised, Chloe Colliver, about the impact of algorithms in escalating access to, or the promotion of, far-right activity. Also, any further thoughts that we have not had a chance to cover today on how this is impacting on offline activity, whether it is linked to football clubs’ activity or to particular demonstrations and organisations, would be immensely helpful for us. Thank you very much for your time today.

Chloe Colliver: Thank you.