Written evidence from Protection Approaches (TFP0039)

 

About

 

0.1    This submission comes from Protection Approaches, which works to confront and prevent identity-based violence by developing and implementing innovative programmes that address all forms of hate. From Newham in East London to Bangui in the Central African Republic, we work with local communities, civil society organisations, policymakers, governments, academics and multilateral institutions to develop strategies that predict, prevent and protect people from identity-based violence. Protection Approaches convenes the UK Atrocity Prevention Working Group: a group of 25 UK-based NGOs that collaborate on atrocity prevention policy and advocacy.

 

0.2    Protection Approaches is a registered charity in England and Wales, charity number 1171433. For more information, please see www.protectionapproaches.org.

 

0.3    The submission has been prepared by Dr Kate Ferguson and Detmer Kremer. Dr Ferguson is Co-Executive Director at Protection Approaches, where she has trained and advised state representatives, parliamentarians, and law enforcement from Romania to the Central African Republic to the United States, including on national and local atrocity prevention. In 2016, for the Partnership for Conflict, Crime & Security Research and the University of Cambridge, she undertook a comprehensive analysis of the evidence base on online communications-based approaches to preventing violent extremism, mass atrocity crimes, and other forms of identity-based violence. She is also Chair of Policy at the European Centre for the Responsibility to Protect and Honorary Research Fellow at the University of East Anglia. Her book, Architectures of Violence: The Command Structures of Modern Mass Atrocities, was published in 2020 by Hurst and Oxford University Press. Mr Kremer is the Policy and Communications Officer at Protection Approaches. Previously he was the Human Impacts of Climate Change Programme Assistant at the Quaker United Nations Office. He holds a BA in Anthropology from Bates College, United States, and an MA in Human Rights from University College London.

 

0.4    Contact information: Kate.Ferguson@protectionapproaches.org | +447715475357  

 

Assessing the risks new technologies pose to open societies

 

1.1    There is growing concern among governments, NGOs, media stakeholders, and publics around the world that new technologies are increasingly being deployed by a variety of state and non-state actors in ways that threaten international stability, social cohesion, and human rights.[1] The rapid rise and evolution of social media has magnified for many the impact that hate speech and misinformation have on good governance and democracy, trust in institutions, polarisation, and identity-based violence. While the relationship between communications, propaganda and hate speech on the one hand and violence on the other has long drawn the attention of scholars and policymakers, the current global challenges posed by tech-based communications strategies are new in scope, frequency and impact. This challenge is emerging alongside others posed by quickly evolving technologies such as surveillance, internet accessibility control, Artificial Intelligence (AI), and augmented reality, all of which have increased the need for Her Majesty’s Government to prioritise a) an evidence-based understanding of how democracies can respond to such threats; b) the capacities and systems to analyse such data; and c) coherent cross-cutting policy to guide UK responses to the threats and challenges posed by technology and online communications. This submission addresses the threats posed by, and the opportunities open to the Foreign, Commonwealth and Development Office in responding to, the advances and trajectory of technology as they relate to human rights, identity-based violence, and mass atrocity crimes. The examples provided below are intended to illustrate the scale and urgency of this challenge.

 

1.2    Mass surveillance in Xinjiang: The ongoing modern mass atrocities in Xinjiang, recently declared a genocide by the UK Parliament,[2] have relied for years on surveillance technology to track over 13 million Turkic Muslims.[3] This has allowed the Chinese government to constrain people’s freedom of movement and has facilitated the current mass detention of Uyghurs and other Muslim minorities in detention camps. Emerging AI technologies, operating with biased algorithms, are being tested on detained Uyghurs and used to further persecute Uyghurs and other minorities.[4] China-based hackers have targeted Uyghurs abroad through social media in attempts to disable, divide and manipulate outspoken opposition members.[5] China has paid social media platforms to run propaganda campaigns depicting Turkic minorities as happy and thriving[6] and portraying outlets reporting critically on the mass atrocities in Xinjiang, such as the BBC, as untrustworthy, inaccurate and biased.[7] This highlights both the transnational nature of cyber-based strategies and the role of tech companies, which in this case have profited directly from modern atrocity crimes through advertising revenue.

 

1.3    Facebook as a tool of genocide in Myanmar: The Independent International Fact-Finding Mission on Myanmar found that “Facebook had played a role in spreading hate speech” in the lead-up to and during mass atrocities against the Rohingya in Rakhine.[8] The Myanmar government established and funded hate speech social media campaigns and accounts that further fuelled anti-Rohingya campaigns.[9] Facebook was subsequently accused of responding too slowly as violence escalated in Myanmar, and the company agreed it had failed to prevent its platform from becoming a tool “undermin[ing] democracy and incit[ing] offline violence”.[10] Facebook is currently withholding data relevant to the genocide case initiated by The Gambia before the International Court of Justice. The weaponisation of tech has continued during the ongoing military coup: the junta has attempted to shut down the internet entirely, including mobile data and wireless broadband, and has disrupted services such as VPN access that can bypass restrictions on apps like Facebook, which in developing countries are often synonymous with the internet. A full or partial shutdown of the internet is an intimidation tactic: it undermines the organisation of popular movements, violates the civil and political rights of the people of Myanmar, hides evidence of state violence including enforced disappearances and military attacks on protestors, prevents people from accessing accurate information or verified shelter, and limits access to news and developments both within and outside the country.[11]

 

1.4    Conspiracy theories undermining civilian protection in Syria: The campaign against the Syria Civil Defence, known as the White Helmets, demonstrates how malicious social media campaigns can undermine and impede direct, life-saving responses to violence, including the provision of humanitarian services. Investigations by the Guardian and the Syria Campaign, and more recently by the BBC, have shown that a Russian-funded network of trolls and bots ran a disinformation campaign claiming that the White Helmets were either western-funded actors hired to record war crimes to justify foreign intervention or embedded with terrorist organisations such as Al-Qaeda.[12] The campaign was highly effective: it almost forced the White Helmets to cease their work, and it contributed to the decision of James Le Mesurier, one of the organisation’s founders, to take his own life.[13] The campaign against the White Helmets is an example of the evolving capabilities of state actors – such as Russia and Syria – and non-state actors – such as academics in UK universities or pro-Assad online influencers – to undermine civilian protection efforts on the ground, establish “alternative facts” about both responsibility for violence and the violence itself, and challenge the ultimate goal of preventing further atrocities.

 

1.5    WhatsApp driving identity-based violence in India: WhatsApp has been linked to large-scale mob violence in India, where hundreds of Muslims, Dalits and Adivasis have been lynched, injured and threatened by Hindu-nationalist groups. Research from the London School of Economics suggests that mis- and disinformation spread through WhatsApp about kidnapping gangs, cow slaughter, and forced conversions is driving the increase in lynchings and associated vigilante violence.[14] The identity-based mass violence in India is fuelled by modern technologies including social media, deepfakes, and encryption, and demonstrates the need for capacities and frameworks able to monitor, communicate and respond to such violence when it occurs outside of fragile states or contexts of armed conflict.[15]

 

1.6    Nigeria’s Twitter ban: In response to Twitter removing a tweet by President Muhammadu Buhari deemed to incite violence against south-east secessionists, on 4 June 2021 Nigeria announced the suspension of the social media platform and issued directives allowing federal prosecutors to arrest anyone still using it.[16] Nigeria’s Minister of Information and Culture condemned “the persistent use of the platform for activities that are capable of undermining Nigeria’s corporate existence”.[17] Social media has been a crucial tool of protest for movements like #EndSARS, which calls for an end to the Special Anti-Robbery Squad (SARS) unit of the Nigerian Police Force and which gained traction after a video surfaced on Twitter showing SARS operatives killing a man and fleeing with his vehicle.[18] Twitter served as a ‘co-ordinating platform’[19] for the mass protests against police brutality in late 2020.[20] Global support for the protests was high,[21] with the hashtag becoming the number one trend worldwide.[22] The Nigerian Government’s Twitter ban is an example of governmental tech control violating human rights and undermining open, democratic societies.[23]

 

1.7    As Protection Approaches has noted in previous submissions to the committee, the FCDO currently lacks the internal systems and capabilities necessary to recognise the risks of modern atrocities and identity-based violence, including indicators of risk relating to tech-based strategies, online communications, and access to information or the internet. Embassies and UK missions in countries at risk of identity-based violence, atrocities, and tech-based threats do not have the capacity to process and analyse open data or intelligence properly or comprehensively. The remainder of this submission reflects on how HMG and the FCDO could strengthen their systems and capabilities to identify and respond to tech-based threats as they relate to polarisation, identity-based violence, democratic backsliding, media freedom, and mass atrocity crimes.

 

Assessing UK risks and responses

 

2.1    The Integrated Review of Security, Defence, Development and Foreign Policy seeks to strengthen “security and defence at home and overseas”[24] and states that HMG will “seek good governance and create shared rules in frontiers such as cyberspace”[25] to protect open societies and democratic values. A joined-up approach between domestic and foreign policy is indeed essential, as the UK is not immune to the risks and strategies discussed above. The threats to British open society include the erosion of trust in expertise and institutions; reduced domestic resilience, including through exacerbated social polarisation; the internationalisation of hate-based, conspiracy, and/or extremist online networks; and cyber security threats to individuals, communities and the state. Some consequences of such threats can be seen in the online abuse of elected women representatives;[26] the 900% increase at the start of the pandemic in the global use of Twitter hashtags encouraging violence against China, Chinese people, and those assumed to be Chinese, which contributed to the 300% increase in reported hate directed at the UK’s East and Southeast Asian communities in the first quarter of 2020;[27] and the unprecedented numbers of far-right accounts, influencers and conspiracy theorists on increasingly unmoderated ‘alt-tech’ platforms such as Gab and BitChute.[28]

 

2.2    Online Safety Bill: In response to domestic risks associated with tech, HMG brought forward the Online Safety Bill. The bill requires tech companies that host user-generated content to provide more accessible and rapid avenues for reporting and removing illegal content, with the risk of fines for companies that fail to do so. The scope of the bill is limited and excludes communication services such as email and SMS, fraud via advertising, and cloned websites. The latest iteration of the bill has moved away from addressing democratic harms, and instead seeks to protect content defined as ‘democratically important’.[29] Although this provides an important safeguard for freedom of speech, it fails to cover many of the offensive risks posed to British open society from within and outside the UK. A welcome inclusion is that risk assessments performed by companies must also cover “the design and operation of the service (including the business model, governance and other systems and processes)”,[30] which will require companies to consider not only content but also the algorithms and advertising models that have played a large role in magnifying the risks to open societies and human rights associated with evolving technologies. The remaining gaps and best practices in the proposed domestic framework should inform the direction, scope and capacities of the UK’s foreign policy on tech.

 

2.3    The Integrated Review: The Integrated Review sets out the objectives for the UK to be a “global and responsible cyber power” and to “buil[d] resilience at home and overseas”,[31] with commitments to uphold universal human rights; support an open and innovative digital economy; shape the international order of the future; and detect, deter and respond to state threats.[32] The most concrete tech-related proposal to realise these ambitions is the establishment of the National Cyber Force,[33] which will have offensive tools and complements the domestic-facing National Cyber Security Centre. An institutionalised joined-up approach between the domestic and international centres is crucial if HMG is to respond to changing technologies, and should be integrated into the UK’s new cyber strategy, scheduled to be published this year,[34] and future drafts of the Online Safety Bill. For the new cyber strategy to be fit for purpose, it must have the capabilities to monitor, communicate, and respond to the threats technology poses to open societies, international security and human rights.

 

2.4    The Integrated Review connects the new capabilities of the National Cyber Force, and the Counter Terrorism Operations Centre, to the new Situation Centre. This Centre will be “at the heart of government, improving [HMG’s] use of data and [HMG’s] ability to anticipate and respond to future crises.”[35] The building out of these new capabilities and their integration with the Situation Centre are part of a necessary cross-cutting approach. What remains unclear is how the Conflict Centre and the commitment to “addressing the drivers of conflict (such as grievances, political marginalisation and criminal economies), atrocity prevention and strengthening fragile countries’ resilience to external interference”[36] will be embedded in or aligned with the Situation Centre and its substrategies. Without an evidenced, cross-cutting and joined-up approach with the capacity to monitor, communicate and respond to the transnational tech-based risks that exist in the UK as much as anywhere else, the threats to British and other democracies, open society, human rights and security will continue to be missed, and current gaps in the FCDO’s systems and capabilities will persist.

 

2.5    As the FCDO continues to develop its conflict strategy and build out its commitments to prioritise both atrocity prevention and an approach to tech-based and cyber threats, it should seek to draw from, not replicate, existing international frameworks: the Christchurch Call, the UN Guiding Principles on Business and Human Rights, the UN Strategy and Plan of Action on Hate Speech, the Rabat Plan of Action, and the Berkeley Protocol on Digital Open Source Investigations.[37] The recently announced G7 Internet Safety Principles[38] set out the foundational principles of transparency, openness of process and participation, relevance, and consensus-based decision-making to underpin the development of digital technical standards. The UK can build on its G7 presidency and its seat on the UN Security Council to bring states together, potentially through existing mechanisms such as the Aqaba Process, to support the development of a comprehensive international framework. Current incidents of modern mass violence, and other threats to democracy such as misinformation campaigns, are transnational and require transnational responses. A coherent global framework would also provide clarity, consistency and guidance for businesses, which currently develop their own inconsistent and conflicting policies.

 

Recommendations for HMG

 

3.1    The rapidly evolving tech landscape also brings new opportunities for atrocity prevention. The FCDO should ensure it has the capabilities and resources to quickly understand new and emerging technologies in order to identify and leverage opportunities for preventing and responding to modern mass violence. The rise of open-source technology allows information to be shared, analysed and tracked in more democratised and accessible ways while sidestepping restrictions and censorship.[39] Satellite technology can assist in uncovering sites of mass violence,[40] while social media has played an important role in documenting evidence, raising awareness, and organising popular movements in response to modern mass violence.[41]

 

3.2    HMG should integrate into the Conflict Centre, the Situation Centre and the future cyber strategy an early warning system capable of monitoring and analysing threats to national and global security, reporting on real-time trends of exclusion and violence, and conducting internal prevention analysis that incorporates indicators of grievance, trust, and resilience. Current horizon scanning and risk assessment tools used by HMG are not early warning systems; they are not able to respond to immediate threats or rapid changes.

 

3.3    The FCDO must establish a training budget for all FCDO staff, including Mission staff and country teams, on atrocity prevention, identity-based violence, and early warning, including the roles new and emerging technologies play.

 

3.4    HMG should open up easy-access, quick-release, low-level funds to support domestic community-based initiatives on prevention, early warning, and response to identity-based violence, including those that provide education and training for communities to identify and confront online harms.

 


July 2021


[1] Ceasefire, Minority Rights Group International, “Peoples under threat 2019: the role of social media in exacerbating violence,” 4 June 2019

[2] Patrick Wintour, “UK MPs declare China is committing genocide against Uyghurs in Xinjiang,” The Guardian, 22 April 2021

[3] Human Rights Watch, “How mass surveillance works in Xinjiang, China,” 2 May 2019

[4] Jane Wakefield, “AI emotion-detection software tested on Uyghurs,” BBC, 26 May 2021

[5] Cody Godwin, “Facebook removes accounts of ‘China-based hackers’ targeting Uighurs,” 24 March 2021

[6] Newly Purnell, “Facebook staff fret over China’s ads portraying happy Muslims in Xinjiang,” Wall Street Journal, 2 April 2021; Sigal Samuel, “China paid Facebook and Twitter to help spread anti-Muslim propaganda,” Vox, 22 August 2019

[7] Jacob Wallis, Albert Zhang, “Trigger Warning: the CCP’s coordinated information effort to discredit the BBC,” Australian Strategic Policy Institute, 4 March 2021

[8] Tom Miles, “U.N. investigators cite Facebook role in Myanmar crisis,” Reuters, 12 March 2018

[9] Evelyn Douek, “Facebook’s role in the genocide in Myanmar: new reporting complicates the narrative,” Lawfare, 22 October 2018

[10] BBC News, “Facebook admits it was used to ‘incite offline violence’ in Myanmar,” 6 November 2018

[11] BBC News, “Myanmar coup: internet shutdown as crowds protest against military,” 6 February 2021; Rebecca Ratcliffe, “Myanmar coup: military expands internet shutdown,” 2 April 2021

[12] Chloe Hadjimatheou, “Mayday: How the White Helmets and James Le Mesurier got pulled into a deadly battle for truth,” BBC, 27 February 2021

[13] Olivia Solon, “How Syria’s White Helmets became victims of an online propaganda machine,” The Guardian, 18 December 2017; The Syria Campaign, “Killing the truth: how Russia is fuelling a disinformation campaign to cover up war crimes in Syria,” 2017; Intrigue, “Why no one could save the man who co-founded the White Helmets,” BBC Radio 4, November 2020

[14] Shakuntala Banaji, Ram Bhat, “WhatsApp vigilantes: an exploration of citizen reception and circulation of WhatsApp misinformation linked to mob violence in India,” London School of Economics, 2019

[15] Kate Ferguson, Michael Jones, “Between war and peace: preventing mass atrocities outside of armed conflict,” RUSI Newsbrief, 21 May 2021

[16] Lindsay Hundley, Hakeem Bishi, Shelby Grossman, “3 things to know about Nigeria’s Twitter ban,” Washington Post, 15 June 2021

[17] Danielle Paquette, “Nigeria suspends Twitter after the social media platform freezes president’s account,” Washington Post, 4 June 2021

[18] Vincent A. Obia, “#EndSARS, a Unique Twittersphere…,” Media@LSE Blog, 11 November 2020

[19] Yomi Kazeem, “How a Youth-led Digital Movement is Driving Nigeria’s Largest Protests…,” Quartz Africa, 13 October 2020

[20] Azeezat Olaoluwa, “End Sars protests: The Nigerian women leading the fight for change,” BBC, 1 December 2020

[21] Michael Bamidele, “…International Celebrities Voice Support for #EndSARS Protest,” The Guardian Nigeria, 10 October 2020

[22] “End Sars: How Nigeria’s anti-police brutality protests went global,” BBC, 17 October 2020

[23] “Nigeria’s Twitter ban: Government orders prosecution of violators,” BBC, 5 June 2021

[24] Her Majesty’s Government, “Global Britain in a Competitive Age: the Integrated Review of Security, Defence, Development and Foreign Policy”, p. 18

[25] HMG, “Global Britain,” p. 12 para 6.

[26] Maria Miller MP, “Elected women representatives: online abuse,” Hansard, Volume 692, 20 April 2021

[27] Protection Approaches, “Covid-Related Hate,” October 2020

[28] HOPE not hate, “State of hate”, p. 78

[29] HMG, “Landmark laws to keep children safe, stop racial hate and protect democracy online published,” 14 May 2021

[30] HMG, “Draft Online Safety Bill,” 12 May 2021, Sections 7.8.g and 19.3.a.iv

[31] HMG, “Global Britain,” pp. 18-19

[32] HMG, “Global Britain,” pp. 20-22

[33] HMG, “Global Britain,” p. 21 para V

[34] HMG, “Global Britain,” p. 41

[35] HMG, “Global Britain,” p. 4

[36] HMG, “Global Britain,” p. 79

[37] The Christchurch Call to Action; the UN Guiding Principles on Business and Human Rights; the UN Strategy and Plan of Action on Hate Speech; the Rabat Plan of Action; the Berkeley Protocol on Digital Open Source Investigations

[38] G7, “Internet safety principles,” 28 April 2021

[39] Democratised data gathering can document atrocities, as with the Xinjiang Data Project; support real-time insights from people on the ground, such as the Beni Dashboard from Orange Door Research and Protection Approaches; underpin investigative reporting such as BBC Africa Eye and Bellingcat; and support the work of organisations like Videre.

[40] Nathan Ruser, “There is now more evidence than ever that China is imprisoning Uyghurs,” The Guardian, 24 September 2020

[41] These include TikTok, Twitter, YouTube, Facebook, and Instagram.