Institute for Strategic Dialogue – written evidence (DAD0069)

 

2) How has the design of algorithms used by social media platforms shaped democratic debate? To what extent should there be more accountability for the design of these algorithms? 

  1. There remains significant concern that platform architectures contribute to negative outcomes. Central to this concern is that the algorithms dictating a user’s experience and journey have led to unintended consequences, and have been challenging to scrutinise or evaluate effectively. Researchers are only beginning to develop an understanding of those unintended negative effects, such as the promotion of hate speech or the polarisation of an already divisive political culture, thanks to the contributions of former platform insiders who were responsible for developing platform AI. For example, Guillaume Chaslot helped shed light on how it was possible for the YouTube recommendation algorithm to steer users towards “terrorist content, foreign state-sponsored propaganda, extreme hatred, softcore zoophilia, inappropriate kids content, and innumerable conspiracy theories”.[1]
  2. As a result, there is a need to establish new systems for the oversight and/or auditing of algorithmic decision-making. Efforts in this area are already underway, such as partnerships between academia and the private sector that allow external researchers to analyse information amassed by companies in order to address societal issues. These efforts are important, but they have been challenging to set up, have yet to prove themselves, and are limited in scope.[2]
  3. We propose the following method for auditing algorithmic systems:
  1. Examine the purpose, constitution and policies of the system; interview those who build and interact with different parts of that system; and observe how people use it.
  2. Identify and assess what data was used to train the algorithm, how it was collected, whether it is enriched with other data sources, and whether that data has changed over time.
  3. Examine the model itself, including the processing flow and the type of supervisory or monitoring mechanism used.
  4. Undertake a code review or “white-box testing” to analyse the source code, or the statistical models in use, including how different inputs are weighted.
  5. Run controlled experiments over time to determine whether the algorithms under review are producing unintended consequences that harm the public interest (an illustrative sketch of such an experiment follows this list).
  6. Include technologists able to advise on developing the policies and procedures for how such an audit should take place, as well as on undertaking the audit itself, and include external experts as consultants or fellows to assist with particularly complex or novel issues.
  7. Draw on efforts already underway by the Information Commissioner’s Office to create an AI auditing framework for data protection.
  8. Conduct open and transparent research, in continuing consultation with civil society, industry and academia, pooling expertise to establish best practice in this new area. It is envisaged that processes could be co-constructed with external stakeholders and that this approach would be continuously developed and responsive to the changing technological landscape.
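As an illustration of the controlled-experiment step above (point 5), the following sketch simulates repeated recommendation “walks” from neutral seed content and measures how often those journeys end on content the auditor has labelled as harmful. It is a minimal sketch only: the recommendation graph, the get_recommendations accessor and the harmful-content labels are hypothetical stand-ins, and in a real audit they would be replaced by sanctioned access to the platform’s systems.

```python
import random
from collections import Counter

# Hypothetical stand-in for platform access: maps each item to its ranked
# recommendations. In a real audit this would query the system under review.
RECOMMENDATION_GRAPH = {
    "news_a": ["news_b", "opinion_a"],
    "news_b": ["opinion_a", "news_a"],
    "opinion_a": ["opinion_b", "conspiracy_a"],
    "opinion_b": ["conspiracy_a", "conspiracy_b"],
    "conspiracy_a": ["conspiracy_b", "conspiracy_a"],
    "conspiracy_b": ["conspiracy_a", "conspiracy_b"],
}

# Items the auditor has independently labelled as harmful (hypothetical).
HARMFUL_ITEMS = {"conspiracy_a", "conspiracy_b"}


def get_recommendations(item_id):
    """Hypothetical accessor for the recommendation system under audit."""
    return RECOMMENDATION_GRAPH.get(item_id, [])


def recommendation_walk(seed, hops, rng):
    """Follow recommendations for a fixed number of hops from a seed item."""
    current = seed
    for _ in range(hops):
        options = get_recommendations(current)
        if not options:
            break
        current = rng.choice(options)
    return current


def audit_drift(seeds, hops=5, trials=1000):
    """Estimate the share of simulated journeys that end on harmful content."""
    rng = random.Random(42)  # fixed seed so the experiment is repeatable
    endings = Counter(recommendation_walk(rng.choice(seeds), hops, rng)
                      for _ in range(trials))
    harmful = sum(n for item, n in endings.items() if item in HARMFUL_ITEMS)
    return harmful / trials


if __name__ == "__main__":
    rate = audit_drift(seeds=["news_a", "news_b"])
    print(f"{rate:.1%} of simulated journeys ended on harmful content")
```

Run at intervals, for example before and after a change to the ranking model, an experiment of this kind would give an auditor a repeatable, quantitative indicator of whether the system is steering users towards harmful material.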

3) What role should education play in helping create a healthy, active and digitally literate democracy? 

  1. The government should play a key role in education and public awareness activities, supported through revenues raised via a new digital tax on online services. Overall, the public requires simple, common and consistent language and indicators across the online environment to help inform their choices and improve their understanding of the products they use, the spaces in which they participate, and their rights, responsibilities and options for redress. This could include the development of a ‘highway code’-style system for the online environment, with commonly understood symbols designating the types of spaces found online and the relevant rules and regulations that apply, supported by public awareness campaigns, clear guidance and, potentially, online training modules.
  2. Additionally, those with the influence to support users in becoming empowered digital citizens have a responsibility to do so. Technology companies, governments, educators, parents and civil society actors need to work together in order to keep pace with the changes to the digital world and update the education system accordingly. While there is broad recognition of the need to build digital literacy skills and knowledge, stakeholders must go beyond developing digital literacy and focus on the norms and behaviour that comprise digital citizenship.
  3. The following recommendations focus on how further collaboration between stakeholders can empower young people to improve their online communities as good digital citizens:
  1. The UK Government should define and standardise digital citizenship to enable educators to understand what it is and recognise its importance. The Government’s recent proposals for a media literacy strategy should sit at the heart of a wider drive to increase digital citizenship learning.
  2. Digital citizenship should be embedded into the national curriculum, with more specific guidance and training for practitioners on how best to teach it and through which programme of study it would most effectively be taught. Government should encourage and support school and youth centre leaders to train their staff to deliver digital citizenship learning effectively, embedding this training within initial teacher training, continuous professional development and youth worker training.
  3. All stakeholders in digital education should co-ordinate more effectively to ensure teaching and learning keeps pace with changes in technology and reflects the nature of contemporary online harms.
  4. Digital citizenship education models should be tailored for delivery in informal education contexts, where in-depth conversations and inspiring practitioners can effect positive behavioural change online.
  5. The UK Government should give schools adequate guidance on how to embed digital citizenship across the key stages, to ensure that gaps do not emerge in students’ learning and that key knowledge and skills are developed each year.

4) Would greater transparency in the online spending and campaigning of political groups improve the electoral process in the UK by ensuring accountability, and if so what should this transparency look like?

5) What effect does online targeted advertising have on the political process, and what effects could it have in the future? Should there be additional regulation of political advertising?

  1. Recent years have seen an explosion in data which can be used to target advertising, including location, IP address, browsing data, and information collected from devices and wearables. The next few years are likely to see a move to increasingly automated marketing, where valuable individuals or groups are tracked, measured and targeted by machines, potentially using machine-generated content. There are major risks associated with the use of this kind of data for advertising.
  2. This risk is most prominent in political advertising. Elections are becoming increasingly ‘datafied’, with advertising and marketing techniques being offered by a network of private contractors and data analysts, who sell cutting-edge methods for audience segmentation and targeting to political parties all over the world.[3] Questions of user consent, knowledge and privacy have not been answered, and the accountability and visibility of this kind of advertising are not adequate.
  3. It is of central importance that governments, civil society and the public are able to better understand the ways in which the internet is affecting society and democracy, in order to encourage its positive effects and curb its negative externalities. These kinds of decisions require evidence, and transparency is the tool through which this evidence can be gathered. Recent efforts by platforms to increase transparency around advertising have been inadequate. Statements by the European Commission earlier this year have underlined the need for further effort by platforms to make advertising data available for scrutiny.
  4. Good models for transparency exist, and should be used by the government as best practice when thinking about the wider online ecosystem and the future of expectations for transparency online.
  1. Transparency must be computational. For an online space to be transparent, it must be possible to observe it computationally. For instance, Twitter’s API allows for a holistic view of what takes place on that platform. Were that API not to exist, an otherwise nominally ‘public’ platform would not be transparent, simply because its scale overwhelms human capacity.
  2. Transparency must complement rights to data privacy, not erode them. A good model for transparency will protect individuals’ data privacy while enabling a macro understanding of the nature and scale of technology platforms’ processes, and any potential infringement of rights that stems from the use of the platform.
  1. Technical Recommendations: We recommend that online platforms be expected to provide functional, computational transparency for advertising. Mozilla’s recent letter to social media platforms, outlining the requirements of a functional advertising API in light of suspected exploitation and abuse of digital advertising, provides a highly useful blueprint for advertising APIs.[4] A functional, open API should have the following (an illustrative sketch is provided at the end of this answer):
  1. Comprehensive political advertising content. 
  2. The content of the advertisement and information about targeting criteria.
  3. Functionality to empower, not limit, research and analysis.
  4. Up-to-date and historical data access.
  5. Public access.
  1. Data Protection and Risks: Access to transparency data may be contentious. Although we believe that it is in the public interest to have oversight over all the areas outlined above, there may need to be exceptions to the types of data and types of access available to the general public. A ‘tiered’ access structure may be advisable in light of data protection and privacy expectations. However, we believe the starting point ought to be public access. The government could use the model of data trusts, currently being piloted by the Office for AI and the Open Data Institute, to provide an independent body that would determine access to companies’ data.
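To make the requirements listed above more concrete, the sketch below shows the kind of record and query that an advertising archive API of this sort might expose. It is illustrative only: the PoliticalAdRecord fields, the query_ad_archive function and the example values are assumptions made for the purpose of this submission, not a description of Mozilla’s proposal or of any existing platform API.

```python
from dataclasses import dataclass, field
from datetime import date


@dataclass
class PoliticalAdRecord:
    """One entry in a hypothetical, publicly accessible political ad archive."""
    ad_id: str
    sponsor: str                 # who paid for the advertisement
    content: str                 # the full creative text (requirement 2)
    targeting_criteria: dict     # e.g. {"age": "25-44", "region": "Yorkshire"}
    first_shown: date            # supports historical access (requirement 4)
    last_shown: date
    countries: list = field(default_factory=list)


def query_ad_archive(records, sponsor=None, since=None):
    """Minimal filter standing in for a public, machine-readable endpoint.

    A functional API would let any researcher run queries like this at scale
    (requirements 3 and 5), rather than inspecting advertisements one page
    at a time through a web interface.
    """
    results = list(records)
    if sponsor is not None:
        results = [r for r in results if r.sponsor == sponsor]
    if since is not None:
        results = [r for r in results if r.last_shown >= since]
    return results


if __name__ == "__main__":
    archive = [
        PoliticalAdRecord(
            ad_id="ad-001",
            sponsor="Example Campaign Ltd",  # hypothetical sponsor
            content="Vote for change on 12 December.",
            targeting_criteria={"age": "25-44", "interests": ["local news"]},
            first_shown=date(2019, 11, 1),
            last_shown=date(2019, 12, 11),
        )
    ]
    for record in query_ad_archive(archive, since=date(2019, 11, 1)):
        print(record.ad_id, record.sponsor, record.targeting_criteria)
```

A ‘tiered’ access model of the kind described above could then be implemented at the endpoint level, with the public tier returning aggregate fields and vetted researchers receiving fuller targeting detail.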

8) To what extent does social media negatively shape public debate, either through encouraging polarisation or through abuse deterring individuals from engaging in public life?

 

9) To what extent do you think that there are those who are using social media to attempt to undermine trust in the democratic process and in democratic institutions; and what might be the best ways to combat this and strengthen faith in democracy?

  1. Over more than a decade, extremist groups have successfully exploited and amplified social anxieties, working to ‘mainstream’ their fringe narratives, with hateful and divisive ideologies becoming more pronounced in the online ecosystems of social media. Using increasingly refined and adept strategies to radicalise, manipulate, and intimidate, these actors have learnt how to organise themselves and mobilise across borders. The rapidly evolving online ecosystem provides these committed bad actors with the means to spread their noxious ideologies further and faster than ever before.
  2. The most visible manifestations of this ‘mainstreaming’ process are the successes of populist and far-right parties at the ballot box, historic spikes in hate crime, and virulent anti-immigration sentiment and activism. The distortive information campaigns of malign actors – including the ever more sophisticated propaganda machines of state actors – have had a significant impact on democratic processes and civic culture, successfully swaying electorates across the continent. If left unchecked, this ‘mainstreaming’ of extreme ideologies could undermine the progress made in key policy sectors, chiefly migration, climate change and human rights.
  3. Decisions by platforms to reduce the ability of civil society organisations and the public to observe, analyse and report on what is taking place online are troubling. Web platforms, and social media platforms in particular, are essential sources of insight into the ways in which the web is changing politics, society and culture, including the impact of potential online harms. We believe that civil society research organisations, academia, open source intelligence groups and investigative journalists have a vital role to play, both in using online sources to report on changes in the online world and in evaluating the roles and decisions of online platforms.

10) What might be the best ways of reducing the effects of misinformation on social media platforms?

  1. It is now well-evidenced that some ‘legal harms’, such as disinformation and extremism that does not meet the threshold of hate speech, pose threats to public safety, public health and the integrity of democratic processes. However, in seeking to deal with these ‘legal harms’, ISD urges the strong application of transparency and safety by design regulation over the systems and processes of technology platforms, rather than a focus on content moderation regulation, which would endanger rights to free speech if applied to this set of ‘legal harms’. Thus, in matters of disinformation and large-scale exploitation or manipulation of online spaces, computational transparency over content and communications should act as a system to alert authorities to potential breaches (an illustrative sketch is provided at the end of this answer).
  2. Guidance should be developed in a range of areas to ensure effective safety by design, including: AI and algorithmic decision-making; user journeys and uses of positive “nudges” that encourage user choice and clear understanding of settings (e.g. privacy); recommendation filters and algorithms; and live-streaming. In order to achieve safety by design in these areas and others, organisations should be encouraged to employ scenario or risk forecasting and stress-testing approaches to ensure products are future-proofed against online harms. Practical guidance should also be provided on the key principles of ethical design, as well as practical tools to help assess and predict possible safety concerns during the early stages of design. Overall, there needs to be a cultural shift in the online technology sector to ensure that safety by design becomes embedded as a foundational principle throughout technology companies.
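As a simple illustration of how computational transparency could feed the kind of alerting system described in point 1 above, the sketch below flags bursts of near-identical posts published by many distinct accounts within a short window, one crude indicator of possible coordinated manipulation. The Post structure, the thresholds and the heuristic itself are assumptions made for illustration; they do not describe any platform’s data or an agreed regulatory standard.

```python
from collections import defaultdict
from dataclasses import dataclass
from datetime import datetime, timedelta


@dataclass
class Post:
    """Hypothetical record exposed through a transparency API."""
    account_id: str
    text: str
    posted_at: datetime


def flag_coordinated_bursts(posts, window=timedelta(minutes=10), min_accounts=20):
    """Return message texts posted by many distinct accounts within a short window.

    Deliberately crude: a real system would normalise text properly, weight by
    account history, and route flags to human analysts before any report is made.
    """
    by_text = defaultdict(list)
    for post in posts:
        by_text[post.text.strip().lower()].append(post)

    flagged = []
    for text, group in by_text.items():
        group.sort(key=lambda p: p.posted_at)
        for i, first in enumerate(group):
            in_window = [p for p in group[i:]
                         if p.posted_at - first.posted_at <= window]
            if len({p.account_id for p in in_window}) >= min_accounts:
                flagged.append(text)
                break
    return flagged
```

Flags of this kind would only ever be a starting point for human review; the purpose of the example is to show that oversight of ‘legal harms’ can operate at the level of systems and patterns rather than individual items of speech.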

 

 


 


[1] https://www.wired.com/story/the-toxic-potential-of-youtubes-feedback-loop/

[2] https://www.buzzfeednews.com/article/craigsilverman/slow-facebook

[3] https://demosuk.wpengine.com/wp-content/uploads/2018/07/The-Future-of-Political-Campaigning.pdf

[4] See the open letter signed by Mozilla, ISD and 10 expert organisations for an effective advertising API model, which stresses that all official communications by political parties should be accessible, at a minimum at the level of content and metadata. https://blog.mozilla.org/blog/2019/03/27/facebook-and-google-this-is-what-an-effective-ad-archive-api-looks-like/