Dr Argyro Karanasiou – written evidence (DAD0038)
Dr Argyro P Karanasiou, Senior Lecturer in Law, LETS (Law, Emerging Tech & Science) Lab, School of Law & Criminology, University of Greenwich
- The rise and fall of participatory democracy
- Two decades ago, revolutionary developments in ICT facilitated new forms of participatory democracy (Blumler and Coleman, 2001) and carried the promise of direct and deliberative forms of governance (OECD 2003). The tremendous impact of the internet on civic life, and the subsequent emergence of a transnational online public sphere (Cammaerts and van Audenhove, 2005), gave rise to new governance models (Carter et al, 2005; Welch et al, 2004) premised on web-mediated citizen-state interaction and an active citizenry (Parent, 2005). At the same time, the emergence of new intermediaries, namely ICT corporations acting as gatekeepers enforcing public policies, gave rise to a new wave of techno-determinism (Winner, 1977) that reshaped the political ecosystem: the “invisible handshake” (Birnhack and Elkin-Koren, 2003) between the state and the tech industry served the interests of both, maintaining political power and market dominance respectively. What was once the promise of an enabler of active citizenry soon morphed into an opportunity to profit from the rapid datafication of citizens, paired with additional surveillance mechanisms (Hintz and Brown, 2017). The Snowden revelations in 2013 and the Cambridge Analytica scandal in 2018 are indicative of severe blows to public life: their aftermath was a continuing sentiment of mistrust towards the state (Tavani et al, 2014; Cadwalladr, 2017) that discouraged citizens from any active involvement with the commons.
- Technology as a building block for demagogic populism
- We are currently witnessing an era of great disillusionment with democracy and civic life: the Democracy Perception Index 2018 survey reports that 54% of citizens in modern democracies do not feel that their voice has an impact. The highest levels of dissatisfaction have been reported in Europe, which does not come as a surprise given the unstable political climate and several failed attempts to let the public decide on delicate (and legally disputable) political matters, thereby carrying all the responsibility. In the last five years alone, European citizens were asked to vote in referenda whose results did not materialise: the Catalan independence referendum in 2017 was declared illegal, the UK Brexit referendum in 2016 led to major political turbulence and a series of deliberations still ongoing, and the Greek bailout referendum in 2015 was bluntly disregarded by the government three days after the results were in. Added to this is the sharp increase in immigration, which is further credited as a key factor in the rise of far-right parties in Europe (Halla et al. (2017) for Austria; Dustmann et al. (2016) for Denmark; Sekeris and Vasilakis (2016) for Greece; Brunner and Kuhn (2014) for Switzerland; Becker and Fetzer (2016) for the UK).
- The political sphere in Europe has transformed massively over the past decade, and technology has played a key role in this. The de-legitimisation of old governance intermediaries has provided fertile soil for techno-populist ideologies, namely bottom-up governance models that further de-politicisation (De Blasio et al, 2018). This does not pose a threat to democracy per se, but it certainly has the potential to set new paradigms in a disruptive manner (Bloom et al, 2019). Historically, however, demagogy has thrived on populism (Stanley, 2008), and techno-populism is no exception. In this climate of growing scepticism and disappointment over representation, technology is now being used to appeal to citizens’ emotional responses and to reveal their vulnerabilities to demagogues. No doubt technology still bears an undeniable potential to support participatory democracy and enhance good governance; yet recent instances of AI serving as a means of mass manipulation and micro-targeting (noted below) make this a missed opportunity:
- Computational propaganda: Disinformation and manipulation of public life
- The Cambridge Analytica revelations highlighted a well-known reality: the promotional industries of advertising, professional lobbying and marketing have long had an interest in reaching out to the public via “media-buying”, namely purchasing space/time on media outlets. Beyond this, though, it is an indicative case of technology used as a means of political manipulation. Instead of reaching out to the public through traditional channels, political actors and promotional industries now join forces through technology: this has led to the phenomenon of computational propaganda (Howard and Woolley 2016). In a manifestly data-driven environment, the electorate is treated as a commercial audience, ready to passively consume content carefully tailored to each individual citizen and thereby serving political purposes. The commercial logic permeating modern politics, paired with dominant digital market players using sophisticated social profiling practices (Turow 2012), poses a serious threat to democracy.
- Automation has furthered these practices, as AI has already been used on various occasions to manipulate public opinion. Take for example political bots, namely AI-enabled accounts that are programmed to mimic human accounts and can perform various manipulative actions online, such as:
(i) astroturfing, namely bots creating a false perception of popularity by liking and sharing particular content on social media (Woolley 2016). The US Presidential Election in 2016 is a good example of political bots employed to silence genuine political discourse (Howard et al 2018);
(ii) disinformation, namely spreading fake news in an attempt to shape public discourse and distort political sentiment. The trending Twitter hashtag “#MacronLeaks” in the 2017 French Presidential Election involved a swarm of fake bot accounts that spread false information about Emmanuel Macron and dominated social media feeds to create a public sensation;
(iii) trolling, namely bots attacking opposition to a ruling regime, investigative journalists and political dissidents online, often using hate speech, so as to discourage such voices from partaking in public discourse. This is a disruptive tactic used as a tool of social control mainly by authoritarian regimes (Aro 2016).
The above is a non-exhaustive list of instances of computational propaganda, namely AI-enabled tactics for manipulating public discourse.
- Datafication and micro-targeting
- The growing competition among marketeers to develop sophisticated micro-targeting techniques that reach interested audiences and identify potentially interested ones (Alva et al, 2017) has found new applications in the political sphere. The ICO-commissioned report “The Future of Political Campaigning” (Bartlett et al 2018) offers some valuable insights in this respect and identifies several key trends of data analytics serving political purposes:
(i) Audience segmentation and granular information
This provides marketeers with an accurate and narrow identification of individuals based on the analysis of demographic, behavioural and attitudinal data.
(ii) Cross-device targeting
This allows campaigners to reach individuals with messages at the time and place when they are most receptive to them. By being able to track people, rather than devices, political campaigners gain further insights into a voter’s multi-dimensional personality, making profiling far more sophisticated than ever before (Moore and Tambini, 2018). The scope for such profiling widens further in the light of the IoT, namely the interconnected sensory devices used in each household, ranging from smart appliances to virtual personal assistants such as Amazon’s Alexa.
(iii) Sentiment analysis and psychographics
Sentiment analysis has predominantly been used in data science to analyse customer feedback on products and services, for instance to understand user ratings in sectors such as travel and hotel bookings. It has also become popular to classify user tweets as positive, negative or neutral by crawling Twitter through its APIs.
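The positive/negative/neutral tweet labelling described above can be sketched with a minimal lexicon-based scorer. This is an illustrative toy, not a production model: the word lists and scoring rule here are assumptions for demonstration, whereas real campaign tooling would use trained classifiers and live data collected via the Twitter APIs.

```python
# Minimal lexicon-based sentiment classifier: a toy sketch of the
# positive/negative/neutral tweet labelling described in the text.
# The word lists below are illustrative assumptions, not a real lexicon.

POSITIVE = {"good", "great", "love", "excellent", "support", "win"}
NEGATIVE = {"bad", "terrible", "hate", "corrupt", "fail", "lies"}

def classify(tweet: str) -> str:
    """Label a tweet positive, negative or neutral by counting lexicon hits."""
    words = [w.strip(".,!?#") for w in tweet.lower().split()]
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

if __name__ == "__main__":
    print(classify("Great rally tonight, love the support!"))  # positive
    print(classify("Corrupt politicians and their lies"))      # negative
```

Even this crude approach shows why the technique scales so easily: classifying millions of posts is cheap, and aggregating the labels by topic or hashtag yields the kind of real-time “political sentiment” signal that campaigners and psychographic profilers exploit.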
- At the same time, AI has proven to be an excellent tool for gaining insights into voting behaviour and behavioural patterns; a good example of this is Mavenoid’s AI test (https://ai-valet.se), which allows for real-time analysis of a few sample questions before offering tailored ones to participants. This goes a step further than the original focus of traditional questionnaires, namely surveying voting intention, and promises precision, as participants cannot game the system with their answers.
- Besides the obvious concerns over the grave privacy intrusions of such practices, there are also additional implications for the individual’s autonomy, free will and uninfluenced decision making, which are far more subtle and thus hardly measurable. Most importantly, the involvement of tech firms in data mining, behavioural analytics and micro-targeting with a view to facilitating governmental ambitions for control, in exchange for market dominance, is a trade-off that establishes an oligopoly of data market players whilst undermining the core essence of democracy: pluralism.
- Outsourcing governance and accountability: AI as a means of addressing the trust deficit in modern politics and governance
- The digital era furnishes European democracies with an unprecedented opportunity for participatory democracy: a mostly tech-savvy, interconnected community of citizens paired with a supporting legal infrastructure, as evidenced by the EU framework for the Digital Single Market. A 2017 Deloitte survey found that 85% of the UK population have access to a smartphone, a figure poised to surpass 92% in the next three years. Yet at the same time, computational propaganda and the use of AI for micro-targeting and for the generation and dissemination of fake content online have already greatly impacted political discourse. Democracy has become a commercial product in the digital era: marketed for mass consumption and managed by private commercial entities who share clientele with political parties for profit. This has resulted in a trust deficit towards the state, which in turn has de-legitimised traditional governance models. An interesting outcome of this is the paradox of citizens trusting AI-driven governance, even if this means limited accountability and transparency. In a 2019 survey by the Centre for the Governance of Change at IE University, a quarter of participants expressed a preference for policy decisions to be made by AI instead of politicians. As the report notes, this highlights the following paradox: “while the public is fearful of advancements in tech, particularly increased automation, one in four Europeans would prefer artificial intelligence to make important decisions about the running of their country. In nations such as the Netherlands, Germany, and the United Kingdom, the percentage is even higher – one in every three. Amid the vagaries of Brexit and current questions around the European model of representative democracy, the results tellingly reflect significant levels of disillusion towards politicians.”
- At first glance this is an alarming prospect, given the inherent bias (Binns 2017; Diakopoulos 2014), opacity (Pasquale 2015) and accountability diffusion (Karanasiou et al 2017) in automated decision making. It should, however, be viewed as a natural result of a data-driven reality that has led to the amalgamation of consumers and voters into a hybrid e-citizenry: in the same manner, traditional forms of governance have given way to GovTech, which promises direct governance modelled on business principles and fuelled by big data. Businesses have long relied on algorithms to reach strategic decisions (e.g. in the hiring process), following practices that have been criticised on grounds of privacy intrusion and discrimination (Williams et al 2018) but that undeniably hold great potential to maximise productivity and innovation (Makridakis 2017). In a similar vein, many European countries welcome the involvement of start-ups in AI-enabled governance: take for example Fluicity, the French start-up running a cross-regional citizen participation platform, or Familio, the company behind the Danish messaging system between parents and nurseries.
- Nonetheless, whereas the main goal of a business entity is profit-making, the public sector has a different aim when embracing new technologies: to provide good governance that sustains democracy through (i) accountability and (ii) transparency. The widespread use of algorithmically enabled decision making in social settings has given rise to serious concerns about potential discrimination and bias inadvertently encoded in the decision. Moreover, the use of machine learning in fields with high levels of accountability (and thus transparency), such as public administration or law enforcement, highlights the need for clear interpretability of outputs. The fact that a human operator might be out of the loop in automated decision making does not guarantee that human bias will not be part of the result yielded by the machine. The absence of due process and human reasoning exacerbates the already limited accountability and adds to the challenge, as algorithmically driven processes are often so complex that their outcomes cannot be explained or foreseen even by their engineering designers: this is often referred to as the “black box” problem in AI.
- To this end, the EU General Data Protection Regulation (GDPR) includes a series of provisions often referred to as a “right to explanation”. These are art 22 GDPR, which addresses automated individual decision making, and articles 13, 14 and 15 GDPR, which focus on transparency rights around automated decision-making and profiling. Article 22 GDPR reserves a “right not to be subject to a decision based solely on automated processing” when this decision produces “legal effects” or “similarly significant” effects on the individual. Articles 13-15 GDPR involve a series of notification rights when information is collected from the individual (art 13) or from third parties (art 14) and the right to access this information at any moment in time (art 15), thereby providing “meaningful information about the logic involved”. Further to this, Recital 71 reserves for the data subject the right “to obtain an explanation of the decision reached after such assessment and to challenge the decision” where an automated decision has been made that produces legal effects or similarly significantly affects the individual. Although Recital 71 is not legally binding, it does provide guidance as to how the relevant articles in the GDPR should be interpreted.
- There is growing criticism as to whether a mathematically interpretable model would suffice to account for an automated decision and guarantee transparency in automated decision making. Alternative approaches include ex post auditing and focus on the processes around Machine Learning models rather than examining the models themselves, which can be inscrutable and non-intuitive (Selbst & Barocas, 2018).
- As such, the reliance of public administration on machine learning and automated decision-making need not lead to a Kafkaesque dystopia, provided that legal mechanisms guaranteeing due process (Coglianese et al 2016) and intelligibility are in place. In this vein, the GDPR’s provisions for a right to an explanation (art 22) and to information (art 15) on automated decision making provide some scrutiny tools, yet these are not sufficient (Wachter et al 2017) and need better and clearer legal grounding; a good example in this direction is the French Digital Republic Act (loi pour une République numérique, no. 2016-1321), which, paired with freedom of information laws, allows for transparent AI-enabled public administration (Edwards et al 2018).
Alva, A et al (2017), “Cross-Device Tracking: Measurement and Disclosures”, Proceedings on Privacy Enhancing Technologies, Volume 2017, Issue 2
Aro, J. (2016), “The cyberspace war: propaganda and trolling as warfare tools”, European View, 15(1), 121-132.
Bartlett, J., Smith, J., & Acton, R. (2018). The future of political campaigning. Demos. Available at www.demos.co.uk/project/the-future-of-political-campaigning/, <accessed 10/06/2019>
Becker, S. O. and Fetzer, T. (2016), “Does Migration Cause Extreme Voting?”, Warwick Working Paper Series 306.
Binns, R. (2017) “Algorithmic Accountability and Public Reason”, Philosophy & Technology 1, 4
Birnhack, M. D., & Elkin-Koren, N. (2003) “The invisible handshake: The reemergence of the state in the digital environment” Va. JL & Tech., 8, 1.
Bloom, P., & Sancino, A. (2019) Disruptive Democracy: The Clash Between Techno-Populism and Techno-Democracy. SAGE Publications Limited.
Blumler, J. G., & Coleman, S. (2001). Realising democracy online: A civic commons in cyberspace (Vol. 2). London: IPPR.
Brunner, B. and A. Kuhn (2014), “Immigration, Cultural Distance and Natives’ Attitudes Towards Immigration: Evidence from Swiss Voting Results”, IZA Discussion Papers 8409, https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2492436. <accessed 10/06/2019>
Cadwalladr, C. (2017). “The great British Brexit robbery: how our democracy was hijacked”, The Guardian, 7 May 2017, available at https://www.theguardian.com/technology/2017/may/07/the-great-british-brexit-robbery-hijacked-democracy, <accessed 10/06/2019>
Cammaerts, B., & Audenhove, L. V. (2005) “Online political debate, unbounded citizenship, and the problematic nature of a transnational public sphere.” Political communication, 22(2), 179-196.
Carter, L., & Bélanger, F. (2005) “The utilization of e‐government services: citizen trust, innovation and acceptance factors.” Information systems journal, 15(1), 5-25.
De Blasio, E., & Sorice, M. (2018) “Populism between direct democracy and the technological myth.” Palgrave Communications, 4(1), 15.
Diakopoulos, N, (2014) “Algorithmic Accountability Reporting: On the Investigation of Black Boxes” (Tow Center for Digital Journalism), Report available at https://academiccommons.columbia.edu/doi/10.7916/D8ZK5TW2, <accessed 10/06/2019>
Dustmann, C., K. Vasiljeva and A. Piil (2016), “Refugee Migration and Electoral Outcomes”, CReAM Discussion Paper Series CPD 19/16.
Edwards, L., & Veale, M. (2018) “Enslaving the Algorithm: From a “Right to an Explanation” to a “Right to Better Decisions”?” IEEE Security & Privacy, 16(3), 46-54.
European Commission, Directorate-General for Communications Networks, Content and Technology (2018), A Multi-Dimensional Approach to Disinformation: Report of the Independent High-Level Group on Fake News and Online Disinformation, 12
Halla, M., A. F. Wagner and J. Zweimüller (2017), “Immigration and Voting for the Far Right”, Journal of the European Economic Association, https://doi.org/10.2139/ssrn.2103623.
Hintz, A., & Brown, I. (2017). “Digital Citizenship and Surveillance| Enabling Digital Citizenship? The Reshaping of Surveillance Policy After Snowden”, International Journal of Communication, 11, 20.
Howard, P., & Woolley, S. (2016), “Political Communication, Computational Propaganda, and Autonomous Agents”, International Journal of Communication, 10, 20
Karanasiou, A. P., & Pinotsis, D. A. (2017), “A study into the layers of automated decision-making: emergent normative and legal aspects of deep learning.” International Review of Law, Computers & Technology, 31(2), 170-187.
Makridakis, S. (2017). “The forthcoming Artificial Intelligence (AI) revolution: Its impact on society and firms.” Futures, 90, 46-60.
Moore, M., & Tambini, D. (Eds.). (2018). Digital dominance: the power of Google, Amazon, Facebook, and Apple. Oxford University Press.
OECD Report (2003) Promise and Problems of E-Democracy, available online at http://www.oecd.org/gov/digital-government/35176328.pdf, <accessed 10/06/2019>
Parent, M., Vandebeek, C. A., & Gemino, A. C. (2005). Building citizen trust through e-government. Government Information Quarterly, 22(4), 720-736.
Pasquale, F. (2015) The Black Box Society: The Secret Algorithms That Control Money and Information, Harvard University Press 2015
Howard, P. N., Woolley, S., & Calo, R. (2018), “Algorithms, bots, and political communication in the US 2016 election: The challenge of automated political communication for election law and administration.” Journal of Information Technology & Politics, 15(2), 81-93.
Sekeris, P. and C. Vasilakis (2016), “The Mediterranean Refugees Crisis and Extreme Right Parties: Evidence from Greece”, MPRA Paper 72222, 1–14. https://mpra.ub.uni-muenchen.de/72222/1/MPRA_paper_72222.pdf.
Selbst, A. D., & Barocas, S. (2018). The intuitive appeal of explainable machines. Fordham L. Rev., 87, 1085.
Stanley, B. (2008) “The thin ideology of populism.” Journal of political ideologies, 13(1), 95-110.
Tavani, H. T., & Grodzinsky, F. S. (2014). “Trust, betrayal, and whistle-blowing: Reflections on the Edward Snowden case”, ACM SIGCAS Computers and Society, 44(3), 8-13.
Turow, J. (2012). The daily you: How the new advertising industry is defining your identity and your worth. Yale University Press.
Wachter, S., Mittelstadt, B., & Floridi, L. (2017). “Why a right to explanation of automated decision-making does not exist in the general data protection regulation.” International Data Privacy Law, 7(2), 76-99.
Welch, E. W., Hinnant, C. C., & Moon, M. J. (2004) “Linking citizen satisfaction with e-government and trust in government”, Journal of public administration research and theory, 15(3), 371-391.
Williams, B. A., Brooks, C. F., & Shmargad, Y. (2018), “How algorithms discriminate based on data they lack: challenges, solutions, and policy implications”, Journal of Information Policy, 8, 78-115.
Winner, L. (1977). Autonomous Technology: Technics-Out-of-Control as a Theme in Political Thought. Cambridge: MIT Press