Written Evidence Submitted by the Information School, University of Sheffield

(GAI0017)

 

Authors (alphabetical order):

Dr Jo Bates

Prof. Laurence Brooks

Dr Niall Docherty

Dr Denis Newman-Griffis

Dr Susan Oman

Dr Judita Preiss

Dr Xin Zhao

Executive Summary

        Effective, future-oriented governance of AI requires a perspective shift from technology-centred approaches to a practice-oriented approach that reflects the social and human processes around AI development and use.

        Practice-oriented training of AI professionals and continuous engagement with communities affected by AI technologies are key elements of developing a more responsible, transparent, and effective AI ecosystem.

        AI transparency must move beyond individual decisions and technical design to highlight the social contexts in which AI systems are developed, evaluated, deployed, and used.

        The Living With Data project at the University of Sheffield provides valuable initial examples of resources and guidance for fostering open communication and dialogue around AI technologies outside the research laboratory.

        More work between researchers, practitioners, policymakers, and communities is needed to develop holistic approaches to creating and governing AI practices.

 

About the authors

We are a group of interdisciplinary researchers from the University of Sheffield Information School (iSchool), with expertise in Data Science, Responsible AI and Computing, Education, Digital Societies, Information Systems and Natural Language Processing. The Information School is the number one ranked Library and Information Management department in the world (according to QS World University Rankings, 2022), and the school’s transdisciplinary focus in research and teaching has put it at the forefront of the field for several decades. The evidence below draws on our own research on AI governance and related areas, as well as our collaborative experiences training AI and Data practitioners in university settings. In particular, we are invested in integrating Social Science and Humanities (SSAH) perspectives with Science, Technology, Engineering, and Mathematics (STEM) in the study, development, and public communication of AI technologies. We are submitting this piece of written evidence collectively, as we believe that similarly integrating the full range of disciplinary expertise in the governance of AI will strengthen the government’s policy making approach in the future.

Dr. Susan Oman is Lecturer in Data, AI and Society at the University of Sheffield. At present she is seconded into DCMS as part of an AHRC fellowship. Her contributions to this written evidence are in her capacity as an academic at The University of Sheffield only.

 


 

P1          Our evidence is structured in response to three questions posed by the Committee: (1) How effective is current governance of AI in the UK? (2) What measures could make the use of AI more transparent and explainable to the public? and (3) What lessons, if any, can the UK learn from other countries on AI governance?



Q1. How effective is current governance of AI in the UK?

P2          Current approaches to governance of AI in the UK emphasise a technology-first perspective, in which AI effort is focused on developing powerful, general-purpose technologies first, with questions of adaptation to specific settings, adoption, integration, and trust being secondary concerns addressed after technology development. This focus on development of technologies as generalisable tools is reflected in the current government strategy related to AI. The 2021 National AI Strategy [1] emphasises access to people, data, compute, and finance for developing AI technologies, and the 2022 AI Action Plan [2] highlights improving access to data and computing resources within the UK R&D ecosystem. The 2022 Defence AI Strategy [3] recognises the broader context of AI and emphasises multi-stakeholder governance, but the core strategy remains development and use of technologies rather than situated human-machine solutions.

P3          The emphasis on technology development is also reflected in government efforts for advancing and managing AI in practice. For example, the AI Data Scientist role developed by the National Careers Service [4] describes using data and building technologies, but not understanding where data come from, what purposes technologies may serve (and whether those purposes are appropriate), or how data and technology use may change in different local or international contexts—a common contributor to AI inequalities [5]. Similarly, the May 2022 report on Understanding UK AI R&D commercialisation and the role of standards [6] focuses solely on technical standards for successfully transferring and trusting AI technologies in practice.

P4          The technology-first perspective, however, overlooks two key concerns that differentiate general-purpose AI tools developed in research settings from specific uses of AI in practice, and which evaluation and governance must account for.

P5          The first is the human factors involved in designing and using AI in specific contexts. AI methodologies may be designed as general-purpose tools, but as data-driven technologies their use is uniquely specific—analysing specific data, in specific settings, and used for specific purposes. Using AI in practice requires adapting general AI technologies into context-specific AI solutions, including identifying appropriate data to analyse, understanding the characteristics of those data and what they can and cannot say, and defining how the use of an AI technology might (or should not) be integrated into complex organisational practices.

P6          Failing to address the human aspects of the data going into AI technologies, and of how those technologies are used, can lead to solutions that are harmful (e.g. the use of racially biased measures of healthcare cost in treatment prioritisation [7]), ineffective, or both, as in the case of prejudicial automated recommendations in child welfare cases that were often ignored by mistrustful users [8]. Appropriately addressing these concerns requires recognising the socio-technical complexities of AI in practice, including how technology draws on human data and informs human processes, and how questions of trust affect these interactions.

P7          The second concern is the paradigm shift involved in moving towards a more AI-driven economy. Current efforts focus on using AI to improve existing processes and ways of thinking: improving productivity, efficiency, decision-making accuracy, and so on. However, major industry organisations in AI R&D have built enormous successes on new ways of thinking with data, combining the unanticipated information that Big Data can yield with human ingenuity. Examples include Google’s use of web links and personal search histories to generate highly targeted advertising, and Facebook’s use of social media posts and behaviour for recommendations and adverts.

P8          These innovations were founded on a deep understanding of why data were generated and what those data said about people, far beyond traditional ways of thinking in advertising. Moreover, failures to consider broader and more diverse contexts in these innovations have created harmful design flaws that are often masked by the economic value they create [9]. Regulation and governance that focuses only on technology development within existing paradigms is inherently reactive. Proactively identifying and mitigating the risks of inappropriate AI requires forward-looking thinking grounded in a social understanding of the role of data and technology use in AI solutions. [10]

P9          A holistic view of AI R&D and governance is needed to address these issues and to re-engage with public trust and data commercialisation practices. Highly publicised failures of AI, and a perceived lack of data-centred governance to prevent such failures, lead citizens to imagine the worst of new AI technologies [11]. Rather than focusing on AI technologies directly, people-centred governance of the applications that use AI technologies, e.g. the current Online Safety Bill [12], can be a powerful route to push industry actors to thoughtfully manage the social design and impact of their technologies while still achieving commercial aims.

P10      Governance efforts should shift towards fostering a holistic practice-oriented view of AI, encompassing social and organisational perspectives as well as technical development. Mapping a path forward for governing AI practices requires an active dialogue between researchers, practitioners, stakeholders from impacted communities, and government, grounded in shared concepts and understanding. New research, such as understanding the nature and limitations of available data [13], developing frameworks to guide sociotechnical analysis of AI solutions [14], and defining the ecosystem required around responsible AI [15], must be paired with new efforts in educating and engaging AI practitioners and the public.



Towards a practice-oriented view of AI governance

P11      In current academic and industrial R&D, AI practitioners with 'social' and 'technical' backgrounds are often institutionally separated and siloed [16]. This hinders effective communication, leading to polarised perspectives about AI governance [17]. This is largely due to the way technical approaches have simultaneously been prioritised and made to appear inscrutable [18]. Practitioners with social sciences and humanities (SSAH) training may feel technically inadequate and unable to intervene in discussions surrounding the governance of AI. Simultaneously, practitioners with science, technology, engineering and mathematics (STEM) backgrounds may feel less confident in engaging in discussions with an overt ethical or governance focus [19].

P12      This impasse could be improved by combining SSAH and STEM approaches when training AI practitioners in higher education. For example, higher education institutions could (1) equip technically-focused students with the vocabulary to engage with ethical questions of AI governance, (2) position SSAH perspectives as essential contributors to the research space, and (3) furnish all students with the skills and knowledge to support effective, non-dogmatic, and non-hierarchical dialogue between disciplines. Doing so would reduce the current separation between social and technical approaches to training AI practitioners in higher education, and help produce more responsible and interdisciplinary AI professionals in future.

P13      Diversifying perspectives on AI governance will help create the interdisciplinary equity needed to make space for fresh perspectives, interventions, and ideas. Another essential source of expertise is engagement with stakeholders from impacted communities early in the policymaking process. Inspiration could be taken from participatory design protocols in Human-Computer Interaction, which enlist intended user groups to help technologists develop products better suited to their needs [20]. These approaches can complement existing engagements with industry, which largely focus on the technical administration of AI [21] and as a result may unfairly privilege private companies with the ability to set AI governance agendas and norms [22].

P14      It is also important to acknowledge that the views gathered through community (and industry) engagement should be valued as situated expertise, rather than as definitive or representative [23]. Moreover, recalling related debates around participatory governance, community engagement ought not to be positioned as a silver bullet for understanding the societal impacts of AI, but rather as a necessary ground-clearing activity that opens the conversation onto wider terrains, leading to more imaginative, relevant, and inclusive outcomes [24].

P15      Overall, we propose a move away from technological solutionism, which reduces the societal issues surrounding AI to technical problems with technical answers. Instead, we suggest treating AI as a sociotechnical phenomenon, viewing technology as an integration of human systems and relationships with technical modes of administration and organisation. This approach to studying, teaching, and building AI has a solid foothold in industry research environments [25,26] and is gaining ground in research-intensive universities (e.g. the Information School at the University of Sheffield is designing its new Data Science BSc around such sociotechnical perspectives). By adopting a similarly sociotechnical approach and synthesising across AI perspectives, policymakers could address gaps in the type of evidence currently used for AI governance. This more holistic, less siloed approach can bring more diverse voices to the table and enable the government to help create a better AI ecosystem from which more people in society can benefit in the future.




Q2. What measures could make the use of AI more transparent and explainable to the public?

P16      Current legislation only partly addresses the need for transparent and explainable AI. For example, UK GDPR establishes a right of access to details about personal data feeding into automated individual decision-making and profiling [27]. Yet, as the UK AI Strategy notes, UK GDPR “was not intended to comprehensively govern AI systems, or any other specific technologies. Many AI systems do not use personal data at all” [1] (at least according to the current legal definition of personal data).

P17      The UK Government’s Algorithmic Transparency Standard [28] for government departments and public sector bodies requires “a short description of the algorithmic tool, including how and why it is being used”, as well as “more detailed information about how the tool works, the dataset/s that have been used to train the model and the level of human oversight”. It is currently being iterated following piloting. However, the standard’s scope remains limited to these government departments and public sector bodies.

P18      These policy initiatives tend to reinforce technology-first approaches to transparent and explainable AI. However, such approaches are problematic for several reasons, outlined below.



Challenges with current approaches to Explainable AI

P19      Existing technology-first approaches to explainable AI include (i) constructing an approximation (surrogate) model using a simpler, explainable technique that is unrelated to, and typically less accurate than, the original model, and (ii) exploring individual feature contributions to a prediction (e.g. using SHapley Additive exPlanations and DeepLIFT [29], or LIME [30]). However, because the approach in (i) operates on different principles to the original model, its explanations may not faithfully describe that model. The explanations produced in (ii) focus on individual predictions, highlighting the features that contributed to each specific prediction (e.g. when considering the likelihood of being approved for credit, income may have been an important feature for one prediction, but the area the person lives in for another). While these techniques can be applied to any machine learning input and problem (including natural text or image data), individual prediction explanations are expensive to compute at scale, meaning this approach can result in overall biases being missed.
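
To make these two approaches concrete, the following minimal sketch (in Python, using scikit-learn only) illustrates (i) a global surrogate model fitted to a black-box classifier’s predictions, and (ii) simple per-prediction feature contributions for a linear model, the idea that SHAP and LIME generalise to arbitrary models. The dataset, model choices, and variable names are illustrative assumptions, not a description of any system cited above.

    # A minimal sketch of the two technology-first approaches to explainable AI.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.tree import DecisionTreeClassifier, export_text
    from sklearn.linear_model import LogisticRegression

    X, y = make_classification(n_samples=2000, n_features=6, random_state=0)
    feature_names = [f"feature_{i}" for i in range(X.shape[1])]

    # The "black box" whose behaviour we want to explain.
    black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

    # (i) Global surrogate: fit a simpler, inspectable model to the black box's
    # predictions. Its rules are readable, but it is a different model built on
    # different principles, so its explanations may not hold for the original.
    surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
    surrogate.fit(X, black_box.predict(X))
    fidelity = surrogate.score(X, black_box.predict(X))  # how often the two agree
    print(f"Surrogate fidelity to the black box: {fidelity:.1%}")
    print(export_text(surrogate, feature_names=feature_names))

    # (ii) Local feature contributions: for a linear model, each feature's
    # contribution to one prediction is simply coefficient * feature value.
    # SHAP and LIME generalise this idea to arbitrary models, one prediction
    # at a time, which is why computing them at scale is expensive.
    linear = LogisticRegression(max_iter=1000).fit(X, y)
    one_case = X[0]
    contributions = linear.coef_[0] * one_case
    for name, value in sorted(zip(feature_names, contributions), key=lambda t: -abs(t[1])):
        print(f"{name}: {value:+.3f}")

The fidelity score in the sketch makes explicit that a surrogate approximates, rather than reproduces, the original model; any rules read off it inherit that gap.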

P20      Prediction explanations also need to be presented to a user in an understandable manner, such as a description of the area of an image that led to a suggested diagnosis. This has fuelled work on automatically generated narrative explanations, in which a system generates an explanation alongside its prediction [31,32]. However, such systems require a large quantity of data to train and are therefore highly problem-specific.

P21      Another challenge is that explanations of predictions are, by definition, local to the individual predictions being made. They therefore do not reveal systematic biases the system may embody. For example, even when biasing features (e.g. ethnicity) are explicitly removed, bias may re-enter through indirectly related features (e.g. postcodes). Existing approaches to explainable AI do not address these and other limitations. The warning issued by the Information Commissioner's Office [33] regarding the accuracy of biometric technologies that attempt to analyse subjects' emotions suggests that the potential inaccuracy of a system should be made transparent alongside any system outputs.
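
The proxy effect can be demonstrated with a few lines of synthetic data. In the sketch below, the data, feature names, and thresholds are invented for illustration: the protected attribute is never used for training, yet a correlated “postcode” feature reproduces the disparity, and per-prediction explanations alone would not surface it.

    # A minimal synthetic sketch of the proxy problem described above.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 10_000

    protected = rng.integers(0, 2, n)                    # protected attribute
    postcode = (protected + (rng.random(n) < 0.1)) % 2   # proxy: ~90% aligned with it
    income = rng.normal(30_000 + 5_000 * protected, 8_000, n)

    # Historical outcomes are biased: at the same income, approval differs by group.
    approved = (income + 15_000 * protected + rng.normal(0, 5_000, n)) > 40_000

    # Train WITHOUT the protected attribute, but WITH the proxy.
    X = np.column_stack([income, postcode])
    model = LogisticRegression(max_iter=1000).fit(X, approved)

    predictions = model.predict(X)
    for group in (0, 1):
        rate = predictions[protected == group].mean()
        print(f"Predicted approval rate for group {group}: {rate:.1%}")
    # The gap between groups persists even though 'protected' was never a feature,
    # and a local explanation of any single prediction would point only to income
    # or postcode, not to this systematic difference.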

P22      Approaches to explainable AI may also be resisted by system developers and owners. For proprietary systems, owners may not wish to share a system’s inner workings for commercial reasons. Developers may also be concerned about the potential for abuse of the system. For example, while sharing a justification for rejecting a credit card application is generally seen as good practice, the justification may also indicate how to force the system into a different decision through small, fabricated adjustments to the inputs. Proposals to instead indicate the changes in inputs required to alter a decision, offered as an alternative to existing explainability models [34], therefore pose their own risks of system abuse.
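
This gaming risk can likewise be illustrated with a toy credit model. The sketch below (again using invented data, features, and thresholds) searches for the smallest fabricated change to a declared income that flips a rejection into an approval, which is precisely the behaviour that detailed explanations might enable.

    # A minimal sketch of the gaming risk described above, using an invented
    # credit-scoring model; all figures are illustrative assumptions.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(1)
    income = rng.normal(35_000, 10_000, 5_000)
    debt = rng.normal(10_000, 4_000, 5_000)
    approved = (income - 1.5 * debt + rng.normal(0, 3_000, 5_000)) > 20_000

    features = np.column_stack([income, debt])
    model = make_pipeline(StandardScaler(), LogisticRegression()).fit(features, approved)

    applicant = np.array([[28_000.0, 12_000.0]])  # declared income, declared debt
    print("Initial decision:", "approved" if model.predict(applicant)[0] else "rejected")

    # Knowing from the explanation that income drove the rejection, inflate the
    # declared income in small steps until the decision flips.
    adjusted = applicant.copy()
    while not model.predict(adjusted)[0]:
        adjusted[0, 0] += 500
    print(f"Decision flips once declared income reaches £{adjusted[0, 0]:,.0f}")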

P23      Technical measures aimed at explainable AI therefore need to be approached with caution.
 

 

Making AI systems transparent for the public

P24      Much of the research and policy work on transparent and explainable AI focuses on providing technical and legal information about how AI systems work, for example the 319-page methodology that was published to explain the A-level prediction algorithm [35]. However, as The Royal Society [36] observe, "different users require different forms of explanation in different contexts": sharing code and methodology is potentially helpful to other developers, but less useful to the students and schools receiving lower than expected A-level grades.

P25      Social science researchers have explored the limitations of technology-first transparency measures as solutions to the controversies, harms and problems that can result from the introduction of AI-based systems in different domains. These researchers have argued that “seeing” the technical detail of AI systems does not necessarily lead to people understanding those systems in their social context [37], nor can we expect transparency to have a simple relationship with accountability and public trust in AI systems [38]. Moreover, recent findings from the Living With Data project [39] (see below) have observed that people who had greater awareness of data uses tended to be more critical and cautious about them [40]. This suggests that increased transparency and public scrutiny of AI systems must be combined with more democratic decision-making in AI governance if AI is to be developed with the genuine consent and trust of citizens.

P26      Given these observations, social science researchers have recognised that transparency and explainability measures are of little value to the public if access to information is not packaged with “tools for turning that access to agency” [41].



Living With Data - meaningful transparency in practice

P27      Living With Data (LWD) is a Nuffield Foundation funded project that used survey and interview methods to capture diverse publics’ thoughts and feelings about various UK public sector data and AI systems in place, or under development, at the Department for Work and Pensions, the BBC and the NHS. To engage their research participants, the researchers undertook a process of creating written and visual accounts of the selected data and AI systems to use as prompts in surveys and interviews (see [42] for examples).

P28      Based on their experiences of ‘making transparent’ these various systems, the LWD team [43] developed a set of observations and recommendations for transparency measures. These place emphasis on creating resources about AI systems that can inform meaningful communication and dialogue across variations in people’s knowledge and experience, rather than following the existing model of one-way, technology-first communication. The following provides an overview of these observations and recommendations.

P29      Significant information asymmetries exist between organisations and members of the public about existing and proposed data/AI systems. Members of the public struggle to assess what they think should be made transparent if there is no public information about what AI systems are in place in different organisations across sectors. The UK government has begun to recognise this issue in its public sector Algorithmic Transparency Standard, yet further policy work needs to be done to reduce information asymmetries across sectors so that members of the public (including journalists and researchers) know (1) what AI systems are in use, (2) where efforts are/are not being made towards transparency and explainability, and (3) where more transparency and explanation may be needed.

P30      Recognise the potential and evolving societal impacts of an AI system and find ways to bring them into dialogue and understanding. This requires communicating potential harms alongside the benefits of AI systems, particularly when different people may have different views on risks, benefits, and their implications. This requires expertise not just in data and computer science, but in the type of critical thinking that characterises much of the work in the social sciences and humanities, and methodological expertise in communicating socio-technical phenomena to non-experts. It also requires people with expertise in recognising and challenging systemic and structural oppression.

P31      In many cases, assumptions about what information should be made transparent are driven by a technology-first perspective. There is a need to foster open discussion involving stakeholders from impacted communities (whilst not making them responsible) about which aspects of AI systems are important to make transparent, for which people and purposes, and to use these insights to inform contextual decision-making about transparency practices. This includes transparency about potential social consequences and risks, and about the level of technical detail an account ought to go into, recognising when it is, and is not, valuable to communicate what happens inside AI systems.

P32      Avoid obfuscation when communicating about AI systems, whether through providing too much, too detailed, too complex, or too little information. Doing this well requires significant time and appropriate expertise in written and visual communication. Information needs to be communicated in different ways for different AI systems and audiences.

P33      Acknowledge and foster understanding of the uncertainty inherent in AI systems. There is uncertainty around many aspects of AI systems: their outputs carry statistical uncertainty, they undergo ongoing development, their future uses are unclear, and their social implications are not yet known. This inherent uncertainty means that attempts to make dynamic AI systems transparent will only ever be partially successful. Talking about uncertainty needs to be embedded in transparency practices.

P34      Recognise that transparency practices can take place at various stages of AI system design and implementation. Often efforts to make AI systems transparent only take place once the system is deployed. Transparency earlier in the development process presents a potential opportunity for fostering public dialogue and democratic decision making about AI development.

P35      Recognise the resources needed for transparency practices and commit to making them available. All of these commitments would require significant resourcing if implemented at scale as part of a sustainable AI policy intervention. Which resources are made available, to whom, when, and how they are valued should also be recognised as inherently political.



Q3. What lessons, if any, can the UK learn from other countries on AI governance?

P36      A range of AI assessment approaches developed outside the UK offer useful lessons on AI governance. The EU’s High-Level Expert Group on AI (AI HLEG) has created a checklist and prototype tool for assessing the trustworthiness of AI, the Assessment List for Trustworthy Artificial Intelligence (ALTAI) [44], which is receiving positive feedback. Based on the concept of fundamental or human rights, ALTAI covers seven requirements, many of which have already been discussed above:

a)      Human Agency and Oversight;

b)      Technical Robustness and Safety;

c)      Privacy and Data Governance;

d)      Transparency;

e)      Diversity, Non-discrimination and Fairness;

f)        Environmental and Societal well-being; and

g)      Accountability.

P37      By collaborating with bodies/institutions outside of the UK, we can build on the ideas and systems already developed and make them appropriate for the UK while remaining integrated into the more global nature of modern technologies, including AI.

References

  1. Office for Artificial Intelligence. (21 Sept, 2021) “National AI Strategy.” Accessed at: https://www.gov.uk/government/publications/national-ai-strategy
  2. Office for Artificial Intelligence. (18 July, 2022) “National AI Strategy - AI Action Plan.” Accessed at: https://www.gov.uk/government/publications/national-ai-strategy-ai-action-plan/national-ai-strategy-ai-action-plan
  3. Ministry of Defence. (15 June 2022). “Defence Artificial Intelligence Strategy.” Accessed at: https://www.gov.uk/government/publications/defence-artificial-intelligence-strategy.
  4. National Careers Service. (n.d.) “Data Scientist”. Accessed at: https://nationalcareers.service.gov.uk/job-profiles/data-scientist Accessed 14 Nov 2022.
  5. D'Ignazio, C. and Klein, L. F. (2020). Data Feminism. MIT Press.
  6. Office for Artificial Intelligence. (12 May, 2022) “Understanding UK AI R&D commercialisation and the role of standards.” Accessed at: https://www.gov.uk/government/publications/understanding-uk-ai-rd-commercialisation-and-the-role-of-standards
  7. Obermeyer, Z., et al. (2019). "Dissecting racial bias in an algorithm used to manage the health of populations." Science 366(6464): 447-453.
  8. Eubanks, V. (2018). Automating inequality: How high-tech tools profile, police, and punish the poor, St. Martin's Press.
  9. Noble, S. U. (2018). Algorithms of Oppression: How Search Engines Reinforce Racism. New York University Press.
  10. Gerke, S., et al. (2020). "The need for a system view to regulate artificial intelligence/machine learning-based software as medical device." npj Digital Medicine 3(1): 53.
  11. Oman, S., Ditchfield, H., and Kennedy, H. (17 Nov 2022) “To understand uses of personal data in the present, people draw on the past and imagine the future.” LSE Impact Blog. Accessed at: https://blogs.lse.ac.uk/impactofsocialsciences/2022/11/17/to-understand-uses-of-personal-data-in-the-present-people-draw-on-the-past-and-imagine-the-future/
  12. UK Parliament. (28 June 2022). “Online Safety Bill.” Bill 121 2022-23. Accessed at: https://bills.parliament.uk/bills/3137
  13. Newman-Griffis, D., et al. (2022). “A roadmap to reduce information inequities in disability with digital health and natural language processing.” PLOS Digital Health 1(11): e0000135.
  14. Newman-Griffis, D., et al. (2022). “Definition drives design: Disability models and mechanisms of bias in AI technologies.” First Monday. In Press.
  15. Santiago, N. and Stahl, B. (2021). “Deliverable No. 4.3: SHERPA Final Recommendations”. DOI: https://doi.org/10.21253/DMU.14627130, Accessed at: https://figshare.dmu.ac.uk/articles/online_resource/D4_3_SHERPA_final_recommendations/14627130.
  16. Bates, J., et. al. (2020). Integrating FATE/critical data studies into data science curricula: where are we going and how do we get there? In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency (FAT* '20). Association for Computing Machinery, New York, NY, USA, 425–435. https://doi.org/10.1145/3351095.3372832
  17. Newman-Griffis, D., et al. (2021). Translational NLP: A New Paradigm and General Principles for Natural Language Processing Research. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT 2021), p. 4125.
  18. Kroll, J. A. (2018). The fallacy of inscrutability. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 376(2133), 20180084.
  19. Antoniou, J., et al. (2021). “Deliverable D5.6: SHERPA AI in Education Report”. DOI: https://doi.org/10.21253/DMU.16912318, Accessed at: https://figshare.dmu.ac.uk/articles/online_resource/D5_6_AI_in_Education_Report/16912318.
  20. Docherty, N. and Biega, A. J. (2022). (Re)Politicizing Digital Well-Being: Beyond User Engagements. In CHI Conference on Human Factors in Computing Systems (CHI ’22), April 29-May 5, 2022, New Orleans, LA, USA. ACM, New York, NY, USA, 13 pages. https://doi.org/10.1145/3491102.3501857
  21. Cath, C. et. al. (2018). ‘Artificial Intelligence and the “Good Society”: The US, EU, and UK Approach’. Science and Engineering Ethics 24, no. 2 (1 April 2018): 505–28. https://doi.org/10.1007/s11948-017-9901-7
  22. Wilson, C.  (2022). ‘Public Engagement and AI: A Values Analysis of National Strategies’. Government Information Quarterly 39, no. 1 (1 January 2022): 101652. https://doi.org/10.1016/j.giq.2021.101652.
  23. Katell, M. et. al. (2020). ‘Toward Situated Interventions for Algorithmic Equity: Lessons from the Field’. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, 45–55. FAT* ’20. New York, NY, USA: Association for Computing Machinery, 2020. https://doi.org/10.1145/3351095.3372874.
  24. Head, B. W. (2007). ‘Community Engagement: Participation on Whose Terms?’ Australian Journal of Political Science 42, no. 3 (1 September 2007): 441–54. https://doi.org/10.1080/10361140701513570.
  25. Microsoft Research. (n.d.) “FATE: Fairness, Accountability, Transparency, and Ethics in AI.” Accessed at: https://www.microsoft.com/en-us/research/theme/fate/. Accessed 22 Nov 2022.
  26. Google Research. (11 Jan 2022) “Google Research: Themes from 2021 and Beyond.” Accessed at: https://ai.googleblog.com/2022/01/google-research-themes-from-2021-and.html 
  27. Information Commissioner’s Office (n.d.). Rights related to automated decision making including profiling.  https://ico.org.uk/for-organisations/guide-to-data-protection/guide-to-the-general-data-protection-regulation-gdpr/individual-rights/rights-related-to-automated-decision-making-including-profiling
  28. UK Government (2021 - updated 2022). Algorithmic Transparency Standard. https://www.gov.uk/government/collections/algorithmic-transparency-standard
  29. Lundberg, S. M. and Lee, S.-I. (2017). “A unified approach to interpreting model predictions.” Advances in Neural Information Processing Systems, 30.
  30. Ribeiro, M. T. et al. (2016). “Why Should I Trust You?” Explaining the Predictions of Any Classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD '16). Association for Computing Machinery, New York, NY, USA, 1135–1144.
  31. Yang, J. et. al. (2021). CrystalCandle: A User-Facing Model Explainer for Narrative Explanations. Available from: https://arxiv.org/abs/2105.12941
  32. Wang, X. et. al. (2018). Tienet: Text-image embedding network for common thorax disease classification and reporting in chest x-rays. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition , 9049–905
  33. ICO (2022). Biometrics technologies. Available from: https://ico.org.uk/about-the-ico/research-and-reports/biometrics-technologies/
  34. Wachter, S. et. al. (2017). Why a right to explanation of automated decision making does not exist in the general data protection regulation. Int. Data Privacy Law 7, 76–99.
  35. Ofqual (2020). Awarding GCSE, AS, A level, advanced extension awards and extended project qualifications in summer 2020: interim report. Available from: https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/909368/6656-1_Awarding_GCSE__AS__A_level__advanced_extension_awards_and_extended_project_qualifications_in_summer_2020_-_interim_report.pdf
  36. The Royal Society (2019). Explainable AI: Policy briefing. Available from: https://royalsociety.org/-/media/policy/projects/explainable-ai/AI-and-interpretability-policy-briefing.pdf
  37. Ananny, M. and Crawford, K. (2018). Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability. New Media & Society 20(3): 973–989. DOI: 10.1177/1461444816676645.
  38. Benjamin, R. (2016). Informed Refusal: Toward a Justice-based Bioethics. Science, Technology, & Human Values 41(6): 967–990. DOI: 10.1177/0162243916656059.
  39. Living With Data Project. (n.d.) “Current research – Living With Data: knowledge, experiences and perceptions of data practices.” Accessed at: https://livingwithdata.org/current-research. Accessed 22 Nov 2022.
  40. Kennedy et al. (2022). Data Matters Are Human Matters: final Living With Data report on public perceptions of public sector data uses. Available from: https://livingwithdata.org/project/wp-content/uploads/2022/10/LivingWithData-end-of-project-report-24Oct2022.pdf
  41. Obar, J. A. (2020). Sunlight alone is not a disinfectant: Consent and the futility of opening Big Data black boxes (without assistance). Big Data & Society 7(1)
  42. Living With Data Project. (2022) “Producing accounts of data uses.” Available from: https://livingwithdata.org/resources/producing-accounts-of-data-uses/. Accessed 22 Nov 2022.
  43. Team members Jo Bates, Helen Kennedy, Itzelle Medina Perea, Susan Oman, Lulu Pinny. See: Living With Data Project. (n.d.) “People.” Available from: https://livingwithdata.org/people/. Accessed 22 Nov 2022.
  44. European AI Alliance. (n.d.) “Welcome to the ALTAI Portal!” Accessed at: https://futurium.ec.europa.eu/en/european-ai-alliance/pages/welcome-altai-portal. Accessed 22 Nov 2022.

 

 

(November 2022)