Written Evidence Submitted by the Information School, University of Sheffield
(GAI0017)
Authors (alphabetical order):
Dr Jo Bates
Prof. Laurence Brooks
Dr Niall Docherty
Dr Denis Newman-Griffis
Dr Susan Oman
Dr Judita Preiss
Dr Xin Zhao
Executive Summary
● Effective, future-oriented governance of AI requires a perspective shift from technology-centred approaches to a practice-oriented approach that reflects the social and human processes around AI development and use.
● Practice-oriented training of AI professionals and continuous engagement with communities affected by AI technologies are key elements of developing a more responsible, transparent, and effective AI ecosystem.
● AI transparency must move beyond individual decisions and technical design to highlight the social contexts in which AI systems are developed, evaluated, deployed, and used.
● The Living With Data project at the University of Sheffield provides valuable initial examples of resources and guidance for fostering open communication and dialogue around AI technologies outside the research laboratory.
● More work between researchers, practitioners, policymakers, and communities is needed to develop holistic approaches to creating and governing AI practices.
About the authors
We are a group of interdisciplinary researchers from the University of Sheffield Information School (iSchool), with expertise in Data Science, Responsible AI and Computing, Education, Digital Societies, Information Systems and Natural Language Processing. The Information School is the number one ranked Library and Information Management department in the world (according to QS World University Rankings, 2022), and the school’s transdisciplinary focus in research and teaching has put it at the forefront of the field for several decades. The evidence below draws on our own research on AI governance and related areas, as well as our collaborative experiences training AI and Data practitioners in university settings. In particular, we are invested in integrating Social Science and Humanities (SSAH) perspectives with Science, Technology, Engineering, and Mathematics (STEM) in the study, development, and public communication of AI technologies. We are submitting this piece of written evidence collectively, as we believe that similarly integrating the full range of disciplinary expertise in the governance of AI will strengthen the government’s policy making approach in the future.
Dr Susan Oman is Lecturer in Data, AI and Society at the University of Sheffield. At present she is seconded into DCMS as part of an AHRC fellowship. Her contributions to this written evidence are in her capacity as an academic at The University of Sheffield only.
P1 Our evidence is structured in response to three questions posed by the Committee: (1) How effective is current governance of AI in the UK? (2) What measures could make the use of AI more transparent and explainable to the public? And (3) What lessons, if any, can the UK learn from other countries on AI governance?
Q1. How effective is current governance of AI in the UK?
P2 Current approaches to governance of AI in the UK emphasise a technology-first perspective, in which AI effort is focused on developing powerful, general-purpose technologies first, with questions of adaptation to specific settings, adoption, integration, and trust being secondary concerns addressed after technology development. This focus on development of technologies as generalisable tools is reflected in the current government strategy related to AI. The 2021 National AI Strategy [1] emphasises access to people, data, compute, and finance for developing AI technologies, and the 2022 AI Action Plan [2] highlights improving access to data and computing resources within the UK R&D ecosystem. The 2022 Defence AI Strategy [3] recognises the broader context of AI and emphasises multi-stakeholder governance, but the core strategy remains development and use of technologies rather than situated human-machine solutions.
P3 The emphasis on technology development is also reflected in government efforts for advancing and managing AI in practice. For example, the AI Data Scientist role developed by the National Careers Service [4] describes using data and building technologies, but not understanding where data come from, what purposes technologies may serve (and whether those purposes are appropriate), or how data and technology use may change in different local or international contexts—a common contributor to AI inequalities [5]. Similarly, the May 2022 report on Understanding UK AI R&D commercialisation and the role of standards [6] focuses solely on technical standards for successfully transferring and trusting AI technologies in practice.
P4 The technology-first perspective, however, overlooks two key concerns that distinguish general-purpose AI tools developed in research settings from specific uses of AI in practice, and which evaluation and governance must account for.
P5 The first is the human factors involved in designing and using AI in specific contexts. AI methodologies may be designed as general-purpose tools, but as data-driven technologies their use is uniquely specific—analysing specific data, in specific settings, and used for specific purposes. Using AI in practice requires adapting general AI technologies into context-specific AI solutions, including identifying appropriate data to analyse, understanding the characteristics of those data and what they can and cannot say, and defining how the use of an AI technology might (or should not) be integrated into complex organisational practices.
P6 Failing to address the human aspects of the data going into AI technologies and of how they are used can lead to solutions that are harmful (e.g. using racially biased measures of healthcare cost in treatment prioritisation [7]), ineffective, or both (as in the case of prejudicial automated recommendations in child welfare cases that were often ignored by mistrustful users [8]). Appropriately addressing these concerns requires recognising the socio-technical complexities of AI in practice, including how technology draws on human data and informs human processes, and how questions of trust affect these interactions.
P7 The second concern is the paradigm shift involved in moving towards a more AI-driven economy. Current efforts focus on using AI to improve existing processes and ways of thinking: improving productivity, efficiency, decision-making accuracy, etc. However, major industry organisations in AI R&D have built enormous successes on new ways of thinking with data, combining the unanticipated information that Big Data can yield with human ingenuity. Google, for example, used web links and personal search histories to generate highly targeted advertising, and Facebook used social media posts and behaviour to drive recommendations and adverts.
P8 These innovations were founded on a deep understanding of why data were generated and what those data said about people, far beyond traditional ways of thinking in advertising. Moreover, failures to consider broader and more diverse contexts in these innovations have created harmful design flaws that are often masked by the economic value they create [9]. Regulation and governance that focuses only on technology development within existing paradigms is inherently reactive. Proactively identifying and mitigating the risks of inappropriate AI requires forward-looking thinking grounded in a social understanding of the role of data and technology use in AI solutions [10].
P9 A holistic view of AI R&D and governance is needed to address these issues and re-engage with public trust and data commercialisation practices. Highly publicised failures of AI and a perceived lack of data-centred governance to prevent such failures lead citizens to imagine the worst of new AI technologies [11]. Rather than focusing on AI technologies directly, people-centred governance of applications that use AI technologies, e.g. the current Online Safety Bill [12], can be a powerful route to push industry entities to thoughtfully manage the social design and impact of their technologies while still achieving commercial aims.
P10 Governance efforts should shift towards fostering a holistic practice-oriented view of AI, encompassing social and organisational perspectives as well as technical development. Mapping a path forward for governing AI practices requires an active dialogue between researchers, practitioners, stakeholders from impacted communities, and government, grounded in shared concepts and understanding. New research, such as understanding the nature and limitations of available data [13], developing frameworks to guide sociotechnical analysis of AI solutions [14], and defining the ecosystem required around responsible AI [15], must be paired with new efforts in educating and engaging AI practitioners and the public.
Towards a practice-oriented view of AI governance
P11 In current academic and industrial R&D, AI practitioners with 'social' and 'technical' backgrounds are often institutionally separated and siloed [16]. This hinders effective communication, leading to polarised perspectives about AI governance [17]. This is largely due to the way technical approaches have simultaneously been prioritised and made to appear inscrutable [18]. Practitioners with social sciences and humanities (SSAH) training may feel technically inadequate and unable to intervene in discussions surrounding the governance of AI. Simultaneously, practitioners with science, technology, engineering and mathematics (STEM) backgrounds may feel less confident in engaging in discussions with an overt ethical or governance focus [19].
P12 This impasse could be improved by combining SSAH and STEM approaches when training AI practitioners in higher education. For example, higher education institutions could (1) equip technically focused students with the vocabulary to engage with ethical questions of AI governance, (2) position SSAH perspectives as essential contributors to the research space, and (3) furnish all with the skills and knowledge to support effective, non-dogmatic, and non-hierarchical dialogue between disciplines. Doing so would reduce the current separation between social and technical approaches in training AI practitioners in higher education, and help produce more responsible and interdisciplinary AI professionals in future.
P13 Diversifying perspectives on AI governance will help create the interdisciplinary equity needed to make space for fresh perspectives, interventions, and ideas. Another essential source of expertise is stakeholders from impacted communities, who should be engaged early in the policymaking process. Inspiration could be taken from participatory design protocols in Human-Computer Interaction, which enlist intended user groups to help technologists develop products better suited to their needs [20]. These approaches can complement existing engagements with industry, which largely focus on the technical administration of AI [21] and as a result may unfairly privilege private companies with the ability to set AI governance agendas and norms [22].
P14 It is also important to acknowledge that the views gathered through community (and industry) engagement should be valued as situated expertise, rather than definitive or representative [23]. Moreover, recalling related issues surrounding participatory governance, community engagement ought not to be positioned as a silver bullet for understanding the societal impacts of AI, but rather as a necessary ground-clearing activity that opens the conversation onto wider terrain, leading to more imaginative, relevant, and inclusive outcomes [24].
P15 Overall, we propose a move away from technological solutionism, which reduces the societal issues surrounding AI to technical problems with technical answers. Instead, we suggest treating AI as a sociotechnical phenomenon, which views technology as an integration of human systems and relationships with technical modes of administration and organisation. This new approach to studying, teaching, and building AI has a solid foothold in industry research environments [25,26] and is gaining ground in research-intensive universities (e.g. the Information School at the University of Sheffield is designing its new Data Science BSc around such sociotechnical perspectives). By adopting a similarly sociotechnical approach and synthesising across AI perspectives, policymakers could address gaps in the type of evidence currently used for AI governance. This more holistic, less siloed approach can bring more diverse voices to the table and enable the government to help create a better AI ecosystem from which more people in society can benefit in the future.
Q2. What measures could make the use of AI more transparent and explainable to the public?
P16 Current legislation only partly addresses the need for transparent and explainable AI. For example, UK GDPR establishes a right of access to details about personal data feeding into automated individual decision-making and profiling [27]. Yet, as the UK AI Strategy notes, UK GDPR “was not intended to comprehensively govern AI systems, or any other specific technologies. Many AI systems do not use personal data at all” [1] (at least according to the current legal definition of personal data).
P17 The UK Government’s Algorithmic Transparency Standard [28] for government departments and public sector bodies requires “a short description of the algorithmic tool, including how and why it is being used”, as well as “more detailed information about how the tool works, the dataset/s that have been used to train the model and the level of human oversight”. It is currently being iterated following piloting. However, this standard only applies to government departments and public sector bodies.
P18 These policy initiatives tend to reinforce technology-first approaches to transparent and explainable AI. However, such approaches can be problematic for several reasons.
Challenges with current approaches to Explainable AI
P19 Existing technology-first approaches to explainable AI include (i) the construction of an approximation (surrogate) model using a simpler, more interpretable technique that operates on different principles to, and is typically less accurate than, the original model, and (ii) the exploration of individual feature contributions to a prediction (e.g. using SHapley Additive exPlanations (SHAP) and DeepLIFT [29] or LIME [30]). However, because the surrogate model in (i) uses different principles to the original model, its explanations may not faithfully describe the original model's behaviour. The explanations produced in (ii) focus on individual predictions, highlighting the features which contributed to that specific prediction (e.g. when considering the likelihood of being approved for credit, income may have been an important feature for one prediction, but the area the person lives in for another). While these techniques can be applied to any machine learning input and problem (including natural text or image data), individual prediction explanations are expensive to compute at scale, meaning that this approach can result in overall biases being missed.
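For illustration, the following minimal sketch (our own, not drawn from this evidence or its references) shows how per-prediction feature attributions of the kind described in (ii) are typically produced, assuming the open-source shap and scikit-learn packages are available; the data, model, and feature names are entirely hypothetical.

```python
# Illustrative sketch only: synthetic "credit approval" data with hypothetical
# features (income, age, postcode index) and a generic classifier.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))                  # hypothetical features: income, age, postcode index
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # synthetic approval label

model = RandomForestClassifier(random_state=0).fit(X, y)
explainer = shap.TreeExplainer(model)

# Attributions are computed per prediction: different features may dominate
# different individuals' decisions, which is the "local" property noted above.
print(explainer.shap_values(X[:2]))
```

Each prediction receives its own attribution values, which is precisely why such explanations are local and costly to aggregate across an entire system.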
P20 Prediction explanations also need to be presented to a user in an understandable manner, such as a description of the area of an image that led to a suggested diagnosis. This has fuelled work on automatically generated narrative explanations, where a system generates an explanation alongside its prediction [31,32]. However, such systems require large quantities of training data and are therefore highly problem-specific.
P21 Another challenge is that explanations of predictions are, by definition, local to the individual predictions being made. They therefore do not indicate systematic biases the system may reflect. For example, even when biasing features (e.g. ethnicity) are explicitly removed, biases may return through indirectly related features (e.g. postcodes). Existing approaches to explainable AI do not address these and other limitations. The warning issued by the Information Commissioner's Office [33] regarding the accuracy of biometric technology that attempts to analyse subjects' emotions suggests that the potential inaccuracy of a system should be made transparent alongside any system outputs.
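The proxy effect described above can be demonstrated with a small synthetic example (entirely our own; the variables and figures are hypothetical and not drawn from any cited study):

```python
# Synthetic illustration: dropping a protected attribute does not remove bias
# if a correlated proxy feature (here, "postcode") remains in the inputs.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 5000
group = rng.integers(0, 2, n)                # protected attribute, excluded from the model
postcode = group + rng.normal(0, 0.3, n)     # proxy strongly correlated with group
income = rng.normal(0, 1, n)
# Historical decisions disadvantaged group 1:
approved = (income - 0.8 * group + rng.normal(0, 0.5, n) > 0).astype(int)

X = np.column_stack([income, postcode])      # protected attribute removed
model = LogisticRegression().fit(X, approved)

# Approval rates still differ by group, because the model learns the proxy.
print([model.predict(X[group == g]).mean() for g in (0, 1)])
```

Inspecting individual explanations for such a model would show the postcode feature as influential for particular predictions, but would not by itself reveal the group-level disparity.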
P22 Approaches to explainable AI may also be resisted by system developers and owners. For proprietary systems, owners may not wish to share the inner workings of a system for commercial reasons. Developers may also be concerned about the potential for abuse of the system. For example, while sharing a justification for rejecting a credit card application is generally seen as good practice, the justification may also indicate ways to force the system to make a different decision through small, fabricated adjustments to the inputs. Proposals to indicate the required changes in input parameters, as an alternative to existing explainability models [34], therefore pose their own risks of system abuse.
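A simplified sketch (again our own, with a hypothetical model, applicant, and step size) of how indicating the input changes required to alter a decision can double as a recipe for gaming the system:

```python
# Illustrative sketch: search for the smallest reported-income increase that
# flips a rejected application to approved. All data and thresholds are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
X = rng.normal(size=(1000, 2))             # hypothetical features: income, existing debt
y = (X[:, 0] - X[:, 1] > 0).astype(int)    # synthetic approval rule
model = LogisticRegression().fit(X, y)

applicant = np.array([-0.5, 0.4])          # currently rejected
adjusted = applicant.copy()
for _ in range(200):                       # bounded search
    if model.predict(adjusted.reshape(1, -1))[0] == 1:
        break
    adjusted[0] += 0.05                    # nudge reported income upwards

print(f"Income change needed to flip the decision: {adjusted[0] - applicant[0]:.2f}")
```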
P23 Technical measures aimed at explainable AI therefore need to be approached with caution.
Making AI systems transparent for the public
P24 Much of the research and policy work on transparent and explainable AI focuses on providing technical and legal information about how AI systems work, for example the 319-page methodology published to explain the A-level prediction algorithm [35]. However, as The Royal Society [36] observes, "different users require different forms of explanation in different contexts": sharing code and methodology is potentially helpful to other developers, but less useful to the students and schools who received lower than expected A-level grades.
P25 Social science researchers have explored the limitations of technology-first transparency measures as solutions to the controversies, harms and problems that can result from the introduction of AI-based systems in different domains. These researchers have argued that “seeing” the technical detail of AI systems does not necessarily lead to people understanding these systems in a social context [37], nor can we expect transparency to have a simple relationship with accountability and public trust in AI systems [38]. Moreover, recent findings from the Living With Data project [39] (see below) have observed that people who had greater awareness of data uses tended to be more critical and cautious about them [40]. This suggests that increased transparency and public scrutiny of AI systems must be combined with more democratic decision-making in AI governance if AI is to be developed with the genuine consent and trust of citizens.
P26 Given these observations, social science researchers have recognised that transparency and explainability measures are of little value to the public if access to information is not packaged with “tools for turning that access to agency” [41].
Living With Data - meaningful transparency in practice
P27 Living With Data (LWD) is a Nuffield Foundation funded project that used survey and interview methods to capture diverse publics' thoughts and feelings about various UK public sector data and AI systems in place, or under development, at the Department for Work and Pensions, the BBC and the NHS. To engage their research participants, the researchers created written and visual accounts of the selected data and AI systems to use as prompts in surveys and interviews (see [42] for examples).
P28 Based on their experiences of ‘making transparent’ these various systems, the LWD team [43] developed a set of observations and recommendations for transparency measures. These place emphasis on creating resources about AI systems that can inform meaningful communication and dialogue across variations in people’s knowledge and experience, rather than following the existing model of one-way, technology-first communication. The following provides an overview of these observations and recommendations.
P29 Significant information asymmetries exist between organisations and members of the public about existing and proposed data/AI systems. Members of the public struggle to assess what they think should be made transparent if there is no public information about what AI systems are in place in different organisations across sectors. The UK government has begun to recognise this issue in its public sector Algorithmic Transparency Standard, yet further policy work needs to be done to reduce information asymmetries across sectors so that members of the public (including journalists and researchers) know (1) what AI systems are in use, (2) where efforts are/are not being made towards transparency and explainability, and (3) where more transparency and explanation may be needed.
P30 Recognise the potential and evolving societal impacts of an AI system and find ways to bring them into dialogue and understanding. This requires communicating potential harms alongside the benefits of AI systems, particularly when different people may have different views on risks, benefits, and their implications. This requires expertise not just in data and computer science, but in the type of critical thinking that characterises much of the work in the social sciences and humanities, and methodological expertise in communicating socio-technical phenomena to non-experts. It also requires people with expertise in recognising and challenging systemic and structural oppression.
P31 In many cases, assumptions about what information should be made transparent are driven from a technology-first perspective. There is a need to foster open discussion involving stakeholders from impacted communities (whilst not making them responsible) about what aspects of AI systems are important to make transparent for different people and purposes, and to use these insights to inform contextual decision-making about transparency practices. This includes transparency about potential social consequences and risks, and about the level of technical detail an account ought to go into, recognising when it is (and is not) valuable to communicate what happens inside AI systems.
P32 Avoid obfuscation when communicating about AI systems through provision of too much, too detailed, too complex, or too little information. This requires significant time and appropriate expertise in written and visual communication. Information needs to be communicated in different ways for different AI systems and audiences.
P33 Acknowledge and foster understanding of the uncertainty inherent in AI systems. There is uncertainty around many aspects of AI systems: statistical uncertainty in their outputs, ongoing development of the systems themselves, unclear future uses, and unknown social implications. This inherent uncertainty means that attempts to make dynamic AI systems transparent will only ever be partially successful. We need to embed talking about uncertainty in our transparency practices.
P34 Recognise that transparency practices can take place at various stages of AI system design and implementation. Often efforts to make AI systems transparent only take place once the system is deployed. Transparency earlier in the development process presents a potential opportunity for fostering public dialogue and democratic decision making about AI development.
P35 Recognise the resources needed for transparency practices and commit to making them available. All of these commitments would require significant resourcing if implemented at scale as part of a sustainable AI policy intervention. The availability of resources, to whom they are made available, and when and how they are valued should also be recognised as inherently political.
Q3. What lessons, if any, can the UK learn from other countries on AI governance?
P36 A range of AI assessment approaches developed outside the UK present useful lessons on AI governance. The EU's High-Level Expert Group on AI (AI HLEG) has created a checklist and prototype tool for assessing the trustworthiness of AI, the Assessment List for Trustworthy Artificial Intelligence (ALTAI) [44], which is receiving positive feedback. Based on the concept of fundamental or human rights, ALTAI covers seven requirements, many of which are already discussed above:
a) Human Agency and Oversight;
b) Technical Robustness and Safety;
c) Privacy and Data Governance;
d) Transparency;
e) Diversity, Non-discrimination and Fairness;
f) Environmental and Societal well-being; and
g) Accountability.
P37 By collaborating with bodies and institutions outside the UK, we can build on the ideas and systems already developed and adapt them to the UK context, while remaining integrated with the global nature of modern technologies, including AI.
References
(November 2022)