Written Evidence Submitted by School of Informatics, University of Edinburgh

(GAI0079)

Summary

The rapidly evolving landscape of AI tools presents unique opportunities and challenges in balancing innovation, the translation of research into efficient and effective public services, and trust. Timely strategic investments in AI research within the UK have spurred the invention of sophisticated AI engines that could power the future. There is an imminent need to exploit these inventions by laying and maintaining pathways across sectors in the form of factored software tools. These tools need to be designed by experts in a bottom-up manner so that they lend themselves to reuse, easy maintenance, regulation and scrutiny. Alongside research and innovation in AI tools, it is vital to introduce schemes that will reshape tomorrow’s workforce at a variety of educational levels. Future workers must be equipped with skills spanning commissioning, deployment, tuning, testing, maintenance and regulation. Such schemes will stimulate industry, facilitate large-scale utilization of software, promote social mobility, train a national workforce and reduce the UK’s reliance on international talent. Moreover, there is also potential for radical reform in the way that government works and interacts with citizens.

In this submission, the School of Informatics at the University of Edinburgh provides its views on some of these challenging problems from the perspective of one of the UK’s research powerhouses in Computer Science and AI. In particular, there is an urgent need to bridge the gap between the development of AI tools and their effective and responsible deployment at scale.

UK Strengths

  1. Research output. The UK ranks fourth in the world [1] in terms of research output in AI, thanks to a combination of historical strength in the area and UKRI’s continued funding programmes for AI education and research (at PhD and post-doctoral levels). It is vital that these programmes continue to grow if the UK is to remain competitive globally.
  2. Readiness. The UK ranks third [2] globally when it comes to being prepared to apply AI in services to the public. This is crucial for enabling AI research to transcend ‘toy datasets’ and for assessing AI tools in useful, realistic settings.
  3. Regulation. The recent Policy Paper on AI regulation [3] strikes a tricky balance between encouraging creative applications of AI and recognising the need to be alert to its impact across sectors.

Scope for improvement

  1. Increasing transparency of AI services. The rapid proliferation of AI tools has led to a landscape of general yet powerful techniques that may be applicable across a variety of sectors. However, public engagement efforts led by developers tend to focus on the tools themselves rather than on their applicability (10.).
  2. Risk and scale of VC investment. The UK has been successful in seeding and encouraging innovation. Yet investment levels remain below those of the EU [4], and slightly less than half of the money from UK investors is used to support innovation in other countries [5]. There is an urgent need for the UK to provide buttressing schemes (financial, mentoring, networks) that will help AI-based start-ups scale up.
  3. Software engineering and management. Although several programmes have been established to manage data, there is also a need for fundamental engineering of software associated with AI technologies. This includes development, design, version control and mechanisms for selective sharing. Such a focus will also help cohesive AI governance (7.) and fully exploit the contributions of a limited workforce (9.).

Weaknesses

  1. Need for coherent governance. It is important to establish an overarching view of AI governance. Currently, governance appears to be fragmented by sector. For example, a successful classification algorithm might be claimed to fall under the purview of the MHRA because it was first applied to detecting abnormalities in heart rate. The algorithm might fail the standards for this application yet still be useful for detecting the licence plates of speeding cars. Independently, the algorithm might be under scrutiny by DCMS, via the ICO, for data privacy. The status quo at best leads to duplicated effort and at worst could allow models to escape tight scrutiny.
  2. Workforce. There is an imminent need to develop a workforce with a range of practical skills to scale, deploy and maintain AI software across applications. This could entail the creation of fresh teaching programmes for AI at the lower educational levels, including new skills-driven degrees and short courses. It is impractical to expect PhD students or research associates on short-term contracts to engage in these activities, which are foundational to the delivery of AI technologies.
  3. Diversity. About 4.2% [6] of AI professionals in the UK on professional networks (LinkedIn) identify as female (compared to 94% who identify as male). Yet more women than men are enrolled in UK universities. Beyond skewing opportunities for women, this excludes roughly half of the potential workforce for AI technologies. The problem also extends to other under-represented communities. It is imperative that schemes are devised to educate these communities early (at secondary school level) on the benefits of choosing AI-related careers.

Transparency

  1. Explanation. It is important for the public to be made aware of the role of AI in the underlying processes (e.g. in a mortgage application, or in an updated estimate of their car insurance) rather than of the details of how the technology works. The important practical question is ‘who will be responsible for providing this explanation?’ One possibility would be to introduce policies making the service provider responsible for overseeing the transparency of its processes.
  2. Justification. In addition to understanding the role of the AI, sometimes it is necessary for the specific AI tool to lend itself to what is often referred to as explainability -- for example, insight into the reasons for refusing a mortgage application. An important aspect of AI governance is to facilitate this capability both during the design of the technology and during its incorporation into a service platform (across the public and private sectors).
  3. Role of Universities and Media. The benefits of effective public engagement schemes for AI technologies include informing and consulting the public, improving social mobility and diversity, and reducing public anxiety. The public must be informed of the skills necessary to secure AI-related jobs without fear-mongering (unlike the presentation in [7], for example).

Scrutiny

  1. Involving experts. One of the strengths of the UK is its early investment in nurturing multidisciplinary experts across computer science, law, political science, etc. These experts must now be involved in the scrutiny of innovative AI technologies in sensitive sectors (for example [8]). One impediment is the lack of incentives (17.) for these experts to participate in cumbersome procedures for scrutiny. Financial incentives are unlikely to be sufficient, especially for those now employed in the private sector.
  2. Developing auditing capability. A mechanism of audit is needed that allows external scrutiny to be applied to AI developments and projects, to ascertain whether they are meeting their own goals for fairness, explainability, openness, etc. Such external audit would require companies to be concrete about their own governance goals, and then to be held accountable for demonstrating that they have achieved them.

Regulation

  1. Top-down vs Bottom-up regulation. In addition to the regulations imposed by sectors, it would be useful if practices and guiding principles were established at the level of the AI algorithms themselves -- for example, the identification of potential regulatory challenges for a new machine learning algorithm that performs classification. Classification algorithms are used widely, from detecting anomalous heart rates in ECG data to recognizing faces in video feeds. Atomic regulatory flags can then inform sector-level regulations without the regulatory bodies necessarily being experts in classification algorithms.

Legal framework

No response.

International models for AI governance

  1. Singapore. Singapore ranks above the UK in terms of AI readiness. It pioneered AI governance with the Model AI Governance Framework (2019), which translated ethical principles into practical recommendations for organizations deploying AI responsibly. It also introduced the world’s first AI Governance Testing Framework (2022), with a toolkit for companies that wish to demonstrate responsible AI in an objective and verifiable manner. Ten companies from different sectors and of different scales have already provided feedback, including AWS, DBS Bank, Google, Meta, Microsoft, Singapore Airlines, NCS (part of Singtel Group)/Land Transport Authority, Standard Chartered Bank, UCARE.AI and X0PA.AI.
  2. Australia. In 2019, Australia’s Department of Industry, Science and Resources published the AI Ethics Framework to guide organizations in responsibly designing, developing and implementing AI. The framework sets out Australia’s ethical principles for building safe, secure and reliable AI, and provides information about: (i) applying the principles at each phase of the AI system lifecycle; (ii) testing the principles with some of Australia’s biggest businesses; and (iii) developing the framework and principles by consulting stakeholder organizations and individuals.

Suggested mechanisms

  1. AI Research Policy Officer at Computer Science Departments. One possibility for sustained engagement between AI developers and policymakers is the introduction of new posts at key AI R&D centres. For example, such an AI Policy Officer would be embedded within a university Informatics department and be responsible for designing atomic-level regulatory mechanisms corresponding to the research output of that department. Most departments already have analogous posts for Business Development -- to assist with IP and investment-related matters. In contrast, the policy officers would focus on developing objective reports regarding responsible AI, privacy and security, and bottom-up regulatory declarations for new research output. The School of Informatics has experience with the administration and integration of business development officers and is keen to trial such a mechanism for responsible AI and regulation.
  2. University apprenticeship scheme / degree for training practical AI workforce. In our experience, the translation of newly developed AI models into scalable and useful implementations requires attention independent of the research itself. This should be seen as an opportunity, since it introduces a variety of jobs in AI software engineering, testing, deployment and maintenance. The onus currently falls on start-ups and SMEs that are already burdened with balancing risks. There is an imminent need for a national framework for AI education at various levels -- apprenticeships that train workers to ‘tune the knobs’ of an existing model for a particular application, certifications for AI engineers to fix reported problems, etc. Urgent schemes are needed to analyse the resources, stakeholders and actors required to deliver such skills in the long term, which will be instrumental in building tomorrow’s comprehensive workforce. This plan could also lay the foundations of a scheme for facilitating social mobility as technology continues to replace humans in mundane jobs.

(November 2022)

References

[1]

Savage, N (2020). The race to the top among the world’s leaders in artificial intelligence. Nature 588, S102-S104 (2020). https://doi.org/10.1038/d41586-020-03409-8

[2]

Government AI readiness index 2021. Oxford Insights.

[3]

Establishing a pro-innovation approach to regulating AI. Policy Paper 2022. ISBN: 978-1-5286-3639-1

[4]

Tricot, R. (2021). Venture capital investments in artificial intelligence: Analysing trends in VC in AI companies from 2012 through 2020. OECD Digital Economy Papers, No. 319, OECD Publishing, Paris, https://doi.org/10.1787/f97beae7-en.

[5]

OECD.AI (2022), Visualisations powered by JSI using data from Preqin, accessed on 25/11/2022, www.oecd.ai

[6]

OECD.AI (2022), Visualisations powered by Tableau using data from StackOverflow, accessed on 25/11/2022, www.oecd.ai. Supported by the Patrick J McGovern foundation.

[7]

The Telegraph, 21 November 2022. https://www.telegraph.co.uk/business/2022/11/21/sunak-tells-nhs-embrace-robot-workers-prepares-sack-staff/

[8]

Technology rules? The advent of new technologies in the justice system. Justice and Home Affairs Committee. Authority of the House of Lords. HL Paper 180.