Written Evidence Submitted by the High Integrity Systems Engineering Group (HISE)

(GAI0044)

 

Introduction

The High Integrity Systems Engineering (HISE) group in the Department of Computer Science at the University of York has pioneered the assurance of dependable software and safety-critical systems since the late 1980s. The group’s Assuring Autonomy International Programme (AAIP), funded by Lloyd’s Register Foundation, builds on HISE experience and expertise and is leading research in the safe use of AI, machine learning and autonomous systems. We have influenced and directly contributed to safety standards in many industrial sectors, including UK defence, aviation, healthcare, and automotive. Our current research programme works closely with industry and other research organisations, extending to industrial robotics and mining in addition to the previously mentioned sectors. AI adoption will bring significant benefit to all of these UK sectors as it is incorporated into many tasks to enhance or replace human capability, often with the aim of improving safety or standards of care.

We have made significant industrially relevant advances in this space over the last 5 years, developing the first systematic and structured methodology for assuring machine learning for use in autonomous systems (the AMLAS process, https://www.assuringautonomy.com/amlas), which is currently informing approaches in UK and international public and private sector organisations and influencing standards. Our research outcomes are publicly available online. We are also currently leading research into the safe, ethical and responsible use of AI.

 

The Committee welcomes submissions on the following questions:

 

        How effective is current governance of AI in the UK?

Governance of trustworthy and safe AI has struggled to keep pace with the very rapid growth of the technology over the last decade. For safety-critical systems there is a multitude of domain-specific standards and guidelines, produced by practitioners and/or regulators, to ensure that consistent good practice is embedded in the processes and management structures of the organisations developing those systems. There are also domain-specific regulators to provide additional oversight and direction. Almost all of these lack guidance on the specific issues raised by the use of AI, and those that we have worked with acknowledge a skills gap in dealing with the new technology.

 

Examples of problems during AI development include difficulty in developing formal requirements, ensuring data provenance, the use of open-source software tools, and inconsistent approaches to verifying predictable behaviour. All of these can potentially contribute to unsafe behaviour. For example, a lack of data provenance can lead to deliberate data poisoning, the release of private information, and bias in AI functions. A lack of consistent methods for verifying the AI undermines confidence that it will behave in an acceptably safe way in service.

 

Examples of issues during operation include lack of oversight of autonomous systems (including updates, which might be made over-the-air), difficulty identifying violations of operational design domain assumptions, difficulty understanding liability following incidents and accidents, and difficulty managing the AI to deal with changes in the environment. For example, without ongoing support and monitoring of AI behaviour (which is already uncertain at design time), the likelihood of accidents increases. Additionally, humans in the loop may lack the training to understand AI outputs and to anticipate breakdowns and unsafe behaviour.

In summary, there are gaps and inconsistencies in governance which can lead to unsafe behaviour in highly critical domains, as well as to ethical concerns, e.g., about privacy, inclusivity and bias.

 

        What are the current strengths and weaknesses of current arrangements, including for research?

 

One advantage of looser governance is that technical innovation is easier to achieve: development processes can be more agile, time to market is shorter, and cutting-edge technologies can be deployed. However, in more conservative domains such as safety-critical industries, the lack of effective oversight is of real concern. It is likely to lead either to unsafe or unpredictable AI being deployed, or to truly beneficial deployments being prevented or delayed for want of an appropriate governance framework.

 

Research is necessarily more innovative and imaginative in its approach to AI development, even in the safety-critical domain, but ethical issues, such as data privacy and the use of unproven technology, remain.

 

Recommendations:

  1. Use goal-based approaches to AI system safety as the basis for high-level governance documents. These approaches set clear aims and objectives without constraining the means of compliance, are appropriate for AI technologies, and are already used in many industries for more conventional technologies, e.g., software systems in air traffic services.
  2. Provide funding for AI assurance approaches, in the form of industry guidance documents, to integrate disparate research into more consistent approaches, especially for the safety assessment of AI-based systems and for their ongoing safety management in service.
  3. Commission, fund and roll out education programmes for upskilling regulators, supporting deeper individual and institutional understanding of AI behaviour, risks and potential failures. These programmes should include the assurance approaches developed under recommendation 2.
  4. Commission, fund and roll out public information campaigns, with additional online domain resources for industry, regulators and the public, to facilitate better-informed decision-making.

 

        What measures could make the use of AI more transparent and explainable to the public?

 

There are two aspects to this question. The first is ensuring that the public is made aware of AI use in safety-critical situations, perhaps delivered through a scheme of product/system labelling. The second is ensuring that members of the public who come into direct contact with, and need to interact with, AI-based systems understand the decisions being made and the ethical considerations they raise, and are able to understand the reasons behind those decisions.

 

Explainable AI (XAI) is the term used for technical measures that can make the decisions of an AI more transparent to end users. There are diverse means of achieving explainability, and many require a deep technical understanding of how AI works in order to interpret what is provided. This means that many ‘explainable AI’ methods in development are not suitable for public use. Additionally, not all explainable AI methods work in real time, meaning there will be a delay before the explanatory information is available; such XAI methods might be usable in incident analysis, but not to support informed choices by affected stakeholders (e.g., end users).

 

Recommendations:

  1. Fund a programme of work to improve public confidence and safety by developing and enhancing explainable AI techniques that provide greater transparency to the public. These techniques would also benefit UK plc by supporting professional operators and humans-in-the-loop working with safety-critical systems that include AI.
  2. Treat explainability as a human-centred process, one that may require a form of dialogue between the machine and the human user, rather than as a technical output of the AI system.
  3. Develop a policy requiring that the general public be made aware when safety-related situations and systems with which they interact include AI, to help them make informed choices about whether or not to use the system and/or how to make best use of its outputs, e.g., recommendations.

 

        How should decisions involving AI be reviewed and scrutinised in both public and private sectors?

 

We believe the intent of this question is to consider areas where human decision-making has been replaced or enhanced by the use of AI systems. In the case of safety-critical systems, the use of AI should be reviewed in terms of benefit, typically reduced risk or societal improvement. This benefit will need to be balanced against ethical considerations of justice, human autonomy, non-maleficence and beneficence.

 

Additionally, given that there are technical concerns with performance and accuracy, any decision-making embodied in AI should be validated and verified as fit for purpose.

 

        Are current options for challenging the use of AI adequate and, if not, how can they be improved?

 

The current options for challenging the use of AI in safety-related systems are limited for the general public, particularly pre-deployment. We note that regulatory scrutiny (please see our answer to the following question) will partially address this in specific domains.

 

        How should the use of AI be regulated, and which body or bodies should provide regulatory oversight?

 

The use of AI in safety-critical systems should be subject to regulatory scrutiny equivalent to that applied to non-AI systems for the same use. This includes ensuring compliance with those existing legislative standards that remain relevant, but also scrutinising the assurance of the AI. As the latter is very likely to involve novel validation, verification and assurance techniques (due to the lack of mature AI safety standards) as well as novel technologies, specific AI expertise will be required within the relevant authorities, or be accessible to them. Regulatory engagement early in a manufacturer’s development lifecycle may be mutually beneficial, allowing understanding, and potential approval, of the planned assurance approaches to be gained. It may also be beneficial to conduct more in-service regulatory oversight than at present, to gain information on AI performance and to learn lessons from incidents and accidents.

 

This oversight should be proportionate to the risk involved, so that more scrutiny is placed on higher-risk systems (which could cause catastrophic damage or fatality) than on lower-risk systems (whose misbehaviour might, for example, cause only a minor injury). The specific details of oversight will depend on the domain.

 

Recommendations:

  1. Current safety regulators should continue to perform oversight of AI, both prior to and after deployment, in their existing domains. This oversight should be extended or deepened for unproven AI technology.
  2. Specific expertise in AI should be required within the regulatory authority where AI is newly deployed.
  3. Capitalise on the extensive use of safety cases by safety-critical industries, particularly in the UK, e.g., defence and nuclear, as a means of promoting structured thinking about risk among diverse stakeholders.
  4. Regulatory bodies should appoint a lead for AI adoption, working with counterparts in other domains to adopt common principles and approaches so far as is possible, to give consistent treatment of AI systems and to minimise the burdens on companies working across multiple domains.

 

        To what extent is the legal framework for the use of AI, especially in making decisions, fit for purpose?

 

We are not lawyers, although we do collaborate with lawyers, so these comments reflect our understanding of the difficulties that can be posed by AI, rather than an analysis of the legal framework itself. Where AI is used, it is important that the legal framework ensures that liability lies with those that create the risk; this is consistent, for example, with the way the HSE regulates. If, say, an autonomous vehicle causes an accident then we (society) would expect that liability lies with the manufacturer. It is not obvious that this is currently the case (although the Law Commission work on autonomous vehicles should lead to greater clarity in this domain). The fact that the EU has recently published proposals to change the way that causality (in the legal sense) is attributed where AI is used suggests that there is a potential problem in ensuring that liability does lie with those that create the risk.

 

Recommendation:

  1. The Law Commission should be asked to consider whether or not the UK legal framework ensures that liability lies with those that create the risk, especially where AI is used in safety-related/critical situations.

 

        Is more legislation or better guidance required?

Better guidance on the assurance and safe deployment of AI in safety-critical systems is required. Current guidance is immature and unproven (although there are useful developments, e.g., through the BSI). In some cases, the existing legislative frameworks may still be sufficient, so regulatory needs could be met through improved standards and guidance within those frameworks. However, as noted above, there are cases where accountability and liability are not necessarily properly attributed when AI is used, so there may be a need for new legislative frameworks. We would favour implementing minimal legislation, aimed at placing responsibility, liability, etc. where it should lie (cf. the Law Commission work on self-driving vehicles), leaving the bulk of the “controls” to guidance and standards.

 

        What lessons, if any, can the UK learn from other countries on AI governance?

 

Little. There is some useful work in the EU and in some sectors, e.g., EASA is developing guidance relevant to the aerospace community. Also, the US Food and Drug Administration (FDA) has developed guidance on Software as a Medical Device (SaMD), which is being extended to address AI. However, the FDA guidance is relatively abstract and requires considerable interpretation. This relates to the comments above about the immaturity of current guidance, which apply to the international work of which we are aware as well as to work in the UK.

 

(November 2022)