Written Evidence Submitted by The Royal Academy of Engineering

(GAI0039)

 

Background:

Since 2019, the National Engineering Policy Centre (NEPC) has been exploring the safety and ethics of autonomous systems to understand the risks and benefits associated with this technology across different sectors. Such systems are generally based on technologies such as AI, and the degree to which a system is autonomous depends on how these techniques are applied to decision making. This project has looked at how automation, autonomy and AI are closely linked concepts that relate to a range of different technologies in different sectors. It seeks to understand how autonomous systems can be ethically designed, developed and deployed to ensure benefits are widely distributed and no one is disadvantaged. We have undertaken a series of sector-specific deep dives to understand the opportunities and challenges within different sectors such as transport and healthcare. The comments below are drawn from this work and the viewpoints of the expert working group that steers it.

The NEPC held a workshop to explore the role of these cross-cutting standards, understand the barriers to adoption, and identify the actions required to ensure the safe and ethical development and deployment of autonomous systems. The responses below, to selected questions within the inquiry, are largely taken from the findings of this workshop, which have not yet been published but are attached as an appendix. Responses have also been drawn from the NEPC’s work on autonomous systems in healthcare and the published report on autonomous transport.

Autonomous systems make decisions for themselves in complex environments. Regulations and standards will play an important role in governing autonomous systems. Technical standards are emerging to enable engineers and developers to embed ethical and safety principles in the design of autonomous systems in different sectors.

About the National Engineering Policy Centre

We are a unified voice for 42 professional engineering organisations, representing 450,000 engineers, a partnership led by the Royal Academy of Engineering. We give policymakers a single route to advice from across the engineering profession. We inform and respond to policy issues of national importance, for the benefit of society.

What measures could make the use of AI more transparent and explainable to the public?

The NEPC’s engagement on safety and ethics of autonomous systems has looked at key principles relevant to the development of AI within autonomous systems, such as transparency, and the extent to which these can be supported by technical standards. The work has regularly highlighted the role that standards will play in the development and regulation of autonomous systems.

The NEPC has focused on principle-focused, application-agnostic standards which are relevant across different sectors and can therefore be applied broadly. The workshop described above looked specifically at the Institute of Electrical and Electronics Engineers standard IEEE P7001-2021, Transparency of Autonomous Systems. The section below gives more detail on what this standard covers, and how it provides specific guidance for certain sectors and users or leaves space for sector-specific interpretation. It was selected as an exemplar standard that can have a significant impact on autonomous systems development, and as such will play a key role in building trust in AI systems for experts and for wider user groups.

Transparency 

 

There are several groups of stakeholders who will interact with autonomous systems in different ways; the appendix describes how IEEE P7001 addresses both expert and non-expert stakeholders. Applying this standard when developing autonomous systems across different sectors will be crucial both to regulatory approval and to ensuring explainability for users and wider society.

However, the adoption of standards is neither encouraged nor mandated by regulators. The NEPC’s workshop found that regulators and developers lack the resources and time to learn about standards, meaning they do not necessarily know what standards exist or how to apply them. Additionally, without a deeper understanding of AI, regulators may tend to treat autonomous systems as traditional systems.

Making AI within autonomous systems transparent and explainable,[1] in a way that increases public trust, will therefore require improving awareness and adoption of standards such as IEEE P7001.

The discussions at the workshop led to a recommendation (please see the appendix) that regulators, Professional Engineering Institutions, Catapults and public procurement bodies should promote the adoption of standards, thereby supporting the development of transparent and explainable autonomous systems.

How should the use of AI be regulated, and which body or bodies should provide regulatory oversight?

The NEPC’s work on the safety and ethics of autonomous systems has found that developing appropriate regulation for both the development and application of AI is essential for growth and innovation in autonomy across sectors. Regulation enables innovation by providing the clarity that allows innovative companies to develop excellent services.

AI within autonomous systems creates similar ethical challenges (avoidance of harm, fairness, transparency) across sectors. However, different sectors do have different needs depending on the context in which autonomous systems will be developed and deployed. Some cross-cutting principles, such as failsafes, will be more familiar in safety-critical domains, for example space or nuclear, but not all sectors will have the same starting point. So, while high-level principles are of value, and there is value in defining these, it was agreed in the NEPC’s workshop that sector-specific standards would be needed in addition. Similarly, with regard to regulation, while there are cross-cutting challenges, it is a matter for sector-specific regulators to apply these within their domains.

There is still a significant amount of effort needed to ensure that regulators are able to do this effectively. With specific regard to the healthcare sector, the NEPC’s work has found that the developers of autonomous systems find the current regulatory landscape to be ill-defined, sometimes resulting in regulation being perceived as a barrier to innovation. There is work to be done to develop workable regulations that drive good patient outcomes and that are safe and ethical, and a clearer framework for the development and use of autonomous systems in healthcare needs to be established.

Developer demand for a central guide to help navigate UK regulatory requirements suggests that the AI Multi-Agency Advisory Service (AI MAAS), which is being established through the NHS Transformation Directorate and AI Lab regulatory programme, will be welcome. This service will help both developers and adopters of the technology; however, it will not be fully live until 2023.[2] Whilst the service continues to be developed, the sector would benefit from clear signposting, guidance and advice to support innovators and enable safe innovation and use.

Once a product reaches market and is in use, post-market surveillance will be key to continually assessing risk and benefit, and to ensuring the system continues to function as originally intended. Although there are no fully autonomous systems in healthcare, adaptive algorithms and systems that self-learn from real-world changes and experience to improve performance will require specific consideration in future regulations. Real-time adaptation may mean the system performs differently to its pre-market assessment, and it will be crucial to monitor changes throughout a product’s life span. This may involve software that is built into the system to assess changes as it self-learns and ensure that any changes remain safe.
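
As a rough illustration of what such built-in assessment could look like in software, the sketch below (in Python) gates each self-learned update behind a check against a fixed reference set before it is accepted. The class and parameter names, and the choice of sensitivity as the sole metric, are hypothetical simplifications for illustration, not a mechanism drawn from any regulation or standard.

    from dataclasses import dataclass
    from typing import Callable, Sequence, Tuple

    Example = Tuple[Sequence[float], int]   # (input features, true label)

    @dataclass
    class SafetyGate:
        reference_set: Sequence[Example]    # fixed, held-out test cases
        approved_floor: float               # minimum sensitivity agreed at approval

        def sensitivity(self, model: Callable[[Sequence[float]], int]) -> float:
            # Fraction of positive reference cases the model correctly flags.
            positives = [x for x, y in self.reference_set if y == 1]
            detected = sum(1 for x in positives if model(x) == 1)
            return detected / len(positives) if positives else 1.0

        def accept_update(self, candidate: Callable[[Sequence[float]], int]) -> bool:
            # A self-learned update is only deployed if it still meets the
            # performance floor established at pre-market assessment.
            return self.sensitivity(candidate) >= self.approved_floor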

Medical device regulation has not been designed for adaptive or self-learning algorithms, and so a different regulatory approach is required that is based on the total product lifecycle – from pre-market to post-market development – allowing products to continuously learn whilst providing effective safeguards and helping to deliver effective patient care. This comes with expectations around transparency, as well as the need to collect and monitor real-world performance data.[3][4]

The American Medical Informatics Association (AMIA) recommends that there should be periodic evaluation of the system, identification of algorithmic shift or drift due to a shift in data, constant review to determine whether bias occurs, and continued user education and training, all of which are key to transparency and performance monitoring in a real-world setting.[5]
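
The identification of drift due to a shift in data could, for example, be operationalised as a routine statistical comparison between live inputs and the training distribution. The sketch below is one minimal, hypothetical way to do this using a two-sample Kolmogorov-Smirnov test; the significance threshold and per-feature approach are illustrative assumptions rather than AMIA guidance.

    from scipy.stats import ks_2samp

    def input_drift_detected(training_values, live_values, alpha=0.01):
        # Two-sample Kolmogorov-Smirnov test: flags drift when live input
        # data for a feature no longer matches its training-time distribution.
        statistic, p_value = ks_2samp(training_values, live_values)
        return p_value < alpha

    # Periodic evaluation might run this per input feature on each monitoring
    # window, escalating to human review whenever drift is flagged.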

The NEPC’s work on autonomous transport also highlighted the importance of regulators being able to assess the compliance of systems with the regulations set for them. The speed of technological progress makes it difficult to ensure that regulators have enough understanding to be able to competently assess compliance. As with previous technology evolutions, competing with industry for a limited pool of skilled professionals will affect regulatory capacity.[6]

The NEPC’s work on transport systems also highlighted the need for a holistic and connected approach to regulation across modes. In general, there is therefore a need for regulation suited to specific sectors and domains of application, with support for connectivity between them. There are roles for bodies such as the Better Regulation Executive, the Regulatory Horizons Council, and emerging bodies such as the Institute for Regulation to support this cross-sector connectivity and skills building.

 

What lessons, if any, can the UK learn from other countries on AI governance?

The United States Food and Drug Administration (FDA) has been progressive in applying a total product lifecycle approach to increasingly autonomous systems. The FDA has approved several AI- or ML-based medical devices that operate on a locked algorithm. A locked algorithm can be defined as one that provides the same output each time the same input is applied and does not change with use. In this case, any algorithmic changes that have occurred have required FDA pre-market review. A pre-market approach is taken if there are any major changes to the algorithm that could significantly affect device performance, or safety and effectiveness.
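
To make the distinction concrete, the sketch below illustrates "locked" behaviour in code: the model's parameters are fixed, so identical inputs always produce identical outputs, and a fingerprint check surfaces any change for review. The fingerprint mechanism is a hypothetical illustration of the concept, not an FDA requirement.

    import hashlib
    import json

    class LockedModel:
        # A locked algorithm: fixed weights give the same output for the
        # same input, and any change to the weights forces re-review.
        def __init__(self, weights, approved_digest):
            self.weights = weights
            self.approved_digest = approved_digest  # recorded at pre-market review

        def digest(self):
            blob = json.dumps(self.weights, sort_keys=True).encode()
            return hashlib.sha256(blob).hexdigest()

        def predict(self, features):
            if self.digest() != self.approved_digest:
                raise RuntimeError("Model changed since approval: review required")
            return sum(w * f for w, f in zip(self.weights, features))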


Appendix:

Standards for safe and ethical autonomous systems

Executive Summary

Autonomous systems make decisions for themselves in complex environments. Regulations and standards will play an important role in governing autonomous systems. Technical standards are emerging to enable engineers and developers to embed ethical and safety principles in the design of autonomous systems in different sectors. The National Engineering Policy Centre held a workshop to explore the role of these cross-cutting standards, understand the barriers to adoption, and identify the actions required to ensure the safe and ethical development and deployment of autonomous systems.

Call to action

Community

The Better Regulation Executive should work with the UK Regulators’ Network to encourage greater cross-sector collaboration on artificial intelligence, machine learning and autonomous systems, to build a community to understand and tackle common challenges.

Regulator upskilling

The Institute for Regulation should collaborate with the National Engineering Policy Centre to develop CPD courses that help regulators to better understand AI, ML and autonomous systems, as well as existing and emerging standards and how to adopt them. Language across standards should be made consistent to make it easier for users to understand and interpret standards produced by different bodies; this may require standardised terminology and collaboration to build a unified understanding.

Principles and new standards

Standards bodies and regulators should work together to identify and develop usable standards beyond transparency, verification and failsafe design. These might cover principles such as design practice, operational context, human interaction and security.

Industry uptake

Regulators, Professional Engineering Institutions, Catapults and public procurement bodies should promote the adoption of standards that encourage safe and ethical development of autonomous systems.

 

Context

Since 2019, the National Engineering Policy Centre has been exploring the safety and ethics of autonomous systems to understand the risks and benefits associated with this technology across different sectors. The project seeks to understand how autonomous systems can be ethically designed, developed and deployed to ensure benefits are widely distributed and no one is disadvantaged. Through our work, it has been argued that regulation and standards will play an important role.

On 28th April 2022, the Academy hosted a cross-sector workshop on the role of international technical standards in regulating autonomous systems, bringing together a mix of regulatory and technical expertise. It convened regulators, including from the Health and Safety Executive, the Office for Nuclear Regulation and the Maritime and Coastguard Agency, as well as wider expertise from standards bodies, industry, SMEs, Catapults and academia. The workshop aimed to explore the role of cross-cutting standards, understand the barriers to adoption, identify the actions required to ensure the safe and ethical development and deployment of autonomous systems, and start to build a community who can collaborate to overcome common issues. This echoes the Alan Turing Institute’s recent call for a joined-up approach to coordination, knowledge sharing and resource pooling for regulatory bodies facing the challenge of “AI readiness”.[7]

Autonomous systems make decisions, and take actions, often in complex and unpredictable environments. These systems are typically designed to be non-deterministic, where the same input can result in multiple different outcomes, and this, together with the unpredictability of deployment environments, often makes it impossible to predict each outcome with certainty. There are key principles relevant to the development of autonomous systems that can help to assure the safety and ethical development of these systems, and which can be supported by technical standards. A wide range of standards exist, or are under development, produced by national standards bodies, industry and international organisations.
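
As a toy illustration of this non-determinism (a deliberately simplified sketch, not drawn from any real system), consider a policy that selects randomly among plausible responses to the same situation:

    import random

    def choose_route(obstacle_ahead: bool) -> str:
        if not obstacle_ahead:
            return "continue"
        # Stochastic policy: the same input can produce different outcomes,
        # so exhaustive input-to-output testing cannot enumerate every case.
        return random.choices(["swerve_left", "swerve_right", "stop"],
                              weights=[0.4, 0.4, 0.2])[0]

    # Repeated calls with identical input typically return all three actions:
    # {choose_route(True) for _ in range(100)}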

The standards highlighted in the workshop were chosen as they are principle-focused, rather than application specific. The workshop deliberately focused on these horizontal standards, relevant across different sectors, to understand where they can help, and where challenges remain. The discussions aimed to inform an action plan for further standards development and to encourage industry uptake, as well as for regulator upskilling and collaboration. 

The three principles discussed were transparency, failsafes and verification, presented in the context of emerging Institute of Electrical and Electronics Engineers (IEEE) standards: IEEE P7001-2021, transparency of autonomous systems; IEEE P7009, failsafe design of autonomous systems; and IEEE P2817, guide for verification of autonomous systems. These are generic, umbrella standards, intended to apply to all autonomous systems, both physical and software-based. They were selected as important, exemplar principles that can have a significant impact on autonomous systems development. However, there is no expectation that they provide a complete set of principles.[8][9] Other principle-based standards exist, and IEEE’s Ethically Aligned Design report sets out eight general ethical principles for autonomous systems.

Transparency is critical for understanding what a system is doing, why certain decisions are made, and what went wrong and why when autonomous systems fail. In complex, realistic environments, uncertainty and failure of systems are inevitable, and failsafe mechanisms are an essential principle to build into mitigation strategies. It is also crucial to provide evidence of reliability and confidence in both the system and its decision making, through verifying that the whole system meets its design specification.

Following the presentations on emerging standards, Andrew White from the UK’s Office for Nuclear Regulation discussed some of the challenges relating to the regulation of autonomous systems, and the role of standards such as these in addressing them. With this context, attendees then discussed how the standards could be applied and where the gaps were.
 

Principles for autonomous systems

Transparency – IEEE P7001 – Alan Winfield

Transparency assumes that the basis of a particular autonomous or intelligent system (A/IS) decision or action should always be discoverable. This is important not only in understanding failures but in building confidence and trust when autonomous systems operate alongside humans. Transparency of behaviour more generally is also important for stronger levels of verification (see below). The P7001 standard was developed to set out measurable levels of transparency so that the level can be specified prior to development and then assessed for compliance. It is intended to be used by designers, manufacturers, operators and maintainers of autonomous systems. However, transparency often means something different to different stakeholders who will require different information, relayed in plain language. The P7001 standard covers transparency for expert stakeholders (safety certification engineers, accident investigators, lawyers or expert witnesses) as well as non-expert stakeholders (users, wider society).[10]
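
P7001’s measurable levels could, in principle, be captured as a simple specification that is agreed before development and checked afterwards. The sketch below assumes a 0–5 level scale and simplified stakeholder group names loosely based on those listed above; it illustrates the idea rather than the standard’s actual assessment procedure.

    from dataclasses import dataclass
    from typing import Dict, List

    STAKEHOLDERS = ("users", "wider_society", "safety_certifiers",
                    "accident_investigators", "expert_witnesses")

    @dataclass
    class TransparencySpec:
        required: Dict[str, int]   # stakeholder group -> level (0 = none, 5 = maximum)

        def shortfalls(self, achieved: Dict[str, int]) -> List[str]:
            # Groups whose achieved transparency level falls short of the
            # level specified before development began.
            return [g for g in STAKEHOLDERS
                    if achieved.get(g, 0) < self.required.get(g, 0)]

    spec = TransparencySpec(required={"users": 3, "safety_certifiers": 4})
    spec.shortfalls({"users": 3, "safety_certifiers": 2})   # -> ["safety_certifiers"]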
 

Failsafe Design – IEEE P7009 – Ken Wallace

IEEE’s P7009 standard for fail-safe design of autonomous and semi-autonomous systems is being developed to establish a baseline for the development, implementation and use of fail-safe mechanisms in these complex systems. It describes some of the key requirements and properties of these systems, provides tools to implement fail-safe mechanisms, and sets out methods to measure and certify the ability to fail safely. The standard will inform the design, testing and analysis of failsafe mechanisms, and the organisational safety processes to follow should a system fail. These mechanisms are essential, as autonomous systems can fail, often without a human on hand to recover, and there is a need to mitigate the risk of harm to people, society or the environment. It is intended that this standard is adapted for different sectors so they can define what is “safe enough” in each specific context. For example, the safety requirements for a self-driving car on a public road may differ from those for an autonomous robot in a nuclear facility.[11]
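
One common pattern consistent with these aims is a supervisory wrapper that forces a pre-defined safe state when the controller faults or misses its decision deadline. The sketch below is a minimal, hypothetical example; what counts as a safe state (here, simply stopping) is exactly the sector-specific judgement the standard anticipates.

    import time

    SAFE_STATE = {"velocity": 0.0, "brakes": "engaged"}  # sector-specific choice
    DECISION_DEADLINE_S = 0.5                            # illustrative bound

    class FailsafeWrapper:
        # Wraps an autonomous controller so that a fault or a missed decision
        # deadline results in a pre-defined safe state rather than no action.
        def __init__(self, controller):
            self.controller = controller

        def command(self, sensor_input):
            start = time.monotonic()
            try:
                action = self.controller.decide(sensor_input)
            except Exception:
                return SAFE_STATE   # controller fault: fail safe (and log/alert)
            if time.monotonic() - start > DECISION_DEADLINE_S:
                return SAFE_STATE   # decision arrived too late to act on safely
            return action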

 

Verifiability – IEEE P2817 – Signe Redfield

The development of IEEE’s P2817 guide for the verification of autonomous systems will enable users to define an appropriate multistep verification process for autonomous systems, based on the available tools, levels of transparency, and good practice. The guide provides resources on: formal methods, to provide strong evidence (mathematical proof) about the system; simulation, to understand behaviours in specific scenarios; stochastic methods, for probabilistic estimates of system behaviour; real-world testing, for higher-risk scenarios; and runtime verification, to ensure the system remains within predicted boundaries. The guide helps developers avoid common pitfalls in the collection, analysis and inbuilt assumptions underlying the evidence that the integrated system meets its design specification. It focuses on the functionality of, and decision-making processes within, an autonomous system, not the outcome.
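
The last of these, runtime verification, lends itself to a very small sketch: a monitor that checks the running system against the behavioural envelope predicted at design time. The monitored quantities and bounds below are illustrative assumptions, not taken from the guide.

    PREDICTED_BOUNDS = {
        "speed_mps": (0.0, 2.0),                      # design-time speed envelope
        "distance_to_human_m": (0.5, float("inf")),   # minimum separation
    }

    def within_predicted_boundaries(state: dict) -> bool:
        # True if every monitored quantity lies inside its predicted range;
        # the state dict is assumed to report every monitored quantity.
        return all(lo <= state[key] <= hi
                   for key, (lo, hi) in PREDICTED_BOUNDS.items())

    def runtime_verify(state: dict) -> None:
        if not within_predicted_boundaries(state):
            # The system has left its verified envelope: record the evidence
            # and hand over to a failsafe response (see P7009 above).
            raise RuntimeError(f"Runtime verification violation: {state}")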
 

 

Regulator challenges and the role for standards – Andrew White, ONR

The Office for Nuclear Regulation (ONR) has an “outcomes focused” approach to regulation, meaning they do not mandate or encourage licensees to adopt a specific standard or provide certification themselves. However, they do require the licensee to provide explicit evidence that demonstrates the system is safe.

For non-AI software systems, the ONR requires a safety case demonstrating assurance that a system’s risks have been reduced so far as is reasonably practicable (SFAIRP), and the system is expected to be primarily deterministic. AI systems are different and pose a challenge, as they are typically designed to be non-deterministic; as a result, evidence cannot always be provided for how the system will function in every possible situation. It is accepted that AI systems will fail, but it is not known how, which creates a further challenge in giving assurance of safety. Despite this, the ONR views AI and autonomous systems as worthwhile technologies to consider because of their potential benefits. There are few established standards relating to the safety of such systems.

The ONR takes the view that regulators should act as enablers, not prohibitors, of innovation. They define the acceptable level of risk and create regulatory sandboxes for innovation that help identify the questions that need to be answered together.

 

Discussion

Value

There was agreement that high-level principles are needed, as they capture the key issues arising from autonomous systems. Standards are helpful as they provide practical ways to assess a system and promote consistency. They are also not prescriptive, allowing consideration of the specific context and encouraging conversations about what is safe enough.

Autonomous systems create similar ethical challenges (avoidance of harm, fairness, transparency) across sectors. However, different sectors do have different needs depending on the context in which autonomous systems will be developed and deployed. Some cross-cutting principles, such as failsafes, will be more familiar in safety-critical domains, for example space or nuclear, but not all sectors will have the same starting point. So, while high-level principles are of value, it was agreed that sector-specific standards would be needed in addition.
 

Awareness and understanding

Regulator and developer awareness and understanding of emerging standards needs to be built up. There was agreement that regulators and developers lack the resources and time to learn about standards, which means they do not necessarily know what standards already exist or how best to apply them. With that lack of understanding, it was felt that the confidence and trust to implement and rely on a collection of technical standards was also missing. Regulators also lack understanding of AI, machine learning (ML) and autonomous systems and are unable to keep up with technological developments. This means there is a tendency to treat autonomous systems as traditional systems, which is a challenge for the reasons above.
 

Sharing good practice

Greater sharing of information and best practice is important to encourage the adoption of standards. In particular, regulators would benefit from upskilling in techniques and the key components of strong verification. As the implementation of standards is not formally enforced, it would be useful to encourage their use and adoption within industry.
 

Cross-sector collaboration

There is value in cross-sector conversation, as well as collaboration between all parties (regulators, innovators, standards developers, insurers, the legal profession), to understand and tackle challenges in ensuring the safety and effectiveness of autonomous systems. Currently there is no mandate for cross-sector collaboration between regulators, and it would be useful to encourage this. Often there are only a few individuals with expertise in autonomous systems in each organisation, and such connectivity would help to make best use of these scarce skills. Cross-sector collaboration may also help to navigate international regulatory differences by sharing an understanding of where the overarching principles remain the same, which can help build confidence in safety processes and encourage transferable learning.

Embedding ethical considerations

Adopting principle-based standards can encourage developers to consider the ethics of autonomous systems on a greater scale. There is sometimes a tension between the commercialisation of a product, ethical practice and beneficial outcomes. Ethical risk assessments are an emerging governance tool to help organisations work through the ethical implications of the systems they are developing or adopting, but uptake has been limited. Adoption of ethical standards may be encouraged by increasing ethical consumerism, where the products or services a consumer chooses are those that cause the least social or environmental damage.
 

Maturity of standards

The speed at which the technology can develop poses a challenge, as it is often faster than the development of both regulation and standards. Few mature standards for autonomous systems exist, and adoption of emerging standards needs to be encouraged through mechanisms such as regulation and procurement, for example by including the requirement to meet certain standards in procurement specifications.
 

Increasing uptake

Uptake can be a challenge where standards are not mandated by regulators. The ONR explained that it does not encourage licensees to adopt a particular standard, whereas some sectors will only adopt ISO standards. If there is limited knowledge of the range of standards that exist, developers will be less inclined to adopt good practice.
 

Clarity

The language used in standards poses a challenge, as it is either inconsistent or too complex, resulting in standards being difficult to interpret or interpreted inconsistently. Language across standards should be made consistent, with a need for common terminology. It would also be important to consider how language and approaches differ across sectors.
 

Missing principles/standards

Regulators and developers agreed that transparency, verification and failsafe design are important cross-cutting principles. However, other principles, such as design practice, operational contexts, human interaction (outside of human factors or machine learning explainability) and security, would also be valuable. Additionally, regulators would benefit from measures or forms of risk analysis: many still rely on predictable hazard analysis, but due to the uncertainty involved, such analyses are likely to be wrong in unpredictable ways for non-trivial autonomous system applications.

 

Call to action

Community

The Better Regulation Executive should work with the UK Regulators’ Network to encourage greater cross-sector collaboration on AI, ML and autonomous systems, to build a community to understand and tackle common challenges.

Regulator upskilling

There is a need for CPD courses that help regulators to better understand AI, ML and autonomous systems, as well as existing and emerging standards and how to adopt them. Language across standards should be made consistent to make it easier for users to understand and interpret standards produced by different bodies; this may require standardised terminology and collaboration to build a unified understanding. There is a potential role here for the emerging Institute for Regulation.

Principles and new standards

Standards bodies and regulators should work together to identify and develop usable standards beyond transparency, verification and failsafe design. These might cover principles such as design practice, operational context, human interaction and security.

Industry uptake

Regulators, Professional Engineering Institutions, Catapults and public procurement bodies should promote the adoption of standards that encourage safe and ethical development of autonomous systems.

 

 

 

Royal Academy of Engineering

 

The Royal Academy of Engineering is harnessing the power of engineering to build a sustainable society and an inclusive economy that works for everyone.

In collaboration with our Fellows and partners, we’re growing talent and developing skills for the future, driving innovation and building global partnerships, and influencing policy and engaging the public.

Together we’re working to tackle the greatest challenges of our age.

What we do

TALENT & DIVERSITY 

We’re growing talent by training, supporting, mentoring and funding the most talented and creative researchers, innovators and leaders from across the engineering profession.

We’re developing skills for the future by identifying the challenges of an ever-changing world and developing the skills and approaches we need to build a resilient and diverse engineering profession.

INNOVATION 

We’re driving innovation by investing in some of the country’s most creative and exciting engineering ideas and businesses.  

We’re building global partnerships that bring the world’s best engineers from industry, entrepreneurship and academia together to collaborate on creative innovations that address the greatest global challenges of our age.  

POLICY & ENGAGEMENT 

We’re influencing policy through the National Engineering Policy Centre – providing independent expert support to policymakers on issues of importance.    

We’re engaging the public by opening their eyes to the wonders of engineering and inspiring young people to become the next generation of engineers.
 

 

 

(November 2022)

 


[1] See https://royalsocietypublishing.org/doi/abs/10.1098/rsta.2020.0363 for a discussion of the different definitions and expectations of explainability.

[2] https://transform.england.nhs.uk/ai-lab/ai-lab-programmes/regulating-the-ai-ecosystem/the-multi-agency-advice-service-maas/

[3] Proposed Regulatory Framework for Modifications to Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD) (2019) U.S. Food and Drug Administration, https://www.fda.gov/files/medical%20devices/published/US-FDA-Artificial-Intelligence-and-Machine-Learning-Discussion-Paper.pdf

[4] Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD) Action Plan (2021) U.S. Food and Drug Administration, https://www.fda.gov/media/145022/download

[5] 

[6] The journey to an autonomous transport system, National Engineering Policy Centre (raeng.org.uk)

[7] Common Regulatory Capacity for AI (2022), Aitken, M. et al, The Alan Turing Institute https://doi.org/10.5281/zenodo.6838946

[8] “Trustworthy AI”. Raja Chatila, Virginia Dignum, Michael Fisher, Fosca Giannotti, Katharina Morik, Stuart Russell, Karen Yeung. In Reflections on Artificial Intelligence for Humanity, 2021. https://doi.org/10.1007/978-3-030-69128-8_2

[9] “Principles for the Development and Assurance of Autonomous Systems for Safe Use in Hazardous Environments”, Matt Luckcuck, Michael Fisher, Louise Dennis, Steve Frost, Andy White, Doug Styles. 2021.   https://doi.org/10.5281/zenodo.5012322

[10] “IEEE P7001: A Proposed Standard on Transparency”. Alan Winfield, Serena Booth, Louise Dennis, Takashi Egawa, Helen Hastie, Naomi Jacobs, Roderick Muttram, Joanna Olszewska, Fahimeh Rajabiyazdi, Andreas Theodorou, Mark Underwood, Robert Wortham, Eleanor Watson. Frontiers Robotics AI 8, 2021. https://doi.org/10.3389/frobt.2021.665729

[11] “Evolution of the IEEE P7009 Standard: Towards Fail-Safe Design of Autonomous Systems”. Marie Farrell, Matt Luckcuck, Laura Pullum, Michael Fisher, Ali Hessami, Danit Gal, Zvikomborero Murahwi, Ken Wallace. In Proc. ISSRE Workshops, 2021.  https://doi.org/10.1109/ISSREW53611.2021.00109