Written Evidence Submitted by Rolls-Royce Plc



About Rolls-Royce Plc.

Rolls-Royce is at the centre of UK aerospace and defence, pioneering the power that matters over land, sea, air and space. We are a global business, exporting over 75% of what we produce in the UK to customers in over 150 countries. For over a century Rolls-Royce has been at the forefront of UK manufacturing innovation, and last year we invested over £1.2 billion in R&D, with approximately half spent in the UK.


AI is heavily integrated into our processes and ensures that we maintain our position as one of the world's leading technology companies. Our R² Factory business leads much of the development of this technology, combining advanced data analytics techniques, artificial intelligence and machine learning with deep domain knowledge and systems engineering expertise.


Executive Summary

Artificial intelligence (AI) has become integral to parts of our business, including through our manufacturing processes and in the application of our products in their end environment. Deployed safely and effectively, AI can play an important role in the safety critical sectors in which Rolls-Royce operates, improving the quality of and confidence in our products as technology evolves.


Underpinning this is our belief in the need for an ethical approach to AI, which is why Rolls-Royce created the Aletheia Framework™, a free publicly available blueprint for designing ethical AI systems.


We support efforts to create the right framework for the application of AI, both now and for uses not yet known, that is underpinned by effective governance processes. This will provide both the confidence to innovate and develop trust in the use and deployment of AI.


Our response highlights three key areas we believe need to be considered in creating an effective governance framework for AI in the UK:


regulation and oversight for the better application of AI as technologies evolve: devolving regulatory oversight to existing industry regulators will help ensure that the application of AI suits the market it operates in and provides a suitable environment to manage technology evolution.


Responses to the Committee’s Questions


How effective is current governance of AI in the UK?


There is currently no central overarching legislative framework governing the application of AI in the UK. Instead, the AI regulatory framework is included within broader standards and regulations. This context-driven approach offers the flexibility for companies to use AI to innovate and grow, making the UK an attractive place to invest in the use of AI technologies.


When seeking to regulate aspects of AI technologies not currently covered by UK regulations, such as the use of AI in monitoring and inspecting the manufacturing of turbine blades, and other safety critical developments, it is important to maintain the balance between ensuring that the technologies are safe and accountable and providing the freedom to innovate. Losing ground in this space may put the UK at a competitive disadvantage as a place to invest as AI applications grow.


One weakness of the current approach, of regulating through existing non-AI regulation, is its limited ability to evolve with AI where future applications are as yet unknown. The regulatory framework should be able to adapt as AI technology develops new models and opportunities. This could be supported through a quality assurance framework that captures, for example, design, data management, testing and ethics, allowing companies space to innovate without being so prescriptive that it stifles advancement. We believe oversight of this should sit at sector level rather than be held centrally, so that the application of AI is both prepared for and responsive to change in the right environment.


A second weakness is around AI autonomy, where it is not clear where accountability and transparency governing the use of AI to perform tasks without human guidance reside. This is particularly important for companies that work in safety critical sectors where the use of AI could improve safety and efficiency but is not currently allowed. As part of any assurance programme surrounding the technology, we recommend there be an 'accountable person' who is responsible for that technology and who works with regulators on the application of that AI.


Sector-level responsibility builds on the existing network of regulators already operating to make the development and delivery of products safe. In aviation, the Civil Aviation Authority (CAA) has oversight of safety in UK aviation. Oversight of AI applications could fit within the CAA's governance systems, making use of the CAA's existing expertise and engagement with industry to ensure AI technology is applied and regulated to the advancement of UK aviation, particularly as the technology evolves.


What measures could make the use of AI more transparent and explainable to the public?


Safety is paramount for our business. It drives our approach to product design and manufacturing so that we can be confident that our people and products are safe. The safety critical environments we operate in mean that product failure can have significant consequences; public trust erodes quickly and is difficult to restore.


A strong governance framework around the development and use of AI - especially among those parts of the economy that might be reluctant to use it and in safety critical sectors - could help foster public confidence. A common quality assurance framework that includes a certification regime could provide a mark of confidence, similar to the use of the 'Red Tractor' in the UK food system. At Rolls-Royce we have developed the Aletheia Framework™, a free-to-use toolkit against which developers can check their processes. This is based around three core areas: 1) the social impact of the AI being developed, 2) trust in the safety and application of the AI being developed, and 3) the governance around ownership and accountability. The Aletheia Framework™ is freely available on Rolls-Royce's website. Beyond this, an AI ethical framework could form one part of a wider quality assurance framework.





These areas provide an accountable and transparent framework that can adapt as the technology evolves. It is also self-regulating and provides a degree of autonomy and choice to the developer. For example, we accept that AI systems must provide for transparency and traceability of their design, inputs and outputs. Our framework recommends that all AI programmes are “assessed for any bias or discrimination impact and their provenance shall be clearly stated to enable any future Root Cause Analysis or troubleshooting”. Subjecting AI to stringent bias checks plays an important role in securing the public’s confidence in our AI programmes and the Aletheia Framework™ includes an AI bias assessment tool.


Moreover, to ensure there is a human point of control for technology that operates without human interaction, our framework allocates responsibility for individual AI programmes, proposing that "ultimate accountability for the outcomes of the AI system needs to be clearly stated with a business owner clearly identified." Accountability within AI governance is important to mitigate contention and promote transparency.


This is one example of voluntary regulation that could be encouraged at national level, developed and adopted at sector level, and implemented at the individual level. Another would be universal global standards that put companies from different jurisdictions on a more equal footing and open up opportunities to trade across different global markets. More universal approaches would help socialise the application of AI around the world. The UK already has a world-leading reputation for regulatory standards, through which standards around AI could be incorporated.


How should decisions involving AI be reviewed and scrutinised in both public and private sectors?


We support the devolution of regulation and oversight to sector-specific regulators so that it can keep pace with changes and developments in a specific sector and environment. Where regulators do not currently exist, the requirement for one should be reviewed, and establishing one should be encouraged and supported where needed. These organisations are closer to the businesses they would oversee than national Government, increasing the scope for collaboration and dialogue that will help oversight evolve as technology advances.


Central Government can play an important role in setting the principles through which AI regulation is developed and governed. Establishing suitable procedures to develop, implement, monitor and act on regulatory frameworks within each sector could provide common outlooks and action from sector bodies, providing a degree of consistency across industry. Through the AI Council the Government already has an industry link, and we encourage the Government to make sure that the Council represents a cross-section of industry, including a mix of those in safety and non-safety critical industries. It is time for the AI Council's membership to also reflect the 'real economy'.


A strong governance framework led by industry that promotes the safe, ethical and responsible development and use of AI would show global leadership on this issue and help encourage investment into the UK around AI technologies.


How should the use of AI be regulated, and which body or bodies should provide regulatory oversight?



Rolls-Royce supports the context-driven approach proposed by Government and proposes that responsibility for delivery is devolved to the appropriate sector bodies or, where there are none, to a body established for this purpose.


Rolls-Royce operates across different sectors and markets, exposing the business to different types of regulation and enforcement. We believe that a catch-all legislative framework for the application of AI would limit innovation and the attractiveness of the UK as a place to invest in AI technology.


We support regulation of AI based on a series of universal principles that govern the sector-based regulatory framework around AI. From these, sector bodies are well placed to develop the context-specific regulations and procedures necessary to ensure the safe and ethical application of AI within their area of competency. This would allow different levels of oversight to be applied based on need, for example an additional level of scrutiny for AI in safety critical environments that is not required in non-safety critical environments.


Self-regulation and certification governed through appropriate frameworks would also add an additional layer of oversight that could both protect the development of AI and foster trust in its use.


To what extent is the legal framework for the use of AI, especially in making decisions, fit for purpose?


The legal framework that surrounds AI offers a situation-specific approach that allows for innovation and the development of novel AI. We believe there are three areas of scope that could strengthen existing provisions:



(November 2022)