Written Evidence Submitted by ADS
(GAI0027)
INTRODUCTION
ADS is the trade association for the UK’s aerospace, defence, security, and space industries. ADS has more than 1,100 member companies across all four sectors, with over 95% identified as small and medium-sized enterprises (SMEs). The UK is a world leader in the supply of aerospace, defence, security and space products and services. From technology and exports to apprenticeships and investment, our sectors are vital to the UK’s growth – generating £77 billion in turnover a year in the UK, including £34 billion in exports, and supporting one million jobs.
1.1. The current governance of AI in the UK is a patchwork of regulatory and legal regimes built for other purposes, such as data protection, which only partially capture the use of AI. While this has not significantly hindered the AI ecosystem, there is an urgent need for a joined-up, coherent framework for governing the use of AI that addresses these gaps and inconsistencies.
1.2. ADS supports the Government’s approach as set out in the policy paper Establishing a pro-innovation approach to regulating AI, which is light-touch and context-sensitive, and which empowers sectoral regulators rather than creating an entirely new regulatory body. It is important that regulation focuses on higher-risk applications, so that innovation can continue to thrive in areas of lower risk. However, there are also areas where there is insufficient legal clarity or public support for the use of AI, and these may require greater intervention.
1.3. It is therefore crucial for the Government to bring forward its fully developed proposals for regulating the use of AI as soon as possible through its promised White Paper. The current uncertainty over the regulation of AI deters investment and innovation in AI systems and risks the UK falling behind its competitors.
1.4. One example of where this lack of clarity has held back the growth of AI use cases is safety-critical AI in aviation. As with the issues facing the widespread adoption of self-driving cars, the absence of AI-specific standards and regulations has prevented the adoption of safety-critical AI in aviation.
1.5. Leading UK aerospace companies have, for example, developed safety-critical AI technologies that can monitor the health of engines and make autonomous decisions on their operation. However, while the technology is relatively mature, the governance framework is not, holding back development and risking that these emerging use cases are pursued in other markets instead.
1.6. The international dimension of aviation and aerospace cannot be ignored, and there will be other sectors where similar issues arise. Because decision-making for aviation is international in nature, international agreement would be needed to take forward the deployment of safety-critical AI in aviation. This underlines the need for the UK Government to take a light-touch approach to governing the use of AI at the outset, while being ready to step in, for example by developing more robust regulatory frameworks and shaping international standards, when the growth of the market requires it.
2.1. The AI ‘black box’ problem is well known and presents a significant challenge to the widespread use of AI. While there are important commercial sensitivities to respect, questions surrounding accountability and routes to redress must be addressed to give customers and the public the confidence needed to adopt AI technologies. With that in mind, ADS supports the principle set out in the Government’s policy paper that routes to redress and/or contestability must be clarified in relevant situations, for example safety-critical AI in aviation.
2.2. AI offers significant benefits for the UK national security community, for example by rapidly deriving insights from large, disconnected datasets. However, in these circumstances there will rightly be important privacy and human rights considerations that must be addressed by the regulatory framework. For both commercial and national security reasons, it may not be possible to reveal how industrially developed AI capabilities will be used by the national security community, but it is crucial that there is enhanced policy and guidance to give the public confidence that the privacy and human rights implications of national security uses of AI are reviewed on a regular and robust basis.
3.1. The application of AI in decision-making is critical to meeting the ever-growing data challenge that faces both companies and Governments. In policing, for example, the exponential growth of digital forensic evidence poses a mission-critical challenge that can only be answered through the structured use of machine learning (ML)-driven AI to process huge datasets. However, as the use of AI-driven decision-making increases, and especially as it moves beyond routine decisions, it is important that the accompanying structures for review and scrutiny are updated at the same time. Without this, public confidence in the use of AI will be fatally undermined.
3.2. The existing mechanisms for review through current regulators and routes such as judicial review are already relatively robust, but it will be critical to ensure that the body undertaking reviews has sufficient knowledge of AI systems and the principles underpinning their use. For this reason, it is essential for regulators to engage with industry and encourage knowledge sharing, as it will be impossible for regulators to keep ahead of technological trends without industry insights. This will also place an obligation on industry that ADS members will be happy to meet.
3.3. In updating structures for reviewing the use of AI, the Government should therefore engage with industries that are already actively utilising AI, to learn how companies are navigating the complexities of applying AI in a way that builds public trust. For example, the Aletheia Framework, developed by Rolls-Royce, is a toolkit for ethics and trustworthiness in AI from which regulators should be open to learning lessons.
4.1. ADS supports the approach set out in the Government’s policy paper that the use of AI, rather than the technology itself, should be the focus of regulation and that a light-touch, context-sensitive approach should be taken in the first instance, focused primarily on high-risk concerns.
4.2. The proposal that existing regulators should be empowered to regulate the use of AI, rather than a new, singular AI regulator being created, is appropriate given the very wide array of applications across ADS’s four sectors, let alone other industries. For example, the Civil Aviation Authority (CAA) is best placed to understand the implications of potential use cases of AI in aerospace and aviation, given its central role in overseeing the safe operation of the industry. What is crucial is that regulatory bodies such as the CAA are resourced accordingly, including by equipping them with new skillsets, to manage the increased demands that will inevitably come with the growing use of AI in new circumstances.
4.3. Notwithstanding this, as noted in paragraphs 1.3 and 1.4, there will be specific areas, for example safety-critical uses, where public trust is especially crucial to the widespread adoption, and therefore growth, of new AI technologies and markets. In these areas the most relevant existing regulator will need to work with industry to develop comprehensive and robust regulatory frameworks that give both the public and industry itself the confidence to mature specific AI technologies.
4.4. An effective AI assurance market will be crucial for giving potential customers and the public confidence in how risks will be managed at different points in the AI lifecycle. However, in regulating the use of AI across its lifecycle, care should be taken to focus on high-risk concerns rather than hypothetical ones.
4.5. Given the risks of overlap and confusion, it will be important for regulators to coordinate their approaches, especially in convergent use cases. For that reason, bodies such as the Digital Regulation Cooperation Forum must be fully utilised to promote coordination, knowledge sharing and best practice.
5.1. The existing legal framework lacks clarity: it is often ambiguous how current UK laws will apply to AI. This discourages many businesses, especially SMEs, from developing novel use cases of AI, because of risk aversion and a fear of potential legal action. Potential gaps in the UK’s legal approach can also harm public trust in the use of AI. Overall, this may limit the growth of the sector in the UK in comparison to countries with greater legal clarity and/or flexibility.
5.2. It is worth noting the link between AI, machine learning and Big Data. AI is an overarching term for computer systems that perform tasks normally associated with human intelligence. Machine learning is a subset of AI that gives computers the ability to learn algorithmically when exposed to new data, without being directly programmed. Big Data typically refers to the aggregation of large volumes of data by computer systems, which can then be mined for insights. The key point is that for AI systems to become more effective, they typically require access to large amounts of data.
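Purely by way of illustration, the short Python sketch below makes this distinction concrete: a machine learning model infers a classification rule from example data rather than being explicitly programmed with one. The scenario, the sensor values and the use of the open-source scikit-learn library are hypothetical choices made for this illustration, loosely echoing the engine health monitoring example in paragraph 1.5.

```python
# Illustrative only: a machine learning model infers a decision rule
# from example data instead of being explicitly programmed with one.
# The sensor readings and labels below are hypothetical.
from sklearn.tree import DecisionTreeClassifier

# Hypothetical engine sensor readings: [temperature (C), vibration (mm/s)]
readings = [[650, 0.2], [700, 0.3], [900, 1.1], [950, 1.4]]
labels = [0, 0, 1, 1]  # 0 = healthy, 1 = degraded

# No threshold is hand-coded; the model learns one from the data.
model = DecisionTreeClassifier().fit(readings, labels)

# Classify two unseen readings using the learned rule.
print(model.predict([[680, 0.25], [920, 1.2]]))  # expected: [0 1]
```

The same principle applies at the scale of the huge datasets described above, which is why access to large amounts of data is so closely tied to the effectiveness of AI systems.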
5.3. If the UK wishes to remain at the cutting edge of AI development and retain a strategic advantage, its legal and regulatory framework must ensure that machine learning-driven AI systems can legally and ethically utilise Big Data. This must strike a balance between utility and data protection and privacy concerns, as well as addressing potential copyright issues related to open-source data. Leaning too far in one direction would risk undermining public support for the use of AI; leaning too far in the other would undermine the UK’s advantage. Against the UK’s competitors in allied and friendly nations, a framework unfriendly to business innovation could harm the UK’s prospects for export success and economic prosperity. Against competitors in hostile nations, which do not operate under the same ethical or legal constraints on the use of Big Data, such a framework could pose severe national security risks.
6.1. The UK’s regulatory and legal framework for the use of AI must be globally competitive, to encourage businesses to invest in the development of AI in the UK rather than elsewhere, while retaining public confidence in the use of AI. If the UK’s framework is not deemed sufficiently attractive, businesses will conduct AI product development elsewhere, and if the UK falls behind in this crucial market, it may be forced to license expensive, ‘black box’ AI technologies, which may exacerbate public concerns about the use of AI in decision-making.
6.2. If the UK wishes to ensure genuine leadership in AI, it also needs to invest in the technical skills required for the UK to produce core AI frameworks rather than relying on frameworks developed by other countries. At their core, AI systems are closely linked to data science and advanced mathematics. To regulate the use of AI effectively, the UK must ensure its regulators have the skillsets required to produce rather than adopt frameworks. This will give greater assurance, security, and public confidence in the use of AI in the UK.
6.3. The EU has set out its own approach to regulating the use of AI systems through the EU AI Act. This will take a more interventionist approach to regulation, with AI systems in regulated sectors potentially significantly affected. Like the EU’s GDPR legislation, it will also have extraterritorial effects that will potentially reach UK businesses providing AI systems in the EU Single Market. The EU AI Act will also require EU member states to nominate one or more national competent authorities to enforce regulation at the national level, overseen by an overarching European AI Board, and to regulate AI systems according to a four-tiered risk framework, the highest tier of which prohibits use altogether. For example, the use of AI in facial recognition systems in public places would be banned outright, which could have public safety implications.
6.4. ADS members believe that this EU approach is overly heavy-handed and that the UK’s light-touch, context-sensitive approach is more appropriate for handling this rapidly evolving technology. In some senses, the EU is trying to adapt a product liability approach and apply it generically to AI, but this risks over-regulating low-risk uses of AI. Compliance will also be potentially burdensome for SMEs.
6.5. In the USA, the White House has recently published a Blueprint for an AI Bill of Rights. This sets out an approach similar to the UK’s, principles-based and light-touch, which ADS members operating in the US market have welcomed.
6.6. Given the global nature of the AI market, where the underlying datasets are often drawn from multiple jurisdictions, there is a risk of regulatory divergence, and the potential that the EU approach, due to its extraterritorial nature, sets a global standard that UK companies choose to comply with voluntarily, as happened with GDPR. For this reason, the UK must engage closely with both the EU and the USA to promote an interoperable approach to regulating the use of AI.
(November 2022)