DAIC0017
Written evidence submitted by Professor Trevor Taylor.
Artificial intelligence (AI) is perhaps the most forward-looking area of technology: a huge amount is written and said about what it could lead to, while very little consideration is given to how it is already used. The Ministry of Defence told this Committee last year that:
Artificial Intelligence (AI) is a general-purpose enabling technology that has the potential to transform every aspect of Defence, from back-office corporate services to the delivery of military effect.[1]
However, when defence definitions of AI are analysed, the long-established defence exploitation of AI becomes apparent.
Defence understands Artificial Intelligence (AI) as a family of general-purpose technologies, any of which may enable machines to perform tasks normally requiring human or biological intelligence, especially when the machines learn from data how to do those tasks.
The pedantic observer would note that the first quotation treats AI as a single entity whereas the second refers to it as a ‘family’ of technologies.[2] My colleague James Sullivan has generated a long but insightful list of AI types and techniques.
To focus next on the criterion of replacing human intelligence, it becomes clear that weapons with internal guidance systems, and radars that classify objects using stored information in their libraries, are AI-based systems that have been around for decades. Many UK ships have long carried first the Goalkeeper and then the Phalanx point-defence system which, when placed in automatic mode, senses, analyses, decides and fires at an incoming target. Either the UK statement on Ambitious, Safe and Responsible AI overlooked this fact (when it claimed ‘the United Kingdom does not possess fully autonomous weapon systems and has no intention of developing them’) or it relied on the chance to take refuge, if needed, in its reference to ‘context-appropriate human involvement’.[3] We can expect that defensive weapons especially will be as autonomous as necessary in the light of the time needed for an effective decision.
The UK has a good track record in the generation of smart weapons: Storm Shadow, Meteor, Sea Viper and Brimstone (which can operate as an autonomous loitering munition) are cases in point. This would suggest the country may be well placed to take advantage of some AI technology advances. However, it also underlines that the private sector, including established defence companies, holds much of the country’s expertise, especially when it comes to using advanced sensors and data analysis to improve the availability and support costs of equipment (the condition-based maintenance domain).
To offer a layman’s perspective, an AI system has three components. The first is data, which is key: it often needs to be curated into standard forms and to be selected as at least potentially relevant to an issue. The second is the compute facility that stores and processes the data. Thirdly, instructions to the computer (software) must be generated, which at least at some stage requires human intellect. To ‘learn’, machines also need feedback on the accuracy of their previous efforts, which can also require human action. The more data needed and available, the larger the compute facilities that need to be available, which has prompted controversy in the US about the procurement of cloud facilities. This should draw attention to the attributes of ‘AI companies’: do they have the data and compute facilities as well as suitable people?
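For readers who want a concrete picture, the three components and the feedback loop described above can be illustrated with a deliberately simple sketch. This is purely illustrative (a toy linear classifier on invented data, not any defence system): the labelled examples stand in for curated data, the repeated passes stand in for the compute load, the functions stand in for the human-written software, and the error term is the feedback from which the machine ‘learns’.

```python
# Illustrative sketch only: a minimal 'learning machine' showing the three
# components described above -- data, software instructions, and feedback.

# Component 1: curated data -- each example is (features, correct label).
# These numbers are invented purely for illustration.
data = [([2.0, 1.0], 1), ([1.0, 3.0], 0), ([3.0, 0.5], 1), ([0.5, 2.5], 0)]

# Component 3: human-written software instructions -- a trivial classifier.
weights = [0.0, 0.0]

def predict(features):
    score = sum(w * x for w, x in zip(weights, features))
    return 1 if score > 0 else 0

# 'Learning': feedback on each previous effort adjusts the weights.
# Component 2, the compute facility, does the repeated passes over the data.
for _ in range(20):
    for features, label in data:
        error = label - predict(features)  # feedback: was the machine right?
        for i, x in enumerate(features):
            weights[i] += 0.1 * error * x

accuracy = sum(predict(f) == y for f, y in data) / len(data)
```

On this tiny, cleanly separable dataset the classifier ends up labelling every example correctly; the point is not the method but that without curated data, compute and feedback, nothing is learned.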
An AI system’s utility fundamentally depends on the problems directed at it. A recent publication from McKinsey underlined that ‘for digital and AI transformations to succeed, companies need to understand the problems they want to solve’.[4]
There are islands of keen interest in AI within governmental defence, not least in the intelligence analysis world, which knows it has huge volumes of data to deal with. However, my survey of government material suggests that, at the top level such as in the Defence AI Strategy, there is a lack of some of the precision and commitment to specifics that can be found in Department of Defense material. The five-year US Defense Artificial Intelligence Strategy shows quite specific awareness of where benefits can be generated:
improving situational awareness and decision-making, increasing the safety of operating equipment, implementing predictive maintenance and supply, and streamlining business processes. We will prioritize the fielding of AI systems that augment the capabilities of our personnel by offloading tedious cognitive or physical tasks and introducing new ways of working. [5]
DoD authorities also point to the need for extensive training on AI for the defence establishment as a whole, so that those with problems can understand how AI could help with analysis, understanding and prescription.[6] AI, like other tools, needs prepared and intelligent users. I have not observed this stress on user training and education in the UK, and a glance at the syllabuses for the Advanced Command & Staff Course and other UK staff courses that are available online reveals little or no attention being given to the topic.[7]
To give an example of a defence issue where the MoD must have a lot of data, a major issue for the Navy is its difficulties in recruiting and retaining people. Are those concerned with dealing with this getting guidance on whether and how computerised data analysis could help with understanding why this is the case and what could improve things?
A final observation is that most AI advances have been generated either in a self-contained context (e.g. successful game playing) or at least in benevolent environments (such as the driver support elements on cars). A fundamental consideration for defence applications is how they will fare when faced by a capable and determined adversary. In defence, every advance generates counters, either quickly or over a period. American defence doctrine stresses that defence AI applications must be ‘resilient’. This factor always needs to be taken into account when thinking about how advances in the civil world can be transferred across, alongside an assessment of the relative consequences of the computer being confused or simply wrong.
18th January 2024
[1] MoD evidence June 2023.pdf
[3] 20220614-Ambitious Safe and Responsible_Final (publishing.service.gov.uk)
[4] How to succeed in digital and AI transformations | McKinsey
[5] Summary of the 2018 Department of Defense Artificial Intelligence Strategy
[6] Summary of the 2018 Department of Defense Artificial Intelligence Strategy