DAIC0010
Written evidence submitted by Palantir Technologies UK, Ltd.
Introduction
- We appreciate the opportunity to provide evidence to the House of Commons Defence Sub-Committee’s inquiry into developing artificial intelligence (“AI”) capacity and expertise in UK defence.
- After introducing Palantir Technologies (“Palantir”), we provide our high-level perspectives on AI’s defence implications, before answering the Sub-Committee’s questions. In response to question one, we suggest that the Ministry of Defence’s AI challenge is not one of strategy- and priority-setting, but of delivery, primarily reflecting the immense technical challenge of fielding AI operationally and at scale. Our answer to question two recommends that the Ministry of Defence (“the MOD”) focus not only on building defence AI capacity and expertise, but on leveraging the UK’s much wider AI capability base as a source of defence advantage. As we describe in response to question three, this means establishing the foundational digital infrastructure that AI and UK AI firms depend upon and thereby the means by which UK AI firms can “field-to-learn.” Our answer to question four describes some of the obstacles preventing UK AI firms from fully participating in defence supply chains, and the opportunity to catalyse new private investment in defence AI. Namely, if the MOD can provide UK AI firms with a path to operational deployment and ongoing commercial success, those firms (and firms not yet in existence) will create investable opportunities for private capital, both shoring up the UK’s AI industrial base and bolstering its economy.
About Palantir
- Palantir builds commercially-available software platforms that enable public, private, and non-governmental organisations to securely integrate data and AI models, helping them to make better decisions and operate more effectively. Though US-headquartered, London is home to our second-largest office and one of our two primary research and development centres, providing approximately 850 highly-skilled jobs.
- The company was founded in 2003, with the mission of enabling national security institutions to use their data more effectively and responsibly. Today, our software is deployed across every field of industry and most domains of government.
- UK and allied armed forces use Palantir’s software to integrate disparate and heterogeneous data and myriad AI technologies in support of strategic, tactical and operational military functions, while enabling adherence to complex security, legal, and ethical requirements.
- In addition, firms in the UK’s defence-industrial base, and an ecosystem of UK software developers, consultancies and AI model developers, use the data foundation and security infrastructure provided by Palantir’s software (which is open and modular) to deploy and manage their own products, rapidly and at scale across the armed forces. In doing so, they leverage Palantir’s twenty years of software engineering focussed on overcoming the challenges involved in deploying AI and software capabilities across complex national security institutions. Through Palantir Government Web Services, we are further expanding these deployment capabilities, allowing UK software companies and AI firms to circumvent the challenges described, field their capabilities faster, and achieve commercial success by working with defence.[1]
Background
Faster and better decision-making, at strategic, operational, and tactical levels
- Where appropriately used, AI technologies provide military personnel with the ability to make and execute better decisions at greater speed. In this way, they have the potential to systematically augment the efficiency and effectiveness of UK defence capabilities.
- Recent developments in generative AI, including large language models (“LLMs”) and other technologies enabled by developments in transformer-based architectures, underline this effect. We are in the foothills of these technologies’ potential, but what is clear is that they augment human productivity in ways that will have profound implications for defence. In large part, this is because they dramatically lower the threshold of technical proficiency required for personnel to engage with complex and voluminous data in sophisticated ways.
Unless the UK improves its ability to field AI, adversaries’ AI capabilities, together with their faster rates of adoption, will level the playing field and erode the UK’s international position
- These advancements have been driven primarily by research, investment, and commercialisation in the West (chiefly in the United States). And yet, while offering a source of Western military advantage, these technologies’ availability to our adversaries, in combination with those countries’ home-grown capabilities and faster rates of adoption, risk levelling the playing field.
- Among other things, this is because, unless the MOD keeps pace, adversaries’ AI capabilities are likely to diminish the impact of pre-existing UK military hardware. While the UK’s armed forces possess some of the world’s most advanced capabilities, increasingly accessible and capable AI-driven systems can disrupt and even overwhelm these traditional platforms, eroding a military advantage secured through many decades of investment.
- We see these dynamics in Ukraine. While featuring the trench warfare and artillery fire that characterised European land wars of the twentieth century, the war between Russia and Ukraine has shown just how impactful software and AI can be in levelling the playing field between otherwise asymmetrical military forces.[2] This levelling may not always be to the West’s advantage.
- Underlying this risk is the concern that, while the West may be a leader in most AI technologies, its adversaries often hold an adoption advantage.[3] In addition to their access to AI capabilities developed in the West (hard to prevent), these countries have their own homegrown capabilities (in China’s case formidable), face fewer constraints in developing and deploying them, are incentivised by more aggressive threat postures, and are more willing to adapt and deploy off-the-shelf commercial products. Relatedly, their militaries often have greater, even if coerced, access to their economies’ wider industrial base. They have avoided the civilian-military divergence that characterises Western defence-industrial bases and which obstructs Western militaries’ access to the full range of AI capabilities available in the marketplace.
Question 1: How clearly has the Ministry of Defence set out its priorities for the kind of AI capacity and expertise it believes the UK defence sector should have, what priorities has it identified, and are these deliverable?
- Since at least 2021, UK Government and MOD publications have clearly identified the imperatives described above. But the immense technical challenges involved in fielding AI at scale, together with a range of commercial obstacles, continue to obstruct delivery.
- As regards priority-setting, the 2023 Integrated Review Refresh states that “the UK’s overriding priority … remains generating strategic advantage through science and technology,” with AI identified as one of the priority technologies.[4] The MOD’s Defence Command Paper Refresh (“the DCPR”) echoes this position, stating that “this will mean shifting our thinking to fully integrate both steel and software,”[5] and further asserting that “our response must be rapid, ambitious, and comprehensive.”[6]
- These publications also identify the broad means of delivery. The DCPR commits to accelerating digital transformation through its “Digital Backbone” and “evolving the MOD communications system and cloud architecture to enable game-changing data-led analysis and decision-making.”[7] The Defence AI Strategy provides a further level of detail, describing the need for data strategies, data registers, better data sharing arrangements with allies and new protocols for data integrity and veracity, to overcome the fact that “our vast data resources are too often stove-piped, badly curated, undervalued and even discarded”.[8]
- The MOD’s challenge is not one of strategic direction or priority-setting – it is one of implementation at the necessary pace.
- Although perhaps reflecting that the publication is now 18 months old, capabilities described in the Defence AI Strategy as “AI Next,” including intelligent imagery analysis, all-source intelligence fusion, and talent management, are already being deployed by other militaries, with relevant analogues also widespread in private-sector enterprise settings. The use of AI in operational-level planning, described as “AI Future,” has now also been proven at scale in military settings, including in Ukraine.
- Limited progress in fielding AI capabilities reflects MOD challenges in establishing the technical foundations required for evaluating, acquiring and fielding operationally meaningful AI, many of them encompassed by the MOD’s “Digital Backbone.” Of these publicly-reported foundational infrastructure investments, MODnet Evolve is rated red (“unachievable”) by the Infrastructure & Projects Authority,[9] while the SECRET cloud infrastructure anticipated by the Defence AI Strategy for the end of 2022[10] is now targeting full operating capability by the end of 2024.[11] The software components of Morpheus, a foundational Army battlefield network, have also been beset by challenges; the programme is one of a number of examples leading the House of Commons Defence Committee to observe that “The Department is not yet able to share and exploit data across the Armed Forces and with partners effectively enough.”[12]
Question 2: What strengths and expertise does UK industry currently have in the field of Artificial Intelligence with defence applications?
- The UK has considerable AI strengths, rightly identified by the Defence AI Strategy as a “strategic asset in its own right.”[13] It also has strengths in defence AI capabilities specifically, although the nature of these capabilities means they cannot be described publicly or easily compared with other countries’ defence AI capabilities (similarly not publicly disclosed).
- The MOD’s objective should be to establish the means, technical and commercial, by which it can support and exploit the wider “strategic asset” of the UK’s AI capability base – most of which is not defence-focussed and does not currently view the MOD as a potential customer – as a source of defence advantage. Our answer to question three focuses on these means.
Question 3: How can the UK Government best develop capacity and expertise within domestic industry in sectors such as engineering and software to support the development and delivery of Artificial Intelligence applications in defence?
- The best means by which UK AI firms can develop defence AI capacity and expertise is through live deployment and an approach of “field-to-learn”. In terms of the question, the UK Government (specifically the MOD) can support this by being a more accessible and collaborative AI customer – specifically, by investing in the foundational digital infrastructure required for operationally meaningful AI, and through which UK AI firms can field-to-learn and achieve commercial success.
Invest in the foundational digital infrastructure that UK AI firms and their capabilities depend upon
- A range of technical barriers stand in the way of UK AI firms seeking to field their capabilities and grow their businesses with the MOD. These include poor data access and interoperability, no or limited means of deploying AI on classified networks, and the difficulty of monitoring and iterating on AI model performance over time.
- These barriers reflect not just the MOD’s cultural and institutional complexities, but immense technical challenges.
- AI capabilities are not plug-and-play tools that work out of the box. To be truly operational (rather than decorative and brittle), they must be embedded in an organisation’s technical infrastructure, and in specific operational decision-making processes. They need access to robust and appropriately-governed datasets, and to sufficient compute resources (for example, graphics processing units). They require continuous testing and evaluation, and the ability to evolve over time in response to changing conditions and user feedback. Further, this activity needs to be tracked through a persistent and reliable audit trail, with military and civilian stakeholders ranging from data scientists to legal advisors and deployed personnel all involved.
- Further, the MOD needs means of deploying these enabling technologies across thousands of distinct technical environments at all classification levels. Every warship, fighter jet and tank, every data centre and cloud, and potentially every soldier, sailor and airman is a distinct environment.
- While to some extent recognised by the MOD (see our previous answer), these challenges require both greater realism and significantly greater MOD investment. Overcoming them is essential if the MOD is to leverage and thereby support UK AI firms’ capacity, and deploy AI at scale in an operationally meaningful way.
- An analogy may be drawn with the deployment of sensors or antennae into orbit. These sensors and antennae are positioned on a satellite, carried to orbit in the tip of a much larger rocket, launched from a spaceport. Once the rocket fairings have fallen away and the satellite’s solar arrays have unfurled, they are networked with other systems, on the ground and in space, and then monitored and updated throughout their lifetime. The sensors or antennae may perform the ultimate intended function – whether weather forecasting, earth observation, or satellite television – but, however exquisite as independent units, their true utility and power emerges as components of a much larger, interconnected enterprise.
- While a loose analogy, our point is that the MOD requires not only the individual sensor or antenna, but also the satellite, rocket, spaceport, and other surrounding technology components – where there are not just a few types of antennae and sensors for a limited range of functions, but a plethora of AI technologies, supporting missions ranging from imagery and acoustic analysis, to the LLM-based production of intelligence assessments, and the use of physics-based models in maintenance optimisation. The critical importance of this wider foundational infrastructure is too often under-appreciated.
The example of the U.S. Defense Department’s Algorithmic Warfare Cross-Functional Team, and the significance of field-to-learn
- The U.S. Department of Defense (“the DOD”) Algorithmic Warfare Cross-Functional Team (“the AWCFT”) provides a compelling illustration of the kind of foundational infrastructure and field-to-learn approach we believe is required.
- The AWCFT’s objective is to “accelerate the DOD’s integration of big data and machine learning.”[14] Its technical infrastructure provides vetted AI firms with controlled access to training datasets, capabilities through which AI firms and military end-users can evaluate models in specific operational contexts (on a “test-fix-test” basis), and the means of fielding models for operational use and ongoing iteration.
- This approach enables both technical innovation and ethical boundary-setting, with technologists, policy-makers, end-users, and suppliers exposed to the specific challenges of AI deployment and use – a more effective foundation for learning than theoretical exercises disconnected from operational settings.
- It also performs a critical commercial and economic function. For the DOD, the AWCFT provides the technical means by which it can evaluate the full range of AI capabilities potentially available to it for a given function, whether developed by a traditional defence contractor, a technology hyperscaler, an academic researcher, or an emerging start-up. For AI firms, the programme provides a front door to the DOD, a means of building capacity through live deployment in the field and so, in turn, the commercial success required to catalyse further private investment and the creation of new firms.
- The UK lacks this marketplace. The MOD has few means of identifying, evaluating, and then acquiring AI models on a meritocratic basis, while UK AI firms lack a pathway from test and evaluation to live deployment, learning in the field, and commercial success.
Introduce a more programmatic, continuous approach to security accreditation
- A related set of challenges arise from MOD and UK Government approaches to security accreditation.
- At present, software and AI capabilities are accredited on a use-case by use-case basis by the MOD, according to sets of principles set by organisations such as the National Cyber Security Centre (“the NCSC”). Additionally – and separately – both the supplier itself and the supplier’s capabilities are accredited through a further set of processes.
- These processes are time-consuming and expensive, for both the MOD and UK AI firms.
- We believe the modern security landscape requires a continuous assessment approach, enabled by programmatic security controls and consistent environmental monitoring. These controls become possible as the MOD (and indeed the wider UK public sector, with NCSC oversight) adopts Secure by Design (“SbD”) and modern DevSecOps approaches. The NCSC and public sector organisations should require that consistent and high standards of security are baked in at every stage of the development process, and that software and AI capabilities be accredited for a category of use-cases once, with the MOD needing to undertake further accreditation work only to the extent that a given use-case is not captured by that category. This programme could be administered by the NCSC and support the deployment of AI capabilities across the UK public sector.
- The United States’ Federal Risk and Authorization Management Program (“FedRAMP”) illustrates this concept (even if, in practice, FedRAMP still comprises mostly manual processes). The UK therefore has a chance to lead in security accreditation, by both codifying the set of security controls firms should implement to meet elevated levels of assurance and making the assessment of their implementation an automatable process, ensuring speed of adoption.
- While initial accreditation would come with costs (which could be borne by the Government, to support UK AI firms), this approach would significantly reduce the subsequent time and money spent on accreditation, and accelerate the MOD’s deployment of UK AI firms’ capabilities. If aligned with international SbD standards, it would also support the interoperability and deployment of UK AI firms’ capabilities in the US and other allied defence markets.
Question 4: What can the Government do to help embed UK AI companies in defence supply chains, both domestically and internationally?
- In answering this question, we focus on the commercial obstacles standing in the way of UK AI companies seeking to join defence supply chains, and describe the economic and capability opportunity available to the UK if the MOD can address these.
In the market for AI, the MOD is not a monopsonist
- Intense competition for AI talent and capital, together with the scale of the technology sector, prevent the MOD from acting as a monopsonist. It has fewer levers in respect of potential AI suppliers than it does in respect of the suppliers of warships or fighter jets, for example.
- This requires it to conform to more standard commercial behaviours (e.g., considerably faster procurement cycles and more conventional commercial terms), to provide firms and their investors with a clearly-signalled pipeline of opportunities, and to make these opportunities available on a truly meritocratic basis to both incumbents and non-incumbents with no prior defence experience.
- As a guiding principle, the MOD should not require UK AI firms to fundamentally reengineer themselves or their products, as a precondition for working with it. While the MOD will always have unique requirements (e.g., in relation to security and deployability) – just as NHS trusts and other highly-regulated organisations do (e.g., related to clinical safety) – this should be the objective. The foundational digital infrastructure we described in the previous answer is critical in this regard, abstracting away the immense technical challenges that the MOD and UK AI firms otherwise need to grapple with on a use-case by use-case basis.
- More broadly, AI requires us to abandon the distinction that has developed in recent decades between the “defence-industrial base” and the wider economy, and to recognise the common AI challenges shared across the public and private sectors. Mid-twentieth-century Britons would not have viewed Marconi’s Wireless Telegraph & Signal Company or the General Electric Company primarily as defence contractors or civilian manufacturers. They were both, but, more importantly, leaders in their respective fields. That Marconi gave rise to both the BBC and the radar-jammers of D-Day would similarly not have been viewed as curious, in the way that a financial services AI firm or online advertising company might today raise eyebrows (indeed, in the latter case, has raised them) by providing AI capabilities to defence. Against the backdrop of war, the imperative was to recruit the best talent and the best firms, and to deploy the best capabilities available.
By establishing the right technical and commercial enablers, the MOD can support the growth of UK AI firms and catalyse new private investment in defence AI
- As noted, the UK currently lacks an effective marketplace through which the MOD can evaluate and then field AI capabilities at scale, and through which UK AI firms can “field-to-learn” across the armed forces and thereby achieve commercial success. Recent years have seen multiple instances of UK firms being established to pursue defence AI opportunities but then failing, and of international defence AI firms establishing but then shuttering UK offices. While the UK has a relatively strong ecosystem of AI firms, too few of them view the MOD as an attractive – or even viable – customer. Too few investors, in turn, view defence AI as a good destination for their capital.
- By establishing the technical and commercial enablers we describe, the MOD can reverse these feedback loops, shoring up its AI capability base and delivering a significant economic opportunity for the UK. That is, if the MOD can provide UK AI firms with a path to operational deployment and ongoing commercial success, and clearly signal investment plans of sufficient scale, those UK AI firms (and firms not yet in existence) will be more likely to attract private capital from the many investors who see AI’s defence potential but lack investable opportunities. Success will beget success. We are beginning to see this dynamic emerge in the US, in large part due to the work of the AWCFT.
Question 5: How can the UK Government ensure that it champions the UK AI sector in the context of Pillar 2 of the AUKUS Partnership?
- Constrained by the word-limit, we have chosen not to answer this question.
17th January 2024
[1] Palantir Technologies, Technical Annex: Announcing Palantir Government Web Services, 8 September 2023.
[2] For more on this subject, see Ignatius, D., How the algorithm tipped the balance in Ukraine, Washington Post, 19 December 2022.
[3] Underlining this, a former Chief of US Air Force contracting, describing US defence acquisition generally, commented in 2022 that “China is still, after gains we’ve made in the last five years or so, about five to six times faster than us in acquisition. In purchasing power parity they spend about one dollar to our twenty dollars to get to the same capability”; Lofgren, E., China’s weapons acquisition cycle 5-6x faster than the United States — “We are going to lose” if we don’t change, 6 July 2022.
[4] HM Government, Integrated Review Refresh 2023: Responding to a more contested and volatile world, March 2023, p.56.
[5] Ministry of Defence, Defence’s response to a more contested and volatile world, 18 July 2023, p.11.
[6] Ministry of Defence, Defence’s response to a more contested and volatile world, p.11.
[7] Ministry of Defence, Defence’s response to a more contested and volatile world, p.36.
[8] Ministry of Defence, Defence Artificial Intelligence Strategy, June 2022, p.24.
[9] Infrastructure and Projects Authority, Annual Report on Major Projects 2022-23, 2023, p.72.
[10] Ministry of Defence, Defence Artificial Intelligence Strategy, p.25.
[11] Ministry of Defence, Cloud Strategic Roadmap for Defence, 2 February 2023, p.30.
[12] House of Commons Defence Committee, The defence digital strategy, 26 January 2023, p.7.
[13] Ministry of Defence, Defence Artificial Intelligence Strategy, June 2022, p.7. Palantir knows these strengths first-hand. That our second-largest office is located in the UK, together with one of our largest research and development centres, reflects that we find much of the world’s best software engineering and AI talent here in the UK. Further, we find that much of Europe’s best AI talent wants to work and live here in the UK.
[14] U.S. Department of Defense, Memorandum from the Deputy Secretary of Defense on the Establishment of an Algorithmic Warfare Cross-Functional Team (Project Maven), 26 April 2017.