
 

AI in Weapons Systems Committee

Uncorrected oral evidence: Artificial intelligence in weapons systems

Thursday 21 September 2023

11.05 am

 


Members present: Lord Lisvane (The Chair); Lord Browne of Ladyton; Lord Clement-Jones; Lord Bishop of Coventry; Baroness Doocey; Lord Fairfax of Cameron; Lord Grocott; Lord Mitchell; Lord Sarfraz; Lord Triesman.

Evidence Session No. 14              Heard in Public              Questions 191 - 200

 

Witness

I: Professor Jinghan Zeng, Professor of China and International Studies, Lancaster University.

 

USE OF THE TRANSCRIPT

  1. This is an uncorrected transcript of evidence taken in public and webcast on www.parliamentlive.tv.
  2. Any public use of, or reference to, the contents should make clear that neither Members nor witnesses have had the opportunity to correct the record. If in doubt as to the propriety of using the transcript, please contact the Clerk of the Committee.
  3. Members and witnesses are asked to send corrections to the Clerk of the Committee within 14 days of receipt.


 

Examination of witness

Professor Jinghan Zeng.

Q191       The Chair: Good morning, Professor Zeng. Thank you very much indeed for joining us. I am so sorry that we are starting a little late; I am afraid it tends to be a frequent experience that we try to squeeze a quart into a pint pot. We will do our very best to make the most of the time that you are with us this morning, for the next three-quarters of an hour or so.

What is the state of development of AWS in China? I am using quite a wide definition, not only fully autonomous weapons but weapons supported and assisted by advanced AI.

Professor Jinghan Zeng: China has been heavily invested in the area of autonomous weapons systems. We know that Chinese defence companies and research institutions have been working on different aspects of AI and different kinds of applications in the domains of air, land, sea and space. Probably one of the most notable areas of progress made by China’s autonomous weapons systems is unmanned aerial vehicles, such as the armed drone, which is capable of carrying out civilian or even combat missions. China leads in this area, especially in medium-altitude long-endurance UAVs; for example, China’s Wing Loong 1 and Wing Loong 2 are quite popular at the moment. Its bestselling drone is Caihong 4, also called the Rainbow 4.

They all have their American competitors—competing products—but China claims that its versions are much faster than US ones and much more capable of carrying greater weapon payloads. They are also estimated to be much more economical than American competing products. For example, according to a report from the Center for Strategic and International Studies, China’s Caihong 4 and Wing Loong 2 cost between $1 million and $2 million, while the American versions, the Reaper and the Predator, cost around $16 million and $4 million respectively, so they are much more expensive than the Chinese ones. Recently, in November, China displayed its Wing Loong 10 unmanned aerial vehicle, which was introduced at an air show in the city of Zhuhai, showing rapid progress in this area. That is one example in the air domain.

It is important to mention that China is now a leading exporter of combat drones. Its buyers come from Africa, the Middle East and Asia, making China the leading exporter in this area. However, the US is still by far the largest and leads when it comes to civilian drones.

There are other examples of the development of autonomous weapons systems in China. For example, its land force has been piloting and deploying an unmanned ground combat vehicle able to autonomously plan routes according to the task and launch an attack. The Chinese navy is reported to have tested and developed an underwater drone, an unmanned surface vessel and an autonomous submarine. The Chinese rocket force also has an agenda in this area. It is exploring AI in remote sensing, targeting and decision support. It is trying to develop a smarter missile that is more intelligent in its capacity, and trying to incorporate higher levels of automation to facilitate operations.

It is very difficult to know the details of how those developed, due to the data access challenge, but the general trend is very clear: China has been trying to incorporate more autonomous capacity in its military equipment in all respects under the call to become a world-leading army. China perceives that the future will see a shift from informationised to “intelligentised” forms of war, so all parts of the army, in all respects, are answering this call and trying to incorporate automation into the way they work in order to deliver a modern 21st-century army. China is a major player in autonomous weapons systems, but it is still some way behind the US.

The Chair: You gave us a comparison: the Reaper drone. As deployed, that is not autonomous, although no doubt it could be made so. Of the systems that China is working on or has deployed, are we talking about fully autonomous systems? In other words, are we talking about systems which no longer have meaningful human control?

Professor Jinghan Zeng: At this stage, most of them are probably not fully autonomous weapons; there is still a heavy element of human control. The Chinese army still heavily emphasises human control. In the example of drones, humans, such as the pilots operating them, still play a role in that process. We do not have publicly accessible information on whether there have been new developments in this area.

Q192       Lord Mitchell: Most western countries—I think—have a very definitive guideline with respect to offensive autonomous weapons, in that human involvement is mandatory. I am interested to know whether China is guided by the same rules.

Professor Jinghan Zeng: To look at that, we have to bear in mind the wider context. The development of AI was not such a big game-changer a few years ago. It has become more visible nowadays. The rules that the Chinese army will use to regulate how those weapons are used or to what extent humans play a role are lagging behind.

So far, we do not have any publicly accessible information, regulation or law about how the Chinese will ban certain uses of autonomous weapons. We do not have that information, but I think there is a system in the Chinese army about how it will develop a weapon, do the procurement and test the quality of it. That is probably more relevant to your question about how it governs and tests its weapons systems, but, so far, we do not have information specifically about any law or regulation on autonomous weapons systems and how they are being regulated.

Q193       Lord Sarfraz:  Thank you, Professor, for your time this morning. A lot of the discussion in the UK and the US is around the ethics of AI and weapons systems. Is that discussion also happening in China? How do you think that that discussion aligns with our concerns around ethics here?

Professor Jinghan Zeng: Yes, definitely; it is a very important topic in China. China has repeatedly called on the international stage for a ban on lethal autonomous weapons systems; we have seen Chinese delegations repeatedly calling for that and making it very clear. That is a very different position from that of other major players, such as Russia and the US, which is more low-key in its call for a ban.

Having said that, there is a slight contradiction in that China has also been very actively encouraging the development of AI in its military innovation under a different kind of cause. I would even go further: there is probably a slight tension between its diplomatic and military agendas. After all, China has the capability and financial resources to develop AI in a way in which it will gain military advantage, so what its military actor desires might be slightly different from what its diplomatic agenda wants to go for. There is a slight contradiction; that is the first thing.

Secondly, when you look into the Chinese definition of autonomous weapons systems and you talk about ethics and so on, you see that they tend to have a narrower definition that is different from the way we talk about it in the UK or the US. We need to bear that in mind.

The answer to your question is yes, ethics are very important. You can hear different leading Chinese diplomats calling on the international stage for a ban on lethal autonomous weapons systems and talking about how important that is. In the wider picture, China is interested in becoming a global governance norm-maker and shaper. That applies in the AI area, where China feels it needs to have leadership. In order to have leadership, it wants to define the future norms—the way we use AI, including military use of AI. That is where ethics come in and make it important; they are being discussed. China definitely wants to lead the global conversation about ethics and safety in order to shape future AI standards and norms.

Lord Sarfraz: Are processes and institutions in place to oversee the ethical and legal element in the development of these systems?

Professor Jinghan Zeng: That is a very good question, but to understand it, there is a wider picture. As I mentioned, there is always a gap—as in any country—between regulation and the rapid development of AI. For the Chinese army it is a new thing, because it is under the new revolution—the shift from informationised war to “intelligentised” war. It is quite a new development; all the ethical and legal standards are developing and the governance system has been evolving, too. At the same time, the way China organises its army has changed quite significantly.

In that context, we do not know of a specific kind of law or regulation that will directly deal with autonomous weapons systems in China, but we have seen in public domains that the negative impact of AI’s rapid development has already drawn a lot of attention in Chinese society. The Chinese Government have released a series of regulations and rules about AI, but they remain in the civilian domain. To what extent they spill over into the military domain remains a matter of observation, but that is a trend in society.

The process is probably less about specifically autonomous weapons systems and more about the way China manages its weapons and the governance of its weapons. There are three parts of the process. First is the research and development process, which includes several steps, including a feasibility study, which is called a weapon concept design; concept formulation; engineering trial; prototype development; test and trial; and active service trials. That is the R&D process for weapons development in China, and it is very relevant to the autonomous weapons systems that we are looking at. Secondly, there is the procurement and test stage, where there are different kinds of regulations to regulate how procurement can be done and who will do it, and the bidding system of the company that comes to do that process. The last stage is deployment, testing the quality of the weapons and things like that.

Currently, the system is regulated by China’s existing legal framework and the internal regulation in the PLA. For export control, a different kind of regulation kicks in. For example, if you want to produce a weapon in China, such as a weapons system, you need to obtain a weapons and equipment scientific research and production licence, unless you have special permission from the State Council or Central Military Commission of China. There is a licence directory developed by the State Council and the Central Military Commission. Under that, the Chinese general armament department and the State Administration of Science, Technology and Industry for National Defense supervise the production of weapons at national level. In different provinces, departments of national scientific knowledge supervise production in their respective regions. The research institutions of a state-owned enterprise, or the Chinese Academy of Sciences, for example, are responsible for their own units.

There is a process for how to manage weapons, but it is probably not specifically designed for autonomous weapons systems. This might come to be a problem because autonomous weapons systems are now developing so fast. In the past three years, China has been reorganising the way it works, because it sees that war has changed to “intelligentised”. It has been updating with new regulations, such as the military equipment regulation and the military equipment ordering regulation in 2021, the regulation on military equipment testing in 2022 and the weapon and equipment quality management regulation, which is being drafted and is open for consultation this year.

All those things are evolving very rapidly. Another variable worth mentioning is China’s military-civil fusion strategy, in which it is trying to mobilise more social actors to support the army in becoming more intelligent. This will invite more social actors to come into the process and will make it more complicated. It is very much an evolving stage, and China is still working out the best way or process to ensure that it aligns.

Q194       Lord Fairfax of Cameron: Professor Zeng, earlier you said that, despite China’s massive investment at the moment in AI, it is still catching up. Is the fact that Taiwan has an almost total world monopoly on chip and semiconductor production likely to affect whether or when China invades Taiwan, or is that a red herring?

Professor Jinghan Zeng: We probably need to look at the matter within the wider picture. You can very much say that the majority of China’s military strategy and resources are devoted to the issue of the Taiwan district. It is very much about how we may be reunited with Taiwan, and most of the attention has been focused on that. That makes China very different from the US, which has, for example, lots of overseas bases and interests. It distributes its efforts across the board, but China does not. China has been concentrating on Taiwan and making sure that it has a military advantage in that area.

If your question is whether the chips industry will be technologically helpful to China, I think the answer is that it definitely will be. As to whether that will be the single variable affecting a Chinese decision for a military operation towards Taiwan, I would say that there is quite a wide range of variables, including China’s own domestic economic situation—Biden recently commented on that—its nationalism and the legitimacy of the Chinese Communist Party and the regime. When you compare the AI issue with all those big items, it probably would not be a decisive factor. It is a military decision on Taiwan, so if there is a military operation towards Taiwan, it probably would not be singly affected by the fact that Taiwan has advanced chips.

Lord Fairfax of Cameron: Thank you. That is a very interesting answer.

Q195       Lord Grocott: My question is about the definition of AWS, about which we have heard a fair bit of evidence in earlier sessions. I think you said that a narrow definition is taken in China. I would like you to comment on that, as well as on the importance or otherwise of having an internationally agreed definition, which seems to me quite important if there is to be any talk about control of AWS or internationally agreed standards about what should or should not be—or is or is not—acceptable. As far as China is concerned, to what extent is that important in getting any kind of international agreement on issues of that sort?

Professor Jinghan Zeng: The definition issue is very important because often we talk about very different things in the same conversation. First, there are about 12 different definitions given by states or international organisations. China is not unique: like many other countries, it does not have an official, widely accepted definition of autonomous weapons systems. My research has shown that China has been actively studying what other countries have been doing, especially American terminology, and trying to learn its own lessons.

Currently, China’s definition is between human control and machine autonomy. Indeed, in Chinese writing, they do not often use the term “autonomous weapons systems”; the term they often use is “intelligentised weapon” or “AI weapon”. The term “intelligentised weapon”—Zhìnéng huà wǔqì in Chinese—implies a high level of smartness and intelligence in the weapon that is selecting and engaging targets. Even if some function of a weapons system becomes unmanned, it does not necessarily mean that it is intelligent, smart or has a greater level of autonomy. That is the context of the Chinese discussion; it is less about autonomous weapons and more about intelligent weapons.

There are two points regarding the definition. There is no consistent Chinese definition. You could say that the Chinese definition can evolve a bit, which is natural, given the rapid development of the technology and the different kinds of circumstances. There is no consistent Chinese definition, but I would go even further and argue that there is no coherent Chinese definition. The more I look into it, the more I find that there is quite a different agenda among different actors in China. For example, we often talk about what China’s definition is, which implies that there is one single coherent definition, but there is not. Different actors probably have different kinds of preferences and definitions, and that matters a lot to them because of their diplomatic, corporate or military interests. That is another topic. We have different actors.

Even within the People’s Liberation Army, there is still debate about the extent to which we should define AI and the involvement of AI in the process. Some people emphasise that we should definitely prioritise human control, and others say that it is important to gain strategic advantage by eliminating human elements. Some people talk about humans and machines playing very different roles, and that has influenced China’s definition. That is the wider context. A useful reference for China’s definition is the People’s Liberation Army 2011 official dictionary, which gave a definition of an AI weapon as “a weapon that utilizes AI to automatically pursue, distinguish, and destroy enemy targets; often composed of information collection and management systems, knowledge base systems, assistance to decision systems, mission implementation systems”.

However, in 2018, a Chinese position paper gave a narrow definition of autonomous weapons systems, talking about five essential features, which goes to your question about there being a narrow definition. The first is lethality: sufficient payload for it to be lethal; the second is autonomy; the third is impossibility of termination; the fourth is indiscriminate effect; and the fifth is evolution. The issue is that the bar has been set too high: so far, no such weapon exists, so what are we going to ban? We do not know of any weapons so far that meet the third criterion, about the impossibility of termination, and the fifth point, on evolution, has raised considerable regulatory challenges.

Chinese diplomats hold a very different view. They do not think it is narrow; they think it is necessary, but that is another topic. Again, I see slight tensions in the diplomatic agenda of China leading the area of shaping the norms and the more military agenda of trying to get less regulation in order to develop a more advanced army. There is some slight tension.

The second part of your question was about an internationally agreed definition, which is where that tension comes in. Quite a lot of Chinese diplomats would probably agree with you that we need to have an internationally agreed definition and we need some action. China would like to play a very active role there, but the extent to which that represents the entirety of China’s view needs more observation.

Lord Grocott: Presumably, the diplomatic perspective from China—you say there are different perspectives from different parts—would be towards the desirability of trying to achieve some sort of international definition. Am I right to assume that?

Professor Jinghan Zeng: Some actors in China would want that to be achieved, but others would probably want more freedom to do their own thing, and less regulation in that area. They would believe that such regulation or internationally agreed definition would not be helpful to a major player such as China. They think China is like Russia and the US—it has the financial resources and the military capacity—and that having such internationally agreed restrictions would not be to China’s benefit. You will always have people holding that view, but we do not yet know to what extent that view has the upper hand at this stage. We see the tension there, which is why we say that the Chinese position on this is ambiguous.

Q196       The Lord Bishop of Coventry: I would like to follow up on the point about evolution that you mentioned. I guess that what China is saying there is that, as weapons develop and become adaptable, meaningful human control needs to remain. I am very interested in the distinction you make between the diplomatic and the military, which I can see very well. Do you see pressure from the military perspective on that aspiration and commitment to maintain control, even as these weapons systems develop, become more sophisticated and increase in their capacity for adaptation in motion, as it were?

Professor Jinghan Zeng: The point I am making is that China is not a unitary actor. When you look into issues such as this, you will find different agendas. Looking specifically at the Chinese army, it is quite a big organisation, with different actors involved and there are several factors. There may be an argument for them to eliminate human control, but, as I mentioned, there is a debate. Some people have been arguing that only by eliminating as much human control as possible will China be able to get a strategic advantage when competing with the US. Some argue that the human factor has been making things less efficient, and that AI will have higher accuracy, more precision and make much better weapons, so we should do that as much as possible.

However, I would argue that there are other, probably more important and fundamental, factors in the culture of the Chinese army, which we should bear in mind and which suggest we should worry less about that going further. One is probably the way China views how armies should be organised or developed. The culture is that there is a hierarchy and a culture of command, which means that it always emphasises control. You need to make sure that you have full control. You need to make sure that the army listens to the regime and does not do something that develops its own independence. That agenda is always there and is very important.

When the Chinese version of ChatGPT came out, the first version was not very well received in China. If you asked a question about its political attitude or if it believed in the kind of socialist values that China wants to champion, it did not give you answers that the Chinese Government would like. When you bring that kind of system into the military, how are you going to make sure that the party has full control of it?

I would also say that there is some tension about ideological and political education. When you select a military officer and soldier in China, they often prioritise two things. One is that you have to be “red”, and the second is that you have to be a specialised expert. “Red” means loyal to the Communist Party and socialist values. You also have to be specialised. As AI develops, you need more and more specialisation, so when those two things compete, which are you going to prioritise? Will it be political character or your AI skills? I think that is going to be quite a challenge for the Chinese army. When the system becomes more automatic, you will need more specialised expertise in military operation. The speed of operation has become faster than ever. How the human factor kicks in is quite worrying to traditional Chinese culture and the way that they control the army.

There is a trend in how AI can complement and replace human operational decision-making. Then the question arises of how the decision-making will prioritise the values of the Chinese Communist Party and make sure that the party controls the army. Traditionally, a human soldier has extensive political and ideological education in Marxism, the Communist Party and loyalty. When a machine comes in, how can you make sure that you have sufficient loyalty? Those are questions that have not been sorted out yet. That is why there is discussion, and we see China talking about how AI can be used to improve ideological education and potentially to monitor ideological trends in the way soldiers use websites and things like that. I think there are some other issues regarding that.

That is probably a long answer to your question. There is an element where China has the motivation to eliminate human control to get a competitive advantage. You can also see the kind of structure or barrier it faces because of the way the Chinese army has been organised, the traditional culture and its relationship with the regime.

Q197       Lord Browne of Ladyton: Thank you very much indeed, Professor Zeng. The clerks probably gave you notice of this question. I will read it to you, but please do not answer it until I ask you something else. In one sentence the question is: how does China’s governance of autonomous weapons systems function, and how is it evolving?

From my observation of what has been going on in the last 30 minutes, you have answered that question. I think the way you answered it, while it will not stun any of us, shows a level of diversity and debate being tolerated within what you call China’s legal framework, which we understand, that will probably surprise some of us. In the absence of the sort of knowledge you have shared with us, we were expecting something a bit more directed and a bit more disciplined.

Personally, I am pleased to see more diverse debate going on. What I would quite like to do with the few minutes that I have with you is to ask you to move away from the recognition that this is an evolving environment in which the machine’s development has a say and a sense of generating the imperative for movement, rather than human beings doing it because a machine’s capability demands response. Where do we think this is going to end up in the foreseeable future?

I refer you to the Carnegie Endowment for International Peace report entitled China’s AI Regulations and How They Get Made, published in July 2023. You will be well aware of it and will know it much better than I do. The authors suggested that China’s AI governance was “approaching a turning point”. I am going to ask whether you agree with that and, if not, where do you think it actually is and where it will be in the next few years. They suggest that, after several years of the process of debate you described to us quite comprehensively, China is moving towards what they call “a comprehensive national AI law”.

Can I specifically ask you whether you see that as the direction of travel and whether we can, as they encourage us to do, look at the way in which regulation governing the internet was dealt with in China and to see that as a model that might be followed? I think that is probably the best we can ask you, because taking into account all that you have told us it would be unfair to ask you where you think it is all going to come out. In general terms, do you think that is right? Can we look in the foreseeable future to some more certainty and growing rigidity in what is allowed to happen in China?

Professor Jinghan Zeng: It is probably less certain than you would expect. The reason is that, currently, AI has been developing so rapidly and the regulation lags behind. A different part of the debate has been about to what extent the regulation should come in.

On one side, when you talk about comprehensive law and regulations, if there are too many regulations, to what extent will that harm AI innovation and affect China’s national competitiveness? I think there is a similar debate in the US as well. It does not want to have too many regulations and laws, otherwise it will affect the entire industry. On the other side, the way that China manages society is always by building regulation and trying to make sure that it regulates according to the way the Government want, or to make sure that it does not, to use its own words, become disordered.

I think those two things will always be two agendas. The two different interests will always be a point of tension. For example, in the debate about the future regulation of the Chinese economy, we have one side saying that we should have less regulation of the private economy, otherwise there will no longer be innovation and economic growth. The other side emphasises that we still need that.

I am less sure about which one might dominate the precise agenda, but I agree that in the past few years China has probably seen a turning point. In the past 10 to 20 years, some people have joked that the way China governs AI will be a “naked run”, which means there is no law and no regulation, just go ahead with it. They think it is fantastic and a good thing because technology has always been perceived as very good in China and as a way to become modernised, and things like that. You will always have that view. However, in the past few years I think the negative impact has become clearer because of the violation of privacy. Different kinds of social problems come out. The state has seen the abuse of data by private actors. Quite a range of laws and regulations have been released in the past few years, not only on AI but on technology in general.

To answer your question, it remains to be seen how the arguments on both sides of the debate might evolve in the future. I agree that we are probably reaching a turning point. The view is very different from five or 10 years ago, when nobody really cared about regulation. Now, they are very serious about it and a serious law has been announced.

Lord Browne of Ladyton: In this country we live in the same environment, to the extent that the technology is moving at the same pace. It is moving at the same pace all over the world. This is my observation, and not for anybody other than me. My observation is that the movement of the technology has significantly outstripped in pace the movement of the regulation. To that degree we are similar.

What we seem to have done, at least at governmental level, is to come to the conclusion that international humanitarian law, and the laws of conflict, are sufficient to be the framework within which we can put the regulation of this technology, because, after all, it is just another technology and we have always been able to do that. Is there any likelihood that China would find itself in that sort of place?

Professor Jinghan Zeng: My general observation is that, using the UK as a comparison, we are right to say that all countries have seen the rapid development of technology. I would say there is a much higher level of civil awareness about privacy in this country than in China, especially in the past five or 10 years. Robin Li, the CEO of Baidu, publicly admitted that in China people tend to trade privacy for convenience, and the more they gain in convenience, the less they worry about their own privacy, but I think now they are starting to be aware of it.

In the UK and Germany, there was a kind of long tradition of civil awareness of privacy, and civil rights to ensure data protection. GDPR has gone way ahead, when China was just starting to talk about its own laws. The European approach will probably be more regulation. I would not say it will be more regulation ahead of the technology, but in comparison with China’s case there is definitely more regulation already on data and things like that.

Will this be just another technology? At least now, China is talking about AI as a matter of national security. It is not treating AI as just another technology. It is national security, and it sits firmly within China's security discourse, which encompasses regime security, economic security and military security. That discourse makes AI different from just another technology, such as renewable energy or other things. That is probably something distinctive in its political discourse.

Q198       Lord Sarfraz: Professor, I note that you are an expert in belt and road. Do you think that China will take advantage of its AI expertise and capability to export and deliver AI solutions, as it relates to defence in particular, to belt and road countries and other countries where it seeks to influence, as a tool of engagement?

Professor Jinghan Zeng: We can look at it on two levels. The first level is belt and road. So far, China's investment in belt and road has dropped significantly for lots of reasons, including the pandemic and China's own economic situation. The way we understand belt and road, or what belt and road is, is very different from what belt and road was 10 years ago. The nature of the BRI was very different. In the past, it was very much about traditional infrastructure and things like that. Now, it is much more about digital connectivity.

My research on belt and road has always made this point; it is just a slogan. It is not that helpful to use it as a way of developing a specific policy. I think your question is more about the extent to which China's AI expertise will strengthen or help its strategic advantage in the countries that belt and road covers. In that area, you can see a considerable amount that China can do.

For example, there is the export of Chinese drones to a wide range of countries considered to be belt and road countries. AI helps in that area. Chinese drones are much cheaper than American ones. Countries such as Saudi Arabia, Myanmar, Iraq, Egypt, the United Arab Emirates, Pakistan and Serbia are all current buyers, and China has been operating drone factories in some of those countries. In that regard, I think China wants to become more global. More precisely, the Chinese arms industry wants to become a global leader in weapons exports. That serves its own domestic agenda, so what China has been doing in that area of AI is always helpful there. What AI can contribute more widely to the Chinese technology traded with belt and road countries definitely feeds China's agenda to become an AI superpower.

By creating this technology, China will be able to advance its diplomatic agenda with countries in Asia, Africa and the Middle East. It is not necessarily always a bad thing. If you really look into the conditions in a lot of countries in Africa or the Middle East, they will benefit significantly from having much cheaper technology to achieve things that otherwise they would not be able to do; for example, a lot of the infrastructure that China has built in Africa, such as hospitals and high-speed railways, is beneficial for the global south.

The Chair: The clock is rather against us. At this point I am going to move to Lord Clement-Jones.

Q199       Lord Clement-Jones: I have a very brief question, Professor. We want to probe the kinds of human resource and skills that are available to China in the development of intelligent weaponry. You have talked about social actors in developing AI weapons. By that, I assume you mean the corporate sector and so on. Could you describe how close that relationship is? Is there a real corporate and government partnership in terms of working with the PLA and so on?

As a secondary but very important question, what kind of ethical qualms do you think those in the corporate sector would have about the development of those weapons? It is rather like, for instance, the Google employees who had their doubts about Project Maven. Would the equivalent be the case in China?

Professor Jinghan Zeng: Thank you for the question. To look at that question, you have to look at China's military-civil fusion strategy, which tries to mobilise social capital to develop military advantage for China by encouraging more collaboration between the People's Liberation Army and Chinese universities, research institutions, state-owned enterprises and different kinds of corporate and social actors.

That is very important in the context that, in the past, the Chinese army did not trust the private sector very much. There are actually different kinds of legal issues, or even legal risks, for private actors taking on a military contract. I do not think that has been fully solved yet. In a way, they want to work more closely with the private or corporate sector on different things, but the culture is still that they do not much trust the private sector. How that might evolve under the military-civil fusion strategy will be interesting to observe.

What will definitely happen is that the military-civil fusion strategy will invite in more corporate and social actors that will have their own agendas. How the governance of that might work still needs to be studied and observed. I would say that it is going to make things more complicated. The People's Liberation Army has very strict rules and regulations on how weapons can be produced and how information can be shared, but now, as more actors come in, there is real concern over national security and information leaking, as well as the issue you mentioned about regulation—to what extent they are being regulated, and the risks.

All of that is new. The impact of the military-civil fusion strategy will take years to become obvious. To what extent corporate actors will be regulated by the relevant legal and ethical standards is a good question that needs to be asked. If you compare that with Google—

Lord Clement-Jones: We are thinking about the individual approach to ethics and how they feel about developing these weapons within the corporate sector, as well as academia of course. Would they have doubts?

Professor Jinghan Zeng: Do you mean will the Chinese army have doubts—

Lord Clement-Jones: No, not the Chinese army. The people who would be working with them in the corporate sector, in academia and so on as part of your military-civil fusion, so to speak.

Professor Jinghan Zeng: I would say that the doubts will be less obvious, or probably less visible, than we have seen in the US. In the US, tech giants such as Microsoft and Google, for example, have refused to share their facial recognition technology with the American police. Apple refused to co-operate with the FBI over certain things. That kind of thing is not happening in China, because China has a very different state-business relationship. You do not really see that kind of open resistance to the government agenda. That is not how it works in China's political environment.

The Chair: Professor, our last question comes from Lord Fairfax, who assures me that it will take you only 30 seconds to answer. That is your challenge.

Q200       Lord Fairfax of Cameron: It will take three seconds to answer, more accurately. The forthcoming global AI safety summit is going to take place in the UK on 1 and 2 November. Do you expect that China will attend?

Professor Jinghan Zeng: I think China definitely should be invited. That is my view, for two reasons. It is a leading AI player. Without China, you do not really have much to talk about. Also, if you look at the global AI race narrative, if China is not invited, is not regulated and is not on board, to what extent do you expect the US to play along? It will say, “If China is not regulated, and if China is not abiding by this rule, why should we do that? We are losing our competitive advantage”. If you follow that logic, and you really want meaningful international agreements or progress in that area, then whether you like it or not, China needs to be on board.

The Chair: Let us see how that develops. Thank you very much indeed, Professor. We really appreciate the time that you have given us this morning and the expertise that you have shared with us. Thank you again.