China's Proposal at UN to Regulate Military AI, Intelligent Decision Making System, and Intelligent Combat System
Issue 12, 20 Dec 2021
Some exciting changes are in the pipeline starting January 2022. My colleagues Suyash Desai and Manoj Kewalramani and I will together curate a weekly newsletter on China, covering interesting updates from politics, the military, society, and other sectors. China Tech Dispatch will continue to focus on military tech. Stay tuned!
I. Military and Warfare
China's Proposal at UN to Regulate Military AI
China recently submitted a position paper on regulating the military applications of artificial intelligence (AI) to the Sixth Review Conference of the United Nations (UN) Convention on Certain Conventional Weapons. The overall tone of the paper is that countries should debate and discuss the weaponization of AI - all while Beijing continues to find new ways to apply AI for military purposes.
According to Global Times, this is the first time China has proposed to regulate the military applications of AI at the UN. However, China had submitted a position paper at the 2018 Convention on Certain Conventional Weapons (CCW) Group of Governmental Experts conference, in which Beijing supported discussion on applications of AI in Lethal Autonomous Weapons Systems (LAWS).
First and foremost, this is a formal acknowledgment of AI as a technology capable of transforming the international security paradigm. Many countries, including the US and China, are trying to leverage the advantages of AI in military applications. According to some reports, China might even be ahead of the US in integrating AI applications for military purposes.
With this proposal, Beijing wants to project itself as a responsible player. China's position paper contains all the keywords - "ethics", "governance", "world peace and development", "multilateralism", "openness and inclusiveness" - that states use to portray themselves as responsible actors. On technological security, China emphasizes the centrality of human intervention and data security, along with a restriction on the military use of AI data. However, for a dual-use technology like AI, there is no clear way to distinguish where data is being used. For example, civilian data can be used to train an AI model, and that trained model can then be used for military purposes.
By initiating discussion on military applications of AI, China has taken the lead in shaping the debate on the security implications of a critical technology. China sees itself as an important norm-setting actor, and this attempt reflects that. However, AI is central to the PLA's own vision of future warfare, and hence any talk of its regulation by Beijing should be taken with a grain of salt. On strategic security, China calls on major countries to be "prudent and responsible" in developing and applying AI in the military and to refrain from seeking absolute military advantage. This is big talk from a country whose own military places AI development at the core of winning future wars. Chinese scholars and PLA officers consider AI a starting point for building the strategic offensive and defensive capabilities necessary to win future wars. Absolute military advantage through AI is exactly what the PLA wants to achieve.
Next, two terms used in the proposal echo the language of nuclear weapons. The first is proliferation. China's arms control ambassador to the United Nations, Li Song, said the oversight proposed by China is necessary to cut the risk of military AI proliferation. The term proliferation is mostly used in reference to nuclear weapons and weapons of mass destruction, which could mean China views AI as a technology capable of mass destruction. Hence, if this discussion goes forward, a few countries may dictate global norms on the weaponization of AI, as in the case of nuclear weapons. The second term echoes no first use. Beijing urged in the position paper that countries remember that military applications of AI shall never be used as a tool to start a war or pursue hegemony. These words sound similar to the No First Use (NFU) doctrine. Most current applications of AI are in decision making, battlefield simulations, increasing precision, reducing reaction time, etc. - applications not typically used to start a war. China's 2018 position paper at the CCW does attempt to define applications of AI for LAWS, and in some of those cases NFU might be applicable - for example, if LAWS evolve through interaction with the environment and learn autonomously to expand their functions and capabilities beyond human expectations (Point 3 in the position paper). Still, the nuances of defining the use of AI in military applications are far more complex, and hence a promise of NFU might not be credible.
Intelligent Decision-Making System
The article by Mao Weihao from the Army Command College of the PLA (中国人民解放军陆军指挥学院) in PLA Daily outlines the characteristics of an intelligent decision-making system needed for intelligent warfare.
But first, the author highlights key differences between conventional warfare and intelligent warfare:
The four factors in intelligent warfare can become key factors in achieving victory in war (winning mechanisms). First, situational awareness refers to the degree of awareness of the opponent's combat operations while limiting the exposure of one's own information. Hence, the secret to victory is to reduce exposure of one's own information and gain more information on the opponent. Second, enemy situation analysis is necessary to overcome shortcomings in one's own system, i.e. building a stronger defense. Third, the speed of information flow is an advantage: the faster the flow of information, the more information gathered and the better the decision making. Fourth, superior cluster control abilities without descending into battle-time chaos are important in determining victory. An orderly side has a better chance of winning the war. Even amid interruptions, command disorders, loss of combat units, etc., the combat cluster should not fall into chaos. This is especially important in the era of intelligent warfare, since combat clusters will likely be unmanned.
Autonomous decision-making forms the core of intelligent warfare and can be based on a recognition-primed decision (RPD) mechanism. Steps to build an autonomous decision-making mechanism based on RPD:
Activate the action plan
Control combat operations
Evaluate the results and run in a loop
Recognition-primed decision (RPD): RPD is a model for rapid decision making. It relies on the experience of the decision-maker along with environmental factors of the situation to arrive at the first viable option as a possible course of action, especially in complex situations. More here (Chapter 6)
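The RPD-style loop described above can be sketched in a few lines. This is purely illustrative - the knowledge base, situation labels, and fallback plan are my own hypothetical stand-ins, not anything from the PLA Daily article:

```python
# Toy sketch of an RPD-style decision loop: match the observed situation
# against a knowledge base of previously seen patterns, adopt the first
# viable plan, act on it, then evaluate and repeat. All names here
# (KNOWLEDGE_BASE, the situation strings) are illustrative assumptions.

KNOWLEDGE_BASE = {
    "ambush": "withdraw_and_flank",
    "frontal_assault": "hold_and_suppress",
    "recon_probe": "observe_and_report",
}

def recognize(situation):
    """Pattern-match the situation to a known case (the recognition step)."""
    return KNOWLEDGE_BASE.get(situation)

def rpd_loop(situations):
    """Run the activate -> control -> evaluate loop over incoming situations."""
    log = []
    for s in situations:
        plan = recognize(s)
        if plan is None:
            # No recognized pattern: fall back to a default posture.
            plan = "default_defensive_posture"
        # Activate the action plan and control the operation (logged here);
        # a real system would evaluate results and update KNOWLEDGE_BASE.
        log.append(plan)
    return log

print(rpd_loop(["ambush", "unknown_drone_swarm"]))
```

Note how the loop degrades gracefully when recognition fails - which is exactly why the article stresses a large knowledge database below.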
Four core parts of an intelligent decision-making system:
Data: Building a database is of core importance, since a better database leads to better quality of recognition and response.
Algorithms: Recognition and response depend on pattern-matching algorithms. The better the algorithm, the better the recognition and response.
Communications: Communication is important to conduct integrated operations. The combined space-based, air-based, land-based and other communication platforms form a network-based self-learning system.
Sensors: Sensors help perceive the real-time situation and feed into the data.
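The way these four parts fit together can be sketched as a simple composition. The class names, the sensor reading, and the matching rule are all hypothetical placeholders of mine, just to show the data flow the article describes:

```python
# Illustrative wiring of the four parts: sensors feed data, an algorithm
# performs recognition, the database accumulates records, and the comms
# network relays the result. Every name here is an assumption for the sketch.

class Sensor:
    """Perceives the real-time situation and feeds it into the data layer."""
    def read(self):
        return {"contact": "radar_blip", "range_km": 12}

class Database:
    """Stores accumulated situation records used for recognition."""
    def __init__(self):
        self.records = []
    def store(self, record):
        self.records.append(record)

class Matcher:
    """Pattern-matching algorithm: recognition quality depends on it."""
    def recognize(self, record, db):
        return "known" if record in db.records else "novel"

class CommsNetwork:
    """Stand-in for the combined space-, air-, and land-based network."""
    def broadcast(self, message):
        return f"relayed:{message}"

def decision_cycle(sensor, db, matcher, comms):
    record = sensor.read()                  # Sensors
    label = matcher.recognize(record, db)   # Algorithms
    db.store(record)                        # Data
    return comms.broadcast(label)           # Communications
```

Running the cycle twice shows the self-learning flavor: the first pass sees a novel situation, the second recognizes it because the database grew.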
Using deep learning, this intelligent combat system can improve itself with each iteration. In addition to the decision-making process of an intelligent system, two more conditions need to be met:
Building a knowledge database: The RPD decision-making model is mainly based on individual experience and professional knowledge and it doesn't work for "outsiders." Therefore, the RPD decision model is generally suitable for experienced persons or experts in a certain field. This means that for the intelligent combat system on the battlefield to make autonomous decisions, a huge "knowledge database" is necessary.
Update database after the war: The database should be updated after the war with new knowledge gained and scenarios encountered during the war. This will improve the decision-making ability of the autonomous system.
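The post-war update condition amounts to merging newly encountered scenarios into the knowledge database. A minimal sketch, with an assumed merge rule (new scenarios are added, existing lessons are kept):

```python
# Sketch of the post-war database update: fold scenarios encountered
# during operations back into the knowledge database so that future
# recognition improves. The dict structure and the keep-existing merge
# policy are illustrative assumptions, not the article's specification.

knowledge_db = {"ambush": "withdraw_and_flank"}

def update_after_operation(db, encountered):
    """Add newly encountered (situation, response) pairs; keep existing entries."""
    for situation, response in encountered.items():
        db.setdefault(situation, response)
    return db

new_lessons = {"drone_swarm": "jam_and_disperse", "ambush": "charge_blindly"}
update_after_operation(knowledge_db, new_lessons)
# The new scenario is learned; the established lesson is not overwritten.
```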
Evolution of Intelligent Combat Systems for Smart Victory
The article by Yuan Yi and Zhu Feng from the War Research Institute of the PLA Academy of Military Sciences (中国人民解放军军事科学研究院) in PLA Daily is about the evolution of intelligent combat systems and the role of AI in this evolution. The authors state, "the new generation of artificial intelligence technology can endow the combat system with the ability to learn and evolve itself, making the nature of the combat system move from an inorganic system to an organic system." (新一代人工智能技术能够赋予作战体系自学习、自进化的能力，使得作战体系性质从无机系统向有机系统迈进。)
Note the importance given to military applications of AI here. This is important because China recently presented a position paper at the UN urging regulation of the military applications of AI. And as I have noted above, China's position paper is more about projecting itself as a responsible actor than actually regulating the use of AI.
The authors propose that the evolution of intelligent combat systems will follow some basic evolutionary models in the future.
Experience sharing, group evolution model (经验共享、群体进化模式): Using edge computing and cloud computing, intelligent combat systems can share their knowledge and learning experiences with others by uploading information to the "combat cloud." This way, the group can evolve as a whole based on knowledge gained by any one member.
Digital twin, parallel evolution model (数字孪生、并行进化模式): This technology can be used to simulate the actual combat system and evolve it through multiple iterations over time. The results of this evolved simulation can then be mapped onto the actual combat system in parallel.
Self-play, adversarial evolution model (左右互搏、对抗进化模式): Since actual combat data is difficult to obtain and there are very few practical opportunities for war in peacetime, deep learning and reinforcement learning can be used to generate high-fidelity virtual opponents and repeatedly confront them in a test environment. The authors mention Generative Adversarial Networks (GANs) as a way to credibly expand a small dataset. They also suggest reinforcement learning can be used to conduct virtual confrontations based on basic operational rules, automatically generate operational experience, self-innovate and upgrade tactics, and promote the evolution of the operational system.
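The core of the self-play idea can be shown with a deliberately simple toy: a virtual opponent is sampled from a small set of observed moves (a crude stand-in for the GAN-style data expansion the authors mention), and the agent learns a counter-strategy from repeated simulated confrontations. The game, move set, and counter table are all my own illustrative assumptions:

```python
import random
from collections import Counter

# Toy sketch of adversarial self-play: expand a small sample of observed
# opponent moves into many simulated moves, then learn a counter-policy
# from the simulation. BEATS maps each move to the move that defeats it;
# the whole rock-paper-scissors setup is an illustrative assumption.

BEATS = {"rock": "paper", "paper": "scissors", "scissors": "rock"}

def virtual_opponent(seed_moves, n, rng):
    """Expand a small set of observed moves into n simulated moves."""
    return [rng.choice(seed_moves) for _ in range(n)]

def train_counter(opponent_moves):
    """Find the opponent's most common move and play its counter."""
    most_common = Counter(opponent_moves).most_common(1)[0][0]
    return BEATS[most_common]

rng = random.Random(0)  # fixed seed so the simulation is repeatable
simulated = virtual_opponent(["rock", "rock", "scissors"], 1000, rng)
policy = train_counter(simulated)  # counters the opponent's dominant move
```

Real systems would replace the frequency count with reinforcement learning and the sampler with a generative model, but the loop - simulate opponent, confront, update tactics - is the same shape the authors describe.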
The authors make the case that intelligent combat systems will subvert the concept of warfare. The article mentions, "future warfare will be a confrontation between systems and systems, but the main body of confrontation will be transformed from a traditional combat system with a relatively solid organizational structure to an intelligent combat system that can learn, grow, and evolve itself. This major change will have a profound impact on the winning mechanism of future wars, battlefield dominance, combat system construction, and military training models." (未来战争仍然是体系与体系的对抗，但对抗的主体由组织结构相对固化的传统作战体系，转变为可以自学习、自成长、自进化的智能化作战体系。这一重大转变对未来战争制胜机理、战场制权、作战体系构建、军事训练模式等方面，均将产生深刻影响。)
Victory mechanism in future wars: The party with a higher degree of intelligence and evolution in its combat system will be able to learn quickly in fierce confrontations. As the war progresses, the overall capability gap caused by the difference in the speed of evolution of the two sides' combat systems will grow larger and larger. Whichever system evolves faster will win.
Battlefield dominance by maintaining the speed and quality of the combat system's evolution: In future wars, intellectual power will be the core controlling power. The key to seizing it is to maintain the freedom to let one's own combat system evolve while hindering the evolution of the enemy's. The speed and quality of evolution will determine which combat system dominates the battlefield.
Focus on both initial capabilities and evolutionary capabilities: Traditional combat systems emphasize the initial capabilities so as to overwhelm the opponent from the beginning. However, the intelligent combat systems focus on evolution - the ability to self-learn and evolve according to the situation.
Man-machine joint military training model: Intelligent combat systems will train together with people so that man and machine can evolve together.
II. Additional Reading
Addition of Certain Entities to the Entity List and Revision of an Entry on the Entity List, Federal Register (US Government)
Megha Pardhi is a Research Analyst at The Takshashila Institution. She tweets at @pardhimegha21.
Before you go:
If solving India’s grand public policy challenges interests you, get equipped by signing up for Takshashila’s 12-week Graduate Certificate Programmes.
Find all the details you need to know here: https://school.takshashila.org.in/gcpp
For any queries, you can contact: firstname.lastname@example.org / email@example.com