Mastering Man and Machine Combat Teams

Many of America’s top military leaders predict that mastering the interaction between humans and increasingly sophisticated artificial intelligence algorithms and autonomous machines will provide a significant advantage to future fighters. The Chief of Staff of the Air Force stated that “militaries who master human-machine interaction will have a decisive advantage in war.” Likewise, the Commander of Army Futures Command believes that the integration of humans and machines will lead to a dramatic evolution, and perhaps a revolution, in military operations.

Much of the discussion about human-machine teams has focused on using machines to replace people in battle. The Army’s emerging doctrine hopes to avoid trading blood for first contact by using autonomous vehicles for dangerous reconnaissance missions or breaching operations. Wargamers exploring the use of collaborative combat aircraft have often employed them as decoys, jammers, active emitters, and for other tasks that risk their loss in a highly contested environment. Likewise, the Navy’s pursuit of unmanned ships and aircraft often focuses on risky activities such as delivering supplies in a contested environment or conducting mine countermeasures. These concepts aim to remove people from the most dangerous parts of the battlefield by putting fearless and tireless machines in their place.

While reducing the risk that American military personnel face in combat is always a worthy goal, simply having robots perform the same tasks instead of humans will not revolutionize future wars. Instead, if military leaders hope to achieve dramatic improvements on the battlefield, human-machine teams will need to learn to effectively leverage the complementary skills of their members.

To achieve this goal, the military’s approach to human-machine interaction must change in three ways. First, efforts to train the human component of human-machine teams should focus on the instinctive brain rather than the rational brain. Trying to get AI algorithms to explain their reasoning leads to ineffective human-machine teams; instead, leveraging people’s innate ability to unconsciously identify patterns in behavior appears to produce excellent results. Second, the military must ensure that AI developers do not simply pick the low-hanging fruit to improve the accuracy of their models. Instead, developers should build products that provide complementary, non-duplicative skill sets within a human-machine team. Finally, we should avoid overhyping AI. Despite all the remarkable advances made by AI researchers, war is a fundamentally human activity, and only humans possess the vast amounts of tacit knowledge it requires. People remain the most important part of the human-machine team.

The Need for Teams

Because the workings of machine intelligence are very different from the fundamentals of biological intelligence, humans and machines bring different strengths and weaknesses to a combined human-machine team. When these differences are optimally combined, human-machine teams become more than the sum of their parts, surpassing both humans and machines working alone in completing assigned tasks.

Unfortunately, human instincts about how to interact with AI and autonomous machines in combined teams often lead them astray. These missteps result in human-machine teams performing worse on a task than an AI algorithm operating without human input, making the team less than the sum of its parts. If ineffective collaboration methods leave human-machine teams similarly ineffective at carrying out military missions, the Department of Defense could face a dilemma: its leaders would have to choose between allowing AI to act without human control or ceding a combat advantage to adversaries who lack the same moral reservations about the technology. China’s recent refusal to sign a joint declaration at the 2024 summit on Responsible Artificial Intelligence in the Military Domain, which called for humans to maintain control over military AI applications, vividly illustrates the risks this dilemma poses to the U.S. military. Overcoming these challenges and teaching people how to use the complementary skill sets available in human-machine teams is therefore essential to ensuring that human operators can effectively oversee and control outcomes when using AI-enhanced tools, and thus that AI is used ethically and responsibly in future military conflicts.

Understanding the differences in strengths between human and machine intelligence provides the basis for successfully integrating humans with intelligent machines. Machines often surpass people on tasks that require analyzing and remembering huge amounts of data, on repetitive tasks that demand a high degree of precision, or on tasks where superhuman reaction speed is useful. For example, AI optimized for computer strategy games dominates its human opponents by coordinating the activities of thousands of widely dispersed units to achieve a single strategic goal. In these games, AI can “march divided, fight united” on a truly enormous scale, beyond the human brain’s ability to comprehend or resist.

In contrast, humans often have an advantage over machine intelligence on tasks that require tacit knowledge and context, or where human senses and reasoning still retain superiority over sensors and algorithms. For example, AI can analyze images to determine the location of a battalion of enemy vehicles, but it cannot understand why those vehicles were positioned there or what task the commander most likely assigned them. Grand strategy is an even greater mystery to the machine: modern AI algorithms may calculate that an opponent can be defeated, but they will never understand which potential opponents should be fought and why. War is an inherently human activity, and warfare is therefore filled with tacit human knowledge and context that no single data set can ever fully capture.

Current Efforts

Many defense research initiatives on how to form effective human-machine teams have focused on understanding and improving human trust in machine intelligence by developing artificial intelligence algorithms that can explain the reasoning behind their results. As the Defense Advanced Research Projects Agency’s Explainable AI (XAI) program explains: “Advances in machine learning… promise to produce artificial intelligence systems that perceive, learn, decide, and act on their own. However, they will be unable to explain their decisions and actions to human users. This shortcoming is especially important for the Department of Defense, whose missions require the development of more intelligent, autonomous, and symbiotic systems. Explainable AI will be essential if users are to understand, trust, and effectively manage these AI partners.” The life-or-death nature of many military AI applications appears to reinforce the requirement that military personnel understand and trust the rationale behind any actions an AI application takes.

Unfortunately, experimental studies have repeatedly shown that adding explanations to AI outputs increases the likelihood that people will defer to the AI’s judgment without increasing the team’s accuracy. Two factors appear to drive this result. First, people tend to assume by default that others are telling the truth: if they detect no signs of deception, they tend to believe that their teammate is providing correct information. Because AI never exhibits typical human signs of deception, when an AI that has proven reliable in the past explains how it arrived at its answer, most people subconsciously assume that it is safe to accept that result or recommendation. Second, AI explanations only tell a person how the AI arrived at its decision; they provide no information about how the correct answer should be reached. If a person does not know how to determine the correct answer, the main effect of reading the AI’s explanations is to increase their confidence that the AI has approached the problem carefully. On the other hand, if a person already knows how to determine the correct answer, no explanation from the AI is needed, because the person already knows whether the answer is correct.

A Better Way

Instead of relying on explainable AI to create effective human-machine teams, the Department of Defense should consider two alternative approaches. One promising approach aims to help people develop effective mental models to guide their interactions with machine teammates. Effective mental models play a similar role in all-human teams: when you work with a teammate for a long time, you become intimately aware of their strengths and weaknesses and instinctively know how to collaborate with them. Repeated interactions with machine intelligence under realistic conditions can similarly create effective human-machine teams. Integrating AI prototypes into military exercises and training, with safety protocols such as minimum safe distances between dismounted humans and robotic vehicles or limits on the complexity of maneuvers allowed for AI-controlled equipment, can help the human element of human-machine teams learn to work with their machine “teammates.” Delaying this training until AI tools are more mature risks falling behind potential adversaries with more real-world experience, such as Russia, and forcing U.S. soldiers to catch up while under enemy fire.

Additionally, when the Department of Defense sets out to create an AI model that will assist humans rather than replace them, it needs to ensure that the AI’s skills complement those of its human teammates. Sometimes the simplest tasks to teach an AI are tasks that humans already perform well. For example, if an AI model is designed to identify improvised explosive devices, the easiest approach would be to train it to recognize images of devices that were not well camouflaged. However, the greatest value for a human-machine team may come from teaching the model to recognize improvised explosive devices that can only be detected through complex analysis of multiple sensor types. Even if this second AI model detects a much smaller percentage of devices overall than a model optimized for the simplest cases, it will be more useful to the team if the devices it does detect are ones that humans would otherwise miss. The Department of Defense should ensure that the metrics used to evaluate AI models measure the skills required by the combined human-machine team, rather than simply assessing the performance of an AI model in isolation.
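As an illustration of what a team-centric metric might look like, the short Python sketch below contrasts a model’s standalone recall with the share of threats it catches that a human screener missed. The function names, the example data, and the scenario of per-item human and model detections are all hypothetical, invented here for illustration rather than drawn from any Department of Defense evaluation.

```python
# Minimal sketch of a team-centric evaluation metric (illustrative names and data).
# Each list element marks whether the human screener, "model A" (tuned for easy cases),
# or "model B" (tuned for hard cases) detected a given threat in a hypothetical test set.

def recall(detections: list[bool]) -> float:
    """Standalone recall: share of all threats the detector finds."""
    return sum(detections) / len(detections)

def complementary_recall(model: list[bool], human: list[bool]) -> float:
    """Team-level metric: share of all threats the model finds that the human missed."""
    found_where_human_missed = [m for m, h in zip(model, human) if not h]
    return sum(found_where_human_missed) / len(model)

# Hypothetical results on ten threats: True means detected.
human   = [True, True, True, True, True, True, True, False, False, False]
model_a = [True, True, True, True, True, True, True, False, False, False]   # strong on easy cases
model_b = [False, False, False, True, True, False, False, True, True, True] # strong on hard cases

print(f"Model A recall {recall(model_a):.0%}, adds {complementary_recall(model_a, human):.0%} beyond the human")
print(f"Model B recall {recall(model_b):.0%}, adds {complementary_recall(model_b, human):.0%} beyond the human")
# Model A looks better in isolation (70% vs. 50% recall) but adds nothing the human
# did not already catch; model B's detections include all three threats the human missed.
```

Under these invented numbers, the model that scores higher in isolation contributes nothing new to the team, while the model tuned for the hard cases is the one worth fielding, which is exactly the distinction a standalone accuracy metric would hide.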

Finally, the Department of Defense should ensure that humans remain the dominant partner in any human-machine team. The power of human-machine teams stems from their ability to leverage the complementary skills of their members to achieve performance that exceeds what either humans or machines could accomplish alone. In this partnership, people will remain dominant because the knowledge and context they bring to the team provide the most value. War is an inherently human activity. An AI algorithm can learn to achieve goals optimally, but only humans will understand which goals are most important to achieve and why those goals matter.

Only humans understand why we wage war. Thus, humans will remain the most important part of any human-machine team in war.

James Ryseff is a senior technical policy analyst at RAND, a nonprofit, nonpartisan research organization.

Image: Tech. Sgt. Jordan Thompson