We need a two-tier approach to control military AI
This article was originally published on Stop Wapenhandel's blog.
Artificial Intelligence (AI) is quickly becoming central to newly produced weapon systems, such as fighter aircraft and sensors. Some reported examples from the first week of June alone:
- Amnesty International reports the use of Elbit artificial intelligence technology by the Israeli military in Gaza, used to rapidly catalogue targets based on surveillance data provided by satellites. The technology is combat proven and available for export, according to the Dutch Minister of Defence, Ollongren.
- In June 2024 Airbus signed a cooperation agreement with German AI startup Helsing to work on a futuristic unmanned aircraft dubbed Wingman. Helsing will provide its AI knowledge, “including the fusion of various sensors and algorithms for electronic warfare.” The Wingman will be given a range of tasks, from jamming and reconnaissance to “strike missions against ground and airborne targets,” and will deploy guided weapons during operations too dangerous for a manned aircraft. Helsing is involved in both major European fighter aircraft programmes. Closely related, Airbus also cooperates with NeuralAgent, another startup, to develop technology to automate an unlimited number of combat functions and greatly reduce their response times.
- In April 2024 the US Defense Advanced Research Projects Agency (DARPA) reported the first dogfight (an aerial battle between fighter aircraft conducted at close range) between a manned fighter and an AI-controlled aircraft. According to the Commandant of the US Air Force Test Pilot School, the test showed that the DARPA project “moves in the right direction”.
Examples of military AI developments are numerous. The NGO Saferworld warned that “we are likely to start seeing ever-more advanced and potentially more lethal weapon systems,” and that there is a “need to act – and act now,” because AI is “this generation’s Oppenheimer moment”, a reference to the early stages of nuclear weapons development, now widely known from the recent film Oppenheimer. At that time the realization grew that an explosive power had been created that could destroy the world in a few strikes, and that this power had to be controlled.
Ongoing wars may, however, speed up the uncontrolled development of “Weapons that can decide for themselves whom or what to target — and even when to kill — (…) entering military arsenals”, as phrased by Defense News, a major US military magazine. The same article introduced the concept of the “terminator problem”: if one state has a certain military technology, all others believe they need it too in order to be secure, a situation that makes regulating AI difficult.
When you consider AI as a method to absorb information and make quick decisions, or even to initiate actions with or without people in the loop, based on gathered data and machine-learned processes, you understand the power and the danger of AI. Being the fastest is one of the crucial components of gaining momentum in warfare; this is a basic insight of military strategy. More than two thousand years ago the Chinese strategist Sun Tzu already offered several pieces of advice on swift battle in The Art of War. Millennia later, tempo and time are still central elements in strategic thought. A version of the US Marine Corps manual on warfighting, for example, states: “Success [in war] depends in large part on the ability to adapt—to proactively shape changing events to our advantage as well as to react quickly to constantly changing conditions,” and “Tempo is itself a weapon—often the most important.” And in a report on tempo and reconnaissance forces, the famous strategist Carl von Clausewitz is cited from his On War, which identifies “men, time, and space as the key components of the essential activity in war, combat.”
From another angle, the International Committee of the Red Cross recently pointed to the dangers of speeding up warfare: the risk of escalation and the mistakes that may result. It proposes slowing things down.
AI is a tool that enormously increases the speed of military action, and the terminator problem is therefore even more closely tied to AI technology than to conventional weaponry. It will be hard to convince powerful nations to forgo the advantages of AI and to commit to limitation and control.
Despite this, over 140 countries came together to talk about the dangers and the need to control AI at an April 2024 conference in Vienna, ‘Humanity at the Crossroads: Autonomous Weapons Systems and the Challenge of Regulation’. Both the NGO Saferworld and the military magazine Defense News refer to this conference, and the latter expands somewhat on the larger-than-expected turnout (1,000 delegates). But it also reports on a major barrier to reining in the technology, still in its infancy but developing quickly: “Much of the Global South (…) now seems interested in restricting the technology,” according to Alexander Kmentt, the Austrian Foreign Ministry’s diplomat on arms control, but, and here lies the crux, “though little could be achieved without buy-in from the major global powers.”
“Do we really think that in the middle of a battle between China and the U.S., someone is going to say: ‘Hold on, we can’t let the machine do that’?” asked Natasha Bajema, a researcher at the James Martin Center for Nonproliferation Studies, referring to the allure of what she described as war moving at machine speed. The appetite for more autonomy in weapons, fanned by combat in Ukraine and Gaza, has drowned out long-standing calls for limits on AI in military applications. Taking the developing Cold War 2.0 with China into account, the situation becomes even grimmer.
The Vienna conference shows there is interest in, and some space for, debating control, and this space should be used at this moment in time. But one can be sure that sound results will be much harder to reach, or even impossible, when world politics slides back into confrontation between the superpowers. Control of AI therefore needs a two-tier approach. The first tier involves specialists, diplomats and politicians debating and proposing controls on different aspects of AI; Vienna showed this is not a fata morgana. The second tier is the geopolitical climate itself: without easing the Cold War atmosphere of the current situation, one can be sure these efforts will be cut short by the military need to be quicker, and thus more decisive, than the opponent. Meetings add to the possibilities for exchange between opposing parties and may introduce small steps that become the start of larger diplomatic gains. Since Vienna, the US and China have started talking on AI (“not aimed at any substantive outcomes”). Such minimal steps may lead towards more substantial policy and contact between adversaries.
The road towards Cold War 2.0 will inevitably lead to more uncontrolled AI weaponry. Arms control, especially in this important field of autonomous warfighting, is not a technical issue alone; it is also bound up with the power politics of this world. And let’s be clear: in the field of peace and security, military policy has the upper hand (underlined, for example, by fast-growing military expenditures), including over arms control, peace-making and diplomatic answers to military conflict. Let’s not fool ourselves: in a world of confrontation, control of this dangerous technology will fail.