When delegates from 50 countries met in the Netherlands last week to discuss the future of military artificial intelligence, human rights activists and non-proliferation experts saw an opportunity. For years, rights groups have urged nations to limit the development of AI weapons and sign a legally binding treaty restricting their use, fearing their unchecked development could mirror last century’s nuclear arms race. Instead, the results of what could have been a historic summit amounted to little more than “feeble” window dressing, the rights groups said.
After two days of in-depth talks, panels, and presentations involving around 2,500 AI experts and industry leaders, the REAIM (get it?) summit ended in a non-legally binding “call to action” on the responsible development, deployment, and use of military AI. The attendees also agreed to establish a “Global Commission on AI.” That might sound lofty, but in reality those initiatives are limited to efforts to “raise awareness” about how the technology can be manufactured responsibly. Meaningful talk of actually reducing or limiting AI weapons was essentially off the table.
Stop Killer Robots, one of the leading rights groups advocating against AI in warfare, told Gizmodo the call to action offered a “vague and incoherent” vision of the military use of AI without any real clarity on rules or limitations. Safe Ground, an Australian rights group, called the entire summit a “missed opportunity.”
At the same time, the United States, which is both the world leader in AI weapons systems and historically one of the leading voices against an international AI weapons treaty, revealed a 12-point political declaration outlining its “responsible” autonomous systems strategy. The declaration, which comes just weeks after a controversial new Department of Defence directive on AI, says all AI systems should adhere to international human rights law and maintain “appropriate levels of human judgment.” Though State Department officials triumphantly advertised the declaration as a pivotal step forward, rights groups fighting to limit AI weapons systems said it’s a complete disaster.
“Now is not the time for countries to tinker with flawed political declarations,” Human Rights Watch Arms Advocacy Director Mary Wareham said in a tweet. Stop Killer Robots Government Relations Manager Ousman Noor went further and called the declaration “the most backwards position seen from any state, in years.”
“This Declaration falls drastically short of the international framework that the majority of states within UN discussions have called for,” Stop Killer Robots said in a statement. “It does not see the need for legally binding rules, and instead permits the development and use of Autonomous Weapons Systems, absent lines of acceptability.”
For AI military sceptics, the first-of-its-kind summit was actually seen as a step in the wrong direction. At a conference last year, a majority of the 125 states party to the U.N.’s Convention on Certain Conventional Weapons expressed interest in new laws essentially banning autonomous weapons development. UN Secretary-General António Guterres released a statement around the same time saying such systems should be prohibited under international law. Those efforts failed largely due to the U.S., China, and Russia, which all favour the development of these weapons. The views of those three countries were previously outliers at the U.N. Now, under the new framework, it appears a foregone conclusion that autonomous weapons systems are necessary and unavoidable.
One notable country not represented among the 50 or so nations at the REAIM summit? Russia, due to its ongoing war with Ukraine. Present or not, Russia and Ukraine were discussed throughout the summit as a potential testing ground for new, fully autonomous military technology. Ukraine already reportedly uses semi-autonomous attack drones and Clearview AI’s facial recognition service to identify dead Russian troops.
Here are some of the top highlights from the summit.
Over 2,500 people attended REAIM, the first-of-its-kind international AI weapons summit
The REAIM summit may have failed to appease rights groups, but it largely succeeded in bringing a wide variety of stakeholders to the table. Hosted in The Hague by the Netherlands and South Korea, the international summit was seen by some as an important first step in getting stakeholders, some of whom are actively competing against one another in an AI arms race, to meet under one roof and discuss the most pressing challenges presented by AI weapons systems. In total, around 2,500 attendees from 100 different countries attended the summit.
The REAIM Summit saw 2️⃣5️⃣0️⃣0️⃣ attendees from 1️⃣0️⃣0️⃣ countries with 8️⃣0️⃣ government representatives contributing to the Responsible use and deployment of Military #AI.
ℹ️ During REAIM 2023, we also launched a Call To Action on this important topic: https://t.co/UoGowhPZX6 pic.twitter.com/hLlhYI94iW
— REAIMsummit (@REAIMsummit) February 17, 2023
Dutch Foreign Minister Wopke Hoekstra told Reuters at the start of the summit that the event sought to agree on some definitions around AI weapons and to discuss ways to improve safety, under the assumption that nations would inevitably pursue autonomous warfare. In general, the stakeholders involved sought to push discussions of AI weapons higher up each nation’s political agenda.
“We are moving into a field that we do not know, for which we do not have guidelines, rules, frameworks, or agreements. But we will need them sooner rather than later,” Hoekstra told Reuters.
At the @REAIMsummit, agreement was reached on a joint ‘call to action’ on the responsible development, application, and use of AI in the military domain.
This underscores the need to put AI higher on the political agenda and to encourage initiatives. (1/2) pic.twitter.com/5L9B4khIeN — Ministerie van Defensie (@Defensie) February 16, 2023
The U.S. released a vague, toothless 12-point political declaration on military AI
The U.S. shocked some on the final day of the summit by revealing its own 12-point political declaration outlining its autonomous systems strategy and best practices for deployment. Among other points, the non-binding declaration says AI weapons must be consistent with international law, should maintain “appropriate levels of human judgment,” and should have their development overseen by “senior officials.” Perhaps most notably, the declaration says human beings should maintain control over all actions concerning nuclear weapons. In theory, that provision should help prevent a future nuclear holocaust stemming from a hacked weapons silo or faulty AI. All very reassuring.
Though the U.S. declaration does hint at some willingness by the world’s largest military to talk across the aisle, its vagueness also leaves more questions than answers. The declaration fails to dive into the specifics of the levels of human oversight required for AI weapons systems and even appears to depart from previous statements made by Deputy Secretary of Defence Kathleen Hicks, who told Gizmodo she believed AI systems should always have “humans in the loop.”
China calls for more international cooperation
Like the United States, China has largely refrained from signing on to large-scale treaties or agreements to limit AI weapons. The most obvious reason: China, like the U.S., has invested heavily in the space.
Tan Jian, China’s ambassador to the Netherlands, attended the event and reportedly submitted a pair of papers to the United Nations arguing that AI weaponry “concerns the common security and the well-being of mankind,” meaning any solution moving forward should be reached collectively. During the summit, according to Reuters, Tan said it was crucial that countries work together through the UN and “oppose seeking absolute military advantage and hegemony through AI.”
Palantir CEO says the AI military future is already upon us
Representatives of nation states weren’t the only people in attendance. The summit also welcomed industry experts and private-sector executives like Palantir CEO Alex Karp. During his speech, Karp reportedly said the Ukrainian military’s recent use of AI to positively identify targets on the battlefield had moved the question of AI weapons away from a “highly erudite ethics discussion” to something with immediate real-world consequences. The CEO previously said Ukrainians are using Palantir’s controversial data analytics software to carry out some of that targeting.
Karp, who has faced criticism from U.S. civil liberties groups for helping fuel a wave of so-called predictive policing tactics in major cities, agreed that there should be more transparency around the data used by AI weapons systems, but simultaneously said it was important for western countries not to fall behind China and Russia in the tech race.
“One of the major things we need to do in the West, is realise this lesson is completely understood by China and Russia,” Karp said, according to Reuters.
Human Rights Watch: ‘Now is not the time for countries to tinker with flawed political declarations’
Now is not the time for countries to tinker with flawed political declarations that pave the way for a future of automated killing – @hrw https://t.co/5NAKwWiD6B To protect humanity, US help negotiate new international law to prohibit and restrict autonomous weapons systems. https://t.co/3FjM0kt3ZF
— Mary Wareham (@marywareham) February 16, 2023
For years, NGO giant Human Rights Watch has been one of the leading voices advocating for an international treaty on autonomous weapons systems. In the past, the organisation blamed the U.S., Russia, China, and India for playing an outsized role in derailing treaty talks supported by dozens of smaller nations. On the surface, then, one might think HRW would respond favourably to the new U.S. political declaration. Instead, the organisation said the effort fell flat.
“Now is not the time for countries to tinker with flawed political declarations that pave the way for a future of automated killing,” Human Rights Watch Arms Advocacy Director Mary Wareham said in a statement. “To protect humanity, US [sic] help negotiate new international law to prohibit and restrict autonomous weapons systems.”
United States is pitching its flawed political declaration on “responsible use of weapons systems that incorporate AI capabilities” as an interim step. Yet for past DECADE it has opposed negotiating new international law on #KillerRobots. Shows how the Pentagon is still in charge pic.twitter.com/ajPzogcMyG
— Mary Wareham (@marywareham) February 16, 2023
That scepticism came just days after HRW released a lengthy report railing against a new U.S. Department of Defence directive on AI weapons, which it criticised as an “inadequate response” to the threats posed by the tech. That directive, the organisation said, was “out of step” with widely supported international proposals for treaties prohibiting and regulating autonomous weapons systems.
“The US pursuit of autonomous weapons systems without binding legal rules to explicitly address the dangers is a recipe for disaster,” Wareham said. “National policy and legislation are urgently needed to address the risks and challenges raised by removing human control from the use of force.”
Stop Killer Robots calls the U.S. declaration ‘the most backwards position seen from any state in years’
Just when you think the US is serious about being helpful, it announces the most backwards position seen from any state, in years.
This Political Declaration is terrible & worse than the US’s stated policy in multilateral discussions at the UN. #REAIM https://t.co/2ik9HCF4gW
— Ousman Noor (@ousmannoor) February 16, 2023
As their name subtly suggests, the Stop Killer Robots organisation strongly opposes the expansion of AI in weapons systems and wasn’t pleased with the outcome of the summit. The organisation said the widely agreed-upon call to action was “vague and incoherent” and failed to apply any real rules or limitations on the military use or development of AI, which was kinda the whole point of the summit.
As for the United States, Stop Killer Robots said its declaration “falls drastically short,” with the organisation’s government relations manager calling it “the most backwards position seen from any state, in years.”
“This ‘Political Declaration’ is toxic and is an attempt to radically undermine global effort towards establishing a new Treaty on Autonomous Weapons Systems,” Stop Killer Robots Government Relations Manager Ousman Noor said in a statement. “States should avoid it entirely.” Noor went on to say the declaration failed to prohibit weapons systems that are designed to target humans and also failed to establish clear restrictions on systems that can be used without human control.
“It contains no prohibitions on systems that cannot be used with meaningful human control and fails to recognise the need to prohibit systems that target humans,” Stop Killer Robots said. “It does not identify what types of limits are needed (temporal/spatial/duration of operation/scale of force etc.) and fails to give expression to the widely recognised need to ensure predictability, understandability, explainability, reliability and traceability.”
Australian rights group calls the summit a ‘missed opportunity’
Safe Ground, an Australia-based human rights organisation that has spoken out forcefully against autonomous weapons in the past, told Gizmodo the REAIM summit “missed an opportunity” to adequately discuss autonomous weapons, despite the event being billed as exploring AI in the military domain. Safe Ground also noted that the call to action, seen as the high point of the event, did not actually mention autonomous weapons or any prohibitions and obligations related to their development or use. It’s also non-binding, which means it’s mostly for show.
“Whilst discussions of responsible AI are important, international law on autonomous weapons is essential, as well as clear policy at the domestic level,” Safe Ground said.