Amid the stately beiges of Geneva’s Palais des Nations this week, United Nations diplomats from Ukraine and Russia were launching strikes.
“What we could see in this hall in the course of the two last days is nothing else than the blackmailing of all of us by the Russian representative,” the Ukrainian diplomat said Tuesday.
The Russian delegate fired back a moment later: “There is discrimination suffered by my country because of restrictive measures against us.”
Ukraine was chastising Russia not over the country’s ongoing invasion but over a more abstract topic: autonomous weapons.
The exchange came at a meeting held under the Convention on Certain Conventional Weapons, a U.N. forum at which global delegates are supposed to be working toward a treaty on lethal autonomous weapons systems, the charged realm that both military experts and peace activists say is the future of war.
But citing visa restrictions that limited his team’s attendance, the Russian delegate asked that the meeting be disbanded, prompting denunciations from Ukraine and many others. The skirmish was playing out in a kind of parallel with the war in Ukraine — more genteel surroundings, equally high stakes.
Autonomous weapons — the catchall description for algorithms that help decide where and when a weapon should fire — are among the most fraught areas of modern warfare, making the human-directed drone strikes of recent decades look as quaint as a bayonet.
Proponents argue that they are nothing less than a godsend, improving precision and removing human mistakes and even the fog of war itself.
The weapons’ critics — and there are many — see disaster. They note a dehumanization that opens up battles to all sorts of machine-led errors, which a ruthless digital efficiency then makes more apocalyptic. While there are no signs such “slaughterbots” have been deployed in Ukraine, critics say the activities playing out there hint at grimmer battlefields ahead.
“Recent events are bringing this to the fore — they’re making us realize the tech we’re developing can be deployed and exposed to people with devastating consequences,” said Jonathan Kewley, co-head of the Tech Group at high-powered London law firm Clifford Chance, emphasizing this was a global and not a Russia-centric issue.
While they differ in their specifics, all fully autonomous weapons share one idea: that artificial intelligence can make firing decisions better than people can. Trained on thousands of battles and then tuned to a specific conflict, the software can be grafted onto a traditional weapon, where it seeks out enemy combatants and surgically drops bombs, fires guns or otherwise decimates enemies without a shred of human input.
The 39-year-old CCW convenes every five years to update its agreement to address new threats, like land mines. But AI weapons have proved its Waterloo. Delegates have been flummoxed by the unknowable dimensions of intelligent fighting machines and hobbled by the slow-plays of military powers, like Russia, eager to bleed the clock while the technology races ahead. In December, the quinquennial meeting failed to produce the consensus the CCW requires for any update, forcing the group back to the drawing board at another meeting this month.
“We are not holding this meeting on the back of a resounding success,” the Irish delegate dryly noted this week.
Activists fear all these delays will come at a cost. The tech is now so evolved, they say, that some militaries around the world could deploy it in their next conflict.
“I believe it’s just a matter of policy at this point, not technology,” Daan Kayser, who leads the autonomous weapons project for the Dutch group Pax for Peace, told The Post from Geneva. “Any one of a number of countries could have computers killing without a single human anywhere near it. And that should frighten everyone.”
Russia’s machine-gun manufacturer Kalashnikov Group announced in 2017 that it was working on a gun with a neural network. The country is also believed to have the potential to deploy the Lancet and the Kub — two “loitering drones” that can stay near a target for hours and activate only when needed — with various autonomous capabilities.
Activists worry that as Russia shows it is apparently willing to use other controversial weapons in Ukraine, like cluster bombs, fully autonomous weapons won’t be far behind. (Russia — and for that matter the United States and Ukraine — did not sign on to the 2008 cluster-bomb treaty that more than 100 other countries agreed to.)
But they also say it would be a mistake to lay all the threats at Russia’s door. The U.S. military has been engaged in its own race toward autonomy, contracting with the likes of Microsoft and Amazon for AI services. It has created an AI-focused training program for the 18th Airborne Corps at Fort Bragg — soldiers designing systems so the machines can fight the wars — and built a hub of forward-looking tech at the Army Futures Command, in Austin.
The Air Force Research Laboratory, for its part, has spent years developing something called the Agile Condor, a highly efficient computer with deep AI capabilities that can be attached to traditional weapons; in the fall, it was tested aboard a remotely piloted aircraft known as the MQ-9 Reaper. The United States also has a stockpile of its own loitering munitions, like the Mini Harpy, that it can equip with autonomous capabilities.
China has been pushing, too. A Brookings Institution report in 2020 said that the country’s defense industry has been “pursuing significant investments in robotics, swarming, and other applications of artificial intelligence and machine learning.”
A study by Pax found that between 2005 and 2015, the United States had 26% of all new AI patents granted in the military domain, and China, 25%. In the years since, China has eclipsed America. China is believed to have made particular strides in military-grade facial recognition, pouring billions of dollars into the effort; with such technology, a machine identifies an enemy, often from miles away, without any confirmation by a human.
The hazards of AI weapons were brought home last year when a U.N. Security Council report said a Turkish drone, the Kargu-2, appeared to have fired fully autonomously in the long-running Libyan civil war — potentially marking the first time on this planet a human being died entirely because a machine thought they should.
All of this has made some nongovernmental organizations very nervous. “Are we really ready to allow machines to decide to kill people?” asked Isabelle Jones, campaign outreach manager for an AI-critical umbrella group named Stop Killer Robots. “Are we ready for what that means?”
Formed in 2012, Stop Killer Robots has a playful name but a hellbent mission. The group encompasses some 180 NGOs and combines a spiritual argument for a human-centered world (“Less autonomy. More humanity”) with a brass-tacks argument about reducing casualties.
Jones cited a popular advocate goal: “meaningful human control.” (Whether this should mean a full-on ban is partly what’s flummoxing the U.N. group.)
Military insiders say such aims are misguided.
“Any effort to ban these things is futile — they convey too much of an advantage for states to agree to that,” said C. Anthony Pfaff, a retired Army colonel who served as a military adviser to the State Department and is now a professor at the U.S. Army War College.
Instead, he said, the right rules around AI weapons would ease concerns while paying dividends.
“There’s a powerful reason to explore these technologies,” he added. “The potential is there; nothing is necessarily evil about them. We just have to make sure we use them in a way that gets the best outcome.”
Like other supporters, Pfaff notes that it’s an abundance of human rage and vengefulness that has led to war crimes. Machines lack all such emotion.
But critics say it is exactly emotion that governments should seek to protect. Even eyes peering through the fog of war, they say, are attached to human beings, with all their ability to react flexibly.
Military strategists describe a battle scenario in which a U.S. autonomous weapon knocks down a door in a far-off urban war to identify a compact, charged group of males coming at it with knives. Processing an obvious threat, it takes aim.
It does not know that the war is in Indonesia, where males of all ages wear knives around their necks; that these are not short men but 10-year-old boys; that their emotion is not anger but laughter and playing. An AI cannot, no matter how fast its microprocessor, infer intent.
There may also be a more macro effect.
“Just cause in going to war is important, and that happens because of consequences to individuals,” said Nancy Sherman, a Georgetown professor who has written numerous books on ethics and the military. “When you reduce the consequences to individuals you make the decision to enter a war too easy.”
This could lead to more wars — and, if the other side lacked AI weapons of its own, highly asymmetric ones.
If by chance both sides had autonomous weapons, it could result in the science-fiction scenario of two robot sides destroying each other. Whether this will keep conflict away from civilians or push it closer, no one can say.
It is head-spinners like this that seem to be holding up negotiators. Last year, the CCW got bogged down when a group of 10 countries, many of them South American, wanted the treaty to be updated to include a full AI ban, while others wanted a more dynamic approach. Delegates debated how much human awareness was enough human awareness, and at what point in the decision chain it should be applied.
And three military giants shunned the debate entirely: The United States, Russia and India all wanted no AI update to the agreement at all, arguing that existing humanitarian law was sufficient.
This week in Geneva didn’t yield much more progress. After several days of infighting brought on by Russia’s protest tactics, the chair moved the substantive proceedings to “informal” mode, putting hope of a treaty even further out of reach.
Some attempts at regulation have been made at the national and regional levels. The U.S. Defense Department has issued a list of AI guidelines, while the European Union has been advancing a comprehensive new AI Act.
But Kewley, the attorney, pointed out that the act offers a carve-out for military uses.
“We worry about the impact of AI in so many services and areas of our lives but where it can have the most extreme impact — in the context of war — we’re leaving it up to the military,” he said.
He added: “If we don’t design laws the whole world will follow — if we design a robot that can kill people and doesn’t have a sense of right and wrong built in — it will be a very, very high-risk journey we’re following.”