Autonomously assured destruction

The ethics of artificially intelligent weapons.

Samuel Hagood

February 16, 2024

In the early hours of September 26, 1983, the world nearly ended. The Cold War between the United States and the Soviet Union was as icy as ever. On September 1, the Soviet Union had shot down a South Korean airliner that had strayed into its airspace. Everyone on board was killed, including an American congressman. Global tensions were at their highest since the Cuban missile crisis two decades prior. Fearing retaliation for the downed airliner, the Soviets kept close watch on Oko, an experimental satellite-based early warning system meant to detect US ballistic missile launches. As midnight passed in Moscow, Oko alerted Soviet Lieutenant Colonel Stanislav Petrov that an American nuclear missile was headed for his homeland. One, two, three, four… the early warning system counted five nuclear warheads streaking towards Moscow. The system was certain that the United States had started World War III. All Petrov had to do was pick up his telephone and relay Oko’s conclusion to his superiors, and Soviet nukes would launch for the United States. Millions would die, and a nuclear winter would poison those left behind.

But Petrov did not trust Oko. It was a brand-new system that could still have bugs. Furthermore, it made no sense for the US to launch only five missiles; if Washington planned to turn the Cold War hot, the first strike would be a thousand-missile knockout punch. Petrov weighed the options, picked up the phone, and told his commanding officer that the system was malfunctioning. He was correct: Oko had mistaken sunlight reflecting off high-altitude clouds for the exhaust plumes of American missiles.

What would have happened if the keys to World War III resided not in the hands of a human, but in the circuit boards of a machine? On September 26 of 1983, the Soviet early warning system was absolutely sure that the US had fired first. If Oko had control over the Soviet nuclear arsenal, we would not be here today, and Oko would continue watching for missiles over a lifeless world.

Today, many machines complete tasks that used to require human intelligence. These systems, known collectively as artificial intelligence, or AI, streamline our lives by finding the quickest route through traffic and the best results for our searches. If you use a newer smartphone, you know that AI can recognize both your words and your face. The chatbot ChatGPT draws on the collective knowledge of the Internet to spout expertise in seconds. Yet artificial intelligence can do as much harm as good. This becomes apparent at the crossroads of AI and military innovation.

Now a new class of weapons systems heralds a new age of warfare. Fully autonomous weapons systems (autonomous weapons from this point on) use artificial intelligence to independently select and engage targets without human control or supervision. US Air Force strategist John Boyd pioneered the theory that a combatant is always doing one of four things: observing, orienting, deciding, or acting. We’ve always had our eyes to observe; now we have night-vision goggles and satellites. We’ve always had plans that orient us; now we have rules of engagement and computer simulations. We’ve had swords, cannons, muskets, and missiles with which to act; but we’ve never created instruments of war that decide when and whom to fight. Autonomous weapons are the first.

Autonomous weapons are not yet widely used, but they do exist. Azerbaijan fired hundreds of Israeli-supplied Harpy drones in its 2020 war against Armenia. Harpy drones are autonomous weapons: once launched, they loitered in the sky until they identified targets, then slammed their 70 pounds of explosives into the ranks of Armenian fighters. The Russian army is developing a fleet of autonomous ground vehicles ranging in size from an ATV to a full-blown tank. China’s leadership plans to autonomize every branch of the Chinese armed forces. Smaller countries, fearing being overshadowed by their neighbors, develop or purchase autonomous weapons of their own, and in 2020 the US spent an estimated $35.1 billion researching autonomy in warfare so as not to fall behind China and Russia.

Autonomous weapons are attractive to modern militaries because they aren’t restrained by human limits like sleep. They cost less to operate than manned systems and they keep troops off dangerous battlefields. These advantages blind nations to the threat autonomous weapons pose to world peace and human rights. A new arms race lurks in our future, one that will ride the wave of the greater societal shift towards an artificially intelligent world. That arms race should be called off before it ever begins.

***

First, deploying autonomous weapons on the battlefield will expose crucial flaws in the design of the artificial intelligence that powers them.

The AI behind an autonomous weapon is typically a neural network, or neural net. Like a human brain, a neural network learns to recognize images through experience. Computer scientists show the network millions of labeled images, and it builds its own understanding of the world through trial and error, adjusting its internal parameters until its guesses match the labels. Today’s neural networks are extremely powerful, performing as well as or better than humans on some object-recognition benchmarks, and the technology will only continue to improve. However, that may not be a good thing.
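
To make that trial-and-error process concrete, here is a minimal sketch, in Python, of how such a classifier is trained. It is purely illustrative and not drawn from any real weapons system: the images and labels are random placeholders, and the tiny network and training settings are assumptions chosen for brevity.

```python
# Minimal illustration of "learning by trial and error": the network guesses,
# measures how wrong it was, and nudges its internal weights to be less wrong.
# The data here is random noise standing in for a real image dataset.
import torch
import torch.nn as nn

# A tiny stand-in classifier: 32x32 grayscale "images", 10 possible labels.
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(32 * 32, 128),
    nn.ReLU(),
    nn.Linear(128, 10),
)
loss_fn = nn.CrossEntropyLoss()                      # how wrong was the guess?
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

for step in range(1_000):
    images = torch.randn(64, 1, 32, 32)              # a batch of fake "photos"
    labels = torch.randint(0, 10, (64,))             # their (fake) correct answers
    loss = loss_fn(model(images), labels)            # the error signal
    optimizer.zero_grad()
    loss.backward()                                  # trace the error back through the net
    optimizer.step()                                 # adjust the weights slightly
```

No human ever writes down the rules the network ends up using; after enough passes over real photographs, this same loop is what allows a network to label images about as well as a person can.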

In 2017, the US Department of Defense hired a team of experts to research the implications of neural networks for warfare. After an exhaustive study, the group determined that it is “impossible to really understand exactly how the system does what it does.” Moreover, it “is not clear that the existing AI…is immediately amenable to any sort of…validation and verification.” Testing and evaluating weapons before they see action is a reasonable requirement already in place in most militaries, but adequately testing neural networks is unrealistic. Neural networks are so-called “black boxes,” defying human attempts to understand why they come to the conclusions they do. Both their successes and their failures result from a way of processing the world entirely different from, even alien to, our own. For example, specially altered images, known as adversarial images, can fool neural networks. Some of these adversarial images appear simply as static, while others look like traditional camouflage. They can all trick neural networks into confidently identifying a minivan as a tank or a white shirt as a suicide vest. One of the most sinister aspects of neural networks is not that they will make mistakes—because they will make very few—but that we won’t be able to learn from the mistakes that they do make. Cases of malfunction, and thus of needless death, will go unsolved. Despite these uncertainties, autonomous weapons may be too tempting for countries to ignore without a ban on their development and use.
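
As an illustration of how easily that alien way of seeing can be exploited, the sketch below implements one well-known recipe for adversarial images, the fast gradient sign method. The classifier and the image are placeholder stand-ins invented for the example; the point is only that a perturbation too faint for a person to notice can flip a network’s confident answer.

```python
# Fast gradient sign method (FGSM): nudge every pixel a tiny amount in whatever
# direction most increases the classifier's error. To a person the change looks
# like faint static; to the network it can look like a different object entirely.
import torch
import torch.nn as nn

def fgsm_attack(model, image, true_label, epsilon=0.03):
    image = image.clone().detach().requires_grad_(True)
    loss = nn.CrossEntropyLoss()(model(image), true_label)
    loss.backward()                                  # how does each pixel affect the error?
    perturbation = epsilon * image.grad.sign()       # imperceptible push per pixel
    return (image + perturbation).detach().clamp(0.0, 1.0)

# Placeholder classifier and input, standing in for a real vision system.
model = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32, 10))
image = torch.rand(1, 1, 32, 32)                     # one fake 32x32 "photo"
label = torch.tensor([3])                            # its true class

adversarial = fgsm_attack(model, image, label)
print(model(image).argmax().item(), "->", model(adversarial).argmax().item())
```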

Machines like self-driving cars and autonomous weapons are at their best when they fully understand their surroundings, and we help them see by training them on data sets. But war is always an unpredictable affair. When a soldier can’t defeat his opponents by outgunning or outracing them, he must outwit them, something that neural networks, and therefore autonomous weapons, are not designed to do. The adversarial images made to fool neural networks are a good example of this weakness. However, tricking the neural network may not even be necessary for its failure. It may well trick itself, as a tragic story from the Iraq War demonstrates. In 2003, a Patriot air-defense battery designed to shoot down cruise missiles identified what it took to be an enemy Iraqi missile headed for US ally Kuwait. Unsuspecting American soldiers authorized the Patriot to neutralize the threat. Unfortunately, the Iraqi missile was a ghost created by electromagnetic interference from another Patriot battery. There was no Iraqi missile in the sky, but there was a US fighter jet returning home from a mission over Iraq. The Patriot missiles locked on to Lieutenant Nathan White’s aircraft and killed him instantly. Ultimately, Army investigators decided that no one was at fault for the accident. The electromagnetic interference was a novel circumstance, unforeseen even after years of testing. The soldiers had simply trusted the Patriot system too readily. And, had the Patriot been a fully autonomous weapon, the cause of Lieutenant White’s death might still be a mystery today.

***

Second, autonomous weapons will be an unstable and potentially inflammatory factor in global balances of power and international crises, eroding the global order.

In 2010, stock-trading algorithms caused a free fall across the US stock market. From 2:32 p.m. to 3:00 p.m. on May 6, in a scare known as the Flash Crash, the Dow lost nearly ten percent of its value to microsecond interactions between these algorithms. In war, speed is a strength, and autonomous weapons’ algorithms operate at speeds similar to those of their stockbroker cousins. But if autonomous weapons mishandle any of the countless factors at play in a modern war zone, whether international law, weather conditions, or the opposing force’s mistakes, they could initiate “flash wars” in seconds. Mere weapons will not hesitate to begin conflicts. This instability will be compounded if both sides are using autonomous weapons. For those interested in peace, autonomous weapons’ speed may not be their greatest strength but, in fact, their greatest weakness.
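
To see why speed itself is the danger, consider the toy model below. It simulates nothing real: the reaction times, alert levels, and tit-for-tat response policy are all invented for illustration. It shows only how two automated systems that answer each other instantly can escalate from a single false alarm long before a human has time to object.

```python
# A toy model (not a simulation of any real system) of two "retaliate instantly"
# policies escalating from one false alarm faster than a human can intervene.
HUMAN_REACTION_MS = 500        # rough time for a person to notice and object
MACHINE_REACTION_MS = 5        # rough time for an algorithm to respond

def escalate(alert_level):
    """Each side's policy: answer any perceived escalation with one step more."""
    return alert_level + 1

side_a, side_b = 0, 0
side_a = escalate(side_a)       # a sensor glitch: side A sees a phantom threat
elapsed_ms = 0

# The two systems react to each other in a tight loop.
while elapsed_ms < HUMAN_REACTION_MS:
    side_b = escalate(side_b)   # B answers A's new posture
    side_a = escalate(side_a)   # A answers B's answer
    elapsed_ms += 2 * MACHINE_REACTION_MS

print(f"In {elapsed_ms} ms, alert levels reached A={side_a}, B={side_b} "
      f"before a human could have reacted.")
```

By the time the loop ends, both sides sit dozens of escalation steps above where they started, and the half second a human would have needed to intervene has already passed.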

Some international actors, however, are not interested in peace. For authoritarian states, autonomous weapons will lend unprecedented power and control. Dictators naturally fear their people. Even dictators who did not rise from the military use their armies to keep the masses in check, but this common tactic has backfired on dictatorships in the past. In 2011, Egyptian president Hosni Mubarak was pushed from power by his own army, which mutinied rather than keep turning on the citizens protesting his regime. If Russia, China, or Iran opts to use autonomous weapons instead of soldiers, it need not worry about deserters or dissent. Imagine if the Egyptian forces holding back the mobs of protestors had been machines instead of humans. Thousands of Egyptians might have died at the hands of their own government. Robert Work, a former US Deputy Secretary of Defense, testified to this danger: “authoritarian regimes who believe people are weaknesses in the machine…that they cannot be trusted, will naturally gravitate towards totally automated solutions.” With autonomous weapons in play, no Caesar will cross the Rubicon. Dictators like Vladimir Putin and Xi Jinping will rule with a literal iron fist. But a ban on autonomous weapons could protect the oppressed from a new age of oppression.

Even in free nations, autonomous weapons risk concentrating the ability to wage war in the hands of a select few, subverting democratic principles. Fewer human boots on the ground is an undeniably attractive prospect. Fathers could stay home with their children, sons with their mothers, while robots protect the nation. Technology would have triumphed. But in fact, one of the greatest counterweights to the undertaking of war in democratic states is the public’s fear of its consequences. Hesitation to send those fathers and sons to battle restricts the conditions under which nations go to war. Ominously, autonomous weapons remove that restraint. They will disconnect a country’s foreign policy from the conscience of its people.

***

Finally, there is a moral case to be made against nonhuman actors taking human lives.

People should take responsibility for their mistakes. For a war to be just, soldiers, generals, and nations must take responsibility for the lives they end. Those involved in a just war have a duty to respect their opponents and to accept that responsibility; otherwise, our enemies are simply animals to be exterminated. But who is put on trial when an autonomous weapon commits a war crime? Who asks forgiveness of the mother whose son was just recently playing soldier in the backyard? It seems reasonable to first blame the programmers of the autonomous weapon for an accident. Yet, if they created the weapon to be fully autonomous, then it is designed to learn and make its own decisions. It doesn’t make sense to criminalize innovators for delivering on a customer’s request, especially when militaries should know that autonomous weapons are entirely capable of making mistakes. Blaming commanding officers for accidental deaths and crimes fails to address this issue as well. The crimes of a fully autonomous agent are not the crimes of the general, any more than the crimes of the private are the crimes of the general. Our last chance at a just war, then, is to cut open the roof of the courthouse and place our prototype, missile-laden drone in the defendant’s seat to stand trial. The weapon committed the crime, and fighting a just war means punishing those who fight unjustly. But we can’t punish the autonomous weapon by letting its gears rust or taking away its online chess privileges. Machines don’t understand what it means to be punished; they don’t feel pain of any kind. Because they cannot suffer as we do, no punishment will satisfy a victim’s loved ones or society’s sense of justice. Fully autonomous weapons therefore create a black hole of moral responsibility that, if left unchecked, will leave deaths unexplained and families crying for forgotten sons and daughters. But if the development and use of fully autonomous weapons is banned, that hole will never appear.

Paul Scharre, a retired Army Ranger who later turned to researching autonomous weapons, lived through many nerve-racking moments during his time in service. In a 2018 speech, Scharre recalled a time when his squad settled onto a mountaintop near the border between Afghanistan and Pakistan to watch for Taliban border crossings. A Taliban group sent a young girl from a nearby village to scout Scharre’s position and report back. Scharre watched through his sniper rifle’s scope as the girl approached and eventually turned away; his group simply waited out her search. What never occurred to them was to kill the child, despite the fact that she was a combatant gathering intelligence, an enemy scout. At best, a robot in Scharre’s place would have followed an algorithm ensuring that it complied with established standards of war. Whether that algorithm intended to uphold justice or not, a small girl, forced up a mountainside by terrorists, would have died that day. When a human looks another human in the eye and takes a life, he sees himself, if only briefly, in the eyes looking back at him. Submarine hunters see the oil slicks left by dying subs and are reminded of the dying men within them. Killing another human being is something we do not and should not take lightly. Ending a life should be a last resort. But we lose our grasp on what war really is when we remove ourselves from the battlefield. Lives mean nothing to autonomous weapons; they see only goals and metrics.

Given an autonomous weapon’s destructive power and speed, one slip in its decision-making, or one typo buried deep in its code, could be disastrous. Autonomous weapons are our creations. They are our Frankensteins, and our pens will write their stories. We should end their horror story before it ever begins. Policymakers should sign a ban and put down their pens. As we build the future, artificial intelligence should come in peace, not in war.

Samuel Hagood is a first-year undergraduate at the University of Chicago studying political science and economics.