Autonomous weapons systems: Is the world heading in the right direction?

Why countries are acquiring AWSs

4/16/2025


The emergence of autonomous weapons systems (AWSs) powered by artificial intelligence marks a grim evolution in the human capacity for organized violence. These systems, driven not merely by silicon and code but by the ambitions of states and their militaries, alter the strategic, ethical, and economic architecture of modern warfare. They are neither neutral tools nor inevitable progress. Rather, they are expressions of deeply political choices, choices shaped by rivalries between great powers, the economic pressures of defense industries, and the inertia of techno-military ideologies.

The development and deployment of AI weaponry must be understood not as a technical question but as a geopolitical one. For the United States and China, AI is a frontier in a long-term struggle for strategic primacy. In American discourse, AI-powered weapons are framed as necessary to maintain deterrence against a rising China. In Chinese thinking, they are a way to leapfrog conventional Western advantages and project strength without replicating the costs of Cold War-era militarism. Russia, constrained economically but rich in cyber capabilities, views AI warfare as asymmetric leverage—a means to destabilize adversaries and assert influence without inviting open confrontation.

Each of these calculations is rooted in hard realist assessments, not fantasy. RAND Corporation studies have noted that China’s investments in AI and autonomous systems are on pace to match or exceed those of the United States in some critical domains by 2030. The U.S. National Security Commission on Artificial Intelligence, chaired by former Google CEO Eric Schmidt, has already declared that AI will be the “key military technology of the future,” demanding urgent and large-scale public-private partnerships. The language is breathless, but the implications are deadly serious: a global arms race is not being avoided; it is being pursued.

What distinguishes this arms race from those of the past is its economic opacity and its deep embedding in the civilian tech economy. Unlike the Manhattan Project or the Strategic Defense Initiative, AI weapon systems are developed not in secret government bunkers but in corporate labs, often by researchers unaware—or unconcerned—about the potential uses of their work. Google’s Project Maven, which sought to apply machine learning to drone surveillance footage, sparked internal revolts among employees. Yet the project was only the tip of a much larger iceberg: the Pentagon’s Joint Artificial Intelligence Center (JAIC), Microsoft’s defense contracts, Palantir’s battlefield analytics—these illustrate how Silicon Valley has become inseparable from the military economy.

The blurring of these boundaries has profound implications for accountability and public debate. AI weapon systems, especially autonomous lethal ones, challenge not just legal frameworks but the very notion of human responsibility in war. Who is responsible when a drone misidentifies a civilian target—an engineer? A commander? An algorithm? International humanitarian law, forged in an era of human decision-making, falters before this opacity. The UN Secretary-General has called for a global ban on fully autonomous weapons, describing them as “morally repugnant.” Yet key states—the U.S., Russia, Israel—continue to block such efforts, insisting instead on “meaningful human control” without ever specifying what that means in operational terms.

Among states with operational or near-operational AI weapons capabilities, Israel has perhaps been the most brazen. Its Harpy and Harop drones, designed for loitering attacks, have already demonstrated autonomous targeting capabilities. Reports suggest that Israel’s Iron Dome system incorporates machine learning to refine interception trajectories, while AI-driven targeting software has played a role in operations in Gaza. These systems are often tested in asymmetric environments, where legal scrutiny is limited and strategic consequences are minimal. In this, the technological frontier becomes an ethical periphery.

Russia’s doctrine, meanwhile, embraces AI not as a tool of precision but of destabilization. The 2022 invasion of Ukraine saw the deployment of unmanned systems and loitering munitions on both sides, but Russia’s emphasis also lay on integrating AI with cyber and information warfare. Disinformation bots, deepfake videos, and network attacks were all part of a broader strategy to undermine morale and trust, not merely to destroy materiel. This approach reflects a conception of warfare that is less Clausewitzian than it is hybrid and fluid, where AI becomes a scalpel not for decapitation strikes but for psychological attrition.

China’s AI strategy is both more ambitious and more opaque. Beijing views AI as central not just to military modernization but to regime stability. Surveillance, data collection, facial recognition—these are not just tools of control but sources of military advantage. The People’s Liberation Army (PLA) has invested heavily in AI-based decision support systems and is believed to be developing AI-powered submarines, drones, and electronic warfare tools. Unlike in the U.S., the fusion of civilian and military AI development in China is not a workaround but a design: the Military-Civil Fusion strategy mandates it. The result is a system where control is tight, progress is rapid, and transparency is minimal.

Other nations are racing to keep up or carve out niches. The United Kingdom’s Defence AI Strategy speaks of “AI-enabled force multipliers.” Turkey has exported AI-enhanced drones to Azerbaijan, tipping the balance in the 2020 Nagorno-Karabakh war. India has launched its Defence AI Council and is collaborating with private firms, although capacity remains limited. Iran’s experiments with AI-guided drones raise concerns not of peer competition but of proliferation to non-state actors. Hezbollah, the Houthis, and other groups have shown interest in loitering munitions and semi-autonomous systems. As with nuclear technology, the genie is already out of the bottle—but unlike nuclear weapons, AI weapons require no enriched uranium or underground testing sites. They require only code, bandwidth, and intent.

The implications of this are sobering. A 2023 study by the Stockholm International Peace Research Institute (SIPRI) noted that at least 45 countries have begun integrating AI into military decision-making or weapons systems. The European Parliament’s Scientific Foresight Unit warned that the spread of autonomous weapons could “lead to accidental conflicts” through system errors or misinterpretations. These are not distant threats. They are immediate, structural features of the emerging order. Autonomous systems may not spark World War III, but they could easily spark the incident that escalates into it.

The economic dimensions of AI weapons systems are similarly insidious. In the U.S. alone, defense spending on AI is projected to exceed $15 billion annually by 2025. DARPA, the Defense Advanced Research Projects Agency, continues to fund swarming drones and predictive analytics, drawing in contractors and startups with lucrative grants. For smaller countries, the lure of AI lies in its promise to bypass traditional limitations: a state that cannot afford an air force might afford killer drones; a rebel group without tanks might still acquire targeting software. The diffusion of power that AI weapons enable is real—but so is the diffusion of risk.

The logic of deterrence that once governed nuclear arms does not translate easily to AI. There is no mutually assured destruction on the algorithmic battlefield. There is no hotline to prevent a swarm of drones from misinterpreting a radar signal. Theories of escalation control falter when the speed of engagement exceeds human reaction time. And unlike nuclear weapons, which are so catastrophic that their use is rare, AI weapons are precisely designed to be usable. Their cheapness, precision, and perceived deniability make them tempting tools of first resort, not last.

The international response has been, at best, fragmented. The Convention on Certain Conventional Weapons (CCW), tasked with discussing lethal autonomous systems, has produced years of debate and no binding treaty. The U.S. insists on voluntary norms. China claims to support a ban but continues development. Russia mocks the entire process. Civil society organizations like the Campaign to Stop Killer Robots have raised awareness, but their influence on actual policy has been marginal. The logic of militarization remains stronger than the logic of restraint.

There are alternatives, but they require political will. A moratorium on fully autonomous weapons could be negotiated among major powers, even if weaker states continue development. Verification mechanisms could be modeled on arms control treaties, requiring states to disclose testing data and development benchmarks. Dual-use AI research could be subjected to ethical review panels, particularly in universities and tech firms. Export controls, already used for cyber tools and surveillance tech, could be expanded to include AI components with lethal applications.

At a deeper level, the conversation must shift from technical definitions to strategic ethics. AI weapons are not inherently evil; their use is a choice. But that choice is being shaped less by human deliberation than by institutional momentum, economic incentives, and political rivalry. Anatol Rapoport, the great game theorist and moral critic of Cold War logic, once wrote that the most dangerous weapons are not those that kill the most people, but those that make killing easier. AI weapons, by lowering the threshold for violence and obscuring responsibility, do precisely that.

History teaches little optimism. Most technologies of war, once invented, are used. But history also teaches that restraint is possible when costs are high and publics are informed. The biological weapons taboo, the partial nuclear test ban, the Chemical Weapons Convention—all emerged not from idealism alone but from a convergence of horror, pragmatism, and diplomacy. A similar coalition is needed now. Not to ban the future, but to preserve what is left of our shared humanity within it.