New York — In a stark address to the United Nations General Assembly on September 24, 2025, Ukrainian President Volodymyr Zelenskyy delivered a grave message: the world is now engaged in “the most destructive arms race in human history.” This new competition is not fueled by nuclear material but by the rapid, unstoppable proliferation of military Artificial Intelligence (AI) and autonomous drones.
Zelenskyy's alarm centered on the growing capabilities of autonomous weapons, which he described as “drones fighting drones, attacking critical infrastructure and attacking people all by themselves.” He emphasized that these lethal platforms are becoming drastically cheaper, more effective, and increasingly difficult to contain, pushing the global community toward an irreversible escalation. He called for immediate, decisive international action.
Why This Technology Is Uniquely Destabilizing
The autonomous systems described by the Ukrainian president pose a unique threat due to three interconnected factors: their speed of proliferation, their ability to weaponize inexpensive platforms, and the non-transparent nature of their software. Unlike the immense technical and economic barriers to developing nuclear weapons, AI-enabled drones can be created by a wide array of state and non-state actors and deployed en masse. This transformation converts what was once an asymmetric threat into a dangerously symmetrical, accessible form of warfare.
Parallels and Critical Departures from Past Arms Races
Security analysts often draw comparisons between the current AI-drone competition and historical precedents, such as the Cold War nuclear sprint, the Space Race, or the cyber arms build-up.
- Proliferation Velocity: Nuclear development demanded centralized industrial capacity and scarce fissile materials. In sharp contrast, small-platform drones and military AI can be rapidly prototyped and fielded using readily available, off-the-shelf components, drastically shortening warning times and exponentially increasing the number of potential belligerents.
- Verification and Attribution: Past arms control regimes relied on verifiable signals, such as nuclear test detection and fissile material accounting. AI and black-box software, however, operate with minimal observable signatures and can be easily disguised, copied, or re-engineered, making international verification and clear attribution exceptionally complex.
- Dual-Use Dilemma: Many commercial AI breakthroughs (e.g., in computer vision, navigation, and autonomy) directly translate into lethal military applications. This inherent dual-use nature renders simple prohibition measures technically and politically infeasible. International organizations such as the ICRC therefore advocate a prohibition on systems that lack “meaningful human control,” while regulating the rest, a position at the heart of current UN discussions.
Many security scholars share Zelenskyy's concern. As Paul Scharre and others have argued, warfare acts as an accelerant for military technology: front-line necessity drives the swift production and deployment of AI-enabled systems. Without established norms and controls, this proliferation risks becoming irreversible, dramatically lowering the global threshold for conflict.
The Path Forward: Concrete Policy Interventions
While key military powers currently resist blanket prohibitions, a pragmatic, phased roadmap is necessary to manage the existential risks posed by autonomous weapons. These measures combine essential prohibition with regulation, deterrence, and defensive resilience:
1. Launch Phased Multilateral Treaty Negotiations
- Immediate Prohibition: Demand an immediate ban on fully autonomous weapons that select and engage targets without meaningful human control, drawing on language already advocated by the ICRC and NGOs.
- Parallel Regulation: Concurrently, establish talks within the Convention on Certain Conventional Weapons (CCW) framework to precisely define “meaningful human control” and to explicitly list banned use cases, such as the autonomous targeting of civilian infrastructure.
2. Establish Targeted Dual-Use Export Controls
- Harmonization: Major supplier states must harmonize and expand export controls on critical dual-use components, including advanced chipsets, LIDAR, specialized compute hardware, and autonomy toolkits.
- Risk Calibration: Controls must be calibrated to raise the lead time and cost for militarized AI toolchains without stifling legitimate civilian AI research and development (mirroring historical non-proliferation regimes).
3. Mandate Transparency and Auditability for Military AI
- Inspection Standards: Require states and defense contractors to implement auditable logs, reproducible model versions, and rigorous red-team testing protocols for all deployed military AI systems (one possible log structure is sketched after this list).
- Confidence Building: Create a third-party inspection framework for high-risk deployments (such as air-defense automation) to aid in international attribution and confidence-building measures.
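For illustration only, the sketch below shows one way such an auditable decision log might be structured: a hash-chained record that ties each system decision to a reproducible model version and an explicit human authorization, so a third-party inspector can detect after-the-fact tampering. The field names, values, and chaining scheme are assumptions made for this example, not a description of any existing standard or fielded system.

```python
# Illustrative sketch only: a hash-chained audit record for decisions made by a
# deployed military AI system. All field names and the chaining scheme are
# hypothetical assumptions, not an existing standard.
import hashlib
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class DecisionRecord:
    model_version: str        # reproducible build identifier of the deployed model
    input_digest: str         # hash of the sensor input (raw data stays classified)
    recommended_action: str   # what the system proposed
    human_authorization: str  # who approved or overrode it ("meaningful human control")
    timestamp: float
    prev_record_hash: str     # links this record to the previous one in the log

    def record_hash(self) -> str:
        """Tamper-evident digest over the full record contents."""
        payload = json.dumps(asdict(self), sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()

# Each new record commits to the hash of the one before it, so deleting or
# altering an earlier entry breaks the chain and is detectable on inspection.
genesis = "0" * 64
record = DecisionRecord(
    model_version="targeting-model v1.4.2",                      # hypothetical
    input_digest=hashlib.sha256(b"sensor frame 42").hexdigest(),  # hypothetical
    recommended_action="track-only",
    human_authorization="operator-17: approved",
    timestamp=time.time(),
    prev_record_hash=genesis,
)
print(record.record_hash())
```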
4. Invest in Collective Defense and Deterrence
- Resilience and Coordination: Alliance members must coordinate efforts to bolster counter-drone defenses and harden critical national infrastructure (power grids, communication hubs, hospitals) against swarm attacks.
- Clear Red Lines: Establish and communicate firm red lines (e.g., cross-border attacks on alliance territory) that carry agreed-upon political and economic consequences to effectively deter escalation.
5. Accountability and Support for Vulnerable States
- Capacity Building: Donor programs must be established to provide smaller, vulnerable nations with the funds and expertise to build passive defenses, develop legal frameworks, and improve incident-reporting mechanisms, reducing their exposure to drone-swarm attacks.
- Sanctions: Implement rapid, coordinated sanctions against corporate or state entities that are found to be supplying weaponized autonomy to aggressive actors, thereby raising the economic and reputational costs of proliferation.
The two core difficulties, the dual-use problem and the challenge of enforcing controls on software, mean that a swift, complete ban is improbable. However, Zelenskyy's central thesis is difficult to refute: inaction today will make the global security problem vastly more expensive, complex, and potentially catastrophic tomorrow.
The challenge now is to quickly merge necessary defensive investments with targeted international controls and binding norms to decelerate the most dangerous elements of this new technological arms race.