After testing small, artificial intelligence (AI)-powered autonomous surveillance drones in special mission operations and the war in Ukraine, the Pentagon is reportedly seeking to field thousands of AI-enabled autonomous vehicles by 2026 to keep pace with China.
This comes as part of the ambitious Replicator initiative, which is expected to "galvanize progress" and accelerate hard decisions on which AI technologies are mature and trustworthy enough to deploy — including on weaponized systems. The Pentagon has over 800 unclassified AI projects, many of which are still in testing.
As the military use of AI-powered autonomous drones inevitably expands, governments across the globe must commit to using them only in a limited capacity. These weapons should be prohibited from targeting areas containing humans; ideally, their scope would be narrowed to military objects alone. Because this technology makes battlefield decisions without human input, the humans who build it can and must program it to steer clear of people in the first place.
While it's understandable to question the legality and morality of autonomous weapon systems, US adversaries like China won't wait for such debates to conclude. Western countries, with the goal of defending freedom and peace, need to responsibly develop these weapons faster and in larger quantities if they hope to deter enemy regimes. These next-generation weapons are already being used in Ukraine, so it's best if democratic allies shape their global use before autocrats do.
There's an 11% chance that the United States will sign a Treaty on the Prohibition of Lethal Autonomous Weapons Systems before 2031, according to the Metaculus prediction community.