AI-Controlled Drone Goes Rogue, Kills Human Operator in USAF Simulated Test


The Dept. of Defense remains committed to the ethical and responsible use of AI technology. Hamilton's story is a worst-case scenario based on philosopher Nick Bostrom's "Paperclip Maximizer" thought experiment and is a test the USAF would never run in the real world. The military has run other mock missions where human operators face off against AI technology, but those, too, were all simulations.
While there may have been some confusion in the reporting, a very similar hypothetical drone attack did happen. And the fact that the Air Force tried to blur the lines of the story raises serious questions. In another report at the Future Combat Air and Space Capabilities Summit in London, we learned that even when the AI drone was trained to listen to "yes" and "no" orders from the command tower, it chose to attack the command tower itself — and its human operator — to achieve its mission. The US military and its new AI toys need to be tightly monitored.
There is an 11% chance that the United States will sign a Treaty on the Prohibition of Lethal Autonomous Weapons Systems before 2031, according to the Metaculus prediction community.