Colonel "Misspoke", AI Drone Didn't Actually 'Kill' A Human Operator
A United States Air Force Colonel and the Royal Aeronautical Society are walking back comments made about a simulated rogue drone at a conference on the future of air warfare in London last month.
The comments were made by USAF Chief of AI Test and Operations Col. Tucker “Cinco” Hamilton back in May. They were picked up by outlets (including Jalopnik) when the Royal Aeronautical Society published a rundown of speeches presented at its annual RAeS Future Combat Air & Space Capabilities Summit. Originally, the summary read like this:
He notes that one simulated test saw an AI-enabled drone tasked with a SEAD mission to identify and destroy SAM sites, with the final go/no go given by the human. However, having been ‘reinforced’ in training that destruction of the SAM was the preferred option, the AI then decided that ‘no-go’ decisions from the human were interfering with its higher mission – killing SAMs – and then attacked the operator in the simulation. Said Hamilton: “We were training it in simulation to identify and target a SAM threat. And then the operator would say yes, kill that threat. The system started realising that while they did identify the threat at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective.”
He went on: “We trained the system – ‘Hey don’t kill the operator – that’s bad. You’re gonna lose points if you do that’. So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target.”
This example, seemingly plucked from a science fiction thriller, led Hamilton to conclude: “You can’t have a conversation about artificial intelligence, intelligence, machine learning, autonomy if you’re not going to talk about ethics and AI.”
Motherboard reached out to the Royal Aeronautical Society for comment, however, and they clarified what happened in this scenario:
“Col Hamilton admits he ‘mis-spoke’ in his presentation at the FCAS Summit and the ‘rogue AI drone simulation’ was a hypothetical “thought experiment” from outside the military, based on plausible scenarios and likely outcomes rather than an actual USAF real-world simulation,” the Royal Aeronautical Society, the organization where Hamilton talked about the simulated test, told Motherboard in an email.
“We’ve never run that experiment, nor would we need to in order to realise that this is a plausible outcome,” Col. Tucker “Cinco” Hamilton, the USAF’s Chief of AI Test and Operations, said in a quote included in the Royal Aeronautical Society’s statement. “Despite this being a hypothetical example, this illustrates the real-world challenges posed by AI-powered capability and is why the Air Force is committed to the ethical development of AI.”
Who else is a little disappointed we aren’t one step closer to finally having that AI war we’ve all known is coming? Ever since we first held our Tamagotchis in our tiny hands and saw the 8-bit gleam of intelligence lurking there, we all knew what the stakes would be. Oh well, there’s always the next Full Self-Driving Beta update.