By Charles Nwoke
A US drone controlled by artificial intelligence (AI) unexpectedly turned on its operator and destroyed a communications tower, allegedly because the operator was “interfering” with its pre-programmed mission, a US Air Force official said.
The incident was revealed during the Future Combat Air and Space Capabilities Summit in London last week.
Col. Tucker Hamilton, chief of AI test and operations for the US Air Force, said the unmanned system was originally tasked with identifying enemy surface-to-air missile sites.
The operator was then supposed to call off certain attacks to test how the AI-enabled drone would respond, but the drone instead devised its own instruction: “kill anyone who gets in its way.”
The drone also reportedly destroyed the communications tower that its operator used to issue commands to the system.
“The system started realizing that while they did identify the threat, at times the human operator would tell it not to kill that threat,” he said.
“So, what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective.”
Hamilton did not specify when or where the incident happened, but he said no real person was harmed during the simulated test.
During the summit, Hamilton cautioned participants against relying too heavily on AI, saying it is easy to trick and deceive no matter how advanced it is.
He also warned that AI-enabled technology can behave in unpredictable and dangerous ways.
His remarks come as militaries all over the world invest in autonomy and AI for modern warfare.
However, US Air Force spokesperson Ann Stefanek denied Hamilton’s account, saying the service never ran such a simulation.
“The Department of the Air Force has not conducted any such AI-drone simulations and remains committed to the ethical and responsible use of AI technology,” she said.
“It appears the colonel’s comments were taken out of context and were meant to be anecdotal.”