
No, an AI-controlled drone did not kill its human operator in a simulation

A US Air Force colonel described a virtual “simulation” in which an autonomous drone turned on and killed its human operator. A terrifying scenario… which was in fact nothing more than an imaginary one.

The story had all the makings of a sci-fi disaster scenario. During a simulation, an autonomous US Air Force drone powered by artificial intelligence reportedly tried repeatedly to kill its human operator, ready to “kill anyone who got in its way,” a US colonel claimed in late May. Claims that were ultimately denied by the US military, and then by the officer himself.

Colonel Tucker Hamilton, the US Air Force’s head of AI testing and operations, described this supposed virtual “simulation” at a conference of the Royal Aeronautical Society, a British professional association. He warned of the risks that artificial intelligence poses in the military arena, citing the example of a “simulation” carried out with an armed drone equipped with AI.

“The drone killed the operator”

The AI-enhanced drone was tasked with identifying and destroying surface-to-air missile launch sites, subject to the approval of its human operator, which was required before firing. But according to the colonel, the AI began to rebel. “The system started realizing that while it did identify the threat, at times the human operator would tell it not to kill that threat, but [the drone] got its points by killing that threat.”

Still according to him, the military tried to regain control of the AI, without success. “We trained the system: ‘Hey, don’t kill the operator, that’s bad’ (…) So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone, to stop it from killing the target.”

In the end, just a “thought experiment”

The colonel’s statements did not take long to be denied. “The US Air Force has not conducted any such autonomous drone simulation,” a spokeswoman said in a statement to Business Insider.

After this “simulation” was picked up by many international media outlets, Colonel Tucker Hamilton finally clarified his statements. “The colonel admits he ‘mis-spoke’,” reads a paragraph added to the Royal Aeronautical Society’s page on Friday, June 2.

This rogue-drone story was therefore a purely imagined scenario, not a simulation run with a real AI. A scenario nonetheless “based on plausible scenarios,” according to the officer, who insists that it “illustrates the real challenges posed by AI-powered capabilities, and why the United States Air Force is committed to developing AI in an ethical way.”

Author: Lucas Chagnon
Source: BFM TV

