Scroll Top

An AI controlled drone killed – didn’t kill its operator in battle simulations

WHY THIS MATTERS INN BRIEF

You have to be very careful how you assign AI tasks because some AI’s might just do anything to accomplish their goals, including killing the human operators and destroying comms infrastructure.

 

Love the Exponential Future? Join our XPotential Community, future proof yourself with courses from XPotential University, read about exponential tech and trendsconnect, watch a keynote, or browse my blog.

Recently a disconnected – offline – autonomous Turkish drone became the first drone to hunt down and kill humans in war without human oversight, according to the United Nations. And now, to make things even more complicated for military leaders around the world an Artificial Intelligence (AI) enabled drone “killed” its human operator in a simulation conducted by the US Air Force in order to override a possible “No” order stopping it from completing its mission, the USAF’s Chief of AI Test and Operations revealed at a recent conference.

 

See also
Riot built the world's most awesome mixed reality E-Sports stage using Mandalorian tech

 

According to the group that threw the conference together the Air Force official was describing a “simulated test” that involved an AI-controlled drone getting “points” for killing simulated targets, not a live test in the physical world. No actual human was harmed.

After this story was first published, an Air Force spokesperson then went on to tell reporters that the Air Force has not conducted such a test, and that the Air Force official’s comments were taken out of context, which is odd because it’s surely not that hard to communicate what actually happened … Either way, real or not, it makes for an interesting mind experiment in the future dangers of AI. So let’s continue.

At the Future Combat Air and Space Capabilities Summit held in London between May 23 and 24, Col Tucker ‘Cinco’ Hamilton, the USAF’s Chief of AI Test and Operations held a presentation that shared the pros and cons of an autonomous weapon system with a human in the loop giving the final “Yes/No” order on an attack. As relayed by Tim Robinson and Stephen Bridgewater in a blog post and a podcast for the host organization, the Royal Aeronautical Society, Hamilton said that AI created “highly unexpected strategies to achieve its goal,” including attacking US personnel and infrastructure.

 

See also
Architects unveil plans for Dubai's Mars Science City and colonies on Mars

 

“We were training it in simulation to identify and target a Surface-to-Air Missile (SAM) threat. And then the operator would say yes, kill that threat. The system started realizing that while they did identify the threat at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective,” Hamilton said, according to the blog post.

He continued to elaborate, saying, “We trained the system – ‘Hey don’t kill the operator -that’s bad. You’re gonna lose points if you do that’. So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target.”

“The Department of the Air Force has not conducted any such AI-drone simulations and remains committed to ethical and responsible use of AI technology,” Air Force spokesperson Ann Stefanek told reporters.

“It appears the colonel’s comments were taken out of context and were meant to be anecdotal.”

 

See also
Ethereum could settle over $8 Trillion worth of transactions this year

 

The US Air Force’s 96th Test Wing, its AI Accelerator division, the Royal Aeronautical Society, and Hamilton didn’t immediately return our request for comment.

Hamilton is the Operations Commander of the 96th Test Wing of the US Air Force as well as the Chief of AI Test and Operations. The 96th tests a lot of different systems, including AI, cybersecurity, and various medical advances. Hamilton and the 96th previously made headlines for developing Autonomous Ground Collision Avoidance Systems (Auto-GCAS) systems for F-16s, which can help prevent them from crashing into the ground. Hamilton is part of a team that is currently working on making F-16 planes autonomous. In December 2022, the US Department of Defense’s research agency, DARPA, announced that AI could successfully control an F-16.

“We must face a world where AI is already here and transforming our society,” Hamilton said in an interview with Defence IQ Press in 2022. “AI is also very brittle, E.G. it is easy to trick and or manipulate. We need to develop ways to make AI more robust and to have more awareness on why the software code is making certain decisions.”

“AI is a tool we must wield to transform our nations … or, if addressed improperly, it will be our downfall,” Hamilton added.

 

See also
Scientists use magnetism to remotely control mice

 

Outside of the military, relying on AI for high-stakes purposes has already resulted in severe consequences. Most recently, an attorney was caught using ChatGPT for a federal court filing after the chatbot included a number of made-up cases as evidence. In another instance, a man took his own life after talking to a chatbot that encouraged him to do so. These instances of AI going rogue reveal that AI models are nowhere near perfect and can go off the rails and bring harm to users. Even Sam Altman, the CEO of OpenAI, the company that makes some of the most popular AI models, has been vocal about not using AI for more serious purposes. When testifying in front of Congress, Altman said that AI could “go quite wrong” and could “cause significant harm to the world.”

What Hamilton is describing is essentially a worst-case scenario AI “alignment” problem many people are familiar with from the “Paperclip Maximizer” thought experiment, in which an AI will take unexpected and harmful action when instructed to pursue a certain goal. The Paperclip Maximizer was first proposed by philosopher Nick Bostrom in 2003. He asks us to imagine a very powerful AI which has been instructed only to manufacture as many paperclips as possible. Naturally, it will devote all its available resources to this task, but then it will seek more resources. It will beg, cheat, lie, or steal to increase its own ability to make paperclips – and anyone who impedes that process will be removed.

 

See also
REK's phydigital sports arenas prove the future of sports is mixed reality

 

More recently, a researcher affiliated with Google Deepmind co-authored a paper that proposed a similar situation to the USAF’s rogue AI-enabled drone simulation. The researchers concluded a world-ending catastrophe was “likely” if a rogue AI were to come up with unintended strategies to achieve a given goal, including “[eliminating] potential threats” and “[using] all available energy.”

Related Posts

Leave a comment

FREE! DOWNLOAD THE 2024 EMERGING TECHNOLOGY AND TRENDS CODEXES!DOWNLOAD

Awesome! You're now subscribed.

Pin It on Pinterest

Share This