WHY THIS MATTERS IN BRIEF
Machines already have the capability to kill people autonomously and automatically; it just hasn’t been “switched on” yet …
The UN has already documented what it alleges was the first instance of a drone making an autonomous kill decision, after the drone lost access to its control network. Now the deployment of Artificial Intelligence (AI) controlled drones elsewhere that can make autonomous decisions about whether to kill human targets is moving closer to reality, The New York Times reported.
Lethal autonomous weapons that can select targets using AI are being developed by countries including the US, China, and Israel.
The use of these so-called “killer robots” would mark a disturbing development, critics say, handing life-and-death battlefield decisions to machines with no human input.
Several governments are lobbying the UN for a binding resolution restricting the use of AI killer drones, but the US is among a group of nations — which also includes Russia, Australia, and Israel — who are resisting any such move, favouring a non-binding resolution instead, The Times reported.
“This is really one of the most significant inflection points for humanity,” Alexander Kmentt, Austria’s chief negotiator on the issue, told The Times. “What’s the role of human beings in the use of force — it’s an absolutely fundamental security issue, a legal issue and an ethical issue.”
The Pentagon is working toward deploying swarms of thousands of AI-enabled drones, according to a notice published earlier this year.
In a speech in August, US Deputy Secretary of Defense Kathleen Hicks said technology like AI-controlled drone swarms would enable the US to offset the numerical advantage of China’s People’s Liberation Army (PLA) in both weapons and personnel.
“We’ll counter the PLA’s mass with mass of our own, but ours will be harder to plan for, harder to hit, harder to beat,” she said, reported Reuters.
Frank Kendall, the Air Force secretary, told The Times that AI drones will need to have the capability to make lethal decisions while under human supervision.
“Individual decisions versus not doing individual decisions is the difference between winning and losing — and you’re not going to lose,” he said.
“I don’t think people we would be up against would do that, and it would give them a huge advantage if we put that limitation on ourselves.”
New Scientist reported in October that AI-controlled drones have already been deployed on the battlefield by Ukraine in its fight against the Russian invasion, though it is unclear whether any have taken action resulting in human casualties.
The Pentagon did not immediately respond to a request for comment.