WHY THIS MATTERS IN BRIEF
Few realise just how fast we’re approaching the tipping point for autonomous warfare, but at a time when we can’t control, or even predict, the evolution of AI and “smart” systems, the time for debate, arguably, was ten years ago.
The persistent and, more importantly, rapid rise of fully autonomous, Artificial Intelligence (AI) based warfare systems – self-guiding, intelligent “fire and forget” cruise missiles, autonomous fighter jets and hypersonic bombers, hunter-killer drones, nuclear submarines and warships, not to mention robots – continues to get moved down the list of priorities at the United Nations (UN), which, for the fifth year in a row, cancelled its debate on how to regulate these platforms, this time because Brazil owed it money for the meeting rooms. In response, 116 of the world’s leading experts from 26 countries have written an open letter calling for an outright ban on the development of what they’re loosely referring to as “Killer Robots.”
Led by Tesla’s Elon Musk and Google’s Mustafa Suleyman, they are calling for an outright ban on autonomous weapons, and as the UN keeps stalling on the issue, with even China, Israel, the UK and the US agreeing that something has to be done, and Russia opposing, you could say the issue is becoming “increasingly pressing.”
The UN’s first formal vote to debate Killer Robots passed with a majority earlier this year, but more than five years on from the first emergence of this new breed of AI infused, autonomous platforms, progress has moved like tar, and now the founders of some of the world’s leading AI and robotics companies are calling on the UN to put the brakes on an arms race that, in many respects, already looks like it’s gearing up to run at full tilt.
In their letter, they warn the UN’s Convention on Certain Conventional Weapons (CCW), the committee responsible for regulating weapons, that this arms race threatens to usher in the “third revolution in warfare,” after gunpowder and nuclear arms.
“Once developed, lethal autonomous weapons will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend. These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways,” they wrote. “We do not have long to act. Once this Pandora’s box is opened, it will be hard to close.”
As readers of this site will see from my updates, which arguably only scratch the surface, I’d suggest that the box has already been opened. Fortunately, despite the fact that the world’s militaries already have the capability to take the human out of the loop, they haven’t. That said, all it would take is, ostensibly, the flick of a switch, and I’m not kidding – a good example being Lockheed’s recent demonstration of what they call a “fully autonomous kill chain.”
Experts have previously warned that AI has already reached the point where the deployment of fully autonomous weapons is feasible within years, rather than decades. And while AI can be used to make the battlefield a safer place for military personnel, for example by letting them fight wars from the “comfort of their own countries,” experts fear that offensive weapons that operate on their own would lower the threshold for going to battle and result in greater loss of human life – and that’s before the systems get hacked, or, as increasingly looks likely, take on a mind of their own and “spontaneously learn,” or gain the ability to program themselves… but that, along with the “Black Box” behaviours of AI, is a different story, for now anyway, and I don’t want to bore you. Yawn.
And, of course, I won’t mention how recently a robot and a new type of “creative AI” came together to create and 3D print a self-evolving robot. Soooo boring!
The letter, launching at the opening of the International Joint Conference on Artificial Intelligence (IJCAI) in Melbourne last Monday, has the backing of high profile figures in the robotics field and strongly stresses the need for urgent action.
The founders call for “morally wrong” lethal autonomous weapons systems to be added to the list of weapons banned under the UN’s Convention on Certain Conventional Weapons (CCW), brought into force in 1983, which includes chemical weapons and intentionally blinding laser weapons.
“Nearly every technology can be used for good and bad, and AI is no different. It can help tackle many of the pressing problems facing society today, such as inequality and poverty, the challenges posed by climate change and the ongoing global financial crisis,” said Toby Walsh, professor of AI at the University of New South Wales in Sydney. “However, the same technology can also be used in autonomous weapons to industrialise war. We need to make decisions today, choosing which of these futures we want.”
Musk, one of the signatories of the open letter, has repeatedly warned of the need for proactive regulation of AI, calling it humanity’s biggest existential threat. But while AI’s destructive potential is considered by some to be vast, it is also thought to be distant – and they’d be wrong, very wrong, because technology has a habit of moving faster than anyone realises, and it even catches me off guard at times, and I’m a Futurist. 3D printed brains, artificial alien life forms, designer babies, biological teleporters, disease fighting nano-submarines, tractor beams – you get the idea, and I haven’t even gotten to the good stuff yet…
“Unlike other potential manifestations of AI which still remain in the realm of science fiction, autonomous weapons systems are on the cusp of development right now and have a very real potential to cause significant harm to innocent people along with global instability,” said Ryan Gariepy, the founder of Clearpath Robotics.
This is not the first time the IJCAI, one of the world’s leading AI conferences, has been used as a platform to discuss lethal autonomous weapons systems. Two years ago it was used to launch an open letter signed by thousands of AI and robotics researchers including Musk and Stephen Hawking similarly calling for a ban, which helped push the UN into formal talks on the technologies.
The UK government opposed such a ban on lethal autonomous weapons in 2015, with the Foreign Office stating that “international humanitarian law already provides sufficient regulation for this area,” and saying that the UK was not developing lethal autonomous weapons and that all weapons employed by UK armed forces would be “under human oversight and control.”
While the suggestion of killer robots conjures images from science fiction such as the Terminator T-800 and Robocop’s ED-209, lethal autonomous weapons are already in use.
Samsung’s SGR-A1 sentry gun, which is technically capable of firing autonomously, although it is disputed whether it is deployed as such, is in use along the South Korean side of the 2.5-mile-wide Korean Demilitarized Zone. The fixed-place sentry gun, developed on behalf of the South Korean government, was the first of its kind: an autonomous system capable of performing surveillance, voice recognition, tracking and firing with a mounted machine gun or grenade launcher.
But it is not the only autonomous weapon system. The UK’s Taranis drone, in development by BAE Systems, is intended to be capable of carrying air-to-air and air-to-ground ordnance intercontinentally and of incorporating full autonomy. The unmanned combat aerial vehicle, about the size of a BAE Hawk, the plane used by the Red Arrows, had its first test flight in 2013 and is expected to be operational sometime after 2030 as part of the Royal Air Force’s Future Offensive Air System, which is destined to replace the UK’s human-piloted Tornado GR4 warplanes.
Meanwhile Russia, the US and other countries are developing robotic tanks that can either be remote controlled or operate autonomously, with projects ranging from autonomous versions of the Russian Uran-9 unmanned combat ground vehicle to conventional tanks retrofitted with autonomous systems. The US’s semi-autonomous, “fully autonomous capable” $4Bn, 600ft DDG-1000 destroyer, the USS Zumwalt, the US Navy’s fully autonomous mine hunter, the Sea Hunter, and the fully autonomous Reaper drone squadrons stood up earlier this year by the US Navy out of Jacksonville, Florida, are already operational, and Boeing recently announced the scaling up of its Echo Voyager autonomous submarine program.
And the less said about the autonomous “nuclear capable” Russian submarine that was found being tested off the US East Coast recently, the better.
When you create a fully autonomous toaster there’s probably little cause for alarm, but when you create a fully autonomous, or even “semi-autonomous,” military complex that’s then potentially run by its own autonomous AI, not too unlike the one the US Pentagon is now deploying to protect its critical systems, or the one that could automate “75 percent” of the US spy agencies, and that’s capable of wiping out countries, well, it’s probably time for a debate. As for me, I have to dash, my fully autonomous coffee maker’s just alerted me my drink’s ready. Until tomorrow, Futuristas – get it, it’s a take on baristas, and more on that concept in a few months’ time.