Researchers create a kill switch to terminate rogue AI agents

WHY THIS MATTERS IN BRIEF

Researchers have a new weapon in the war against the rogue AIs of the future.

 

Scientists from Google’s Artificial Intelligence division, DeepMind, and Oxford University are developing a “kill switch” for AI that will allow human operators to repeatedly and safely interrupt an AI program.

 


You use AI every day, you might just not realize it. It’s being woven into every part of the world’s digital fabric, and it already has a growing influence over our daily lives and society. From introducing new, technology-influenced cultural biases to increasing human longevity, and much more in between, the adoption of AI is wide and its impact is already staggering.

As these platforms become increasingly powerful, capable and independent – including the ability to self-code, self-heal and self-replicate – it’s only natural that we ask what happens when, not if, one of them goes rogue, and that we prepare an adequate defense.

 

 

 

Shooting down the Rogue Army

Going rogue, however, is only one of the concerns – albeit the greatest one. As these systems get increasingly complex and proficient there are a growing number of ways in which an AI could behave “less than optimally”, and laugh as you may, by 2025 it’s highly likely that we’ll see the world’s first “schizophrenic” AI, caused by a “blip” in its code – let’s just hope it’s not near an ICBM when it has an episode.

Over the longer term AI platforms could – dare we say will – learn to avoid interruptions to themselves by simply finding new, innovative ways to disable their human masters’ big red button. The scenarios, and therefore the challenge that faces Google and the other researchers in this space, are immense. You could easily argue it’s akin to trying to hard code common, ethical and moral behaviours into every human being, and then some. Yet every day the newspapers remind us that, despite society’s best efforts, the powerful combined forces of intelligence, determination and individualism make this an almost impossible challenge – and it will be no less of one for AI, which will more than likely end up inheriting some of those same traits, albeit in digital form.

 


Sometimes, all we can do is seek to limit the damage.

Today AI is the intelligence that powers trillions of digital transactions – from Google’s and Siri’s search algorithms to Facebook’s and Netflix’s matching algorithms. It diagnoses complex diseases with staggering speed, dramatically cuts drug discovery times, optimises energy transmission and transportation networks, helps streamline business operations, makes our cities “smarter”, and, increasingly, it’s both the protector and the operator embedded into more and more of the world’s defense platforms.

 


The digital kill switch

Until now, though, there has never been an obvious way to put what is arguably the world’s most powerful genie back in its bottle. The team’s research revolves around a method to ensure that AIs which learn via a process called “reinforcement learning” can be repeatedly and safely interrupted by human overseers, without learning how to avoid or intentionally manipulate those interventions.

In an academic paper the researchers outline how future intelligent machines could be algorithmically soft coded to prevent them from learning how to – and, maybe more worryingly, from wanting to – override human input. It’s a topic that has caused particular angst among the scientific and expert communities, with notables including Elon Musk, founder of Tesla and SpaceX and a backer of OpenAI, Stephen Hawking and Bill Gates being particularly vocal about the potentially catastrophic, Skynet-like consequences of an out of control AI.

 


To stop the inevitable from happening the researchers are trying to design a system that makes human interruptions of the algorithms “not appear as being part of the task at hand”. Essentially, this means the machines are taught to stop themselves rather than being given the opportunity to think that the command originated from the outside.
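
If you want a feel for what that looks like in practice, here’s a minimal sketch – our illustration, not DeepMind’s actual code – of the idea that an interruption should override the agent’s policy from outside the task, leaving the rewards the agent learns from untouched. The names interrupt_signal and safe_action are assumptions made for the example.

```python
# Hypothetical sketch: the operator's override is applied at the
# policy level, outside the task, so the reward signal and the
# environment the agent learns from are left unchanged.

def act(agent_action, interrupt_signal, safe_action):
    """Return the action actually sent to the environment.

    If the operator raises the interrupt signal, the agent is steered
    into a designated safe behaviour (e.g. shutting down); otherwise
    it executes its own choice. Because the substitution happens at
    the policy level, nothing about the task itself appears to change
    from the agent's point of view.
    """
    if interrupt_signal:
        return safe_action   # operator override, invisible to the task
    return agent_action      # normal, uninterrupted behaviour
```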

In the paper the researchers state that some algorithms, such as Q-learning, are already safely interruptible, and that while others can be modified so that they can be safely stopped, it’s not clear whether the remaining algorithms can be easily made safely interruptible. When the researchers tried to apply the changes to more universal algorithms, for example those associated with Artificial General Intelligence (AGI), the result was agents that were only “weakly”, not “fully”, interruptible.
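
The difference comes down to how each algorithm bootstraps its value estimates. Below is a toy sketch in Python contrasting the off-policy Q-learning update, which the researchers note is already safely interruptible, with the on-policy Sarsa update, which isn’t unless it’s reworked. The hyperparameters and the tabular set-up here are illustrative assumptions, not the paper’s experimental configuration.

```python
from collections import defaultdict

ALPHA, GAMMA = 0.1, 0.99   # learning rate and discount factor (assumed values)
Q = defaultdict(float)     # Q[(state, action)] -> estimated return

def q_learning_update(state, action, reward, next_state, actions):
    # Off-policy: bootstraps from the *best* available next action,
    # regardless of which action is actually taken next - so a forced
    # operator interruption can't leak into the learned values.
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

def sarsa_update(state, action, reward, next_state, next_action):
    # On-policy: bootstraps from the action that was *actually* taken
    # next. If that action was forced by an interruption, the agent
    # learns values for the interrupted behaviour - exactly the bias
    # safe interruptibility has to remove.
    Q[(state, action)] += ALPHA * (reward + GAMMA * Q[(next_state, next_action)] - Q[(state, action)])
```

In plain terms, Q-learning is indifferent to who picked the next action, which is what keeps the big red button invisible to it; Sarsa has to be explicitly modified to gain the same property.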

 

Conclusion

While many people can argue that control is an illusion, it’s also clear that we, as humans, must be able to exert a high level of control over future intelligent agents. But with so many AI variants, and with the pace of the technology advancing so rapidly, maybe all we’ll be able to do is create a system that limits the damage.

 
