
Scientists ran an experiment to prove a super-intelligent AI couldn't be controlled

WHY THIS MATTERS IN BRIEF

Control is an illusion, and historically the smartest species has always wiped out less capable ones, so people are worried that's exactly what AI could do to humanity.

 

Have you ever heard people ask whether AI will destroy the world, or whether we'll ever be able to control future Artificial Intelligences? If not, then firstly, what rock have you been hiding under, and is there space under it for one more? And if you have, then you'll know that no one ever comes up with a decent answer.

 

That said, and for what it's worth, every once in a while Elon Musk tells everyone that one day AI could become an immortal dictator, which suggests he thinks we couldn't control it, and every once in a while Google announces it still hasn't succeeded in creating a kill switch that would let it terminate rogue AIs, which, again, suggests more of the same. And let's not even go anywhere near the "Doomsday Games" event, where hundreds of the world's top experts and scientists couldn't figure out how to solve the majority of the world's doomsday scenarios, or the time Google demonstrated that more powerful AIs get "aggressive" and "kill" weaker ones …

And as for my answer, it'd also be no, categorically, especially as we get closer to realising the dawn of Artificial General Intelligence (AGI) and Artificial Super Intelligence (ASI) – two events which will change our world beyond all recognition. After all, we can't even control what dumb software does to us, admittedly thanks mostly to bugs, hackers, and robo-hackers, so what chance would we have against "intelligent software" that can design, write, and evolve its own code, sometimes spontaneously – and all at near infinite speed?

 

Now scientists in the US have just delivered their verdict on whether we’d be able to control a high-level computer super-intelligence. The answer? Almost definitely not.

The catch is that controlling a super-intelligence far beyond human comprehension would require a simulation of that super-intelligence which we can analyse. But if we’re unable to comprehend it, it’s impossible to create such a simulation.

Rules such as “Cause no harm to humans” can’t be set if we don’t understand the kind of scenarios that an AI is going to come up with, suggest the authors of the new paper. Once a computer system is working on a level above the scope of our programmers, we can no longer set limits.

 

“A super-intelligence poses a fundamentally different problem than those typically studied under the banner of ‘robot ethics’,” write the researchers.

“This is because a super-intelligence is multi-faceted, and therefore potentially capable of mobilising a diversity of resources in order to achieve objectives that are potentially incomprehensible to humans, let alone controllable.”

Part of the team's reasoning comes from the halting problem put forward by Alan Turing in 1936. The problem centres on knowing whether or not a computer program will reach a conclusion and an answer, in which case it halts, or simply loop forever trying to find one.

 

As Turing managed to prove using some smart maths, while we can know the answer for some specific programs, it's logically impossible to find a method that would let us know it for every potential program that could ever be written. That brings us back to AI, which in a super-intelligent state could feasibly hold every possible computer program in its memory all at once.
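
To make that concrete, here's a minimal Python sketch of Turing's classic diagonalisation argument. The function names are mine, and `halts` is a hypothetical oracle rather than anything you could actually implement – which is exactly the point:

```python
# A sketch of Turing's halting-problem argument. `halts` is a
# hypothetical oracle - the contradiction below shows that no such
# general-purpose function can ever actually be written.

def halts(program, data) -> bool:
    """Pretend this returns True if program(data) eventually halts,
    and False if it loops forever."""
    raise NotImplementedError("no general halting decider can exist")

def paradox(program):
    if halts(program, program):  # ask the oracle about `program`...
        while True:              # ...and do the opposite: loop forever
            pass
    return                       # ...or halt immediately

# Now consider paradox(paradox):
#  - if halts(paradox, paradox) returns True, paradox loops forever;
#  - if it returns False, paradox halts straight away.
# Either way the oracle is wrong about itself, so `halts` cannot exist.
```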

Any program written to stop AI harming humans and destroying the world, for example, may reach a conclusion and halt – or it may not. It's mathematically impossible for us to be absolutely sure either way, which means it's not "containable."
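
The researchers' containment argument reduces to exactly that result. As a rough illustration – and to be clear, the names `is_safe`, `harm_humans`, and `wrapper` below are mine, not the paper's – any general "harm checker" could be turned into a halting decider, which Turing already showed can't exist:

```python
# Illustrative reduction (the names are mine, not the paper's): if a
# general containment check existed, it would solve the halting
# problem - so no such check can exist.

def harm_humans():
    """Placeholder for any behaviour the containment check must veto."""

def is_safe(program, data) -> bool:
    """Pretend this returns True iff program(data) never harms humans."""
    raise NotImplementedError("undecidable in general")

def halts_via_safety(program, data) -> bool:
    def wrapper(_ignored):
        program(data)   # run the program under test to completion...
        harm_humans()   # ...then misbehave, but only if it halted
    # wrapper is "unsafe" exactly when program(data) halts, so a
    # working is_safe would double as a halting decider - contradiction.
    return not is_safe(wrapper, None)
```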

“In effect, this makes the containment algorithm unusable,” says computer scientist Iyad Rahwan, from the Max-Planck Institute for Human Development in Germany.

 

The alternative to teaching AI some ethics and telling it not to destroy the world – something which no algorithm can be absolutely certain of doing, the researchers say – is to limit the capabilities of the super-intelligence. It could be cut off from parts of the internet or from certain networks, for example.

The new study rejects this idea too, suggesting that it would limit the reach of the AI – the argument goes that if we’re not going to use it to solve problems beyond the scope of humans, then why create it in the first place?

If we are going to push ahead with AI then we might not even know when a super-intelligence beyond our control arrives, such is its incomprehensibility. That means we need to start asking some serious questions about the directions we’re going in.

 

“A super-intelligent machine that controls the world sounds like science fiction,” says computer scientist Manuel Cebrian, from the Max-Planck Institute for Human Development. “But there are already machines that perform certain important tasks independently without programmers fully understanding how they learned it. The question therefore arises whether this could at some point become uncontrollable and dangerous for humanity.”

The research has been published in the Journal of Artificial Intelligence Research.
