
Scientists have created a checklist to determine if AI becomes conscious

WHY THIS MATTERS IN BRIEF

Consciousness is a tricky thing to prove, but a new test might help us determine if and when AI becomes conscious.

 

Love the Exponential Future? Join our XPotential Community, future proof yourself with courses from XPotential University, read about exponential tech and trends, connect, watch a keynote, or browse my blog.

Recently one of my friends had what amounted to a therapy session with ChatGPT, the Generative Artificial Intelligence (AI) from OpenAI that’s taken the world by storm, and, as expected, the AI’s responses were on point, sympathetic, and, as many others have been reporting, felt utterly human.

 

We all know what’s happening – under the hood of ChatGPT, a swarm of digital synapses has been trained on an internet’s worth of human-generated text in order to spit out favourable responses. Yet he said the interaction felt so real that he had to constantly remind himself that he was chatting with code – not a conscious, empathetic, or, as one researcher believed, sentient being on the other end.

With generative AI increasingly delivering seemingly human-like responses, it’s easy to emotionally assign a sort of “sentience” to the algorithm. But no, ChatGPT isn’t conscious. In 2022, Blake Lemoine at Google stirred up a media firestorm by proclaiming that one of the chatbots he worked on, LaMDA, was sentient, and he was subsequently fired.

But most deep learning models are loosely based on the brain’s inner workings. AI agents are increasingly endowed with human-like decision-making algorithms. The idea that machine intelligence could become sentient one day no longer seems like science fiction.

But how could we tell if machine brains one day gained sentience? The answer may lie in our own brains.

 

A preprint paper authored by 19 neuroscientists, philosophers, and computer scientists, including Dr. Robert Long from the Center for AI Safety and Dr. Yoshua Bengio from the University of Montreal, argues that the neurobiology of consciousness may be our best bet. Rather than simply studying an AI agent’s behaviour or responses – for example, during a chat – matching its responses to theories of human consciousness could provide a more objective ruler.

It’s an out-of-the-box proposal, but one that makes sense. We know we are conscious regardless of the word’s definition, which is still unsettled. Theories of how consciousness emerges in the brain are plentiful, with multiple leading candidates still being tested in global head-to-head trials.

The authors didn’t subscribe to any single neurobiological theory of consciousness. Instead, they derived a checklist of “indicator properties” of consciousness based on multiple leading ideas. There isn’t a strict cutoff – say, meeting X number of criteria means an AI agent is conscious. Rather, the indicators form a moving scale: the more criteria met, the more likely it is that a machine mind is sentient.

Using the guidelines to test several recent AI systems, including ChatGPT and other chatbots, the team concluded that for now, “no current AI systems are conscious.”

However, “there are no obvious technical barriers to building AI systems that satisfy these indicators,” they said. It’s possible that “conscious AI systems could realistically be built in the near term.”

 

Since Alan Turing’s famous imitation game in the 1950s, scientists have pondered how to prove whether a machine exhibits intelligence like a human’s.

Better known as the Turing test, the theoretical setup has a human judge conversing with a machine and another human – the judge has to decide which participant has an artificial mind. At the heart of the test is the provocative question “Can machines think?” The harder it is to tell the difference between machine and human, the more machines have advanced toward human-like intelligence.
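
To make that set-up concrete, here is a bare-bones Python sketch of the protocol. The respond functions, the judge’s question, and the hidden labels are placeholders invented for illustration, not real models or transcripts.

```python
# A bare-bones sketch of the imitation game described above. The judge's
# questions go to two hidden participants, "A" and "B", and the judge must
# guess from the transcript alone which one is the machine. Both respond
# functions are made-up stand-ins, not real models or people.

import random

def human_respond(prompt):
    return "Honestly, it depends on the context."  # stand-in for a person typing

def machine_respond(prompt):
    return "That is an interesting question."      # stand-in for a chatbot

def imitation_game(judge_questions):
    # Randomly hide the two participants behind the labels "A" and "B".
    responders = [human_respond, machine_respond]
    random.shuffle(responders)
    participants = dict(zip(["A", "B"], responders))

    transcript = {label: [] for label in participants}
    for question in judge_questions:
        for label, respond in participants.items():
            transcript[label].append(respond(question))

    # The harder it is to tell A from B in this transcript, the closer the
    # machine is to passing the test.
    return transcript

print(imitation_game(["Can machines think?"]))
```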

ChatGPT broke the Turing test. An example of a chatbot powered by a Large Language Model (LLM), ChatGPT soaks up internet comments, memes, and other content. It’s extremely adept at emulating human responses – writing essays, passing exams, dishing out recipes, and even doling out life advice.

These advances, which came at a shocking speed, stirred up debate on how to construct other criteria for gauging thinking machines. Most recent attempts have focused on standardized tests designed for humans: for example, those for high school students, the Bar exam for lawyers, or the GRE for entering grad school. OpenAI’s GPT-4, the AI model behind ChatGPT, scored in the top 10 percent of test takers on the Bar exam. However, it struggled to work out the rules of a relatively simple visual puzzle game.

 

The new benchmarks, while measuring a kind of “intelligence,” don’t necessarily tackle the problem of consciousness, so here’s where neuroscience comes in.

Neurobiological theories of consciousness are many and messy. But at their heart is neural computation: that is, how our neurons connect and process information so it reaches the conscious mind. In other words, consciousness is the result of the brain’s computation, although we don’t yet fully understand the details involved.

This practical look at consciousness makes it possible to translate theories from human consciousness to AI. Called computational functionalism, the hypothesis rests on the idea that computations of the right kind generate consciousness regardless of the medium – squishy, fatty blobs of cells inside our head or hard, cold chips that power machine minds. It suggests that “consciousness in AI is possible in principle,” said the team.

Then comes the hard part: how do you probe consciousness in an algorithmic black box? A standard method in humans is to measure electrical pulses in the brain, or to capture activity in high definition with functional MRI, but neither method is feasible for evaluating code.

Instead, the team took a “theory-heavy approach,” which was first used to study consciousness in non-human animals.

 

To start, they mined top theories of human consciousness, including the popular Global Workspace Theory (GWT), for indicators of consciousness. For example, GWT stipulates that a conscious mind has multiple specialized systems that work in parallel; we can simultaneously hear and see and process those streams of information. However, there’s a bottleneck in processing, requiring an attention mechanism.
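
To make that idea concrete, here is a toy Python sketch of a global-workspace-style bottleneck. The modules, their contents, and the salience scores are all made up for illustration; this is a sketch of the general idea, not the formalism used in the paper.

```python
# Toy illustration of the global workspace idea: specialised modules run in
# parallel, but only the most salient content wins the attention bottleneck
# and is "broadcast" to the rest of the system. Module names, contents, and
# salience scores are invented purely for illustration.

def vision_module(frame):
    return {"source": "vision", "content": f"saw {frame}", "salience": 0.8}

def audio_module(sound):
    return {"source": "audio", "content": f"heard {sound}", "salience": 0.5}

def global_workspace(candidates):
    # The bottleneck: a simple attention rule picks one candidate to broadcast.
    return max(candidates, key=lambda c: c["salience"])

candidates = [vision_module("a red ball"), audio_module("a door closing")]
broadcast = global_workspace(candidates)
print(f"Broadcast: {broadcast['content']} (from the {broadcast['source']} module)")
```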

The Recurrent Processing Theory suggests that information needs to feed back onto itself in multiple loops as a path towards consciousness. Other theories emphasize the need for a “body” of sorts that receives feedback from the environment and uses those learnings to better perceive and control responses to a dynamic outside world – something called “embodiment.”

With myriad theories of consciousness to choose from, the team laid out some ground rules. To be included, a theory needs substantial evidence from lab tests, such as studies capturing the brain activity of people in different conscious states. Overall, six theories met the mark. From there, the team developed 14 indicators.

It’s not one-and-done. None of the indicators mark a sentient AI on their own. In fact, standard machine learning methods can build systems that have individual properties from the list, explained the team. Rather, the list is a scale – the more criteria met, the higher the likelihood an AI system has some kind of consciousness.
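
As a rough illustration of that scale, here is a minimal Python sketch that scores a system against a hypothetical set of indicator names. The labels and the toy assessments are stand-ins invented for this example, not the paper’s actual 14-indicator rubric.

```python
# A minimal sketch of the "moving scale" idea, assuming a made-up set of
# indicator names; the real rubric has 14 indicators drawn from six theories,
# and these labels are illustrative stand-ins only.

INDICATORS = (
    "gwt_parallel_specialised_modules",
    "gwt_limited_capacity_workspace",
    "gwt_global_broadcast",
    "rpt_recurrent_feedback",
    "embodiment_sensorimotor_loop",
)

def indicator_score(system_properties):
    """Return the fraction of indicators a system satisfies.

    There is no hard cutoff: a higher fraction only means a higher position
    on the scale, not a verdict that the system is conscious.
    """
    met = [name for name in INDICATORS if name in system_properties]
    return len(met) / len(INDICATORS)

# Toy assessments, invented for illustration only.
chatbot_like = {"gwt_parallel_specialised_modules", "gwt_limited_capacity_workspace"}
robot_like = {"rpt_recurrent_feedback", "embodiment_sensorimotor_loop"}

print(f"chatbot-like system: {indicator_score(chatbot_like):.0%} of indicators met")
print(f"robot-like system:   {indicator_score(robot_like):.0%} of indicators met")
```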

 

How to assess each indicator? We’ll need to look into the “architecture of the system and how the information flows through it,” said Long.

In a proof of concept, the team used the checklist on several different AI systems, including the transformer-based large language models that underlie ChatGPT and algorithms that generate images, such as DALL-E 2. The results were hardly cut-and-dried, with some AI systems meeting a portion of the criteria while lacking in others.

However, although not designed with a global workspace in mind, each system “possesses some of the GWT indicator properties,” such as attention, said the team. Meanwhile, Google’s PaLM-E system, which injects observations from robotic sensors, met the criteria for embodiment.

None of the state-of-the-art AI systems checked off more than a few boxes, leading the authors to conclude that we haven’t yet entered the era of sentient AI. They further warned about the dangers of under-attributing consciousness in AI, which may risk allowing “morally significant harms,” and anthropomorphizing AI systems when they’re just cold, hard code.

 

Nevertheless, the paper sets guidelines for probing one of the most enigmatic aspects of the mind. “[The proposal is] very thoughtful, it’s not bombastic and it makes its assumptions really clear,” Dr. Anil Seth at the University of Sussex told Nature.

The report is far from the final word on the topic. As neuroscience further narrows down correlates of consciousness in the brain, the checklist will likely scrap some criteria and add others. For now, it’s a project in the making, and the authors invite other perspectives from multiple disciplines – neuroscience, philosophy, computer science, cognitive science – to further hone the list.
