WHY THIS MATTERS IN BRIEF
Learning to talk, as chatbots do, is one thing; understanding and responding to human emotions is an entirely different skillset, and one that demands enormous computing power.
Chatbots are everywhere these days: acting as your personal shopper, sending you notifications about your food order, updating you on your bank account balance, even serving as your hotel concierge. And thanks to years of improvements in Natural Language Processing (NLP) and Artificial Intelligence (AI), they, and their life-like digital human equivalents, have increasingly captured the attention of brands and their eager consumers.
However, as much as we want to believe we live in a world full of intelligent automation with the likes of Google Assistant, Amazon Alexa, or Apple Siri, which are sounding increasingly life-like and starting to hold their own conversations, there are still lots of limits to what today’s chatbots can actually offer.
Even as AIs like the digital human Lia from Soul Machines make strides toward reading and emulating human emotions, by analysing people’s body language and facial expressions as well as their voices, none of the bots on the market today are programmed to feel human emotion or genuinely care about the people they’re interacting with.
Although AI produces functional applications on classical computers, it is limited by their computational capabilities, and that’s a problem if you want to create sophisticated AIs that can care and empathise. This is where quantum computers, which for some problems are hundreds of millions of times more powerful than today’s classical machines, come in: they could provide a computational boost to AI and let it tackle far more complex problems.
Now, according to a report by Sifted, startup Cambridge Quantum Computing (CQC) has built “Meaning-aware” natural language processing on a quantum computer — a system that understands both the grammatical structure and the meaning of words, in a way that classical computers cannot.
Founder and CEO Ilyas Khan said classical computers simply don’t have enough processing power to understand the rules of grammar and recognize word patterns, and given the complexity of human language he has a point.
“This [AI model] is quantum native, it cannot be done with a classical computer with a reasonable amount of resources,” Khan said, adding that NLP is generally done by recognizing patterns in a “bag of words”. Even OpenAI’s insanely powerful GPT-3 language model, he said, which can produce very human-sounding text, is based on modelling the relationships between words, like a very, very sophisticated autocorrect.
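The “bag of words” approach Khan is contrasting against is easy to sketch: a sentence is reduced to word counts, so all grammatical structure is thrown away. A minimal illustration in plain Python (the example sentences are made up for this sketch):

```python
from collections import Counter

def bag_of_words(sentence: str) -> Counter:
    """Count word occurrences, discarding word order and grammar entirely."""
    return Counter(sentence.lower().split())

# Word order is lost: these two sentences get identical representations,
# even though they mean opposite things.
a = bag_of_words("the dog bit the man")
b = bag_of_words("the man bit the dog")
print(a == b)  # → True: the model cannot tell them apart
```

This is exactly the limitation a grammar-aware model aims to remove: who did what to whom is invisible to a pure word-count representation.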
The exponentially larger quantum state space means NLP on a quantum computer can encode complex linguistic structures and novel models of meaning directly in quantum circuits. Khan also said CQC is in early talks with a medical diagnostics company, exploring whether its Quantum Language Processing (QLP) could be used to speed up diagnoses. For example, it may be possible to combine an X-ray with a radiologist’s verbal description of what it shows to very rapidly identify an illness.
“Limited uses could begin within three years,” Khan added. “For more widespread use, like talking to Alexa or Siri, we will have to wait for an increase in the capacity of quantum computers.”
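CQC’s published QNLP research builds on a compositional model of meaning (the “DisCoCat” framework), in which word meanings are tensors and the grammar of a sentence dictates how they are contracted together. A purely classical, two-dimensional toy version, with entirely made-up numbers and no quantum hardware involved, hints at why word order now matters:

```python
import numpy as np

# Toy meaning space: 2-dimensional noun vectors (illustrative values only).
dog = np.array([1.0, 0.0])
man = np.array([0.0, 1.0])

# A transitive verb is a rank-2 tensor (a matrix): grammar says it takes a
# subject on one side and an object on the other.
bites = np.array([[0.0, 1.0],
                  [0.2, 0.0]])

def sentence_meaning(subj, verb, obj):
    """Compose subject-verb-object by tensor contraction: subj · V · obj."""
    return subj @ verb @ obj

s1 = sentence_meaning(dog, bites, man)  # "dog bites man"
s2 = sentence_meaning(man, bites, dog)  # "man bites dog"
print(s1, s2)  # the two sentences now get different meanings
```

In the quantum-native version, those tensor contractions are compiled into quantum circuits, where the exponentially large state space gives room to represent far richer structures than this two-dimensional sketch.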
That said, a quantum computer could give AI-based digital assistants true contextual and empathetic awareness, and the ability to fully understand their interactions with customers, since quantum hardware has the potential to sort through a vast number of possibilities in a fraction of a second to come up with a probable solution.
Either way, it’s still early days for QLP, and while it’s likely to be a game changer it will undoubtedly be some years yet before we hear much more about advancements in the space, so stay tuned.