WHY THIS MATTERS IN BRIEF
Dealing with today’s technology can be cumbersome, which is why researchers think things would be better if that technology could simply read your mind.
Baxter is but a child, a robot child with bright eyes and a subtle grin. It sits at a table and cautiously lifts a can of spray paint, then dangles it over a box marked “WIRE.” But the error seems to smack Baxter across the face: its eyebrows furrow, a blush appears on its cheeks, and then it swings its arm to an adjacent box marked “PAINT” and drops in the can with a clunk and that spray-paint rattle.
“Good,” says a voice off screen, as Baxter’s face reverts to a grin.
Baxter is an industrial robot at MIT with hulking arms meant for lifting much larger things than cans, and its decisions are not entirely its own. They belong, in part, to a human sitting across the table – a woman with electrodes strapped to her head.
The rig the woman is wearing detects a particular spike in her brain’s electrical activity every time she sees Baxter make a mistake, and that’s the team’s breakthrough: in real time she can telepathically scold Baxter for choosing the wrong box, and the robot automatically corrects its mistake.
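To make that loop concrete, here’s a toy sketch of the detect-and-correct cycle in Python. Everything in it – the stub robot, the stub EEG rig, the names – is hypothetical and meant only to illustrate the flow, not to reproduce the MIT team’s code.

```python
import random

class StubEEG:
    """Stand-in for the EEG rig: flags a spike whenever the observer
    sees the robot reach for the wrong box."""
    def saw_error_spike(self, robot_choice, correct_target):
        return robot_choice != correct_target

class StubRobot:
    """Stand-in for Baxter: just remembers and announces its choice."""
    def __init__(self):
        self.choice = None

    def reach_for(self, target):
        self.choice = target
        print(f"Baxter reaches for the {target} box")

def sorting_trial(robot, eeg, targets, correct_target):
    robot.reach_for(random.choice(targets))            # robot commits to a bin
    if eeg.saw_error_spike(robot.choice, correct_target):
        other = next(t for t in targets if t != robot.choice)
        robot.reach_for(other)                         # corrected in real time

sorting_trial(StubRobot(), StubEEG(), ["WIRE", "PAINT"], "PAINT")
```

The point of the sketch is the shape of the loop: the robot acts first, the human’s brain supplies a single error/no-error verdict, and the correction happens within the same trial.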
The researchers behind Baxter and his master’s telepathic abilities didn’t set out to embarrass an innocent machine, but to push the boundaries of what’s possible in human-robot interaction, as they detailed in their paper. And in a world where we’re going to be sharing more and more space with robots – from sidewalks full of pizza delivery robots, to the robots flying your plane and picking your crops – it’s not lost on designers and inventors that at some point we’ll need a new way to interact and “socialise,” if that’s the right word, with them.
Today, communicating with machines mostly means typing or talking to them, which creates lag time; letting Baxter read your mind takes just milliseconds. And this isn’t the first time researchers have created a telepathic link with a robot: another team elsewhere has shown off a human-robot telepathic interface that they used to teach robots new tricks in real time – a technology that, as we also see the emergence of robots with shared hive minds, will let us re-train entire fleets of robots in milliseconds, not the months and years it takes today.
“[Telepathy] is a new way of controlling the robot that I actually like to think of as being natural, in the sense that we aim to have the robot adapt to what the human would like to do,” says MIT roboticist Daniela Rus, a co-author on the study, “namely, don’t put the paint in the wrong box, dummy.”
The underlying technology is shiny, new, and complex, but the idea itself is straightforward enough. When you notice a mistake, your brain emits a faint signal known in neuroscience as an “error-related potential.” That signal is buried among all the other electrical chaos coursing through your brain that an EEG picks up, so MIT’s machine learning algorithms sniff out the right one. When Baxter is about to make a mistake, the system translates the error-related potentials in the woman’s brain into code a robot understands.
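As a rough illustration of that pipeline, the sketch below scans a window of (simulated) EEG samples for an outsized deflection and, if it finds one, flips the robot’s target. The real system uses trained machine-learning classifiers; the crude z-score threshold here, along with the sampling rate and window length, is a hypothetical stand-in.

```python
import numpy as np

FS = 256         # assumed EEG sampling rate, in Hz (illustrative)
WINDOW_MS = 500  # error-related potentials appear within ~half a second

def errp_detected(window, threshold=4.0):
    """Crude stand-in for an ErrP classifier: flag a window whose peak
    deviates strongly from the baseline noise (in standard deviations)."""
    z = (window - window.mean()) / window.std()
    return bool(np.abs(z).max() > threshold)

def corrected_choice(choice, targets, window):
    """If the observer's EEG flags an error, switch to the other target."""
    if errp_detected(window):
        return next(t for t in targets if t != choice)
    return choice

# Simulated trial: background EEG noise with an injected "oops" deflection.
rng = np.random.default_rng(0)
eeg = rng.normal(0.0, 1.0, int(FS * WINDOW_MS / 1000))
eeg[64] += 8.0

print(corrected_choice("WIRE", ["WIRE", "PAINT"], eeg))  # prints: PAINT
```

Swap the threshold for a trained classifier and the simulated array for a live EEG stream, and you get roughly the shape of the translation step described above.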
In a sense, the human and machine are communicating at the most basic of levels – not speech, but the electrical signals that precede speech.
“The paper shows an interesting capability in terms of doing this in real time,” says Carnegie Mellon roboticist Aaron Steinfeld. The researchers’ machine learning algorithms are so powerful that they can sort the error-related potentials from the other electrical noise and immediately create something the robot can comprehend.
Now, you may have heard recently that a robot will one day steal your job. I can’t guarantee that’s untrue, but a world is coming in which robots work alongside humans. Imagine a robot assistant helping you assemble Ikea furniture.
“The robot could actually be passing the human different pieces of the chair,” says roboticist and study co-author Stephanie Gil of MIT. “So maybe a chair leg or an arm rest. And the human is actually using his hands to put these different pieces together.”
But you shouldn’t have to constantly bark orders at your assistant, right?
“We don’t want to have to explicitly use verbal cues or a push of a button, something that’s very unnatural for the human to communicate with the robot,” Gil adds. “We want this to be very natural and almost seamless.” And nothing is more seamless than a robot reading your mind.
This technology operates as a binary at the moment – Baxter only knows whether it’s doing something wrong or not – but you can expect the range of communications to diversify as the technology matures. Detecting emotions, for example.
“We’re also very interested in the potential for using this idea in driving,” adds Rus, “where you have passengers in a self-driving car and the passengers’ fears or brain signals can be monitored by the car’s control system and it can adjust its own behaviour accordingly.”
Backseat drivers, rejoice.