
This AI reads your mind to recreate the faces you’re thinking of


Being able to read someone’s mind leads down two roads: it can be used as part of a dystopian state apparatus, or it can help catch criminals and give people who can’t otherwise communicate a way to express themselves.


It’s frustrating to have a clear mental image of something but be unable to get it across in words or a drawing, and if you’ve ever tried matching the image in your head to the results of a Google search then, aside from lots of other exciting and revolutionary possibilities, this might be the tech for you. Now a team of neuroscientists from the University of Toronto Scarborough (UTS) is coming to your aid – they’ve developed a way to digitally recreate the exact image someone is thinking about by scanning their brain.




So-called Artificial Intelligence (AI) “mind reading” technology is getting eerily accurate. Along with allowing people to control prosthetics with their thoughts, these systems have quickly advanced from picking out what letter you’re thinking of to decoding, and visualising on screen, more complex thoughts such as dreams, static images, streaming movies and even sentences. In fact, it’s all happening so fast that some researchers have proposed new human rights governing how the brain can be read or manipulated.


Watch it in action


In this case the UTS team’s study was designed to see whether specific images could be plucked out of a person’s mind. To test the idea they hooked people up to Electroencephalography (EEG) equipment and showed them pictures of faces on a computer screen. The EEG system recorded their brain waves, and after running the data through machine learning algorithms the team’s new AI system was able to digitally recreate the face each test subject had just seen.

“When we see something, our brain creates a mental percept, which is essentially a mental impression of that thing,” says Dan Nemrodov, co-author of the study. “We were able to capture this percept using EEG to get a direct illustration of what’s happening in the brain during this process.”
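The pipeline described above – EEG features in, a reconstructed face out – can be sketched very loosely in code. The study’s actual features and model are not detailed here, so everything below is a hypothetical stand-in: synthetic data, a made-up “face space”, and a simple ridge regression in place of the team’s algorithms.

```python
# Hypothetical sketch of an EEG-to-image pipeline: brain-wave features
# recorded while viewing faces are mapped, via a learned linear model,
# back onto a low-dimensional face representation. All data are synthetic.
import numpy as np

rng = np.random.default_rng(0)

n_trials, n_eeg_features, n_face_dims = 200, 64, 8

# Ground-truth face representations (e.g. coordinates in a "face space")
faces = rng.normal(size=(n_trials, n_face_dims))

# Simulated EEG responses: a noisy linear transform of the face percept
mixing = rng.normal(size=(n_face_dims, n_eeg_features))
eeg = faces @ mixing + 0.1 * rng.normal(size=(n_trials, n_eeg_features))

# Ridge regression (closed form) maps EEG features back to face space
lam = 1.0
W = np.linalg.solve(eeg.T @ eeg + lam * np.eye(n_eeg_features), eeg.T @ faces)

reconstructed = eeg @ W

# With this little noise, reconstructions track the originals closely
corr = np.corrcoef(reconstructed.ravel(), faces.ravel())[0, 1]
print(f"reconstruction correlation: {corr:.2f}")
```

The key idea the sketch preserves is that the decoder never sees the image itself, only the brain’s response to it – the picture is rebuilt entirely from the recorded signal.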




AI brain-reading experiments generally rely on one of two methods: either EEG, as in this case, or functional Magnetic Resonance Imaging (fMRI). The former measures the brain’s electrical activity using a skull cap full of electrodes, while fMRI uses a magnetic field to monitor minute changes in blood flow in different parts of the brain as people think. Both have their advantages and disadvantages, but EEG, which has also been used before to help people communicate with each other telepathically, is more commonly used, less expensive, and can record changes faster.

“fMRI captures activity at the time scale of seconds, but EEG captures activity at the millisecond scale,” says Nemrodov. “So we can see with very fine detail how the percept of a face develops in our brain using EEG.”

That high temporal accuracy allowed the team to determine that it takes only about 170 milliseconds for the human brain to form a decent mental picture of a face it’s looking at. In the future the team wants to expand the technique to recreate objects other than faces, and to do so over longer periods of time, allowing virtual reconstruction of images a person remembers seeing more than a few seconds ago.
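The scale difference is easy to put in numbers. Assuming a common 512 Hz EEG sampling rate and a typical 2-second fMRI repetition time (neither figure is stated in the study; both are illustrative), the arithmetic looks like this:

```python
# Rough arithmetic on temporal resolution. The 170 ms figure comes from
# the study; the sampling rates below are assumed typical values.
eeg_rate_hz = 512          # assumed EEG sampling rate
fmri_tr_s = 2.0            # assumed fMRI repetition time (one volume per 2 s)
percept_window_s = 0.170   # ~170 ms for a face percept to form

eeg_samples = int(eeg_rate_hz * percept_window_s)
fmri_fraction = percept_window_s / fmri_tr_s

print(f"EEG samples in the 170 ms window: {eeg_samples}")    # 87
print(f"fraction of one fMRI volume: {fmri_fraction:.3f}")   # 0.085
```

In other words, under these assumptions EEG captures dozens of snapshots while the percept forms, whereas the whole event is over long before fMRI records even a single volume.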




“Our new research could provide a means of communication for people who are unable to verbally communicate,” says Adrian Nestor, co-author of the study. “Not only could it produce a neural-based reconstruction of what a person is thinking about, but also of what they remember and imagine, or what they want to express. It could also have forensic uses for law enforcement, gathering eyewitness information on potential suspects rather than relying on verbal descriptions provided to a sketch artist.”

The research was published in the journal eNeuro, and the team demonstrates the technique in the video above.
