DARPA human telepathy project will let people transmit images into other people’s brains

WHY THIS MATTERS IN BRIEF

Projecting images from one person's brain into another's is becoming increasingly feasible, but for now it remains the stuff of science fiction.


There’s a growing theme of researchers developing both invasive and non-invasive Brain-Machine Interfaces (BMIs) that will not only let us talk and play games telepathically, and even control F-35 fighter jets, but also connect us to the machines and AIs in the cloud. Now a Rice University-led team of neuro-engineers is embarking on an ambitious four-year project to develop headset technology that can directly link the human brain and machines without the need for surgery. As a first proof of concept the team plans to transmit visual images perceived by one individual into the minds of blind patients – a thought that is mind blowing in itself.


“In four years we hope to demonstrate direct, brain-to-brain communication at the speed of thought and without brain surgery,” said Rice’s Jacob Robinson, the lead investigator on the $18 million project, which was announced today as part of DARPA’s Next-Generation Nonsurgical Neurotechnology (N3) program, which I’ve talked about before and which could also one day take us into the field of telepathic warfare.

Sharing visual images between two brains may sound like science fiction, but Robinson said a number of recent technological breakthroughs make the idea feasible. Just how feasible is the question DARPA hopes to address with a series of N3 awards to the Rice-led team and five others that have proposed different technological solutions for the broader challenge of connecting brains and machines.


“Speed is key,” said Robinson, an associate professor of electrical and computer engineering and of bioengineering in Rice’s Brown School of Engineering. “We have to decode neural activity in one person’s visual cortex and recreate it in another person’s mind in less than one-twentieth of a second. The technology to do that, without surgery, doesn’t yet exist. That’s what we’ll be creating.”

Because surgery is a nonstarter, all the N3 teams plan to use some combination of light, ultrasound or electromagnetic energy to read and write brain activity. Rice’s “magnetic, optical and acoustic neural access device,” or MOANA, will test techniques that employ all three. The MOANA team includes 15 co-investigators from Rice, Baylor College of Medicine, the Jan and Dan Duncan Neurological Research Institute at Texas Children’s Hospital, Duke University, Columbia University and Yale University.


Robinson said a big differentiator between N3-funded teams is how they plan to deal with the 50-millisecond latency threshold as well as DARPA’s requirements for spatial resolution. The agency is seeking devices that can read from and write to a minimum of 16 locations in a volume of the brain about the size of a pea.
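To get a feel for how tight that constraint is, here is a back-of-the-envelope latency budget for the full read-decode-transmit-write chain. The individual stage durations below are purely hypothetical illustrations; only the 50 millisecond ceiling comes from the DARPA requirement described above.

```python
# Hypothetical end-to-end latency budget for brain-to-brain transmission.
# Stage times are illustrative assumptions, not figures from the project;
# only the 50 ms ceiling is DARPA's stated requirement.

BUDGET_MS = 50.0  # one-twentieth of a second

stages = {
    "optical read of sender's visual cortex": 15.0,
    "decode neural activity": 12.0,
    "transmit between headsets": 3.0,
    "encode and magnetically write to receiver": 18.0,
}

total = sum(stages.values())
print(f"total: {total} ms, slack: {BUDGET_MS - total} ms")
# total: 48.0 ms, slack: 2.0 ms
assert total <= BUDGET_MS, "pipeline would miss the latency threshold"
```

Even with these generous made-up numbers, only a couple of milliseconds of slack remain, which is why Robinson calls speed the key challenge.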

Robinson said MOANA’s decoding and encoding technologies will each employ viral vector gene delivery, a technology that’s in clinical trials for treating macular degeneration, as well as some cancers and neurological conditions. Genetic payloads, which differ for decoding and encoding, will be delivered with the help of ultrasound to select groups of neurons in the 16 target areas of the brain.


To “read” neural activity, the MOANA team will reprogram neurons to make synthetic proteins called “calcium-dependent indicators” that are designed to absorb light when a neuron is active, or firing.

Rice co-investigator Ashok Veeraraghavan said red and infrared wavelengths of light can penetrate the skull, and MOANA’s device will utilize this. The optical subsystem will consist of light emitters and detectors that are arrayed around the target area on a skull cap.

“Most of this light scatters off the scalp and skull, but a small fraction can make it into the brain, and this tiny fraction of photons contain information that is critical to decoding a visual perception,” said Veeraraghavan, an associate professor of electrical and computer engineering and of computer science. “Our aim is to capture and interpret the information contained in photons that pass through the skull twice, first on their way to the visual cortex and again after they are reflected back to the detector.”


MOANA’s photodetectors will be both ultrafast and ultrasensitive. The former is important for ignoring light that scatters off the skull and instead capturing only those photons that have had enough time to travel all the way to the target area of the brain and back.

“By utilizing ultrasensitive, single-photon counting detectors, the tiny signal from brain tissue can be selectively sensed,” Veeraraghavan said.
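The gating idea Veeraraghavan describes can be sketched numerically: photons that merely bounce off the scalp or skull return almost immediately, while photons that make the round trip to the visual cortex arrive measurably later, so the detector can simply discard early arrivals. The depth, refractive index, and arrival times below are illustrative assumptions, not MOANA's actual parameters.

```python
# Toy illustration of time-of-flight photon gating: keep only photons whose
# round-trip time is long enough for them to have reached the cortex and back.
# Depth and refractive index are rough assumptions for illustration.

C = 3e8          # speed of light in vacuum, m/s
N_TISSUE = 1.4   # approximate refractive index of biological tissue
DEPTH = 0.015    # assumed scalp-to-cortex depth: 1.5 cm

def min_round_trip_ps(depth_m: float) -> float:
    """Shortest time (picoseconds) a photon needs to reach depth_m and return."""
    return 2 * depth_m / (C / N_TISSUE) * 1e12

def gate_photons(arrival_times_ps, depth_m=DEPTH):
    """Discard early arrivals that can only have scattered off the scalp or skull."""
    cutoff = min_round_trip_ps(depth_m)
    return [t for t in arrival_times_ps if t >= cutoff]

# Example: three shallow (scalp/skull) photons and two deep (cortex) photons.
arrivals = [20.0, 45.0, 80.0, 150.0, 210.0]  # picoseconds
print(round(min_round_trip_ps(DEPTH)))       # 140 (ps cutoff)
print(gate_photons(arrivals))                # [150.0, 210.0]
```

The picosecond-scale cutoff is why the detectors have to be ultrafast: the difference between a useless skull reflection and a signal-carrying cortical photon is on the order of a hundred picoseconds.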

Veeraraghavan, Robinson and MOANA collaborators Kenneth Shepard and Andreas Hielscher from Columbia Engineering plan to use the detectors to develop a technology called “time-of-flight enhanced functional diffuse optical tomography,” or ToFF-DOT. Like a CT scanner, ToFF-DOT constructs a real-time 3D image of what’s inside the body, but whereas a CT scan uses X-rays, ToFF-DOT uses visible light.


Robinson said neurons in the 16 target regions of the visual cortex are expected to show up darker than normal on ToFF-DOT scans when they are firing and their calcium-dependent indicator proteins are absorbing light. Interpreting the dynamic changes from dark to light in the target areas is what MOANA will do to “read” neural activity.

Robinson said three years of work, first in cell cultures and then animals, will precede any work with human patients. But he said the MOANA team will coordinate its efforts with Baylor Department of Neurosurgery’s Daniel Yoshor and Michael Beauchamp, who are conducting clinical trials to restore sight to blind patients using an experimental prosthetic that directly stimulates the visual cortex with surgically implanted electrodes.


“There may be patients who prefer a visual prosthetic that doesn’t require brain surgery,” Robinson said. “If our work in cells and animal models goes well, MOANA could be approved for clinical tests as a nonsurgical alternative. It would require gene therapy, but not brain surgery.”

In the brain receiving an image, MOANA would “write” information to neurons that are reprogrammed to fire in response to magnetic signals. The gene therapy payload delivered to these neurons will create proteins that tether either naturally occurring or synthetic iron nanoparticles to ion channels inside the neurons. The release of calcium through these ion channels is what “fires” a neuron, causing it to actively transmit an electrical impulse.


“We plan to use magnetic fields to heat the iron, which in turn will open the channel and fire the neuron,” Robinson said. “But it’s not enough to do that every second or two. Our system must respond in milliseconds for the receiver and perceiver to experience the perception close enough in time that it seems simultaneous.”
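The heat-to-fire mechanism Robinson describes can be captured in a toy model: while the magnetic field is on, the tethered iron nanoparticles warm up; once the local temperature rise crosses a threshold, the ion channel opens and the neuron fires; when the field switches off, the tissue cools back toward baseline. Every constant below is an illustrative assumption, not a measured value from the project.

```python
# Toy model of the magnetothermal "write" path: an applied magnetic field
# heats tethered iron nanoparticles; once the local temperature rise crosses
# a threshold, the ion channel opens and the neuron fires.
# All constants are illustrative assumptions, not measured values.

THRESHOLD_K = 5.0  # assumed temperature rise (kelvin) needed to open the channel
HEAT_RATE = 2.5    # assumed heating rate while the field is on, K per ms
COOL_RATE = 1.0    # assumed passive cooling rate while the field is off, K per ms

def simulate(field_on_ms, field_off_ms, cycles):
    """Duty-cycle the field; return (time_ms, fired) samples at each field-off edge."""
    temp_rise, t, events = 0.0, 0.0, []
    for _ in range(cycles):
        # Field on: nanoparticles heat the channel toward its opening threshold.
        temp_rise += HEAT_RATE * field_on_ms
        t += field_on_ms
        events.append((t, temp_rise >= THRESHOLD_K))
        # Field off: passive cooling back toward baseline temperature.
        temp_rise = max(0.0, temp_rise - COOL_RATE * field_off_ms)
        t += field_off_ms
    return events

print(simulate(2.0, 1.0, 2))  # [(2.0, True), (5.0, True)]
```

In this toy parameterisation the neuron fires within a couple of milliseconds of the field switching on, which is the kind of responsiveness Robinson says the real system must achieve for sender and receiver to experience a near-simultaneous perception.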

Human thought involves the coordinated firing of many neurons, sometimes in different regions of the brain. Rice co-investigator Caleb Kemere said the quality of communication that can be achieved with 16 channels of information is an open question.


“We know that the circuits of the brain that are involved are very dense,” said Kemere, an associate professor of electrical and computer engineering and of bioengineering who has previously studied neural circuits using invasive technologies. “It’s possible or even likely that early 16-channel demonstrations may deliver somewhat muddied perceptions, but this is an exciting path towards a more noninvasive future. The timing, density and performance of the systems we are developing will be orders of magnitude more sophisticated than anything currently available.”
