WHY THIS MATTERS IN BRIEF
As brain-machine interfaces become more capable, and as their use cases and adoption broaden, experts worry that regulators will leave it too late to protect people from bad actors and misuse.
Ever since Tesla CEO Elon Musk announced plans, via his Neuralink venture, to develop the “neural lace,” a Brain Machine Interface (BMI) type device that forms a thin, nano-scale “lace” over a user’s cerebral cortex, and Mark Zuckerberg announced that he was starting development of his own telepathic BMI device, the technology has, unsurprisingly, been attracting significantly more attention. Musk, however, wasn’t the first to propose enhancing human capabilities with BMI devices, not by a long shot. In his case he’s trying to use them to, literally, “connect” humans with AIs. Elsewhere, DARPA, the US Department of Defense’s cutting-edge research arm, recently funded a similar mission, though in its case the goal wasn’t just to read thoughts but to upload knowledge directly to the human brain. Even healthcare companies are in on the act, using BMIs to help “locked-in” ALS patients communicate with loved ones, and much more besides.
Now, according to a group of 27 experts (neuroscientists, neurotechnologists, clinicians, ethicists and machine-intelligence engineers) calling themselves the Morningside Group, BMIs present a unique and rather disturbing conundrum in the realm of Artificial Intelligence (AI). Essentially designed to hack the brain, BMIs themselves run the risk of being hacked by AI.
“Such advances [in BMI] could revolutionise the treatment of many conditions, from brain injury and paralysis to epilepsy and schizophrenia, and transform human experience for the better,” the experts wrote recently in the journal Nature, “but the technology could also exacerbate social inequalities and offer corporations, hackers, governments or anyone else new ways to exploit and manipulate people. And it could profoundly alter some core human characteristics such as the right to a private mental life, individual agency and an understanding of individuals as entities bound by their bodies.”
The experts used the analogy of a paralysed man who participates in a BMI trial but isn’t fond of the research team working with him. An AI could then read his thoughts and misinterpret his dislike of the researchers as a command to cause them harm, even though he never gave such a command explicitly.
Explaining further, they went on to say, “Technological developments mean that we are on a path to a world in which it will be possible to decode people’s mental processes and directly manipulate the brain mechanisms underlying their intentions, emotions and decisions; where individuals can communicate with others simply by thinking; and where powerful computational systems linked directly to people’s brains facilitate their interactions with the world such that their mental and physical abilities are greatly enhanced.”
To prepare for this eventuality, the group proposes four ethical considerations that need to be addressed: Privacy and Consent, Agency and Identity, Augmentation, and Bias.
“For neurotechnologies to take off in general consumer markets, the devices would have to be non-invasive, of minimal risk, and require much less expense to deploy than current neurosurgical procedures,” they said. “Nonetheless, even now, companies that are developing devices must be held accountable for their products, and be guided by certain standards, best practices and ethical norms.”
“These become even more crucial when considering how profit hunting will often trump social responsibility when it comes to the pursuit of technology,” they said.
They also note that one of the potential uses for BMIs is in the workplace. As Luke Tang, General Manager of the AI technology accelerator TechCode, said, “I believe the biggest vertical in which this technology has a play is in the business setting, where [BMI] will help shape our future workplaces.”
Specifically, BMI technologies could improve remote collaboration, increase knowledge, and enhance communication. On the latter, Tang said, a “technology that can translate your thoughts into speech or actions will no doubt prove transformative to today’s tech-enabled communication methods. BMI technology could also lead to a faster and more accurate flow of communication.”
It’s precisely this ability to delve into a person’s thoughts that could present a challenge for BMIs as technologies like AI become significantly more advanced. If we are not to lose all the potential that BMIs can offer, it’s important to have the right considerations and regulations in place.
“The possible clinical and societal benefits of neurotechnologies are vast,” the Morningside researchers concluded, “and to reap them we must guide their development in a way that respects, protects and enables what is best in humanity.”
That said, just as we’re seeing with the rise of AI, governments, organisations and regulators are way behind the curve, whether on ethics or even standards. So it’s highly likely that over the course of the next decade, as these BMI technologies mature, we’ll find ourselves in the same situation we’re in today: crossing our fingers and hoping that, somehow, somewhere, AI doesn’t do something “stupid” or “catastrophic.” Hope, though, dear regulators, is not a strategy. It’s time to, at the very least, form a point of view.