Edward Chang can’t read your thoughts. Whenever a new piece of work is released by the neuroscientist’s laboratory at the University of California, San Francisco, the same refrain follows: that he has developed “mind-reading technology”, or that he can “read your thoughts”.
He’s not alone: the same claim is made about much of the work on brain-computer interfaces and speech processing.
And no wonder: Elon Musk’s Neuralink says it will soon allow “consensual telepathy”, and Facebook – one of Chang’s lab’s funders – says it wants to let people type messages just by thinking the words, rather than tapping them out on a screen, one vision of a brain-computer interface (BCI). Yet Chang is not attempting to read minds; he is decoding speech in people who otherwise cannot speak. “We’re not really talking about reading someone’s thoughts,” Chang says.
“Every paper or project we’ve done has been focusing on understanding the basic science of how the brain controls our ability to speak and understand speech. But not what we’re thinking, not inner thoughts.”
Such research would raise serious ethical questions, but it simply isn’t possible right now – and may never be.
Even decoding speech is difficult. His recent paper, published in Nature last year, converted the brain signals generated during speech into words and sentences read aloud by a synthesiser; the aim is to help people with conditions such as amyotrophic lateral sclerosis (ALS), a progressive neurodegenerative disease that affects nerve cells in the brain and spinal cord.
“The paper describes the ability to take brain activity in people who are speaking normally and use that to create speech synthesis – it’s not reading someone’s thoughts,” he says. “It’s just reading the signals involved in speaking.”
The system has worked – to a degree. Patients with electrodes implanted in their brains listened to a question and replied aloud. By looking at activity in the motor cortex as the brain prepared to move the mouth and tongue, Chang’s system could reliably discern what they heard 76 percent of the time and what they said 61 percent of the time. There are caveats, though. Potential answers were restricted to a set list, which made the algorithm’s job a little easier. Moreover, the participants were in hospital for epilepsy monitoring, and they could speak normally; it’s not clear whether the results translate to someone who can’t speak at all.
“Our goal is to translate this technology to people who are paralysed,” he says. “The big challenge is understanding somebody who’s not speaking. How do you train an algorithm to do that?”
Building a model for somebody who can read sentences aloud is one thing: you record their brain signals while they speak. But how do you do that for someone who cannot speak?
Chang’s laboratory is now in the middle of a clinical trial aimed at that “formidable challenge”, but it is still unclear whether the brain’s speech signals change in people who can no longer speak, or which specific brain regions need to be targeted.
“There are these fairly substantial issues that we have to address in terms of our scientific knowledge,” he says.
Decoding these signals is difficult in part because we understand so little of how our own brains work. So while computers can fairly quickly be trained to move a cursor left or right, speech is harder. “The main challenges are the huge vocabulary that characterises this task, the need for very good signal quality – achieved only by very invasive technologies – and the lack of understanding of how speech is encoded in the brain,” says David Valeriani of Harvard Medical School. “This latter aspect is a challenge across many BCI fields. We need to know how the brain works before being able to use it to control other technologies, such as a BCI.”
And we simply don’t have enough data, says UMC Utrecht assistant professor Mariska van Steensel. Implanting brain electrodes is complicated and rarely done; Chang worked with people with epilepsy, who already had electrodes in place to monitor their seizures. Stuck in hospital waiting for a seizure to occur, they could take part in his study in the meantime.
“On these types of topics, the number of patients that are going to be implanted will stay low, because it is very difficult research and very time consuming,” she says, noting that fewer than 30 people have been implanted with a BCI worldwide; her own work is based on two implants. “That is one of the reasons why progress is relatively slow,” she adds, suggesting that a shared database of work could help researchers pool information.
There’s another reason this is hard: our brains don’t all react the same way. Van Steensel has implanted a device in two patients, letting them operate a kind of joystick via brain signals when they imagine making a movement. That worked beautifully in the first patient, who has ALS. But it didn’t in the second, a person who had suffered a brain-stem stroke. “Her signals were different and less optimal for this to be reliable,” she says. “Even a single mouse click to get reliable in all situations… is already difficult.”
This research is different from that of startups like NextMind and CTRL-Labs, which use wearable, non-invasive devices to interpret brain signals but lack an implant’s precision. “If you stand outside a concert hall, you will hear a very distorted version of what’s playing inside – this is one of the problems of non-invasive BCIs,” says Ana Matran-Fernandez, artificial intelligence industry fellow at the University of Essex. “You will get an idea of the general tempo… of the piece that’s being played, but you can’t pinpoint specifically each of the instruments being played. This is the same with a BCI. At best, we will know which areas of the brain are the most active – playing louder, if you will – but we won’t know why, and we don’t necessarily know what that means for a specific person.”
That’s not to say the efforts of tech companies like Neuralink and Facebook are misguided, Chang notes; they are simply tackling different problems. Those ventures are focused on building implant or headset hardware, not on the fundamental research required to make so-called mind-reading practical.
“I think it’s important to have all of these things happening,” he says. “My caveat is that’s not the only part of making these things work. There’s still fundamental knowledge of the brain that we need to have before any of this will work.”
Even then, we would only be decoding speech, not inner thoughts. “Even if we were perfectly able to distinguish words someone tries to say from brain signals, this is not even close to mind reading or thought reading,” van Steensel says. “We’re only looking at the areas that are relevant for the motor aspects of speech production. We’re not looking at thoughts — I don’t even think that’s possible.”