
Keeping inner thoughts ‘private’

Scientists were able to decode inner speech separately from attempted speech in patients with speech disabilities through brain-computer interface readings.


Last month, a spark was ignited within the scientific community as a team of researchers demonstrated a scientific method for decoding one’s inner thoughts. The team implanted microelectrode arrays in the brain tissue of four BrainGate2 patients who had lost their ability to speak due to amyotrophic lateral sclerosis. Each patient was then asked to perform various tasks, including responding to spontaneous questions, counting shapes and reading sentences, while the researchers collected electrical data from their brains to analyze their thoughts. This electrical data was then used to build an alternative communication channel.

Studies on building a bridge between the brain and a computer date back to the ’70s, when Jacques J. Vidal built the first brain-computer interface. A brain-computer interface is a collection of metallic electrodes placed in or on top of the brain tissue, either invasively through surgery or non-invasively by attaching them to the scalp, to observe and record the brain’s electrical activity. Because different brain regions are associated with different tasks, scientists can use machine learning algorithms to make sense of the electrical data collected from these devices and form large datasets that improve the accuracy of these readings.
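In very simplified form, this kind of decoding amounts to training a classifier over per-channel activity features. The sketch below is an illustrative assumption, not the study's actual pipeline: the channel count, the simulated "rest" and "speech" states, and the nearest-centroid decoder are all invented for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: each recording window is reduced to a feature
# vector of per-channel firing rates. We simulate two "brain states"
# with different mean activity levels.
n_channels = 64
centers = {"rest": rng.normal(0.0, 1.0, n_channels),
           "speech": rng.normal(2.0, 1.0, n_channels)}

def simulate_window(label):
    """Synthetic firing-rate features for one recording window."""
    return centers[label] + rng.normal(0.0, 0.5, n_channels)

# A minimal nearest-centroid decoder: average 20 training windows per
# state, then classify a new window by its closest training centroid.
train = {lbl: np.mean([simulate_window(lbl) for _ in range(20)], axis=0)
         for lbl in centers}

def decode(window):
    return min(train, key=lambda lbl: np.linalg.norm(window - train[lbl]))

preds = [decode(simulate_window("speech")) for _ in range(10)]
print(preds)  # nearly all windows decode as "speech"
```

Real systems replace the synthetic features with spike-band power or firing rates from implanted arrays and the centroid rule with far more capable models, but the structure — features in, predicted state out — is the same.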

Speech decoding and brain-computer interface studies offer new information that could deepen our understanding of these disorders and help develop assistive technology. Speech disorders often stem from muscle disease or brain injury: either the brain’s activation signals cannot be effectively transmitted through the nerves, or the muscles are unable to respond to them.

Scientists in this study targeted the motor cortex — the area of the brain responsible for most muscle and speech tasks — to detect phonemes, the individual speech sounds a person produces when speaking, and computationally classify them into words. The study discovered that the phonemic signals gathered from the precentral gyrus of the motor cortex reflect not only attempted speech but also inner speech, albeit with a lower signal intensity.
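As a toy illustration of the phoneme-to-word step, a decoder can score a decoded phoneme sequence against a small vocabulary of phoneme spellings and pick the closest match. The vocabulary, phoneme labels, and edit-distance matching below are illustrative assumptions, not the study's method:

```python
# Hypothetical mini-vocabulary mapping words to phoneme spellings.
VOCAB = {
    "hello": ["HH", "AH", "L", "OW"],
    "right": ["R", "AY", "T"],
    "left":  ["L", "EH", "F", "T"],
}

def edit_distance(a, b):
    """Classic dynamic-programming edit distance between two sequences."""
    dp = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, y in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1,
                                     prev + (x != y))
    return dp[-1]

def phonemes_to_word(decoded):
    """Pick the vocabulary word whose phoneme spelling best matches."""
    return min(VOCAB, key=lambda w: edit_distance(decoded, VOCAB[w]))

print(phonemes_to_word(["R", "AY", "P"]))  # a noisy decoding of "right"
```

This tolerance for noisy phoneme predictions matters because, as the article notes, inner speech produces weaker signals than attempted speech; production systems use language models rather than a fixed lookup, but the matching idea is the same.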

At first, the researchers were skeptical as to whether they were truly detecting inner speech — the imagination or repetition of ideas in the brain without the intent to communicate externally. To confirm their findings, the researchers conducted a variety of further experiments in which subjects were shown arrows pointing in different directions, then asked to remember the directions and draw them.

Once they could recognize a patient’s inner speech saying ‘right,’ ‘left’ and so on, the researchers also recorded muscle activity and compared the two sets of data to make sure that what they were reading was not just a muscle activation signal, but the patient silently narrating the directions. Additionally, the scientists asked the patients to memorize the arrows visually instead of verbally as ‘left, right, up, down,’ and found much higher decoding accuracy for verbal memorization than for visual.

The study found that their systems could more accurately determine uninstructed inner thoughts — like a patient naturally counting the shapes without any further guidance — when compared to instructed thoughts (i.e., asking a patient to think of something or clear their mind).

From this, the researchers posit that spontaneously thinking of answers to specific questions produces a far more idiosyncratic jumble of ideas that cannot yet be accurately decoded. Additionally, inner speech was not accurately detectable for some patients, which raises the question of whether every human has a linguistic thinking process or whether other modalities of thinking exist, such as visually based thinking.

Following these findings, the researchers began to consider the privacy implications of their work and started developing ways to protect the privacy of one’s inner mind. Having shown that no additional hardware is needed to detect someone’s inner thoughts, they developed two ideas for preventing the brain-computer interface from mistakenly outputting a patient’s inner dialogue by confusing it with attempted speech.

First, they built a ‘motor-intent’ dimension into their decoding algorithm to screen out inner thought. Second, they locked inner speech decoding behind a password that is unlikely to be imagined by accident, so that the algorithm only starts decoding once the patient unlocks the feature. For this second approach, the scientists used the password “chittychittybangbang” and reached 98.75% accuracy in detecting the keyword in one patient’s trials.
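The password-gating idea can be sketched as a decoder that suppresses all output until the unlock keyword is decoded. The `GatedDecoder` class below is an illustrative assumption, not the study's implementation, though the keyword itself matches the one reported:

```python
# The unlock keyword reported in the study; everything else here is
# an illustrative sketch of the gating logic.
PASSWORD = "chittychittybangbang"

class GatedDecoder:
    def __init__(self, password):
        self.password = password
        self.unlocked = False

    def feed(self, decoded_word):
        """Return the decoded word only once the password has been seen."""
        if not self.unlocked:
            if decoded_word == self.password:
                self.unlocked = True
            return None  # inner speech stays private until unlocked
        return decoded_word

dec = GatedDecoder(PASSWORD)
out = [dec.feed(w) for w in ["hello", PASSWORD, "water", "please"]]
print(out)  # [None, None, 'water', 'please']
```

The design choice is that the gate defaults to silence: decoded inner speech before the keyword is discarded rather than buffered, so nothing a patient thinks prior to deliberately unlocking the system can ever be emitted.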

Following the release of this study, the public has been divided in its reactions — with some people expressing excitement and others expressing fear. Some people are concerned that this study represents the opening of a Pandora’s box in terms of the technology’s potential.

However, many study participants with speech disabilities expressed a contrasting perspective on the matter: excitement, since this development could lead to assistive communication devices built on the brain-computer interface. Some noted that this type of technology could make communication faster and more comfortable for them, since attempting to speak is far more difficult for them than imagining speech. It is also promising that this technology can outline the human brain’s thinking process, which could one day inform computer architectures that function as fast and capably as the brain.

It should be noted, nonetheless, that this study was tested on only four native English speakers. No study has yet shown whether these algorithms can succeed for other languages or even for other patients, as the training datasets for these experiments are limited in both the number of patients and the languages covered. And while this paper addresses neuroprivacy through technology, legal questions remain as to whether new neuroprivacy laws should be enacted.

This study will likely influence future research on unconscious human thinking and thinking processes, where scientists can also study the visual or sensory representations of thought in the brain. Hopefully, these advancements will become accessible and mobile for those struggling with communication disorders.