Researchers Are Able to Translate Brain Signals into Speech with the Help of Brain Implants and AI

Nov.30.2018

Author: Justin Brunnette

Category: IT News

The brain-computer interface (BCI) industry has progressed slowly, revealing the sheer difficulty of translating neural signals into something computers can understand. The need for such technology has been emphasized by SpaceX founder Elon Musk as a way of augmenting human intelligence to keep pace with artificial intelligence.

 

The groundwork for the technology has now been laid, as scientists at Columbia University have successfully used AI to identify brain signals, picked up from implants, that correspond to words we think in our heads.

 

The proof of concept was conducted by a team at Columbia University led by electrical engineer Nima Mesgarani. Five epilepsy patients volunteered and first had electrodes surgically implanted directly into the brain. The electrodes were arranged in a grid format, a technique called electrocorticography, and were placed over two auditory centers of the brain: Heschl's gyrus and the superior temporal gyrus. Both regions are responsible for processing features of speech such as intonation, volume, frequency, and phonemes.

 

The patients listened to people counting digits such as "one", "two", and "three", as well as a 30-minute story read by others. This allowed the researchers to record the neural activity produced as the brain recognized speech.

 

A key thing to note is that the human brain shows nearly identical activity when imagining something and when actually experiencing it. When we imagine an image of an apple, the same regions fire with the same signal activity as when we actually see an apple.

 

This means the brain reacts the same way to imagined, or "covert," speech as it does to actual speaking. When we imagine saying a word such as "elephant", the brain simulates the emphasis on the "ph" sound, the enunciation of the "t" sound, and so on. These features of producing sound are reflected in the neural patterns.

 

The experimenters then employed a deep neural network. The process starts with the implants picking up brain activity as electrical signals, which the neural network interprets to infer the intended sounds. Those inferred features are then sent to a vocoder, a device that produces sound from features in the signal such as frequency and intonation.
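To make that flow concrete, here is a minimal sketch in Python of the pipeline described above: electrode recordings go into a neural network that predicts acoustic features, and a vocoder turns those features into a waveform. This is not the team's actual code; the network layout, array shapes, and the toy synthesize function are all illustrative assumptions.

# Sketch of the decoding pipeline: electrodes -> neural network -> vocoder -> audio.
# All shapes, layer sizes, and the vocoder stand-in are assumptions for illustration.
import numpy as np
import torch
import torch.nn as nn

class SignalToVocoderNet(nn.Module):
    """Maps a window of electrode samples to a set of vocoder parameters."""
    def __init__(self, n_electrodes=128, window=40, n_vocoder_params=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(n_electrodes * window, 512),
            nn.ReLU(),
            nn.Linear(512, n_vocoder_params),
        )

    def forward(self, x):  # x: (batch, n_electrodes, window)
        return self.net(x)

def synthesize(vocoder_params: np.ndarray) -> np.ndarray:
    """Toy stand-in for a vocoder: turn predicted parameters (e.g. frequency,
    intonation) into an audio waveform. A real system would use a proper
    speech vocoder here."""
    sample_rate, frame_len = 16000, 160
    audio = []
    for frame in vocoder_params:
        freq = 100 + 200 * float(frame[0])        # assumed mapping to pitch
        t = np.arange(frame_len) / sample_rate
        audio.append(np.sin(2 * np.pi * freq * t))
    return np.concatenate(audio)

model = SignalToVocoderNet()
recording = torch.randn(1, 128, 40)               # stand-in for implant data
params = model(recording).detach().numpy()        # inferred acoustic features
waveform = synthesize(params)                     # audio played back to listeners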

 

The goal of the experiment was to see whether the sounds coming out of the vocoder resembled the words that had actually been spoken. The result was a surprising success: about 75% of the reconstructed sounds were intelligible to human listeners. The research team also found that averaging the signals from multiple readings increased the accuracy of the reconstruction.
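The averaging idea can be pictured with a short sketch: repeat the same stimulus several times and average the recorded responses so that trial-to-trial noise tends to cancel out. The array shapes and function names below are assumptions for illustration, not taken from the study.

# Averaging repeated recordings of the same stimulus to suppress noise.
import numpy as np

def average_repetitions(recordings):
    """Average several (n_electrodes, n_samples) recordings of the same stimulus."""
    stacked = np.stack(recordings, axis=0)   # (n_trials, n_electrodes, n_samples)
    return stacked.mean(axis=0)

# Example: four noisy repetitions of the same underlying response (1 electrode).
rng = np.random.default_rng(0)
true_response = np.sin(np.linspace(0, 4 * np.pi, 1000))[None, :]
trials = [true_response + 0.5 * rng.standard_normal(true_response.shape)
          for _ in range(4)]
averaged = average_repetitions(trials)
# The averaged trace is noticeably closer to true_response than any single trial.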

 

One potential application of this work is to help people with speech disabilities speak fluently. More broadly, as computers become better at understanding the language of human brain activity, brain-computer interface technology comes closer and closer to reality. Just think: with this, humans could interact directly with computers, or have their cognition assisted by AI.

Original Article:
https://www.scientificamerican.com/article/with-brain-implants-scientists-aim-to-translate-thoughts-into-speech/