Scientists train AI to turn brain signals into speech

Neuroengineers have crafted a breakthrough system that uses deep neural networks to read brain activity and translate it into speech.

An article published Tuesday in the journal Scientific Reports details how the team at Columbia University’s Zuckerman Mind Brain Behavior Institute used deep-learning algorithms and the same type of technology that powers devices like Apple’s Siri and the Amazon Echo to turn thought into “accurate and intelligible reconstructed speech.” The research was reported earlier this month, but the journal article goes into far greater depth.

When a person listens to speech, or even imagines speaking, telltale patterns of activity appear in their brain. If scientists can decode those signals and understand how they relate to forming or hearing words, they get one step closer to translating them into speech.

That’s what the team has managed to do, creating a “vocoder” that uses algorithms and neural networks to turn those signals into speech.
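The article doesn’t describe the model itself, but the core idea is a learned mapping from recorded brain activity to the parameters a vocoder needs to synthesize audio. Below is a minimal sketch of that idea in PyTorch; the electrode count, layer sizes, and number of vocoder parameters are invented for illustration and are not the paper’s actual architecture.

```python
import torch
import torch.nn as nn

# Hypothetical dimensions: neither the electrode count nor the vocoder
# parameter size comes from the article; they are illustrative only.
N_ELECTRODES = 128      # channels of recorded auditory-cortex activity
N_VOCODER_PARAMS = 32   # parameters the vocoder needs per audio frame

class BrainToVocoder(nn.Module):
    """Toy regression network: one frame of neural features in,
    one frame of vocoder parameters out."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_ELECTRODES, 256),
            nn.ReLU(),
            nn.Linear(256, N_VOCODER_PARAMS),
        )

    def forward(self, x):
        return self.net(x)

model = BrainToVocoder()
frames = torch.randn(100, N_ELECTRODES)  # stand-in for recorded signals
params = model(frames)                   # predicted vocoder parameters
print(params.shape)                      # torch.Size([100, 32])
```

A real vocoder would then turn each frame of predicted parameters into a slice of audible waveform; here the output is left as raw numbers.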

To train the system, the researchers recorded activity from the auditory cortex of epilepsy patients undergoing brain surgery as they listened to spoken sentences. Next, the patients listened to speakers counting from zero to nine, while their brain signals were fed back into the vocoder.
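The article doesn’t spell out the training procedure, but a setup like this is typically supervised: each frame of recorded brain activity is paired with the vocoder parameters of the audio the patient was hearing at that moment, and the network is trained to minimize the regression error. The sketch below shows that pattern with random stand-in data; the shapes, optimizer, and mean-squared-error loss are assumptions, not details from the paper.

```python
import torch
import torch.nn as nn

# Illustrative shapes only: 128 recording channels in, 32 vocoder
# parameters per frame out. None of these numbers come from the article.
model = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 32))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Stand-in data: brain activity paired with the vocoder parameters of
# the speech the patient heard at the same instant.
brain_frames = torch.randn(500, 128)
target_params = torch.randn(500, 32)

for epoch in range(10):
    optimizer.zero_grad()
    loss = loss_fn(model(brain_frames), target_params)
    loss.backward()
    optimizer.step()

print(f"final training loss: {loss.item():.4f}")
```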

Looking forward, the team wants to test what kind of signals are emitted when a person just imagines speaking, as opposed to listening to speech.

Improving the algorithms with more data could eventually lead to a brain implant that bypasses speech altogether, turning a person’s thoughts into words.
