- Posted on 18 Jun 2025
- 2 min read
AAII researchers have developed a groundbreaking AI model that decodes words and sentences from brainwaves, marking a major step forward in brain-computer interface research.
At the GrapheneX-UTS Human-centric Artificial Intelligence Centre, postdoctoral research fellow Dr. Daniel Leong, AAII PhD student Charles (Jinzhao) Zhou, and their supervisor Distinguished Professor Chin-Teng Lin have developed a pioneering AI model capable of translating brain signals into words.
The technology draws on electroencephalogram (EEG) data, traditionally used to diagnose brain conditions, and applies deep learning to decode specific words from brain activity. During testing, participants wear an EEG cap, think about each word displayed on a screen, and mouth it silently, activating the brain regions involved in speech production. The electrode cap is connected to amplifiers that read the brainwaves and feed the data into the AI model, which then interprets the intended words from the brainwave patterns.
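The article does not describe the model's architecture, so the sketch below is only a rough illustration of the kind of pipeline involved: a small convolutional classifier trained on windows of multi-channel EEG to predict a word label. The channel count, window length, vocabulary, and network shape are all illustrative assumptions, and synthetic data stands in for real amplifier output.

```python
# Illustrative sketch only: the article does not specify the model.
# Assumptions: 32 EEG channels, 2-second windows at 250 Hz, a small
# vocabulary of candidate words, and a simple CNN classifier (PyTorch).
import torch
import torch.nn as nn

N_CHANNELS = 32      # EEG electrodes (assumed)
N_SAMPLES = 500      # 2 s window at 250 Hz (assumed)
VOCAB = ["jumping", "happy", "just", "me"]  # hypothetical word set

class EEGWordClassifier(nn.Module):
    def __init__(self, n_channels: int, n_words: int):
        super().__init__()
        self.features = nn.Sequential(
            # Temporal convolution: learns frequency-like filters
            # across the raw multi-channel signal.
            nn.Conv1d(n_channels, 64, kernel_size=25, padding=12),
            nn.BatchNorm1d(64),
            nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(64, 128, kernel_size=11, padding=5),
            nn.BatchNorm1d(128),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # collapse the time axis
        )
        self.classifier = nn.Linear(128, n_words)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, samples) -> word logits
        return self.classifier(self.features(x).squeeze(-1))

model = EEGWordClassifier(N_CHANNELS, len(VOCAB))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Synthetic stand-in for amplifier output: real use would feed in
# preprocessed (filtered, artifact-rejected) EEG epochs instead.
eeg = torch.randn(8, N_CHANNELS, N_SAMPLES)
labels = torch.randint(0, len(VOCAB), (8,))

logits = model(eeg)
loss = loss_fn(logits, labels)
loss.backward()
optimizer.step()

predicted = [VOCAB[i] for i in logits.argmax(dim=1).tolist()]
print(f"loss={loss.item():.3f}, predictions={predicted}")
```

In practice such a classifier would be one component of the decoding stage; mapping decoded words into fluent sentences, as in the demonstration below, would require an additional language-modelling step.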
In one demonstration, the AI model generated the predicted sentence “I am jumping happily, it’s just me” based solely on brainwave input, remarkably close to the sentence the participant was silently mouthing. The model, which is still being refined, currently achieves around 75% accuracy. The team aims to reach 90% accuracy, which would be comparable to the performance of models that rely on implanted electrodes.
The researchers are now training the AI model with more participants and exploring its potential to enable brain-to-brain communication.
Empowered by advanced AI, this innovative research is pushing the boundaries of brain-computer interface technology.
Read the full story on ABC News