Our Software Division creates and customizes projects that turn different forms of neural data (EEG, MRI, and fNIRS) into software-based solutions to problems in neuroscience, expanding what we know about the mind.
We currently run a wide variety of projects, including eye-tracking typing interfaces, music reconstruction from brain waves, and speech decoding.
Silent Speech Decoding
Training models that map concurrent multimodal articulatory signals to intended phonemes and words. This work aims to advance seamless communication interfaces for individuals with speech impairments or loss of vocal ability.
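A minimal sketch of this kind of sequence model, assuming fused per-frame articulatory features and a CTC-style training objective; the channel count, phoneme inventory, and toy batch are illustrative placeholders rather than the project's actual setup:

```python
import torch
import torch.nn as nn

class SilentSpeechDecoder(nn.Module):
    """Map frames of multimodal articulatory features to phoneme logits."""
    def __init__(self, n_features=64, n_phonemes=40, hidden=128):
        super().__init__()
        self.encoder = nn.GRU(n_features, hidden, num_layers=2,
                              batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_phonemes + 1)  # +1 for the CTC blank

    def forward(self, x):                 # x: (batch, time, n_features)
        h, _ = self.encoder(x)
        return self.head(h)               # (batch, time, n_phonemes + 1)

model = SilentSpeechDecoder()
ctc = nn.CTCLoss(blank=0)

# Toy batch: 8 recordings of 200 frames, each labelled with 30 phonemes.
feats = torch.randn(8, 200, 64)
targets = torch.randint(1, 41, (8, 30))
log_probs = model(feats).log_softmax(-1).transpose(0, 1)  # CTC expects (time, batch, classes)
loss = ctc(log_probs, targets,
           input_lengths=torch.full((8,), 200, dtype=torch.long),
           target_lengths=torch.full((8,), 30, dtype=torch.long))
loss.backward()
```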
Multimodal Cognitive Decline Prediction
Using the FoG dataset containing EEG, EMG, and accelerometer recordings, this project develops models to predict episodes of freezing of gait and assess motor signatures linked to early cognitive decline. The analysis aims to identify biomarkers that could support earlier detection and personalized monitoring of Parkinson’s progression.
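As a rough illustration of the windowed, multimodal classification this implies, here is a sketch using placeholder per-window summary features from the EEG, EMG, and accelerometer channels; the feature count and random arrays stand in for the real FoG recordings:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Illustrative stand-in for windowed recordings: each row concatenates summary
# statistics (e.g. band power, RMS, variance) from EEG, EMG, and accelerometer
# channels over a short sliding window; the label marks a freezing-of-gait episode.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 48))          # 1000 windows x 48 features
y = rng.integers(0, 2, size=1000)        # 1 = freezing episode, 0 = normal gait

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```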
Music Reconstruction
Recreating how the brain hears by reconstructing the music participants listened to from intracranial EEG signals
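One common recipe for this kind of reconstruction, sketched below under assumptions: regress the stimulus's mel spectrogram from per-frame iEEG features with a linear model, then invert the predicted spectrogram back to audio with Griffin-Lim. The shapes and random arrays are placeholders, not the project's actual data or pipeline:

```python
import numpy as np
import librosa
from sklearn.linear_model import Ridge

# Illustrative stand-ins: per-frame iEEG features aligned with the stimulus audio.
n_frames, n_electrode_feats, n_mels = 500, 128, 64
ieeg_feats = np.random.randn(n_frames, n_electrode_feats)
mel_target = np.abs(np.random.randn(n_frames, n_mels))   # mel spectrogram of the music

# Frame-wise linear decoding: neural features -> spectrogram bins.
decoder = Ridge(alpha=1.0).fit(ieeg_feats, mel_target)
mel_pred = np.clip(decoder.predict(ieeg_feats), 0, None).T   # (n_mels, n_frames)

# Invert the predicted mel spectrogram back to a waveform (Griffin-Lim phase estimation).
audio = librosa.feature.inverse.mel_to_audio(mel_pred, sr=22050, n_iter=32)
```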
fMRI Image Reconstruction Model
Reconstructing the images people see from fMRI data, a form of “imagination reading,” with a Stable Diffusion-based machine learning architecture
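A heavily simplified sketch of the mapping step such an architecture typically needs: a regression from fMRI voxel responses to the image-embedding space (e.g. CLIP) that conditions the diffusion decoder. The voxel counts, embedding size, and choice of ridge regression are assumptions, and the Stable Diffusion decoding step itself is omitted:

```python
import numpy as np
from sklearn.linear_model import Ridge

# Illustrative stand-ins: one row of voxel responses per viewed image, paired with
# the image embedding that would later condition a Stable Diffusion decoder.
n_images, n_voxels, embed_dim = 800, 5000, 768
fmri = np.random.randn(n_images, n_voxels)
image_embeddings = np.random.randn(n_images, embed_dim)

# Learn a linear map brain -> embedding; at test time the predicted embedding
# would be fed to the diffusion model's conditioning pathway to synthesize the image.
brain_to_embedding = Ridge(alpha=100.0).fit(fmri[:700], image_embeddings[:700])
predicted = brain_to_embedding.predict(fmri[700:])
print(predicted.shape)   # (100, 768)
```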
Speech Decoding
Building brain-to-text models that decode speech from intracranial EEG activity
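Complementing the training-side sketch above, the final step of a brain-to-text pipeline is turning the decoder's per-frame outputs into text. Below is a minimal greedy CTC-style decode; the character set and blank-collapsing scheme are assumptions about the model's output format:

```python
import numpy as np

CHARS = [""] + list("abcdefghijklmnopqrstuvwxyz '")   # index 0 = CTC blank

def greedy_ctc_decode(frame_probs):
    """Collapse per-frame character probabilities into text.

    frame_probs: (time, len(CHARS)) array of probabilities from the neural decoder.
    Repeated symbols are merged and blanks dropped, as in standard CTC decoding.
    """
    best = frame_probs.argmax(axis=1)
    out, prev = [], 0
    for idx in best:
        if idx != prev and idx != 0:
            out.append(CHARS[idx])
        prev = idx
    return "".join(out)

# Toy output: 50 frames of random probabilities.
probs = np.random.dirichlet(np.ones(len(CHARS)), size=50)
print(greedy_ctc_decode(probs))
```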
EEG Dynamics and LLMs
Modeling neural patterns during deep meditation and mapping them onto large language models to explore data generation and mind-state prediction
EEG Art
Using EEG signals collected through a Muse headset to generate real-time visual art based on the user's brain state
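A bare-bones sketch of the signal path, assuming the Muse headset is streamed over Lab Streaming Layer (e.g. with muselsl) so that pylsl can pull samples; the alpha-power-to-hue mapping is one illustrative choice, not the project's actual renderer:

```python
import numpy as np
from pylsl import StreamInlet, resolve_byprop

# Connect to an EEG stream published on the local network (e.g. by `muselsl stream`).
streams = resolve_byprop("type", "EEG", timeout=10)
inlet = StreamInlet(streams[0])

fs = 256                      # Muse 2 sampling rate
window = []

while True:
    sample, _ = inlet.pull_sample()
    window.append(sample[0])              # one frontal channel
    if len(window) >= fs:                 # roughly one second of data
        spectrum = np.abs(np.fft.rfft(window))
        freqs = np.fft.rfftfreq(len(window), d=1 / fs)
        alpha = spectrum[(freqs >= 8) & (freqs <= 12)].mean()
        hue = min(alpha / 50.0, 1.0)      # crude normalisation to [0, 1]
        print(f"alpha power -> hue {hue:.2f}")  # feed this value into the visual renderer
        window.clear()
```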
EEG Music
Classifying the listener's mood from EEG signals and using the predictions to control music playback
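A small sketch of the classification half, assuming band-power features extracted per EEG window and mood labels from a calibration session; the arrays, label set, and playback actions are placeholders:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Placeholder band-power features (delta/theta/alpha/beta/gamma x 4 channels)
# and mood labels collected during a labelled calibration session.
rng = np.random.default_rng(1)
X = rng.normal(size=(300, 20))
y = rng.choice(["calm", "energetic", "stressed"], size=300)

mood_model = LogisticRegression(max_iter=1000).fit(X, y)

# Map each predicted mood to a playback action for the music player.
ACTIONS = {"calm": "keep playing", "energetic": "queue upbeat track", "stressed": "skip to ambient track"}
new_window = rng.normal(size=(1, 20))
mood = mood_model.predict(new_window)[0]
print(mood, "->", ACTIONS[mood])
```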
Computer Vision for Tumor Detection
Training a U-Net fully convolutional network to segment brain tumors in 3D MRI scans
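A compact, single-level 3D U-Net sketch in PyTorch showing the encoder-decoder-with-skip-connection shape such a model trains; the channel widths, patch size, and class count are arbitrary and far smaller than a production segmenter:

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv3d(in_ch, out_ch, 3, padding=1), nn.BatchNorm3d(out_ch), nn.ReLU(inplace=True),
        nn.Conv3d(out_ch, out_ch, 3, padding=1), nn.BatchNorm3d(out_ch), nn.ReLU(inplace=True),
    )

class TinyUNet3D(nn.Module):
    """One-level 3D U-Net: encoder, bottleneck, decoder with a skip connection."""
    def __init__(self, in_ch=1, n_classes=2):
        super().__init__()
        self.enc = conv_block(in_ch, 16)
        self.down = nn.MaxPool3d(2)
        self.bottleneck = conv_block(16, 32)
        self.up = nn.ConvTranspose3d(32, 16, 2, stride=2)
        self.dec = conv_block(32, 16)            # 16 upsampled + 16 skipped channels
        self.head = nn.Conv3d(16, n_classes, 1)  # per-voxel class logits

    def forward(self, x):
        e = self.enc(x)
        b = self.bottleneck(self.down(e))
        u = self.up(b)
        return self.head(self.dec(torch.cat([u, e], dim=1)))

# One 64^3 single-channel MRI patch -> per-voxel tumor/background logits.
logits = TinyUNet3D()(torch.randn(1, 1, 64, 64, 64))
print(logits.shape)   # torch.Size([1, 2, 64, 64, 64])
```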