Image Classification and Reconstruction from Low-Density EEG


Sven Guenther & Nataliya Kosmyna

Recent advances in Brain-Computer Interfaces (BCIs) demonstrate various techniques for classifying and reconstructing images from visually evoked brain activity. The majority of these methods rely on costly, stationary equipment, which limits their real-world use and raises the time and monetary cost of entering this field of research. Of the few studies that have attempted the same task with an electroencephalogram (EEG)-based system, several have been methodologically flawed: by presenting visual stimuli in a block fashion during data acquisition, their models ended up predicting from temporal dynamics inherent to the EEG hardware rather than from the visual response. Additionally, these paradigms commonly used high-density EEG systems, which again restricts affordability and portability.

The goal of our study was to use a low-density, portable EEG system to classify and reconstruct visual stimuli from the evoked brain activity. To this end, we first designed an experiment to gather 600 EEG-image pairs spanning 20 image classes from human participants. To address the aforementioned limitations, we recorded the evoked responses with an 8-channel portable EEG and randomly shuffled the stimuli for every experiment run to avoid artifactual prediction.
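The per-run shuffling mentioned above can be sketched as follows. This is an illustrative example, not the study's actual stimulus-presentation code; the function name, seeding scheme, and use of one exemplar per class are assumptions.

```python
import random

def build_run_order(image_ids, seed=None):
    """Return a randomly shuffled presentation order for one run.

    Shuffling the stimuli independently for every run prevents a model
    from exploiting slow temporal drifts in the EEG hardware (the
    block-design confound) instead of genuine visual responses.
    The seed parameter is illustrative, used here for reproducibility.
    """
    order = list(image_ids)
    rng = random.Random(seed)
    rng.shuffle(order)
    return order

# Example: 20 image classes, shuffled independently for two runs
stimuli = list(range(20))
run1 = build_run_order(stimuli, seed=1)
run2 = build_run_order(stimuli, seed=2)
```

Because every run receives its own random order, any class-specific temporal position is broken up across runs, so a classifier cannot rely on where in the recording a stimulus appeared.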

After data acquisition, we preprocessed the data and built subject-wise models to classify the category of an image from the recorded brain activity. We used 10 of the 12 recordings to train a model and validated our results on a hold-out validation recording, before finally evaluating the best validation model on the hold-out test recording. Our preliminary results indicate that the image class can be predicted from the preprocessed EEG signal with up to 53% accuracy.
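The recording-wise split described above can be sketched as below. The function and the choice of which recordings serve as validation and test sets are hypothetical; the abstract only states the 10/1/1 split.

```python
def split_recordings(recordings, val_idx, test_idx):
    """Split a list of recording IDs into train/validation/test sets.

    Mirrors the scheme in the text: of 12 recordings, 10 train the model,
    one is held out for validation, and one for the final test. Holding
    out entire recordings (rather than shuffled trials) avoids leakage
    of within-recording temporal structure into the evaluation.
    """
    val = [recordings[val_idx]]
    test = [recordings[test_idx]]
    train = [r for i, r in enumerate(recordings)
             if i not in (val_idx, test_idx)]
    return train, val, test

# Example with 12 hypothetical recording IDs
recs = [f"rec_{i:02d}" for i in range(12)]
train, val, test = split_recordings(recs, val_idx=10, test_idx=11)
```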

In a subsequent step, we used an intermediate representation from the best classification model to condition a latent diffusion model for visual reconstruction of the seen images.
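One common way to use an intermediate representation as diffusion conditioning is to project it linearly into the embedding space the diffusion model's cross-attention expects. The sketch below illustrates that idea only; the feature size (128), conditioning size (768), and random projection weights are all assumptions, not details from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: a 128-d penultimate-layer feature from the
# EEG classifier, projected to a 768-d conditioning vector (a size many
# latent diffusion models expect for cross-attention).
feat_dim, cond_dim = 128, 768

# Intermediate representation of one EEG trial (random stand-in here)
eeg_feature = rng.standard_normal(feat_dim)

# Projection matrix; in practice this would be learned, not random
W = rng.standard_normal((feat_dim, cond_dim)) * 0.02

# Conditioning vector passed to the diffusion model's cross-attention
conditioning = eeg_feature @ W
```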