Recent advances in visual decoding have demonstrated the feasibility of classifying and reconstructing perceived images from evoked brain activity (see our earlier work on reconstructing 20 image classes from visually evoked brain activity recorded with a portable, 8-channel EEG here).
However, human perception is inherently shaped by the ability to process three-dimensional (3D) visual information, a fundamental yet underexplored aspect of neuroimaging research. In this work, we introduce MindSpace3D, the first EEG dataset recorded in a virtual reality (VR) environment that is designed to investigate the neural mechanisms underlying 3D visual perception. MindSpace3D captures 64-channel EEG responses from 24 subjects viewing 3D objects from six distinct categories, presented as rotating video sequences, enabling the study of stereoscopic depth processing and object recognition.
To enable multi-view 3D reconstruction from non-invasive EEG signals, we propose a novel three-stage decoding pipeline. First, an EEGNet-based feature extractor processes the non-stationary EEG signals, using depthwise separable convolutions to capture spectral and temporal dynamics. Second, a pre-trained conditional Latent Diffusion Model (LDM) generates multi-view image representations from the extracted EEG features, using cross-attention mechanisms to align neural embeddings with latent-space representations. Finally, a Neural Radiance Fields (NeRF)-based 3D reconstruction framework synthesizes volumetric object representations from the multi-view image priors, enabling high-fidelity 3D shape generation directly from neural signals.
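To illustrate the first stage, the sketch below shows the core idea behind EEGNet's depthwise separable convolution on EEG-shaped data: a depthwise step that filters each electrode's time series independently, followed by a pointwise (1x1) step that mixes channels into feature maps. This is a minimal numpy sketch with illustrative shapes (a 64 x 256 toy epoch, a length-9 temporal filter, 16 output features), not the actual model or epoch length used in our pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy EEG epoch: 64 channels x 256 time samples (shapes are illustrative,
# not the dataset's actual epoch length or sampling rate).
n_ch, n_t = 64, 256
eeg = rng.standard_normal((n_ch, n_t))

def depthwise_conv(x, kernels):
    """Temporal convolution applied independently per channel.

    x: (channels, time); kernels: (channels, k), one filter per channel.
    """
    k = kernels.shape[1]
    out = np.empty((x.shape[0], x.shape[1] - k + 1))
    for c in range(x.shape[0]):
        # 'valid'-mode correlation along time for this channel only
        out[c] = np.correlate(x[c], kernels[c], mode="valid")
    return out

def pointwise_conv(x, weights):
    """1x1 convolution: mixes channels at each time step.

    x: (channels, time); weights: (out_features, channels).
    """
    return weights @ x

# Depthwise stage: each of the 64 channels gets its own length-9 filter.
dw = depthwise_conv(eeg, rng.standard_normal((n_ch, 9)))
# Pointwise stage: project 64 filtered channels down to 16 feature maps.
features = pointwise_conv(dw, rng.standard_normal((16, n_ch)))
print(features.shape)  # (16, 248)
```

The factorization is what makes this layer cheap: instead of one dense convolution with `64 * 16 * 9` weights per output, the depthwise stage uses `64 * 9` weights and the pointwise stage `16 * 64`, while still letting the network learn per-channel temporal filters and cross-channel (spatial) combinations separately.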
This work represents the first attempt to reconstruct dynamic multi-view 3D structures from EEG, providing new insights into the neural encoding of depth perception and object recognition. While challenges remain in refining reconstruction fidelity and achieving real-time processing, our results highlight the potential of EEG-driven 3D object reconstruction for applications in brain-computer interfaces (BCIs), computational neuroscience, and immersive neuroadaptive systems.