
Developing Galea: An open source tool at the intersection of VR and neuroscience

Virtual reality (VR) is increasingly being used for therapeutic purposes and for neuroscience and psychology research, and having access to real-time physiological and brain data in such experiments is highly desirable. Still, nothing has compared to the wave of interest we are seeing today, with conferences, concerts, and meetings moving into VR due to the pandemic.

My name is Guillermo Bernal, a PhD candidate in the Fluid Interfaces group. As part of my research, I developed hardware that facilitates monitoring physiological signals from VR users. In 2018, my collaborators and I published a project called PhysioHMD in response to the growing need to make VR experiences more meaningful. The platform enables researchers and developers to aggregate and interpret physiological signals such as electroencephalography (EEG), electrooculography (EOG), electromyography (EMG), electrodermal activity (EDA), and heart rate (HR) in real time, and to use them to develop novel, personalized interactions as well as to evaluate virtual experiences, for example by bringing facial expressions to digital avatars. We knew that this was an exciting research space and that there was a need for such technology. After the research paper became public, we received numerous requests from other researchers and institutions working at the intersection of neuroscience and VR to access our tools.

Early work

Prior to my work on PhysioHMD, there were no headsets designed to collect this type of physiological data. The only way to approach it was to build your own integration with separate sensors and software. We explored many ways in which we could make this device available to people. After many conversations with the Lab's committee and other lab members, it seemed that open-sourcing the project would be our best bet in the short term to make the device available to others. The problem is that just making files and designs available is not enough for people to begin adopting the tools, because of the large and complex effort required to build the device. It took a lot of work on my side to make every aspect of the project reliable and robust enough for people to be able to use the device on their own. That involved going to Shenzhen with a group of researchers from the Lab to learn about manufacturing and flexible sensors. Even after all the work I did at the Lab to create a prototype, I realized that there is a big gap between making a prototype that works for internal studies or a member meeting and creating something deployable and sustainable for use “in the wild” by others who may not have the expertise to build it from scratch themselves.

Collaboration with OpenBCI

A unique opportunity soon emerged that would help me cross this gap. OpenBCI, founded in 2014 by Conor Russomanno and Joel Murphy, is a Brooklyn-based company that creates open-source tools for neuroscience and biosensing. As part of their mission to democratize access to neurotechnology, the OpenBCI team has successfully launched and supported a number of accessible, open-source products. Their efforts have led to the development of a strong community of users around the world who are pushing boundaries in a number of fields. Conor is also a research affiliate at the Media Lab, and a number of projects at the Fluid Interfaces group have benefited from donations of OpenBCI hardware. 

Conor and I first met in early 2018 during one of his visits to the Media Lab, but it wasn’t until that summer, when we were both living in San Francisco, that we really connected and started discussing the idea of collaborating on a project. I was interning at Samsung, and Conor was working on augmented reality headsets at Meta. Between attending watch parties for the soccer World Cup and exploring the Mission District, we discussed our shared interest in combining physiological sensing and head-mounted displays. OpenBCI’s users had been combining their products with VR headsets for years and regularly requested HMD integration in surveys. The idea of combining the work I had started on PhysioHMD with what Conor and OpenBCI were exploring soon emerged. The more we discussed it, the more I realized that joining forces with a company like OpenBCI was an ideal way for me to ensure that PhysioHMD reached as large an audience as possible.

I first reached out to Pattie Maes and others at the Lab to figure out how this collaboration could happen. Kate and Habib suggested that I meet with the MIT/BU Technology Law Clinic to make sure I could benefit from my work. Once that got worked out in 2019, I took the summer off from my RA work at the Lab and went to NYC to start work on the first version of the device that we now call Galea.

What is Galea + how will it be used?

Galea is a hardware and software platform designed to help researchers combine multi-modal biometrics with mixed reality. Galea’s sensors are distributed throughout a custom facepad and head strap (aka the “strapparatus”) that is designed to be integrated into existing AR and VR headsets. The initial beta units will be integrated into the Valve Index. The facepad is an evolution of PhysioHMD and includes sensors for EEG, EMG, EDA, PPG, and EOG. The straps contain eight channels of dry, active EEG electrodes made with conductive polymers, positioned at FCz, POz, PO3, O1, Oz, O2, PO4, and CPz of the 10-20 system. Galea’s software will enable raw data access in a variety of common programming languages (Python, C++, Java, Julia, C#, R) and support compatibility with Lab Streaming Layer (LSL) for merging data with other devices. By combining this multi-modal sensor system with the immersion of augmented and virtual reality, Galea gives researchers, developers, and creators a powerful new tool for understanding and augmenting the human mind and body.
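The post lists the supported language bindings without showing them in use; OpenBCI’s hardware is generally streamed through its open-source BrainFlow library, so below is a minimal Python sketch of what raw data access could look like. It uses BrainFlow’s built-in synthetic board as a stand-in, since the exact Galea board ID and connection parameters are not covered here and are therefore assumptions.

```python
import time

from brainflow.board_shim import BoardShim, BoardIds, BrainFlowInputParams

# Stand-in for Galea: BrainFlow's synthetic board generates fake data,
# so this sketch runs without any hardware attached.
board_id = BoardIds.SYNTHETIC_BOARD.value
params = BrainFlowInputParams()  # real hardware would need port/IP settings here

board = BoardShim(board_id, params)
board.prepare_session()
board.start_stream()
time.sleep(5)                      # stream for ~5 seconds
data = board.get_board_data()      # 2D array: rows are channels, columns are samples
board.stop_stream()
board.release_session()

eeg_channels = BoardShim.get_eeg_channels(board_id)
sampling_rate = BoardShim.get_sampling_rate(board_id)
print(f"{len(eeg_channels)} EEG channels at {sampling_rate} Hz, "
      f"{data.shape[1]} samples collected")
```

Presumably, swapping in the real Galea hardware would mainly change the board ID and connection parameters while the rest of the pipeline stays the same.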

This multi-signal approach allows researchers to develop human testing and training environments with precise control over elaborate stimulus presentations, in which human cognitive and functional performance can be carefully evaluated and rehabilitated. As an example, measuring focus and attention in learning and training scenarios is especially important given today’s high demand for virtual interactions.

Emotions are multifaceted events with corresponding physiological signs and human expressions. Even though most existing methods for automatic emotion recognition are based on audio-visual analysis, there is a growing body of research on emotion recognition from peripheral and central nervous system physiological responses. There are advantages to using physiological signals for emotion recognition rather than audio-visual signals: they cannot easily be faked, they do not require a front-facing camera, and they work under any level of illumination or noise. Moreover, they can be combined with audio-visual modalities to construct a more robust and accurate multi-modal emotion recognizer, as sketched below. Filmmakers, entertainers, and other storytellers are also trying to figure out what mixed reality as a medium might mean for their respective fields and how it might improve the user’s experience.
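To make the multi-modal point concrete, here is a minimal, hypothetical sketch of feature-level fusion: per-trial physiological features and audio-visual features are concatenated and fed to a single classifier. The feature counts, random data, and binary arousal labels are placeholders for illustration only, not results from any Galea study.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical per-trial features: physiological (e.g., EDA level, HR
# variability, EEG band power) and audio-visual (e.g., facial action units).
physio_features = rng.random((200, 6))
av_features = rng.random((200, 10))
labels = rng.integers(0, 2, 200)          # e.g., low vs. high arousal

# Feature-level fusion: concatenate the modalities, then train one classifier.
fused = np.hstack([physio_features, av_features])
clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, fused, labels, cv=5)
print("Cross-validated accuracy:", scores.mean())
```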

Building Galea

At the time, OpenBCI’s office was at New Lab in the Brooklyn Navy Yard. New Lab is a multi-disciplinary technology center modeled on the MIT Media Lab that provides space, services, and a strong network to transformative technology companies. At New Lab, I couldn’t grab a coffee without bumping into other MIT fellows working in robotics, connected devices, energy, nanotechnology, life sciences, and urban tech. I also had the chance to meet Christian Bayerlein, who showed me firsthand the significant benefits that neurotechnology can have for individuals who rely on assistive technology in their daily lives.

Christian has SMA (spinal muscular atrophy) and uses a motorized wheelchair and a staff of assistants to navigate daily life. He was also an early backer of OpenBCI’s first Kickstarter campaign in 2013, and while I was working in the OpenBCI office he finally had the opportunity to come by for a visit. With relatively simple tools, we were able to work with Christian to quickly connect to muscles throughout his body and demonstrate how they could be used as additional “switches” or actuators for controlling other devices. After a few minutes of experimenting with his newfound, EMG-powered capabilities, Christian was already brainstorming how this would let him be more independent at home, or, even better, fly a drone. BCI technology has a number of promising applications in the assistive technology space, and Intel’s ACAT team (the group behind Stephen Hawking’s communication system) has been experimenting with OpenBCI as a way to make their system more affordable and accessible.
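The “switch” idea itself comes down to very simple signal processing: rectify the EMG, smooth it into an envelope, and trigger an action when the envelope crosses a threshold. Below is a hypothetical Python sketch; the threshold, window size, and commented-out drone call are illustrative placeholders, not the exact setup we used with Christian.

```python
import numpy as np

def emg_switch(samples, threshold_uv=150.0, window=50):
    """Return True if the smoothed EMG envelope crosses the threshold.

    samples: 1D array of raw EMG values (assumed to be in microvolts).
    """
    rectified = np.abs(samples - np.mean(samples))           # remove offset, rectify
    kernel = np.ones(window) / window
    envelope = np.convolve(rectified, kernel, mode="valid")  # moving-average envelope
    return bool(np.any(envelope > threshold_uv))

# Hypothetical usage: poll the latest chunk of one EMG channel and fire an action.
# if emg_switch(latest_emg_chunk):
#     drone.take_off()   # placeholder for whatever device the "switch" controls
```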

My summer working with OpenBCI in NYC was also full of technical challenges. Incorporating PhysioHMD into what would become Galea required a lot of work on mechanical and ergonomic issues, new PCB designs, and new firmware. Working with devices that conform to users' bodies is a special type of challenge because of the variation that exists across individuals. If our sensors don't have proper contact with the user, the signal degrades and environmental noise creeps into the data.

Similarly, when developing a mixed-signal, multi-board setup, each additional board or sensor component creates new opportunities for disturbances or noise to enter the system. These challenges required us to iterate and embrace best practices early on to minimize cross-talk, reflection noise, and ground bounce. One specific challenge for Galea’s multi-board setup was figuring out how to handle the digital and analog grounds: the return path can become a source of noise if not dealt with properly. How do you prevent current loops from being introduced into the system? In our case, we established a star ground configuration that allowed us to explore different ground setups and determine whether each ground plane should be continuous or connected by resistors, while still making sure that every signal trace had an adjacent return path. This configuration gave us much-needed flexibility to probe and test how our ground was behaving at different locations. This was one of many challenges we needed to solve because of the number of sensors and boards going into Galea.

Shout outs

Aaron Trocola, NYC
Sean Montgomery, Nevada
Ioannis Smannis, Mesolonghi, Greece
Eva Esteban, NYC
Andrey Parfenov, Moscow, Russia
Nitin Nair, NYC
Joe Artuso, NYC
Shirley Zhang, NYC
Richard Waltman, Louisiana

Galea in the wild

The goal for Galea is to integrate with multiple different AR and VR headsets and to open source my contributions to the project so that it can be customized and extended by users around the world. Although my motivation for collaborating with OpenBCI has always been to maximize the reach of my work, it’s still been surprising to see Galea quickly take on a life of its own. For the beta devices shipping next year, OpenBCI is developing a partnership with Valve and Tobii for Galea to be integrated into a custom Valve Index that includes image-based eye tracking. Galea has now received hundreds of applications for the beta program, with proposals for use cases in neuroscience, computer science, assistive technology, entertainment, music, empathic computing, psychology, and more. I am proud of my part in bringing this device to life, and I eagerly await seeing what the world is able to create with it.
