This technologically enhanced feedback loop aims to enable each partner to “feel” the other’s emotions and corroborate them with context data, gradually training their “empathy muscle.” Results from our experimental studies show that users experience increased attention as well as heightened awareness of self and others.
Would you like to try it out? Register for an online demo here. Our team will contact you with access instructions.
This is a prototype of a university-based research project. We are continuing to improve the system's accuracy, security, end-to-end user experience, and range of applications. We would greatly appreciate your feedback after you try the demo.
Support better conversations and relationships through empathy building.
Empathy, our ability to feel another person's emotional state while knowing that it originates in them, stands at the core of our existence as humans. It has contributed dramatically to our evolution as a species, and it remains a key driver of how we experience life. Being empathetic can make us more effective at work and less stressed; it can improve our relationship satisfaction and give us a deeper sense of connection and attachment. Still, we sometimes find it difficult to empathize with others, and for reasons that remain poorly understood, some people face more challenges than others. Technologically enabled solutions, ranging from virtual reality (VR) to tangible avatars, have shown promise in this direction. Yet existing techniques tend to be difficult and expensive to deliver (e.g., requiring VR headsets), and they are often disconnected from daily life.
State of the Project
Us consists of two modules that can be used either separately or jointly. Our results indicate that users experience increased attention and awareness of self and others when either module is used on its own.
1. Virtual interface (Us.virtual) – runs during any virtual interaction (e.g., Zoom), extracts the emotional valence of the conversation from speech, tone, heart rate, and facial expressions, and discreetly feeds it back through an on-screen display (one possible fusion scheme is sketched below). This tool has been tested in a user study with 20 participants (publication under review).
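For the technically curious, here is a minimal sketch of how such a pipeline could combine per-modality valence estimates into the single value shown on screen. The confidence-weighted fusion, the exponential smoothing, and all names (`ValenceReading`, `fuse_valence`, `DisplaySmoother`) are illustrative assumptions for this sketch, not the actual Us.virtual implementation.

```python
from dataclasses import dataclass
from typing import Iterable

@dataclass
class ValenceReading:
    """One per-modality valence estimate, in [-1.0, 1.0]."""
    modality: str      # e.g. "speech", "tone", "heart_rate", "face"
    valence: float     # negative = unpleasant, positive = pleasant
    confidence: float  # how much to trust this modality, in [0.0, 1.0]

def fuse_valence(readings: Iterable[ValenceReading]) -> float:
    """Confidence-weighted average across whichever modalities are available."""
    readings = list(readings)
    total_weight = sum(r.confidence for r in readings)
    if total_weight == 0.0:
        return 0.0  # nothing reliable to show this frame
    return sum(r.valence * r.confidence for r in readings) / total_weight

class DisplaySmoother:
    """Exponential moving average so the on-screen indicator drifts
    gently instead of jumping with every noisy frame."""
    def __init__(self, alpha: float = 0.2) -> None:
        self.alpha = alpha  # higher alpha = faster response, more jitter
        self.value = 0.0

    def update(self, fused_valence: float) -> float:
        self.value = (1.0 - self.alpha) * self.value + self.alpha * fused_valence
        return self.value

# Example: one update cycle during a call.
smoother = DisplaySmoother()
frame = [
    ValenceReading("speech", 0.4, 0.9),
    ValenceReading("face", -0.1, 0.6),
    ValenceReading("heart_rate", 0.2, 0.3),
]
print(f"display valence: {smoother.update(fuse_valence(frame)):+.2f}")
```

The smoothing step matters in a design like this because raw per-frame estimates are noisy, and a gently drifting indicator is less distracting during a live conversation than one that flickers.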