Project

LUI: Large User Interface with Gesture and Voice Feedback

LUI is a scalable, multimodal web interface that uses a custom framework of non-discrete, free-hand gestures and voice commands to control modular applications with a single stereo camera and a voice assistant. Gesture and voice input are mapped to ReactJS web elements to provide a highly responsive and accessible user experience. The interface can be deployed on AR or VR systems, heads-up displays for autonomous vehicles, and everyday large displays.
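As a rough sketch of how such a mapping might look, the snippet below subscribes a React component to a stream of recognized gestures and turns them into interface state. It is illustrative only: the GestureEvent shape, the "swipe" and "airtap" event names, and the GestureSource subscription function are assumptions, not LUI's actual framework.

    import React, { useEffect, useState } from "react";

    // Assumed event shape for gestures recognized from the stereo camera.
    type GestureEvent =
      | { kind: "swipe"; direction: "left" | "right" }
      | { kind: "airtap" };

    // Stand-in for the gesture stream: subscribe a handler, get back an unsubscribe.
    type GestureSource = (handler: (e: GestureEvent) => void) => () => void;

    function AppMenu({ apps, onGesture }: { apps: string[]; onGesture: GestureSource }) {
      const [index, setIndex] = useState(0);
      const [selected, setSelected] = useState<string | null>(null);

      useEffect(() => {
        // Translate gesture events into ordinary React state updates.
        return onGesture((e) => {
          if (e.kind === "swipe") {
            // Move the focus left or right through the menu, wrapping around.
            const delta = e.direction === "left" ? 1 : -1;
            setIndex((i) => (i + delta + apps.length) % apps.length);
          } else if (e.kind === "airtap") {
            // Airtap selects whichever application currently has focus.
            setSelected(apps[index]);
          }
        });
      }, [apps, index, onGesture]);

      return (
        <ul>
          {apps.map((app, i) => (
            <li key={app} className={i === index ? "focused" : ""}>
              {app} {selected === app ? "(open)" : ""}
            </li>
          ))}
        </ul>
      );
    }

Voice input can drive the same state through a parallel subscription, which keeps the UI logic independent of the input modality.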

Integrated applications include media browsing for photos and YouTube videos; support for viewing and manipulating 3D models for engineering visualization is in progress, and more applications will be added by developers over the longer term. The LUI menu presents a list of applications that the user can "swipe" through and "airtap" to select. Each application has its own set of non-discrete gestures for viewing and changing content. To reach a specific application directly, the user can also issue a voice command to search for it or open it. Because the web platform is modular and extensible, developers can easily add new applications.
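A minimal sketch of what such a pluggable module contract could look like follows; the names here (LuiApp, registerApp, openByVoice) are invented for illustration and are not LUI's real developer API.

    // Hypothetical contract each application fulfills to plug into the platform.
    interface LuiApp {
      name: string;                                        // shown in the menu, matched by voice search
      voiceAliases: string[];                              // e.g. "photos", "gallery"
      gestures: Record<string, (detail: unknown) => void>; // app-specific gesture handlers
      render(container: HTMLElement): void;                // mounts the app's UI
    }

    const registry = new Map<string, LuiApp>();

    function registerApp(app: LuiApp): void {
      registry.set(app.name.toLowerCase(), app);
    }

    // Voice commands resolve directly to a registered application.
    function openByVoice(utterance: string): LuiApp | undefined {
      const query = utterance.toLowerCase();
      for (const app of registry.values()) {
        if (query.includes(app.name.toLowerCase()) ||
            app.voiceAliases.some((alias) => query.includes(alias))) {
          return app;
        }
      }
      return undefined;
    }

With a shape like this, a developer adds an application by calling registerApp once, and voice search resolves utterances against the registry without the menu code knowing about any specific application.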

For more information, contact graduate researcher Vik Parthiban at vparth@mit.edu.

Advisors:
V. Michael Bove, Director, Object-Based Media group
Zach Lieberman, Co-creator, openFrameworks
John Underkoffler, CEO, Oblong Industries; science advisor for the interfaces in Minority Report and Iron Man