LifeNet: Learning Common Sense from Sensors

Miniature sensors are now sophisticated enough that a wristwatch-mounted inertial measurement system can differentiate subtle actions such as shaking hands or turning a doorknob. Sensors placed on objects in the environment can already detect location, movement, sound, and temperature; however, interpreting this data in human-language terms remains challenging. Previous attempts at activity recognition force all descriptions into small, formal categories specified in advance. For example, location could be "at home" or "at the office." These models have not been adapted to the wider range of complex, dynamic, and idiosyncratic human activities. LifeNet constructs a mapping between sensor streams and partially ordered sequences of events in human language. We believe that mapping sensor data into LifeNet will act as a "semantic mirror," interpreting sensory data as cohesive patterns that help us understand and predict not only human action but also human thought and motivation.
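
To make the idea of mapping sensor streams to natural-language event descriptions concrete, the toy sketch below ranks a small, hypothetical vocabulary of events against features computed from one window of wrist-worn sensor data. The feature set, event prototypes, and distance-based scoring here are illustrative assumptions only, not LifeNet's actual representation or inference method.

```python
# Toy sketch (hypothetical): ranking candidate event descriptions against
# features from one window of wrist-worn sensor data. The feature names,
# event vocabulary, and scoring rule are illustrative assumptions, not the
# project's actual model.
from dataclasses import dataclass


@dataclass
class SensorWindow:
    mean_accel: float   # mean acceleration magnitude over the window (g)
    peak_freq: float    # dominant motion frequency (Hz)
    temperature: float  # ambient temperature (deg C)


# Hypothetical prototype features for a few natural-language event nodes.
EVENT_PROTOTYPES = {
    "shake hands":     SensorWindow(1.4, 3.0, 22.0),
    "turn a doorknob": SensorWindow(1.1, 0.8, 21.0),
    "sit at a desk":   SensorWindow(1.0, 0.1, 23.0),
}


def score(window: SensorWindow, proto: SensorWindow) -> float:
    """Negative weighted squared distance between observed and prototype features."""
    return -((window.mean_accel - proto.mean_accel) ** 2
             + (window.peak_freq - proto.peak_freq) ** 2
             + 0.01 * (window.temperature - proto.temperature) ** 2)


def rank_events(window: SensorWindow) -> list[tuple[str, float]]:
    """Rank candidate event descriptions for one sensor window, best first."""
    scored = [(name, score(window, proto)) for name, proto in EVENT_PROTOTYPES.items()]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)


if __name__ == "__main__":
    observed = SensorWindow(mean_accel=1.35, peak_freq=2.7, temperature=22.5)
    for event, s in rank_events(observed):
        print(f"{event:18s} score={s:.3f}")
```

In this sketch the best-scoring description for the observed window would be "shake hands"; a full system would additionally exploit the partial ordering of events (which activities tend to follow which) rather than scoring each window in isolation.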