Publication

A Multidimensional Continuous Contextual Lighting Control System Using Google Glass

Nan Zhao, Matt Aldrich, Christoph Reinhart, Joseph A. Paradiso

Abstract

An increasing number of internet-connected LED lighting fixtures and bulbs have recently become available. This development, in combination with emerging hardware and software solutions for activity recognition, establishes an infrastructure for context-aware lighting. Automated lighting control could provide a better user experience, increased comfort, higher productivity, and energy savings compared to static uniform illumination. The first question that comes to mind when thinking about context-aware lighting is how to determine the relevant activities and contexts. Do we need different lighting for reading a magazine and reading a book, or just different lighting for reading versus talking on the phone? How do we identify the relevant situations, and what are the preferred lighting settings? In this paper we present three steps we took to answer these questions and demonstrate them via an adaptive five-channel solid-state lighting system with continuous contextual control. We implemented a multidimensional user interface for manual control as well as an autonomous solution using wearable sensors. We enable a simple set of sensors to manipulate complex lighting scenarios by reducing the sensor-lighting control space to a small set of human-derived dimensions. In a preliminary user study, we estimated significant energy savings of up to 52% and identified multiple directions for future research, including behavioral feedback.
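The abstract describes collapsing a complex multi-channel lighting control space into a few continuous, human-derived contextual dimensions that either a manual interface or wearable sensors can drive. The following is a minimal, hypothetical sketch of that idea: a two-dimensional contextual control point is mapped to five channel intensities by blending preset scenes. The axes, scene names, and channel values are illustrative assumptions, not the mapping or presets used in the paper.

```python
# Hypothetical sketch: mapping a low-dimensional contextual control point to
# five lighting-channel intensities by blending preset scenes.
# Scene placement, axis meanings, and channel values are illustrative only.

import numpy as np

# Five-channel intensities (0..1) for four assumed preset scenes, placed at
# the corners of a 2-D contextual control space
# (x: relaxed -> focused, y: dim -> bright).
PRESETS = {
    (0.0, 0.0): np.array([0.10, 0.05, 0.20, 0.05, 0.10]),  # relaxed / dim
    (1.0, 0.0): np.array([0.30, 0.40, 0.10, 0.50, 0.20]),  # focused / dim
    (0.0, 1.0): np.array([0.60, 0.30, 0.70, 0.20, 0.50]),  # relaxed / bright
    (1.0, 1.0): np.array([0.90, 0.85, 0.60, 0.95, 0.80]),  # focused / bright
}


def blend_scene(x: float, y: float) -> np.ndarray:
    """Bilinearly interpolate channel intensities at contextual point (x, y)."""
    x = min(max(x, 0.0), 1.0)
    y = min(max(y, 0.0), 1.0)
    weights = {
        (0.0, 0.0): (1 - x) * (1 - y),
        (1.0, 0.0): x * (1 - y),
        (0.0, 1.0): (1 - x) * y,
        (1.0, 1.0): x * y,
    }
    return sum(w * PRESETS[corner] for corner, w in weights.items())


if __name__ == "__main__":
    # A wearable activity classifier might place "reading" near the
    # focused/bright corner of the contextual space.
    print(blend_scene(0.8, 0.7).round(2))
```

In this sketch the contextual point could come either from the multidimensional manual interface or from an activity classifier driven by wearable sensors; either way, only two values need to be controlled rather than five independent channels.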
