Project

Kinected Conference

Copyright

2012 Tangible Media Group / MIT Media Lab

What could we do if the screen in a videoconference room could turn into an interactive display? Using a Kinect camera and sound sensors, we explore how expanding a system’s understanding of spatially calibrated depth and audio alongside a live video stream can generate semantically rich three-dimensional pixels containing information about their material properties and location. Four features are implemented: “Talking to Focus,” “Freezing Former Frames,” “Privacy Zone,” and “Spatial Augmenting Reality.”
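As a rough illustration of how a depth-calibrated video stream enables an effect like “Talking to Focus,” the sketch below dims pixels whose depth lies outside a band around the active speaker’s depth. This is a minimal, hypothetical example, not the project’s implementation: the function name, the depth band, and the dimming factor are all assumptions, and a real system would localize the speaker from the audio sensors rather than take the depth as an argument.

```python
import numpy as np

def focus_on_speaker(frame, depth, speaker_depth, band=0.3, dim=0.25):
    """Approximate a 'Talking to Focus' effect (illustrative sketch).

    frame: (H, W, 3) float array, RGB values in [0, 1]
    depth: (H, W) float array of per-pixel depths in meters
           (as a Kinect-style depth camera would provide)
    speaker_depth: estimated depth of the active speaker, in meters
    band: half-width of the in-focus depth band, in meters
    dim:  brightness factor applied to out-of-focus pixels
    """
    mask = np.abs(depth - speaker_depth) <= band  # pixels near the speaker
    out = frame * dim                             # dim the whole frame
    out[mask] = frame[mask]                       # restore in-focus pixels
    return out

# Toy 2x2 frame: left column at ~1 m (speaker), right column at 3 m.
frame = np.ones((2, 2, 3))
depth = np.array([[1.0, 3.0],
                  [1.1, 3.0]])
out = focus_on_speaker(frame, depth, speaker_depth=1.0)
```

In this toy frame, the left column stays at full brightness while the right column is dimmed, which is the per-pixel selectivity that a plain 2D video stream cannot provide.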