Event

Mat Laibowitz Thesis Defense

Tuesday
January 26, 2010

Location

E14-633

Description

In today's digital era, elements of anyone's life can be captured, by that person or by others, and instantly broadcast. With little or no regulation on the proliferation of camera technology and the increasing use of video for social communication, entertainment, and education, we have undoubtedly entered the age of ubiquitous media. A world permeated by connected video devices promises a more democratized approach to mass-media culture, enabling anyone to create and distribute personalized content. While these advancements present a plethora of possibilities, they are not without potential negative effects, particularly with regard to privacy, ownership, and the general decrease in quality associated with minimal barriers to entry.

This dissertation presents a first-of-its-kind research platform designed to investigate the world of ubiquitous video devices in order to confront inherent problems and create new media applications. This system takes a novel approach to the creation of user-generated, documentary video by augmenting a network of video cameras integrated into the environment with on-body sensing. The distributed video camera network can record the entire life of anyone within its coverage area, and it will be shown that it almost instantly records more audio and video than can be reviewed without prohibitive human resource cost. This drives the need for a mechanism that automatically interprets the raw audio-visual information in order to create a cohesive video output that is understandable, informative, and/or enjoyable to its human audience.

We address this need with the SPINNER system. As humans, we are inherently able to transform disconnected occurrences and ideas into cohesive narratives as a way to understand, remember, and communicate meaning. The design of the SPINNER application and ubiquitous sensor platform is informed by research into narratology, the study of how stories are created from fragmented events. The SPINNER system maps low-level data from the wearable sensors to higher-level social signal and body language information, which is used to label the raw video data. SPINNER can then build a cohesive narrative by stitching together the appropriately labeled video segments.
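The sketch below illustrates this labeling-and-stitching idea in miniature. The labels, thresholds, data structures, and story arc are illustrative assumptions only, not SPINNER's actual sensor features, classifiers, or narrative model.

from dataclasses import dataclass
from typing import List

@dataclass
class Segment:
    camera_id: str
    start: float   # seconds
    end: float
    label: str     # e.g. "conversation", "high-activity", "calm"

def label_segment(accel_energy: float, speech_ratio: float) -> str:
    """Map low-level wearable sensor features to a coarse social/body-language
    label. Thresholds are placeholders for illustration only."""
    if speech_ratio > 0.5:
        return "conversation"
    if accel_energy > 1.0:
        return "high-activity"
    return "calm"

def stitch_narrative(segments: List[Segment], arc: List[str]) -> List[Segment]:
    """Greedily pick, for each beat in the desired story arc, the earliest
    remaining segment carrying the matching label, preserving time order."""
    cut: List[Segment] = []
    t = 0.0
    for beat in arc:
        candidates = [s for s in segments if s.label == beat and s.start >= t]
        if not candidates:
            continue  # skip beats with no matching footage
        chosen = min(candidates, key=lambda s: s.start)
        cut.append(chosen)
        t = chosen.end
    return cut

if __name__ == "__main__":
    footage = [
        Segment("cam-3", 0, 12, label_segment(0.2, 0.1)),
        Segment("cam-1", 15, 40, label_segment(0.3, 0.8)),
        Segment("cam-7", 50, 70, label_segment(2.4, 0.2)),
        Segment("cam-2", 80, 95, label_segment(0.1, 0.7)),
    ]
    story_arc = ["calm", "conversation", "high-activity", "conversation"]
    for seg in stitch_narrative(footage, story_arc):
        print(seg.camera_id, seg.start, seg.end, seg.label)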

The results from three test runs are shown, each resulting in one or more automatically edited video pieces. The creation of these videos is evaluated through review by their intended audience and by comparing the system's output to that of a human attempting similar editing. In addition, the mapping of the wearable sensor data to meaningful information is evaluated by comparing the calculated results to those obtained from human observation of the actual video.

Participant(s)/Committee

Joseph Paradiso, MIT Program in Media Arts and Sciences
Alex (Sandy) Pentland, MIT Program in Media Arts and Sciences
David Rakoff, Essayist, Author, Actor
Blake Snyder (IN MEMORIAM), Screenwriter, Lecturer, UCLA