Sparacino, F. "Sto(ry)chastics: a Bayesian network architecture for combined user modeling, sensor fusion, and computational storytelling for interactive spaces"
This thesis presents a mathematical framework for real-time sensor-driven stochastic modeling of story and user-story interaction, which I call sto(ry)chastics. Almost all sensor-driven interactive entertainment, art, and architecture installations today rely on one-to-one mappings between content and participants' actions to tell a story. These mappings chain together small subsets of scripted content and make no attempt to understand the public's intentions or desires during interaction; they are therefore rigid, ad hoc, prone to error, and lacking in depth of meaning and expressive power. Sto(ry)chastics uses graphical probabilistic modeling of story fragments and of participant input, gathered from sensors, to tell a story to the user as a function of people's estimated intentions and desires during interaction. By using a Bayesian network for combined modeling of users, sensors, and story, sto(ry)chastics, in contrast to traditional systems based on one-to-one mappings, is flexible, reconfigurable, adaptive, context-sensitive, robust, accessible, and able to explain its choices.
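As an illustrative sketch of the idea (the notation below is mine, not the thesis's): in such a network, the noisy sensor stream first updates a probabilistic estimate of the user, and content is then chosen under that estimate rather than being wired directly to sensor events.

```latex
% One plausible factorization (illustrative notation, not the thesis's):
% U = user type, S_{1:t} = sensor readings so far, C = next content segment
\begin{align*}
  P(U \mid S_{1:t}) &\propto P(U)\prod_{i=1}^{t} P(S_i \mid U)
    && \text{(user estimate from noisy sensors)} \\
  C^* &= \arg\max_{C} \sum_{U} P(C \mid U)\, P(U \mid S_{1:t})
    && \text{(content selected under that estimate)}
\end{align*}
```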
To illustrate sto(ry)chastics, this thesis describes the museum wearable, which orchestrates an audiovisual narration as a function of the visitor's interests and physical path in the museum. The museum wearable is a small, lightweight computer that people carry inside a shoulder pack. It offers an audiovisual augmentation of the surrounding environment using a small eyepiece display attached to conventional headphones. The wearable prototype described in this document relies on a custom-designed long-range infrared location-identification sensor to gather information on where, and for how long, the visitor stops in the museum galleries. It uses this information as input to, or observations of, a (dynamic) Bayesian network, selected from a variety of possible models designed for this research. It then delivers an audiovisual narration to the visitor as a function of the estimated visitor type, interactively in time and space.
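A minimal sketch of the filtering step such a network performs, written in Python; the visitor-type names, duration categories, and all probability tables below are invented placeholders, not values from the thesis.

```python
import numpy as np

# Illustrative recursive Bayesian update of a hidden visitor type from
# per-exhibit stop observations (all numbers are invented placeholders).

TYPES = ["busy", "greedy", "selective"]       # hidden visitor type U
OBS = ["short_stop", "long_stop", "skipped"]  # discretized sensor reading

prior = np.array([1/3, 1/3, 1/3])             # P(U): uniform before evidence

# P(observation | visitor type), one row per type (rows sum to 1)
likelihood = np.array([
    [0.70, 0.10, 0.20],   # "busy": mostly short stops
    [0.15, 0.80, 0.05],   # "greedy": mostly long stops
    [0.35, 0.35, 0.30],   # "selective": mixed, skips often
])

def update(belief, obs_index):
    """One filtering step: belief ∝ P(obs | U) * belief."""
    posterior = belief * likelihood[:, obs_index]
    return posterior / posterior.sum()

belief = prior
for obs in ["long_stop", "long_stop", "short_stop"]:
    belief = update(belief, OBS.index(obs))
    print(dict(zip(TYPES, belief.round(3))))
```

With a static type node, as here, the update reduces to a naive-Bayes filter; a full dynamic network would additionally place a transition model P(U_t | U_{t-1}) between time slices, allowing the estimate to drift as the visit unfolds.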
The network has been tested and validated on observed visitor-tracking data, both by learning its parameters with the Expectation-Maximization (EM) algorithm and by analyzing the model's performance with the learned parameters. Estimation of the visitor's preferences in addition to their type, using additional sensors, together with examples of sensor fusion, is demonstrated in a simulated environment.
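A minimal sketch of what EM parameter learning on tracking data could look like under the same toy model as above: the E-step computes each visitor's type posterior, and the M-step re-estimates the type frequencies and observation tables. The data and initialization are fabricated for illustration.

```python
import numpy as np

# Toy EM for a mixture of visitor types: each tracked visitor has a latent
# type, and their stop observations are i.i.d. given that type.
rng = np.random.default_rng(0)
K, V = 3, 3  # number of visitor types, number of observation symbols

# Each row: counts of (short, long, skipped) stops for one tracked visitor
counts = np.array([[8, 1, 3], [1, 9, 1], [4, 4, 4], [7, 2, 2], [2, 8, 1]])

pi = np.full(K, 1 / K)                      # P(type), initial guess
theta = rng.dirichlet(np.ones(V), size=K)   # P(obs | type), random init

for _ in range(50):
    # E-step: responsibilities P(type | visitor's counts), via log-likelihoods
    log_lik = counts @ np.log(theta).T + np.log(pi)   # (n_visitors, K)
    log_lik -= log_lik.max(axis=1, keepdims=True)     # stabilize exp()
    resp = np.exp(log_lik)
    resp /= resp.sum(axis=1, keepdims=True)

    # M-step: re-estimate mixing weights and observation tables
    pi = resp.mean(axis=0)
    theta = resp.T @ counts
    theta /= theta.sum(axis=1, keepdims=True)

print("P(type):", pi.round(3))
print("P(obs | type):\n", theta.round(3))
```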
The main contribution of this research is to show that (dynamic) Bayesian networks are a powerful modeling technique for coupling inputs to outputs in real-time sensor-driven multimedia audiovisual stories, such as those triggered by the body in motion in a sensor-instrumented interactive narrative space. The coarse and noisy sensor inputs are coupled to digital media outputs via a user model, estimated probabilistically by a Bayesian network. Other contributions are: the design of the museum wearable application; the assembly and fashioning of a wearable computer specifically conceived for museum use; the design and realization of a new long-range infrared location-identification sensor; the construction and testing of a variety of Bayesian networks for user-type and profile estimation; the extension of the previous Bayesian network to real-time story-segment selection and editing; model selection; model validation and parameter learning via the EM algorithm; and the simulation of processing multiple sensor inputs with a Bayesian network for more robust estimation and more accurate user profiling.
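On the sensor-fusion contribution, a minimal sketch of the principle: two conditionally independent sensor channels multiply into one posterior, so agreeing channels sharpen the estimate and a noisy channel degrades it gracefully. The second "motion" channel and every probability table here are hypothetical illustrations, not sensors or values from the thesis.

```python
import numpy as np

# Illustrative fusion of two conditionally independent sensor channels
# (stop duration from the infrared tag reader plus a hypothetical second
# channel); all probability tables are invented placeholders.

prior = np.array([1/3, 1/3, 1/3])    # P(type) over three visitor types

# P(reading | type) for each channel; one row per type
p_duration = np.array([[0.7, 0.3],   # duration channel: short vs long stop
                       [0.2, 0.8],
                       [0.5, 0.5]])
p_motion = np.array([[0.8, 0.2],     # motion channel: walking vs standing
                     [0.3, 0.7],
                     [0.5, 0.5]])

def fuse(duration_obs, motion_obs):
    """Posterior over visitor type given one reading from each channel."""
    post = prior * p_duration[:, duration_obs] * p_motion[:, motion_obs]
    return post / post.sum()

# Agreeing evidence ("long stop" + "standing") yields a sharper posterior
# than either channel alone would.
print(fuse(duration_obs=1, motion_obs=1).round(3))
```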