Biologically encoded augmented reality: multiplexing perceptual bandwidths
Traditional visual augmented reality systems require a third-party substrate interposed between the observer and the immediate environment. This dissertation describes a system that targets the perifoveal and peripheral visual fields to minimize the layer of interruption between central vision and the natural environment. By activating powerful perceptual phenomena, this new approach multiplexes information within the visual pipeline without requiring the observer to shift gaze or attention.
The methodologies described here expand upon prior work by introducing a two-stage carrier-signal generation approach: proven psychophysical carrier signals are adapted to the environment, and the adapted carriers are then animated along contextual motion paths. These systems were designed to augment the observer's experience in two distinct ways: through proprioceptive feedback (such as modulation of perceived self-motion through space-time), and through delivery of complex semantic information via peripheral vision. The path trajectories are computationally generated with respect to the goal of augmentation: proprioceptive carrier motion is derived from environmental optical flow, and semantic carrier motion is derived from a novel codex of indexing symbols.
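The second stage of this pipeline, animating a carrier along an environmentally derived trajectory, can be illustrated with a minimal sketch. The sketch below assumes a precomputed dense optical-flow field and simply integrates it from a starting point to produce a carrier path; the function name `motion_path` and the uniform test flow field are hypothetical illustrations, not the dissertation's actual implementation.

```python
import numpy as np

def motion_path(flow, start, steps, dt=1.0):
    """Integrate a dense optical-flow field (H x W x 2, pixels/frame)
    from a starting point, yielding a trajectory along which a
    peripheral carrier signal could be animated."""
    h, w, _ = flow.shape
    p = np.array(start, dtype=float)  # (x, y) position in pixels
    path = [p.copy()]
    for _ in range(steps):
        # Sample the flow vector at the nearest pixel, clamped to the frame.
        x = int(np.clip(round(p[0]), 0, w - 1))
        y = int(np.clip(round(p[1]), 0, h - 1))
        p = p + dt * flow[y, x]  # advect the carrier along the local flow
        path.append(p.copy())
    return np.array(path)

# Stand-in flow field: uniform rightward motion of 2 px/frame,
# as might arise from lateral self-motion through a scene.
flow = np.zeros((100, 100, 2))
flow[..., 0] = 2.0
path = motion_path(flow, start=(10.0, 50.0), steps=5)
```

In a real system the flow field would come from an optical-flow estimator running on the environment camera feed, and the resulting path would drive the placement of the adapted carrier frame by frame.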
This dissertation explores a new intersection of the fields of vision science, computational imaging, and display technologies. As the technological cutting edge outpaces our physiological sensitivities (in resolution, frame rate, or field of view), this work may serve as a first step toward mapping a new generation of biologically encoded systems, potentially offloading part of the computational media landscape onto the early mechanisms of the human visual pipeline.
Muriel R. Cooper Professor of Music and Media