
Ishwarya Ananthabhotla Dissertation Defense

Dissertation Title: Cognitive Audio: Enabling Auditory Interfaces with an Understanding of How We Hear

Abstract: 

Over the last several decades, neuroscientists, cognitive scientists, and psychologists have made strides in understanding the complex and mysterious processes that define the interaction between our minds and the sounds around us.  Even before a sound enters our consciousness and arrives at the forefront of thought, their research suggests, several cognitive mechanisms are engaged: our ears present a fundamental re-representation of a sound in time, frequency, and intensity to our brains; differences in the information arriving at each ear allow us to estimate the physical location of a sound source in elevation and azimuth; we interpret and contextualize sounds through a complicated dynamic between semantics and acoustics; and we may further encode the most salient portions of this information into short-term memory.  Some of these processes, particularly those at the lowest levels of abstraction relative to a sound wave, are well understood and easy to characterize across large sections of the human population; others, however, are the sum of intuition and observations drawn from small-scale laboratory experiments, and remain as yet poorly understood.

In this thesis, I suggest that there is value in coupling insight into the workings of auditory processing, beginning with abstractions in pre-conscious processing, with new frontiers in interface design and state-of-the-art infrastructure for parsing and identifying sound objects, as a means of unlocking audio technologies that are far more immersive, naturalistic, and synergistic than those in the existing landscape.  From the vantage point of today's computational models and devices, which largely represent audio at the level of the digital sample, I gesture towards a world of auditory interfaces that work deeply in concert with uniquely human tendencies, allowing us to altogether re-imagine how we capture, preserve, and experience bodies of sound: towards, for example, augmented reality devices that manipulate sound objects to minimize distractions, lossy "codecs" that operate on semantic rather than time-frequency information, and soundscape design engines that operate on large corpora of audio data to optimize for aesthetic or experiential outcomes instead of purely objective ones.

To this end, I introduce and explore a new research direction, termed "Cognitive Audio", focused on the marriage of principles governing pre-conscious auditory cognition with traditional HCI approaches to auditory interface design via explicit statistical modeling.  Along the way, I consider the major roadblocks that present themselves in approaching this convergence: I ask how we might "probe" and measure a cognitive principle of interest robustly enough to inform system design, in the absence of the immediately observable biophysical phenomena that may accompany, for example, visual cognition; I also ask how we might build reliable, meaningful statistical models from the resulting data that drive compelling experiences despite inherent noise, sparsity, and generalizations made at the level of the crowd.

I discuss early insights into these questions through the lens of a series of projects centered on auditory processing at different levels of abstraction.  I begin with a discussion of early work focused on cognitive models of lower-level phenomena; these exercises then inform a comprehensive effort to construct general-purpose estimators of gestalt concepts in sound understanding.  I then demonstrate the affordances of these estimators in the context of application systems that I construct and characterize, incorporating additional explorations of methods for personalization that sit atop these estimators.  Finally, I conclude with a dialogue on the intersection between the key contributions of this dissertation and a set of major themes relevant to the audio technology and computation world today.

Committee members:

Dr. Joseph A. Paradiso 
Alexander W. Dreyfoos Professor in Media Arts and Sciences
Massachusetts Institute of Technology

Dr. Sebastian Ewert 
Research Lab Lead / Honorary Lecturer 
Spotify, Inc. / Queen Mary University of London

Dr. Poppy Crum
Chief Scientist / Adjunct Professor 
Dolby Laboratories / Stanford University
