Discreet Teeth Gestures for Mobile Device Interaction
Byte.it is a miniaturized, discreet interface that uses teeth gestures as hands-free input for wearable computing.
As humans, we are constantly seeking to communicate and consume information, and mobile devices give us access to the World Wide Web, our digital selves, and all our digital assets at the touch of our fingers. Context, however, can temporarily limit our ability to perform certain actions and prevent fluid interaction with mobile computing: our hands are not always available, and visual attention may be claimed by the task at hand or by social norms. Current screen-based interfaces are not designed to be used by a person engaged in another attention-demanding activity such as walking, talking, or driving, leading to ineffective interactions and even dangerous situations.
Audio interfaces are a potential solution, as they provide a high-bandwidth communication channel without requiring visual attention. Speech has been the predominant interaction modality for audio interfaces, but it can be ineffective in loud environments and inappropriate in certain social or dynamic on-the-go contexts. Recent work has explored teeth gestures as a solution for interaction in these contexts, but these attempts are limited by the number of gestural primitives they recognize (bandwidth) and by the discreetness of the hardware used to detect the gestures.
Byte.it expands on this work by exploring a smaller, more unobtrusively positioned sensor (accelerometer and gyroscope) for detecting tooth clicks of different groups of teeth, as well as bite slides, for everyday human-computer interaction. Initial results show that a sensor placed unobtrusively on the lower mastoid, close to the mandibular condyle, can classify teeth taps of four teeth groups (front, back, left, and right click) with 88 percent accuracy, and seven teeth-clicking and bite-sliding gestures (front, left, and right click; front, back, left, and right slide) with 84 percent accuracy.
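To make the sensing pipeline concrete, the following is a minimal sketch of how accelerometer and gyroscope windows could be turned into teeth-gesture predictions. The window length, feature set, sampling rate, and choice of classifier here are illustrative assumptions, not the pipeline reported in the paper.

```python
# Sketch: classify teeth gestures from IMU (accelerometer + gyroscope) windows.
# All parameters (100 Hz sampling, 0.5 s windows, random forest) are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def extract_features(window):
    """window: (n_samples, 6) array of [ax, ay, az, gx, gy, gz]."""
    feats = []
    for axis in range(window.shape[1]):
        x = window[:, axis]
        feats += [x.mean(), x.std(), x.min(), x.max(),
                  np.abs(np.diff(x)).mean()]  # mean absolute first difference
    return np.array(feats)

def segment(signal, fs=100, win_s=0.5):
    """Slice a continuous recording into non-overlapping windows."""
    step = int(fs * win_s)
    return [signal[i:i + step] for i in range(0, len(signal) - step + 1, step)]

def train(windows, labels):
    """windows: labeled gesture windows; labels: e.g. 'front_click', 'right_slide'."""
    X = np.stack([extract_features(w) for w in windows])
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X, labels)
    return clf

def predict(clf, window):
    """Return the predicted gesture label for a single window."""
    return clf.predict(extract_features(window).reshape(1, -1))[0]
```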
The applications currently being explored are centered around dynamic, on-the-go, hands-and-eyes-free contexts. For example, (1) controlling a media player's commands, such as play/pause, volume, and the current position of a song, podcast, or audiobook. Productivity-wise, being able to subtly (2) start and stop audio recordings of conversations or meetings, and tag relevant events worth reviewing later. Teeth gestures could also offer a discreet and rapid way to (3) accept or reject incoming alerts, notifications, and reminders while minimizing task-switch time. Finally, a minimal set of teeth gestures could enable seamless (4) access to information streams such as messages, emails, news, or relevant notes about the person, place, and/or time of interest that could enhance the current interaction.
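As an illustration of the first application, a small gesture vocabulary could be bound directly to media-player commands. The gesture names and the player interface below are hypothetical; the sketch only shows how classifier output could drive hands-and-eyes-free control.

```python
# Sketch: map recognized teeth gestures to media-player commands.
# Gesture names and the MediaPlayer class are hypothetical placeholders.
from typing import Callable, Dict

class MediaPlayer:
    def play_pause(self):   print("toggled play/pause")
    def volume_up(self):    print("volume up")
    def volume_down(self):  print("volume down")
    def skip_forward(self): print("skipped forward 15 s")

def build_gesture_map(player: MediaPlayer) -> Dict[str, Callable[[], None]]:
    return {
        "front_click": player.play_pause,
        "left_slide":  player.volume_down,
        "right_slide": player.volume_up,
        "back_slide":  player.skip_forward,
    }

def on_gesture(gesture: str, gesture_map: Dict[str, Callable[[], None]]) -> None:
    action = gesture_map.get(gesture)
    if action is not None:
        action()  # ignore unrecognized or low-confidence gestures

# Example: a "front_click" event from the classifier toggles playback.
on_gesture("front_click", build_gesture_map(MediaPlayer()))
```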
This research aims to:
1) Understand how people could use teeth gestures to perform specific interaction commands in order to establish a standardized teeth gesture language.
2) Identify the optimal position of the sensor to achieve the highest gesture classification accuracy possible while ensuring a discreet form factor.
3) Measure the performance of our classification algorithm in the wild, while sitting, standing, walking, running, and cycling.
4) Assess the usability of the interface and its applications in the wild.
Publication:
Byte.it: Discreet Teeth Gestures for Mobile Device Interaction
Tomás Vega Gálvez, Shardul Sapkota, Alexandru Dancu, and Pattie Maes. 2019. Byte.it: Discreet Teeth Gestures for Mobile Device Interaction. In CHI Conference on Human Factors in Computing Systems Extended Abstracts (CHI '19 Extended Abstracts), May 4–9, 2019, Glasgow, Scotland, UK. ACM, New York, NY, USA, 6 pages. https://doi.org/10.1145/3290607.3312925