Publication

Predicting Driver Self-Reported Stress by Analyzing the Road Scene

Cristina Bustos, Neska Elhaouij, Albert Sole-Ribalta, Javier Borge-Holthoefer, Agata Lapedriza, and Rosalind Picard. "Predicting Driver Self-Reported Stress by Analyzing the Road Scene." In Proceedings of the International Conference on Affective Computing and Intelligent Interaction (ACII 2021). Preprint: arXiv:2109.13225 (2021).

Abstract

Several studies have shown the relevance of biosignals in driver stress recognition. In this work, we examine a less frequently explored question: we develop methods to test whether the visual driving scene can be used to estimate a driver's subjective stress level. For this purpose, we use the AffectiveROAD video recordings and their corresponding stress labels, a continuous stress metric provided by the human drivers themselves. We adopt the common class discretization for stress, dividing its continuous values into three classes: low, medium, and high. We design and evaluate three computer vision modeling approaches to classify the driver's stress level: (1) object presence features, computed from an automatic segmentation of the scene; (2) end-to-end image classification; and (3) end-to-end video classification. All three approaches show promising results, suggesting that the driver's subjective stress can be approximated from the information in the visual scene. The video classification approach, which integrates temporal information with the visual information, obtains the highest accuracy, 0.72, compared with a random-baseline accuracy of 0.33, when tested on a set of nine drivers.
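To make the first approach and the label discretization concrete, here is a minimal sketch of how object presence features and the three-class stress binning described above might look. The specific object classes, class proportions, and the equal-width thresholds at 1/3 and 2/3 are illustrative assumptions, not the paper's actual configuration.

```python
import numpy as np

# Hypothetical object classes a scene-segmentation model might output;
# the paper's actual segmentation label set may differ.
CLASSES = ["road", "car", "pedestrian", "traffic_sign", "vegetation"]

def object_presence_features(seg_map, n_classes=len(CLASSES)):
    """Fraction of pixels assigned to each object class.

    seg_map: 2-D integer array of per-pixel class ids in [0, n_classes).
    Returns a length-n_classes vector that sums to 1.
    """
    counts = np.bincount(seg_map.ravel(), minlength=n_classes)
    return counts / counts.sum()

def discretize_stress(score):
    """Map a continuous stress value in [0, 1] to low/medium/high.

    Equal-width bins are an assumption for illustration; the paper only
    states that continuous stress is split into three classes.
    """
    if score < 1 / 3:
        return "low"
    if score < 2 / 3:
        return "medium"
    return "high"

# Toy segmentation map: mostly road, some cars, a few other objects.
rng = np.random.default_rng(0)
seg = rng.choice(len(CLASSES), size=(4, 6), p=[0.6, 0.2, 0.05, 0.05, 0.1])
features = object_presence_features(seg)
print(features, discretize_stress(0.5))
```

In the paper's first approach, a feature vector of this kind (one value per object category, computed frame by frame) is the input to the stress classifier, rather than the raw pixels used by the end-to-end models.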