Alea Teeters, Rana Kaliouby, Matthew Goodwin, Shandell M., Rosalind W. Picard
May 8, 2008
Our wearable self-cam technology successfully gathered videos of facial and head movements from natural face-to-face conversations, enabling construction of a new test of non-acted expression-reading ability. These videos are much more complex than those in the MindReading DVD, including natural and potentially distracting movements such as head turns, hands over the face, lips moving with speech, and other facial gestures that are hard for most people to read out of context. Using a subset of videos with high agreement among neurotypical (NT) raters, we found that the Groden students' best and worst recognition occurred when they viewed videos of themselves and of their peers.