Over 1 million people in the U.S. are non- or minimally speaking with respect to verbal language (mv*), including but not limited to people with autism spectrum disorder (ASD), Down syndrome (DS), and other genetic disorders. Mv* individuals communicate richly through vocalizations that do not have typical verbal content, as well as through gestures, augmentative and alternative communication (AAC), and other modalities. Some vocalizations have self-consistent phonetic content (e.g., “ba” to mean “bathroom”), while others vary in tone, pitch, and duration depending on the individual’s intended communication and affect.
We present, to our knowledge, the first project studying communicative intent and affect in naturalistic vocalizations without typical verbal content from mv* individuals. Parents of mv* children whom we interviewed cited miscommunication with people who do not know their child well as a major source of stress. Our long-term vision is a device that helps others better understand and communicate with mv* individuals, built by training machine learning models on primary caregivers’ unique knowledge of the meaning of an individual’s nonverbal communication. Our current focus is developing personalized models that classify vocalizations using in-the-moment live labels provided by caregivers through the Commalla labeling app. As part of this work, we are developing scalable methods for collecting and live-labeling naturalistic data, along with processing methods that prepare those data for machine learning algorithms. We are currently piloting and refining our data collection, machine learning models, and vision with a small number of families through a highly participatory design process.
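As a rough illustration of what a personalized vocalization classifier could look like, the sketch below trains a per-individual model from caregiver-labeled audio clips. This is not the project’s actual pipeline: the label set, clip-level MFCC features, and random-forest model here are illustrative assumptions only.

```python
# Hypothetical sketch of a personalized vocalization classifier.
# Assumes short, caregiver-labeled audio clips for a single individual;
# the label categories and model choice are illustrative, not Commalla's.

import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

def clip_features(path, n_mfcc=13):
    """Summarize one audio clip as mean/std MFCCs (a simple fixed-length feature)."""
    y, sr = librosa.load(path, sr=None)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

def train_personalized_model(clip_paths, labels):
    """Train a classifier on one individual's labeled vocalizations."""
    X = np.stack([clip_features(p) for p in clip_paths])
    y = np.array(labels)  # e.g., caregiver labels such as "request" or "frustration"
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.25, stratify=y, random_state=0
    )
    model = RandomForestClassifier(n_estimators=200, random_state=0)
    model.fit(X_tr, y_tr)
    print(classification_report(y_te, model.predict(X_te)))
    return model
```

Because vocalizations vary widely across individuals, a model like this would be trained separately per person rather than pooled across the cohort.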
Commalla is co-led by Kristy Johnson (email@example.com) and Jaya Narain (firstname.lastname@example.org).
(Note: This project was previously called ECHOS.)
Additional Commalla information: https://commallamit.wixsite.com/commalla
Narain, J.*, Johnson, K.T.*, Ferguson, C., O’Brien, A., Talkar, T., Zhang, Y., Wofford, P., Quatieri, T., Picard, R.W., Maes, P., "Personalized Modeling of Real-World Vocalizations from Nonverbal Individuals," Proceedings of the International Conference on Multimodal Interaction (ICMI), Utrecht, Netherlands, October 2020. (*Co-first authors/Equal contribution)
Narain, J.*, Johnson, K.T.*, O’Brien, A., Wofford, P., Maes, P., Picard, R.W., "Nonverbal Vocalizations as Speech: Characterizing Natural-Environment Audio from Nonverbal Individuals with Autism," Proceedings of the Workshop on Laughter and Other Nonverbal Vocalisations, Bielefeld, Germany, October 2020. (*Co-first authors/Equal contribution)
Johnson, K.T.*, Narain, J.*, Maes, P., Picard, R.W., "Augmenting Natural Communication in Nonverbal Individuals with Autism," International Society for Autism Research (INSAR), Seattle, Washington, May 2020. (*Co-first authors/Equal contribution)
Johnson, K.T.*, Narain, J.*, Ferguson, C., Picard, R.W., Maes, P., "The ECHOS Platform to Enhance Communication for Nonverbal Children with Autism: A Case Study," Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing Systems (Case Studies Track of CHI 2020). (*Co-first authors/Equal contribution)
Narain, J.*, Johnson, K.T.*, Picard, R.W., Maes, P., "Zero-Shot Transfer Learning to Enhance Communication for Minimally Verbal Individuals with Autism using Naturalistic Data," NeurIPS Workshop on AI for Social Good, December 2019. (*Co-first authors/Equal contribution)