
Corsetto: AI-empowered Modular Robotic Garment for Capture and Entrainment of Breathing Techniques

Ozgun Kilic Afsar

This work was recently submitted as a Full Paper to ACM CHI 2023.

Corsetto is a full-stack system design and platform for upper-body haptics. The corset-like garment and its control architecture were developed for respiratory regulation, in this case to mediate skill learning and transfer between a voice teacher and a student.

Using OmniFiber technology, we fabricated a robotic upper-body garment that can capture and stimulate the movement of the muscle groups employed in respiration. Our initial testing was in the context of vocal pedagogy. However, a similar approach could be used to support runners' respiration and recovery before, during, and after practice; to support emotional regulation through deep-pressure feedback on the upper body; or as a mechanical counterpressure (MCP) suit similar to the MIT BioSuit.

The diagram below illustrates the basics of respiratory physiology in singing: (a) diaphragmatic movements and the abdominal muscles during normal breathing, with arrows pointing towards and away from the chest indicating the actions of the internal and external intercostal muscles on the ribs; (b) the upward and outward movements of the ribs during inhalation; (c) the principal muscles of respiration (accessory muscles of respiration are excluded).

Based on the insights from phonation exercises with expert singers, we devised pneumatically-activated ‘modules’ that could be placed on the different parts of the body employed in singing. We decided to work with pneumatics due to their well-understood mechanics, intrinsic compliance with the human body, and attractive characteristics as actuators, such as high-frequency response, high force-to-weight ratio, and large work density.

Corsetto Garment Design

The Corsetto garment's design and fabrication pipeline involves mechanical adjustments for body fitting and machine-embroidered actuated modules for the abdominal, lower back, spinal, left rib, and right rib sections, made using a tailored fiber placement process. The resulting bilayer fabric structure with embedded fiber actuators provides a rich repertoire of soft mechanical stimulation, such as vibrotactile feedback (up to 50 Hz), lateral stretch, compression, and biaxial push and pull motions. To allow for hours of continuous actuation during testing, we drove each garment with three FlowIO modules connected to 0.75 L compressed-air tanks.
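As a concrete illustration, the sketch below models the five actuated sections and the stimulation primitives listed above as simple Python data structures. Only the five module names and the 50 Hz vibrotactile ceiling come from the description above; the class names, pressure limit, and validation rule are illustrative assumptions, not the project's actual control code.

```python
# Hypothetical sketch of Corsetto's module layout and stimulation primitives.
# Pressure values are placeholder assumptions; 50 Hz comes from the text above.
from dataclasses import dataclass

@dataclass
class Module:
    name: str                 # garment section the module is embroidered onto
    max_pressure_kpa: float   # assumed safe ceiling for the fiber actuators

@dataclass
class Stimulus:
    kind: str                 # "vibrotactile" | "stretch" | "compression" | "push_pull"
    frequency_hz: float       # vibrotactile feedback is reported up to 50 Hz
    duration_s: float

MODULES = [
    Module("abdominal", 60.0),
    Module("lower_back", 60.0),
    Module("spinal", 60.0),
    Module("left_rib", 60.0),
    Module("right_rib", 60.0),
]

def validate(stim: Stimulus) -> None:
    """Reject stimuli outside the garment's reported envelope."""
    if stim.kind == "vibrotactile" and stim.frequency_hz > 50.0:
        raise ValueError("vibrotactile feedback is limited to 50 Hz")
```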

Corsetto System Architecture

We developed a system architecture consisting of three main components: the soft robotic garment, a FlowIO-based fluidic control platform with an auxiliary pressure regulator, and a software stack. The FlowIO platform is a miniature pneumatics development platform for control, actuation, and sensing of soft robots and programmable materials. The robotic garment has a modular construct comprising OmniFiber-based fluidic fabric muscle sheets. When the robotic garment is connected to an array of three FlowIO devices, the central controller software orchestrates these modules to enable a range of haptic expressions.
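To make the orchestration concrete, here is a minimal sketch of a central controller fanning actuation commands out to an array of three devices. `FlowIODevice`, the device addresses, and the module-to-port routing are hypothetical stand-ins used only to show the mapping; they are not the actual FlowIO API.

```python
# Minimal orchestration sketch: one central controller, three FlowIO devices.
# All names, addresses, and port assignments below are illustrative assumptions.
class FlowIODevice:
    def __init__(self, address: str):
        self.address = address  # e.g. a BLE address (placeholder)

    def inflate(self, port: int, pressure_kpa: float) -> None:
        print(f"{self.address}: port {port} -> {pressure_kpa} kPa")

    def release(self, port: int) -> None:
        print(f"{self.address}: port {port} released")

# One device per region, mirroring the three-device array described above.
DEVICES = {
    "torso_front": FlowIODevice("AA:00"),
    "torso_back":  FlowIODevice("AA:01"),
    "ribs":        FlowIODevice("AA:02"),
}

# Map each garment module to a (device, port) pair; the assignment is assumed.
ROUTING = {
    "abdominal":  ("torso_front", 1),
    "lower_back": ("torso_back", 1),
    "spinal":     ("torso_back", 2),
    "left_rib":   ("ribs", 1),
    "right_rib":  ("ribs", 2),
}

def actuate(module: str, pressure_kpa: float) -> None:
    """Route a single module command to the device and port that drive it."""
    device_name, port = ROUTING[module]
    DEVICES[device_name].inflate(port, pressure_kpa)
```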

Corsetto Machine Learning for Vocal Gesture Recognition

We explored the use of artificial intelligence (AI). Our aim was both to support the composition process and to automate the translation of vocal gestures into haptic gestures in real time during the live performance. To do so, we collected a dataset of 3,045 voice samples, yielding 125 distinct vocal gestures (from 2 to 13 seconds) from the Feldman score recorded by the singer. In this context, a vocal gesture is defined as a composed musical phrase used in the construction of the piece. These were then transformed into spectrograms, representing the vocal gestures as images of 128x32 pixels, to train an automatic classifier for recognising vocal gestures. A supervised deep machine learning methodology was then applied to train a Convolutional Neural Network (CNN) on these images. CNNs are commonly used to recognise images, but they have to be carefully designed for the particular context of recognising spectrograms of vocal gestures. This vocal gesture recognition was preliminarily tested to explore how machine learning may facilitate dynamic switching between the CNN’s prediction of the vocal gesture it was hearing and what was actually sung by the singer.
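The sketch below shows one way such a classifier could be set up in Keras, matching the shapes described above: 128x32 single-channel spectrogram inputs and 125 gesture classes. The layer sizes and optimizer are assumptions for illustration; the paper's actual architecture may differ.

```python
# Illustrative CNN over 128x32 spectrogram images with 125 vocal-gesture
# classes. Layer widths and training settings are assumptions, not the
# project's reported architecture.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_GESTURES = 125  # distinct vocal gestures in the dataset

model = models.Sequential([
    layers.Input(shape=(128, 32, 1)),  # spectrogram: frequency x time x 1
    layers.Conv2D(16, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(2),
    layers.Conv2D(32, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(2),
    layers.Conv2D(64, 3, activation="relu", padding="same"),
    layers.GlobalAveragePooling2D(),
    layers.Dense(128, activation="relu"),
    layers.Dense(NUM_GESTURES, activation="softmax"),  # class probabilities
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```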

Figure (a) shows examples of spectrograms from 4 samples representing different audio gestures to recognise. Figure (b) shows an output example of the trained model for audio gesture 3-1-23, showing the recognised gestures and corresponding expressions to actuate, ranked by probability. The audio gesture 3-1-23 is correctly recognised by the model with a probability of 50%, and the refracted expression of the wind lift theme will be actuated.
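The ranking step in Figure (b) could look like the sketch below: sort the model's class probabilities, and fire the haptic expression mapped to the top gesture if it clears a confidence threshold. The gesture-to-expression table and the 0.4 threshold are illustrative assumptions; only the gesture label 3-1-23 and its wind-lift expression come from the example above.

```python
# Sketch of ranking gesture predictions and selecting a haptic expression.
# The mapping and threshold are assumptions for illustration.
import numpy as np

GESTURE_TO_EXPRESSION = {
    "3-1-23": "wind_lift_refracted",  # from the Figure (b) example above
}

def rank_and_select(probs: np.ndarray, labels: list[str], threshold: float = 0.4):
    """Return (expression_or_None, ranked gesture/probability pairs)."""
    order = np.argsort(probs)[::-1]  # highest probability first
    ranked = [(labels[i], float(probs[i])) for i in order]
    top_label, top_p = ranked[0]
    if top_p >= threshold:
        return GESTURE_TO_EXPRESSION.get(top_label), ranked
    return None, ranked              # below threshold: no actuation
```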

Corsetto Case Study: An Immersive Live Opera Performance

To evaluate our system design, we staged two live performances of our haptic quartet based on Feldman’s original Three Voices score. For both performances, Corsetto’s Composer tool was used for haptic scoring and orchestration of the garments. Each performance lasted 30 minutes, preceded by 15 minutes for the donning and doffing of the corsets. After the performances, we documented the audience's experiences through micro-phenomenological interviews.

The image below shows one of the audience members wearing the Corsetto after the first performance: (a) side view at rest; (b) side view with the rib and abdominal modules actuated; and (c) front view with the actuated rib and abdominal modules compressing the listener's torso by ~18%.