Our research focuses on expanding the use of multimodal AI in education beyond the common modalities of text and speech. We are exploring how other sensory inputs, such as vision and touch, can create richer, more engaging learning experiences for a wide range of learners and environments: from K-12 students to senior citizens, and from classrooms to industry. We are also investigating how multimodal AI can support learning in creative domains such as art and music, making these practices more intuitive and accessible for diverse learning communities.