The Multisensory Intelligence research group studies the foundations of multisensory artificial intelligence to create human-AI symbiosis across scales and sensory mediums. The new AI technologies we develop learn and interact with the world by integrating diverse sensory channels, significantly increasing their capability and flexibility. Our group also draws on multidisciplinary backgrounds to integrate the theory and practice of AI into many aspects of the human experience, including enhancing digital productivity, developing new technologies for creative expression, and improving holistic health and wellbeing. Finally, we carefully consider the responsible deployment of AI technologies, including quantifying and mitigating societal concerns around bias, fairness, and privacy, engaging in participatory co-design with stakeholders, and developing policies for the real-world deployment of AI foundation models.
Our group regularly publishes in ML, AI, NLP, computer vision, and HCI journals and conferences; our latest publications are listed at https://pliang279.github.io/. Ongoing projects are highlighted in the 'Projects' tab on the left panel of this page.