
Media Lab Immersion Session @ Tokyo

Image by Kanenori from Pixabay


The event is reserved for Media Lab members, members of our extended community, and specially invited guests only.

We are delighted to invite our members to an exclusive event featuring MIT Media Lab faculty, postdoctoral associates, and students from five different research groups. These teams are pioneering robotics and AI projects with applications in everyday life, health, and creativity.

Renowned for its interdisciplinary approach, the MIT Media Lab fosters innovation by blending diverse fields of research. Through this event, we aim to offer fresh insights and explore potential opportunities for collaboration and business applications.

Schedule:

13:30-14:00 | Registration + Check-in
14:00-14:05 | Welcome + Opening
14:05-17:20 | Presentations by the Media Lab
       14:05-14:25 | Valdemar Danry, Fluid Interfaces
       14:25-14:45 | Michelle Kim, Fluid Interfaces
       14:45-15:05 | Lucy Zhao, Multisensory Intelligence
       15:05-15:25 | Jocelyn Shen, Personal Robots
       15:25-15:40 | Short Break
       15:40-16:00 | Cedric Honnet, Responsive Environments
       16:00-16:20 | Kristen Edwards, MIT Mechanical Engineering
       16:20-16:40 | Annika Thomas, MIT Mechanical Engineering
       16:40-17:00 | Michael Fernandez, Biomechatronics + MIT Mechanical Engineering
       17:00-17:20 | Professor Joe Paradiso, Responsive Environments 

17:20-17:25 | Session Closing
17:25-18:30 | Social Session
18:30 | Adjourn

Location:

GLOBAL LIFESCIENCE HUB
Nihonbashi Muromachi Mitsui Tower, 7th Floor
3-2-1 Nihonbashi Muromachi, Chuo-ku, Tokyo 103-0022

  • Directly connected to Mitsukoshimae Station on the Tokyo Metro Ginza and Hanzomon Lines
  • Directly connected to Shin-Nihombashi Station on the JR Sobu Line
  • A nine-minute walk from the southern exit of Kanda Station on the JR Yamanote, Keihin Tohoku and Chuo Rapid Lines
  • A nine-minute walk from the Nihombashi exit of Tokyo Station on the JR Yamanote, Keihin Tohoku and Chuo Rapid Lines


We look forward to welcoming you, sharing insights into the innovative work happening at the Media Lab, and discussing potential collaborations that could drive impactful applications in your field.

Below is the sign-up sheet. Kindly use your company email address to verify your identity. Please note that members will be given priority for this session, and registration will close once we reach capacity.

For further inquiries, please contact Mirei Rioux (mirei@mit.edu).

Participants

Valdemar Danry, PhD Candidate, MIT Media Lab (Fluid Interfaces)

Specialty: #AI #Cognition #Human-AI-Interaction #Reasoning #WearableAI #CriticalThinking

"AI as a Cognitive Copilot: Designing Tools that Make Us Smarter"

  • Description: AI systems are becoming our daily companions, answering questions, shaping choices, and in subtle ways taking over our thinking. Left unchecked, they risk making us passive, dependent, and less able to reason for ourselves. But what if these same technologies could do the opposite: help us become sharper thinkers, more curious learners, and wiser decision-makers? In this talk, Valdemar Danry (MIT Media Lab) presents large-scale studies revealing where today’s AI can undermine us, as well as new chat interfaces, wearables, and real-time signals that help people reason more clearly, question more deeply, and learn faster.

Michelle Kim, Graduate Student, MIT Media Lab (Fluid Interfaces)

Specialty: Human-Centered AI, AI for Health and Wellness, Human-Computer Interaction

"Meta-Self: Wearable AI Systems for Augmenting Metacognition and Self-Regulation"


Lucy Zhao, PhD Candidate, MIT EECS

Specialty: Large Language Models, Large Multimodal Models, Mechanistic Interpretability, Reasoning

"New Advances in Multisensory Intelligence"

  • Description: The Multisensory Intelligence research group studies the foundations of multisensory artificial intelligence to create human-AI symbiosis across scales and sensory mediums. The new AI technologies we develop learn and interact with the world by integrating diverse sensory channels, significantly increasing their capability and flexibility. Our group draws upon multidisciplinary backgrounds to integrate the theory and practice of AI into many aspects of the human experience, including enhancing our digital productivity, developing new technologies for creative expression, and improving our holistic health and wellbeing. Finally, our group carefully considers the responsible deployment of AI technologies, including quantifying and mitigating real-world societal concerns around bias, fairness, and privacy; co-designing with stakeholders; and developing policies around the real-world deployment of AI foundation models.

Jocelyn Shen, PhD Student, MIT Media Lab (Personal Robots)

Specialty: Natural language processing, human computer interaction, empathy

"Human-AI Interaction for Empathic Communication and Relational Integrity"

  • Description of Talk: Humans rarely navigate the world in isolation – through social interaction and communication, we share who we are with the world, and the world becomes part of us. Current artificially intelligent systems have the power to tailor our social understanding of others, and digital technologies allow us to communicate across both space and time, yet loneliness and apathy are widespread. This talk explores how human-AI interaction with socially intelligent systems can promote empathy in human communication while safeguarding against the weaponization of empathy for emotional manipulation.

Cedric Honnet, PhD student, MIT Media Lab (Responsive Environments)

Specialty: HCI (Human-Computer Interaction), Embedded Systems, Sensing, Wearables, e-Textiles, Miniaturization, Manufacturing 

"FiberCircuits: A Miniaturization Framework to Manufacture Fibers That Embed Integrated Circuits"

  • Description of Talk: While electronics miniaturization has propelled the evolution of technology from desktops to compact wearables, most devices are still rigid and bulky, often leading to abandonment. To enable interfaces that can truly disappear and seamlessly integrate into daily life, the next evolutionary leap will require further miniaturization to achieve full conformability. With FiberCircuits, we offer design and fabrication guidelines for the manufacturing of high-density circuits that are thin enough for full encapsulation within fibers. Our demonstrations include a 1.4 mm-wide ARM microcontroller with sensors as small as 0.9 mm-wide and arrays of 1 mm-wide addressable LEDs, which were woven into our interactive textiles. We provide example applications from fitness to VR, and propose a scalable fabrication process to enable large-scale deployment. To accelerate future research in HCI, we also made our platform Arduino-compatible, created custom libraries, and open-sourced all the materials. Finally, our technical characterizations demonstrate FiberCircuits' durability, thanks to its silicone encapsulation for waterproofness and braiding for robustness. From wearables to insertables or even implantables, we believe that by making miniature circuits accessible to researchers and beyond, FiberCircuits will open possibilities for new scalable interfaces that embody imperceptible computing.

Kristen M. Edwards, PhD Candidate, MIT Mechanical Engineering

Specialty: Artificial Intelligence, Machine Learning, Engineering Design, Manufacturing

"Multimodal AI in Design Evaluation: Statistical Perspectives on Reaching Expert-Level Equivalence"

  • Description: The subjective evaluation of early-stage engineering designs, such as conceptual sketches, has traditionally relied on human experts. However, expert evaluations are time-consuming, expensive, and sometimes inconsistent. Recent advances in vision-language models (VLMs) offer the potential to automate design assessments, but it is crucial to ensure that these AI “judges” perform on par with human experts. This research introduces a rigorous statistical framework to evaluate whether an AI judge's ratings align with those of human experts. Our results show that, with in-context learning and reasoning, VLMs often achieve better agreement with experts than trained novices do. Moreover, for some design metrics, reasoning-enabled VLMs can even achieve lower mean absolute error with experts’ ratings than experts do with each other. These findings suggest that, on certain statistical tests, AI judges are not only approaching expert-expert equivalence but in some cases surpassing it.
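
For readers unfamiliar with this kind of agreement analysis, the following is a minimal, purely illustrative Python sketch of the comparison the talk describes: checking whether an AI judge's mean absolute error (MAE) against expert ratings is as low as the experts' MAE against each other. The ratings below are synthetic placeholders; this is not the speaker's actual framework or data.

    import numpy as np

    # Illustrative only: synthetic ratings, not the speaker's actual data or method.
    rng = np.random.default_rng(0)
    n_designs = 50

    # Hypothetical 1-10 ratings: three human experts and one AI judge.
    experts = rng.integers(1, 11, size=(3, n_designs)).astype(float)
    ai_judge = experts.mean(axis=0) + rng.normal(0.0, 1.0, n_designs)

    def mae(a, b):
        # Mean absolute error between two rating vectors.
        return float(np.mean(np.abs(a - b)))

    # Expert-expert agreement: average MAE over all expert pairs.
    pairs = [(i, j) for i in range(len(experts)) for j in range(i + 1, len(experts))]
    expert_expert = np.mean([mae(experts[i], experts[j]) for i, j in pairs])

    # AI-expert agreement: average MAE between the AI judge and each expert.
    ai_expert = np.mean([mae(ai_judge, e) for e in experts])

    print(f"expert-expert MAE: {expert_expert:.2f}")
    print(f"AI-expert MAE:     {ai_expert:.2f}")
    # An AI-expert MAE at or below the expert-expert MAE is the kind of
    # "expert-level equivalence" signal the talk examines statistically.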

Annika Thomas, MIT Mechanical Engineering

Specialty: Spatial AI, Robotic Perception, Autonomy in Space (#Robotics, #AI)

Talk Title + Description: TBD

Michael Fernandez, PhD Candidate, MIT Media Lab (Biomechatronics)

Specialty: Prosthetics and orthotics, machine learning, biomechanics, robotics

"Cyborg Design: Toward Functional Limb Restoration with Neural Interfaces"

  • Description of Talk: Robotics and machine learning have evolved from rigid, preprogrammed systems into adaptive technologies that learn, predict, and interface directly with the human body. This talk explores advances in restoring upper-limb function after amputation through neuromusculoskeletal interfaces and biomimetic control. The Agonist–Antagonist Myoneural Interface (AMI) and Cutaneous Mechanoneural Interface (CMI) preserve natural proprioceptive and tactile signaling, while direct neural controllers project user intentions into intuitive prosthetic movement. Together, these innovations shift prosthetic devices from mechanical substitutes to natural extensions of the body to restore, and ultimately surpass, human function.

Joe Paradiso, Alexander W. Dreyfoos (1954) Professor in Media Arts and Sciences at the MIT Media Lab (Responsive Environments)

Specialty: Wireless sensing systems, wearable and body sensor networks, smart buildings, environmental sensing systems, energy harvesting, power management, ubiquitous/pervasive computing and the Internet of Things, human-computer interfaces, space-based systems, smart materials, e-textiles, digital twins in virtual worlds, electronic music controllers, electronic music systems, interactive music/media, human augmentation.

Talk Title + Description: TBD 
