The Multisensory Intelligence research group studies the foundations of multisensory artificial intelligence to create human-AI symbiosis.
Dell Technologies and MIT explore AI’s evolving business impact—offering insights and advice from top industry thought leaders.
Media Lab researchers awarded MGAIC grants for exploring AI decision-making and human-AI musical collaboration.
Pattie Maes + Pat Pataranutaporn explore how AI companions shape us—and how to design them to support human wellbeing.
Studies suggest benefits as well as harms from digital companion apps — but scientists worry about long-term dependency.
New research raises concerns that emotional reliance on AI chatbots could deepen social isolation for some users.
Heavier use of chatbots like ChatGPT may correlate with increased loneliness and less time spent socializing with other people.
Or are they kidding us into thinking they are looking out for our interests?
OpenAI and MIT’s Media Lab have concluded that using ChatGPT may actually worsen feelings of loneliness for heavy users.
Studies show those who engage emotionally with the bot rely on it more and have fewer real-life relationships.
New research from the Media Lab and OpenAI shows that heavy chatbot usage is correlated with loneliness and reduced socialization.
The results are nuanced, since feelings of loneliness and social isolation often fluctuate and can be influenced by various factors.
A new pair of studies from MIT Media Lab and OpenAI found that frequent chatbot users experience more loneliness and emotional dependence.
New research from OpenAI and MIT finds that ChatGPT could be linked to loneliness for some frequent users.
Marketplace’s Meghan McCarty Carino spoke with Media Lab researcher Cathy Fang about the findings and implications of the study.
News coverage of a research collaboration with OpenAI investigating how chatbots affect users' social and emotional wellbeing
In The New York Times, Jessica Grose considers the potential impacts of chatbots on children and teens.
We’re starting to get a better sense of how chatbots are affecting us—but there’s still a lot we don’t know.
Zero Robotics Invites Teams to Register for the 2025 Middle School Coding Competition with the NASA Astrobee Robots on the International Space Station.
A new collab with the MIT Center for Constructive Communication empowers young people to shape the future of journalism and civic dialogue
There’s an accelerating cat-and-mouse game between web publishers and AI crawlers, and we all stand to lose.
It’s what psychologists call self-continuity, and it can improve your health and well-being.
Technologists say chatbots are a remedy for the loneliness epidemic, but looking to an algorithm for companionship can be dangerous.
“We should think more about how we want to use this technology to help people.”
AI tools have the potential to enhance human capabilities and improve users' ability to make critical decisions.
While many artists view A.I. as a threat to their livelihoods, Media Lab alum Alexander Reben embraces it as a collaborator.
Perspective-aware AI enables the creation of specialized human-AI agent surrogates.
Prof. Maes's research lies at the intersection of human-computer interaction and artificial intelligence.
Organized by the MIT Museum, the 2024 celebration of science, technology, and culture was the largest in its history.
A new program aims to teach them how AI works by getting hands-on.
How could the increased use of AI lead humans to form addictive relationships with AI?
Robert Mahari talks to host Krys Boyd about the potential risks of human-AI companionship.
At MIT, the Open Dance Lab is exploring how to preserve an ancient dance form with the help of artificial intelligence.
Media Lab Professor Deb Roy, Media Lab alum Dr. danah boyd, and other experts consider ways to improve online space and conversations.
A system proposed by researchers from MIT, OpenAI, Microsoft, and others could curb the use of deceptive AI.
Experts warn AI-powered military systems may eventually push battlefield decision-making beyond the limits of human cognition
“Gaze to the Stars” is part of Artfinity, MIT’s Festival for the Arts.
A growing body of research shows how AI can subtly mislead users—and even implant false memories.
Something peculiar and slightly unexpected has happened: people have started forming relationships with AI systems.
AI agents could soon become indistinguishable from humans online. Could “personhood credentials” protect people against digital imposters?
The authors are Roberto Rigobon, the Society of Sloan Fellows Professor of Management at MIT Sloan, and postdoctoral associate Isabella Loaiza.
The CHARCHA process offers a unique level of autonomy to the users who choose to interact with it.
In the struggle over who can train AI models and how, there’s a casualty many people don’t realize: The open web.
“We have to think a lot more carefully about how we design the interaction between people and AI.”
As junk web pages written by AI proliferate, the models that rely on that data will suffer.
After identifying major flaws in popular AI models, researchers are pushing for a new system to identify and report bugs.
New research from the Data Provenance Initiative has found a dramatic drop in content made available to the collections used to build AI.
AI literacy—and a healthy dose of human intuition—can take us pretty far.
This year's list includes work from Media Lab students, faculty members, and alumni.
As more deepfakes find their way onto the internet, we need to find the best way to detect these harmful videos.
Editor’s Note: This blog was written jointly with OpenAI and the MIT Media Lab. It also appears here.
This fellowship supports exceptional doctoral students currently pursuing a PhD in computer science, mathematics, physics, or statistics.
Sometimes, it might be better to train a robot in an environment that’s different from the one where it will be deployed.
MeMic is a wearable recording device developed in the Media Lab’s Fluid Interfaces group.
Professor Deb Roy discusses his approach to creating spaces for listening and understanding in an age of polarization.
Join us as we celebrate the MIT Media Lab's 40th anniversary year in 2025!
Zero Robotics Invites Teams to Apply for the 2025 High School Coding Competition with the NASA Astrobee Robots on the International Space Station.
Media Lab alum + MIT DUSP Prof Catherine D’Ignazio thinks carefully about how we acquire and display data—and why we lack it for many things
MemPal was selected in the Health category.
Acclaimed keyboardist Jordan Rudess’s collaboration with the Media Lab culminates in live improvisation between an AI “jam_bot” + the artist
The theme for this year's competition was Generative Voice AI Solutions.
Professor Pattie Maes chose a career in computer science for practical reasons, but was drawn to the amazing research being done at MIT.
Media Lab researcher Nikhil Singh and alums Aruna Sankaranarayanan and Matt Groh discuss ways that deepfakes may affect the 2024 elections.
Future You, developed by Dr. Pat Pataranutaporn, allows users to explore possible futures by chatting with an older version of themselves.
Professor Alex ‘Sandy’ Pentland was among the experts participating in a panel discussion on how AI may affect the 2024 elections.
In Scientific American, Dr. Pat Pataranutaporn and others consider the ways that art shapes our understanding of and interactions with AI.
Dr. Buolamwini will receive this award at the Smithsonian Museum of African American History and Culture.
Media Lab students and alums are among the winners and honorees for the 2024 MIT Prize for Open Data.
Labber Mark Weber and others are releasing an experimental version of an AI model for detecting money laundering on Bitcoin's blockchain.