health
art
human-machine interaction
learning
artificial intelligence
robotics
architecture
design
technology
consumer electronics
kids
music
wearable computing
networks
politics
bioengineering
entertainment
cognition
economy
human-computer interaction
data
machine learning
history
archives
social science
storytelling
sensors
interfaces
wellbeing
space
environment
covid19
computer science
developing countries
prosthetics
engineering
privacy
social robotics
ethics
civic technology
social media
civic media
imaging
public health
communications
synthetic biology
urban planning
augmented reality
neurobiology
biology
affective computing
virtual reality
computer vision
transportation
community
industry
energy
biomechanics
data visualization
social change
food
alumni
government
biotechnology
zero gravity
ocean
3d printing
creativity
racial justice
medicine
agriculture
blockchain
genetics
manufacturing
data science
gaming
prosthetic design
women
behavioral science
construction
fashion
materials
social networks
banking and finance
open source
systems
cryptocurrency
security
crowdsourcing
climate change
diversity
collective intelligence
fabrication
makers
wiesner
language learning
cognitive science
internet of things
bionics
extended intelligence
performance
neural interfacing and control
perception
human augmentation
startup
interactive
autonomous vehicles
natural language processing
ecology
civic action
marginalized communities
nonverbal behavior
mapping
physiology
visualization
clinical science
physics
holography
long-term interaction
gesture interface
microfabrication
mechanical engineering
networking
trust
point of care
orthotic design
water
sports and fitness
autism research
primary healthcare
electrical engineering
voice
microbiology
hacking
member company
pharmaceuticals
rfid
nanoscience
clinical trials
mechatronics
trade
academia
member event
open access
gender studies
soft-tissue biomechanics
code
chemistry
real estate
randomized experiment
publishing
biomedical imaging
gis
event
exhibit
cartography
metamaterials
installation
Algorithmic auditing has emerged as a key strategy to expose systematic biases embedded in software platforms, yet scholarship on the imp...
The Gender Shades project pilots an intersectional approach to inclusive product testing for AI. Algorithmic Bias Persists. Gender Shades is...
Looking beyond smart cities
Advancing human wellbeing by developing new ways to communicate, understand, and respond to emotion
One of the principal benefits of counterfactual explanations is allowing users to explore "what-if" scenarios through what does not and c...
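A counterfactual "what-if" search can be sketched in a few lines: starting from an input the model rejects, nudge a feature until the decision flips, yielding the counterfactual "what would need to change." The toy loan model, feature names, and step size below are illustrative assumptions, not the project's actual method.

```python
def predict_approval(income, debt):
    """Toy decision model: approve when income sufficiently exceeds debt."""
    return income - 0.5 * debt > 40

def counterfactual_income(income, debt, step=1.0, max_steps=1000):
    """Smallest income increase (in `step` increments) that flips the
    decision to 'approve'; returns None if no flip is found."""
    candidate = income
    for _ in range(max_steps):
        if predict_approval(candidate, debt):
            return candidate
        candidate += step
    return None

# An applicant with income 30 and debt 20 is denied; the search reports
# the income at which the toy model would instead approve.
flip_point = counterfactual_income(30, 20)
```

The returned value answers the user's "what-if" directly: raising income to the flip point (and changing nothing else) would change the outcome.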
Automatic emotion recognition has become a well-established machine learning task in recent years. The sensitive and subjective nature of...
In collaboration with Massachusetts General Hospital, we are conducting a clinical trial exploring objective methods for assessing depres...
Inventing, building, and deploying wireless sensor technologies to address complex problems in society, industry, and ecology
Making the invisible visible (inside our bodies, around us, and beyond) for health, work, and connection
Research studies led by Dr. Shah in his laboratory have created new paradigms for using low-cost images captured using simple optica...
Photorythms: a computational art-based inquiry of portrait photography. Photorythms investigates whether computational methods such as faci...
By predicting sparse shadow cues, our physics-inspired machine learning algorithm can reconstruct the underlying 3D scene. Abstract....
Transforming data into knowledge
Creating scalable technologies that evolve with user inventiveness
In photography, illuminating a subject directly using an on-camera flash can result in unflattering photos. To avoid this, photographers...
Video has evolved from an esoteric production into the default means of communication, both online and in broadcast. We accept a bad or leng...
Depending on the operational environment of an autonomous system, a great deal of perceptual uncertainty may be introduced to an object d...
Navigation for autonomous UAVs (unmanned aerial vehicles) is a complex problem, and physical field testing of associated tasks introduces ...
Many tasks are not easily defined and/or too complex for supervised machine learning approaches. For these reasons, a technique known as ...
Technical summary: The future of clinical development is on the verge of a major transformation due to the convergence of large new digital data ...
Promoting deeper learning and understanding in human networks
Extending expression, learning, and health through innovations in musical composition, performance, and participation
Research studies led by Dr. Shah, in collaboration with the United States Food and Drug Administration (FDA), have develo...
An exploration of how advances in deep learning and generative models can be used to help us synthesize our ideas.
Image2Reverb: Cross-Modal Reverb Impulse Response Synthesis. Nikhil Singh, Jeff Mentch, Jerry Ng, Matthew Beveridge, Iddo Drori; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2021, pp. 286-295
Consider a small object sitting on a desk in your living room. The object is illuminated by light sources from all directions—this includ...
What is ELSA? ELSA is an AI-powered chatbot that acts as an empathetic companion, encouraging users to talk about their day through a form...
Tara Boroushaki, Isaac Perper, Mergen Nachin, Alberto Rodriguez, Fadel Adib. "RFusion: Robotic Grasping via RF-Visual Sensing and Learning." ACM Conference on Embedded Networked Sensor Systems 2021.
RFusion is a robotic system that can search for and retrieve items in line-of-sight, non-line-of-sight, and fully occluded settings. It c...
Boroushaki, Tara, et al. "Robotic Grasping of Fully-Occluded Objects using RF Perception." IEEE International Conference on Robotics and Automation (ICRA 2021).
MIT City Science is working with HafenCity University to develop CityScope for the neighborhood of Rothenburgsort in Hamburg, Germany. Th...
Tangible Swarm is a tool that displays relevant information about a robotics system (e.g., multi-robot or swarm) in real time wh...
The WSJ covers research from the Signal Kinetics group and other labs, universities, and companies working on AI-enabled devices.
#ZoomADay was a year-long project exploring the creation and use of synthetic characters and deepfakes for use in online telepresence an...
The goal of this project is to develop techniques to remove identifying information from wearable and phone data to protect patients’ pri...
Akram Bayat, Connor Anderson and Pratik Shah. "Automated end-to-end deep learning framework for classification and tumor localization from native nonstained pathology images," Proc. SPIE 11596, Medical Imaging 2021: Image Processing, 115960A (15 February 2021); doi: 10.1117/12.2582303
Can robots find and grasp hidden objects? Robots are not capable of handling tasks as simple as restocking grocery store shelves as ...
System uses penetrative radio frequency to pinpoint items, even when they’re hidden from view.
Camera Culture head Ramesh Raskar talks to Rashmi Mohan about his interdisciplinary research and entrepreneurial endeavors.
View the main City Science Andorra project profile. Research in dynamic tools, mixed users (citizens, workers), amenities, services, and land...
We built a low-cost and open source 405 nm imaging device to capture red fluorescence signatures associated with the oral biomarker porph...
General overview: Sepsis, a life-threatening complication of bacterial infection, leads to millions of deaths worldwide and requires significa...
Imaging fluorescent disease biomarkers in tissues and skin is a non-invasive method to screen for health conditions. We report an automat...
We report a novel method that processes biomarker images collected at the point of care and uses machine learning algorithms to provide a...
The full text of our paper is available here. Sometimes the thing that we want to see is hidden behind something else. A neighboring vehi...
Henley, C., Maeda, T., Swedish, T., & Raskar, R. (2020). Imaging Behind Occluders Using Two-Bounce Light. Computer Vision – ECCV 2020 Lecture Notes in Computer Science, 573-588. doi:10.1007/978-3-030-58526-6_34
The startup OpenSpace is using 360-degree cameras and computer vision to create comprehensive digital replicas of construction sites.
Changing storytelling, communication, and everyday life through sensing, understanding, and new interface technologies
Computer vision uncovers predictors of physical urban change
www.ajl.org. An unseen force is rising, helping to determine who is hired, granted a loan, or even how long someone spends in prison. This f...
The relationship between news content and its presentation has been a long-studied problem in the communications domain. Often, chan...
City Science researchers are developing a slew of tangible and digital platforms dedicated to solving spatial design and urban planning c...
Two-dimensional radiographs are commonly used for evaluating sub-surface hard structures of teeth, but they have low sensitivity for earl...
The Electome: Where AI meets political journalism. The Electome project is a machine-driven mapping and analysis of public sphere content a...
A.I. systems are shaped by the priorities and prejudices…of the people who design them, a phenomenon that I refer to as "the coded gaze."
This project depicts the design, deployment and operation of a Tangible Regulation Platform, a physical-technological apparatus made for ...
This is an open source geospatial exploration tool. Using various public APIs including Open Street Map and the United States Census, we ...
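A geospatial lookup against a public OpenStreetMap API can be sketched as below. This is a minimal illustration, assuming the Overpass API as the query endpoint; the tag filter and coordinates are invented for the example and are not the tool's actual queries.

```python
# Build an Overpass QL query for OpenStreetMap nodes matching a tag
# within a radius of a point; the result would normally be POSTed to
# the public Overpass endpoint as the `data` form field.
OVERPASS_URL = "https://overpass-api.de/api/interpreter"

def build_overpass_query(key, value, lat, lon, radius_m):
    """Return an Overpass QL string selecting nodes with key=value
    within radius_m meters of (lat, lon), as JSON output."""
    return (
        "[out:json];"
        f'node["{key}"="{value}"](around:{radius_m},{lat},{lon});'
        "out body;"
    )

# Illustrative query: libraries within 800 m of a point in Cambridge, MA.
query = build_overpass_query("amenity", "library", 42.3601, -71.0942, 800)
```

Sending `query` to `OVERPASS_URL` (e.g. via `urllib.request` with `data=urllib.parse.urlencode({"data": query}).encode()`) would return matching nodes as JSON; building the query separately from the request keeps the example testable offline.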
MIT researchers have developed a system that can produce images of objects shrouded by fog so thick that human vision can’t penetrate it.
Seeing through dense, dynamic, and heterogeneous fog conditions. The technique, based on visible light, uses hardware that is similar to ...