
Tales from Virtuality – Research under quarantine at the MIT Media Lab

Photo: Joe Paradiso

By Joe Paradiso, ETH Zurich alumnus and Professor at MIT

The COVID-19 causality cascade began for me at the end of February, as northern Italy began to encounter the pandemic. I had just returned to Boston from a week in that region, where I gave a series of lectures and took a few days of vacation. Watching the crisis build there shortly after returning, any cough or congestion felt like a significant symptom, but plenty of more benign explanations were already going around. It seemed we didn’t bring it back with us – only great memories, a few bottles of good Italian wine, the usual souvenirs, and many CDs of edgy Italian rock and jazz, as I collect unusual music everywhere I go, a habit dating to my days of living in Switzerland four decades ago. But the virus was already finding other ways to the Boston area. The first ‘superspreading’ event here appears to have been the Biogen Annual Leadership Meeting that took place a week after we returned.

One of my faculty colleagues at the Media Lab is a biochemist who has spent much of his professional life focused on infectious disease, hence we had a strident early forecast of what was quickly coming. As early as the beginning of March, we encouraged our staff and students to work remotely where possible, and the bulk of MIT followed suit a week or two later. By the time the clock struck midnight on Tuesday, March 17, the MIT Media Lab building was locked down, together with most of MIT. All MIT classes were canceled for the week of March 16, and became entirely virtual following the end of Spring Break a week later. This gave faculty and instructional staff two weeks to move everything online. Tools that some of us had explored via MIT’s Stellar, MITx, and OpenCourseWare platforms were pushed into shotgun marriages with Zoom, Jitsi, Google Hangouts, and other scalable videoconferencing platforms, and teaching at MIT went all virtual by March 30. I would normally be teaching my flagship Sensors class this term, but deferred, as I carried extra teaching responsibilities last term. This class would have been very difficult to fully virtualize, in that it involves extensive hands-on labs and a hardware-intensive final project. My colleagues teaching project-based classes at MIT and other universities that involve hardware are having components shipped to students at home and running virtual critiques. On the other hand, my friends in the MIT Physics Department teaching their famous ‘Junior Lab’ class (somewhat similar to the Physik Praktikum class I used to teach at ETH) count themselves fortunate in that the students had already taken most of the data they needed and could focus on analysis (even at MIT, it’s difficult to send X-ray machines, radioactive sources, NMR gear, etc. to students’ homes).

Photo: Joe Paradiso and the students of the Responsive Environments group

At MIT, as at our sister technical universities across the world, research never stops. Hence, I needed to quickly virtualize my team of circa 20 students and researchers who had come to work with me from around the world to keep our projects going. Much of the research in my Responsive Environments group involves hardware, and we house a world-class electronics lab that we (and many other groups in the building) rely on. Accordingly, we started moving equipment out of the building the week before shutdown, and my students cracked deals with one-another as to who would be the custodian of what. Impromptu labs cropped up in corners of my team members’ apartments – as you can see in the photo montage of our at-home work zones, the students are sharing their living space with 3D printers, re-flow ovens, electronics test/assembly stations, embedded system development suites, GPU arrays for deep learning (which also help to heat their apartment), VR systems, and even electronic music gear (several of them are also musicians and involve audio mappings in their work). Accordingly, much of our physical work was able to continue, despite its diversity – for example, we are flying two experiments on the International Space Station (one just finished and is entering data analysis, while the other has just shipped to our collaborators at JAXA in Japan to prepare for launch) and we are about to manufacture 20 pairs of sensor-laden wireless glasses with one of our industrial partners that are designed to measure features related to the wearer’s attentional state. A few of my students in collaboration with other Media Lab teams are prototyping open-source systems at home to thwart tactile infectious paths for COVID-19. 
These include ultra-low-cost wearable sensors based on ultrasound or magnetic sensing that deliver a warning if your hands approach your face, and an arm-mounted sanitizer sprayer that activates automatically when your fingers approach a surface and/or you make a special gesture.

As we have many projects centered on Human-Computer Interaction (HCI), the disruption to user studies has hit us much harder. Getting significant numbers of people into a common space to share an apparatus (be it a HoloLens running an intelligent AR tour guide or a smart fabric interface) will most likely not be possible before theses are due, so my students and I are creatively brainstorming triage. Much of the Media Lab runs on support from our industrial members, who visit twice yearly for huge meetings where everybody proudly shows their latest demos. The atmosphere is always electric during these key events, but alas, as they can’t be held now in the physical world, we are scrambling to think of ways to preserve the excitement in a virtual setting.

Photo: Joe Paradiso

The change in my own routine has been drastic. Before March, travel was a large part of my life. Like many senior academics, I would often be on a plane to a committee meeting or lecture in different corners of the world, and when local, rushing to meetings all over campus. Now, I’m always home. This constraint has led to a different level of productivity, however – the pile of theses, papers, and proposals that I need to write/review/revise is starting to diminish, and I’ve finally been able to (nearly) complete my home music synthesizer studio, a task that has been years in the waiting. To keep my team coherent, we schedule Zoom meetings as needed, and have all-group hangouts on alternate Fridays. The remaining Fridays, however, are entirely dedicated to one-on-one research meetings with all of my team members – although this is a real marathon, these are perhaps among the most stimulating and fulfilling days that I’ve ever had.

In the physical world, there are too many things that can distract and interfere with or defer this kind of meeting, whereas in the virtual sphere it’s purely an intense 20-30-minute session geared around ideas, concepts, strategy, and progress. We’re fortunate to attract great talent into our research and academic programs at MIT, and these meetings always reaffirm that for me.

Our Zoom-based life has evolved in strange ways as I see my faculty colleagues morph on screen. Our hair is all getting long, and some are growing new beards. Originally, we kept the physical world in the background – we would be voyeurs into each other’s living rooms, studies, kitchens, or even outdoor decks when weather allowed. Now you are more likely to see somebody’s ceiling or a photo backdrop, ranging from alpine settings to cloudscapes, which we change like souvenir T-shirts. After seeing some of my colleagues in MIT administration sporting Marscapes taken from JPL rovers as background, I decided to delve into fantasy and appear against a view of a settled Mars from The Expanse (an excellent neo-space-opera TV series that I have devoured during quarantine), Rigel 7 from the original Star Trek, and have even scanned some antique prints showing idyllic European landscapes from centuries ago that I bought while living in Switzerland and can now inhabit. As I very much miss the experience of attending concerts, one of my favorite backdrops is a photo I took during a Hawkwind show at the wonderful Roundhouse in London while in town to give an EE lecture at Imperial College a few years ago.

I’ve hosted and attended all manner of important meetings barefoot now. Before I introduced one of my PhD students at his Zoom thesis defense last month, I lifted my foot to my camera to underscore that fact – this was mildly appropriate, as his work centered on re-rendering audio from dense microphone arrays in the real world so you could drop the listener flawlessly into an analogous virtual environment, and I wanted to emphasize how prevalent virtuality has become. The popular analogy of us all living in a ‘Science Fiction’ world now hits home when we cower at human proximity and cocoon around our virtual monitors. Perhaps an early harbinger can be found in E.M. Forster’s 1909 story ‘The Machine Stops,’ but I immediately think of scenes from Isaac Asimov’s 1956 novel ‘The Naked Sun’, which I devoured in elementary school.

Much of the work in my research team over the past 15 years has pivoted around different ways of connecting people to information streaming from sensors embedded increasingly everywhere, and how this can change the nature of presence – a topic that has now attained immediate relevance. A decade ago, we installed cameras, speech-obfuscating streaming microphones, and other sensors throughout our Media Lab building complex for research on distributed and remote interaction that culminated in our DoppelLab project. A forerunner of what is now commercially termed a ‘digital twin’, it let visitors roam through our virtual building from anywhere and see/hear real-time sounds and stimuli tunneling in from corresponding locations on the physical site.

Photo: Gershon Dublon

As some of these cameras and sensors are still functional, we opened up their streams for Media Lab members to view as ambient background. Seeing a day pass in our nearly empty complex reminds us of our common home and rejuvenates the promise of our pending return.

More popular, however, are the live media streams from our Tidmarsh project, where we distributed cameras, microphones, and hundreds of wireless sensors throughout a restored wetland wildlife sanctuary in Plymouth, an hour’s drive south of Boston, to support ecological research in addition to exploring new frontiers in virtualization. Here, from the quarantined confines of your home anywhere in the world, you can connect to a beautiful natural landscape, either live or via virtual immersion. I still spend hours with one of my screens tied to the Herring Pond, listening to the geese, birds, insects, and frogs, while hoping for a glimpse of the internet-star heron hanging out there, who often appears to be hamming it up for the camera. Our recent Mediated Atmospheres project has developed rooms that automatically transform between natural settings via rendered lighting, projected imagery, and audio according to how residents react to them – as we are all cooped up at home around screens, this initiative has ever more relevance.

Photo: Brian Mayton

Although we can virtually connect to places in different ways that yield a degree of satisfaction, connecting to people presents entirely different challenges. Our current lives spent staring at the flat Hollywood-Squares montages offered by Zoom and other online conferencing platforms take an exhausting toll. For small meetings with only a few people, these experiences can work, but larger groups break down, inducing what I see as a Zoom-induced paranoia. Our brains are built to pay special attention to faces, but we can’t properly process a panoply of faces staring somewhere vaguely at us from a common flat screen. When should we break into a conversation, and what kind of reaction are we really getting? How can I whisper to a neighbor or naturally drift into a separate conversation with a side group of people, like I would at a party or reception? I’ve found it quite disconcerting, for example, when talking on Zoom to see stock photos of colleagues (usually smiling) who have video off juxtaposed with live video feeds of people showing real expressions and reactions – the comparison can make you think that the real-streaming people aren’t happy, even though their expressions are at least neutral. Media Lab classes are usually very engaging with lots of discussion – several of my colleagues who are teaching this term have noted the onus of Zoom-style classes increasingly wearing down student enthusiasm as the term has progressed.

As our collective nerves start to fray from this unnatural social/cognitive overload, it becomes obvious that there is tremendous research opportunity in how we can appropriately represent the nuance of human presence in a way that naturally scales. Can we also virtualize the serendipitous and spontaneous interactions between people at workplaces, schools, city centers, etc. that work as a subliminal semantic glue to bind us together and establish a shared identity? And what about my very-much-missed experience of being out at a concert? Watching a video stream, even on a magnificent TV with amazing sound, isn’t the same as actually being there in the pungent press of humanity who collectively share and amplify their excitement via subtle signaling that we still barely understand.

I remember the popular heyday of shared 3D VR worlds like Second Life a good decade or more ago, and how major enterprises like IBM bet heavily on them as the future of teleconferencing. Yes, it was early, and we weren’t quite ready for Cyberspace then – the primitive rendering, latency issues, lack of quality VR/AR platforms, etc. restricted these environments mainly to dedicated knots of users doing immersive gaming or pursuing somewhat niche experiences. Now, however, we see the importance of abstracting human interaction, and the underlying technology is much more capable than it was then. Those of us working in HCI have scrabbled at this lock for a while, but the field of remote collaboration and abstract presence is poised for a renaissance. We have collectively changed through the COVID-19 experience, and when we return to work, the virtualization needle won’t entirely reset.

Photo: Nan Zhao

Like many others working in embedded sensing, the excitement I felt at the heyday of Ubiquitous Computing and the Internet of Things has given way to concern as we start seeing elements of this infrastructure used in alarming ways. I just finished an extended Guest Editors’ Introduction for the upcoming edition of IEEE Pervasive Computing Magazine about this, as the issue focuses on the twin sides of attention that form crises in our networked world – unwanted attention paid to me [surveillance] vs. my own attention being unwittingly diverted [manipulation]. Although the networked cameras and sensors that are rapidly filling the world have invited this crisis, they have also provided the means of holding us together in isolation, and may pave the road back to normal over the next months as we leverage that information to trace potentially infected people through location monitoring and networked temperature sensing, for example.

We are living in an exceptional time that has stressed our personal, professional, cultural, and economic systems. But it has also provided us a different view of where humanity is heading, highlighting even more perils, yet also unveiling fresh promises and new opportunities. I look forward to seeing the research communities of the world band together to bring humanity past the COVID-19 era and into an even brighter future.
