Meet the Labbers: Juliana Cherston

In our ongoing audio series, Meet the Labbers, we hear people from all roles across the Media Lab talk about what they do and why they do it.

Today, meet Juliana Cherston.

"My name is Juliana Cherston and I am a master’s student in the Responsive Environments group at the MIT Media Lab. I come from a physics background and my interest in joining the Media Lab was to do something a bit more open-minded and creative with physics. I always had (especially in high school and college) this feeling of physics being a thing that transcends human existence. I always had this feeling that, like, okay if humanity didn’t exist then I presume the laws of physics still would, and that there’s this really grand draw to understanding this thing greater than us. And I also had just a fantastic high school physics teacher.
Here, I have been working up until now in the field of sonification—the representation of data as sound, for both practical and aesthetic purposes. But I also liked engineering, and computer programming, and philosophy … so there were a lot of things at play, which is why I came to a place like the Media Lab.

One transition I experienced, which is perhaps what brought me here, is that back in, let’s say, high school, I really believed that physics would lead me to the truth. That was the closest thing I had to really understanding the truth. And what you learn more and more in college is that actually we’re just creating models—they seem to accurately predict things that happen in the universe. But are they really the truth? Can we actually access the truth? These are really deep philosophical questions that one might pose. In that way, we are the ones making the models … for science.”

Music: “Tranquil Piano” by the Quantizer team at the MIT Media Lab. They’ve created Quantizer, a sonification engine that, in real time, converts particle collision data into music. To find out how it works, see this story.
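At its simplest, sonification means mapping data values onto audible parameters such as pitch. The sketch below is a generic illustration of that idea in Python, not Quantizer's actual algorithm; the sample "collision energy" values are hypothetical.

```python
# A minimal sketch of sonification: linearly mapping data values to pitches.
# This illustrates the general idea only; Quantizer's real-time pipeline is
# far more sophisticated.

def sonify(values, low_hz=220.0, high_hz=880.0):
    """Linearly map each data value to a frequency in [low_hz, high_hz]."""
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0  # avoid division by zero for constant data
    return [low_hz + (v - lo) / span * (high_hz - low_hz) for v in values]

# Hypothetical particle-collision energy readings (GeV)
energies = [13.2, 45.0, 7.8, 91.5, 30.1]
print([round(f, 1) for f in sonify(energies)])
# → [262.6, 513.3, 220.0, 880.0, 395.8]
```

The lowest reading lands on the lowest pitch and the highest on the highest, so relative differences in the data become audible as melodic intervals.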