Many of us enjoy making, changing, and sharing digital media, yet we are often unsure how to use or present our collections. This project lets users focus on a higher-level aspect of media manipulation: controlling the structure by which images and sounds come together. Working with its users, the Emonic Environment organizes audio, video, and text into a network while continuously suggesting manipulations that can be applied to the network's elements. The system operates in real time, with or without human guidance: participants can contribute and edit media, or interact solely at the structural network level, leaving low-level control to the system's algorithms. The system and its content can be controlled by mouse and keyboard, microphones, cameras, cell phones, MIDI controllers, sensors, and third-party interfaces.