On September 21st, 2024, we concluded a year-long collaboration with GRAMMY-winning keyboardist Jordan Rudess, culminating in a sold-out performance at the MIT Media Lab. During the performance, Rudess improvised on six distinct pieces alongside the jam_bot, a novel real-time generative AI system that uses music language models to support collaborative free improvisation. To manifest and visualize the system's generated output, we constructed a 16-foot-tall kinetic sculpture integrated into the live stage performance. A custom-trained, multimodal generative model for Music-to-Motion Mapping, together with a series of Pattern-based Mappings, enabled the AI system to exhibit an expressive and communicative stage presence. In the sections that follow, we detail the design of our system and the mapping strategies used to convey the AI's presence visually.