About the project:
Max and I had already been collaborating on audio-reactive Unity3D worlds for various performances and events when, in June of 2015, he and Kassi approached me about doing something more immersive for the Toronto show. I had been experimenting with controlling WS2812 addressable LEDs, and we came up with the idea of having physical structures with internal illumination on stage that would be controlled by the Unity3D visualizer.
I ended up creating a tool within Unity that would interpret colors from virtual space and broadcast them to a Raspberry Pi, which in turn drove the lighting via a Fadecandy LED controller. The result was that while the 3D environment projected behind the performers reacted to Mend’s performance via FFT analysis of the audio as well as controller input, the light sculptures on stage extended the virtual world into physical space by mirroring on-screen events with matching light behaviors.
My roles and tools:
I designed an audio-reactive framework for Unity in C# so that collaborators could quickly and easily manipulate various properties of objects with real-time audio input.
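The framework itself was written in C# for Unity, but the core idea translates to a few lines in any language: split each audio frame's FFT into frequency bands and use the per-band energies to drive object properties. Here is a minimal Python sketch of that idea, assuming NumPy; the band edges and normalization are illustrative choices, not the framework's actual values.

```python
import numpy as np

def band_levels(samples, sample_rate=44100,
                bands=((20, 250), (250, 2000), (2000, 8000))):
    """Return normalized energy per frequency band for one audio frame."""
    # Window the frame to reduce spectral leakage, then take the magnitude FFT.
    spectrum = np.abs(np.fft.rfft(samples * np.hanning(len(samples))))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    levels = []
    for lo, hi in bands:
        mask = (freqs >= lo) & (freqs < hi)
        levels.append(float(spectrum[mask].mean()) if mask.any() else 0.0)
    peak = max(levels) or 1.0
    return [lvl / peak for lvl in levels]  # 0..1, loudest band == 1.0

# An object property can then follow a band each frame, e.g.:
# obj.scale = 1.0 + band_levels(frame)[0]   # pulse with the bass band
```

In the Unity version the same mapping step is what collaborators configured per object: pick a band, pick a property, optionally smooth the value between frames.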
I created an API between Unity and the Raspberry Pi (which ran a host written in Python) so that bi-directional communication could be established.
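The actual wire format of that API isn't documented here, so the following is only a sketch of the shape of the exchange: the Pi-side host receives color frames from Unity and acknowledges each one, giving Unity a return channel. The JSON-over-UDP framing and the message names are assumptions for illustration.

```python
import json
import socket

def encode_message(msg_type, payload):
    """Serialize one message as a JSON datagram (hypothetical framing)."""
    return json.dumps({"type": msg_type, "payload": payload}).encode("utf-8")

def decode_message(data):
    """Inverse of encode_message."""
    msg = json.loads(data.decode("utf-8"))
    return msg["type"], msg["payload"]

def run_host(bind_addr=("0.0.0.0", 9000)):
    """Pi-side host loop: receive color frames from Unity, ack each one."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(bind_addr)
    while True:
        data, sender = sock.recvfrom(4096)
        msg_type, payload = decode_message(data)
        if msg_type == "colors":
            pass  # hand payload["pixels"] to the lighting controller
        # Ack back to Unity, closing the bi-directional loop.
        sock.sendto(encode_message("ack", {"seq": payload.get("seq")}), sender)
```

The acknowledgement path is what makes the link bi-directional: Unity can detect a stalled host rather than broadcasting blindly.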
I also wrote a controller in Python that would receive messages via this API and translate them into lighting controls, which were then sent to the Fadecandy to drive the final light output on stage.
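The Fadecandy server speaks Open Pixel Control (OPC): a TCP stream of packets, each with a one-byte channel, a one-byte command (0 = set pixel colors), a big-endian 16-bit data length, and then the raw RGB bytes. A minimal Python sketch of the final hop from the controller to the Fadecandy looks like this (host and pixel data are placeholders):

```python
import socket
import struct

FADECANDY_PORT = 7890  # default Open Pixel Control port

def opc_packet(pixels, channel=0):
    """Build an OPC 'set pixel colors' packet (command 0) for RGB triples."""
    data = bytes(c for rgb in pixels for c in rgb)
    # Header: channel, command, 16-bit big-endian payload length.
    return struct.pack(">BBH", channel, 0, len(data)) + data

def send_frame(pixels, host="127.0.0.1"):
    """Push one frame of RGB triples to the Fadecandy server over TCP."""
    with socket.create_connection((host, FADECANDY_PORT)) as sock:
        sock.sendall(opc_packet(pixels))
```

Each incoming message from Unity ultimately reduces to one `send_frame` call per lighting update.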
The final piece was an FX framework that took input from an Xbox controller connected to Unity, allowing control of various aspects of the virtual world, including navigation, activating objects, and manipulating visual effects.
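The FX framework was C# inside Unity, but its dispatch idea is easy to sketch: buttons toggle named effects, and analog axes feed continuous effect parameters. The button names, effect names, and axis mapping below are hypothetical stand-ins, not the actual bindings used in the show.

```python
class FxDispatcher:
    """Toy dispatcher: controller events in, effect state out."""

    def __init__(self):
        self.active = set()   # currently-enabled toggle effects
        self.params = {}      # continuous effect parameters

    def on_button(self, button):
        """Toggle the effect bound to a button press (bindings illustrative)."""
        bindings = {"A": "strobe", "B": "bloom", "X": "trails"}
        fx = bindings.get(button)
        if fx is None:
            return
        if fx in self.active:
            self.active.discard(fx)
        else:
            self.active.add(fx)

    def on_axis(self, axis, value):
        """Map an analog axis reading (-1..1) to a 0..1 effect parameter."""
        if axis == "right_trigger":
            self.params["warp_amount"] = (value + 1) / 2
```

Navigation and object activation in the real framework followed the same pattern: a thin mapping layer from raw controller events to named actions in the scene.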
Demo illustrating one controllable effect
Constructing the sculptures with Max and Kassi
Sculpture consists of modular units for flexible layout and size