In what ways can a human body control continuously-generated synthetic sound?
This project was one of the outcomes of Hardware Hack Lab, a co-working space for technologists and artists run every Wednesday here in New York.
In it we took a close look at full-body interaction using commercial depth devices such as the Kinect or Asus Xtion. We connected these devices to a custom-designed sound synthesis environment built with SuperCollider, and also routed output to video screens placed around the space.
Alex & I considered this project a creative exploration, and the installation an unfinished demo, one outcome of that exploration. As such, all the code is up on GitHub along with instructions to replicate it. We feel good about adding this mode of interaction to our toolbox and plan to experiment more in the new year.