Posted Wednesday, 14 December 2011
A few nights ago, after 3 intense days and nights of working, we installed our collaborative installation at Freemote Utrecht. After all the work, it was up for about 6 hours.
The group I've been working with, V4W, do this a lot - it's all part of the process for them. Turn up with a bunch of equipment, create an artwork onsite during opening hours, and display it on the final night. While we were working, people came over and observed, made comments (and jokes), and asked questions.
|We were our exhibit (on the left)|
When I spoke to Gareth and Hayden about it, they said they like the exposure it gives to the subculture of digital artists, musicians and programmers - i.e. us. It's difficult for people to get a handle on exactly what it is we do, unlike in, say, film, music or painting. Don't get me wrong, there is a world of hidden esoteric knowledge and in-culture in those media too, but the difference is that audiences have had a lot longer to figure out their relationship to it.
We did a similar thing at AlphaVille, so this is my second run with V4W. For me personally, I like the challenge. But I'm not sure how convinced I am about working that way regularly - it's hectic. Quick choices have to be made, and with eight people on a short timescale, people can pull in different directions.
In any case, what results is quite nice simply for that reason. It's a Frankenstein of different creative impulses thrown together, each relatively uncensored and forced to mix on equal terms. No hierarchy seemed to develop, and no-one was precious or held the group to ransom. It's probably because they pretty much all know each other and have worked together before, and are an open-minded group.
|Doing Yoga... no not really|
Actually somebody in the group said that it's because we're all British (and so we have a stereotype to live up to). I'm not sure about that, but with a different mix of people, I could easily see more tension.
So in effect Gareth was more a coordinator than a director. As for my role, I became more a technical consultant and facilitator. Due to some tight deadlines for proposals in the run-up to the trip, I wasn't really able to get into the design discussion until after the concept was concretised. I arrived looking for something to do and found an open niche as Max/MSP developer, helping to create the interactive sound based somewhere between Barney's sound design (mostly musical) and Alex's audio manipulation ideas (mostly noise).
|Some of the other installations|
I didn't feel like what we needed was another voice on top of these two, pushing yet another creative direction, so instead I looked for a synthesis between them. The two streams proved divergent, and I think in the end we just allowed that to be. There was a music section and an interactive sound section, running at different times, and honestly each was far better without the interference of the other.
The three aspects of the video - the particles, the floor and the balls - all seemed to come together on equal terms, but we'll see how people feel about that in the retrospective we have coming up.
So again, it all points to what it was - a 3 day, open, creative experiment. We created a piece for exhibition, but in the end the exhibition was us, programming, composing, designing and setting up hardware. At least, I think that's what 70% of visitors to the festival will remember from it. But it seems that's what V4W are about - exposing the subculture.
I'm looking forward to (and slightly afraid of) the next time we work together.
Posted Friday, 9 December 2011
Mike, Gareth and Joe have been working on a 3D scene which will be projected up on the wall, and which will respond in real-time to the Kinect data. The environment is built and runs in VVVV.
|A first dry run|
Mike has created the particle stream (the spheres mentioned in the design which flow to / from the threshold). He settled on the CiantParticles patch early on, because it does pretty much exactly what we wanted. CiantParticles move according to forces you specify with parameters, and they respond to other objects in the scene - in this case our Kinect skeletons.
We are still experimenting with exactly how the particles should respond to people - should they gravitate towards them and fall away, bounce off, or avoid them altogether? Mike has created a threshold (the vertical line in the image), and how the particles react will depend on which side of the threshold the person is on.
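As a rough sketch of the kind of rule we're playing with (the names, numbers and force law here are ours for illustration, not the actual VVVV/CiantParticles logic): attract a particle towards a skeleton joint when the person is on one side of the threshold, and repel it on the other.

```python
# Illustrative sketch only - not the actual CiantParticles behaviour.
# A particle is attracted to a skeleton joint when the person is left
# of the threshold, and repelled when the person is right of it.

THRESHOLD_X = 0.0  # the vertical line dividing the scene

def particle_force(particle_pos, joint_pos, strength=0.5):
    """Return a 2D force on the particle exerted by one skeleton joint."""
    dx = joint_pos[0] - particle_pos[0]
    dy = joint_pos[1] - particle_pos[1]
    dist = (dx * dx + dy * dy) ** 0.5 or 1e-6  # avoid division by zero
    # attract when the joint is left of the threshold, repel otherwise
    sign = 1.0 if joint_pos[0] < THRESHOLD_X else -1.0
    scale = sign * strength / (dist * dist)
    return (dx * scale, dy * scale)
```

Swapping the sign of `strength`, or replacing the inverse-square law with something softer, gives the other behaviours (bouncing, avoiding) we're comparing.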
Joe has created a floor that our Kinect people will be walking on. The floor is a 2D grid distorted by waves. The wave system is supplied by the Fluid plugin. You feed into the plugin the size of the grid, and the coordinates of people's footfalls, and it sends out a new grid with the wave values.
As the footfall interaction changes, the internal state of the Fluid plugin updates to reflect the wave patterns and outputs the changes in realtime. We then use these output values to distort the 2D floor grid, supplying them as arguments to a vertex shader operating on the grid.
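Roughly speaking, this kind of wave system amounts to a damped discrete wave equation over the grid. Here is a toy Python version of the idea (our own sketch, not the Fluid plugin's actual algorithm):

```python
# Toy damped wave simulation over a height grid, excited by footfalls.
# A sketch of the idea only - not the Fluid plugin's implementation.

N = 16  # grid is N x N

def step_waves(height, velocity, damping=0.99, c=0.25):
    """Advance the wave grid one frame; returns new (height, velocity)."""
    new_h = [row[:] for row in height]
    new_v = [row[:] for row in velocity]
    for y in range(N):
        for x in range(N):
            # discrete Laplacian: four neighbours (clamped at the edges)
            nb = (height[max(y - 1, 0)][x] + height[min(y + 1, N - 1)][x] +
                  height[y][max(x - 1, 0)] + height[y][min(x + 1, N - 1)])
            lap = nb - 4 * height[y][x]
            new_v[y][x] = (velocity[y][x] + c * lap) * damping
            new_h[y][x] = height[y][x] + new_v[y][x]
    return new_h, new_v

# a footfall injects a disturbance at the grid cell under the foot
h = [[0.0] * N for _ in range(N)]
v = [[0.0] * N for _ in range(N)]
h[8][8] = 1.0
for _ in range(10):
    h, v = step_waves(h, v)  # the resulting heights drive the vertex shader
```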
|Joe's working environment|
Gareth has been working with the Bullet physics system to allow Kinect skeletons to interact with floating objects inside Mike's particle stream. The idea is that when you are inside the stream and looking at yourself in the projection, you will notice floating objects near your Kinect alter-ego.
You can then reach out and interact with them - we are still experimenting as to how. But this provides a more immediate objective for people inside the stream, because the other interactions with the sound environment will probably take a little more time to get used to.
|Mike's screen rendering particles|
Data from this interaction with objects inside the stream will be sent out to the sound environment so that the interaction has a sonic element to it too, as discussed in this post.
We got as far as a dry-run by the end of the night tonight - not bad for three days work. Tomorrow we'll be looking to tighten everything up, and to experiment with how the interaction feels now that we have a running environment.
Posted Thursday, 8 December 2011
Alex, Barney and I have been making a responsive sound environment. The idea is that when people walk into the installation space, their 'skeleton' is assigned an instrument from the currently playing music track.
|Setting up the space|
As they move around, their movements control effects on that instrument only. When another person enters the space, they are assigned another instrument and each can continue to control their own instrument independently of each other, each person contributing to the overall sound.
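The assignment logic is simple in principle. A minimal sketch in Python (the names are illustrative - the real patch lives in Max/MSP): each new skeleton takes the lowest free instrument track, and releases it on leaving the space.

```python
# Illustrative sketch of per-person instrument assignment.
# The real implementation is a Max/MSP patch, not this.

class InstrumentPool:
    def __init__(self, n_tracks=8):
        self.free = list(range(n_tracks))  # track indices not in use
        self.assigned = {}                 # skeleton id -> track index

    def enter(self, skeleton_id):
        """Assign the lowest free track, or None if all are taken."""
        if skeleton_id not in self.assigned and self.free:
            self.assigned[skeleton_id] = self.free.pop(0)
        return self.assigned.get(skeleton_id)

    def leave(self, skeleton_id):
        """Release this skeleton's track back into the pool."""
        track = self.assigned.pop(skeleton_id, None)
        if track is not None:
            self.free.append(track)
            self.free.sort()
        return track
```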
With multiple people controlling different aspects of the sound, it could easily get a bit chaotic. So we are keeping some tracks fixed, playing back a blanket of sound off which the controlled instruments can springboard.
|Composing with Reason - roughly speaking each instrument above will be controlled by one Kinect skeleton|
Barney has made some mellow electronic tracks, with looping melodies. The idea is that there will be some pace and a connection with the electronic music taking place in the rest of the venue. He has used Reason to compose the music and render out individual tracks for us to cut apart in realtime in Max/MSP.
Alex and I have created a Max/MSP patch to handle interaction with the sound environment. It's the go-between for the music and the interaction values received over the network from the graphics server.
|Cutting it up with a modular Max/MSP patch|
The Max/MSP patch operates on up to 8 rendered tracks (instruments) for each of Barney's compositions - each track is controlled by one Kinect skeleton. Each track has up to 9 controllable attributes (reverb / granular synthesis / VSTs). So each Kinect skeleton controls 9 attributes on a single track of the currently playing composition.
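In pseudocode terms, the mapping stage looks something like the Python sketch below. The attribute names and the choice of joints are illustrative only - the actual mapping lives in the Max/MSP patch and is still being tuned.

```python
# Hypothetical sketch of mapping one skeleton onto nine track attributes.
# Attribute names and joint choices are assumptions, not the real patch.

ATTRIBUTES = ["reverb_mix", "reverb_size", "grain_size", "grain_pitch",
              "grain_density", "vst_param1", "vst_param2", "vst_param3",
              "volume"]

def skeleton_to_attributes(joints):
    """Map normalised joint coordinates (0..1) onto the 9 attributes.

    `joints` is a dict like {"left_hand": (x, y), "right_hand": (x, y),
    "head": (x, y)}.
    """
    lx, ly = joints["left_hand"]
    rx, ry = joints["right_hand"]
    hx, hy = joints["head"]
    values = [lx, ly, rx, ry, hx, hy, (lx + rx) / 2, (ly + ry) / 2, hy]
    return dict(zip(ATTRIBUTES, values))
```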
One of the key parts of building this environment has been modularising Alex's original granular synth and reverb patches, so that they can be re-used with different settings and against different buffers (above).
We also created a 4-way panning system based on VBAP to allow us to pan sounds around the four speakers in our installation environment. Each rendered track can be panned individually. We'll be experimenting later today to figure out whether this panning should also be controlled by OSC data or some built-in pattern related to the sounds.
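For illustration, here is a toy 2D pair-wise panner in the spirit of VBAP (far simpler than Pulkki's actual method, which solves the speaker-pair gains as a matrix inverse): the source angle selects the adjacent speaker pair and splits the gain between them with an equal-power law.

```python
import math

# Toy pair-wise panner in the spirit of VBAP - a sketch, not the real
# VBAP algorithm. Speaker angles are assumed corner positions.

SPEAKERS = [45.0, 135.0, 225.0, 315.0]  # degrees, anticlockwise

def pan_gains(angle):
    """Return four speaker gains for a source at `angle` degrees."""
    angle %= 360.0
    gains = [0.0, 0.0, 0.0, 0.0]
    for i in range(4):
        a, b = SPEAKERS[i], SPEAKERS[(i + 1) % 4]
        span = (b - a) % 360.0          # angular width of this pair
        offset = (angle - a) % 360.0
        if offset <= span:              # source falls between this pair
            t = offset / span           # 0 at speaker a, 1 at speaker b
            gains[i] = math.cos(t * math.pi / 2)        # equal-power
            gains[(i + 1) % 4] = math.sin(t * math.pi / 2)
            break
    return gains
```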
Posted Thursday, 8 December 2011
Tom and Hayden have been working on grabbing Kinect data, which is basically data about human movement, and sending it out across the network. The graphics server (an Alienware laptop) reads the data from the network and uses it to render an Augmented Reality scene, which is projected up on the wall.
|Barney interacting with a Kinect|
The Kinect is a very cool Microsoft device which was designed for the Xbox 360. It's great for mapping realistic human activity into a 3D model in realtime. It has two cameras onboard - one infrared camera for reading depth data, one webcam-style camera for getting a regular video image matrix.
Tom downloaded the Microsoft SDK (there are others, for example OpenNI/NITE), which you can use to overlay the depth data onto the video. The result is a video image with identifiable humans overlaid (see the image below).
The Microsoft SDK also contains software to take this further. By analysing data from the camera streams, and using algorithms designed to find human body parts (such as limbs and torsos), the SDK can build an overlay of 3D 'skeletal' data. This means that it works out the points where it thinks the human body parts extend to and draws lines between them to make up a stick man ('skeleton').
Tom has written a small C# program which uses the SDK to grab skeletal data points and broadcast them across the network. We are using 3 Kinects, each of which can handle 2 skeletons reliably, making a total of up to 6 actors that can interact with the installation.
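For illustration, here is roughly what that looks like - in Python rather than Tom's actual C#, and with the message format and port number as assumptions: pack each skeleton's joints into a datagram and broadcast it on the local network.

```python
import json
import socket

# Illustrative Python equivalent of the skeleton broadcaster.
# Tom's real program is C#; the JSON format and port are assumptions.

PORT = 9000  # assumed port

def encode_skeleton(kinect_id, skeleton_id, joints):
    """Encode one skeleton's joints as a JSON datagram payload.

    `joints` maps joint names to (x, y, z) tuples in metres.
    """
    return json.dumps({"kinect": kinect_id, "skeleton": skeleton_id,
                       "joints": {k: list(v)
                                  for k, v in joints.items()}}).encode()

def broadcast_skeleton(sock, kinect_id, skeleton_id, joints):
    """Send one skeleton's joint positions as a UDP broadcast."""
    payload = encode_skeleton(kinect_id, skeleton_id, joints)
    sock.sendto(payload, ("<broadcast>", PORT))

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
```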
|The Kinect SDK offers depth data and skeletal data|
Hayden has created an OSC server on the graphics server to pick up the skeletal data points from the network. He then set up a test person renderer (the stick man on the wall, above) in VVVV so that we can fine-tune the Kinect interaction before passing data on to the real 3D patches.
Data from the 3D scene are in turn formatted and broadcast across the network as more OSC messages, for the sound environment to pick up. The result will be an audio/visual environment which can be controlled solely by moving in front of the Kinects.
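OSC itself is a simple binary format: a null-padded address pattern, a null-padded type-tag string, then big-endian arguments. In practice VVVV and Max/MSP ship their own OSC objects, but as an illustration, a minimal hand-rolled encoder (the address pattern below is an assumption):

```python
import struct

# Minimal hand-rolled OSC message encoder, for illustration only -
# the installation uses the OSC objects built into VVVV and Max/MSP.

def osc_pad(b):
    """Null-terminate and pad bytes to a multiple of 4, as OSC requires."""
    return b + b"\x00" * (4 - len(b) % 4)

def osc_message(address, *floats):
    """Build an OSC message with float32 arguments."""
    msg = osc_pad(address.encode())
    msg += osc_pad(("," + "f" * len(floats)).encode())  # type-tag string
    for f in floats:
        msg += struct.pack(">f", f)  # big-endian float32
    return msg

# e.g. one skeleton's hand position, sent to the sound environment
# (hypothetical address pattern):
packet = osc_message("/skeleton/1/hand", 0.4, 1.2, 2.0)
```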
Posted Wednesday, 7 December 2011
Yesterday was our first day in the space (Nutrecht). It's a huge empty warehouse building with a number of large empty spaces. When we arrived several other installations were in the process of being set up.
It's very cold, and everyone is walking around with coats and scarves on. Music and random sounds keep kicking out from around the warehouse.
|An odd device in the corridor|
I met the other collaborators first thing at the airport to come out here (most for the first time). Gareth Griffiths is organising and managing the team. Mike Wilson is preparing 3D meshes and objects to make up the scene, and Joseph Mounsey is working on the visualisation and dynamics of the augmented scene.
Tom Parslow is writing software to collect data we are interested in from the Kinects, format it and send over the network. Hayden Anyasi is designing a VVVV patch to handle the 3D scene and feed in the Kinect data, and to push out scene-related data across the network for use in the sound environment.
|Co-working in a cold warehouse|
Barney Thorn is working on sound design, writing tracks which can be stretched and morphed by interaction with the Kinects. Alex Hosford and I are working on a Max/MSP environment to listen for Kinect data and use it to stretch, morph and treat Barney's sounds.
So we're all divided into little working groups, huddled around a small desk in a large warehouse. We all had about 1 hour's sleep the previous night, due to an early flight / train out to Utrecht from London.
|The space already has some installations... and a bicycle|
We persevered, and by the end of the night we had made a good start. There is a basic proof-of-concept interaction running from the first Kinect box through to the sound environment. Barney has written one and a half music tracks! The 3D visuals are under way but not yet plugged in. Not bad for day one.
Posted Monday, 5 December 2011
It's actually quite an odd task we have set ourselves here. Rather than building an installation as three collaborators over eight days, we will be working as eight collaborators over three days. It's a very quick project, but with a lot of people and a large scope for the time available.
The theme of this year's Freemote festival is 'Threshold'. Based on an online conversation and a single meeting, the group have come up with an outline for the installation we are going to attempt.
Gareth has sent out some design images which summarise our aims:
|1. The hardware setup consists of an active area covered by Kinects attached to a server and a projector|
|2. People interact with the installation by entering the active area. The scene is projected onto a nearby wall|
|3. We are in a sense re-creating reality in the projection, this is the view we see in the projection. The green lines over the person are skeleton data which is collected by the Kinects|
|4. The installation has two parts: on one side (the left side) spheres appear from a central line, on the other side there is nothing|
|5. When a person is in the right-hand area of the installation, spheres are created from positions associated with their skeleton data. The spheres have a gravitational pull to the left. Imagine dust blowing from someone|
|6. When a person is in the left-hand side of the installation spheres burst on contact with their skeleton data. An area behind the person opens as the spheres are created from the central line and flow off to the left|
|7. When a person puts their hand (or any part of their body) over the divide, spheres are created from their hand; spheres will also burst on the rest of their body|
The reality of how the installation will actually look, work and feel will remain open as we experiment and install. But this gives us a good solid framework with which to start. Given the size of the group and the time we have available, it's good that we have a clear starting point.
Posted Friday, 2 December 2011
I've been invited to collaborate with V4W next week in Holland, at Freemote Electronic Arts festival in Utrecht. We'll be working on a multi-Kinect 3D augmented reality piece - a flyer has been posted here, which should give you some idea.
Once I get there I'll be blogging about the creation of the work and the festival overall - it should be a great week! Here are some photos from previous years' events, and of the space we'll be working in.