ANDREW McWILLIAMS

Showing posts tagged with: Freemote Threshold


Posted Wednesday, 14 December 2011

The Final Exhibition

A few nights ago, after 3 intense days and nights of work, we opened our collaborative installation at Freemote Utrecht. After all that work, it was up for about 6 hours.

The group I've been working with, V4W, do this a lot - it's all part of the process for them. Turn up with a bunch of equipment, create an artwork onsite during opening hours, and display it on the final night. While we were working, people came over and observed, made comments (and jokes), and asked questions.

We were our exhibit (on the left)

When I spoke to Gareth and Hayden about it, they said they like the exposure it gives to the subculture of digital artists, musicians and programmers, i.e. us. It's difficult for people to get a handle on what exactly it is we do, unlike say in film, music or painting. Don't get me wrong, there is a world of hidden esoteric knowledge and in-culture in those media, but the difference is that audiences have had a lot longer to figure out their relationship to it.

Discussing options

We did a similar thing at AlphaVille, so this is my second run with V4W. For me personally, I like the challenge. But I'm not sure how convinced I am about working that way regularly - it's hectic. Quick choices have to be made, and with eight people on a short timescale, people can pull in different directions.

In any case, what results is quite nice simply for that reason. It's a Frankenstein of different creative impulses thrown together, each relatively uncensored and forced to mix on equal terms. No hierarchy seemed to develop, and no-one was precious or held the group to ransom. It's probably because they pretty much all know each other and have worked together before, and are an open-minded group.

Doing Yoga... no not really

Actually somebody in the group said that it's because we're all British (and so we have a stereotype to live up to). I'm not sure about that, but with a different mix of people, I could easily see more tension.

So in effect Gareth was more a coordinator than a director. As for my role, I became more a technical consultant and facilitator. Due to some tight deadlines for proposals in the run-up to the trip, I wasn't really able to get into the design discussion until after the concept was concretised. I arrived looking for something to do and found an open niche as Max/MSP developer, helping to create the interactive sound based somewhere between Barney's sound design (mostly musical) and Alex's audio manipulation ideas (mostly noise).



Some of the other installations

I didn't feel like what we needed was another voice on top of these two, pushing yet another creative direction, so instead I looked for a synthesis between them. The two streams proved divergent, and I think in the end we just allowed that to be. There was a music section and an interactive sound section, running at different times, and honestly each was far better without the interference of the other.

The three aspects of the video - the particles, the floor and the balls - all seemed to come together on equal terms, but we'll see how people feel about that in the retrospective we have coming up.

So again, it all points to what it was - a 3 day, open, creative experiment. We created a piece for exhibition, but in the end the exhibition was us, programming, composing, designing and setting up hardware. At least, I think that's what 70% of visitors to the festival will remember from it. But it seems that's what V4W are about - exposing the subculture.

I'm looking forward to (and slightly afraid of) the next time we work together.



Posted Friday, 9 December 2011

Playing with Particles

Mike, Gareth and Joe have been working on a 3D scene which will be projected up on the wall, and which will respond in real-time to the Kinect data. The environment is built and runs in VVVV.

A first dry run

Mike has created the particle stream (the spheres mentioned in the design which flow to / from the threshold). He found the CiantParticles patch early on, because it does pretty much exactly what we wanted. CiantParticles move according to forces you specify with parameters, and they respond to other objects in the scene - in this case our Kinect skeletons.

We are still experimenting with exactly how the particles should respond to people - should they gravitate towards someone and then fall away? Bounce off, or avoid them altogether? Mike has created a threshold (the vertical line in the image), and how they react will depend on which side of the threshold the person is on.
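
Just to give a flavour of the kind of rule we're trying out, here's a minimal Python sketch of a threshold-dependent particle response. The real behaviour lives in the CiantParticles patch in VVVV; the constants and coordinate conventions below are made up purely for illustration.

import random

THRESHOLD_X = 0.0   # the vertical line dividing the two sides (assumed units)

class Particle:
    def __init__(self):
        self.x, self.y = random.uniform(-1, 1), random.uniform(-1, 1)
        self.vx, self.vy = 0.0, 0.0

def update(p, person_x, person_y, dt=1 / 60, strength=0.5):
    """Pull particles towards a person on one side of the threshold,
    push them away on the other side (illustrative rule only)."""
    dx, dy = person_x - p.x, person_y - p.y
    dist2 = dx * dx + dy * dy + 1e-6
    sign = 1.0 if person_x > THRESHOLD_X else -1.0   # attract vs repel
    p.vx += sign * strength * dx / dist2 * dt
    p.vy += sign * strength * dy / dist2 * dt
    p.x += p.vx * dt
    p.y += p.vy * dt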

Joe has created a floor that our Kinect people will be walking on. The floor is a 2D grid distorted by waves. The wave system is supplied by the Fluid plugin. You feed the plugin the size of the grid and the coordinates of people's footfalls, and it sends back a new grid of wave values.

As the footfall interaction changes, the internal state of the Fluid plugin updates to reflect the wave patterns and outputs the changes in realtime. We then use these output values to affect the 2D floor grid by supplying them as arguments to a vertex shader operating on the grid.
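
The plugin does all of this for us, but the underlying idea is a standard damped wave simulation on a grid. Here is a rough Python/NumPy sketch of that idea - not the Fluid plugin's actual code, and the grid size and damping value are arbitrary.

import numpy as np

W, H = 64, 64                # grid resolution (arbitrary for this sketch)
current = np.zeros((H, W))
previous = np.zeros((H, W))

def add_footfall(x, y, strength=1.0):
    """Inject a disturbance where someone's foot lands."""
    current[y, x] += strength

def step(damping=0.99):
    """Advance the wave field one frame; the result drives the vertex shader."""
    global current, previous
    # Average of the four neighbours minus the previous frame (discrete wave equation)
    neighbours = (np.roll(current, 1, 0) + np.roll(current, -1, 0) +
                  np.roll(current, 1, 1) + np.roll(current, -1, 1)) / 2.0
    new = (neighbours - previous) * damping
    previous, current = current, new
    return current           # per-vertex displacement values for the floor grid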

Joe's broken laptop screen    A render of the watery floor
Joe's working environment

Gareth has been working with the Bullet physics system to allow Kinect skeletons to interact with floating objects inside Mike's particle stream. The idea is that when you are inside the stream and looking at yourself in the projection, you will notice floating objects near your Kinect alter-ego.

You can then reach out and interact with them - we are still experimenting as to how. But this provides a more immediate objective for people inside the stream, because the other interactions with the sound environment will probably take a little more time to get used to.

Mike's screen rendering particles

Data from this interaction with objects inside the stream will be sent out to the sound environment so that the interaction has a sonic element to it too, as discussed in this post.

We got as far as a dry run by the end of the night tonight - not bad for three days' work. Tomorrow we'll be looking to tighten everything up, and to experiment with how the interaction feels now that we have a running environment.



Posted Thursday, 8 December 2011

Responsive Granular Sound

Alex, Barney and I have been making a responsive sound environment. The idea is that when people walk into the installation space, their 'skeleton' is assigned an instrument from the currently playing music track.

Setting up the speakers
Setting up the space

As they move around, their movements control effects on that instrument only. When another person enters the space, they are assigned another instrument, and each person can control their own instrument independently of the others, contributing to the overall sound.
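
The bookkeeping behind that assignment is simple enough to sketch. This is an illustrative Python version - the real logic lives in our Max/MSP patch, and the slot count just mirrors the number of rendered tracks we have.

instrument_slots = [None] * 8          # up to 8 rendered tracks per composition
skeleton_to_slot = {}                  # Kinect skeleton id -> track index

def skeleton_entered(skeleton_id):
    """Give a newly tracked person the first free instrument, if any."""
    if skeleton_id in skeleton_to_slot:
        return skeleton_to_slot[skeleton_id]
    for i, owner in enumerate(instrument_slots):
        if owner is None:
            instrument_slots[i] = skeleton_id
            skeleton_to_slot[skeleton_id] = i
            return i
    return None                        # all instruments already taken

def skeleton_left(skeleton_id):
    """Free the instrument again when the person leaves the space."""
    slot = skeleton_to_slot.pop(skeleton_id, None)
    if slot is not None:
        instrument_slots[slot] = None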

With multiple people controlling different aspects of the sound, it could easily get a bit chaotic. So we are keeping some tracks fixed, playing back a blanket of sound for the controlled instruments to springboard from.

Barney composing in Reason
Composing with Reason - roughly speaking each instrument above will be controlled by one Kinect skeleton

Barney has made some mellow electronic tracks, with looping melodies. The idea is that there will be some pace and a connection with the electronic music taking place in the rest of the venue. He has used Reason to compose the music and render out individual tracks for us to cut apart in realtime in Max/MSP.

Alex and I have created a Max/MSP patch to handle interaction with the sound environment. It's the go-between, mapping the interaction values received over the network from the graphics server onto the music.

The Max/MSP patch
Cutting it up with a modular Max/MSP patch

The Max/MSP patch operates on up to 8 rendered tracks (instruments) for each of Barney's compositions - each track is controlled by one Kinect skeleton. Each track has up to 9 controllable attributes (reverb / granular synthesis / VSTs). So each Kinect skeleton controls 9 attributes on a single track of the currently playing composition.
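
To make that mapping concrete, here is a small Python sketch of how one skeleton's joints might be scaled onto a track's attributes. The attribute names, joint choices and ranges are placeholders for illustration, not the actual parameters in the patch.

def scale(value, lo, hi):
    """Map a raw joint coordinate into a normalised 0..1 parameter."""
    t = (value - lo) / (hi - lo)
    return max(0.0, min(1.0, t))

def skeleton_to_attributes(joints):
    """joints: dict of joint name -> (x, y, z) in metres from the Kinect."""
    head_x = joints["head"][0]
    lhand_y = joints["hand_left"][1]
    rhand_y = joints["hand_right"][1]
    return {
        "reverb_mix":  scale(head_x, -2.0, 2.0),   # left/right position in the space
        "grain_size":  scale(lhand_y, 0.0, 2.0),   # left hand height
        "grain_pitch": scale(rhand_y, 0.0, 2.0),   # right hand height
        # ...the remaining attributes are mapped from other joints in the same way
    }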

One of the key parts of building this environment has been modularising Alex's original granular synth and reverb patches, so that they can be re-used with different settings and against different buffers (above).
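
For anyone unfamiliar with granular synthesis, the idea is to scatter lots of short, windowed 'grains' taken from a buffer and overlap-add them back into an output stream. A toy NumPy version of the idea - Alex's patch is of course built from Max/MSP objects, not this code:

import numpy as np

def granulate(buffer, sr=44100, grain_ms=80, density=50, pitch=1.0, seconds=2.0):
    """Toy granular synthesis: sprinkle windowed grains from `buffer` into an
    output stream. The buffer must be longer than one pitched grain."""
    grain_len = int(sr * grain_ms / 1000)
    window = np.hanning(grain_len)
    out = np.zeros(int(sr * seconds))
    for _ in range(int(density * seconds)):
        src = np.random.randint(0, len(buffer) - int(grain_len * pitch) - 1)
        dst = np.random.randint(0, len(out) - grain_len - 1)
        idx = src + np.arange(grain_len) * pitch          # resample to shift pitch
        grain = np.interp(idx, np.arange(len(buffer)), buffer) * window
        out[dst:dst + grain_len] += grain
    return out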

We also created a 4-way panning system based on VBAP to allow us to pan sounds around the four speakers in our installation environment. Each rendered track can be panned individually. We'll be experimenting later today to figure out whether this panning should also be controlled by OSC data or some built-in pattern related to the sounds.
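
VBAP generalises to arbitrary speaker layouts, but with our four speakers in a square the essence is a pair-wise equal-power crossfade between adjacent speakers. A simplified Python sketch - the angles are an assumed layout, and this is only a stand-in for the VBAP objects we actually use in Max:

import math

SPEAKER_ANGLES = [45.0, 135.0, 225.0, 315.0]   # degrees; assumed square layout

def quad_gains(azimuth_deg):
    """Return one gain per speaker for a source at the given azimuth."""
    gains = [0.0, 0.0, 0.0, 0.0]
    az = azimuth_deg % 360.0
    for i in range(4):
        a, b = SPEAKER_ANGLES[i], SPEAKER_ANGLES[(i + 1) % 4]
        span = (b - a) % 360.0
        offset = (az - a) % 360.0
        if offset <= span:                     # source sits between speakers i and i+1
            t = offset / span                  # 0 at speaker i, 1 at speaker i+1
            gains[i] = math.cos(t * math.pi / 2)           # equal-power crossfade
            gains[(i + 1) % 4] = math.sin(t * math.pi / 2)
            break
    return gains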



Posted Thursday, 8 December 2011

Kinecting to the Network

Tom and Hayden have been working on grabbing Kinect data, which is basically data about human movement, and sending it out across the network. The graphics server (an Alienware laptop) reads the data from the network and uses it to render an Augmented Reality scene, which is projected up on the wall.

Barney interacting with a Kinect

The Kinect is a very cool Microsoft device which was designed for the Xbox 360. It's great for mapping realistic human activity into a 3D model in realtime. It has two cameras onboard - an infrared camera for reading depth data, and a webcam-style camera for capturing a regular colour video image.

Tom downloaded the Microsoft SDK (there are others, for example OpenNI/NITE), which you can use to map the depth data onto the video. The result is a video image with identifiable humans overlaid (see the image below).

The Microsoft SDK also contains software to take this further. By analysing data from the camera streams, and using algorithms designed to find human body parts (such as limbs and torsos), the SDK can build an overlay of 3D 'skeletal' data. This means that it works out the points where it thinks the human body parts extend to and draws lines between them to make up a stick man ('skeleton').

Tom has written a small C# program which uses the SDK to grab skeletal data points and broadcast them across the network. We are using 3 Kinects, each of which can handle 2 skeletons reliably, making a total of up to 6 actors that can interact with the installation.
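
Tom's program is C# against the Microsoft SDK, so the sketch below is only meant to show the shape of the data going out onto the network. It's written in Python with the python-osc library, and the OSC address pattern, IP and port are my own assumptions:

from pythonosc.udp_client import SimpleUDPClient

# Assumed address/port; the graphics server listens somewhere on the local network
client = SimpleUDPClient("192.168.0.10", 9000)

def send_skeleton(kinect_id, skeleton_id, joints):
    """joints: dict of joint name -> (x, y, z) in metres."""
    for name, (x, y, z) in joints.items():
        client.send_message(
            "/kinect/%d/skeleton/%d/%s" % (kinect_id, skeleton_id, name),
            [x, y, z])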

Kinect depth data     Kinect skeletal data
The Kinect SDK offers depth data and skeletal data

Hayden has created an OSC server on the graphics server to pick up the skeletal data points from the network. He then set up a test person renderer (the stick man on the wall, above) in VVVV so that we can fine-tune the Kinect interaction before passing data on to the real 3D patches.

Data from the 3D scene are in turn formatted and broadcast across the network as more OSC messages, for the sound environment to pick up. The result will be an audio/visual environment which can be controlled solely by moving in front of the Kinects.
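
On the receiving side the real listener is a VVVV patch (and Max/MSP at the sound end), but a minimal Python sketch of an OSC receiver shows the principle - again, the address layout and port are assumptions:

from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

def on_message(address, *args):
    # e.g. address == "/kinect/1/skeleton/0/hand_right", args == (x, y, z)
    print(address, args)

dispatcher = Dispatcher()
dispatcher.set_default_handler(on_message)   # route every incoming message here

server = BlockingOSCUDPServer(("0.0.0.0", 9000), dispatcher)
server.serve_forever()                       # feed joints into the scene / sound patches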



Posted Wednesday, 7 December 2011

First Working Day

Yesterday was our first day in the space (Nutrecht). It's a huge empty warehouse building with a number of large empty spaces. When we arrived several other installations were in the process of being set up.

It's very cold, and everyone is walking around with coats and scarves on. Music and random sounds keep kicking out from around the warehouse.

The Meet me in the Middle machine
An odd device in the corridor

I met the other collaborators (most of them for the first time) first thing at the airport on the way out here. Gareth Griffiths is organising and managing the team. Mike Wilson is preparing 3D meshes and objects to make up the scene, and Joseph Mounsey is working on the visualisation and dynamics of the augmented scene.

Tom Parslow is writing software to collect data we are interested in from the Kinects, format it and send over the network. Hayden Anyasi is designing a VVVV patch to handle the 3D scene and feed in the Kinect data, and to push out scene-related data across the network for use in the sound environment.

Team V4W working
Co-working in a cold warehouse

Barney Thorn is working on sound design, writing tracks which can be stretched and morphed by interaction with the Kinects. Alex Hosford and I are working on a Max/MSP environment to listen for Kinect data and use it to stretch, morph and treat Barney's sounds.

So we're all divided into little working groups, huddled around a small desk in a large warehouse. We all had about 1 hour's sleep the previous night, due to an early flight / train out to Utrecht from London.

Nutrecht chamber    A bicycle
The space already has some installations... and a bicycle

We persevered, and by the end of the night we had made good progress. There is a proof-of-concept-style basic interaction running from the first Kinect box through to the sound environment. Barney has written one and a half music tracks! The 3D visuals are started but not yet plugged in. A good start.



Posted Monday, 5 December 2011

Designs for Freemote

It's actually quite an odd task we have set ourselves here. Rather than build an installation between three collaborators over eight days, we will be working as eight collaborators over three days. It's a very quick project, but with a lot of people and an ambitious scope for the time available.

The theme of this year's Freemote festival is 'Threshold'. Based on an online conversation and a single meeting, the group have come up with an outline for the installation we are going to attempt.

Gareth has sent out some design images which summarise our aims:

1. The hardware setup consists of an active area covered by Kinects attached to a server and a projector.
2. People interact with the installation by entering the active area. The scene is projected onto a nearby wall.
3. We are in a sense re-creating reality in the projection; this is the view we see in the projection. The green lines over the person are skeleton data collected by the Kinects.
4. The installation has two parts: on one side (the left side) spheres appear from a central line; on the other side there is nothing.
5. When a person is in the right-hand area of the installation, spheres are created from positions associated with their skeleton data. The spheres have a gravitational pull to the left. Imagine dust blowing from someone.
6. When a person is in the left-hand side of the installation, spheres burst on contact with their skeleton data. An area behind the person opens up as the spheres are created from the central line and flow off to the left.
7. When a person puts their hand (or any part of their body) over the divide, spheres are created from their hand; spheres will also burst on the rest of their body.

The reality of how the installation will actually look, work and feel will remain open as we experiment and install. But this gives us a good solid framework to start from - important given the size of the group and the time we have available.



Posted Friday, 2 December 2011

Freemote Utrecht

I've been invited to collaborate with V4W next week in Holland, at the Freemote Electronic Arts festival in Utrecht. We'll be working on a multi-Kinect 3D augmented reality piece - a flyer has been posted here, which should give you some idea.

Once I get there I'll be blogging about the creation of the work and the festival overall - it should be a great week! Here are some photos from previous years' events, and the space we'll be working in.

Photos from previous Freemote events

