Posted Tuesday, 28 June 2011
Madhu Reddy, one of the resident filmmakers at Jaaga at the moment, has made a short feature on my work here. This video shows the working process for 'Gravity':
This video shows Agnese Mosconi's 'Murale', which was also created for the Sound and Lights exhibition:
Posted Monday, 13 June 2011
Gravity is an installation work inspired by Mark Rothko and the Seagram Murals. It was installed at Jaaga in June 2011 and was part of the Sound & Lights exhibition in Richmond Town.
What was important for me about Gravity was making a connection with viewers who spend time with the piece. The process of creation was very intuitive, and not pre-planned. In this way I feel the connection is somehow direct, and personal.
There may be a context in which Gravity was conceived and created (at Jaaga in its final days in Richmond Town), but in contrast to Reflections, the piece is not strongly associated with that context. It is abstract and psychological in nature. In this vein I think Gravity could be extended, continued, or redefined in the future.
Variation and Revelation
The aim of the piece is to create a continuous, relentless, slow revelation. With each second that passes, a few new pieces of information, in the form of subtle variations, enter into the audiovisual 'window' of the world. Each new piece of information tells us something about the nature of that world, and adds to our understanding of it. In seeking to understand more about that world we become connected to it - the psychology of the viewer becomes increasingly involved.
Composition and Rhythm
There are three custom-created canvas panels, onto which light is projected. Each reveals the same tear but with slight and subtle differences in trajectory (pan, tilt and momentum).
The audio and visual movement are continuously generated in real-time, and so there is no 'loop'. The movement of each tear across its canvas is anchored to the movement of the other two tears. There are three audio synth lines echoing the three tears, and similarly, each is free to choose its own direction but is anchored to its peers.
There is also an overarching rhythm which slowly and gently arcs over the piece (it can be heard sometimes as a continuous gentle bass 'thud'). This overarching rhythm is subtle and too broad to follow for long in itself, but it sets the boundaries within which both the visual-triptych and synth-triptych flow; it sets the progressive tone of the whole piece over time.
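Out of interest, the anchoring idea is simple enough to sketch outside of the actual patch. Below is a minimal illustration in C++ (not the real implementation - the names and constants are invented): each of the three values takes a small random step of its own, then is pulled gently back toward the mean of its peers.

```cpp
#include <cstdio>
#include <cstdlib>

// Illustrative only: three 'tears', each free to drift on its own,
// each gently pulled back toward the mean of the group.
int main() {
    double pos[3] = {0.0, 0.0, 0.0};
    const double stepSize = 0.05;   // freedom: size of each random step
    const double anchoring = 0.1;   // pull strength back toward the peers
    for (int t = 0; t < 100; ++t) {
        double mean = (pos[0] + pos[1] + pos[2]) / 3.0;
        for (int i = 0; i < 3; ++i) {
            double step = stepSize * (2.0 * rand() / RAND_MAX - 1.0);
            pos[i] += step + anchoring * (mean - pos[i]);
        }
        printf("%f %f %f\n", pos[0], pos[1], pos[2]);
    }
}
```

The balance between the step size and the anchoring strength is what gives each tear its freedom while keeping the triptych coherent.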
Posted Monday, 13 June 2011
Reflections is a 3D-sound installation created for the Jaaga artspace in Bangalore.
The piece is a kind of commentary on transience. The Jaaga building was to be torn down, and so the voices of core community members are given the freedom to rise up and float freely across the metallic structure. Each voice has its own direction, its own perspective, and its own trajectory - united only by the common framework of the Jaaga space.
Reflections is deliberately very pluralistic, not confined to any one room, not about any one person or thing. In its structure, it is not spatialised according to a particular speaker layout (it is designed in such a way that it could work with any conceivable speaker layout).
In this way it contrasts with the much 'heavier', much more personal Gravity installation, which was created to form part of the same exhibition in 2011. Reflections, in which the feelings and perspectives of the community take centre stage, was designed as a counterpoint to Gravity.
Posted Thursday, 26 May 2011
The Sound & Lights exhibition at Richmond Town has been open almost a week now. I took some shots last night to share, and to give a sense of the show for those who can't be here.
|The view from outside Jaaga|
From the outside you can see work by Tobias Rosenberger (the two video panels) and Pooja Mallya (the kinetic 'nest' at the top). This is the work that draws in passers-by and local residents who may not have been previously connected to Jaaga - it's clearly visible from the Hockey Stadium and up and down Rhenius Street.
Eve Sibley's permanent installation, the Vertical Garden, is as always on display on the front and side of the building. The plants are watered automatically by a custom-designed hydroponics system periodically throughout the day.
Inside we have work from eight Indian and international artists, and a number of other contributors. The show is open from 6.30PM until 10PM every night until the final night on the 29th of May.
The work is sonic, visual, kinetic, and electronic, very much in keeping with the themes at Jaaga. At the end of the show the building will be torn down! ...and recreated in its new home at Double Road.
|Lisa Kori-Chung 'Jaagaad'|
Lisa Kori-Chung has created a reactive sound installation out of discarded electronic components. As you spin the wheels, glitchy but tonal sounds are generated. The piece is pure hardware electronics, with no software intervention.
|Andrew McWilliams 'Gravity'|
My work, Gravity, is a three-panel sound and video installation on the ground floor. The work is self-generating: the sound is continually synthesized with new parameters and the video slowly responds to the sound.
|Agnese Mosconi 'Murale'|
Agnese Mosconi has created Murale, a kinetic, responsive installation on the second floor. The flowers respond to your movements, opening and closing and making noises as you pass by. The piece is modular, taking the form of a climbing plant, like a parasite.
|Corin's workstation for 'Sonosphere-I'|
Corin Faife composed music and designed a system which allows it to respond to the movements of people in the building. The mood decisions are based on webcam and sensor data: different combinations of movement in different parts of the building are interpreted according to a mood matrix.
|Sharath Chandra 'Prime Actant'|
In the video installation 'Prime Actant', by Sharath Chandra, the show references itself. It contains media related to the artists and their works, and a webcam incorporates visitors into the installation too.
|Tobias Rosenberger 'Figures'|
Tobias Rosenberger's video work is visible on two floors, from both inside and outside the Jaaga building.
On the roof there is also a gentle and reflective multichannel sound installation which is a collaboration between Abhijeet Tambe (Lounge Piranha) and Rosenberger. It features words and music loops, and is a popular space to relax and enjoy the evening.
|Pooja Mallya 'Cues in the Nest'|
The 'nest', which hovers above the roof of Jaaga, used to be a space for quiet solitary contemplation, served only by a single chair and a view of Shantinagar... until Pooja Mallya turned it into a kinetic light installation! It is now furnished with a carved metal logo, lights, fabric wings and a moving frame.
|Power-Up Electronics programmable LED boards|
The people at Power-Up Electronics have created a number of programmable LED boards, and installed them across the building. Together they slowly drift to different colours and change the mood and visible texture of the Jaaga space.
If you haven't already had a chance to see the show, please head down between now and the final day on Sunday!
Posted Tuesday, 3 May 2011
As we enter the final weeks of preparation for the show, a picture is beginning to emerge of the works I am producing. My contribution will be two separate, related installation pieces. Both are a response to the milieu of the space, and to the fact that Jaaga is moving.
Sketch (a) for Gravity
One piece is a personal response, the other a community response. One is anchored, deep, singular; the other plural, free, ephemeral. The works will run together and combine to express different emotional responses to the space.
The first piece is the one inspired by the Seagram Murals. It is an audio-visual piece for the ground floor.
The visual aspect consists of video projected onto a triptych of custom-made panels. The video is based on experiments with light - particularly, sketches I created using a scanner and a few basic materials (see images, right). The audio aspect works alongside the video, and is based on field-recorded and synthesized sounds, combined in simple measure, abstract in nature.
As I discussed in this previous post, the piece is an attempt to make a space for reflection - a place where time stands still for a moment. For this reason I'm working with variations on basic patterns - blends, torn lines, scrapes - which expose themselves very slowly over time. I am trying to create power from the simplicity of these forms - constantly I am thinking of weight, strength, and power. I am not trying to emulate the Seagram Murals, rather to work with some of the elements that they expose.
Sketch (b) for Gravity
The second piece is a 3D-sound piece, and will be heard on the various speakers dotted across the Jaaga space. This piece has become a counterpoint to the Gravity piece, and is best described by its evolution.
First, I developed the method. I looked into different 3D sound design approaches and eventually came up with a custom approach, and built it in Cosm. On a side note, I've been in touch with the author of this software, Graham Wakefield. He is including examples of the method in the next release of Cosm, and writing about it in a paper for this year's ICMC.
I had considered using this method as an integrated part of the Gravity installation. But as the Gravity piece developed, it went strongly in a particular direction. It was singular, weighty, personal - I knew it needed a counterpoint. The speakers in the rest of the building could be used in a way that is free, unconstrained by one location or time.
At the same time, I had a powerful sense that Jaaga is such a communal space that I wanted some way of giving expression to the feelings of the community. I wanted space for them to share their feelings on what Jaaga is, was and has become. From here, the second piece became about Reflections.
This is a perfect moment for reflection, and today I approached video artist Clemence Barret, who is running a video workshop in the space. We agreed to work collaboratively to capture dialog from the people who have been at Jaaga the longest. The conversations we capture will in any case be a nice piece for the archive, and for future works. But the sound of this dialog will feed directly into the Reflections piece.
As with Gravity, I want Reflections to focus somehow on the primary, the abstract, the emotional nuances. The captured conversations will exist as records in their own right, but the memories of so many people can exist as fragments in this soundwork - roaming freely, injecting emotion and values into the space, and filling the gaps between the steel uprights of Jaaga.
The work continues on both pieces.
Posted Wednesday, 27 April 2011
I searched today for a new 3D sound solution for Max/MSP, in a deliberate attempt to get away from Ambisonics and other 'sweet-spot'-inspired spatialisation techniques. Funnily enough, what I found led me back to a familiar name from Ambisonics - Graham Wakefield.
As it turns out, Graham is quite prolific. After building the core Ambisonics externs he went on to create Cosm - a virtual world environment. It's comprehensive, it works in Max/MSP/Jitter, and Graham is very open to letting people use it. For me as a novice at Jitter it was a breeze to pick up, thanks to a great website and an excellent tutorial.
|A screencap of the Cosm environment, mixing solid objects with a diffusion pattern|
Cosm's particular emphasis is on the relationship between solid objects and diffuse elements. Solid objects are modeled with traditional vector graphic techniques, and diffusion is modeled by underlying 3D value matrices. Perfect for Jitter.
When it came to building in sound, the natural choice for Graham was to use his Ambisonics externs. After all, this is how people have come to expect to explore virtual worlds - from a first-person, virtual camera, 'sweet spot' perspective.
Using Collision Detection
As I discussed in a previous post, first-person perspective is not what this is all about. But that's no problem - I can bypass the cosm.audio~ object altogether and use collision detection values to ramp up and down the volume levels on individual loudspeakers.
|A representation of a sound object (red) passing through a loudspeaker catchment area (green)|
In the image above, the red octahedron represents a moving sound object. The green sphere represents a loudspeaker catchment area. As the sound object intersects the catchment area, the volume of that sound for that loudspeaker goes up.
|Using collision detection to set volume levels|
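Sketching the mapping outside of Max/Jitter (hypothetical names, and a linear ramp chosen for simplicity), the gain for one loudspeaker rises with how deeply the sound object's sphere penetrates the catchment sphere:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

float distance(const Vec3& a, const Vec3& b) {
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}

// Gain for one loudspeaker: 0 while the sound object's sphere is outside
// the catchment sphere, rising to 1 as the two centres coincide.
float speakerGain(const Vec3& soundPos, float soundRadius,
                  const Vec3& speakerPos, float catchmentRadius) {
    float d = distance(soundPos, speakerPos);
    float reach = soundRadius + catchmentRadius;  // intersection begins here
    if (d >= reach) return 0.0f;
    return 1.0f - d / reach;  // linear ramp; a smoother curve may sound better
}
```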
This solution also fulfills the 'environmental' type of 3D sound environment I described at the end of this post (see 'What Ambisonics Can't Do'). With enough loudspeakers, you should be able to mimic the sound of a bird flying through the space so that it will appear differently (and correctly) to two different observers.
If any readers have any suggestions of where I can look up examples of, or information on this type of effect, please let me know! Unfortunately, we won't be able to achieve quite this effect at Jaaga, though I'll be aiming for a smaller version of it.
Posted Thursday, 21 April 2011
I have been continuing the evaluation of different sound platforms for controlling sounds in the Jaaga space.
Alternative to Ambisonics
On a suggestion from Tobias, I looked into an alternative to Ambisonics called Vector-Base Amplitude Panning (VBAP). The name had a nice ring to it, because it sounded like it might be the kind of vector/panning solution I described at the end of the Going Further article.
VBAP was developed by Ville Pulkki, a postdoctoral researcher at the Helsinki University of Technology. I looked up his article on implementing VBAP with MaxMSP, and downloaded the free software that goes with it. Ville begins his article by explaining the reason for creating VBAP, and it seems that this approach can accommodate a wider variety of loudspeaker layouts.
How it Works
In short, it takes so-called 'pair-wise' panning - i.e. the panning of localised sounds between two loudspeakers - and does a little more math to extend it into triplet-wise panning. The three loudspeakers are arranged in a triangle layout. Localised sounds no longer just pan horizontally, between two positions, but now pan vertically too. This change means that we have extended from 1-dimensional movement into 2-dimensional movement.
As Ville's diagram shows, as you add more triangles you can extend into the 3rd dimension too, by creating a 'mesh' similar to the polygons that describe 3D space in computer games. The amplitude of any sound 'moving through' the space is calculated for each of the nearest three speakers. The equation takes into account distance from the loudspeaker, and so VBAP differs from Ambisonics in that irregular loudspeaker layouts can be supported. However there still needs to be a 'mesh' based on triangles, as any individual sound can only exist between the nearest three points.
Ville's VBAP diagram
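To make that concrete, here is a rough sketch of the triplet-wise gain calculation as I understand it from Ville's paper (the real VBAP externals do all of this for you; the names here are my own). The source direction p is written as a weighted sum of the three loudspeaker direction vectors, the weights are found by solving a small linear system, and the result is normalised so overall loudness stays constant:

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };

// Solve g1*l1 + g2*l2 + g3*l3 = p for the gains, using Cramer's rule.
// l1..l3 are unit vectors pointing at the three loudspeakers,
// p is a unit vector pointing at the virtual source.
bool vbapGains(Vec3 l1, Vec3 l2, Vec3 l3, Vec3 p, double g[3]) {
    auto det = [](Vec3 a, Vec3 b, Vec3 c) {   // scalar triple product
        return a.x * (b.y * c.z - b.z * c.y)
             - a.y * (b.x * c.z - b.z * c.x)
             + a.z * (b.x * c.y - b.y * c.x);
    };
    double d = det(l1, l2, l3);
    if (std::fabs(d) < 1e-9) return false;    // degenerate triangle
    g[0] = det(p, l2, l3) / d;
    g[1] = det(l1, p, l3) / d;
    g[2] = det(l1, l2, p) / d;
    // Normalise so that g1^2 + g2^2 + g3^2 = 1 (constant loudness).
    double norm = std::sqrt(g[0]*g[0] + g[1]*g[1] + g[2]*g[2]);
    for (int i = 0; i < 3; ++i) g[i] /= norm;
    return true;
}
```

If any gain comes out negative, the source lies outside this particular triangle, and a neighbouring triplet from the mesh should be used instead.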
The emphasis here is still on satisfying a 'sweet spot' - a localised and immobile audience. In this respect, VBAP is similar to Ambisonics. As I'll discuss later on, at Jaaga we are going to experiment with layouts that don't require 'sound localisation' of the type catered for by these approaches. I'll go into that in another post.
A Side Note: Finding Max Externals
It's worth noting at this stage where I found Ville's pluggable VBAP externals, and a number of variations. MaxObjects.com is a well-known central repository where developers share their externals; as such, it is to MaxMSP what CodePlex is to .NET.
It's just useful to keep these sites on hand. The importance of bookmarking them will become clear next time you try to use Google to find libraries or externals! Google's just not a very good place to start for these things.
Posted Tuesday, 19 April 2011
A fellow artist at Jaaga, Tobias Rosenberger, said recently that the process of art creation should proceed intuitively, and not be manufactured from the outset. This idea resonated strongly with me, as my last two projects were effectively case studies proving his point.
Those experiences taught me a lot about creative process, which I'll come back to in another post. But on this current project, it's enough to say that I have purposefully avoided creating any kind of blueprint upfront. I have put anxieties over timing and achievement to one side and allowed the process to emerge naturally, even to the extent of what medium and which space to use.
On the one hand I have paid attention to what everyone else has been doing and saying, as there are several other artists preparing work for the final show. I have spent time in the space, and immersed myself in what is happening there - this is not hard to do at Jaaga. I have also explored potential media options - educating myself and experimenting with electronics, motors, and ambisonics.
Somehow along the way a general vision started to emerge, and when I explored it, I found something I think could be right for the downstairs space.
Jaaga Is Immovable
In a nutshell, there are a lot of separate projects moving at once, involving movement, sensors, and interaction. Many of the discussions have involved some grandiose concepts. It's a bit dizzying, being surrounded by a constant and powerful swirl of different creative ideas. Some of them are quite complex in terms of interaction and presentation. Many of them don't seem to relate to each other. And as discussed in a recent post, the change in the project context and an externally-imposed deadline have had their effects too. I found myself wanting to head for the centre - to the 'eye' of the whirlpool.
Suddenly I wanted to create something technically simple, visually and audibly simple, but by virtue of simplicity, something impactful. I wanted the pace slow, the essence emotional, the mood reflective. But in particular, in relation to the impending move of Jaaga, and all the uncertainty that surrounds - I wanted to create something explorably deep. Like dropping anchor, a huge weight to the ground, to say only one thing: "Right now, at this moment, WE ARE HERE, and it means something."
|Film students co-working at Jaaga|
Jaaga is the kind of place where all kinds of ideas fly and many things are possible, but I feel only sad when I think that she is moving. I agree that she is by nature a movable environment, but I feel it's a little before her time. I think there was more potential for her to connect with her community, and to develop her character before being asked to change. And I wanted to create some little bit of space for reflection on that.
Despite my emphasis on simplicity, it's not an easy challenge. Master artists spend lifetimes searching for simplicity and depth, and almost immediately my thoughts turn to Rothko and the Seagram Murals.
Rothko - Not a Mystic
This statement was Rothko's response to people who asked him if he was a Zen Buddhist:
"I am not interested in any civilization except this one. The whole problem in art is how to establish human values in this specific civilization."
Rothko was very much in the here and now. Here he fiercely defends his ultimate intention, which is only to forge a connection between his audience and himself, via the physical body of his work.
"I'm not an abstractionist. I'm not interested in the relationship of color or form or anything else. I'm interested only in expressing basic human emotions: tragedy, ecstasy, doom, and so on."
"The people who weep before my pictures are having the same religious experience I had when I painted them"
Rothko was not religious. What he means by these statements is that he is solely interested in exploring the human condition, psychological and emotional as it is. The important word here is solely. The Seagram murals contain nothing but these emotions, and each aspect of their construction - size, shape, colour, texture and composition - serves solely as a partial communicator of that condition.
|Rothko's Seagram murals on display at Tate Modern, London|
In Rothko's earlier work, mythical and mystical symbols were sometimes depicted figuratively. This inclusion seems to have been based on the Jungian idea that the human psyche is "by nature religious". On this understanding, the evolution of the various religions of world history was driven in part by the need to fulfill certain purposes that the human psyche required. If Western culture were to continue on a non-religious path, it made sense to find secular ways to fulfill the needs of the psyche no longer attended to by religion. One of these needs was a sense of mystery, which Rothko and others thought could be satisfied by myth and folklore.
After the war, like many artists of the time, Rothko couldn't bring himself to paint figuratively any more because of the inadequacy of the figurative form to express the state of the world as he saw it. Or, in his own words: "It was with the utmost reluctance that I found the figure could not serve my purposes. But a time came when none of us could use the figure without mutilating it."
|Rothko's Seagram murals on display at Tate Modern, London|
It was this, along with tragic experiences in Rothko's personal life, that led the steady process toward an increasingly tragic oeuvre. But what interests me across the breadth of his career is more the attention to the human psyche - the slow, subtle and ponderous attention to specific aspects of the psyche. And of course, the notion, exemplified by the "people who weep" quote, of how intuition works in an artistic context. The idea goes that as an artist, your goal should not be to teach or preach, as though you have something to tell the world that it can't get somewhere else anyway. It's not about transcendental quasi-religious experiences as such (at least not for an unbeliever like me). It's about experiencing something yourself, and finding some way to put that something out there so that others who are minded to can share in that experience.
I don't want to delve too deeply into the nuances of Rothko's art. Rather I want to acknowledge him as a source of inspiration at the moment and return to intuition. I have begun a series of sketches on my computer which I intend to project when I get back to Bangalore on Wednesday. I want to see how the colours translate into projected light, and begin exploring a very fundamental question for me at the moment:
Just how deep can projected light go?
Posted Sunday, 17 April 2011
When I arrived at Jaaga four weeks ago to begin my residency, the first thing I found out was that Jaaga - the building - was to move in June. The owner of the land, Naresh, an architect and ultimately a businessman, had decided to develop what is a prime central Bangalore location. There was not (and still is not) a definite destination for the structure. This single fact has been of sweeping importance to the direction of the Jaaga experiment as a whole, and to that of the now-termed final show in Richmond Town.
|Jaaga under construction in 2009|
Light and Sound
The perspective was starkly different late last year. I agreed to come and work on the Light and Sound project understanding that the goal was to make a permanent change to the nature of the space. It was known that the land was borrowed, but Jaaga had grown cultural roots, and there was a false sense of security.
Jaaga's self-definition sprouted organically in this period - nothing seemed mutually contradictory. It could be an art space, either in the academic or the Burning Man sense. It could be a business startup and co-working environment. It could be a place for poets, photographers, musicians, programmers, hackers - it could host community events, informal groups or big business conferences showcasing their 'cool urban' credentials. Jaaga was, in other words, whatever you wanted it to be when you walked in through the gate.
Now, Jaaga could end up back in a central location, where all of these things would remain a realistic and vibrant possibility. But this is very difficult - everyone knows that any space in the city centre is a tough find in one of India's fastest-growing cities. So the conversation about where Jaaga moves to necessarily implies a conversation about what Jaaga is, since any move from its current location will mean a change in audience.
The 'Final Show' - A Statement
When the fact of the move sank in, it took a week or two to absorb the consequences. At first, there was a naïve sense that all the move presented was a test of the idea of Jaaga's reconfigurability - that the installation work produced should be easy to disassemble and reassemble in the new location.
|A roof garden party at Jaaga|
But in truth there wasn't time for this. Jaaga is a place that grows step by step and slice by slice. The nature of the building means that it's hard for the first version of something to be the last version of something. Jaaga is an experiment in constant flux. Whether it's providing electricity for an event or waterproofing the ceiling, there is always a stream of innovative new solutions being implemented. It would take time for permanent installations to mature before they could be assured survival in a changed environment. And of course, there is no guarantee of what shape, size or kind of space the future Jaaga would occupy, and therefore what role permanent installations might play.
So as the term final show emerged, the Light and Sound project lost its sense of permanence. The project had now become something very different. It had become an opportunity to make a statement. This statement would not be something organically bound to the building's frame (à la Eve Sibley's vertical garden), but rather would be something which would come, go, and live on only in memory.
But what kind of statement should be made? Every day spent in the building is an opportunity to see in action the flux of life at Jaaga. And of the various people preparing work for the final show, each is somehow indicative of a strand of thought in a very multistranded space. The co-founders and Directors, Freeman Murray and Archana Prasad, have created and developed a view of themselves which seems designed to incorporate all of these various views under one overarching theory. They call it The Enlightened Singularity.
|A photography exhibition at Jaaga|
And there is of course also the concept of the Living Building. Again, this is a very open concept - after all, as McLuhan points out in his "medium is the message" quote, any [media / medium / technology] can be conceived of as an extension of some part of the human body.
Resident artist Tobias Rosenberger has talked about Jaaga's "philosophical openness". Jaaga is a very accepting place, and so it prefers philosophies which allow it to continually add - people, ideas, materials, concepts - into the whole, from any directions which seem to benefit. However, what is now becoming clear is that openness in all directions is one of the luxuries of remaining in a state of constant flux. So to continue in this vein means remaining in a state of flux - permanently.
|Walker's burnt murals on the flip side of the vertical garden panels|
Is this really possible? Will Jaaga be able to maintain its openness and still remain a vibrant cultural space in Bangalore? If the time comes to choose a clear direction, will openness stop being a virtue and start to become a handicap? Will Jaaga be forced to stop becoming and finally become? And what would it become if it did?
The fact that these questions remain unanswered is of pivotal importance to Jaaga's final show. Within the group there are diverse interests and seemingly different directions. Ultimately the answer to the question of what kind of statement should be made resides with each individual artist, their relationship to each other, and their relationship to the space. In a way the Light and Sound project is a kind of microcosm of Jaaga's macrocosm. We don't have answers - the result will probably only be clear after the event. But that is one of the things that make it a very interesting project to be a part of.
Posted Tuesday, 12 April 2011
Continuing on from my introduction to ambisonics, I now want to find out more by exploring an implementation in Max/MSP. Max doesn't natively have objects for ambisonics, so I googled for externals. There are a few results, but one of the better-looking ones (and as it happens, the first result) is published by a British academic named Graham Wakefield.
You can check out Graham Wakefield's ambisonics for Max/MSP here. What's attractive about this result is it has exactly the three things I would want to see:
- A brief introduction to the available objects (named in a way that makes sense based on Bruce Wiggins' tutorial videos)
- A concise screenshot showing the major objects in action - generating sounds, encoding to B-Format, performing a transform (rotate) operation, and decoding for a four-channel listening space.
- Download links for Windows and Mac!
So I'll give this external package a try.
One thing that jumps out looking at the demo images on this page is that although there is a rotate object, the other standard transforms are not there (scale, translate). Until now I've found that any time you encounter one transform operation you usually encounter the other two. I'm not sure of the reason for this at the moment, but I've thought about it and marked it down as a curiosity.
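That said, rotation at least is easy to picture for first-order B-Format: the two horizontal directional channels transform like the components of a 2D vector. A hedged sketch (my own illustration, not Graham's code):

```cpp
#include <cmath>

struct BFormat { double w, x, y, z; };  // omni + three directional channels

// Rotate a first-order soundfield by 'angle' radians about the vertical axis.
// W (omni) and Z (up-down) are unaffected; X and Y rotate like a 2D vector.
BFormat rotate(const BFormat& in, double angle) {
    BFormat out = in;
    out.x = in.x * std::cos(angle) - in.y * std::sin(angle);
    out.y = in.x * std::sin(angle) + in.y * std::cos(angle);
    return out;
}
```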
Azimuth and Elevation
The next thing to notice is that Graham's encoder, just like Bruce Wiggins's, mixes Azimuth and Elevation variables with a mono sound source. So it's time to get a better sense of how these variables work together to create the 3D effect.
A little googling tells me that these variables in this context refer to the horizontal coordinate system. This is the system used for space-object observation and satellite dish positioning. The system seems to be so named because the azimuth and elevation variables only make sense with reference to a horizontal plane (the plane labelled 'horizon' in the image). Notably, it also only makes sense with respect to a specific observer (standing at the 'origin', where the zenith axis meets the horizon plane).
|The horizontal coordinate system|
So in our ambisonic listening space, imagine you as an observer go to the origin and look due North (0 degrees). Then you turn 45 degrees East. This is the value of your Azimuth (position A in the image). You then tilt your head 45 degrees upwards. This is the value for your Elevation (position B).
So these two variables can describe any direction in a soundfield from an origin. What this can't seem to describe is how far away the object is. You could be looking straight at it but there's no variable to tell you whether it's on the moon or right in front of your nose.
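As a quick sanity check, converting an azimuth / elevation pair into a unit direction vector is straightforward (a sketch, using the convention of azimuth measured clockwise from North) - and notably there is no distance anywhere in it:

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };

// Azimuth: degrees clockwise from North; Elevation: degrees above the horizon.
// Returns a unit vector - direction only, no distance component.
Vec3 directionFrom(double azimuthDeg, double elevationDeg) {
    const double kPi = 3.14159265358979323846;
    double az = azimuthDeg * kPi / 180.0;
    double el = elevationDeg * kPi / 180.0;
    Vec3 v;
    v.x = std::cos(el) * std::sin(az);  // East
    v.y = std::cos(el) * std::cos(az);  // North
    v.z = std::sin(el);                 // Up
    return v;
}
```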
Loudspeaker Orientation - but not Position
The ambi.decode~ help patch shows the horizontal coordinate system in action. Each output channel from the decoder is by convention assigned a loudspeaker position related to a platonic solid. For example with 8 channels you could choose a 2D horizontal 'ring' layout (octagonal), a 3D 'cube' layout, or a 3D 'superimposed tetrahedron' layout. The loudspeakers are then positioned in the listening space where the intersections occur between lines in the shape.
Depending on the chosen layout you can then feed decoder orientation values for each loudspeaker, as horizontal coordinate system pairs. The ambi.decode~ help patch has some examples:
If you sketch out the example values (as I have), you can see how they work, along with predefined positioning, to construct a listening environment:
Looking at the cube layout sketch, you start to really get a sense of what an ambisonic soundfield really is. It's a 2 or 3-dimensional version of stereo (stereo being 1-dimensional). This also starts to explain why only the rotate transformation is included in Graham's patch. The other transformations make less sense in this context, and much more sense in the cartesian coordinate systems I'm used to.
The Sweet Spot
Ambisonics is therefore designed with a 'sweet spot' in mind. This spot will emanate from the origin, and its size will be dependent on the loudspeaker setup and the nature of the listening environment (outside noise etc).
|Sweet-spot inspired food|
Looking for more information, I came across an introduction written by Florian Hollerweger, a Sound Art PhD student at Queen's University Belfast.
The text describes how there are two different approaches to decoding B-Format data (Projection vs. Pseudoinverse). However, both methods share a crucial feature in common - it's very important that the layout of the soundfield be as regular as possible - and as closely related as possible to a platonic solid shape which the decoder is instructed to decode for.
Florian goes on to describe the nature of higher-order ambisonics. It quickly becomes clear that each higher order of ambisonics is basically about increasing the number of decoded channels - i.e. using a platonic solid with more intersections (to place the loudspeakers on).
This does a number of things:
- First, it increases the 'resolution' of loudspeakers, and therefore the 'localisation' of sounds. What 'localisation' means is your ability to point confidently to where you think a sound is coming from, without necessarily pointing straight to a loudspeaker. In other words, it's the illusion that the sounds around you are really floating freely rather than emanating from loudspeakers. This only applies for a listener in the sweet spot - degradation will still occur in a similar way as you move away from the sweet spot regardless of which order of ambisonics you use.
- However, secondly it will increase the size of the sweet spot. More speakers equals more volume, and the added loudspeakers mean that you can move them all further away to attain the same volume for a listener at the origin. The listener would have to walk further away from the origin before they experience the degradation - but the degradation would occur in a similar way.
- Of course, it means adding more encoder channels - 5 more for second-order and 7 more again for third-order (in 3D, order n uses (n+1)² channels in total, so 4, 9, 16...).
What Ambisonics Can't Do
All of the emphasis in ambisonics is on the sweet spot. It's still that thing of imagining a single listener in the middle (like my stick man in the image above), or a collection of listeners in a single space hearing the same thing. Even with all the extra layers and complexity, it's still about creating one sound (or soundspace).
It's patently not about creating a number of different sound experiences at once, with a freely moving audience, all experiencing their own personal version of the sound. And, as a specialised version of that, it can't produce an 'environmental' type of 3D sound.
What do I mean by that? Imagine a real physical bird flying through a 'cube' listening space, creating a fairly continuous noise as it flaps its wings and crows. Imagine also that you are standing right next to the point of the bird's entry, underneath a loudspeaker, looking toward the centre of the listening space. As the bird flies over your head you should hear the sound very loudly. The sound would then diminish for you even before the bird reached the centre of the space.
For a listener at the centre, the sound would start quietly and get louder when the bird reached the centre. This effect is not possible with ambisonics - you could cater for the listener at the centre (the sweet spot) but not for the listener at the edge. The soundfield is focused inwards, to a core audience.
This 'environmental' effect would require a different geometry to those available in ambisonics - i.e. more of a grid than a platonic cube. Like a Rubik's cube, with each intersection between the mini-cubes furnished with loudspeakers. Of course this in turn would require a different type of decoder.
The Jaaga building is modular, and large, and we have a maximum of 16 permanent loudspeakers for the building - some of which vary considerably in frequency range. The loudspeakers need to some extent to be distributed across the rooms, so one option would be to take 8 loudspeakers and set up an ambisonic room.
However the effect I'd like to achieve here would be the more environmental type - and specifically designed for a freely moving audience. We already have an irregular loudspeaker layout which takes into account various requirements:
- A clear sound environment for the large hall (downstairs) which can be useful for talks and screenings.
- Two loudspeakers for the Jaagaad installation on the first floor.
I don't want to force the loudspeakers in the space to conform to a platonic solid. Some geometry might be good, but one that is far more irregular (and explorable, and that relates somehow to the space).
Another option could be to develop a simple patch that combines a point from a 3D cartesian coordinate system with each mono sound. As the point associated with a mono sound closes in on a loudspeaker, the volume of that sound in that loudspeaker channel increases. Inspired by but different to ambisonics.
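The core of that idea fits in a few lines. A sketch of the gain calculation in plain C++ (hypothetical names, and an inverse-distance falloff picked arbitrarily - the right curve would need experimenting):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Volume of one mono sound in one loudspeaker channel, rising as the
// sound's virtual position approaches the loudspeaker's position.
// 'rolloff' controls how quickly the sound fades with distance.
float channelGain(const Vec3& sound, const Vec3& speaker, float rolloff) {
    float dx = sound.x - speaker.x;
    float dy = sound.y - speaker.y;
    float dz = sound.z - speaker.z;
    float d = std::sqrt(dx * dx + dy * dy + dz * dz);
    return 1.0f / (1.0f + rolloff * d);  // 1 at the speaker, falling off smoothly
}
```

Run once per sound per loudspeaker per control tick, this would give every listener in the room their own 'correct' version of the soundfield, rather than privileging a sweet spot.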
My next step could be to prototype this in Max/MSP and see how it works. But not to get too far ahead without considering the obvious: Who else has done this before, and what were their results?
Posted Wednesday, 6 April 2011
I've heard a little about ambisonics from sound artists, but now that we are going to attempt to use it at Jaaga, I decided to go back to basics. Here are the results.
First, I searched Youtube for a gentle intro. This 'man-in-white-coat' video from the EBU is perfect. In it, the 3D sound concept behind ambisonics is introduced and discussed.
They also introduce the notion of first-order (4-channel) vs. higher-order (multichannel) ambisonics, and mention Ambisonics' lack of compatibility with 2-channel sound. They say that in first-order ambisonics, just 4 audio channels can be interpreted to give "any number of loudspeaker signals that we need", but they don't explain how.
Advantages and Disadvantages
To get a little further, I went to Wikipedia. Here we have a nice list of advantages and disadvantages, which I'll paraphrase:
- Sounds from any direction are treated equally (not favouring front & side - i.e. it's isotropic), leading to better soundfield 'imaging'
- A minimum of 4 channels are required (first-order)
- Loudspeakers do not have to be arranged in a rigid setting; their placement can vary (within 'sensible' limits)
- The ambisonic signal is independent of the replay system - i.e. you can record / synthesize once and replay on any ambisonic-compatible loudspeaker arrangement
- Not supported by any major label or media company - never marketed, largely unknown outside academic circles
- Conceptually different to traditional one-speaker-per-channel approaches, so there is a small learning curve
- Needs to be decoded from storage format (called B-Format) into your loudspeaker environment. Hardware boxes or software can do this.
- Number of speakers vs. size of space: "... if the listening area is too large then, without treatment, the resulting soundfield can approach the limits of stability. This has resulted in some unimpressive demos"
B-format and Channel Roles
Later in the article it describes how the storage format works and what each channel does:
"[In first-order Ambisonics], sound information is encoded into four channels: W, X, Y and Z. This is called Ambisonic B-format. The W channel is the non-directional mono component of the signal, corresponding to the output of an omnidirectional microphone. The X, Y and Z channels are the directional components in three dimensions. They correspond to the outputs of three figure-of-eight microphones, facing forward, to the left, and upward respectively."
So with a four-channel file we have a mono channel and 3 positioning channels. What this doesn't tell me is exactly what data are stored in those three positioning channels, and how they are used to coax the information from a mono channel into a 3D space. I think I'm going to come back to this later though; for now I want to learn more about the listening space.
The Listening Space
The first thing on my mind is, how does B-Format translate into the listening space? Is B-Format enough to allow multiple moving sounds to move separately throughout the space, or only one?
I came across an excellent set of tutorials by Bruce Wiggins from the University of Derby, UK. There are five videos, each in lecture / presentation style, designed for the beginner, which is perfect. The presentations are framed around a software product called Reaper which, even if we don't end up using it, is interesting all the same.
One of the presentation screens contains a diagram that seems to answer my question:
This diagram shows how to encode separate audio tracks to a single B-Format file, and then how to decode that file into a multichannel listening space. In Bruce's words:
"During the encoding process each source is panned, and then sent to [an encoder bus] where it is summed with the other panned sources, decoded, and then sent to the speakers.
The videos continue into practical sessions.
Decoding and Encoding
Bruce begins by creating a track for the decoder. He sets it to be a 4-channel in, 8-channel out track and routes each output to a loudspeaker. He attaches a VST plugin to do the decoding from a 4-channel B-Format bus to the 8-channel listening space. (Bruce then switches to quad out for demo purposes over a stereo video, but the principle is there.)
The next stage is to add B-Busses, again as separate tracks, and to connect them to the decoder using Sends. In this case he creates a track for a reverb send, a track for the original 'dry' recording, and a track to sum the two together ready for the decoder.
Finally, Bruce creates a track representing a mono-in, and attaches a VST Encoder so that it outputs in B-Format. He then Sends this track to the dry / reverb B-Busses.
This answers a question I had been mulling over - given a mono audio track, where does the pan information come from? In this case, the pan information is added to the mono track via the VST Encoder plugin interface. You could add it in real-time, record it, or automate it. This is because we are encoding from a mono source - if you were using ambisonic recordings, the hardware would have written the B-Format for you and you wouldn't need an Encoder.
That's enough for an intro. The questions it leaves me pondering:
- Notably all of Bruce's decoder options assume a specific (and quite lab-oriented) loudspeaker setup, i.e. a perfect octagon with all speakers at a specific angle. I will have to look further into how to handle much more 'random' listening spaces.
- Similarly, the panning options in his VST encoder use Azimuth and Elevation. It seems that in order to get a nice 3D effect more appropriate options would be X, Y and Z co-ordinates? However it may be that the controls provided here are enough.
- Less important immediately, but how do the B-Format XYZ busses relate to the W bus? What data specifically do they store? And for example, if Audio Track 1 and Audio Track 2 contained the same frequencies but were in different XYZ locations, how does the mixdown process work? I have a feeling the answer may be 'just the same as mixing in stereo', but it's hard after years of thinking in terms of two channels to immediately make that leap, and I need to get into the XYZ bus data to really get that. (See the sketch after this list.)
- What changes when we delve into higher-order ambisonics? Does this improve the 'resolution' or some other factors as well?
- And of course, what are the different ways of working associated with other applications / plugins / environments, i.e. Max/MSP? I think this is the next question to answer, as I have a feeling that exploring it will answer some of the questions above.
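One partial answer to the XYZ bus question, sketched now so I can verify it later: as far as I can tell, the textbook first-order encoding equations simply weight the mono sample by direction-dependent gains, so the 'positioning channels' are really just scaled copies of the signal. A sketch (standard equations, but my own code and naming):

```cpp
#include <cmath>

struct BFormat { double w, x, y, z; };

// Classic first-order encode of one mono sample: the directional channels
// are the same sample, weighted by where the source sits.
// theta = azimuth (radians, anticlockwise from straight ahead),
// phi = elevation (radians above the horizontal).
BFormat encode(double sample, double theta, double phi) {
    BFormat b;
    b.w = sample * 0.7071;                           // omni, scaled by 1/sqrt(2)
    b.x = sample * std::cos(theta) * std::cos(phi);  // front-back
    b.y = sample * std::sin(theta) * std::cos(phi);  // left-right
    b.z = sample * std::sin(phi);                    // up-down
    return b;
}
```

If that's right, then mixing two encoded sources really is just channel-wise addition - the same leap as mixing in stereo, only with four channels instead of two.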
See the follow-up article here: Going Further with Ambisonics
Posted Sunday, 27 March 2011
I've been spending a lot of time today trying to figure out how best to connect the Servo motor we bought the other day. Unfortunately there isn't a NYU phys comp tutorial on driving servo motors with high voltages, which would have been ideal. I've had to learn a few new things, from a mixture of sources, and I'll pull the results together here.
The difference between standard geared motors and servo motors
So the first thing to understand is that a geared DC motor like the ones we used in the last couple of posts is quite dumb. It is directly controlled by the DC current you supply via its two inputs. You can only really tell it to start or stop. That's it.
You can vary the speed using PWM - but PWM is a kind of microcontroller 'hack' in a sense. What's really happening there is that you are very quickly turning the motor on and off, and varying the delay between those on and off commands. The motor itself doesn't have any control in that process - it just spins when it receives voltage on one of its inputs.
If you wanted to be able to tell the motor to turn to a specific angle, you'd have to attach a sensor (i.e. a potentiometer) to the shaft. This could report the current angle of the shaft back to the microcontroller. You could use this information to figure out when to switch off the current to the motor to have it arrive at your desired destination on time.
And this is exactly what a servo motor does, except rather than reporting the position back to the microcontroller and you having to work it out, you just send the position you would like it to achieve, and it does all the working out for you. For this reason, unlike the standard geared DC motors, it has three inputs - one for ground, one for V+, and one to send your control signal.
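On the microcontroller side, that interface really is that simple. Here's the shape of a standard Arduino sketch using the stock Servo library (the pin number is arbitrary, and the servo's V+ is usually best taken from the external supply rather than the Arduino's 5V pin):

```cpp
#include <Servo.h>

Servo myServo;

void setup() {
    myServo.attach(9);  // control signal on pin 9; V+ and GND go to the supply
}

void loop() {
    myServo.write(0);   // ask for 0 degrees - the servo works out the rest
    delay(1000);
    myServo.write(90);  // then 90 degrees
    delay(1000);
}
```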
When to use an H-Bridge and when to use a TIP120?
Tobias and my Physical Computing book both seem to suggest I need a TIP120, not an L293 H-Bridge (which is what I used when connecting the motors in the previous posts). The question is, why?
First off, what is an H-Bridge? And what is a TIP120? Going back to the beginning of the chapter (page 97), I get a brief explanation of what a relay is and what a transistor is. It explains that a relay is an electromagnet that flicks a switch for you whenever you send a control current to it (i.e. it's a switch that doesn't need to be flicked by a person - you can flick it automatically in your microcontroller code). It then just says that a transistor does the same thing but doesn't have any moving parts, and so is quicker - but will only work with DC.
With some googling I find that the TIP120 and the L293 have something in common. The L293 H-Bridge contains transistors, and the TIP120 is a transistor. So it seems that driving powerful loads is all about transistors - using a control current to switch on and off a powerful current.
So this is the information I need to really make it make sense. Both components only allow you to switch higher-load circuits on or off. A standard geared DC motor is spun one way or the other by alternating the direction of DC current through its two connections. This is what an L293 H-Bridge is useful for, as it gives you two output legs to the motor, through which you can send HIGH to one and LOW to the other and vice versa.
However a servo motor only requires a high voltage to give it some power to work with. Instead of blindly increasing and decreasing speed through PWM, you set the position using a separate control signal that goes straight to the motor. The motor does the dirty work for you, and as a result you get a direct correlation between the value you send it and the location the shaft turns to - no wonder they are more expensive. So for this, a single transistor is enough - like the TIP120.
It took a lot of effort to piece that together! Tomorrow hopefully I can actually build the circuit.
Posted Wednesday, 23 March 2011
Yesterday, we tested a fairly timid 5V DC Motor with Arduino, and I blogged about the experience of setting it up and figuring out the components.
Today we used exactly the same circuit, but switched to a more powerful 12V DC motor which we picked up on SP Road for 175 INR (£2.50). The only change we made was to switch up the variable power supply to 12V.
In short, compared to the 5V motor, the 12V is much more powerful, faster, and much noisier! At the store they offered a range of speeds but we picked the fastest, as you can always slow the speed programmatically using PWM (as shown in the video).
Due to the noise I can't imagine using this motor in an otherwise silent environment (unless the concept is to play up the tech / inner workings) - it's just that noisy, and not what you'd traditionally call a 'pleasant' noise at all. It would be useful to know if there are ways to reduce or dampen the noise emitted.
Posted Tuesday, 22 March 2011
Since arriving at Jaaga I have been refamiliarising myself with simple electronics, along with Agnese. Today we set up a simple circuit to drive a DC motor back and forth. We googled and found an online guide from a physical computing course at New York University, which was really helpful, and you should definitely check it out if you want to build this circuit.
It's pretty cool how much easier it is nowadays to find very friendly guides for beginners. In particular I like how they use a standardised Arduino graphic instead of dry schematics, which are concise but off-putting to the uninitiated.
The pushbutton on the top left of the breadboard is just there to change the polarity, i.e. the direction of the motor. It's essentially separate from the rest of the circuit, so we can ignore it and look to the rest.
The H-Bridge (the chip in the middle) looks a little scary to begin with, especially when you first glance at the graphic associated with it (below). But once you take the time to look over what each leg actually does, you start to get a bit more comfortable.
The H-Bridge's job is to allow a motor which requires a high load (i.e. 5-15V) to be controlled by a control signal (i.e. the Arduino standard 5V). In this case the motor we are using accepts 5V but we're going to set up a circuit that will allow the use of more powerful motors, so that we can just switch the 5V for a 10 or 12V motor later... hence the need for the H-Bridge.
So since the H-Bridge just has to translate 5V control signals to 5-15V motor signals, what ins and outs might it need? You might expect one leg to go to the external 5-15V power source, one to ground, one to +5V, two to the motor, and one control signal to the Arduino.
As the diagram shows, in reality things are a little different. The two that go to the motor are there - marked as 'Motor Terminal 1' and 'Motor Terminal 2'. The one that goes to the power supply is there too, marked 'VCC2'. But there are four legs that go to ground! This is according to the schematic and also the imagery in the guide. I don't know why yet, but that's OK - four it is.
The next odd thing is that we have not one but THREE control signals - i.e. legs that go to the Arduino's digital out pins - which are pins 1, 2 and 7 above. The reason is that in order to turn the motor one way you use the two 'Motor Logic Pins' and send HIGH to one and LOW to the other. When you want to reverse the motor, send the opposite - LOW to one and HIGH to the other.
The third control pin (pin 1 in the diagram) is an enable switch. If it receives HIGH the motor will work, otherwise it will not. So you can connect it to a constant +5V or set it programmatically in the microcontroller code. It functions as a programmatic on/off switch or safety-off, i.e. like the big red buttons on heavy machinery.
And the pins that aren't connected to anything on the right-hand side of the H-Bridge? The H-Bridge can control two motors at once if you want, just mirror the pin setup you have on the left to the right.
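For completeness, the microcontroller side of today's circuit looks something like this (the Arduino pin numbers here are just what we happened to wire; the H-Bridge pin roles follow the guide's diagram):

```cpp
const int enablePin = 9;  // to the H-Bridge enable pin: motor on/off
const int motorPin1 = 3;  // to one 'Motor Logic' pin
const int motorPin2 = 4;  // to the other 'Motor Logic' pin

void setup() {
    pinMode(enablePin, OUTPUT);
    pinMode(motorPin1, OUTPUT);
    pinMode(motorPin2, OUTPUT);
    digitalWrite(enablePin, HIGH);  // the 'safety-off' switch: HIGH = run
}

void loop() {
    digitalWrite(motorPin1, HIGH);  // spin one way...
    digitalWrite(motorPin2, LOW);
    delay(2000);
    digitalWrite(motorPin1, LOW);   // ...then reverse
    digitalWrite(motorPin2, HIGH);
    delay(2000);
}
```

Sending the enable pin an analogWrite() value instead of a plain HIGH is where PWM speed control comes in.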
Fun, geeky stuff. We'll pick up a more powerful motor to play with later today. Here's the final result for today:
Posted Tuesday, 22 March 2011
I'm one full week in to my residency at Jaaga, so what have I been doing?
I met up with my collaborators on Jaaga Light & Sound, Agnese Mosconi and Tobias Rosenberger. They had already kicked off a series of workshops to stimulate involvement from Bangaloreans, which we continued this week. We cover topics from Arduino to Max/MSP and encourage participants to contribute ideas for Jaaga.
The workshops are open to anyone and we get a nice mix of people, ranging from graphic design students with zero electronics experience through to advanced hackers (many are techs from some of the big R&D centres that multinational corporations have built here in recent years). These guys already have very advanced skills and are looking for cool projects to get involved in, so I've been focusing initially on getting the newbies up to scratch.
To facilitate the workshops and experiments we took a visit to SP Road to stock up on components and tools.
We are searching now for composer(s) to work with us on the sound aspect of the project, so get in touch if you are in Bangalore and compose using Ableton or any other software we can send control signals to (via OSC for example)... we'd like to hear from you!
And rounding the week off was the Indian celebration of Holi - the festival of colour. For the wealthy this seems to mean getting made up and elegantly applying a dash of colour. But for a lot of people it means randomly tossing dye all over your friends. So my new roommate Mandy and I walked the streets looking for trouble. We found a little - mostly from ourselves...
In the final photo there is my new friend, Berlin. Now on to week 2!
Posted Monday, 14 March 2011
I did my first day of the Jaaga residency today!
And at about 7pm I gave a short, hastily-prepared informal presentation of my work and background until now. It's something all resident artists have to do when they start. It's not really something I've done much before, and I wasn't 100% sure how to do it.
The people watching were mainly other participants at Jaaga so I asked for feedback immediately afterwards - knowing I may have to give the same presentation again soon, and more publicly this time. I'm glad I asked now as the feedback was very useful.
- I talked about my background as an agency programmer, and then talked about music compositions and installation artworks I had produced. It was suggested I could explain this progression, i.e. what were the motivations behind these changes?
- I wasn't sure how to frame a tech-heavy background for a non-tech audience. It was suggested I could show a couple of website front-ends as visual aids and talk a little about what the software did, by reference to software they may be familiar with, like Blogger / WordPress.
- For the later parts where I play video / media, I had a tendency to play a video and talk over it (!) and then skip onto the next video when I was done, without checking if the audience understood or needed more information. I also only offered scant clues to the context or concept behind each work.
For the last one about video, it was suggested I use the following process:
- Talk first about the context and / or concept behind the work, before hitting play
- Then play the media to illustrate what resulted from that context
- Pause afterwards before continuing to the next project, and try to make people feel comfortable to ask questions
It's funny - the feedback I got seems obvious in hindsight. It's the same thing I've thought watching other people's presentations in the past. But it's somehow a different story when you're standing in front of people and you're not sure what they expect. At the end of the day you have to keep a conversation going which all parties are interested in. And you can do that by expressing a clear narrative, and offering opportunities for feedback into the process.
Looking forward to improving this process next time.
Posted Wednesday, 2 March 2011
I'll be starting my Jaaga fellowship soon, and as I sit here soaking up the sumptuous rays of the Porto Alegre (Brazil) cityscape visible from my hotel balcony, I can't help but think how little I know of what I'm going to be doing there. I know the guys I'll be working with (some of them), and I know what kind of community awaits me, and that's all I need to know. They are a nice bunch, and I'll be with friends, and the rest isn't so important. Until we start doing it!
Conceptually, it's about a million miles from my London life as a freelance software developer. There, I was living alone in a flat, occasionally meeting friends for drinks, working a lot, meeting deadlines. The dreaded urban nine-to-five... well, ten-to-eight more often. Chasing the dollar.
Now, apart from a small stipend, which I'll basically be relying on, there won't be much money flowing through. Still, there's the promise to really be part of something cool - and not just 'cool' in the commercial sense, but cool for its own sake. I'm looking forward to getting my Jaaga on.