New York state has some outstanding areas of natural beauty. On Sunday we hiked up a trail in Cold Spring, and brought back some photographic evidence.
I want to talk in this post about some of the discussions that came up as we walked, and the way they are affecting how I'm thinking about my upcoming projects. I'll illustrate the text with some shots of our hike.
I met Hannah Gould, and as we hiked she told me about her father's studio out in San Jose, California - and about their shared admiration of Andy Goldsworthy. Bill Gould creates various types of physical sculpture, signage, fencing and gates for public locations.
It seems possible that I could arrange a trip to see him and spend some time in the studio creating work inspired by land art.
It all seemed to tie in very nicely with the plans I have been making for I-Park. It feels like a cloud is forming around this idea of 'land art', portable projection, and visual-sound correlation.
I'm beginning to see several disparate concepts converge, all of which I can come back to, flesh out and consider over the coming months.
Spatial augmentation and exploration
I'm really interested in thinking about the way that we consciously and unconsciously project value onto objects and environments around us. 'Nature' and 'natural objects' - rocks, trees and rivers - are perceived within a context shaped by the observer's own history and background. In the work I produce I want to think about the ways my augmentations of these objects affect this sense of value.
For example, in the arrangement of natural objects in the traditional sense of land art, do we 'humanize' the object, in making it more relevant to us? Or are we moving closer to nature, in the sense that we are augmenting natural objects by revealing natural patterns?
And one step further from that, does it really make sense to demarcate 'natural' and 'human' and to place them at opposite ends of a scale?
Working with natural textures
And what happens to this conversation when we introduce digital elements?
If I project algorithmically-constructed patterns as light onto a naturally textured surface, does this do anything more to the surface than can be achieved with the use of natural materials alone?
Will the strictness of the algorithmic geometry work against the endless unpredictability of the natural texture? Will this form a disconnect, and what might this disconnect say about the demarcation between 'natural' and 'human'?
And does this disconnect occur in 'pure' land art?
I've always preferred 'natural' textures to algorithmically-generated ones (e.g. Perlin noise), and so I'm fascinated by the idea of projecting onto them. So much projection mapping targets surfaces that are as geometrically 'perfect' as possible; it will be really liberating to explore with the purpose of highlighting the 'flaws' in a surface rather than concealing them.
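For reference, here is a minimal sketch of the kind of algorithmically-generated texture I mean: a crude value-noise function (a simpler cousin of Perlin noise) that smoothly interpolates random lattice values into a frame of brightness levels. Every figure here - lattice size, scale, seed - is an arbitrary illustrative choice.

```python
import random

def make_lattice(w, h, seed=1):
    """Random brightness values at grid corners - the 'source' of the texture."""
    rng = random.Random(seed)
    return [[rng.random() for _ in range(w + 1)] for _ in range(h + 1)]

def smoothstep(t):
    """Ease curve so the interpolation shows no hard grid seams."""
    return t * t * (3 - 2 * t)

def value_noise(px, py, lattice, scale=8.0):
    """Smoothly interpolated noise value in [0, 1] at pixel (px, py)."""
    gx, gy = px / scale, py / scale
    x0, y0 = int(gx), int(gy)
    tx, ty = smoothstep(gx - x0), smoothstep(gy - y0)
    a, b = lattice[y0][x0], lattice[y0][x0 + 1]
    c, d = lattice[y0 + 1][x0], lattice[y0 + 1][x0 + 1]
    top = a + (b - a) * tx
    bottom = c + (d - c) * tx
    return top + (bottom - top) * ty

# A 32x32 'pattern' of brightness values, ready to be rendered and projected
lat = make_lattice(4, 4)
frame = [[value_noise(x, y, lat) for x in range(32)] for y in range(32)]
```

The strictness I'm talking about lives in the regular lattice underneath; even 'organic'-looking algorithmic noise is built on a perfectly even grid.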
The synaesthetic effect
The aspect which excites me most about digital projection right now is the way it can be linked in real-time to other perceptible events - in particular simultaneously generated sound. Some people call this the 'synaesthetic' effect, though synaesthesia is about much more than just vision and sound.
Walking with Gene we discussed potential experiments we could work on with SuperCollider, and combining it with VVVV. Gene is passionate about SuperCollider at the moment, and we have already agreed to collaborate over the next few months before he goes to India.
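OSC (Open Sound Control) is the usual bridge between visual environments like VVVV and SuperCollider, and the wire format is simple enough to sketch by hand. Here is a minimal encoder plus a UDP send, as a sketch of how a visual parameter could drive sound in real time. The address `/amp` is an assumed name, and 57120 is sclang's default port - on the SuperCollider side a matching responder would have to be registered to receive it.

```python
import socket
import struct

def osc_pad(b):
    """OSC strings are null-terminated and padded to a multiple of 4 bytes."""
    return b + b"\x00" * (4 - len(b) % 4)

def osc_message(address, *args):
    """Encode an OSC message carrying float arguments."""
    typetags = "," + "f" * len(args)
    return (osc_pad(address.encode())
            + osc_pad(typetags.encode())
            + b"".join(struct.pack(">f", a) for a in args))

def send_to_supercollider(address, *args, host="127.0.0.1", port=57120):
    """Fire a UDP datagram at sclang (57120 is its default language port)."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(osc_message(address, *args), (host, port))
    sock.close()

# e.g. a visual parameter (brightness of the projected pattern) driving amplitude
send_to_supercollider("/amp", 0.5)
```

In practice a ready-made OSC node or library would do the encoding, but seeing the bytes makes it clear how cheap the vision-to-sound link is: one small datagram per parameter change.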
These experiments, along with the land art training in Bill Gould's studio could provide a really solid base for the work I produce at the I-Park residency.
The technical part of portable projection
The one and only reason I am interested in 'portable projection' is so that I can walk off with a projector to a remote location and not have to worry about a power source. The way I've seen this achieved is by connecting a car battery to a projector. There are of course pico projectors, but these have nowhere near enough lumens to make a convincing 'coat of light'.
The car battery approach means that each projection will have a very ephemeral nature. The projector will need to be switched off shortly before the power runs out. This gives only about an hour, including set-up and mapping time.
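For a rough sense of where that hour comes from, here is a back-of-the-envelope runtime estimate. All the figures are assumptions for illustration: a mid-sized 60 Ah / 12 V car battery, a 200 W projector, 50% usable depth of discharge (so the battery isn't destroyed), and 85% inverter efficiency.

```python
def projector_runtime_hours(capacity_ah, battery_volts, projector_watts,
                            usable_fraction=0.5, inverter_efficiency=0.85):
    """Hours of projection from a battery: usable energy divided by draw."""
    usable_wh = capacity_ah * battery_volts * usable_fraction * inverter_efficiency
    return usable_wh / projector_watts

# Illustrative figures only: 60 Ah, 12 V battery; 200 W projector
hours = projector_runtime_hours(60, 12, 200)
print(f"{hours:.2f} hours")
```

That comes out at roughly an hour and a half of total power, which, minus set-up and mapping time, leaves about an hour of actual projection.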
Computer Vision and quick mapping
Gene is also very interested in projection mapping, and particularly in using Computer Vision libraries to create a real-time automapper using a webcam. Software could continuously scan the webcam feed, using edge detection and other algorithms to redraw its internal representation of the projected surfaces. This in turn would mean that as you move the projector around the object, the 'coat of light' applied would continuously update (a considerably lower-tech version of this).
This is lofty stuff, but I am interested in exploring it because it could speed up the mapping process, which, when you are time-limited by battery life, would be a really positive thing.
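In practice an automapper would be built on a computer-vision library (OpenCV's Canny detector over a live webcam capture), but the heart of one automapper step - find the edges in a frame so the surface model can be redrawn - can be sketched with nothing more than a gradient threshold. The frame below is a synthetic stand-in for a webcam capture.

```python
def edge_mask(frame, threshold=64):
    """Mark pixels where brightness changes sharply - a crude edge detector.
    frame is a 2D list of 0-255 brightness values (one webcam frame)."""
    h, w = len(frame), len(frame[0])
    mask = [[False] * w for _ in range(h)]
    for y in range(h - 1):
        for x in range(w - 1):
            gx = frame[y][x + 1] - frame[y][x]   # horizontal gradient
            gy = frame[y + 1][x] - frame[y][x]   # vertical gradient
            mask[y][x] = abs(gx) + abs(gy) >= threshold
    return mask

# Synthetic 'capture': dark surface on the left, lit surface on the right
frame = [[0] * 3 + [255] * 3 for _ in range(4)]
edges = edge_mask(frame)
# The edge appears at the dark/lit boundary (column 2) and nowhere else
```

The real system would run something like this on every frame, feeding the detected contours back into the projection mapping so the 'coat of light' tracks the surface as the projector moves.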
I will come back and look at these subjects more in the coming weeks, to see where a little fleshing out takes them.