FLUID RESONANCE: A Digital Water Prototype


What if water could be an instrument?

If the vibrations from a single drop of water in the ocean could be suspended in momentary isolation, what infinite arrays of symphonic arrangements could we hear? A constant flow of signals in the tide of the universe, codified in sounds, waiting to be experienced in time. There are moments when we are moved by sound, a direct emotional connection to the physical movement of orchestrated disturbances in the air; an unseen but pervasive, invasive and volumetric presence. These are the characteristics that concern me now, and they are the focus of the project. The relational interactions that form in space, engendering the event and its parameters, are subtly and violently manoeuvred by invisible actions, a subtext underlying the visible surface - just as sound changes its timbre according to the material surface it reverberates on, yet continues to seep into the substrata of matter.

Music makes time present. Or, at least, it makes one aware of time, even if one loses track of it. Take, for example, Leif Inge’s 9 Beet Stretch, a reimagining of Beethoven’s 9th Symphony stretched into a 24-hour journey (a sample can be heard here on RadioLab at 4:23, or listen to the full streaming version here). The remastered, continuous audio materializes the presence of sound and the distillation of a captured moment, giving one a chance to reflect on the mortal moments that stream by subconsciously every minute, without awareness. In this example, we are transported into the life of a musical movement in its own existence; in contrast, another way of thinking about the relation of time and sound comes in the form of crickets. An analog field recording of a soundscape of crickets was slowed down, stretched to a timescale equivalent to a human lifespan scaled up from that of a cricket. What emerges is a harmonic layering of triadic chords playing in syncopated rhythm, like the ebb and flow of a call and response. Note: the field recording has since been reported to have been accompanied by opera singer Bonnie Jo Hunt, who recalled, “…And they sound exactly like a well-trained church choir to me. And not only that, but it sounded to me like they were singing in the eight-tone scale. And so what–they started low, and then there was something like I would call, in musical terms, an interlude; and then another chorus part; and then an interval and another chorus. They kept going higher and higher.” (ScienceBlogs 2013).

When we slow down, speed up, alter, or otherwise translate intangible concepts into human-scaled pieces we can hold, we have an opportunity to glimpse, from our own point of view, a dimension outside our horizon that we never would have encountered in its habitual form. It may not grant us full access to aspirations of knowing or truth, but the discovery of interrelated phenomena - whether time and music, water and sound, or natural and computational glitches causing anomalies - gives us a better understanding of the effects and consequences of the tools used to define a language, which in turn defines our state of being and future intent.

And what of the water project?

The original pairing of Processing and music began with an introduction to cymatics - a term used to describe experiments in a substance’s patterned response to various sine-wave tones.

And here’s a polished music video of many such experiments compiled into a performance by Nigel Stanford:

Water revealed the vibratory patterns of tones in consistent yet exciting designs, which opened the exploration into sound processing. Minim became the library that handled the recall of pre-recorded instrumentation (or any .wav / .mp3 file); however, it was not the first library to be experimented with. Beads, a sound-synthesis library with the capacity to generate sine-wave tones, provided an introduction to visualizing a simple waveform.

The location of the mouse pointer on the screen changed the wave’s modulation and frequency in relation to its vertical and horizontal movement, respectively. The input of movement and proximity changed the pitch of the output tone.
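
The Beads exercise itself is not reproduced here, but a minimal sketch in the same spirit might look like the following, with mouseX mapped to frequency and mouseY to gain; the ranges and variable names are assumptions, not the original code.

import beads.*;

AudioContext ac;
Glide freqGlide, gainGlide;

void setup() {
  size(640, 480);
  ac = new AudioContext();
  freqGlide = new Glide(ac, 440, 50);   // smooths changes in frequency
  gainGlide = new Glide(ac, 0.3, 50);   // smooths changes in volume
  WavePlayer sine = new WavePlayer(ac, freqGlide, Buffer.SINE);
  Gain g = new Gain(ac, 1, gainGlide);
  g.addInput(sine);
  ac.out.addInput(g);
  ac.start();
}

void draw() {
  background(0);
  // horizontal position sets pitch, vertical position sets loudness
  freqGlide.setValue(map(mouseX, 0, width, 100, 1000));
  gainGlide.setValue(map(mouseY, 0, height, 0.5, 0.0));
}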

Another variation of sound visualization from Beads was the Granulation example. This exercise took a sample of music, then ‘flipped’ the composition, pushing and pulling the tones and stretching them into digitized gradations of stepped sound. Imagine a record player turning a 45 rpm disc with minute spacers between every 1/16 of a second, spinning at 33 rpm (but at the same pitch) - the finite digital bits reveal themselves, yet the tones remain linked in a digitized continuum. This would later become very influential in the final performance of the sound generated by water.
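
As a rough illustration of that granulation idea, a sketch along the lines of the standard Beads GranularSamplePlayer examples is shown below; the file name and parameter values are placeholders rather than the original exercise.

import beads.*;

AudioContext ac;
GranularSamplePlayer gsp;
Glide rate, grainSize, interval;

void setup() {
  size(640, 480);
  ac = new AudioContext();
  // load a short musical sample (placeholder file name)
  Sample source = SampleManager.sample(dataPath("loop.wav"));
  gsp = new GranularSamplePlayer(ac, source);
  rate = new Glide(ac, 0.5, 100);      // crawl through the sample at half speed...
  grainSize = new Glide(ac, 60, 100);  // ...in roughly 60 ms grains
  interval = new Glide(ac, 40, 100);   // started every 40 ms, so grains overlap
  gsp.setRate(rate);
  gsp.setGrainSize(grainSize);
  gsp.setGrainInterval(interval);
  Gain g = new Gain(ac, 1, 0.4);
  g.addInput(gsp);
  ac.out.addInput(g);
  ac.start();
}

void draw() {
  // dragging right stretches the sample further without changing its pitch
  rate.setValue(map(mouseX, 0, width, 0.1, 1.0));
}
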
An inquiry into the physical properties of cymatics proved to be challenging. Initial investigations were conducted with a coagulate fluid (water and cornstarch).

It was soon discovered that commercial-grade hardware and equipment would be needed to achieve an effective result. The physical route remained challenging, and time did not permit further exploration. (Gary Zheng continued to explore cymatics to great effect based on these initial experiments.) A second option was to simulate cymatics through visual processing, leading to some play with Resolume, a sound-visualization software popular among DJs who use it to augment their sets with responsive graphic media.

Initially, the layered track interface and preset BPM files made this an easy-to-use software medium. Pre-made .mov or .wav files could be loaded to simulate interaction with the beat-heavy tracks. For entertainment value, Resolume has much to offer and is easily accessible. But spontaneity is removed from the equation: the output depends on the user’s technical knowledge of the software and the constraints of the program.

The method of investigation revealed interesting physical responses to sound and, in turn, inverted the cymatics experiments: instead of sound causing form, form would result in sound as feedback. The intrinsic displacement of the water’s surface could create an effect through captured video, so water became the focus as an instrument of motion represented by auditory output, no longer an after-effect of sound.

Motion vs. Colour detection

A deductive experiment compared two forms of video motion detection, based on exercises conducted with code originating from Daniel Shiffman’s processing.video samples (www.learningprocessing.com). First, there was colour detection through video processing. This version of motion detection would have dictated adding physical objects and/or coloured substances to the water. Adding elements complicates the design process and alters the baseline state of the clear-water interface, so additive fluid colouring was not a favourable option.

Video pixels in the active frame are compared against a selected colour, and only pixels within a certain threshold of that colour are tracked; in this case, the off-colour gestures turned the camera from a motion sensor into a motion censor.
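
For reference, a minimal colour-tracking sketch in the spirit of the Learning Processing example (not the project code; the starting colour and distance threshold below are arbitrary) shows the basic mechanism: find the pixel closest to a chosen colour and mark it.

import processing.video.*;

Capture video;
color trackColour;          // colour chosen by clicking on the frame

void setup() {
  size(640, 480);
  video = new Capture(this, width, height);
  video.start();
  trackColour = color(255, 0, 0);   // start by tracking red
}

void captureEvent(Capture video) {
  video.read();
}

void draw() {
  video.loadPixels();
  image(video, 0, 0);

  float closestDist = 500;   // "world record" distance in colour space
  int closestX = 0, closestY = 0;

  for (int x = 0; x < video.width; x++) {
    for (int y = 0; y < video.height; y++) {
      color current = video.pixels[x + y * video.width];
      float d = dist(red(current), green(current), blue(current),
                     red(trackColour), green(trackColour), blue(trackColour));
      if (d < closestDist) {
        closestDist = d;
        closestX = x;
        closestY = y;
      }
    }
  }

  // only mark the point if something reasonably close to the target colour was found
  if (closestDist < 10) {
    noFill();
    stroke(255);
    ellipse(closestX, closestY, 20, 20);
  }
}

void mousePressed() {
  // pick a new colour to track from the clicked pixel
  trackColour = video.pixels[mouseX + mouseY * video.width];
}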

Next up was the motion-gesture test: a basic frame-differencing visual with motion knocked out in black pixels, using a set threshold calibrated for the scene.

The gesture test proved less discriminating about which pixels it detected, so the existing conditions of light and the material properties of the scene would be critical in the final set-up of the performance, especially for a clear, less detectable substance such as water. An early capture of the water’s surface through the video camera indicated that the camera’s sensitivity would be sufficient, and better still in a controlled environment.

A third and most important layer of the experimentation was the implementation of the split screen, an application using coloured elements to respond to motion. Coloured items appeared, indicating the detection of movement in the designated zone of the screen.

At this point, the design of the project became clear. A music interface would be created with water, from which motion would be detected through user interaction (imagine water-drop syringes annotating musical notes in a pool; the notes vary along a scale depending on where you release the droplets). The vibration of the water is also augmented by a graphic icon, colour-coded to represent the different tones. Once the user has made the connection between the interface, colour cues, notes and zones, improvising and creating a melody through patterning becomes intuitive.

As the design of a musical interface called for a variety of designated tones, a grid was mapped out to correspond to a simplified scale. Eight tones would be selected, representing a major key as a starting point for harmonic layering. A small motion threshold kept incidental changes from registering, maintaining a relatively neutral background, while a graphic icon was required to track the gesture: this gave visual feedback to the user, clarifying the orientation of the interface and the navigation of the zones.
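
The grid itself is simple arithmetic on the default 640x480 frame: four columns of 160 pixels by two rows of 240 pixels, giving eight zones. A small helper like the one below (hypothetical; the project code later spells the zone tests out long-hand) makes the mapping explicit.

// map a pixel coordinate to one of eight tonal zones (0..7)
int zoneFor(int x, int y, int w, int h) {
  int col = constrain(x / (w / 4), 0, 3);  // four 160 px columns across a 640 px frame
  int row = (y < h / 2) ? 0 : 1;           // two 240 px rows: upper and lower
  return row * 4 + col;
}

void setup() {
  size(640, 480);
  println(zoneFor(400, 100, width, height));  // a point in the third upper zone prints 2
}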

An important aspect of the grid layout and the user interaction was a method of knowing where the user was affecting the interface, since the visuals were a representation augmenting the real, physical interactions of the water. It was determined that a static image appearing intermittently did not represent the action of dropping water, so a sequenced animation (GIF) was customized.

Eight unique colour variations of a 15-frame GIF were created. A GIF library was then sourced to introduce the animation into the code: GifAnimation was used to activate the series of images. There were at least a couple of ways to integrate the animation: as a sequence of still images, or as a compiled GIF (the latter was chosen for this instance). For further information, here is a link to start: http://extrapixel.github.io/gif-animation/
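
For comparison, the still-image-sequence route would look something like the sketch below, with hypothetical frame names; the project itself used the compiled-GIF route through the GifAnimation library, as shown in the full code at the end.

PImage[] animation = new PImage[15];

void setup() {
  size(640, 480);
  for (int i = 0; i < animation.length; i++) {
    // frames exported individually, e.g. drop_00.png ... drop_14.png (placeholder names)
    animation[i] = loadImage("drop_" + nf(i, 2) + ".png");
  }
  frameRate(30);
}

void draw() {
  background(0);
  int frame = frameCount % animation.length;   // advance one frame per draw() call
  image(animation[frame], mouseX, mouseY, 60, 60);
}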

In order for the GIF to be successful, it had to follow the pixels in the zone it was assigned to, and it needed to appear in the approximate area where the most change occurred in the video processing. What transpired was a coloured “droplet” image appearing where the real droplets of water were playing out on the water’s surface. (Many thanks to Hart Sturgeon-Reed, who consulted on and helped to apply the following.)

///draw the gifs
if (points > 80)
{
xPosition = xsum/points;
yPosition = ysum/points;
println(xPosition);
println(yPosition);
}
//Upper left 1
if (xPosition … and so on, for each tonal zone.
To recap, the foundation of gesture motion detection was layered with split-screen detection, divided first into quadrants and then into eighths of the screen (default 640x480). Video processing also enabled tracking of the GIF, which was implemented with the GifAnimation library. Minim was then used for playback of pre-recorded, royalty-free audio; in this case, the default notes of a guitar were selected as a basis for the sounds - a simple foundation, easily recognizable, with the potential to grow in complexity.

A fundamental conceptual leap occurred in the playback results. Initially, the pre-recorded single-note tone would play, and a simple identification of a sound would be the result. Minim is capable of playing back a complete song if needed, recalled at the critical moment; that method can slow the recall down, however, and since the tones for this project were short, they required quick access and activation. Another drawback of play() was its one-time cycle: a rewind was required afterwards, it did not always reset as expected, and the tone often cut short as other tones were activated through the water motion. To counter the stuttering effect, troubleshooting with trigger() produced interesting results. As the motion of the water continuously recalibrated the video detection whenever its surface was broken by the droplets, the tones were triggered with a constant reset, creating a continuous overlap of sounds, not unlike the earlier granulation experiments with the Beads library. So here we are with a unique sound that is no longer like a guitar, because it retriggers itself in a constant throng of floating, suspended long notes weaving between each other. It is guitar-like, yet it is pure Processing sound, delivered from simulation to simulacra, activated by the natural element of rippling water.
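
The difference between the two Minim approaches boils down to a few lines; in the sketch below (the file name is a placeholder), AudioPlayer.play() gives a single pass that must be rewound, while AudioSample.trigger() can be fired again before the previous hit has finished and simply layers over it - the behaviour that produced the granular, overlapping texture described above.

import ddf.minim.*;

Minim minim;
AudioPlayer player;   // one-shot playback that needs rewind()
AudioSample sample;   // short, retriggerable buffer

void setup() {
  size(200, 200);
  minim = new Minim(this);
  player = minim.loadFile("note.mp3", 2048);
  sample = minim.loadSample("note.mp3", 2048);
}

void draw() {
  background(0);
}

void keyPressed() {
  if (key == 'p') {
    player.rewind();    // without this, a second play() resumes from the end of the file
    player.play();
  } else {
    sample.trigger();   // each press layers a fresh copy over the ones still sounding
  }
}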

The second point to note about the visual and auditory feedback was the glitches. In the coding structure, parameters defined each zone’s area within the screen (each zone measured 160x240 pixels), from which an approximate point of contact - where the most action occurred - would be represented by the GIF droplet icon. But as the water’s surface continued to overlap and ripple into adjacent zones, the icons appeared to blip outside their originating boundaries, often overlapping one another (in the code, a single averaged point - xPosition, yPosition - is computed from all changed pixels in the frame, so when ripples span several zones that point, and with it the icon and the triggered tone, jumps between zones). This was reinforced by the seeming overlap of tones, when in fact each tone was activated individually; because of the immeasurably small lapse between triggered effects, two tones could sound as if they were playing simultaneously when they were actually bouncing back and forth between sounds. The flowing, contained water would activate multiple zones at once, and I can only surmise that the Processing sequence arbitrarily determined which zone and tonal value to play in order of activation, constantly re-evaluating as the water produced new results and changing the sounds by the millisecond.

The set-up: a podium, web camera, projector, speakers, light source, laptop, water dish and two water droppers. Simplicity was key, considering the varying levels of sensory input and stimulation, and the learning curve was quick thanks to the immediacy and responsiveness of the performance.

(Two other experiments to note: FullScreen - an application whereby the Processing viewing window stretches to the native size of the computer (laptop) screen - and frame rate - the calibration of how often the camera input refreshes its data - did not synchronize, owing to the limitations of the external web camera used. The 640x480 resolution was not preferable, but it served its purpose in speed and responsiveness.)
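
In more recent Processing releases both of those controls are built in; a minimal sketch with assumed values (not the prototype’s set-up) would be:

import processing.video.*;

Capture video;

void settings() {
  fullScreen();    // stretch the sketch window to the native display size
}

void setup() {
  frameRate(30);   // how often draw(), and the camera read below, is serviced
  video = new Capture(this, 640, 480);   // the external webcam still delivers 640x480
  video.start();
}

void draw() {
  if (video.available()) {
    video.read();
  }
  image(video, 0, 0, width, height);     // scale the small frame up to the screen
}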

Prototype Performance

From the outset, the physical interactions of the environment and the design (reflection of light on the water surface, the container vessel, the camera positioning…) were discussed in detail, yet the focus remained on the programming concept. For a designer of physical space, negotiating the programming content with the hardware and the sensitivity to environmental conditions required constant testing throughout the stages of development. Such practice-based research produces unexpected results and informs the process through reflective and iterative methods. The most challenging aspect deals with the interaction of elements in their most basic state. The approach of this project was to respect the properties of water and to work with water as the central vehicle of the creative concept; that now includes the unpredictability of its changing yet constant state.

Phase II

Future stages of the project entail expanding the range of tonal values / octaves in the instrument, which could include secondary mechanisms to “flip” to another octave or output of sound. Recorded feedback of either visual or auditory information could become an additional layer of the performance. A designed vessel for the water interface is to be reviewed.

Other considerations:

Spatial and cognitive approaches to virtual and digital stimuli in environments have the potential to be accessible touchpoints of communication, whether it be for healthcare, or as community space.

The initial mapping of the split screen into zones carries into another project in progress, and informs the development of a physical space responding with feedback based on distance, motion and time on a much larger scale.

Many thanks to Gary Zheng, Stephen Teifenbach Keller, and Hart Sturgeon-Reed for their help and support.
// Jay Irizawa Fluid Resonance: Digital Water

// base code started with Learning Processing by Daniel Shiffman
// http://www.learningprocessing.com
// Example 16-13: Simple motion detection
//thanks to Hart Sturgeon-Reed for the graphic icon detection

import gifAnimation.*;

PImage[] animation;

Gif loopingGif;
Gif loopingGif1;
Gif loopingGif2;
Gif loopingGif3;
Gif loopingGif4;
Gif loopingGif5;
Gif loopingGif6;
Gif loopingGif7;
GifMaker gifExport;

//minim sound library

import ddf.minim.spi.*;

import ddf.minim.signals.*;
import ddf.minim.*;
import ddf.minim.analysis.*;
import ddf.minim.ugens.*;
import ddf.minim.effects.*;

import processing.video.*;

// Variable for capture device
Capture video;
// Previous Frame
PImage prevFrame;
// How different must a pixel be to be a "motion" pixel
float threshold = 50;

float motionTotL;

float motionTotL1;
float motionTotL2;
float motionTotL3;
float motionTotR;
float motionTotR1;
float motionTotR2;
float motionTotR3;

float maxMotionL = 3000;

float maxRadiusL = 60;
float radiusL;

float maxMotionL1 = 3000;

float maxRadiusL1 = 60;
float radiusL1;

float maxMotionL2 = 3000;

float maxRadiusL2 = 60;
float radiusL2;

float maxMotionL3 = 3000;

float maxRadiusL3 = 60;
float radiusL3;

float maxMotionR = 3000;

float maxRadiusR = 60;
float radiusR;

float maxMotionR1 = 3000;

float maxRadiusR1 = 60;
float radiusR1;

float maxMotionR2 = 3000;

float maxRadiusR2 = 60;
float radiusR2;

float maxMotionR3 = 3000;

float maxRadiusR3 = 60;
float radiusR3;

float splitLine = 160;

float splitLine1 = 320;
float splitLine2 = 480;
float splitLine3 = 0;

int xsum = 0;

int ysum = 0;
int points = 0;
int xPosition = 0;
int yPosition = 0;

//Minim players accessing soundfile

Minim minim;
AudioSample player1;
AudioSample player2;
AudioSample player3;
AudioSample player4;
AudioSample player5;
AudioSample player6;
AudioSample player7;
AudioSample player8;

void setup() {

size(640, 480);
video = new Capture(this, width, height);
video.start();
// Create an empty image the same size as the video
prevFrame = createImage(video.width, video.height, RGB);

//Gif

loopingGif = new Gif(this, "DropGifditherwhite.gif");
loopingGif.loop();

loopingGif1 = new Gif(this, "DropGifBlue.gif");
loopingGif1.loop();

loopingGif2 = new Gif(this, "DropGifGreen.gif");
loopingGif2.loop();

loopingGif3 = new Gif(this, "DropGifYellow.gif");
loopingGif3.loop();

loopingGif4 = new Gif(this, "DropGifRed.gif");
loopingGif4.loop();

loopingGif5 = new Gif(this, "DropGifBlueDrk.gif");
loopingGif5.loop();

loopingGif6 = new Gif(this, "DropGifOrange.gif");
loopingGif6.loop();

loopingGif7 = new Gif(this, "DropGifPurple.gif");
loopingGif7.loop();

minim = new Minim(this);

// load a file, give the AudioPlayer buffers that are 1024 samples long
// player = minim.loadFile("found.wav");

// load each note as an AudioSample with buffers that are 2048 samples long
player1 = minim.loadSample("1th_StringEvbr.mp3", 2048);
player2 = minim.loadSample("2th_StringBvbr.mp3", 2048);
player3 = minim.loadSample("3th_StringGvbr.mp3", 2048);
player4 = minim.loadSample("4th_StringDvbr.mp3", 2048);
player5 = minim.loadSample("5th_StringAvbr.mp3", 2048);
player6 = minim.loadSample("6th_StringEvbr.mp3", 2048);
player7 = minim.loadSample("C_vbr.mp3", 2048);
player8 = minim.loadSample("D_vbr.mp3", 2048);
}

void captureEvent(Capture video) {

// Save previous frame for motion detection!!
prevFrame.copy(video, 0, 0, video.width, video.height, 0, 0, video.width, video.height); // Before we read the new frame, we always save the previous frame for comparison!
prevFrame.updatePixels();
// Read image from the camera
video.read();
}

void draw() {

loadPixels();

video.loadPixels();
prevFrame.loadPixels();

//reset motion amounts

motionTotL = 0;
motionTotL1 = 0;
motionTotL2 = 0;
motionTotL3 = 0;
motionTotR = 0;
motionTotR1 = 0;
motionTotR2 = 0;
motionTotR3 = 0;
xsum = 0;
ysum = 0;
points = 0;

// Begin loop to walk through every pixel

for (int x = 0; x < video.width; x++) {
for (int y = 0; y < video.height; y++) {
int loc = x + y*video.width; // Step 1, what is the 1D pixel location
color current = video.pixels[loc]; // Step 2, what is the current color
color previous = prevFrame.pixels[loc]; // Step 3, what is the previous color

// Step 4, compare colors (previous vs. current)

float r1 = red(current);
float g1 = green(current);
float b1 = blue(current);
float r2 = red(previous);
float g2 = green(previous);
float b2 = blue(previous);
float diff = dist(r1, g1, b1, r2, g2, b2);

// Step 5, How different are the colors?

// If the color at that pixel has changed, then there is motion at that pixel.
if (diff > threshold) {
// If motion, mark the pixel with a blue tint
pixels[loc] = color(0,50,150);

xsum+=x; //holder variable

ysum+=y; // holder variable
points++; //how many points have changed / increase since the last frame

//upper left 1
if (x < splitLine && y < height/2)
{
motionTotL++;
}
//lower left 1
else if (x < splitLine && y > height/2)
{
motionTotL1++;
}
//upper left 2
else if (x > splitLine && x < splitLine1 && y < height/2)
{
motionTotL2++;
}
//lower left 2
else if (x > splitLine && x < splitLine1 && y > height/2)
{
motionTotL3++;
}
//uppermid right 1
else if (x > splitLine1 && x < splitLine2 && y < height/2)
{
motionTotR++;
}
//lowermid right 1
else if (x > splitLine1 && x < splitLine2 && y > height/2)
{
motionTotR1++;
}
//upper right 2
else if (x > splitLine2 && y < height/2)
{
motionTotR2++;
}
//lower right 2
else if (x > splitLine2 && y > height/2)
{
motionTotR3++;
}
}

else {

// If not, display black
pixels[loc] = color(0);
}
}
}
updatePixels();

//stroke(255);

//line(splitLine,0,splitLine,height);
//line(splitLine1,0,splitLine1,height);
//line(splitLine2,0,splitLine2,height);
//line(splitLine3,240,width, 240);

///draw the gifs
if (points > 80)
{
xPosition = xsum/points;
yPosition = ysum/points;
println(xPosition);
println(yPosition);
}

//Upper left 1
if (xPosition < splitLine && yPosition < height/2)
{
radiusL = map(motionTotL, 0, maxMotionL, 0, maxRadiusL);
image(loopingGif, xPosition, yPosition, radiusL, radiusL);
//player1.rewind();
//player1.play();
player1.trigger();
}
//Lower left 1
else if (xPosition < splitLine && yPosition > height/2)
{
radiusL1 = map(motionTotL1, 0, maxMotionL1, 0, maxRadiusL1);
image(loopingGif1, xPosition, yPosition, radiusL1, radiusL1);
//player5.rewind();
//player5.play();
player5.trigger();
}
//Upper left 2
else if (xPosition > splitLine && xPosition < splitLine1 && yPosition < height/2)
{
radiusL2 = map(motionTotL2, 0, maxMotionL2, 0, maxRadiusL2);
image(loopingGif2, xPosition, yPosition, radiusL2, radiusL2);
//player2.rewind();
//player2.play();
player2.trigger();
}
//Lower left 2
else if (xPosition > splitLine && xPosition < splitLine1 && yPosition > height/2)
{
radiusL3 = map(motionTotL3, 0, maxMotionL3, 0, maxRadiusL3);
image(loopingGif3, xPosition, yPosition, radiusL3, radiusL3);
//player6.rewind();
//player6.play();
player6.trigger();
}

//RIGHT

//Uppermid right 1
else if (xPosition > splitLine1 && xPosition < splitLine2 && yPosition < height/2)
{
radiusR = map(motionTotR, 0, maxMotionR, 0, maxRadiusR);
image(loopingGif4, xPosition, yPosition, radiusR, radiusR);
//player3.rewind();
//player3.play();
player3.trigger();
}
//Upper right 2
else if (xPosition > splitLine2 && yPosition < height/2)
{
radiusR2 = map(motionTotR2, 0, maxMotionR2, 0, maxRadiusR2);
image(loopingGif5, xPosition, yPosition, radiusR2, radiusR2);
//player4.rewind();
//player4.play();
player4.trigger();
}
//Lowermid right 1
else if (xPosition > splitLine1 && xPosition < splitLine2 && yPosition > height/2)
{
radiusR1 = map(motionTotR1, 0, maxMotionR1, 0, maxRadiusR1);
image(loopingGif6, xPosition, yPosition, radiusR1, radiusR1);
//player7.rewind();
//player7.play();
player7.trigger();
}
//Lower right 2
else if (xPosition > splitLine2 && yPosition > height/2)
{
radiusR3 = map(motionTotR3, 0, maxMotionR3, 0, maxRadiusR3);
image(loopingGif7, xPosition, yPosition, radiusR3, radiusR3);
//player8.rewind();
//player8.play();
player8.trigger();
}

println("Motion L: " + motionTotL + " Motion R: " + motionTotR);
}
