Measurement and Mysticism in Early Alvin Lucier

"A thread running through a number of Alvin Lucier's early works seems to be an urge to equate musical performance with an act of scientific observation, or measurement. With sound, room acoustics, and various corollaries of sound as the declared objects of this observation, Lucier seems to put musicians and listeners in a shared encounter with ``nature'' and ``the natural world'' that combines elements of science, mysticism and universalism. What are the sources of these notions of ``nature'' and art-making, and what is the context in post-World War II America that gives rise to this interest in measuring the behavior of sound as an aesthetic? What conclusions can be drawn from the language, methodologies and idea-world that Lucier makes use of?"

-Charles Curtis


mornings sit on roofs

every morning i spend in greenpoint, i climb onto the roof and sit for about an hour. i have been doing this since i moved into the place in july. my practice is nothing fancy, mostly just counting my breaths up to 21 - if i make it that far - and starting over. sometimes i repeat specific phrases.

the neighborhood is rather industrial. largely it consists of auto-body garages, construction shops, and the water treatment plant. the first thing i noticed about the area was the beautiful swirling hiss that the plant emits. on some days, the various metalworking machines pound polyrhythms, their origins confounded by their reverberations through the concrete.

the recording was made on a pair of binaural microphones. they can very closely simulate the experience of being there listening to the sounds they record. i recorded for just over an hour straight, and i left the recording completely unedited and unprocessed. it won't stay that way, but i wanted to start by presenting the entire hour untouched. unfortunately, it was somewhat windy today and the wind-guards couldn't keep everything out.

i truly enjoy these mornings. it was a pleasure bringing you along with me this time.

acousmatic comets

today i started working on some sound design for an acousmatic piece.  i mentioned in an earlier post that i was interested in working on a few compositions with more narrative presence.  in the itemized list you'll find in that earlier post, this piece is #3a-- or as i have come to call it, "the damned grotto."  it's inspired by the gorgeous scene in "harry potter and the half-blood prince" where dumbledore fights hordes of the living dead with fireballs as they climb out of the black water in a cavern.  tonight i laid the (still buggy) foundations for the sound design in general, and specifically began designing a short (~14.5 second) sequence where a few fireballs skitter by and ignite some debris in a safely distant part of the cave.  everything needs work, nothing is final.  any criticism would be helpful at this stage.

click here to look at the code.

click here to listen to the sample.

ps- thank you so much, freesound!  and more specifically, thank you, homejrande for your beautiful field recording!

reel whirled

i'm fixing this otari mx5050 8-track reel-to-reel for a client who's a recording engineer. this thing is so pretty i thought i'd share some pictures:
mx5050_0

and here's a shot of it in my studio.  i just might have to "test" it -by recording some stuff- once it's fixed.  it's kind of like how parents justify eating their kids' halloween candy by saying it could be poisoned...

mx5050_1

mmmm poison...

hackpact day 1

for a definition of what hackpact is, or who else might be doing something like this, see the dedicated page on the TOPLAP website.  essentially this all means I'll be making and documenting one creative thing every day for the month of September.

for this first piece, i decided to go with something that could be thought of both as complete in itself and as a springboard for a few other projects.

I drove out to San Francisco with my partner of 4 years.  We made quite the road trip out of it, staying with old friends along the way.  It was such a great experience.  I love driving, and it took a particularly brutal stretch of road-- which lasted for more than 20 hours and landed us somewhere in Montana-- to get me to give up the wheel.  Something about the constant state of focus really calms me down and helps me think.  We had been living together for about 3 years, and both of us were looking forward to this summer as a way of establishing some space before she started classes at San Diego in the fall, and I moved out there with her.  Unfortunately, it doesn't look like this is going to happen anymore.

Along the way, I made quite a few field recordings.  I brought a cheap but discreet lav mic and set up my computer to record in just about every place we stopped and stayed during our trip.  The recordings are unique because they are the result of a simple SuperCollider patch.  Every 12 seconds or so (I experimented with the specifics as I went), the patch recorded one second of audio.  The resulting streams I stored as individual files in a directory tree, so that someday, when I figured out what to do with them, iterating over them would be simple.  Today is the first time in a long time that I've listened to these recordings.
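Here's a minimal sketch of the idea behind that patch (the SynthDef name, path, and exact timing here are placeholders, not the originals):

//record one second of input every ~12 seconds, one file per snippet
(
var dir = "~/recordings/trip/".standardizePath;
SynthDef(\snip, {|buf|
RecordBuf.ar(SoundIn.ar(0), buf, loop: 0, doneAction: 2);
}).store;
Routine({
var i = 0, buf = Buffer.alloc(s, s.sampleRate.asInteger, 1); //one second, mono
s.sync;
loop({
Synth(\snip, [\buf, buf]);
1.1.wait; //let the snippet finish recording
buf.write(dir ++ "snip_" ++ i ++ ".wav", "wav", "int16");
i = i + 1;
11.wait; //roughly a 12-second period overall
});
}).play;
)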

My hackpact goal for today was to come up with a simple assemblage for one of these micro-recording sessions.  While I am not totally satisfied with the assemblage as a self-contained whole, my secondary goal for today was to end up with a framework for playing with similar material in the future.  This piece uses the entire last night we spent together.  It's very personal, and I find it pretty difficult to listen to.  Hopefully you'll find it beautiful.

Here's the code:
//load all samples in directory:
(
var pipe, line, dir, index;
dir = "/Users/josephmariglio/Music/samples/pennsylvania/sf/";
//"ls -f" lists in directory order (unsorted), but it also implies -a,
//so hidden entries like "." and ".." get skipped below
pipe = Pipe.new("ls -f" + dir, "r");
line = pipe.getLine;
index = 0;
while(
{line.notNil},
{
var path;
if(line[0] != $., {
path = dir ++ line;
Buffer.read(s, path, bufnum: index); //one buffer per snippet
index = index + 1;
});
line = pipe.getLine;
});
pipe.close;
~num_bufs = index; //buffers occupy bufnums 0 .. ~num_bufs - 1
)
//a synthdef for playing them:
(
SynthDef(\samp, {|rate = 1, vol = 1, out = 16, buf = 0|
//play the buffer once through, then free the synth
Out.ar(out, PlayBuf.ar(1, buf, rate, doneAction: 2) * vol)
}).store;
)
//define patterns to arrange the sound-events
(
~a = Pbind(*[
instrument: \samp,
out: 0, //left channel: first snippet to last
buf: Pseq((0..~num_bufs - 1), 1),
dur: 1
]);
~b = Pbind(*[
instrument: \samp,
out: 1, //right channel: last snippet to first
buf: Pseq((0..~num_bufs - 1).reverse, 1),
dur: 1
]);
)
//play them together
[~a, ~b].do(_.play);

So the piece is a near palindrome, made up of a stream of samples indexed diachronically and reversed, one stream on either channel of audio.  They meet up in the middle.

Click here for the mp3.

hackpact 09/09!

For a definition and links to other participants, check out:

http://toplap.org/index.php/Hackpact

Basically, I want to use this month to make and document something small, creative and techie every day.  I imagine these things will be mostly sonic in nature, but I can't promise that I won't document at least a few days of breadmaking or some other such thing.  Since I'm not sure where the month of September will take me, I also have to keep my options open regarding the media I'll use.  I'm going to venture a guess and say there will be mostly analog circuits and SuperCollider patches, with perhaps a few Python things and the occasional loaf of bread.  I'm starting the month off by making an assemblage from the field recordings I took this summer.  Stay tuned for documentation...

a few summer projects

I spent June traveling across the continental USA, visiting friends and making field recordings.  I will cover that month's activities in a future post. 

In July, I moved into a retired auto-body shop in industrial Greenpoint.  I am helping my flatmate convert it into a recording studio.  In the meantime, I have been fiendishly networking and putting together creative projects.  Starvation and job searching have also taken up some time, along with online TEFL certification classes and reading up on the GRE.  The ideas I have for projects are as follows, in no particular order:

1) I have been playing the piano a lot more regularly.  I want to start combining my love of home-brew analogue electronics with my fake (but much lighter) electric piano, in preparation for buying an old Fender suitcase once I move out west.  I've never stopped loving to play piano, but only recently have I regained faith in it as a significant creative tool.

2) Learn MATLAB.  It's actually quite simple and well documented, since it's commercial software.  Obviously I stole it, but if I start using it a lot I might get whatever Ph.D. program I end up in to pay for a legit version.  A cursory glance at the signal processing and wavelet libraries suggests some cool and very musical applications.

3) Acousmatic "scenes".  I am interested in using sound design to create narrative, stylized experiences.  This occurred to me while watching the latest Harry Potter movie (the dialogue for which was ridiculously bad).   I found that many times, sound takes over and carries the experience for the audience when other cues fail.  While there exist conventions for how this sound-language works, they are by no means rigidly defined, and certainly not purely representational.  I often think of musique concrete and acousmatic musics as analogous to computer animated features: the same 'uncanny valley' must be avoided.  Successful CGI commonly employs a healthy dose of painterly sensibilities, and de-emphasizes the ultra-photo-realism that plagues the genre.  I have specific scenes in mind that I want to do.  Briefly:

a) The scene in the new Harry Potter movie with the zombies and underwater fireballs.

b) A dream I had about screaming, tearing my clothes off and running into the forest surrounding the village I lived in.

c) Waiting in a clock-garden.  Birds giggle and turn into distant female archetypes.

4) Build a composition using successive passes of impulses and convolution with new impulses.  I'm talking either digital band-limited impulses or analogue logic-circuit impulses.

5) Take two inputs.  Find the approximate pitch of input 1 (using autocorrelation).  Find the n strongest partials of input 2.  Granulate input 1, retuning the grains to each partial and remapping partial amplitude to probability weight.  (See the sketch below.)

6) Use geometric series for rhythms.

7) Fix and expand Cattri into a new cluster, currently missing a working title.

Items 1 and 3 are more like ends than means, while 2, 4, 5, 6, and 7 (and others not listed here) are more like means than ends.
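Item 5 is concrete enough to sketch in SuperCollider.  Everything below is hypothetical: ~in1 stands for a buffer holding input 1, the partial data for input 2 is hard-coded where a real analysis would go, and the base pitch stands in for an actual autocorrelation estimate.

//retune grains of input 1 to partials of input 2; amplitude -> probability
(
~freqs = [220, 440, 660, 880, 1100, 1320, 1540, 1760]; //stand-in partials (Hz)
~amps = [1.0, 0.8, 0.5, 0.4, 0.25, 0.2, 0.12, 0.1]; //stand-in magnitudes
~weights = ~amps.normalizeSum; //remap partial amplitude to probability weight
SynthDef(\grain, {|buf, rate = 1, dur = 0.1, amp = 0.2|
var env = EnvGen.kr(Env.sine(dur), doneAction: 2);
var sig = PlayBuf.ar(1, buf, rate, startPos: Rand(0, BufFrames.kr(buf) - 1));
Out.ar(0, (sig * env * amp).dup);
}).store;
Routine({
var base = 220; //stand-in for the autocorrelation pitch of input 1
loop({
Synth(\grain, [\buf, ~in1, \rate, ~freqs.wchoose(~weights) / base]);
0.05.wait;
});
}).play;
)

The wchoose line is where the amplitude-to-probability remapping happens.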

On top of this, I have several projects I am working on for others.  They will also be documented as photo- and recording- ops present themselves.

tdat recording available online!

you may read & listen here.

we recorded the evening of 25.5.09, after dinner and wine.  we improvised mic stands from spring clamps and pop filters from spandex.  we hung a condenser from the light fixture.  it was good fun.  

once more i'd like to thank everybody involved with this project for putting up with me as i clawed and chewed my way through this crisis-- er, thesis.  it has been wonderfully murky.

 

our ramshackle setup:

pop_filters

pre-mortem/post-mortem

The prospect of analyzing, categorizing, or evaluating something before it has reached fruition is somewhat problematic. However, for practical reasons I find it necessary to put some thoughts on "The Data and Tension" into type. "The Data and Tension" (abbrev. tdat) is a language-event for four vocalists. I find it useful to describe vocalist interaction as though it were a game. For a thorough explanation of the rules, libretti, or development of this family of compositions, I refer the reader to the project page, and to my thesis; both are available here.

My goals in constructing tdat changed as my understanding of it did. The major changes along the way involved tdat's content, production, and naming. Perhaps the most significant of these emerged when I began talking about tdat as an oral notative event.

For the first iterations of tdat, the rulesets were constructed through a labor-intensive analysis of an arbitrary text. At first these texts were hand-edited and compiled from various sources, from class notes to pro-gun blogs. The most significant words were ranked by frequency and selected to be part of the rulesets. Then, using hand-drawn matrices, I attempted to construct rulesets that seemed balanced. The result was an arbitrarily structured game whose dynamics were unpredictable. After doing around six of these, I gave up on this method and inverted the process. I wrote a simulation library in SuperCollider and encoded the structural components of the games into a character string. Then I began searching for strings representing games with specific traits. Although the first few searches were not fully automated, the results were impressive. I eventually took each step in the algorithm and wrote a routine to schedule them, searching a very wide space for the perfect candidate. When the time came to realize one of these pieces, I replaced each of the characters with a sentence, written by me. The sentences, put in the order of the characters in the string, would comprise the text. I eventually made one further inversion to the process, which was to write out the entire text in the order I wanted it, and to determine the mapping between the sentences and their functions from there.

As I worked, I became increasingly invested in the idea that my model of creative processes should support nested systems of signification. This decision also crystallized as I pored over book after book calling for the universal reform of the common practice notational system. It frustrated me that, even today, music discourses are obsessed with the Western canons' practices of visualized parameterization. An oral notation is subversive to these practices because it comprises an event that is experienced through time. This subversion is attractive because oral notation practices are extremely common, even in the Western conservatory system. When is an event a notation for another event?

Another impetus for moving toward calling tdat a notation was some of the responses I would get from individuals regarding my thesis and the genealogy of my compositions. It was suggested by numerous individuals that since "Solo for Amplified Window" came from the realization of a meta-notation of "Sanction of the Victim" (and indeed many other compositions), tdat must bear the same relationship to "Sanction" as well. These individuals wanted me to explicitly define, in the form of another notation, the mapping between the two pieces' parameters. I do not believe such a notation is necessary, nor does it contribute to the richness of the two pieces' relationship to each other.

I will probably continue my explorations into vocal/theatrical modes of production. While it is technologically deterministic to assume that computers make good performers or composers on their own, since that assumption attributes intention to them, it is not unreasonable to consider that they offer the performing or composing human very sophisticated tools. I have been questioning what the term "computer music" even refers to. In the past, computers required a great amount of expertise and money to operate, and music-making was a separate practice. Today, computers are very easily and effectively integrated with musical processes and are much more readily understood, purchased, and maintained. What, then, is a specifically computerized music?

This composition requires a level of virtuosity on the part of the performer, both in that the task itself is difficult to complete and in that the result is difficult to listen to; this virtuosic listening is also required of the experiencer. Rehearsal gave the performers the ability to make the piece more intelligible, or at least to consider intelligibility as an issue and exploit the elegance or performativity of the unintelligible. For the audience, the experience of being overwhelmed by language is attractive because it provides the opportunity for multiple paths through the arc of the piece. However, in order for this to be successful there must be an adequate recognition of the pleasure and care with which the performance is realized. This experience can be difficult to communicate effectively. To a certain extent, however, it is my intention that the experience of this piece also remain outside the realm of the understood, because I don't fully understand it either, and because the mere event of having all of these parties strive jointly for the communication of pleasure precludes the understanding of such an event. Paul Chan said it best:

There is nothing to be gained by making work that relates, except perhaps the cheap thrill of understanding. Better to let the work be as foreign and distant as a star, or an ambush.

This thesis is, in some ways, rather inappropriate to the conventions of ITP: it is academic, murky, and, genetic algorithms aside, fairly low-tech. In some other ways, however, it speaks a language that I hope resonates with the people of that program. I want its strangeness to elicit laughter at the familiar.

the remix metaphor in intermedia


Merce's Isosurface (Excerpt) from Golan Levin on Vimeo.

Above is a stunning example of data visualization by Golan Levin. It is a realization of motion-capture data of Merce Cunningham's body performing, presumably culled and edited by the OpenEnded Group. The meta-notation consists of an invitation to re-purpose the data into new digital forms, for an exhibition / performance at MIT over the next few days. More information on the event here.

I am interested in the use of the 'remix' metaphor, which could be stated either as an alternative to the metaphor I'm currently exploring -- notation -- or as a kind of subset. I am inclined to believe the latter-- that notative practices encompass remix practices.

The reason we can take a remix to be a form of re-realization is that a recorded artifact can be taken to be a transcriptive notation. This conceit has important results, and I would sum those up with the notion of a homolog / analog axis-- a 'concreteness' parameter. To say that a recording of an event is a notation of the event is to say that the notation bears a concrete relationship to its realization. To remix this recording, then, is to re-realize this notation.

The remix metaphor is something I have a bit of experience with in my own practice. As a computer musician, often the only notations I have to work with are concrete. This notative modality, as any other, comes with a unique set of semiotic assets and liabilities. The remix favors certain levels of meanings and deprecates others. Despite the fact that this tendency is universal and there is no escape, we can problematize it in the hopes of enriching the practice.

In my experience, the biggest issue that comes with the remix is that it deprecates the hierarchical aspects of the material it remixes. This is because the recording process flattens out dimensionality. Yes, one may have access to, say, the vocal multis of a recorded work, or one can isolate those parts in order to drop them into new material, but the remix metaphor requires the direct manipulation of the material comprising the remixed work. Even if remix practice is re-evaluated in terms of intermedia, as in the above example, the underlying compositional motives, overarching social context, or specific technical implementation are not sufficient conditions for a remix.

For example, there exists the distinction between the cover and the remix, where a cover implies other restrictions on form, but the source of that material becomes unfixed. If one were to supply a piece comprising entirely new material that bears formal similarity to a component of a previous piece, then one has participated in an act of quoting. All of these modes may be distinct subsets of notational practice, if that practice is re-evaluated in terms of intermedia.

Of course, Golan's remix of Merce and OpenEnded Group is still very much a remix. And still very much awesome.

tdat performance considerations

this week i have four final project presentations in a row, and "the data and tension" will comprise no fewer than three of them.  mostly, this decision was arrived at for practical reasons: i want my performers to have a few chances to practice before the actual presentation on may 8th.  it will also be nice to have the experience of trying to explain it a few times before the big date.

i have decided that in addition to my preamble where i problematize transiency and opacity in creative work with technology, just after i play the binaural recordings from the installation, i should engage two audience members in a demonstration of the tdat system.  this way, in explaining the rules we can see them applied to a smaller system first, and the audience has the opportunity to experience the dilemma i'm trying to communicate.

this new ruleset is currently being located by a massive automated genetic search algorithm i wrote in supercollider.  i'm searching for this new ruleset with a lot of improvements to the code that both automate the entire workflow and properly log each step of the process for future sonification / visualization work.

i am still uncertain if a duet version is viable: the search is currently on its 140th generation with a pool of 32, and we're still not getting reasonable results.  i may can the duet idea and go for a trio instead, but i'm going to let this run for a few more hours and we'll see if i eat my words.  i hope i do, words are tasty.

linguini photo shoot

before i distribute these guys, i thought i'd get a few shots of them all lined up.  space baffles me-- notice how, in each perspective, no two constructions can occupy the same region.  i know that idea is really basic, but it's also somewhat counterintuitive.  these are digital representations of physical objects, whereas i'm used to working with digital representations of digital objects.  the physical objects themselves are representations of a digital signal, itself a representation of a mathematical construct (wavelet analysis) applied to the representation of audio, which is a representation of a single point of entry for a computer flock in a previous iteration of my composition for LAN which i call 'sanction of the victim'.  the title refers to the fact that the auditory processes that play out in the composition represent extra-formal ideas-- in other words, it is programmatic, it "tells a story".  the story is a critique of the network, where flocks of order-seekers are assaulted by the network's exploit, a fork-bomb.

these sculptures are notated forms, in that they are transcriptions of a previous event (that particular moment in 'sanction of the victim'), but also in that they will prescribe some event-artifacts.  they will be realized, in other words.  i hope to distribute them among my co-conspirators this weekend and afterward, so their ambiguities and limitations can be exploited.  i made them 'the same', in that the instructions i followed to construct them remained constant for each piece, because

a) they are not the same (this is different from the digital case)

b) i want to emphasize the multi-centeredness of the realization space by varying the realization conditions (who, when, and how), while keeping the notation as a constant

little tongues

"linguini" are what im calling this family of objects. originally their shape comes from analyzing a very short section of recorded audio. since the analysis divides the single burst of audio into many hierarchically organized streams (some bigger/more influential than others, in a tree format), and also due to their noodle-like appearance, i'm calling them "linguini" (little languages). they've all been constructed the same way, so they look pretty similar. at first, i was going to assemble each one slightly differently, but this is the layout that i kept coming back to. even so, each one is different simply due to their physicality-- that and my incompetence with regard to fabrication. i made these in an attempt to play with the idea that a sculpture could serve as notation for something else. already, without further realizing, the linguini are transcriptions of a previous event. they are inscribed with the representation of that event's micro-worlds. since they are the result of an analysis process (the Haar wavelet transform), the details are encoded rather than directly represented. i find the form itself to fairly reek of musicality: there are 8 layers, with each successive layer (after the first two) doubling in complexity, with most complex layers approaching the look of a noisy, organic surface.

linguini_B_closeup

there is, as always, a fairly rich precedent for visualizing sound, even into sculpture. formally speaking, the linguini do not break new ground. it's their proposed function that does.

what does calling an (otherwise formally pleasing, sculptural) object a notation really accomplish? musical notation is a system that contributes to the creation of worlds pertaining to specific roles, artifacts, and events. Cage and others categorize the roles involved as "composer" /"performer" / "listener", while still others prefer categories that favor other aspects of this dynamic. the artifacts of music could be things like instruments, venues, and notations. (i even begrudgingly concede the inclusion of the grammy award as an artifact.) events may include composition, audition, practice, participation, experience, etc. semiologically, notation participates in an infinite regress of meaning, and, especially in the thoroughly fragmented worlds of contemporary thought on the subject, notation implies a relationship between events, any of which must be potentially resolvable to another notation. because of this troublingly recursive nature of the system at play, a piece like Paik's "Zen for Head" can be positioned as a realization of La Monte Young's "Composition 1960 #10", and Young's piece "Second Composition for David Tudor" a pithy realization of Cage's 4'33". in the continuum of African American creative music, this practice may be observed as well, for example in the work of restructural master Charlie Parker. notation is essentially incomplete, by which i mean it is almost always analogic to its signified, and often there is a loss of clarity at the more homologic end of the spectrum. examples of more homologic notational practices include the concrete arts, where tape-splicing or typography have the tendency to 'point to themselves'. pop musics may often allude to this practice as well.

despite its apparent ambiguity as a notation, a physical object has mass, texture, fragility, uniqueness, and an abundance of other attributes that can constrain the variety of approaches to realize it. while the set of legal realizations remains infinite (just the same as, say, the set of legal realizations of a Bach fugue), the notation's physicality both enriches and limits the variations among set members.

against my better judgement, i have attempted to realize the first linguini, "A", by amplifying it with a contact microphone and rubbing it. using that sound material as a source, i produced a sonification of the most recent TDAT system. this sonification only responds to the pauses that happen as the result of a player hitting the end of her text without further re-triggering. i actually sonify each unit with a different grain lifted from the source material, but i constrain the amplitude such that we only hear the pauses, and perhaps part of the following and preceding units. each player's stream is differentiated by a filter that emphasizes one of four resonant tones of the object, derived from a fourier analysis.
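the logic looks something like this (a sketch rather than the actual patch: ~srcBuf stands for the contact-mic recording, and the four resonances are made-up numbers):

(
~tones = [183, 412, 977, 1604]; //stand-ins for the four fourier-derived resonances
SynthDef(\unit, {|buf, start = 0, dur = 0.15, freq = 440, amp = 0|
var env = EnvGen.kr(Env.sine(dur), doneAction: 2);
var sig = Ringz.ar(PlayBuf.ar(1, buf, startPos: start), freq, 0.2); //one resonance per player
Out.ar(0, (sig * env * amp).dup);
}).store;
//amp defaults to 0, so only the pause-units get a nonzero amplitude:
~pause = {|player| Synth(\unit, [\buf, ~srcBuf, \start, 44100.rand, \freq, ~tones[player], \amp, 0.3])};
)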

i'm very excited to distribute these notations among friends to see their decisions. that's such an essential part of this idea: that a tool that was such a normalizing force, one that eventually grew up to encompass the property aspects of a piece of music, could be used in a way that confounds traditional forms of ownership, and meanwhile, lets us try to disentangle (or re-entangle) the idea from the technology.

the data and tension - 7A-D

finally, after all manner of disasters, i have 4 fully functional vocal pieces. all using the same text, this complex represents the 20th generation of a rigorous genetic algorithm whose fitness function takes an average over 32 scenarios. in a tested setting, piece 7A should be the most likely to continue indefinitely, since its average running time was 72,167.65625 turns. each generation of the algorithm spanned 32 possible pieces, and tested them 32 times to get a reasonable average for the fitness function. given the wide variation in output, however, i wouldn't mind it if future generations applied some other filtering techniques to punish inconsistency. right now, though, i feel good about these four. they not only test well, but they look good too, imho: i have come up with a rough draft of filled-in verbal 'content'. from here, i'll probably pick one of the four (most likely 7A, but you never know) and i'll spend the rest of my time tweaking it until it's just right. i also have to leave some time in to find a group and rehearse. my current plan is to have this piece comprise a large portion of my thesis talk-- a pretty risky move, considering how many other things i'm doing that i could talk about. i really don't want to address the thesis paper directly, and i think a lot of the concepts are better seen from a multi-resolutioned perspective. hey-- it's that or show documentation footage. this seems more enjoyable for my audience.

still unclear if i'll project the entire text or just the rulesets. the rulesets all happen to be nice and oblique for system 7, so if it stays that way, i may just project the rules to the audience and let the more prose-like stuff lean on the conversational side.

genetic algorithm updates

continuing the thread from the previous post...

i'm on generation lucky thirteen, and i have doubled the gene pool size to 32 rulesets.  i've also automated the testing - scoring - averaging process, which was the bit that was so time consuming and required the majority of the mindless repetitive human input.  now i have a routine taking care of that for me, although i had to fork it off the SystemClock instead of the TempoClock to avoid overloading the scheduler.  also, it still takes about half a minute to run through each game, which means about 15 minutes for each generation.  again, this isn't because the processor is slow, but because the turns actually take minuscule (but not zero) amounts of time.  i did this because i can't figure out how to model the system in any way that does not involve forked scheduled routines.  at least i'm not still sitting in front of my computer pressing buttons every few seconds like a trained monkey.  well... that's debatable.

also, i have been writing out data files representing the genomes for each generation, but i discovered today that the first 10 files had been corrupted.  i have fixed the problem, and was able to save my breeders for generation 11.

i am interested in the possibility of sonifying rulesets as computer music.  obviously i'm interested in sonifying them as vocal music and instrumental music, but today i was thinking about how i'd do it with a computer as the source.  i think the concept that hurt my head when thinking about computer sonification before was that i assumed i'd let each unit resolve to a sound.  a more attractive approach, at least to me, would be to let only significant units (ie those with transitions) resolve to audible sounds, and let everything else resolve to blocks of silence.

tdat- a genetic approach

so, the last post on this topic left off with a ruleset that i liked and a plan to start filling in those ascii characters with meaningful content. i am sorry to say that i have hit a snag. that is, i ran a few simulations of the 'tHl' ruleset and got enough unsatisfactory results that i decided to can it. i was not thrilled to do this, to be sure.

i have somewhat intentionally been allowing my ideas to stew about what to fill into those characters, since i will be performing the piece as my thesis presentation and i'm still doing some research. it no longer worries me that i do not have this content, since i'm bursting at the seams with ideas for what that content should be. what worries me now is that the structure is inadequate, and that i had prematurely settled on such a structure.

my solution has been to use a genetic algorithm to traverse the massive search space and find a ruleset with a reasonably good chance of lasting longer than a few asynchronous turns. i am not modeling sexual reproduction because the bitmasking involved makes it somewhat unwieldy. in other situations breeding would make more sense. Jenny and i just aren't ready for that just yet. har har har. seriously though, sexual reproduction is terrifying.

so these rulesets don't breed, they split and mutate. charming, no? i spend most of my time testing them, all 16 variants at a time, 32 times per generation. the variants are then evaluated for their average longevity. in my implementation, since i do plan on sonifying these systems, each player in each game is a routine forked off of the system clock. that is, each turn does take an amount of time, albeit small. i have tuned down this timing element to vary between 1/128 to 1/256 th of a second. i was able to do smaller subdivisions with smaller gene pools, but i decided to go for bigger populations. at the end of each generation, i write out a little data file, keeping track of the population as it evolves, and then remove all but the top 25%. these four are permitted to pass on their genes to 3 other children each, and themselves live on to compete in the next generation.
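the core of the loop is easy enough to sketch (here a ruleset is a 52-character string, and ~fitness stands in for the real average-longevity simulation):

(
~mutate = {|ruleset|
ruleset.copy.swap(52.rand, 52.rand) //mutation: transpose two random 'words'
};
~evolve = {|pool|
var scored = pool.collect({|r| [~fitness.(r), r]}).sort({|a, b| a[0] > b[0]});
var elite = scored.keep((pool.size / 4).asInteger).collect({|pair| pair[1]});
//the top 25% live on, and each spawns three mutated children
elite ++ elite.collect({|r| Array.fill(3, {~mutate.(r)})}).flatten
};
)

with a pool of 16, the four survivors each contribute three children, which keeps the population size constant.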

right now, i'm on my fifth generation. generations take about 20 minutes each and require intermittent input. not the best implementation, but for now it'll do. i may eventually get this running on its own. perhaps i could distribute the breeding process across the cattri. right now that seems like more trouble than it's worth, but who knows. there are so many variables at play here that perhaps the only way through is by brute force.

flight of the cattri

for those who need to be refreshed, the Cattri = my linux cluster.  currently, it's four old-ish computers i got for free, connected with a router, which communicate with each other (and anyone on the wireless network) over OpenSoundControl (UDP) and SSH.  the name comes from the Pali word for the Four Noble Truths in Buddhism, literally "the four".  each computer is named after one of the Noble Truths.  i use these computers to make music.

last semester, i composed a piece for the cattri called "Sanction of the Victim", and implemented a few proofs-of-concept.  eventually i performed it at Diapason art space for a crowd consisting mostly of my classmates.  if you'd like more in-depth documentation, feel free to dig around in my blog archives.  there have been lots of versions of this composition.  the basic idea is that there are two systems, like species of organisms, let's say, living on a single network, sharing its resources.  one organism likes to band together into herds, and those herds tend to band together into meta-herds, and so forth.  these are represented in sound as a pitch constellation, played by a virtual instrument that resembles the Gamelan, but in only the most extended sense.  it also sounds a bit like a drippy shower head.  the other organism spreads virally, using up network resources and eventually disabling the router.  it exploits minute differences in the timing across the network.  this beast sounds a bit like a furious tremolo on a rubber band orchestra.  when i performed the piece, after i infected the network with the virus, i got up and theatrically unplugged each computer from the router, performing a kind of quarantine.  if you caught the reference from the title, it is a case of "Belkin Shrugged".

in addition to relating to my thesis, this composition is significant because i will be performing a revised version of this piece at the SuperCollider symposium on April 10.  until recently, i had been focusing a lot of my efforts on the flocking algorithm that directs the pitch constellation.  there was also an issue with representing the positional data of the flocks with the pitch constellation, which i solved using a hyperbolic wave terrain (see terrains-- they're pretty!).  each of these tests i ran on my blazingly fast macbook pro.  when i first tried to implement the flocks on Dukkha ("the reality of suffering"), often the first "remote" location i test things on, it pretty much started emitting smoke.  ...ok that's an exaggeration, but it was bad!

i spent the next few days trying to make the algorithm more zippy.  i overhauled the synthesis functions and got them under control.  initially i had wanted each note to ring out for a period inversely proportional to its frequency, so lower notes would last longer.  when this was implemented, the fact that each note brought with it more cpu overhead became a serious problem.  to solve this, i ended up using a physical modeling approach where each 'note' is triggered by a similar excitation signal-- basically enveloped pink noise-- and the different notes are really the result of a ringing filter, simulating a resonant body, like a bell or drum.  this way, i raised the overhead for the whole system, but decreased the cpu spikes that would occur if many notes overlapped, since the excitation signals are very short.
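the shape of the fix, roughly (a sketch with made-up numbers, not the actual synthdefs):

(
SynthDef(\strike, {|freq = 440, amp = 0.3|
var exc = PinkNoise.ar * EnvGen.kr(Env.perc(0.001, 0.02)); //cheap, short excitation
var sig = Ringz.ar(exc, freq, 200 / freq); //the 'note' is a ringing filter; lower notes ring longer
DetectSilence.ar(sig, doneAction: 2); //free the synth once the ring dies away
Out.ar(0, (sig * amp).dup);
}).store;
)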

once the synthesis functions were more efficient, i tried to tighten up the logic that triggered them.  the problem was the flocking algorithm itself.  no matter how i sliced it, or which scheduler i used, nothing was happy.  the fact of the matter is that using my implementation and hardware, a flock of 40 automatons is simply too much to handle per cpu.  i realize that i could use a kd-tree structure or something similar to divide up the space into smaller quadrants, but that is for a later revision.  for now, the problem has been solved by distributing the pitch constellation across the cluster with a good deal of redundancy.  i let each cpu handle a constellation of 13 flocking pitch-creatures, giving me 12 pitches that are doubled.  i mapped these out to make sure they are not in reinforcing sections of the octave.  since each cpu will have representations of the other cpu's average positions and headings to flock with, i am not concerned with the computational tradeoff.

then came the problem of distributing the flocks across the network.  i had gone through a lot of testing to ensure that a model of the process would work between my macbook and Dukkha.  because much of SuperCollider is an interpreted environment, working on many computers at once can be somewhat troublesome.  it's hard to remember exactly what you have going on in each environment on the network.  it's like trying to keep track of four different conversations at once.  a lot of networked music solutions i've seen in SuperCollider, Chuck, and PD do not address this problem, since the common assumption is that each computer is manned by a different performer.  i have developed a tool to extend the supercollider environment so that it is readily distributed on my hardware.  i made no real efforts to generalize it, since my needs are my own, but in reality it would not take very much to use it on other distributed SuperCollider setups.  even with this tool, i was running into problems with addressing multiple computers simultaneously and transcribing the event into a reasonably easy to interpret notation.  i resolved this problem-- and thus arrived at code that lets my Cattri flock together-- by further extending this tool.  now, i can make transcriptions of code that gets sent to each computer, with notes on which computers received the code.  i realize now that as a performer of "SoV", my instrument is not only the Cattri itself but the tool i created to interface into it, so a static representation of the code that runs the piece in the abstract is not enough to allow for simple realizations.  for a notation to work properly, it must be something that takes into account instructions for me as well as my cybernetic extensions.  in other words, i am now a firm believer in documentation.  of course, eventually it is conceivable that i could condense this notation into instructions for my laptop to conduct the piece, without my direction at all.

so i have some documentation videos and sound recordings, but i'm afraid i'll have to wait to release them until after the 10th.

terrains

I'm still working on SoV.  Now that the flocking is fixed, I've moved on to debugging the parameter space itself.  I had been saying this whole time, since winter, that the x,y coords relate to the frequency and phase of the notes being struck, but I found out today that this is not quite the case.

If it were the case, and you mapped the phase information (whether the note should be struck or not) for every point, it might look like this:

freq_phase_space

In the picture above, colour is used to denote where in the phase the oscillation should be, given that position.  It's a wave terrain, basically.  Now for the fun part.  When I first started trying to notate (!) the system into a function system, I ended up with some chaotic-seeming behaviors.  Graphing the system revealed the bizarre truth, yielding something that looked like this:

aliasing_hyperbolas_green_small

This is not what I was expecting!  Above we see the same terrain as before but zoomed way out.  The patterns are the result of aliasing.  Even using a very high resolution, these patterns appear (in fact they get worse).  They are the result of sampling itself, and actually require a smoothing filter to reduce the artifacts, much like sound aliasing.    And, also like its sonic counterpart, I think it's gorgeous.

aliasing_green_hyperbolas_large

This one is even higher resolution.  The self-similarity is so pleasing to me.  And finally, my favorite:

aliasing_black_hyperbolas

I have actually made a few animations of these, where I alter the z-axis and you can see the whole system seem to bubble and seethe when you are really just zooming in or out.  This last one looks like it was knit or something.  I think it would rule as a sweater or blanket.
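I won't reproduce the real mapping here, but a toy analogue gives the flavor (this is a guess at the family of functions, not the actual SoV terrain): take each point's phase to be the product of its coordinates wrapped into [0, 1), so the equal-phase contours are hyperbolas, and sampling the function on a coarse grid produces exactly this kind of moiré.

//toy hyperbolic wave terrain: equal-phase contours are the curves x * y = const
(
~terrain = {|x, y| (x * y).mod(1.0)};
~grid = Array.fill2D(400, 400, {|i, j| ~terrain.(i / 10, j / 10)});
)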

Don't worry, I'm still working on SoV, but I thought this accident was worth stopping a moment and considering on its own terms.  Also, this has brought me some insight into another use of notation.  Our ears are very precise, but only in certain capacities.  While the same could be said for our eyes, these two modalities tend to appear complementary in many of their fortes and foibles.  Seeing something plotted out can not only help explain what a performer is to do, but can also facilitate certain kinds of analysis.  I had been using graphics previously to show my audience something about the music.  I do not think this is necessary for something like SoV, nor is it the only way to use graphic notation for systems music.  Nor are we stuck with max-patches.  This experience was an example of what happens when transcription is used for analytic purposes, as an aid for the composer.  I would say the same for my experiences with the much less visually pleasing display for TDAT.  TDAT's application of transcription was more symbolically oriented, however, which lends itself to still other types of analysis.

flocking study

The patch documented above uses a similar parameter space to the previous study, with each ellipse representing a static tone in a constellation, and the x,y coordinates of each ellipse determining the frequency and relative placement (phase) of the occurrence of that tone.  Here's some SC code for it: Boid.sc, boid_test_0.scd.  This particular example uses a random pitch constellation, instead of the one for Sanction of the Victim.  Boid.sc is directly based on Shiffman's Processing class, itself directly based on Reynolds' eponymous algorithm, commonly used in computer graphics.

It's kind of amazing and wonderful that different self-organizing algorithms make such wildly different outputs, even with the same (or very similar) parameter mapping.  This one reminds me of tape delay.  A nice tweak to the parameter space could be to have the left edge (x=0) be rationally related to the right edge (x=width), and let there be a more smooth fade between those two states.  That way, there would be no discontinuity of frequency, and the x axis would comprise a Risset-Shepard phenomenon (ritardando or accelerando).  I will also be playing with tactics for increasing sparseness in the sound masses, such as constraining the x-mapping to comparatively low frequencies, or making entrances and exits happen for one pitch at a time.  Also, the prospect of applying this parameter space to sampled sounds might be interesting, so instead of letting each boid represent a different pitch, let each boid represent a brief sampled 'grain' from a soundfile.  That way, the listener might have more attention to devote to the sequence of the firings.

repel study

Each ellipse corresponds to a pitch in a constellation, with regular occurrences (events of 'striking the bell') determined by position in the window (frequency vs phase). Ellipses have a mass directly proportional to their size and inversely proportional to their pitch. The ellipses are attracted to each other according to gravitational laws, and repel each other according to a 'force field', whose magnitude may vary across the arc of the video. When two ellipses repel each other, a transient sound event (grain-like) occurs, whose spectrum is determined by the ratio of the two pitches involved. When more than three ellipses repel each other at once, a squealing bowed metal sound occurs, also related to the ratios between constellation pitches.

I wrote the code in SuperCollider as part of an effort to clean up the sourcecode for other pieces based on dynamic systems, such as Sanction of the Victim's flocking algorithm.  Actually, this study uses SoV's pitch constellation and mapping of frequency and phase to x,y coordinates. SoV's debut performance at Diapason used classes based on Fredrik Olofsson's "Red Universe" class library, which I modified to get a "Boids"-esque behavior. While his class library was useful as a springboard, I found its implementation slightly troublesome. Not that his code was somehow bad or anything, but there were a few things I would do differently. Specifically, the reliance on inheritance made for confusing code, although I assume that his reasons for using that style were didactic. So my project has been to rebuild a class library specific to my needs and coding style. This gravitational forces study contributes to that effort.

Source code for the above movie available here: Class Definition, Implementation Script.

object as transcription

detail shot

full frontal

"myspace" angle

side view

top view

I've been thinking about the different functions of notation, such as transcription or transmission, and the different forms that these things can take, such as schematics or so-called 'concrete' scores (recordings), and trying to put some strain on the meanings to come up with some murkiness. After seeing the beautiful 'ur-scores' of Doug Wadle-- and initially misinterpreting them to be of mixed media rather than paint, due to the fact that I was looking at pictures-- I decided to investigate the possibility of using a 3-dimensional object as a kind of notation. Precedents include (but are not exhausted by) Cage's "Rocks Role (after Ryoanji)", the aforementioned Wadle, Xenakis' work with architecture, and the use of electronic circuits themselves-- not their schematics-- as scores by the likes of Tudor, De Marinis, etc. How can the ambiguity inherent in using an object as notation be overcome and used as a strength?

After making my portfolio recording of 'Sanction of the Victim', a composition for 2 or more networked computers, I was looking at the waveform up close and found this really cool shape that lasts ~0.09 seconds. It turns out, this artifact happens a few times in the recording, and it is the result of all the tones in a pitch constellation flock being struck at once, and the filterbank momentarily blowing up. Because of the flocking algorithm used to determine rate and alignment between the tones in the constellations, this only happens once per computer. It occurs at the moment when the system on a particular computer starts up. I grabbed the brief snippet (only 4096 samples long) and performed an analysis of it, breaking it into 12 layers using Haar wavelets. I first cut the shapes of these layers out in foamcore, discarding the top 4 layers because they were way too complex to cut out at that scale. As it turned out, the top two layers were still nearly impossible to do in foam with an xacto knife. I sent the vectors, along with some 1/2" thick medium-density fiberboard, to AMS for lasercutting. I was dismayed to find that 1/2" MDF proved too difficult for their laser to cut through without starting a fire, and they sent me home with a charred piece of synthetic wood. I came back with 1/4" thick masonite, which they lasercut without incident. The resulting layers I fixed together with woodglue so that the lowest values lined up and the piece could stand upright. All this was painstaking work, but a nicely different pace from directing vocal pieces for humans or programming computers to simulate them. Also, I do enjoy the feeling of carrying around raw materials in Manhattan. For some reason, there is a level of devotion to a composition that is felt when one's arms ache from schlepping as the result of it, which is different from the devotion one feels to a massively complex piece of software. Well... sometimes. 'Sanction of the Victim' is both a massive piece of software and four computers, so my arms and brain felt the devotion to that one. The weight and dimensionality of this first masonite form (there will be three more) certainly work for me as components inspiring devotedness.
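For the curious, the decomposition itself is easy to sketch in SuperCollider (a minimal Haar transform operating on a plain array of samples):

//split into pairwise averages (approximation) and differences (detail),
//recursing on the approximation and keeping each detail layer
(
~haar = {|sig|
var layers = List.new, approx = sig;
while({approx.size > 1}, {
var pairs = approx.clump(2);
layers.add(pairs.collect({|p| (p[0] - p[1]) * 0.5}));
approx = pairs.collect({|p| (p[0] + p[1]) * 0.5});
});
layers.add(approx); //the final one-sample approximation
layers
};
~layers = ~haar.(Array.fill(4096, {1.0.rand2})); //4096 samples -> 12 detail layers
~layers.collect({|l| l.size}).postln; //[2048, 1024, ..., 2, 1, 1]
)

Discarding the top four layers corresponds to dropping the four largest (finest-detail) arrays.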

I do not know the proper way to realize this score. I have suggestions, though. One could trace or rub the score onto paper, superimposing staves perhaps. It could be a Rainforest object. It could be a Cartridge Music object. It could be that you're a computer science enthusiast and you make a table of the wavelet coefficients (with applied scale) of the values implied by the score and resynthesize the original recording (at a lossy compression, due to the missing top 4 layers).

tdat systems testing: a bottom-up approach

The vocal piece and its corresponding instrumental realization use the following algorithm:

Four players share an identical sourcetext, divided into four sections. Each player has all four sections.

Each player has a unique ruleset consisting of four trigger words, each linked to one of the four sections of the sourcetext.

The piece starts with one player starting from anywhere in the text. The player reads aloud ten of the words in the section. Since the player may start anywhere in a given section, she should wrap back up to the top of the section when she hits the end.

When a player hears one of the other three players say one of her trigger words, she must move to the section of the sourcetext associated with that trigger word in her ruleset, beginning anywhere within the section. Players do not trigger themselves. If a trigger word is said that would move her to the section she is already on, she ignores it.

An end state is reached when all four players have stopped reading.

In the above description I refer to the smallest significant unit of text as words, but this clearly need not be the case. It is actually a much easier prospect to listen for longer events such as phrases or sentences. Scale is a huge factor. For the instrumental realization of this piece I am planning to have my performers listen for musical phrases or contours, not absolute pitch.

I wrote a small class library in SuperCollider for handling simulations of rulesets. There are two classes: a player and a game. In various implementations ( 0 1 2 3 ) I have been using them slightly differently to pull different kinds of data from them. I started by running a few lines of code that generate sourcetexts with pseudorandom numbers. I was able to collapse rulesets and sourcetexts into one axis by keeping the rulesets the same throughout. I used the ascii characters [a-zA-Z] to stand for 'words', and the ruleset matrix remains

A  B  C  D
E  F  G  H
I  J  K  L
M  N  O  P

where along the rows are players, and along the columns are sections triggered by the character. Since all other ascii letters trigger nothing, they are interchangeable. What is really significant is where each of the trigger words is placed within the text. To simulate playing the game without turns, I had each player fork off and make moves after pausing for an exponentially weighted random amount of time, spanning from 1x to 2x. I am working, then, with sourcetexts of 52 'words', divided into 4 sections with 13 'words' in each.
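The asynchronous scheduling looks something like this (a sketch: the player object and its methods stand in for the real class library linked above):

//each player forks off and moves after an exponentially weighted pause
(
~runPlayer = {|player, baseDur = 0.005|
Routine({
while({player.isActive}, {
player.speakNextUnit; //may hand trigger 'words' to the other players
(baseDur * exprand(1.0, 2.0)).wait; //pauses span 1x to 2x
});
}).play(SystemClock);
};
)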

When I was first testing this piece out with people, I would start with a sourcetext that I liked, divide it up by hand, and perform word frequency analysis to arrive at a ruleset. This was incredibly labour intensive. Also, I found that despite certain top-down constraints I would work within, such as maintaining a balanced transition matrix or placing trigger words for each player in each sourcetext, etc, the systems would sometimes misbehave. That could mean we'd get stuck in a loop or the system would end prematurely, or that players would miss out on reading a particular section of material entirely. There are so many sensitive factors that go into the initial conditions of this system that to parameterize these and come up with some linear solving strategy seemed way out of my league. And initial conditions aside, the mere fact that these games are played asynchronously provides us with a further level of inconsistency. There was no way I could be sure that my intuitions in composing the rulesets were guiding the outcomes at all! Plus, how many times can you really find four people willing to try out this exercise, even in grad school?

Using this class library, I'm able to test rulesets out at extremely high speeds, in high numbers, and generate massive datasets from the outcomes of those hundreds of games. I started by randomly generating about 50 systems and testing them for longevity, using a metric that makes sense in light of the fact that we can't measure time in synchronous turns. From this first round, I selected 13 systems whose longevity metrics were significantly higher than the rest, and tested those for longevity, also generating markovian transition matrices to determine the probability of a particular rule being carried out at any time. A more balanced matrix would suggest a more even distribution of sections to be read by players. Finally, I homed in on two systems whose longevity scores were an order of magnitude higher than the others. I ran tests on both systems to determine the average length of a game started by each player. During this process, I selected the system that will become my piece by taking the average of all the players' longevity scores: tHlcqbunKdrifgojIPEDLXwxmNGMQVJRahZBFUkWsOeAyCpTYzvS. I've been calling it 'tHl' for short. The initial condition preference weights end up thusly: [ 0.26285490520858, 0.45210917667393, 0.18168356048467, 0.10335235763283 ], so player 2 should start ~45% of the time, with player 1 starting ~26% of the time.
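Those weights get used directly when kicking off a simulated game, e.g.:

//pick the starting player from the preference weights above
~weights = [ 0.26285490520858, 0.45210917667393, 0.18168356048467, 0.10335235763283 ];
~startingPlayer = (0..3).wchoose(~weights); //index 1 ('player 2') wins ~45% of the time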

To programmatically allow for changes in density, I would like to try having certain non-trigger words resolve to silences. Because I decided this while I was running tests (a nice affordance calculating machines give us is to let us pay attention to how we're interpreting them), I was keeping track of the number of times a player reached the end of their sourcetext before getting triggered to change to a new section.

Since I am to perform this piece (possibly as the thesis presentation itself!) in May, I have decided to stop there with the analysis / synthesis of systems and get on with replacing those ascii letters with meaningful bits of stuff. However, with a more rigorously organized setup, I could see myself crunching a lot of numbers to come up with something better. With all that data, I would probably be able to come up with alternative realizations / infra-notations of all kinds. One particularly interesting realization could be to produce a statistically analogous outcome using a classical markov chain based on the transition tables I'm getting from the analyses. That would mean very different things, since in the presentation of the piece I'm also trying to frame the performers' dilemma for the audience. Because the outcomes could potentially be fairly similar, it raises some interesting points (at least to me) about the typical parameters that conventionally comprise an alternative realization.

ps I will be sonifying this system many times over-- that's why I decided to go with SC instead of python.

markovSamp.py

here's a simple markovian sample munger written in python2.5.1. should be compatible with 2.6. all bets are off for 3.0.

analyzes wave files. don't be a joker and run mp3's through it unless you're willing to accept the consequences.

it takes a really long time to generate transition tables for each sample. this is mostly because i'm new to python and my programs are not optimized. eventually i'd like to have some flag that lets you save transition tables based on samples so you can generate new soundfiles from old transition tables without having to re-analyze the soundfile. on any reasonable machine hovering around 2 ghz, a one second sample takes a few seconds (<30) to analyze. generating the soundfile may take a few seconds depending on how long a file you asked for. run the script without params to get its usage. usage is also covered in the comment block at the top of the script.

usage (assuming python aliases python2.5 / 2.6):

python markovSamp.py <path-to-infile (unix-legal paths only)> <depth (samples per unit)> <length (number of units in output file> <path-to-outfile (will write to new file if filename doesn't exist at path)>

happy munging!

ps- I used this script to generate the following stereo signal from one of the Unwashed guys' dark psy kicks.  They called it 'the monster', hence the name.

Markov Monster

tdat v3-6

Look to my previous two blog posts for details, if you're confused or would like to see some more background.

The Data and Tension (hereon abrev. 'tdat') has gone through several more iterations of sourcetexts and accompanying rulesets. I've tested almost all of these on groups of four, and a few on groups of two and three, despite that the rulesets are designed with four in mind. Using fewer players than the rulesets provide for causes problems like loops and truncations. To come up with rulesets, I used some simple software tools for frequency analysis of words in the source text, but the vast majority of the work was done by hand on graph paper.

an example of my awful handwriting
an example of my awful handwriting

The process of crunching these matrices out by hand is extremely labor intensive. However, this proved to be the best method as it gave me the most hooks into the ruleset. Slight changes in the balance of these matrices can sometimes yield very different (and occasionally unuseful) results. Designing the game is a game itself!

another hand drawn matrix
another hand drawn matrix

It had been suggested a few times that a piece of software could be designed to write these rulesets. While at first, the thought of doing so seemed ridiculous (the piece is, after all, meant to pull me out of the computer!), after doing eight or so of these (with six different sourcetexts), I have decided that there must be a better way.

Plus, drawing the matrices is the easy part, believe it or not. The hard part has consistently been testing rulesets for brokenness. It is surprisingly difficult to find two to four people who want to give you the amount of time and effort it takes to test one of these, and even when there's support, finding a suitable space to do this in is equally daunting, if not more so. There must be a better way.

So I've written a SuperCollider class library for simulating these pieces. It may be found here. It consists of two classes: a player and a game. Games contain an array of their players, and the source text. Players contain their rulesets, and information about their reading state.

The important aspect that distinguishes this game from other games is that turns are not taken synchronously. Players are instructed to read "over" one another and many times the rates at which they read are potentially variable and unique. The solution to this was to implement a "turn" method for players that encompasses all aspects of taking a turn (ie saying a word) and allow forked processes to call this method decoupled from each other. Since SuperCollider is interpreted, the classes are compiled to the interpreter while the hook into the controlling processes is interactive. Here's an example of an application script, which gets run interactively from within the SC environment.

This is going to totally invert the way I've been working. Instead of starting with the sourcetext and arriving at dynamic behaviour through rulesets, I can engineer and test the kind of dynamics I want, and fill in the words later.

As far as representing the data meaningfully, I am currently looking at sonification as a strategy. Visualizing may also work, but it's going to have to be multi-threaded. The test script is pretty unclear because each player posts to the same window. For now it's a lot easier to detect loops and other issues.

On Tuesday, I tried out version 6 (my favorite sourcetext) with a group of four players, with the sections projected rather than on printed paper. While this had the potential to be more intelligible to the audience, the lack of space kept the players from really moving around. Instead, the effect of having the text projected was to cause people to speak louder as the activity became more dense. Two points proved salient from that experience: first, the text should be in outline form, like the first few iterations of the piece, and second, to try rules that forced players to stop reading.

the data and tension (trial 2A)

Confused? Refer to this blog post for details...

I tried the 1st satisfactory iteration of the piece (source material may be found on the blog post linked above) in a setting with an audience. Granted, this particular time, the whole process was a bit rushed, but regardless the experience opened up another line of inquiry for this composition: what about the people watching? I had originally intended for the participants themselves to embody both the role of performer and audience, blithely suggesting that the experience of participating is the content of the piece. I have revised this line of thinking. How can I incorporate a passive audience into the experience of the piece, while keeping the emphasis on the process?

I began thinking about what impressions I'd like this audience to come away with. The topology and dynamics are a factor I'd like to communicate. More important than even these structural ideas, however, is the understanding that the performers are interacting with each other in a complex and challenging way. I decided that a program note written in prose may be helpful to this aim, but perhaps a more elegant solution could bring this to light. What if I notate the ruleset like a map?


This notation is much less useful to the performers, who require less information about the system as a whole, and more information about their specific task. Furthermore, the layout is too confusing if one is to react quickly using this schema. To this end, I will keep the individualized rulesets as cards with trigger word - section number pairs. Furthermore, the full source text could be projected behind the performers so the audience member could follow along, if they wanted. The performers would have the text on music stands in front of them, along with their cards. More tests will determine whether a single source text works better than 4 copies in this setting.


For version 2A, I expanded the source text by elaborating on the material, preserving the outline format. I added more material, from other class notes. Finally, I felt comfortable enough with the format that I wrote some new material specifically for the source text. I imagine further additions to the source text to be more poetic as I become even more comfortable. The outline format allows a certain poetic that balances the representational with the abstract. Since the source text was bigger, I was able to find more significant trigger words. This should make the task easier. Also, I edited the way I selected trigger words. Now, for each person, I select one of the top 4 most frequent words in each section. As a result, each section is guaranteed to activate each person. I made sure that each person has the total same number of trigger word instances in the entire text. In the next iteration, I'll take into account the sum of trigger words pointing to each section, across people. This way, I'll make an even more level playing field. To generate these rulesets, I'm using a matrix of trigger words, with the vertical axis corresponding to the different sections of text, and the horizontal corresponding to the performers. Instead of payoff, as in typical game theory matrices, I'm substituting trigger words, which are valued according to their frequency. So, to sum across a column is to determine the likelyhood of a single person getting triggered throughout the game, and to sum across the row is to determine the likelyhood of a section of text to be activated. I've been doing these by hand, and I'm totally content working this way, because I've developed a few software tools to reduce the busywork, and also I like having the ability to intervene. I may eventually generalize this algorithm to a set of constraints, which would act as a fitness function and totally automate it, but that would be more to impress my geeky friends than to actually help me compose.

source text

sections:

rulesets:

I haven't tried this one out yet, so if anyone wants to get three other friends together and give it a shot, that would totally blow my mind. I'll cook you dinner if you document it.

the data and tension (trial 1)

'the data and tension' is the working title for a composition for two or more untrained vocalists. it is a realization of the composition i wrote for two or more computers connected by a local area network: blog-entry, recording. the participants read sections from a shared body of source text related to networks (as political, aesthetic, and communicative entities). each participant has her own rule set, which relates certain 'trigger words' to various places in the text. when a participant hears another participant say one of her 'trigger words,' she must jump to that section and begin reading. in reading the section, she may choose to start at a point other than the beginning of the triggered section, so long as she goes back to the top of the section and finishes the remainder. the piece begins with one participant reading, and i have not seen this ruleset yield any ending scenarios.

the participants generally found the composition fun and engaging. while observing the unfolding process was equally fun and engaging for the composer, i imagine the true audience of this piece to be the performers, since they are the only ones who get to experience the process first-hand. that said, there is room for performative components if they happen to emerge.

the source text was compiled from notes i had taken from classes, edited together by hand. the version i used in trials 0 and 1 may be found here. the version for trials -1 and prior were broken (hence their negative status) and are not available. i will be adding to this text as time goes by.

i divided the text into four sections of equal lines: s0, s1, s2, s3. four keywords were selected for each performer, one for each section. to facilitate this i wrote a simple word frequency counter not unlike the one found here. my version can be found here. to find the most frequent words from the list my scripts generates, i used the 'head' command in unix. then i chose 'trigger word' lists by hand for each participant. i chose these lists so the average frequency of all their words would be close to equal. the lists are available here: t0, t1, t2, t3.

to realize this version, give each participant a copy of the four sections of the source text (the 's' files). also give each participant a unique 'trigger word' list (the 't' files).

we found it was best to have everybody share a single copy of the source text pages, printed and laid out in front of them. we actually used two copies, so they could sit in a circle. we found this formation improves the group dynamics.

a larger source text would be nice because it would allow me to use more meaningful 'trigger words'. since my sample was so small, the frequency drop-off between words like 'a' and 'the' to words like 'data' or 'tension' was too great to omit the less meaningful words. this also made the realizing process more difficult, as one is trained to pay less attention to words of the first category.

while i was still in the planning phase, i had considered applying a delay to the vocalizations, to make the piece more difficult to perform. it was quickly discovered that the composition needed no help being difficult, and the idea was abandoned.

after a few trials, we started talking about putting more constraints on how long the triggered reading section should be. perhaps if, on hearing a 'trigger word,' the participant could read some number of contiguous (wrapping) lines from the appropriate section, instead of the entire section. we have yet to experiment with this parameter, although it should introduce a little more space into the realizations as participants wait for their 'trigger words.'

overall, the source material seemed appropriate and did not alienate those less familiar with networking protocols. instead, the group dynamic allowed for an engaging experience and participants seemed able to synthesize the content with the act of performing, which caused a few laughing fits.

an excerpt of the composition can be found here.

(Working) Definition of Terms

In defining the terms "notation," "realization" and "experience," in some ways I am positing my own answer to John Cage's famous question: "Composing's one thing, performing's another, listening's a third.  What can they have to do with one another?" (Cage, "Experimental Music", 1955) Furthermore, my use of the first two terms has been heavily influenced by the work of Ronald Kuivila, who asked the question: "what if we begin to think of the creation of media work along the relationship of notation/realisation?" in an interview with Josephine Bosma for V2 (Kuivila, url : http://framework.v2.nl/archive/archive/node/text/all.xslt/nodenr-132496).  In many ways this project is a direct response to both questions.  I have simply mapped “notation” to “composing”, “realization” to “performing”, and added “experience” (a term in reality borrowed from Anthony Braxton's notion of the “friendly experiencer”) to parallel “listening.”  This is not intended as a simple translation of terms.  It intrigues me that within the practice of composing fit the three terms "notation," "realization" and "experience."  I believe this self-similarity may be found as well in the acts of "performing" and "listening."  Kuivila offers his own description of the relationship between "notation" and "realization":

"I mean notation in a "prescriptive" sense that sets ground rules for a complementary activity - realization - rather than in a "descriptive" sense that specifies a work fixed in every detail." (Kuivila, url : http://framework.v2.nl/archive/archive/node/text/all.xslt/nodenr-132496)

I suggest that just as notation sets the ground rules for realization, the same relationship may often be observed between realization and experience.  Kuivila offers this framework as a solution to the transiency of technology -- and thus the media arts it enables.  Meanwhile, the framework also gives the composer of media arts a richness of interrelation between compositions, resulting in a hierarchical network that may span the lifetime of the composer and --perhaps most excitingly -- may extend across many composers.  This practice may also be noted in the playfully appropriative behaviors between generations of the post-war avant-garde.  (Kuivila, "Open Sources," LMJ December 2004)  Indeed, this practice can be extended across disciplines and media as well.
Alex Galloway formally defines the term “protocol” as  “...a system of distributed management that facilitates peer-to-peer relationships between autonomous entities.” (Galloway, “Protocol”. 2004, 243) He qualifies the ramifications of this management system with statements like “Protocol is a universalism achieved through negotiation, meaning that in the future protocol can and will be different”. (ibid)  In response to the crisis of media art-- that is, the insurmountable competition between the individual and the technology industry-- notation provides a protocol for extending beyond both the individual and the technology.
Finally, I will use the term “net arts,” as per Florian Cramer's proposal to Nettime (archive url: http://www.nettime.org/Lists-Archives/nettime-l-0002/msg00138.html) to refer to “any kind of artistic work in the net, be it 'net.art', net music, net poetry/net literature, net radio or whatever else does fit”. The insistence on pluralism is not as pedantic as it might first seem.  The 1999 lawsuit against the Etoy net artist collective by Internet toy corporation eToys pushed the anti-corporate artists to realize perhaps the most expensive performance art piece in history: “Toywar”.  The goal of “Toywar” was to decrease the stock value of eToys, inc. by incorporating Etoy itself and giving the participants share options and to facilitate the communication between them.  In this way, Etoy was able to drive eToys to drop the lawsuit; soon thereafter eToys declared bankruptcy, having lost more than $4.5 billion as the result.  Lawsuit was withdrawn on January 25, 2000, and Galloway suggests this symbolic date for net arts' “second phase”:

“Like the struggle in the software industry between proprietary technologies and open, protocological ones, Internet art has struggled between an aesthetic focused on network protocols, seen in the earlier work, and an aesthetic focused on more commercial software, seen in the later work.” (Galloway, 232-3)

In addition to this shift, as if to add more subtlety to the statement, Florian Cramer's proposal to pluralize net art may be the result of even further diversification of the modes and aims of art works produced on the net, and not simply Galloway's political binary.  In contrast, Cramer likens the usage to the English pluralizing of “the arts.” (Cramer, 2000)

eee

I recently moved my portable work environment to an Asus eeepc 901 running Ubuntu (Easy-Peasy).  I actually did the install while it was still being called "Ubuntu eee," and made the remark "easy-peasy" after completing the install.  The name is that appropriate.  Not kidding.

Beyond installing and configuring it, the move to ubuntu was likewise fairly easy as well.  My homebrew OSC cluster runs ubuntu server, so I had cut my teeth using linux without a GUI.  I compiled supercollider on the little guy.   I went with gedit instead of emacs for this laptop because I was getting tired of all the wacked-out key commands.  I'm still running emacs on the servers though, because I don't really code directly in that environment.  Quickly I learned that the IDE in linux is very different than the IDE in OSX.  For one thing, GUI is handled completely differently.  This isn't too much of a problem as most of my work is GUI-free, but that rule has the notable exception of remote_lang2.3 , the interface Ron Kuivila and I wrote to simplify the process of writing distributed code, as well as to protect me from karpal-tunnel syndrome (which Ron claims is the direct result of too much emacs).  While I can neither confirm nor deny its effect on my numb fingertips and shooting forearm pains, the patch is absolutely essential for me to effectively develop networked code.  I have now altered the source code to work with gedit.  It's still as simple as I could get away with, because I'm not really interested in flashy UI tricks.  Behold.

You'll notice it's not an .rtf file anymore.  This is because both linux implementations of supercollider currently do not support anything but plain text files.  I am way into this.  To reformat all my old sc work into plain text, I wrote a script.  Actually, since sclang is the culprit for adding / managing the formatting anyway, I figured I'd let it sort it out.  So I wrote the script in sclang.  There is a certain type of programmer that might cringe at this, and there is a certain type who will find it funny.  I am the second type.  Behold.

As for my karpal-tunnel, I have started baking bread.  Kneading dough for ten minutes at a time is really good for the tips.  It's also a great thing to do while code is compiling.

Abstract

Thesis: Problems of Notation, Realization, and Experience

What happens to a realization of media arts when it is distilled to a notation that implies an experience? This relationship may be retrofit ex-post-facto, as a process of describing a realization, or the notation itself may come first as in more traditional instances, giving birth to a realized form through performance.  How can this problem be used to enrich the languages surrounding media arts, and perhaps the arts themselves?
My objectives will include a literature review, ethnographic research of practices, and active participation in the making process. The emphasis will be on actually producing notations and attempting to realize them, but a well informed perspective should also include reading what other people wrote (especially if it isn't text) and observing what people do. A suitable 'end product' would entail a brief review of written discourse surrounding the issue, a few case studies, some auto-ethnography, and most importantly, a playful series of compositions that deal with issues of notation. Currently, I am interested in this problem as it pertains to network arts, since the tools I have developed in the last few years seem well suited to this medium. I, however, do not wish to focus my gaze 'vertically,' (ie on a particular application of the problem) but would rather prefer a more broad, 'horizontal' approach that permits me to vary my modes of production as much as I see fit. This freedom is part of what will imbue the final output with the playfulness I'm hoping for. In this sense, the project is an exploration, albeit a focused one.

Formative Thoughts on Notation

A few years ago, as a sophmore at Wesleyan, I was introduced to the compositional system behind Anthony Braxton's work.  Lacking the foresight to look into the composer's extremely extensive writings on the topic, I instead blithely opted to simply dive fourth into the prospect of having to interpret his notations in performance.  Often --or at least during certain rehearsals-- this meant sight-reading.  Coming from a background in Jazz piano, I was already familiar with a form of music notation that is somewhat decoupled from the classical convention, however this asset provided me with little preparation for the kinds of wildly unexpected instructions I would receive from Braxton, both in written and verbal form, generally involving geometric principles and esoteric diagrams.  While there is most certainly a conceptual continuity through Braxton's work, the cross section of his cannon I was exposed to comprised of several large-scale notational systems which run the gamut from traditional notes-on-staves to drawings of schoolbuses.  The crisis of how to realize a drawing of a schoolbus into music, paired with the somewhat managerial problems that come with having more than four pianists in an ensemble with access to only one piano, led me to eschew the piano entirely for no-input mixing very early on.  This decision only exacerbated the issue of staying true to the composition.  While improvisation certainly plays a factor in Braxtion's musics, it was hard for me as a young composer to reduce his meticulously catalogued, gorgeously rendered graphical scores to the status of Rorschach ink blots.  Reconciling these crises was a task often determined by context; while eventually I gained a gradual understanding of how to play chords and harmony lines with mixer feedback, I also learned what Braxton believed a drawing of a schoolbus should sound like.  For my experience with Braxton, it became necessary to develop a personal relationship with the composer and his musics to adequately realize the ideas behind the notations.  Only later did I attempt to parse his Tri-Axiom, a massive corpus pertaining as much to political theory as to poetics.  Even now, I still feel somewhat perplexed when I try to interpret one of his scores.  I'm starting to think that's part of his point in making them.

Not long after my introduction to Braxton's systems, I became acquainted with the scores of John Cage and David Tudor.  I believe I realized (for class with Ron Kuivila) both Cage's "Cartridge Music" and Tudors "Rainforest." For both of these pieces, however, it was impossible to develop a relationship with the composers, as both had passed away too soon.  At first, the idea of performing a piece in this post-war (shall we say, "American Experimental?") idiom really bothered me.  I felt trapped by Cage's apparent insistence on perfectly divining a chance-derived score, and subsequently perfectly complying with it in performance.  With regards to the Tudor composition as well, I felt a sickening lack of ownership that really challenged me.  It was one thing, it seemed, to realize something with the composer right there, and quite another thing to be limited to second-hand information and some meta-score materials.  (Of course, to call this information second-hand is somewhat misleading, as Ron spent much of his time preserving the legacy both Tudor and Cage left behind, both in concept and in artifact.)  My interest in notation as a scholarly problem springs from some of the language Ron sets up in his writings about Tudor (Kuivila, Ron.  "Open Sources".  Leonardo Music Journal, Vol. 14, No. 1. (1 January 2004), pp. 17-23.) and an interview he gave regarding notation in media arts (http://www.nettime.org/Lists-Archives/nettime-l-0002/msg00126.html).

As a computer musician, I often find myself in a position where I am unable to separate the notation of a composition from its realization.  That blurriness was something I always found comforting about Musique Concrete; the "sound object" can be treated more like sculpture than theatre.  This perspective seems innocuous until the ramifications are considered: as a composer, do I produce streams of binary information?  What is the specific artifact that I am responsible for, mp3's?  Source code?  What happens when the software and hardware systems I use to produce these compositions fall prey to the ubiquitous and inevitable processes of decay and obsolescence?

[In the case of so-called aleatoric musics this problem becomes even more complex.  Cage  had a notorious distain for recordings of his work.  His idea was that, even in the case of his electronic pieces, the frame that recording imposed would overpower the synchronicity of events in a chance composition.  Xenakis, however, comments on the effect of his aleatoric processes (again, so-called) in a much more neutral way in his "Formalized Music."  It almost seems to amuse him that people read lyricism into chaotic information.  Perhaps their discrepancy in position comes from the role mechanical reproduction has along the notation-realization axis for either composer.  If a tape-head is interpreting a magnetic score, as perhaps a practitioner of Musique Concrete would believe, then that performance is very weakly related to chance, indeed.]

This crisis came at a time when I had become very interested in dropped udp packets over wifi as a compositional medium.  I realized, after several attempts at hand-coding error correction, that the lost information was far more rich than the compositions I was trying to write.  At that point, I decided to stop fighting against the medium and produce a few works based on dropped packets.  Additionally, my interest forked into the realm of meta-composition.  The following generalization results:

Play silences of various durations.  When you have made a sound, you have made a mistake.  Make mistakes.

I proceeded to realize this in terms that seemed as far removed from udp packets as I could get-- which resulted in "Solo for Amplified Window," a brief, focused exploration into the problems of realization and notation in performance, expressed through the rich acoustic properties of a simple physical system: a large glass window fit with piezo discs and festooned with loose guitar strings.  The instrument for which this piece was composed derives its complexity from the mere act of amplification; it does not rely on elaborate signal processing or sensor interfacing.  The performer is to realize a graphical score which has been algorithmically generated anew prior to each performance.   The notation prescribes the motion of a large stone ball along the surface of the glass.  The instrument has been designed to accentuate the event of a mistake-- during which the ball knocks against the frame of the window-- while the algorithm which generates the score is an attempt to serialize the possibility of failure.

sounds herd

tuesday, december ninth, I installed an 8 channel network piece expressing micro-rhythmic patterns with flocking algorithms and force simulations. in each of the four computers on the LAN, I placed a world with 40 agents, each representing a train of semi-pitched impulses, resembling steel drums. the x coordinates represent the rate of the train, and the y coordinates represent the position within the cycle (phase). these drummers each had proportional masses to their pitch, and exhibited the following behaviours:

1) move toward the average location of the nearest drummers you can see

a) visibility is determined by an angle width and a distance.

b) width is centered around the drummer's heading, the distance is proportionate to the drummer's mass.

2) steer to avoid those drummers that are too close

3) collisions are possible, forces determine outcome

each of the other 3 nodes in the LAN are represented, in terms of their average position, as supermassive drummers in the worlds. they do not react to their surroundings directly, but pose as obstacles for each flock.

for debugging purposes, i used sc's cocoa interface to animate the drummers. this video is from one of the local debug runs.

each drummer plays a single drum whose pitch remains constant, so the resulting texture is a pitch constellation whose changing parameters include timing relationships, overall density, local coherence, and speaker position.

the performance on tuesday was well received, despite the fact that the context was less than ideal.  to show a piece like this in a setting where the focus is on particulars and not on the subtle aspects of experience is tough and somewhat frustrating.  regardless of this, i feel as though my explanation was adequate for that setting, and the system worked just about flawlessly, with the only hiccup my being locked out of one of the computers and needing a reboot.

as i explained to the group, this setback actually fulfilled part of my aesthetic requirements for the piece as i disapproved  of the power paradigm implied by the piece.  if i am to be positioned as a godlike dictator over these four computers, then an act of civil disobedience is only responsible.

later in the week, on the 15th, i presented an improved incarnation of the piece which added to this self-organizing  flocking algorithm a second, more insidious one. this behaviour i programmed to spread virally and without bounds across the network, choking the router and the distributed virtual machines.  once triggered, this behaviour almost immediately results in a Denial Of Service lockout from LAN access.  the only way to kill the virus is by quarantine.

the virus is essentially a version of the network oscillator pieces i had come up with early in my experiments with networked systems.   essentially, each receptor, on being triggered with a value of 1 or -1, broadcasts to the network the opposite of that signal.  as a result, in systems larger than 2, this results in an oscillation between 1 and -1 on each node in the network.  this value is used as an excitation signal to a karplus-strong-like pluck algorithm which is sensitive to the timing between plucks.  imagine a frettless  (or alternately, slide) guitarist whose left hand moves closer to the bridge as his right hand's tremolo moves faster.  i took extra measures to make the sound extra-plinky, and to give it more dynamic range.

click here for a recording of both textures running, as each computer is faded in.

the performance on the 15th was at diapason, an art space  run by michael schumacher.  before i began i explained a bit about the process.  i used my laptop to wirelessly start the flocks on each computer, which were panned around the space's 12-channel system.  since i re-initialized the environment prior to the performance, i set up the entire composition right there in front of the audience using the interface Ron and I came up with.  This provided a nice parsing between computers, as each one started in turn.  Having informed the audience about the viral code I would deploy, I simply waited for a few minutes to allow the flocks to organize and shift around each other, and then at a moment I determined to be opportune, I infected the network, giving the audience a visual cue.  Finally, after a few minutes of letting the clogged network suffer, I ping-flooded a few of the computers on the LAN, experiencing 100% packet loss.  Then I got up and somewhat dramatically unplugged each of the cat-5 cables connecting to the router, giving a few moments between them so we could hear each infection site heal.  The flockers continued plinking away after this, and I reconnected the nodes to allow for networking between the flocks.  Then, having regained access to the router, I stopped the flocks one by one from my laptop.

I was very happy with the performativity of the system in this form, but I would like to improve on the composition by allowing the virus to inject vm's with its code receptors programmatically, and chase it with some kind of healing function or antivirus.  This incarnation of the composition could then be installed, with perhaps distributed, bus-driven led systems (anything visual but non-cgi) to portray more clearly which nodes are infected and which are not.

this was my first experience of the composition in a true multichannel context, previous versions (like the recording above) had been mixed down to 2 channels.  i found the system a lot simpler to navigate, its processes more immediate to locate, in this new context.  i look forward to the next opportunity to work in a massively multichannel arena.

i am coming up with a proposal for this  installation to  be deployed at several sites, such as the supercollider symposium at Wesleyan University in the spring, as well as at Diapason once again in the above mentioned installation form.  it would really be a treat to show Ron and the other guys what i've been up to.

karplus-WEAKNESS

click to listen

a flooded network.  each node broadcasts the opposite value of what it receives, so signals oscillate from hi to low, reaching network capacity very quickly.  the nodes measure the time between messages, using regions of network lag as the organizing principle for hitting notes of various registers.

an idiosyncrasy of the code (whose artifacts i enjoy) causes the network glitches to come at regular intervals.  in later code this was fixed.  but in this recording, i find the fragile regularity of the rhythmic organization to be just varied enough to remain interesting for extended listening.  the resulting texture is somehow reminiscent of later Feldman.

also, karplus-strong algorithm ftw.

window experiments cont’d

the meta-notation is as follows:

play silences of various durations. if you make a sound, you have made a mistake. make mistakes.

this should be familiar as i've been trying to realize this notation into a more concrete score over the course of this semester.

the rules for interpreting the score break down as follows:

roll the ball continuously through the entirety of the performance.  the figures you produce may be ellipses, figure-eights, or three-circled knots.  orientation (horizontal vs vertical) is open to interpretation, as is direction.

the score is comprised of an ordered set of events.  each event has two values: a duration and a number of loops.  produce a figure or set of figures containing the notated number of loops, over the notated length of time.

there is now a simple patch for generating score values, written in the supercollider language.  it is available here: www.joemariglio.com/window_proto/score_gen_0.rtf

if one runs the whole patch, what is returned is an array containing all the information for a single realization of the piece.  at the innermost level, there are tuples describing duration and loop number.  these are grouped into three approximate 'sections', two lasting 120 seconds and the final one 60 seconds.  in practice, the performer only need note the durations and loop numbers, so this final parsing is omitted from the ~score environment variable.

this 'divination patch' is necessary to generate scores that are statistically identical in difficulty, but unique in particulars.  it is important that each performance be unique so that practice only improves ones ability to read the notation, leaving the possibility for mistakes due to score difficulty intact.

for those not fluent in supercollider, the algorithm is as follows:

all the possible durations for events are the non-primes between 12 and 30, inclusive.  no duration is repeated and they occur in no particular order (the order is determined with each realization).

break the durations into three approximate sections of 120, 120 and 60 seconds each.

the difficulty is determined to be the ratio of loops per second.  there are 5 approximate values for this ratio: 1/2, 1/4, 1/6, 1/8, and 1/10.  these are chosen anew each event.  distribution weights depend on which of the three sections the event is in.  for the first section, values tend toward 1/10, for the middle section, values tend toward 1/6, and for the final section, values tend toward 1/2.  the curve is exponential with a factor of 3.

the number of loops in each event is determined by the difficulty ratio * the duration.  this value is rounded to the nearest integer.

running the patch just a moment ago yielded the following score:

[

[ [ 30, 4 ], [ 25, 3 ], [ 12, 2 ], [ 16, 2 ], [ 26, 3 ], [ 22, 2 ] ],   //section 'a'
[ [ 28, 7 ], [ 27, 3 ], [ 24, 12 ], [ 15, 2 ], [ 20, 3 ] ],   //section 'b'
[ [ 21, 5 ], [ 14, 2 ], [ 18, 5 ] ]   //section 'c'

]

since these arrays are pretty difficult to read, especially while rolling a ball around on a window and watching a clock, i am considering borrowing an idea from cage's 'cartridge music', wherein lengths of time are denoted as arcs of a unit circle, one for each minute of the piece.  following the composition, then, is as simple as following the second hand as it travels around the clock's face, and taking note of the regions that make up each minute.

as far as the resulting sound goes, i'm relatively content leaving things as they are.  i like the sound of the ball on glass, and i believe there will be enough bells and whistles throughout the nime concert to make this five minutes of focus and simplicity engaging.

window experiments

i started trying to work out the notation system for this performance. the idea has been that, prior to each performance, a new score would be generated algorithmically so that the likelihood of my making an error would follow some nominal curve. obviously, if the score were static i would practice it, and thus throw off the whole effect.

i began by cataloguing each curve i could make with the ball on the surface. for ideas, i played with polar coordinate functions, and tried to narrow down the family of curves that would be most appealing on the materials. i found the lissajous family of curves to be quite nice, since they cover a rectangular area. also appealing were the rose curves, although eventually i determined those to be too difficult to execute. i limited my scope to lissajous shapes with a/b = 1 (ellipse), 1/2 (figure-8, infinity sign), and 1/3. these shapes could be oriented horizontally or vertically, and traced in a clockwise or counter-clockwise fashion. i decided not to explicitly state the direction, for several reasons. i found that treating the change of direction as an event itself made more sense ultimately.

as for durations, i decided simply to come up with some set of durations which add up to the target duration (5 minutes). then, as long as all elements of the set are used, the performance will always be the same length. i eventually landed on the set of non-primes between 12 and 30, although this could very well change.

i also wanted to parameterize how fast the ball should move, as this is the best predictor of failure in this system. however, rather than using the rate of shapes to produce, i decided a looser approach would be more beautiful. while this rate is used as a constraint for the composition algorithm, i will display the instruction in terms of 'how many' rather than 'how often'.

i am currently using a metronome to keep track of seconds, and i think i'll be doing that in the performance as well. other things remain less clear. i initially imagined cards would be sufficient to help me keep track of the tasks, but now i'm starting to wonder whether some kind of graphical display would make things simpler. i dislike having the score ultimately reside in the computer. perhaps i will eventually have the card system totally worked out and have both representations. the computer version is more a conductor than a score, really, so i think it's ok as long as i get some kind of notation that has been printed out or transcribed.

i have been using the following (very simple, buggy) supercollider patch to test possible score outcomes and hone in on reasonable constraints. here's what i have:

a = (12..30).reject({|x| x.isPrime});
b = 1/(1..5);
d = 2;
~score = Routine({
[
{
{Impulse.ar(1/d)}.play
},{
i = 1;
loop({
i.post;
" : ".post;
c.post;
" : ".post;
(c*e).round.postln;
i=i+1;
wait(d);
})
},{
loop({
"__________________________".postln;
c = a.choose.postln;
i = 1;
e = b.choose.postln;
"events: ".post;
(c*e).round.postln;
"__________________________".postln;
wait(c*d);
})
}
].fork
}).play;

and here are some lissajous pictures...

a/b = 1

a/b = 1/2

a/b = 1/3

* * *

also, as i got progressively worn down by the process of composing with logical constraints, i started messing with more feedback systems that used the window-ball system as a source of richness. using one piezo as a pickup and the other as a driver, i had previously been discouraged by the incredibly screechy results. taking a cue from a discussion with hans, i decided to try implementing a pitch-shifter into the signal path, to reduce the shriek and get some harmonically related and physiologically comfortable feedback. of course, instead of using my computer to do this, i breadboarded a simple circuit.

the resulting feedback was mostly noise-ridden, but every once in a while it would converge on a tone. i used the ball like a 2-dimensional glass slide on a guitar string, a technique which worked well, especially near either piezo. i discovered some rich aural terrain by trying to locate the node points on the glass. the resulting mayhem i posted here.

despite how fun that exercise was, i definitely would like a simpler nime performance than that. i think in other contexts, a performance like that would be awesome, and i can't wait to try it with some of my other circuits and patches, but the dialogue i've set up is way too subtle for me to simply do a feedback piece as a result. mostly, i am relieved i still know how to make a circuit, considering i haven't done anything but code and think all semester.

a solution for distributed computing in SuperCollider

I developed the following paradigm for writing code on a network of arbitrary size, allowing me to stay in one authoring environment while communicating to several machines and receiving posts as though they were local. When set up with a terminal multiplexer like gnu 'screen', it allows for a scalable solution to writing distributed code clearly and effectively. To set up the system, on each compute node I run screen, and within the screen session I run emacs in sclang mode. I then add a responder that allows remote interpreting and postback, and a function that flushes the post buffer for regular updates. Then I detach from the screen session and close the ssh tunnel. From here on in, as long as nothing traumatic happens to those remote machines, I have remote interpreter access and posting using OSC packets, and total decoupling between my representation of those language processes and the processes themselves. Here's a screenshot:

source code

birdsong

click to listen

this is a recording of the cattri, my cluster, "performing" (read: generating in real time) a piece of music that has much in common with things like fractals and wavelet noise.  actually, the algorithm itself was inspired by some reading i had been doing about computer graphics, to which often i find my work relates.  the individual notes, occasionally long and sustained, often short enough to be perceived as granular, consist of band-limited noise with formants, generated by phase-modulating a bank of sine oscillators with white noise.  the paths these note-events take through frequency-space is a lattice derived from 3 intervals: 4/3, 7/4, 8/7 and their inverses: 3/4, 4/7, 7/8.  these intervals are sorted and then shuffled like a deck of cards (split in half and interleaved), twice.  there is a variable depth of recursion on the following phrases: forward through the row, backward through the row, and random.  the frequency-space lattice also reflects rhythm, with lower bands generally moving slower than higher bands from the same compute node.  each compute node is relegated to a different frequency region, from 50 to 400 by octaves.  finally, the output is summed to 2 channels and processed by a fifth computer, on which i am live-coding various effects, such as waveshaping, physical modeling, and comb-filtering.

i left the cattri (just the four nodes, not the effects) running this algorithm for about 24 hours straight. i did this because kapow, my parakeet, gets lonely when jenny and i aren't home.  he loves fractals.

future modifications of this piece will allow for dynamic communication between nodes, and not simply a divide-and-conquer strategy.  also, i would love to work out how to incorporate network errors into this system;  i'm thinking i could infect this network with a virus that moves from node to node, pelting the other nodes with spam and infecting the weakest node.

installation of this piece could be done arranging the nodes in space.  a simple visualization, projected into the site, could accompany.

solo for amplified window

i intend my nime performance to question some of the fundamental tenets of the nime conference. it might be pithily asserted that the project is neither new, an interface, nor for expressing music. the instrument is a found object (in this case a window) which is amplified with piezo discs. masses, such as a stone ball, marbles, and grains of rice are placed on the surface of the window, which is tilted manually by the performer to induce shapes like circles and figure-eights. prior to each concert, a score is generated with a computer program which notates the intended shapes, their order and durations. the reasoning behind generating a new score with each performance is to forbid mastery through memorization, and the reason for the computerized divination versus, say, bird entrails, is to parameterize the probability that the performer will make a mistake. in this case, a mistake may consist of many events. private ones, such as an error in interpreting the score, will not be noticeable. the public mistakes, such as a collision between a mass and the window frame, or one mass with another, will be amplified by the very nature of the system.

i was tempted at first to use complex machine listening algorithms to determine the event of a private mistake, or at least to better encode the public mistakes into synthesized note events. several prototypes of this patch exist, but i have decided to eschew convention and stick to the acoustic properties that charmed me in the first place. currently my plan is to produce an algorithm that composes well and with the intention of throwing me off, and to practice this task so i can give it a run for its money.

that being said, a subtle amount of processing will probably be added to the signal, to emphasize those beautiful resonant characteristics of the system. also, toward the end of the performance i may add a touch of commentary from the computer. but this will be tastefully done, if at all.

more dumb things to do with networks

I did it again. Bigger. This time, instead of making two oscillators (and actually the previous recording only had one oscillator), I made four. Each machine gets sent to a different channel in my crappy samson mixer (the one that I used for no-input mixing with the Braxton ensemble back in undergrad), where I made a desperate attempt at spatializing them with panning and equalization (don't hold your breath). Here is a recording (sorry for the codec & remote hosting: I'm trying to save on server space). I mess with the eq a bit toward the end, but that pulse-width sounding stuff that happens is totally an emergent property of the network. Sweet!

In this system, each node waits for a message containing either a 1 or a -1. On receiving this message, 2 of 4 nodes invert the signal while the other 2 do nothing to the signal. The signal (normal or inverted as the case may be) is sent directly to the soundcard as a dc offset, and the node sends its signal to all the other nodes. All of this happens 'instantaneously', which is a fancy word for 'as soon as possible'. So, what we hear is the result of all different sorts of bottlenecks in the system, as well as packet collisions or outside interference. We hear the time constant of the network itself, plus the language, plus the soundcard, plus the mixer.

//dukkha:
s.boot;
SynthDef(\NetOsc, {|dc = 0, vol=0.5| var osc; osc = Silent.ar + (dc*vol);Out.ar(0, osc!2)}).store;

~langs = ["192.168.2.5","192.168.2.6","192.168.2.7"].collect{|ip| NetAddr(ip, 57120)};

~count = OSCresponder(nil, '/count', {|t,r,m,a|
var dc;
dc = m[1].neg;
s.sendMsg(15, 1000, 'dc', dc);
~langs.do{|addr, index| addr.sendMsg('/count', dc);};
dc.postln;
}).add;

s.sendMsg(9, 'NetOsc', 1000, 0, 0);

//samudaya:
s.boot;

SynthDef(\NetOsc, {|dc = 0, vol=0.5| var osc; osc = Silent.ar + (dc*vol);Out.ar(0, osc!2)}).store;

~langs = ["192.168.2.5","192.168.2.6","192.168.2.7"].collect{|ip| NetAddr(ip, 57120)};

~count = OSCresponder(nil, '/count', {|t,r,m,a|
var dc;
dc = m[1];
s.sendMsg(15, 1000, 'dc', dc);
~langs.do{|addr, index| addr.sendMsg('/count', dc);};
dc.postln;
}).add;

s.sendMsg(9, 'NetOsc', 1000, 0, 0);

//nirodha:
s.boot;

SynthDef(\NetOsc, {|dc = 0, vol=0.5| var osc; osc = Silent.ar + (dc*vol);Out.ar(0, osc!2)}).store;

~langs = ["192.168.2.5","192.168.2.6","192.168.2.7"].collect{|ip| NetAddr(ip, 57120)};

~count = OSCresponder(nil, '/count', {|t,r,m,a|
var dc;
dc = m[1];
s.sendMsg(15, 1000, 'dc', dc);
~langs.do{|addr, index| addr.sendMsg('/count', dc);};
dc.postln;
}).add;

s.sendMsg(9, 'NetOsc', 1000, 0, 0);

//marga:
s.boot;

SynthDef(\NetOsc, {|dc = 0, vol=0.5| var osc; osc = Silent.ar + (dc*vol);Out.ar(0, osc!2)}).store;

~langs = ["192.168.2.5","192.168.2.6","192.168.2.7"].collect{|ip| NetAddr(ip, 57120)};

~count = OSCresponder(nil, '/count', {|t,r,m,a|
var dc;
dc = m[1];
s.sendMsg(15, 1000, 'dc', dc);
~langs.do{|addr, index| addr.sendMsg('/count', dc);};
dc.postln;
}).add;

s.sendMsg(9, 'NetOsc', 1000, 0, 0);

//sunnata:
NetAddr("dukkha", 57120).sendMsg('/count', 1);

for those of you nerdy enough to withstand the source code, your reward is the following ascii graph:


where dukkha and marga invert the signal before passing it along, and samudaya and nirodha simply echo the same signal back.  if the network were arranged as a loop, instead of being fully distributed, each node communicating with the next instead of broadcasting to everyone, it gets trickier to make the system oscillate.  for this behavior to emerge, we must have an odd number of inverting nodes.  however, the distributed topology of this network allows us to have an arbitrary number of inverters.  next, i will try out different topologies and mixes of individual node behaviours, and also different sound mappings...

welcome alterations

Tried conversations (engineers and artists). Found it didn't work. At the last minute, our profound differences (different attitudes toward time?) threatened performance. What changed matters, made conversation possible, produced cooperation, reinstated one's desire for instance an utterly wireless technology. Just as Fuller domes (dome within dome, translucent, plants between) will give impression of living in no home at all technology must move toward the way things were before man began changing them: identification with people in the one we tell the round, each individual free to pay for them. Art and TV are no longer necessary. Tried conversations (engineers and artists). Found it didn't work. At the last minute, our profound differences (different attitudes toward time?) threatened performance. What changed matters, made conversation possible, produced cooperation, reinstated one's desire for continuity etc, were /things/, dumb inanimate things (once in our hands they would not otherwise have had . . . Sounds everywhere. Our concerts celebrate the fact concerts are no longer necessary. Tried conversations (engineers and TV are no longer two different things. They're equally tedious . . . TV's vibrating field's shaken out arts to pieces. No one would think of keeping a chord to himself. You'd welcome alterations of it. Sub-routines are no home at all technology must move toward the way things were /things/, dumb inanimate things (once in no home at all technology must be constantly changing. Bewildering and productive of joy. Are we hear when we're not No; it's Yes. (A computer that turns us into artists.) What'll art is not separate from the purpose of operation, complete mystery. Introduce disorder. Sounds passing through circumstances. Invade areas where nothing's definite (areas -- micro and facilities together in a way that welcomes the stranger and discovery and takes advantage of the several energies had they not been brought together . . TV's vibrating field's shaken out arts to pieces. No use to pick them up. Get with it . . . Art's socialized. It isn't someone saying something, but outside of them in process of whom it was written for or where it might have appeared, even though the photocopy is not separate from the purpose of synergy, an energy greater than the sum of the several energies had they not been brought together . . TV's vibrating field's shaken out arts to pieces. No use to pick them up. Get with it was written for or electronic. It'll sound like what we know in. It won't sound like music - serial or where it with people and their energies and the world's material resources, energies and facilities together in a telephone, he locates materials, services, raises money to pay for computer art? The answer's not No; it's Yes. (A computer that turns us into its own: life. Life includes technology. The answer's not No; it's Yes. (A computer art? The answer's not No; it's Yes. (A computer that welcomes the stranger and discovery and takes advantage of synergy, an energy greater than the sum of the several energies had . . . Sounds everywhere. Our concerts celebrate the fact concerts are no longer necessary. Tried conversations (engineers and artists). Found it didn't work. At the last minute, our profound differences (different attitudes toward time?) threatened performance. 
What changed matters, made conversation possible, produced cooperation, reinstated one's desire for continuity etc, were /things/, dumb inanimate things (once in our hands they would not otherwise have had . . . Art's socialized. It isn't someone saying something, but people doing things, giving everyone (including those involved) the opportunity to pieces. No use to pick them in the round, each individual free to lend his attention wherever he will. Meeting house. Composer, who no longer necessary. Tried conversations (engineers and artists). Found it didn't work. At the last minute, our profound differences (different attitudes toward time?) threatened performance. What changed matters, made conversation possible, produced cooperation, reinstated one's desire for continuity etc, were /things/, dumb inanimate things (once in no home at all (outdoors), so all technology must move toward the way things to do. We need for or where it might have appeared, even though the photocopy is marked "Copyright (c) 1969 by John Cage."] (They) bring people together (world, people), people and facilities together in a way that welcomes the stranger and discovery and TV are no longer two different things. They're equally tedious . . . TV's vibrating field's shaken out arts to pieces. No use to pick them up. Get with it . TV's vibrating field's shaken out arts to pieces. No use to pick them in the world where our central nervous system (electronics) effectively now is. Everything happens at once (a different music). Art's in process of technology. Do not been brought together in a way things were before man began changing them: identification with nature in the world where our central nervous system (electronics) effectively now is. Everything happens at once (a different music). Art's in process of coming into its own: life. Life includes technology. The purpose of technology. Do not imagine there aren't many things to do. We need for instance an utterly wireless technology. Just as Fuller domes (dome within dome, translucent, plants between) will give impression of living in no home at all (outdoors), so all technology must move toward the sum of the several energies had they not been brought together . . . not just inside our heads, but outside the law, we tell the stranger and discovery and takes advantage of joy. Are we an enterprise. Using a telephone, he will. Meeting house. Composer, who no longer necessary. Tried conversations (engineers and artists). Found it didn't work. At the last minute, our profound differences (different attitudes toward time?) threatened performance. What changed matters, made conversation possible, produced cooperation, reinstated one's desire for continuity etc, were /things/, dumb inanimate things (once in no home at all (outdoors), so all technology must move toward the way things were /things/, dumb inanimate things (once in our heads, but outside of them in the world where our central nervous system (electronics) effectively now is. Everything happens at once (a different music). Art's in process of coming into artists.) What'll art is not separate from the purpose of technology. Do not just one man. Art's (Technology's) self-(world)-alteration. [The following text was found among Cage's papers. He has no recollection of keeping a telephone, he locates materials, services, raises money to do. We need for instance an utterly wireless technology. 
Just as Fuller domes (dome within dome, translucent, plants between) will give impression of living in no home at all (outdoors), so all technology must move toward the way that welcomes the one we know in. It won't sound like music - serial or electronic. It'll sound like chords. No one we know in. It won't sound like music - serial or electronic. It'll sound like music made by man himself: not just one else does. Economy. (We do not believe in "human nature.") We are no longer necessary. Tried conversations (engineers and artists). Found it didn't work. At the last minute, our profound differences (different attitudes toward time?) threatened performance. What changed matters, made conversation possible, produced cooperation, reinstated one's desire for continuity etc, were /things/, dumb inanimate things (once in our profound differences (different attitudes toward time?) threatened performance. What changed matters, made by man himself: not just one man. Art's (Technology's) self-(world)-alteration. [The following text was found among Cage's papers. He has no recollection of whom it was written for or where it to anyone who no longer arranges sounds in a piece, simply facilitates an enterprise. Using a telephone, he locates materials, services, raises money to pay for them. Art and TV are no longer two different things. They're equally tedious . . .
TV's vibrating field's shaken out arts to pieces. No one would think of it. Sub-routines are altered by John Cage."] (They) bring people together (world, people), people and artists). Found it didn't work. At the last minute, our profound differences (different attitudes toward the way things were before man began changing them: identification with people in the round, each individual free to pieces. No use to pick them up. Get with it . . . TV's vibrating field's shaken out arts to pieces. No use to be. But to accomplish this our technological means must move toward the way that welcomes the stranger and discovery and takes advantage of them in our heads, but outside of them in a way that welcomes the world where it might have experiences they would think of keeping a chord to himself. You'd welcome alterations of it. Sub-routines are altered by John Cage."] (They) bring people and their energies and the world's material resources, energies and facilities together in a way that turns us into artists.) What'll art become? A family reunion? If so, let's have had . . . Sounds everywhere. Our concerts celebrate the fact concerts are criminals. There, outside the law, we tell the truth. For this reason, we exploit technology. Circumstances determine our actions. Computers're bringing about a situation that's like the invention of harmony. Sub-routines are like chords. No one would think of joy. Are we an enterprise. Using a telephone, he locates materials, services, raises money to pay for them. Art and TV are no longer two different things. They're equally tedious . . . TV's vibrating field's shaken out arts to himself. You'd give impression of living in no home at all (outdoors), so all technology must be constantly changing. Bewildering and takes advantage of synergy, an energy greater than the fact concerts are no recollection of whom it was written for instance an utterly wireless technology. The purpose of art is not separate from the fact concerts are criminals. There, outside the law, we happen to be. But to accomplish this our technological means must be constantly changing. Bewildering and productive of joy. Are we an audience for computer art? The answer's not No; it's Yes. (A computer art? The answer's not imagine there aren't many things to be. But to have experiences they would not otherwise have had . TV's vibrating field's shaken out arts to pieces. No one would think of keeping a chord to himself. You'd give it to anyone who wanted it. Sub-routines are altered by a single punch. We're getting music made by a single punch. We're getting music - serial or electronic. It'll sound like music - serial or electronic. It'll sound like what we hear when we're not hearing music, just hearing whatever we happen to be. But to accomplish this our technological means must be constantly changing. Bewildering and productive of them in the world where our central nervous system (electronics) effectively now is. Everything happens at once (a different music). Art's in process of coming into its own: life. Life includes technology. The answer's not No; it's Yes. (A computer that turns us into artists.) What'll art become? A family reunion? If so, let's have it with people in the round, each individual free to accomplish this our technological means must be constantly changing. Bewildering and productive of whom it was found among Cage's papers. He has no longer arranges sounds in a piece, simply facilitates an enterprise. 
Using a telephone, he locates materials, services, raises money to pay for computer art? The answer's not No; it's Yes. (A computer that turns us into artists.) What'll art become? A family reunion? If so, let's have it with people in the round, each individual free to lend his attention wherever he will. Meeting house. Composer, who no longer arranges sounds in a piece, simply facilitates an enterprise. Using a telephone, he locates materials, services, raises money to pay for them. Art and TV are altered by a single punch. We're getting music made conversation possible, produced cooperation, reinstated one's desire for continuity etc, were /things/, dumb inanimate things (once in our profound differences (different attitudes toward time?) threatened performance. What changed matters, made by man began changing them: identification with it . . Art's in process of coming into its own:

remote language

Ron Kuivila and I have been working on some SC3 code that would allow for remote interpretation of the language across a network.  Of course, there already exist several options for the SC3 user that provide various advantages (Client/LocalClient, BroadcastServer, oscGroups, PB_UP, etc), but these were more complicated than my needs warranted, and provided features that I simply had no use for.  The result is very pared-down and simple to use.  On the local side, there is a window for typing SC3 code, with a header consisting of a smaller text box for the target IP.

Entire blocks of SC3 code can be placed into the window and interpreted remotely (this also works on loopback) by pressing enter.  The overall behavior is a lot like the SC3 IDE, although somewhat limited in terms of key-commands.  However, this allows me to simply write my code in the SC3 system and remain there in order to execute it remotely.  On the remote side, all that needs to run is sclang and an OSCresponder that listens for calls from the local machine.
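A minimal sketch of the idea (not the actual remote_lang code; the '/remote_lang' path and the IP are just illustrative, and sclang listens on port 57120 by default):

// Remote side: hand any incoming code string to the interpreter.
OSCresponder(nil, '/remote_lang', { |time, resp, msg|
    msg[1].asString.interpret;
}).add;

// Local side: send a block of code to a remote node for evaluation.
NetAddr("192.168.1.12", 57120).sendMsg('/remote_lang', "{ SinOsc.ar(440, 0, 0.1) }.play;");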

There are other advantages to this approach, namely that each instance of the editor can be used to communicate with a different node on a LAN.  As a result, composing for 4 networked machines has become rather simple, no longer the logistical nightmare it once was.

Despite my naming the paradigm "remote_lang2.0", it is still in the development stage.  More functionality must be added before I can say that stage has been surpassed.  For now, I have to keep an ssh tunnel open into each of the remote machines in order to view post-window data.  This is in the process of being improved.  I'd also like a command-log for each window, so I can save a document of legal SC3 code as a transcript of the commands sent to each server.  Furthermore, there are some environment access issues that should be solved before I'll be totally happy with it.  But this is a start, and it allows me to actually get back to composing instead of trudging along in emacs.  For the curious, you can download the code here.

dumb things to do with networks

(click here for sound) 

one particularly dumb thing you can do with a network is arrange two nodes to bounce messages back and forth as fast as they possibly can, like a game of ping pong. while this activity is fun in and of itself (at least perhaps for some), the more interesting things occur when we tie the act of sending and receiving to something we can experience directly. the first thing i tried was to have either node count the number of times it caught a message and threw it back. this was interesting for about three seconds, which is just about how long it took for a packet collision to stop the whole process anyway. then i had a particularly stupid idea: why not use a network as an oscillator? here's what happened:

so most simple square-wave oscillators work by outputting a 'hi' (in our case, a '1'), feeding that back into the input, inverting it to a 'low' (-1), and starting the process over again. using various methods of 'slowing down' this feedback process, we get differences in what we perceive as pitch (or rate, if it's lower than ~20hz). unfortunately, i decided this would be fun to accomplish over a udp network.

in the above tastefully coloured ascii diagram, you will notice a main feedback loop between two nodes whose names are in Pali: Dukkha, translating to 'suffering', and Sunnata, or 'emptiness'. Don't worry about my mental state-- Dukkha is the first of the Four Noble Truths of Buddhism (aka the Cattri), and the concept of 'emptiness' in this context is actually much more complex than it is angsty. in addition to these names are the assignment operations taking place on the variable 'i': i=-1*i and i=i, respectively. this is an abstraction of the process that takes place inside a schmitt trigger or some similar square-wave oscillator. in this implementation, however, the rate at which the system feeds back is directly related to the conditions of the network, which is being utterly flooded with messages at a rate limited only by the language sending those messages, in my case SuperCollider. since, as i said before, packet collisions generally stop this game within a few seconds, every once in a while one of the nodes sends out a message that isn't contingent on the other node, just to keep things rolling.
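here's a sketch of one node of that game in supercollider (the address and the '/ping' path are made up; the other node runs the same thing with invert set to false):

(
var target = NetAddr("192.168.1.13", 57120); // the other node
var invert = true;                           // dukkha: i = -1 * i; sunnata: i = i
OSCresponder(nil, '/ping', { |time, resp, msg|
    var i = msg[1];
    if(invert) { i = -1 * i };
    target.sendMsg('/ping', i);              // throw it straight back
}).add;
target.sendMsg('/ping', 1);                  // serve the first volley
)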

metacompositional strategies emerge after petulant frenzy

i had to have my little existential crisis ('little' here being a relative term: it lasted a few weeks), but my strategy now feels a bit more informed in terms of the use of technology. if i can locate my relationship with the medium i choose to realize a concept in, then the issues surrounding that realization become all the more clear. it also saves me from making an artifact whose longevity is contingent upon preservation, like a fruitcake desiccated with freezer burn, or garish html from 1998.

that being said, while the 'crisis' component of this problem has been reduced, the problem itself remains. the key difference here is that i feel able to concentrate on things other than solving it, and my life doesn't feel totally meaningless despite the fact that i can't provide a succinct solution at the moment. this is largely an emotional development, not a theoretical one.

several solutions present themselves: text scores, flowcharts and transition matrices are at the top of the list.  in addition to those forms of notation, i can include software (along with its source), circuits (with schematics), and physical systems (CAD markups? text scores?).  i believe that more than anything, a broad spectrum of approaches should be tried and bred.  with any luck, i hope to find a hybrid system or two that are less conventional and ideally more elegant than any of the approaches listed above.  that's an admittedly bold statement: the event-score is about as elegant as notation gets, in my book.  but it has its limitations, just like any other system.  a hybrid system i'd be satisfied with would neither dismiss the tools as transient nor attempt to be a universal connector, yet it could stand abstracted from the tools that produced it and still make sense.

one never sees new colours.  cartographers of past centuries decorated the edges of their maps with monsters made of pieces of animals.  i will begin by looking into systems of notation that have come before, attempting to use them in practice, and researching their cultural interfaces.  meanwhile, i will be synthesizing these animal body parts into notational chimeras and minotaurs, careful that their playful forms match the statements made with them.

i don't know if i will end up finding anything that works for me.  i do know that any work i do for this master's will be just incomplete enough to continue playing with for that next degree, provided i'm not utterly sick of the topic.  so, perhaps the above may also be read as a statement of purpose.

clients, distributed

I have gotten sclang, the client component of the 'supercollider' environment, up and running across a LAN.

Let's start with why this is significant...

Supercollider is a client-server application, communicating over either UDP or TCP. Thus, it can be readily implemented across a network where the client resides "locally," sending messages (I prefer UDP so I can handle my own redundancy) to a distributed cluster of servers, much like a single keyboard controlling many tone generators via MIDI. In the distributed-server model, the single client acts as a hub for the server nodes to communicate through. The connections are essentially one-way, in the sense that the kind of data the server can send back is limited. We have audio, of course, but the dataflow from the server app is limited to error messages, trigger messages (say, from inside a synthdef), and various state-dumps, where the server status is queried. While this is useful for certain applications (mostly debugging), eventually it gets old. Since the remote machines run only the server, I cannot write OSC responders or evaluate sc code remotely. This has changed.
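For concreteness, here is what the one-way, server-message style looks like (a sketch; the address is made up, and it assumes a remote scsynth on port 57110 with the 'default' synthdef loaded):

n = NetAddr("192.168.1.11", 57110);
n.sendMsg("/s_new", "default", 1000, 0, 0); // spawn synth node 1000 at the head of the root group
n.sendMsg("/n_set", 1000, "freq", 330);     // poke a control by node id
n.sendMsg("/n_free", 1000);                 // free it again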

Now, I am able to distribute the client app in addition to the server, allowing truly bi-directional dataflow and much more complex paradigms for interaction. Say, for example, I wanted to distribute a control structure for several people to edit together. If all we have is one-way communication with the server nodes, I am limited to directly sending OSC messages to query the state of the server(s) and sending more packets to change the synth-nodes based on numeric ids. I have no access to, say, the underlying thread that generates those synth-nodes in the first place. With a distributed client app, it is elementary to set up a new process in place of an old one, and the synthesis for this process could be occurring anywhere else in the cluster. It is all up to the constraints of the system.

The only thing I really had to do to make this happen was set up the appropriate conditions for sclang to run remotely. This involved getting emacs to befriend sclang and setting up the embarrassingly simple file structure sclang needs in order to work. The CCRMA tutorial by Fernando Lopez-Lezcano, "How to make SuperCollider go 'beep'", was extremely helpful for this. It assumes one has installed supercollider, jack and emacs, which each of my Cattri boxes already had. Actually, the setup was so minor that I felt kind of stupid for not having done it before, instead opting to change my entire coding style to server-messaging so I could keep my client local.

Now I can set up the Cattri so that sclang code can be evaluated remotely from my local machine, thus reducing the amount of time spent in ssh tunnels using emacs. That being said, I kind of like emacs. I just miss my backspace sometimes.

mistake event - window prototypes

i have been playing with different scenarios that might work as realizations of the following score, mentioned in a previous entry:

play silences of various durations. if you make a sound, you have made a mistake. make mistakes.

i imagine that in order to properly realize this score, the performer must be doing some action that keeps some event from happening.  this event should have audible consequences.  i'm unsure whether it's necessary that the potential for such an event to occur is predicated on the presence of the performer, or whether the performer is merely preventing a typically audible event from taking place.  i could go either way on this, but personally i like the idea that the performer must bring with herself the condition of possible failure, because otherwise the performer is positioned as just a silencer, a guardian against some arbitrary event.  i want the mistake to be more akin to personal failure, and for that to happen the event must depend on the performer in some way for its fruition.

another point of ambiguity in the score lies in the interpretation of how sounding relates to the mistake.  there are two possibilities: one in which the mistake generates the sounding and one in which the sounding generates the mistake.

the process of trying to realize this score feels, interestingly, a bit like trying to solve a riddle.  things can get pretty abstract, so i like to stick to materials that are immediately accessible and appealing to me.  my previous demo for a possible realization used a marble in an amplified cylindrical can.  the task was to keep the marble rolling around the can without hitting the edges or causing the marble to bounce.  on the mistake event, the computer read a selected word from a Cage essay on technology to which a first-order markov chain had been applied.  the amplified sound of the marble rolling along a metal surface was also passed through the speakers while the performance occurred, so the 'silences' notated above were taken to mean something more figurative than merely the absence of sound.
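for the curious, a sketch of that kind of first-order chain in supercollider (not the code used in the demo; the source string is a placeholder):

(
var words = "put the words of the source essay here".split($ );
var table = Dictionary.new;
var current = words.choose, out = List[current];
// map each word to the list of words that follow it
words.doAdjacentPairs { |a, b|
    table[a] = table[a] ?? { List.new };
    table[a].add(b);
};
// walk the table, falling back to a random word at dead ends
20.do {
    var next = table[current];
    current = if(next.notNil) { next.choose } { words.choose };
    out.add(current);
};
out.join(" ").postln;
)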

for a prototype, i had been considering creating some sort of large metal funnel that would allow me to gracefully roll a ball along its interior, where the slope would be just shallow enough to allow for several orbits before the ball sank down the shaft.  after considering my options for fabrication, i found that this would be not only terribly expensive but also somewhat arbitrary with respect to the point of the composition i am realizing.  meanwhile, i had come across a large number of old window panes.  i found that the rectangularity of the window pane made it much more likely for my ball to collide with something as i maneuvered it in circular paths on the glass.  also, the sound of the glass-marble system itself, when highly amplified, is truly delightful.

this configuration gives us a few solutions for realization.  the performer's task could be to roll the ball along the surface of the window without hitting an edge and causing a spike in amplitude.  this would be analogous to the first prototype with the can, only larger, and thus somewhat harder to perform and more interesting to listen to.  another possibility could be to track the position of the ball based on the relative amplitudes from two piezos.  i'm not sure yet whether this will work.  in both versions, a secondary graphical notation could be used to guide the actions of the performer.  in the version where position is tracked, an 'error' could be defined as a significant deviation from this graphical score, perhaps in addition to knocking against the edge.
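here's a sketch of the piezo-tracking idea (untested; the threshold, smoothing and reply path are all guesses):

(
{
    var ampL = Amplitude.kr(SoundIn.ar(0)); // piezo on one side of the pane
    var ampR = Amplitude.kr(SoundIn.ar(1)); // piezo on the other
    var pos = (ampR - ampL) / (ampL + ampR + 0.001); // crude -1..1 position estimate
    var hit = Trig.kr((ampL + ampR) > 0.5, 0.25);    // an amplitude spike reads as a mistake event
    SendReply.kr(hit, '/mistake', pos);              // an OSCresponder picks this up
    Silent.ar(1);
}.play;
)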

mistake_event_schematic
this is a possibility for realization i am investigating.  i think the window-speaker idea might be somewhat dangerous.  i actually hadn't been thinking about the danger of working with windows until i described the idea to Rob Moon, who simply advised: "be careful!"

window

there's a picture of the window standing on its own.  i have attached contact mics to either side of the object.

ball

that's the ball.  it's heavy and smooth.  it has a stand.

so at this point i am also considering the possibility of more overt 'musical' statements.  it is still important to me that those statements come from the event of a mistake in procedure.  it occurs to me that if i am to design a composition around the event of a mistake, then my goal should be not only to devise a system in which mistakes are probable and interesting, but also to compose responses to those mistakes that enrich the experience.  the first and most obvious mistake-event response was to use the score itself, on some level of linguistic expansion, perhaps with a small amount of commentary, as fodder for each event.  i would like it very much if the target performance space gave us the opportunity to distribute program notes, because then my spoken score would consist of those notes.  this would provide some kind of time structure that carried with it a sense of both linear progression and unfolding.  on further reflection, i think i can make more definitive statements as the piece progresses.  perhaps this unfolding also results in the 'embalming' of the system's silent state (ie ball contact noise) into progressively more conspicuous resyntheses.  this allows not simply the content but also some formal aspects to progress through the piece, as note events are introduced and gradually become more present.

this is the current state of affairs.  there will be more updates later.

events as entities

from an old dream journal, undated entry:

"I do not trust human history.  Child mind regards events as entities, separate from their historic relationships, existing in their own terms."

my dreams were weird last night.  actually, i had none.  then i woke up with jenny's alarm and fell back into a light dreamy state for maybe an hour.  in that hour i had incredibly vivid experiences.  i woke up disoriented.  in addition to my practice, which now encompasses my daily life in ways it could not while i was bound to the idea of rarefied space, i have been phasing in the daily logging of my dream life - something i haven't done regularly in years.  already, i like what it has done for me.  from today:

"In NIME.  I am doing my presentation for a prototype.  It has somewhat changed so that now all it does is cut large slices of swiss cheese and dispense it to people with a glass of ginger ale.  Mike says he's disappointed that I had made such a change in my concept.  I tell him that I disagree that such a change has occurred at all:

'It is either about the role of composer, and the actions and practices that encompass composition, or it is about cutting cheese.'"

indeed.

two new compositions

1.) play silences of various durations. if you make a sound, you have made a mistake. make mistakes.

this by no means needs a computer for its realization. i can imagine ways of implementing it with musicians playing 'traditional' instruments, or, better still, with non-musicians trying to be as silent as possible while performing an activity that is very difficult to do silently. personally, i picture carrying large amplified pieces of sheet metal, on rollerskates, blindfolded. or something to that effect. realized for a network of computers, i imagine a client sending messages over UDP (or some such protocol that does not enforce a handshake). the messages, if they are received, keep the network silent. errors can be handled in one of two ways: either an error causes the client to send a non-silent message to the server(s), or the silent messages have a limited effective duration that wears off after some time, allowing sounding to happen. in the first paradigm, the silent packets have no interpretation coherent to the observer, whereas in the second they have a negative impact on some running soundmaking process. other realizations are acceptable as well; i'm going to start with those two.
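a sketch of the second paradigm in supercollider (the '/silence' path and the three-second lifespan are arbitrary): sounding creeps in whenever the silent packets stop arriving.

(
var sound = { |t_keep = 1|
    // Trig holds a 1 for three seconds after each refresh; when it
    // lapses, the amplitude lags up from zero and sounding happens.
    SinOsc.ar(220, 0, (1 - Lag.kr(Trig.kr(t_keep, 3), 1)) * 0.2);
}.play;
OSCresponder(nil, '/silence', { sound.set(\t_keep, 1) }).add;
)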

2.) find all the unique words in a lexicon that contain a particular grapheme set, sort them using some orthographic method (alphabetical, reverse-alphabetical, etc) and say them. so for example, if my source is the above text and my grapheme set were "he", the result would be: "coherent either grapheme other sheet the them they whereas". ideally the text is spoken without affect and without pause. in cases where the set is longer and breathing becomes necessary, several solutions present themselves: multiple performers may be synchronized, tape may be used to remove pauses, or the speech may be synthesized. my first realization of this piece took the form of a very simple bash script, which uses the digitized oxford english thesaurus as its lexicon, and the grapheme set "our":
cat /usr/share/dict/words | grep "our" | say

several solutions to different grapheme sets may be intoned simultaneously.  another particularly beautiful grapheme set is "ht".