giant staircase

... or the "big stairs," as my Aussie friends called it, is literally the face of a cliff with stairs carved into it, i believe of aboriginal origin. I was fortunate enough to get a recording of this soundscape with a well-calibrated pair of condenser mics through some kind of pre-amp. Mitch hooked me up with the gear and persuaded my sister and me to come out to the site and be tourists. since then i've used the material as fodder for a few projects, including my recent studio retrospective, "Codecs, 2007-08."

this time, i used the material as input for my spectral tracing patch in supercollider. the patch consists of a filterbank that sends osc messages back to sclang describing the spectrum of the incoming signal. in sclang i declared a function that responds to the osc messages by writing csound score instructions into a file. the result was a csound score that was 102.5 megabytes! it is so big, i was unable to open it in JEdit! on the commandline, i could cat the file contents to the console, but the resulting data vomit lasted about a minute. what a huge file! i used a pre-filter to get slices of the spectrum and dynamic range, and smeared the spectral components by statistically re-tuning the grains by various octaves.
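to give a sense of the shape of that tracing function, here is a minimal sketch in python rather than sclang (the osc address, argument layout, and csound p-fields are hypothetical stand-ins for whatever the filterbank actually sends):

from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer
import time

score = open("trace.sco", "w")
start = time.time()

def on_band(address, freq, amp):
    # one grain event per spectral report: i <instr> <start> <dur> <amp> <freq>
    t = time.time() - start
    score.write(f"i1 {t:.4f} 0.05 {amp:.4f} {freq:.2f}\n")

dispatcher = Dispatcher()
dispatcher.map("/band", on_band)           # '/band' is a made-up address
BlockingOSCUDPServer(("127.0.0.1", 57121), dispatcher).serve_forever()

every incoming report becomes one "i" statement, which is how a score file balloons to a hundred megabytes so quickly.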

this tracing function is a bit like a charcoal rubbing. it can often result in a pretty close resynthesis if one takes care to get the appropriate slice (which is monitored and performed in realtime) and provided one sets up the appropriate resynthesis instructions on the other side (which can be performed in realtime but in this case was not). of course, i have better ways of making a clean resynthesis that reside completely in scsynth and thus are sample accurate and totally realtime, but the point here was to generate a csound score for grain instructions.

it is interesting to note here that csound is used for the mpeg codec. i have been looking at csound patches for wavelet analysis and they seem pretty simple and easy (for csound anyway) to implement.

some thoughts on performance

i have just finished digitizing the most recent live set i have recorded. the artifact was made on a small handheld tape-recorder. as a result, the sound quality suffers quite a bit, which helps amplify the "ethnographic" sensibility of the document. the grain of the tape is a welcome distortion, just like the filtering that occurs as the result of the space i'm playing in. i really believe that too many composers who work with electronics, especially those who work with acousmatic pieces, pay too little attention to recordings made of their performances, and instead emphasize the original artifact. the way the performance space filters the sound is lost because of this oversight, especially in transient-rich electronic music. to me, this elision results in a lack of dimensionality to the sound and an overemphasis on the effect of control, which is achieved by monolithically assuming the transparency of the medium and the perfection of the resynthesis.

in the performance i used a few instruments i designed specifically in order to thwart my own intention to control them. these somewhat limiting and unpredictable instruments (automatons?) were incorporated into a mixer which ran feedback both internally and through a planar speaker element i had set perpendicular to the stage's PA. small microphones created more feedback loops as well as picking up the instruments themselves. software was designed and manipulated in realtime. instead of manually controlling parameters like density, trigger rate, grain pitch, and deviation, i set up a control system based on polyrhythms, and manipulated the ranges and types of behaviours.

the instruments were mostly driven by analogue oscillators with photo-sensitive components. i came equipped with a flashlight to influence them. some of the instruments also produced light, which allowed interdependent events to occur amongst the "automatons."

i see these "automatons" as an example of what Bowers & Archer call "infra-instruments." however, it is not so much the individual actors themselves that interest me as the ecosystem they inhabit. for my NIME performance, i plan to use some sort of clumsy physical system to trigger note events, preferably of both electroacoustic and computational origin.

as for what sort of sound-world i'm shooting for, the analogy always comes back to David Tudor. i guess another sound-world i wouldn't mind evoking is that of Paul De Marinis, specifically with his use of articulatory synthesis and linguistics.

cattri surgery

Over the summer, Mike and I determined it would probably be a good idea to disassemble his five node baby cluster and let me integrate them into the Cattri. Seeing as most of my development thus far has involved a single remote server, Dukkha, not to mention the fact that the introduction of more nodes would necessitate the purchase of a switch, we decided to take each computer apart and organize the guts for spares. I think he kept a hard drive as well. Anyway it's great to know that anytime a part goes, there are plenty of identical replacements for me to choose from. We also kept one whole computer intact, just in case.

This change in supplies allowed me to double the RAM capacity of the cluster. It wasn't particularly ballsy before, just 1 gig across the whole four nodes. But this seemed to be enough to run well-optimized granular processes, even those using buffers stored in memory. I have two gigs of RAM on my laptop, which is itself a dual-core "cluster," but that's not quite the point. The point is that each node now has 512 MB of RAM to work with.

Taking the cluster apart also gave me the opportunity to do a little cleaning. I got to the cable-salad first; later, when I had each machine open on my workbench, I was able to clean some of the dust out. The mixer has also been cleaned up a little, and just rewiring the whole thing gave me the chance to think about the routing matrix I want to build this semester. To have a performance where the signal path is dynamic and programmatically accessible also lends itself to the possibility of visualization. I am really intrigued by the idea of network topology being analogous to the surfaces of manifolds. Of course, these would often be objects that could never extend into the four dimensional container as given by our senses, but they could be represented as shifting, animated projections. A dedicated "projector node"? Now we're talking.

It's a guurlll!

This is what Marga looks like on the inside. I love the pink circuitboard hottness. Marga means "The way leading to the cessation of suffering," the fourth Noble Truth. She has a white case and the nicest cables of the four. I figured she's a lady, so she has to look classy, especially given she's outnumbered by the boyz.

forking-a

so i'm still playing with the idea of using l-systems as pop music structures.  let me recap as to why this interests me, because i'm also trying to reaffirm that i'm not just doing this as a technical exercise.

first, let me position myself as a practitioner of western music investigating western applications of "pop" culture.  while western "pop" music is heavily constrained in terms of form, it has a high degree of semantic power because those constraints are conventional and solutions can be predicted from within a range of legal possibilities.  this is not unlike other highly constrained art forms.  because of this, and because of other socio-political assumptions implicit in its making, "pop" has the potential to profoundly express emotions.  among these socio-political assumptions are those of class, gender and race.  of those three axes, only one is presumed to exist, that being gender, which is generally treated fairly conservatively to the extent that it exists at all.  the "pop" gambit is to make distinctions disappear, so all potential consumers are pleased, and perhaps even feel liberation in performing traditional roles and customs.  additionally, since class doesn't exist, or perhaps since only the middle class exists, one must remain completely blind to the means of production which exploit the class system in order to survive.

all the while, this music is so heavily constrained that it can be inflected with many subtleties, and in order to pick up on these subtleties, one must be privy to the constraints that the format entails.  the same phenomenon happens with other formal structures like the blues.   however, these understandings are generally assumed to be unconscious.  very few "pop" songs call direct attention to their formal composition, although of course exceptions exist.

this leads me to a problem in such rigorously constrained musical forms: given all these very specific rules for how to make a "legal" solution, how does one still manage to convey such a wide range of emotional content?  i believe i already answered this question, but it's still a problem.  this can be expanded into other problems, such as: given an arbitrary but similarly constrained ruleset, how can one still achieve such a high level of affect in a solution?  the answer clearly does not lie in the use of words, since i would place tonal counterpoint practice in a similar category of strict rulesets.  the same is true in karnatak music, where melodic improvisation is constrained to such a degree that within a few improvised notes, the audience is able to draw conclusions as to the identity of the piece.  Note this is not due to a limited canon.  (One possible explanation for this phenomenon at karnatak music concerts may also be the shared notion of the learned audience as a social virtue.)

This brings me to l-systems.  L-systems are formal rewrite languages used to produce self-similar, occasionally plant-like structures and fractals.  These systems have sprung up in many sound and visual art works.  I do not pretend that my concept is unique in that respect.  To my understanding of the current work in this area, my position is unique because I am striving to produce music that, while constrained by these algorithms, remains evocative and subtle.  Furthermore, I am attempting to draw comparisons with poetic grammatical substitutions (ie epic poetry) and other spoken but extra-natural grammars (such as solkattu).  In so doing, terms in a generated 'sentence' are composed of short phrases which are context sensitive but also highly constrained.  While the form is strict, its interpretation is indeterminate within a significant range.

Not much of the above is news.  I have posted about this project several times previously, and if you want more specific examples and anecdotes I refer you to my earlier posts: more algae and factory factories.

The news comes from an issue I am having with certain types of l-systems that branch or fork off.  Since those terms are vague and abstract, and also apply to most l-systems, let me clarify.  I am having problems rendering parallel branches.  The issue is one of implementation, to be sure, and I'm close to a solution as well, but I thought I'd share some thoughts before I totally figure it out and get sick of it.

To allow sclang to choose which patterns to tell the server(s) to play, I have been piping in a commandline script Mike Clemow and I have been working on that generates the l-system instructions.  It is fairly simple python code, and with Mike's help it has been abstracted to allow for user-defined rulesets, and stochastic applications of those rulesets.  The problem comes down to the definition of a pipe.  A pipe is the result of a commandline operation that can be read as a UnixFile into sclang.  This process I have implemented in a serial fashion, reading in bytewise as characters.  Herein lies the problem.  A branching l-system looks like this:

a -> [aab]; b -> a; axiom a;

a

[aab]

[[aab][aab]a]

[[[aab][aab]a][[aab][aab]a][aab]]

... and so on.  the '[' represents a matrix push, where the current values for position are saved and restored at the matrix pop ']'.  This can be accomplished in sound by forking off a thread from the main thread to follow the instructions contained within the brackets while the main thread continues simultaneously.  Thus, instructions that are read in series must be used in parallel by many processes.  Unfortunately, this does not fly, at least without some considerable tomfoolery.

I am considering saving the forked instructions into arrays stored in sclang.  The prospect kind of scares me.  Of course in a distributed environment I can send those forked messages to remote ips for synthesis, but that still leaves sclang with all these arrays to deal with.  Another implementation of the clustering could allow sclang to be distributed across the network as well, but I'm not sure what I think about that just yet.
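As a sketch of what "saving the forked instructions" could look like as plain data, here is a little Python that parses the bracketed string into nested lists, so each sub-list could later be handed to its own player or routine (this is only an illustration of the idea; the piece itself keeps these structures in sclang):

def parse_branches(s):
    stack = [[]]
    for ch in s:
        if ch == '[':
            stack.append([])          # matrix push: start a new branch
        elif ch == ']':
            branch = stack.pop()      # matrix pop: close the branch...
            stack[-1].append(branch)  # ...and attach it to its parent
        else:
            stack[-1].append(ch)
    return stack[0]

print(parse_branches("[[aab][aab]a]"))
# [[['a', 'a', 'b'], ['a', 'a', 'b'], 'a']]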

an evening of new computer music- live at wesleyan 4.24.07

I've posted the recording of this concert here.

One track was not recorded through the mic setup, ".big eat0r".  It was recorded off the server.  A big thank you to Fugan Dineen, a friend, teacher, and collaborator, for recording this event for me.  Also many thanks to Ron Kuivila for helping me get my act together and getting me unstuck.

The event happened in the Old Cinema in Wesleyan's CFA, a concrete box like the rest of the buildings there, but this one was particularly well suited for amplified music.  It was intimate but also had stadium seating, which was excellent, although it really looked like I was checking my email the whole time.  At least I had my huge G4 then.  This was sort of its swan song, I guess, as it died very soon thereafter.

Prior to the concert, Tom Benner had installed his Helium spectrum sonification piece, which he played over two loudspeakers that were pointed right at one of the building's flat concrete walls, above the entrance to the building.  The spatialization was lovely, and it seemed like certain frequencies were actually coming from other objects or buildings in the courtyard.  Meanwhile, inside the building, I had installed some very low frequency material derived from convolved impulse trains.

Devin Connelly played detuned guitar in "The King of Pentacles," the penultimate piece.  The system in place for that composition responded to 'secondary' aspects of Devin's playing, such as amplitude curves, as control parameters.  Many of the pieces worked with a time domain analysis method based on zero-crossings.  Sort of like wavesets, but enveloped and overlapping, convolved with bandlimited impulses.  Some of the work, like "The Moat Describes Its Castle," and "Personal Flowers," represent experiments with speech and extended vocal techniques.  "After Dad" I wrote for my father, whose birthday coincided with the date.  I know he's got video somewhere, because he made sure to document it, like he and my mom documented so many piano recitals.  Even though I'm still trying to explain it to them, they came all the way out to Connecticut to support me (not to mention sending me there in the first place), and for that I am very grateful.

automatic error-checking over an osc network

so i have been writing a composition for a cluster of linux compute nodes running scsynth. in doing this i have learned quite a bit about the perils of networking with a udp-based protocol such as OSC. since packets just get dropped all over the place, i set up an elaborate system for finding such errors and dynamically restoring the node hierarchy within a remote server. the way i've decided to do this was to clear the nodes being dynamically allocated and preserve the static nodes i'm using for mixing and routing. Here is my error checker:
~error = OSCresponder(z.addr, '/fail', {|t, r, msg|
	// a /fail reply means a command was dropped; if it was an /n_set,
	// ask the remote server for a snapshot of its node tree
	if(msg[1].postln == '/n_set',
		{z.sendMsg(57, 0)}, // 57 = /g_queryTree
		{});
}).add;

~tree = OSCresponder(z.addr, '/g_queryTree.reply', {|t, r, msg|
	msg.postln;
	// collect the dynamically allocated node IDs (everything >= 1000)
	~array = msg.select{|x| x.isInteger}.select{|x| x >= 1000};
	~array = ~array.sort.postln;
	//s.sendMsg(15, ~array(..5), 'gate', 0);
	Routine{
		// 15 = /n_set: lengthen each node's release, then close its gate
		~array.do{|x| z.sendMsg(15, x, 'ovr', 0.5)};
		wait(0.125);
		~array.do{|x| z.sendMsg(15, x, 'gate', 0)};
		"bounce?".postln;
		// stop listening while everything settles, then listen again
		~tree.remove;
		wait(4);
		~tree.add;
		"done".postln;
	}.play
}).add;
so ~error is a responder waiting for the error message to come through.  when such an event occurs, the responder sends a "query tree" message that basically asks for a snapshot of the node tree on the remote server.  on receipt of the response, the ~tree responder parses this message and finds all the dynamically allocated node IDs, lengthens the release of each node's envelope, and closes all of them.  then a 4 second wait period counts off to ensure positive feedback is suppressed.

this method works swimmingly, most of the time.  the only issue is, since both incoming and outgoing packets can get lost, sometimes an error will occur and then the notification that an error has occurred will get lost. this means the error has the potential to go completely unreported.  to combat this, I am deliberately throwing errors at regular points in the phrasing to ensure a fresh start for each phrase.  i really dislike this, but there isn't too much else I can do about it.  Errors happen often enough that I am considering making synthdefs that free themselves after a period of time, instead of this method of opening and closing envelope gates.  while the gate method gives more explicit control over when layers occur, its drawback is network instability.  for future compositions, i will probably still use this error checking mechanism, or something like it, but i certainly will change my strategy of control once more.  whereas this piece works almost completely off of one thread in series, with only the error checker operating on a separate thread, i will be taking advantage of parallelization and forking in the next piece.  Also, I think I can whittle down the number of messages i send over osc by finding redundancies in the code as well.

also i've been messing with performance situations that necessitate an external server.  i have a track that i'm working on right now that is basically a remix of the cluster piece, using a much more accurate version of the 'Codecs' spectral granular resynthesis patch (basically a time-domain wavelet transform).  this sample accurate analysis / resynthesis patch resides completely within a synthdef, and so is able to be implemented across a net using osc commands only for higher-order parameters, such as spectral envelope shaping, variable wavelet envelopes, frequency and time domain smearing, shifting and erosion.  since it is a realtime process, it lends itself well to application over a network.
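for what it's worth, the osc traffic for those higher-order parameters is just ordinary /n_set messages; here's a sketch of what that looks like from python with python-osc (the piece itself sends these from sclang, and the node ID and control names below are made up for illustration):

from pythonosc.udp_client import SimpleUDPClient

node = 1000
client = SimpleUDPClient("192.168.1.20", 57110)   # a remote scsynth
client.send_message("/n_set", [node, "smear", 0.3, "envShape", 2])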

one of my future goals is to build an analogue routing matrix that can be controlled by the serial port (ie from within supercollider) that will connect all the nodes together.  this would be good for chaotic feedback based systems as well.

other thoughts about the future lead me to acoustic analogues of granular synthesis, more specifically the use of piezos and solenoids activating resonant chambers of different sizes to resynthesize the human voice.  actually it could all be accomplished with piezos fairly easily just using a massively multichannel audio interface.  if that's not a job for a supercomputer, then i don't know what is.

uses for music

So I was following some links from the STEIM blog and somehow ended up on Richard Scott's youtube channel.  On there I found some beautiful video pieces by Yoann Trellu.  Specifically this one caught my eyes/ears.  Scott created the music on an analogue synth of some kind.  Anyway, the point is, someone operating under the handle trojjer left a typical youtube comment to the effect of:

What's the point??? Argh... Is it a "Magic Eye" video or whatever, or just pointless A/V noise? Although it does seem like there's some sort of Asteroids style spaceship navigating through tunnels at certain points... It's still stupid though.

I couldn't help but respond:

music is pointless.

I know it's probably shocking to some people, especially ones who might be so caught up in some peculiar incarnation of postcolonial capitalist dreamland that they have been trained to act as though they have forgotten the pleasures of immediacy, that music, the stuff that costs money to produce and costs you money to put on your i-pod, has absolutely no value whatsoever.  And I mean this in a way distinct from the way that money itself has no value-- money has the metaphorical value of a promise, and nothing that can be pointed to symbolically has immediate value (or pleasure).  What I mean is that music has no practical value, which separates it from things like food and infrastructure.  Are people 'helped' by the production of music?  (Obviously one can answer 'yes' to this question, but only if one is being extremely metaphorical, to the point where one uses mental, social or even psychological terms, which is another level or two up from what I'm referring to.)  I'm saying that music is as purposeless and necessary as play or gossip.

Of course if you know me, you know I have a problem with the term Music at all.  Typically I like to be more specific, by providing some kind of cultural frame to the term.  Terms like 'Western Music,' 'West African Music,' 'Karnatak Music,' and 'Film Music' (usually associated with a region and period) provide a necessary frame, without which it seems pointless to discuss the topic at all.  This effect is the result of the many different roles musical behaviours can play across cultures.  Rather than attempting to nail down all musical behaviour as some kind of intrinsic substance (the "organized sound" definition) or even experience (the "musical expression" caveat), I feel much more comfortable with the notion that, being a cultural practice, we can only describe how things that are called music by someone interact with things that person gives other names.  It can only satisfactorily be examined "in vivo," or in use.

Thus, I have a few specific examples of uses for music, all my own:

 

1.)    Solkattu

The only portable music system I have that works right now.  Solkattu is an oral system of rhythmic notation originating from South India.  It is similar to a solfege system, except Karnatak (South Indian) music already has solfege for working with scales.  Also, in a solfege system, it is generally required that there be a one-to-one relationship between vocal syllables and some aspect of the sound, generally fundamental pitch or scale degree.  Solkattu diverges from this because there are many syllabic substitutions that can be applied to the same rhythmic material.  Thus, it has a wider vocabulary.   It is also proper solkattu grammar to apply the same syllable to a different rhythmic value, thus leveraging the system's ability to perform expansions, contractions and other operations on a phrase while maintaining relative relationships within that phrase.

I use solkattu while I commute through the greater Manhattan area, generally on the subway systems.  I find it less conspicuous than an i-pod.  Also, it leaves the option open for me to interact with my surroundings immediately and without recap.  Most importantly it is free and open source.  (Tee - hee.)

*note the wikipedia article refers to konnakol, or the act of performing solkattu.  solkattu is the system, and konnakol its performance.

2.)    Robert Ashley 

I love to drive long distances.  Especially at night.  Something about the stark simplicity of the lines you have to follow and the utter deprivation of other senses really appeals to me.  At times I drive in silence; at other times it is not uncommon for me to listen to music or to do solkattu while I drive, depending on my level of arousal.  When I do listen to music in situations without conversation, the music I most commonly listen to (especially these days) is Robert Ashley.  Specifically I listen to his operas, which do not seem to resemble the work of or reactions to Monteverdi at all (ie what is generally called opera), at least on a superficial level.  I would also say that Robert Ashley himself resembles Monteverdi, the New Media artist who invented opera in 1607.  Ashley's scenes are incredibly visual, but sufficiently abstracted that they work over a very extended period of time.  The time experienced in his work is not narrative time, but something more akin to my experience of internal time.  Another way of saying this is that I'm always listening to Robert Ashley operas, but sometimes, especially when I drive long distances at night, I make the stereo play them for me.

3.)    Ella Fitzgerald

I like to solder naked.  Actually I think I solder best naked.  One thing I do when I solder, especially if I'm really concentrating, is to sing jazz "standards" of the Cole Porter, Richard Rodgers, or Jimmy Van Heusen variety.  These songs belong to a canon of work that I became very familiar with as a cocktail pianist.  The specific style of phrasing, melodic interpretation and arrangement I seem most susceptible to is that of Ella Fitzgerald.  For reasons beyond my understanding, I find myself belting these renditions out when I solder or prototype a circuit.  I don't (generally) do this when I shower, nor when I drive, or when I ride the subway.  I don't even do this when I disassemble and desolder components I find.  And don't even ask-- I can barely chew gum while I play the piano.  Only when I'm soldering.  Naked.

One popular use of music that I truly do not understand is as a substitute for notifications by electronic devices.  I have never successfully been aroused by an alarm clock whose setting is to play music.  I also don't get the whole ringtone thing.  Also the "please enjoy the music while your party is reached" phenomenon on cellphones now is something I have trouble relating to.  That being said, I generally don't mind it when other people's cellphones cause pop music or ringtones to play, because who knows how long this phenomenon will last.  I imagine we have already seen the extinction of many ringtones.

One last anecdote.  Jenny worked at a BOCES school for mentally retarded and autistic people a few years back.  We were dating at the time, but living in different states that summer.  For the longest time all I heard were her stories about this one autistic person named Florin, who would spend most of his time apparently unresponsive, but would occasionally take requests and sing, nearly inscrutably, songs from popular culture by the likes of Madonna, the Beach Boys, or Disney.  I imagine his 'use' of music as the point of convergence between his language and mental states and those of the people around him.

a few network drawings

a network:

(key for above network: - - - - means one time interaction; ----- means constant communication)

a network with directionality between connections:

a network with variably weighted connections:

mobile om

so I made this box for my twin sister's birthday (June 20), but in the haze of june birthday present making I missed my chance to document it before I gave it to her.  then I went home to PA and found it chilling out on her dresser.  so I have uploaded footage of the mobile om box to my youtube channel for all to see.  it works by amplifying mechanical vibrations in the box and playing them out a small speaker.  this causes feedback which can be controlled (at least you can try) with different positions of the lid and pressure points around the box.  basically the unit acts as a vibrational amplifier, or, for those so inclined, a portable compass in case you need to find om.

momoscillator (final version)

here's a video of the final version of the momoscillator.  it was made largely from recycled (read: dumpster-dived) materials.  the momoscillator is a chaotic music box.  it is intended as a birthday present for my mother.  as of this post, i am about to travel to pennsylvania to deliver it to her personally because it is both incredibly fragile and resembles a bomb.  there is a single photoresistor-controlled feedback bus from the unamplified signal to the timing oscillator.  it is much more calm than previous incarnations and also somewhat more melodic.  the power switch is tripped by opening and closing the box.

momoscillator

I've been developing a circuit for my mother's birthday present. Initially I was going for a modified version of the gift Jeramie, Jenny and I made for Luna, but things have changed.

The circuit in Luna's present was a simple 3-voice sequencer using gated oscillators for the voices and a binary divider for the clock signals. This way, the tempo could be modified separately from the pitch of the three tones. The fourth oscillator remained ungated and controlled the clock signal for the divider IC.

This system is nice and orderly, like a music box, but electronic. Unfortunately, the momoscillator differs from this model through the use of feedback resistors. Simply mixing a portion of the divider circuit's response with the +9v bus and sending the result to the second input of the gated oscillator circuit produced extremely complex patterns. Additionally, the input signal to the amp circuit was omitted, and instead the amp circuit output the difference between the ground coming into it and the 'reset' input to the 4040, which is also nominally 'ground', so the circuit contains both feedback and instability from a split ground. A demo of that unstable version is on my youtube channel now.

Later versions of this circuit omit the split ground phenomenon, as it renders the breadboarded prototype significantly dissimilar to the final version.  Since many different circuits can perform the same theoretical tasks, there can be a lot of variation between the theoretical system and an instance of that system in implementation.  These changes don't normally matter (ie in a well-grounded, stable circuit), but they do here.

The instability of circuits as the result of conflicting feedback loops, or attractors, is far more interesting to me as a composer.  More documentation to come.

lunar lounge unit (beta) complete

i have finished testing and debugging the beta version of the lunar lounge processing unit.  i hesitate to call it a guitar pedal because it's simply not one.  not only is the interface different, but also it is not necessarily even an effect as opposed to a synth.  i'm starting to buy david tudor's reasons for eschewing the distinction.  all told, the thing is the most complex analogue circuit i've ever built -- and the least predictable.  i wanted its usage to be as ambiguous as possible so there isn't much in the way of markings on it.  also the insides are totally viewable from the outside, which came in handy as i was testing it, not only for the occasional jiggle (the universal fix) but also for the ability to add to the circuit with ease as it's running.

i added a pot that connects the dual-inverter distortion circuit's output to just before the summing resistor, after all the octave dividing and pitch-synchronous gating.  the logic behind this was to control how much the gating interrupted the signal, like a wet/dry knob, sort of.  sometimes it sounds like that.  mostly it affects the tone in awesome but not too predictable ways.  i enjoy having it on an aux send on my mixer because i can send feedback through it pretty easily.  when that happens, it self-oscillates in the most glorious of ways.  also, i found that with the addition of that pot, the whole circuit became rock solid.  previously, i assume when the signal got too hot for too long, it would get edgy and drop out, thus necessitating the jiggle.  after the pot was added, this behaviour stopped and it became very stable indeed.  well, stable is a relative term i suppose.

i'm really excited about adding a few more summing matrices and jacks to the sides so i can produce multichannel sounds from this one unit.  it's housed in an alarm system's motion detector box -- the kind that lights up when you move in front of it.  i tested it with phase-modulated grains in supercollider.  the incoming material was actually quite uninteresting, but the unit responded with stuff that i can only describe as analogue granular synthesis.  it makes these quasi-synchronous trains of pulses that sometimes have spectral content related to the input, at pulse rates related to what the circuit determines to be the fundamental pitch of the signal.  i am considering adding pitch-synchronous leds to the face for interfacing with photosensitive circuitry.  i have determined to build devin a different version of this circuit, in stompbox form, with 1/4" jacks instead of RCAs.  aside from improving the design and making the case more protective, my rationale behind this decision was that i have been giving too many of my circuits away and i don't have much to show for all my hard work.  also we have no idea what this thing is going to do to a guitar...

click here to listen to another demo.

top view

front view

Codecs, 2007-2008

New album download link here.

For quite some time now, I have been operating under the assumption that I must not look back to what I have already perfected, and that the material I want to show the "world" should be completely new. The issue with this method of operation is that my documentation occasionally suffers from some significant gaps. To combat this, and also to sort of respond to the work of my peers, I have compiled some material, some of which has been previously self-published under a cc license. My intentions for this record are not only to document the technical achievements of the past year, but also to arrange this material in a way that is faithful to the larger concepts to which those achievements represent a response. Specifically, this album responds to the idea of an 'album' as a set of discrete but somehow interrelated information packets, each one encoded in some way which facilitates the mechanical reproduction of the whole. In this context, the question of whether this is a "lossy" process (like an MP3 or OGG) or a "lossless" process (WAV, AIFF, FLAC) is somewhat arbitrary, as all encoding systems are truly self-referential, and the real substance of the experience comes from a sort of 'quantization' of one system to another. These artifacts of translation become the 'content' within the frame of the ID3 tags and JPG cover art.

In this album, I have focused on the artifacts of spectral encoding, amplifying and building structures from the blemishes and internal nuances of sounds. The alchemical 'insides' of sound, which many simpler, non-analytic granular processes totally ignore, can emerge and be manipulated in a number of surprising but phenomenologically convergent ways. Why commercially available sound-producing platforms all but totally neglect this area of open research is absolutely beyond me. I think it goes hand-in-hand with the faddishness of granulation in music right now. Much like the vocoder or the gate reverb in the past, the narrow mode of usage deemed commercially viable for granulation will certainly cause future consumers to reject the technique altogether, declaring the effect 'retro.' Listened to "Graceland" recently? Then you know what I mean (gags).

More important than the wavelet and asynchronous spectral granular techniques I used in the pieces on this record are the compositional considerations I made to showcase them. Many of the pieces are only as long as they need to be to allow the driving idea to be expressed. There is very little 'grooving' going on in this album. Each piece demonstrates a different system of meaning that is closed in on itself, and therefore in contrast to previous releases I have omitted the use of the crossfade. Furthermore, the dynamic range of this album is crucial for demonstrating many of its ideas. While a lot of electronic music (and that basically means all recorded music) tends to approach flat dynamics, stuck all the way up at the loudest region of amplitude for much of the 'body' of the work, many of these pieces incorporate silence and space as an important component. This is not to say that I dislike compression as a technique. I adore the negative space one can achieve by squashing out the dynamics from a sound. It is this multiplicity of framing devices that mark this album as interstitial, in that it occupies many different territories of genre, tempo and dynamic range, not to mention compositional process and intent.

1) tone:

banded noise / formant wavelets triggered by spectral analysis of trainlet clouds. also granular reverb and delay lines.

2) cartesian contusion:

the initial rhythmic motives were designed in a standard "drum machine" style editor I created. I placed a bandpass filter after this so I could select areas within the spectrum to focus on. I ran the result through a 60 TET filterbank spanning the entire audible spectrum from 16 hz to 32768 hz, triggering synthetic wavelets I designed for synth percussion. this forms the percussive motives. I used these motives to trigger longer wavelets derived from an expodec envelope and my own voice. the filterbank I used for this layer was 12 TJT, and similarly I used a bandpass pre-filter to tune the register of the result. this forms the melodic/cluster motives. I did some additional spicing to the percussion using soft-clipping and a very subtle amount of waveshaping, eq-ing, etc. the two layers are not simply 'mixed', but the signal from the percussion actually modulates the resulting melodic signal by a sort of stereo ring-mod, where the modulating signal determines the stereo position of the carrier. then I mixed and peak-limited the result.
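just to spell out the tuning math of that filterbank (the actual filters ran in supercollider; this is only a sketch of the band layout):

# center frequencies of a 60 TET filterbank from 16 hz up to 32768 hz
low, steps_per_octave, octaves = 16.0, 60, 11      # log2(32768 / 16) = 11 octaves
freqs = [low * 2 ** (k / steps_per_octave) for k in range(octaves * steps_per_octave + 1)]
print(len(freqs), freqs[0], freqs[-1])             # 661 bands, 16.0 ... 32768.0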

3) tamarack:

rivers made of bells, rivers made of bugs, rivers made of coffee. concrete wavelets applied in series. the first attractor layer is made of trainlets, which show through in the spaces. the 'distortion' you might think is the result of your speakers or my poor recording technique is actually an extremely bright spectral phenomenon caused by high density transients. it can be harmful to hearing at high volumes or with headphones, so make sure you play it extra loud so you can damage your neighbor's ears too.

4) wavelet iron trio:

three renderings of a process I came to call "time-domain ironing," wherein the timing information for an analyzed signal is replaced by a linear or geometric series. in this case the timing information becomes a geometric series. this means that rather than each wavelet being evenly spaced from the next, the distance increases with each wavelet. the only thing that changes with each of the three renderings is the resynthesis wavelet.
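a minimal sketch of the ironing idea (parameter names and values here are illustrative, not the ones used on the record):

def ironed_onsets(n_wavelets, first_gap=0.01, ratio=1.05):
    # replace the analyzed onset times with a geometric series:
    # each inter-onset gap is `ratio` times longer than the previous one
    t, gap, times = 0.0, first_gap, []
    for _ in range(n_wavelets):
        times.append(t)
        t += gap
        gap *= ratio
    return times

onsets = ironed_onsets(200)   # feed these back in as the resynthesis timing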

5) antonym:

this is the open space. you're on a highway, going up and downhill. you may take exits and go on overpasses. the contours of the traffic around you wax and wane like tidal cycles.

or, if you prefer, it's uncle braxy's ghost-trance for psy burnouts. the tiny vampire rave inside your skull outfit with circus midgets and wormholes.

spectral ranges are defined within the trance train. while the train accelerates you look at it through a telescope, while it decelerates you see every atomic gesture, every morpheme that makes it up.
the wavelets can be defined within slices of the spectrum, and restricted to discrete time/frequency relationships like scales and temperaments. they can range from long resonances to infinitesimally tiny crispers on transients, from faithful facsimile to clouds of resampled memory.

6) What's the nine-to-five for?:

haar wavelet mutation occurs between 2 channels of audio, and the result is duplicated as stereo. the statistical formula is the same for L and R but the outcomes are slightly different. this gives some nice wide imaging. also the thing is a palindrome. the algorithm stochastically replaces haar wavelet coefficients from one signal with those of another, resulting in a non-linear crossfade or mutation.
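the core of that mutation can be sketched in a few lines of python with numpy and PyWavelets (the record itself was not made with these libraries, so take the names as stand-ins):

import numpy as np
import pywt

def haar_mutate(a, b, p=0.5, seed=None):
    # stochastically swap haar wavelet coefficients of signal a with those of b;
    # assumes a and b are the same length
    rng = np.random.default_rng(seed)
    ca = pywt.wavedec(a, 'haar')
    cb = pywt.wavedec(b, 'haar')
    out = []
    for level_a, level_b in zip(ca, cb):
        mask = rng.random(level_a.shape) < p   # True -> take this coefficient from b
        out.append(np.where(mask, level_b, level_a))
    return pywt.waverec(out, 'haar')

sweeping p from 0 to 1 over time gives the non-linear crossfade; running it twice with different seeds gives the slightly different L and R outcomes.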

7) the young lovers:

a version of the chase sequence from Lenore, a short horror film I worked on by Asli Soncelly. it received the award for best senior film- digital in 2008 from Wesleyan.

8) salmonella:

a picture of salmonella constructed from my voice undergoes frequency domain erosion. pixels are rendered into grains of my voice and undergo gradual spectral gating. particles whose amplitudes fall below a rising threshold are removed. click here for the picture.
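the "rising threshold" gate can be sketched like this, assuming an STFT view of the sound (scipy here is just for illustration; the piece was realized with my own grain machinery):

import numpy as np
from scipy.signal import stft, istft

def rising_gate(x, sr, start_db=-90.0, end_db=-20.0, nfft=2048):
    f, t, Z = stft(x, fs=sr, nperseg=nfft)
    mag_db = 20 * np.log10(np.abs(Z) + 1e-12)
    # the threshold rises linearly over the course of the sound;
    # anything that falls below it is removed
    thresh = np.linspace(start_db, end_db, num=len(t))
    Z = np.where(mag_db >= thresh[None, :], Z, 0.0)
    _, y = istft(Z, fs=sr, nperseg=nfft)
    return y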

9) sed at the abysmal pyramids:

synthetic and concrete spectral grains. the foreground grains are from a recording of a spring being plucked.

10) the great division:

"Hard and Painful" by AndyChrist, remixed using haar wavelet techniques and recursive counting algorithms.

11) exit strategy:

I designed the sequencer for this piece to emulate the patterns I was hearing from the leaky taps in my bathroom at the time. the controlling synth plays low frequency impulse trains at rational intervals to the fundamental, and at every period of the fundamental (or subharmonic thereof), it changes this interval. the impulses are interpreted as strongly timed messages to produce sound-events. there is one of these synths controlling each "track" or instrument part. as a result, the parts tend to meet at regular intervals and relate to each other coherently, while their specific relationships change regularly. the temperament is just, arranged in an 11-tone scale. I recorded the concrete material, mostly birds and insects, in the Australian bush. There is also a layer of harmonic gabor wavelet resynthesis, frequency quantized to the gamut of the piece itself.
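the control logic, stripped of all the synthesis, amounts to something like this sketch (ratios and rates are made up; the real thing runs as strongly timed synths in scsynth):

import random
from fractions import Fraction

def trigger_times(fundamental_hz=0.5, total_s=60.0,
                  ratios=(Fraction(1, 1), Fraction(3, 2), Fraction(5, 4), Fraction(7, 4))):
    # impulse trains at rational intervals to the fundamental; the interval is
    # re-chosen once per fundamental period, so parts meet at regular points
    period = 1.0 / fundamental_hz
    times, t = [], 0.0
    while t < total_s:
        ratio = random.choice(ratios)
        step = period / float(ratio)
        tick = t
        while tick < t + period and tick < total_s:
            times.append(round(tick, 4))
            tick += step
        t += period
    return times

one of these per "track" gives the leaky-tap behaviour: coherent meetings at the period boundaries, shifting relationships inside them.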

lunar lounge unit R&D update

I've completed most of the soldering for the first edition lunar lounge unit.  I am about to commence the testing and debugging phase of development.  I have a functional breadboarded version next to my workspace for reference.  Right now, the jacks and switch haven't been soldered to the unit and alligator clips give me connectivity.  I will probably wait until after the debugging phase, and possibly after I find and prepare some kind of enclosure for this thing, to solder those external components on.  I am considering adding a power indicating LED.  I'm not going to give this version to Devin, anyway.  I'd really like to encapsulate the unit into a true stompbox, and add some tone/mixing knobs for further control and variation.  Also, for the proper guitar version of this circuit I might add a pull-down resistor before the divider circuit, so it shuts up when there's no input instead of oscillating on its own.  For a sample of how it sounds, click here.  For more information about the concept, click here.

a wide shot of my testing setup.

a closeup on the unit itself.

really, really low

i like looking at this recording through an oscilloscope because it's so low frequency.  i used phase modulated grains with varied envelopes.  i livecoded the parameters in realtime, both with direct osc messages for the phase modulated grains and with a gui i made to programmatically stream envelope shapes into a buffer for realtime manipulation.  pure synthetic granular synthesis is pretty rare these days because commercial software all but ignores it.  i think once the hype dies down people will stop abusing the technique as an effect and perhaps those who stick with it will develop it further.  in my wavelet analysis days, i was really mystified by two aspects of granular synthesis: phase and enveloping.  i just couldn't see the point in using anything other than a von hann window, as it produced the fewest artifacts across the board.  of course, i was neglecting the well known fact that 'artifacts' is a fancy term for 'cool stuff.'  ok, that's a bit much.  suffice it to say that this new paradigm for synthesis is very eye-opening and dynamic.  that leads me to another thing that seems to be lost on much commercial music: dynamics!  seriously, i think the one trait all of my favorite recordings share is the creative use of dynamics, intentional or otherwise.  another cool thing about this track is that if you leave it on and walk around the building you're in, you can still hear it because the waves are so large they pass through quite a bit before being absorbed fully.  i had it on repeat for a while and just experienced it from different locations in my apartment, and it felt different each time.  also, no matter how loud you play it, if you live next to people, no one will ever suspect it's a sound coming from your speakers on purpose.  ergo, no noise complaints!  mostly i like it because it spends much of its time right at the threshold between pitch and rhythm, and attempts, without too blatantly just adding harmonics, to demonstrate that neither pitch nor rhythm exists on its own at all; each is completely the other.

lunar lounge demo

this is a demonstration of the current prototype of the lunar lounge pedal. it is the left channel of a recording of me playing "penelope" on a grand piano just about four years ago. "penelope" is a tune i wrote and arranged for the solo piano album i made that year. there were some handclaps in the recording, and i believe some small bells as well. the only thing that really survives the effect is my right hand. i also treated this demo after the fact with some fourier manipulations, just at the beginning and end, for some smoothness. i will probably put some kind of light-sensitive filter on the pedal for expressive / chaotic tone control. i decided that this pedal makes guitar players sound like reggie lucas, especially if a wah is used after it. the pedal has changed a bit since the last post: i added a second pre-amp inverter stage with a 10 megohm resistor across it to add some distortion, and some feedback capacitors to low-pass the incoming signal a bit.

lunar lounge pedal

i've been developing a pedal for Devin to play his guitar through loudly on 8.8.08 when we take over a biker bar called Popeyes in Peekskill, NY.

the basic idea has been to pre-amplify the signal with an inverter chip, possibly after filtering it with an R-C circuit. the signal is then used as a clock for a divider circuit, which is capable of dividing the signal by several octaves until it becomes a series of clicks and the sensation of tone disappears completely. the octaves that are perceived as stable tones then become gated (turned on and off) by the lower-than-audio-rate octaves. these pulsing, detuned versions of the input signal can then either get summed back together and amplified (as in the pedal), or sent to different speakers around a space.
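a rough software model of that signal flow, for illustration only (the real thing is an analogue inverter/divider circuit, and the stage counts here are arbitrary):

import numpy as np

def divide_and_gate(x, audio_stages=3, gate_stage=8):
    # rising zero crossings of the input act as the clock for a binary counter;
    # bit b of the counter is the input divided down by b+1 octaves
    stages = gate_stage + 1
    bits = np.zeros((stages, len(x)))
    counter, prev = 0, 0.0
    for i, s in enumerate(x):
        if prev < 0.0 <= s:
            counter += 1
        prev = s
        for b in range(stages):
            bits[b, i] = (counter >> b) & 1
    gate = bits[gate_stage]                     # a lower-than-audio-rate square
    octaves = bits[:audio_stages] * 2.0 - 1.0   # the still-audible divided octaves
    return (octaves * gate).sum(axis=0) / audio_stages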

i think another possibility for this thing could be to generate synthetic tones using the NAND gates as oscillators, and having the pitch of the incoming signal affect the overall speed, but not the fundamental pitch, of the resulting pattern.

i was considering submitting a more complete schematic and recording of the circuit for the DVD version of Nic Collins's book, but i'm pretty sure it's too late, and it's not been quite perfected yet.

i played King Tubby with Soul Syndicate through it. the output signal no longer sounded like dub. occasionally when the vocals crackled through, i was reminded of Jamie Saft and Merzbow's duo recording, Merzdub, or perhaps some Panasonic. I will get a recording of something up soon. however, as you might have guessed, it's very unstable. i described the rules of the system in my abstracted, idealized language because i come from a computer programming background, when in reality that's only a point of view and systems don't all behave that way. it seems to lock on more easily to higher-pitched material, so i thought perhaps a guitar would take to it. i tested it with solo Derek Bailey, and the output signal no longer sounded like Derek Bailey, but it at least still sounded like a guitar. at least until it would get confused by softer material. i worry that a line-level guitar signal might not be enough to drive the circuit reliably. if that's the case then i'll put in another pre-amp hex inverter stage.

i like this path because it reminds me of some granular things in the computer music world. it's really refreshing sometimes to work in a medium that's harder to control. in my computer music compositions, i've recently been focusing on allowing the system in place to be stochastic in nature, where the form merely determines a new table of probable outcomes. while this is all just peaches and cream, i'm getting proficient enough in that environment that i can occasionally predict it too well, and i need to introduce some more indeterminacy into the system. that being said, there have been some recent moments of pure exploration with supercollider. i have happened upon two parameters of granular synthesis that i hadn't really played with yet: envelopes and phase. i did a few experiments in the past with simple phase modulations of wavelets, but it seemed like more trouble than it was worth. oh how wrong i was. ditto with the envelope thing. more to come on that later.

housecleaning

i have been working on several new projects, which will be documented in the pages to come. for now, i would like to document a triumph a bit smaller than building an analogue synth, or a guitar pedal, or a plant-shaped pop tune. in this post i would simply like to document that i have cleaned and reorganized my work space. there will be a larger revision of this space very soon-- we have plans to loft the bed so that my keyboards and some other larger musical gear will be able to fit. also, transistor pins are sharp, and do not mix with silk sheets very comfortably. especially in Jenny's opinion. so there will most likely be another post sometime in the near future about the new space. however, in the interim, i'd like to capture this moment before everything gets turned around again.

the analogue

the digital

a mid summer tight scream

It took approximately 14 hours to drive from Brooklyn to Asheville. We spent almost an hour and a half squeezing through Canal Street in lower Manhattan to get to the Holland Tunnel, a decided improvement on our previous attempt, which lasted almost twice as long.

You might say we were headed for the middle of nowhere, and you'd be partially right. When we landed, we found ourselves surrounded by some 900 acres of empty orchard and forest. Of course, the valley's emptiness turned out to be an illusion; in fact it was vibrating and skittering with shrieking insects and frogs and jellyfish. Apparently the lake has its own species of jellyfish found only in that region. Birds had allegedly transferred the polyps from the ocean into the fresh water on their feet during migration.

The insects, particularly honeybees and spiders, feasted upon each other and the decaying, lichen-wrapped apple trees which resembled green skeletons that stretched up to the sky with many furry arms.

I call the scream tight because it came from the base of my spine and out the center of my forehead. It crawled up the back of my neck and burrowed into my cranium like a tick with a drill bit for a head. I wanted it to be full and loud but instead it shredded through my dehydrated throat and took to a whining register.

Asheville is a town where time slows down and pauses between words. Things can happen much more frantically above the Mason-Dixon line, where there is enough coffee and paranoia to fuel the next apocalypse. We had packed a jar of instant and some non-dairy creamer, but we didn't drink very much at all. The woods kept us awake.

The lineup disintegrated as an increasing number of acts fell through. A hive was discovered near where the second stage was staked out. Cinder said they needed to fill in some holes in the schedule, and my friends suggested I play my set after the invocation, despite the fact that I neither DJ nor play dark psy. Terrified, I obliged.

The set I had with me was about a year old. It was poorly mastered and in a direction more befitting a morning time-slot. Also, as I said before, it was not dark psy. Nevertheless I played the thing, or rather, Rod's computer played the thing, immediately following a ritual involving an altar placed in front of the DJ table. There was so much deliberation and so many accidents that brought me to press that single button.

I think future sets will involve prosthetics, sculpture, and visuals as a way to make the processes more transparent and direct. Jeramie seems really open to performative works, so we'll see where that takes us as well. I hope to have something worked out by the time I play again.

The triplets surely did us proud. Their set absolutely blew me away. The new material they played was impeccably produced, and Rod's custom software for granulation sounded excellent on the giant speakers. The textures they hit with some of their breaks were just gorgeous. Also, seeing all three of them behind the DJ table made me feel all warm and gooey. My back was messed up from writhing non-stop for hours, my kidneys keenly burned from dehydration, and yet I was completely transfixed. Actually, for their set I shook harder than I had the whole night. I had no choice really. They were like a team of nerdy biologists from some other planet, and we on the dancefloor and out in the Carolinian jungle were like their hapless, unsuspecting alien specimens. That was some twisted stuff.

At the time, I wasn't really expecting to let a scream like that out. My joyful head was just beginning to register the overwhelming recognition of being mangled to a pulp. It signaled as much pleasure as it did horror. My lungs pushed their hardest but the air shot right back at me from the cannons that were those massive subwoofers. While it was all I had left in me, I remained hungry and unsatisfied. I resolved on the 15 hour drive home to let another one out sometime in the near future.

Praat

this is possibly the coolest sound software i've ever seen.  it's incredibly powerful in analysis and synthesis.  it excels at linguistic / phonetic type things.  you can implement neural nets and OT grammars, which is possibly the coolest thing i've seen sound software do.  also it's fully scriptable and can run from the unix shell.  i'm considering running it with scripts called from supercollider, possibly piping the outputs back into supercollider for granular stuff.  i wonder how well it would hold up under a livecoded/networked system...
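calling it from the shell and catching its output is straightforward; here's a sketch in python (it assumes a praat binary on the PATH whose version supports the --run flag, and a hypothetical analyze.praat script that prints its results to stdout -- the same pipe could just as easily be read from within supercollider):

import subprocess

result = subprocess.run(
    ["praat", "--run", "analyze.praat", "input.wav"],
    capture_output=True, text=True, check=True,
)
for line in result.stdout.splitlines():
    print("praat says:", line)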

more algae

Following up on the previous post "Factory Factories," I have made some new developments in the direction of Lindenmayer systems.

To recap, I have been interested in using L-systems as compositional tools for quite some time. Avid readers of this blog who take their ginkgo may recall Sasu and Iannis, visually generated L-systems that served as 'scores' for sound compositions. Instead of reviewing what an L-system is in much detail, I refer the perplexed to the above-listed entry on Sasu or this other entry on Iannis.

Anyway, the whole point here is that L-systems are traditionally represented by strings of characters, each one representing instructions for growth. So a very complex L-system is generally anticipated to be quite long. The two programming languages I have been trying to use to generate these strings, Java and SuperCollider, have come up somewhat short of the mark, specifically due to the computationally intensive task of managing these strings of characters, growing new generations of strings, and rendering them into (hopefully) aesthetically pleasing results.

My solution? Learn Python.

I messed around with Python a bit with Mike to create shell tools for our OSC clusters. I have been impressed with its string manipulation ability and agility, and I'm now working on a simple app that grows algae files. From there, SuperCollider can read in the L-system bytewise and produce some noise. Since the L-system is generated asynchronously, this paradigm lends itself well to either the Arboretum or the Cattri.
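The core of the generator is tiny; here is a sketch of the kind of thing I mean, with a user-defined, stochastic ruleset and the result written to stdout so sclang can read it in through a pipe (the ruleset and names below are illustrative, not the actual tool):

import random
import sys

def grow(axiom, rules, generations, seed=None):
    rng = random.Random(seed)
    sentence = axiom
    for _ in range(generations):
        out = []
        for symbol in sentence:
            options = rules.get(symbol, [symbol])   # symbols with no rule pass through
            out.append(rng.choice(options))         # stochastic choice among rewrites
        sentence = "".join(out)
    return sentence

if __name__ == "__main__":
    rules = {"a": ["[aab]", "ab"], "b": ["a"]}      # two equally likely rewrites for 'a'
    sys.stdout.write(grow("a", rules, generations=4))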

Why all this to-do about L-systems? I'm producing a pop album for my boys at Unwashed Records that will use this technique exhaustively.

Crunchy Sonic Robot Attacks Psytrance Festival!

Balu is a light sensitive sound synthesis agent.

Mostly he enjoys squonking at the sky, attempting to communicate with extra-terrestrials. He knows that many of these beings like to attend psytrance festivals out in the woods, so Balu bought a ticket to Gaian Mind, packed up his few personal possessions, and warbled his way to southern PA.

There, he was tested with many strange occurrences. His mind was brought nearly to the brink of sanity (even for a CMOS logic chip!) as the overwhelming synchronicity he had been feeling turned out to be the impending singularity of all beings. Of course, when Balu stared into the sun, his circuits became so excited that almost no one could hear his yelps. No one, that is, except for the machine-elves...

In the cool of the night, Balu's mind would drift into rhythmic cells of shifting phases and pulsetrains. Ever the trickster, his strategies changed with each life-form he contacted: no two beings elicited the same response from Balu.

At this point it is safe to assume he has returned to his home in Brooklyn, to bring the top of the mountain down and disseminate his message of evolution and bliss.

factory factories

agog

a simple l-system rendered into granular structures. an l-system is a type of recursive language that was developed initially to describe the growth of plants and bacteria. one could also call this composition a fractal. this l-system is the 3rd generation of the axiom x_xx, where the rule x --> x_xx is applied to every x in the sentence each generation. the resulting sentence

x _ x x _ x _ x x x _ x x _ x _ x x _ x _ x x x _ x x x _ x x _ x _ x x x _ x x _ x _ x x _ x _ x x x _ x x _ x _ x x _ x _ x x x _ x x x _ x x _ x _ x x x _ x x x _ x x _ x _ x x x _ x x _ x _ x x _ x _ x x x _ x x x _ x x _ x _ x x x _ x x

is then rendered as legal supercollider code, and interpreted. this was done in realtime, and the initial l-system was devised on a single node in my new linux cluster, "Cattri". Cattri is a wirelessly accessible cluster built for livecoding that lives in my house. since the final product i chose to document wasn't complex enough to warrant that kind of processing power, i was able to render it on my macbook. i can livecode with it by manually altering the code which programmatically alters the code to be run, for example producing another generation of the l-system or changing how the l-system translates into SC3 code before it is interpreted. no edits, no overlays, all eq-ing and dynamics processing was done in realtime in SC3, and we barely scratched the surface of the processing power of one machine, much less a cluster. i'm starting to wonder if i should use the cluster to generate datasets in realtime and use them for livecoding performances. --actually nevermind, that's exactly what i'm going to do.
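
for the curious, here's a stripped-down sketch of that pipeline (the \grain synthdef, the timings and the translation rule are placeholders i'm improvising here, not the patch itself):

(
// grow the 3rd generation of the axiom x_xx, render it as sclang code, and interpret it
var grow = { |str|
    var out = "";
    str.do { |ch|
        var piece = if(ch == $x, { "x_xx" }, { ch.asString });
        out = out ++ piece;
    };
    out
};
var sentence = "x_xx", code = "Routine { ";
3.do { sentence = grow.(sentence) };
sentence.do { |ch|
    // hypothetical mapping: each x fires a grain, each _ is a longer rest
    var instr = if(ch == $x, { "Synth(\\grain); 0.05.wait; " }, { "0.2.wait; " });
    code = code ++ instr;
};
code = code ++ "}.play;";
code.interpret;
)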

i really like the possibilities of using the Cattri in performances. yes, they're heavy, yes, i don't know if they're totally necessary yet, but once i figure out a way to really use them for what they can do, you better believe i'm going to play gigs on them. with other people. i like the idea of using this l-system patch (or something like it) to algorithmically produce the structures for improvised dance music.

recent developments in a nutshell

Sorry it has been so long since my last post. It has been a really wild ride.

I find myself running full speed ahead so often that sometimes it seems like reflecting on what I've been able to accomplish would result in some kind of explosion. At the same time, my lack of reflection robs me of the perspective I need to allow myself to burst with these new skills in a satisfying way. And so, I sponged for so long, reacting and responding to ideas and growing ever readier to finally pinpoint that thing of all things, only to continue honing tangents and indulging obsessions. I feel that the time has finally come to take a step back and account for what needs accounting for.

I have started work on an ecosystem of small musical automata, each of which demonstrates a feedback system involving a 'crossing of levels'. The first one, pictured below,

photo-101.jpg

uses a photoresistor controlled clock to time a regular spasm of movement to a shaker motor. the photoresistor reacts to minute changes in light, which may either result from the movements of the light organic material attached to the weight, or from the freewheeling motion of the entire system. The periodic shakes occur faster as it gets more exposure to light, and the entire sculpture emits a squiggly drone through a resonant cavity in its base. He is particularly fond of lamps. I cannot thank Jeramie Belmay enough for his help with designing the body. This is the prototype. This is the final being.

The second item in this series is currently being designed as a light-emitting being of similar stature, essentially a converse of the first. Instead of blocking ambient light, this automaton blinks. The sounds are more complex as well. They play together nicely. Documentation to come.

I created "Loop", a simple biofeedback composition using in-mouth galvanic response in conjunction with control-level feedback structures.

Asli Soncelly's short film "Lenore" was an amazing project to be involved with. She has a really sick mind, a powerful asset when working on a horror film. I did the entire project in free open-source software, including some patches I made in SuperCollider for spectral granular synthesis. Also, designing reverb in SuperCollider was an interesting process. I eventually settled on convolution as my method of choice, and recorded reverb impulses in my bathroom. I also had some fun with the ambient noises, since much of the original sound had been damaged during filming. Contact microphones and freesound became my friends very quickly. It won "Best Senior Film" at Wesleyan, due to Asli's bloody cinematic genius.

Mike Clemow and I have been building a linux cluster at NYU for the purposes of running sound synthesis and other high-level media production algorithms using OSC as a message-passing interface. We made an appearance at the ITP spring show to demonstrate some of its functionality in these beginning stages of production. Using Julian Rohrhuber's Just-In-Time library for SuperCollider, I livecoded on the cluster using my laptop as a head-node, controlling streams of wavelets in realtime. I have learned much about the architecture of SuperCollider itself, and really about networks themselves, from this project. I cannot thank Mike enough for bearing with me as I wrap my head around these machines.

The Arboretum, a 12-node linux cluster workstation, has birthed two much lighter, more transparent "dev clusters". Mike and I each own a few machines, and are building clusters to our own specifications. I named mine the "Cattri", Pali for the "four", as in the Four Noble Truths in Buddhism. Each of the four linux boxes is named the Pali word for a Noble Truth: Dukkha, or "The Nature of Suffering", Samudaya, or "The Origin of Suffering", Nirodha, or "The Cessation of Suffering", and Marga, or "The Way Leading to the Cessation of Suffering".

At this point I have set up a wireless network on which it is possible to livecode with these machines, which are piped through my monitors as well as through the loft's new sound system-- a pair of flat speakers and an amp we found in the trash, fixed, and mounted to the walls. Between those and the cluster itself, it's shocking to think what people throw out! In addition to composing recursively symmetrical granular structures on the toilet, it is possible to wirelessly control the house's music and have access to a shared music library.

I see myself currently working toward a greater capacity for spontaneity in my work as my skills get honed. Current obsessions include formal languages, networked microphones, analog electronics, wavelet analysis, improvisation and sculpture. I also see myself becoming more interested in extended vocal techniques. Of course, more musings on each of these will happen at a later point.

lou

lou is a kinetic sculpture somewhat related to a mobile. he is named after lou harrison. lou (the sculpture) produces feedback through a network of steel wires which freely extend from his segments. there are two piezo discs driving this feedback. lou is held together almost entirely by his own weight, through a system of steel wires and flat washers which keep him suspended and in balance. the reason for this delicate design was to make him more relevant (along with the Calder tie-in) to the themes of the Mechanisms course, such as minimum constraint design. the upshot of this design is that all his moving parts are free to vibrate in many different ways, producing many different resonant modes, and that he is very sensitive to his environment.

lou consists of a single steel wire "spine" from which seven basswood strips extend. the top six strips bear very little weight, with only two wire "whiskers" sticking out from either side. these whiskers act as possible pathways for mechanical vibration. slight changes in the environment cause different mechanical connections to form between parts of the sculpture. additionally, these whiskers restrict the motion of the individual basswood strips in an organic looking way; each strip affects every strip below it.

the seventh (bottom) tier supports the driver circuit and two piezo discs. these are contained in a box without a top which hangs in balance from the bottom strip. the only motion that is restricted is downward; the box is free to spin and wobble, supported only by its weight. piezo discs are small devices which translate mechanical vibration into electricity and vice-versa. the top piezo disc acts as a 'contact microphone', listening for mechanical vibrations in the structure, while the bottom piezo disc acts as a 'transducer', causing the structure to vibrate. the top piezo is held in place by the weight of its own driver circuit, pinned between the washer and the seventh strip. additionally, four steel wires with weights on their ends hang down from this joint, also held in place by the weight of the driver circuit box. the bottom piezo hangs freely from the box by its own wires and from it extend two steel wires which are free to make connections with the weighted mic wires.

i promise it makes more sense when you see it.

printemps0

check out my soundclick page for a really sweet recording i just made.  i made the spring out of some extra wire, and made an ad hoc set of components to play it like a dub reverb thing.  along the way there's some nice kitchen noises, a train and a fire alarm.  all recorded live, btw.

feedback mobile

so, after many different tries and much sweat and electrocution i have constructed this monstrosity of a mobile. somewhere in the greater philadelphia area, calder's corpse is shrugging its shoulders and claiming non-involvement. this has been a lesson in fabrication. the theory behind it is cool too, but the majority of the heartache came from trying various ideas out and having each one end in abject failure. i have learned the following:

1.) do not expect wood that you cut against the grain to be nearly as strong, or look nearly as good as a piece you cut with the grain. this is especially true for small strips of basswood.

2.) always have a duplicate, breadboarded copy of the circuit you are trying to solder onto perfboard. unless you like confusion.

3.) it never looks the way it does in your brain. ever.

4.) touching a bare, live piezo disc across the crystal side will electrocute you. if it is buzzing softly and running straight off of a dc circuit this is fine, but if it is heavily amplified with several gain stages and uses an inverted power transformer to do so, this is exceptionally painful.

5.) it never sounds the way it does in your brain. ever.

6.) you won't leave enough room for the wires.

7.) the fewer constraints you put on something, the fewer chances you have to screw up.

8.) steel wire will break if it gets too gnarly.

9.) hot glue is hot. so is molten solder.

10.) if your piezo disc is not making sound and has just shocked you (see no. 4), NEVER put your ear up to it just to make sure it is malfunctioning.

i assure you, i'm ok. nothing was plugged into the wall. nor will it ever be, at the rate i'm going. i would much rather pay for batteries than go out smelling like a poorly cooked hamburger.

it's a mobile, consisting of 7 rectangular basswood strips centered around a single 30 gauge steel wire. at the bottom hangs an open box with the custom electronics inside. off of each of the wood strips, minus the lowest one, hang two steel wire "whiskers." the electronics consist of a home-made distortion circuit and a driver circuit for two piezo discs, one which acts as the microphone, the other as a speaker. these discs also have whiskers soldered onto them; three whiskers each. the microphone's whiskers are the longest, and hang down with the aid of washers affixed to the ends. the speaker disc is affixed to the bottom of the components box, and amplified to the point where the entire sculpture vibrates and acts as the source of sound. the microphone is affixed to the central wire that holds the structure up. as a result it is incredibly sensitive to the microscopic movements of the system. as you might imagine, feedback ensues through the system, creating patterns of tones and creaks even while the sculpture might appear to be completely still. this concept was adapted from an earlier idea to use photocells and analogue pulse oscillators to sonify the balancing system. the photocells were scrapped because i became unsure if light was the best attribute for the system to respond to. i may eventually add some kind of motor based agitator to the system if there is time before thursday.

audible pictures

so the other day in class, Jeff showed us how to use the max/msp picture object to get image data and translate that into noise. he challenged the non-maxers to do the same in our respective environments. mike used osc packets to read the image in realtime, while i went the NRT route for some added flexibility and synthesis power. we both used proce55ing to load the image file, but while mike's communicated via osc into his chucK patch, mine simply spits out a .txt file with a 'header' of the image dimensions, and a 'body' of rgb values for all the pixels in the image. the 'header' and 'body' were line-separated, while their values were space-separated. in sc3, i was able to come up with a simple i/o paradigm for this and a parser to generate arrays of horizontal line data, which i then used for various purposes. in these examples i am using a pattern to cycle through the array of amplitude information determined by the brightness of each pixel in a line. this produces a series of spectral snapshots along a 'sound pixel' grid. for the first one, i just used 12tet, while for the second i used a just scale with 11 t/8va . eventually i'd like to write a patch that will look at hue and map that onto phase, like a lot of spectrographs do, but that hasn't quite happened yet for a number of reasons. these reasons are also related to why i haven't blogged in a while, and now that i'm somewhat being forced to take a break (i'm at work) i'll be posting these little misadventures presently.
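
a bare-bones version of that i/o paradigm fits in a few lines; the file name and the exact format below are just my reconstruction of the description above, not the actual patch:

(
// first line: "width height"; second line: r g b values for every pixel in row order
var text = File.use("image.txt".standardizePath, "r", { |f| f.readAllString });
var lines = text.split(Char.nl);
var header = lines[0].split($ ).collect(_.asInteger);
var width = header[0];
var pixels = lines[1].split($ ).collect(_.asInteger).clump(3);  // rgb triples
var rows = pixels.clump(width);                                 // horizontal scan lines
// brightness of each pixel in one line, scaled to 0..1, ready to drive a pattern's amplitudes
var amps = rows[0].collect { |rgb| rgb.sum / (3 * 255) };
amps.postln;
)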

picture

this is the picture i used for those first two. black = silence. i used it because it was small, but the picture also worked really well because of its repetitive, geometric shape and also its high contrast.  i actually inverted it so there would be more space.

new compositions online

I have uploaded several new compositions onto my server. They include the following:

livecoding practices in superCollider:

meditation on beat frequencies (somewhat slow at first):

livecoding 0

meditation on additive synthesis (particularly primes):

livecoding 1

meditation on just intonation (and tuning systems that reflect the timbre of the instruments playing them):

livecoding 2

meditation on modulation (and a healthy dose of feedback):

livecoding 3

meditation on chaotic maps (specifically the "gingerbread man" 2D map):

livecoding 4

scrambled data and header trickery on a sample of Wagner's infamous "Tristan Chord":

data scramble 0

data scramble 1

data scramble 2

data scramble 3

Data transformations include header changes, mp3 codec bending, and multi-resolution substitution.

I used soundhack for header changing, iTunes (blech) for the mp3 codec, and textEdit for the substitution.

I will make a habit of continuing to post / record my livecoding improvisations because I think it's good practice. Also that keeps me honest in terms of not screwing up and recompiling the class library mid-sesh. Also it's just good to hear yourself when you practice so you find out what you're good at and what needs work. Personally I'm just getting started with this but I think my main problem is that it's hard to make the first few minutes interesting if I have to start with a blank document. I think I'll allow myself to reference other code, as long as it's copied into that new document and altered to fit into the context of the sesh. I also really like having the improvisations be structured around an idea, formal or otherwise, because the limitation gives me something to try to work around. I hope to become adept enough with my analogue components to build electro-acoustic instruments on the fly as well.

A dully intelligent algorithm

Still pretty soft in the attic, this new load-balancing algorithm distributes the cpu load amongst n servers by selecting the server with the lowest average CPU for that cycle and sending it the next synthesis job. It makes sound! It sounds pretty good! Only occasionally does it hiccup. This could be improved by letting the client keep track of some data about the synthesis job (ie how long it will last) and selecting the appropriate server for the job. I don't know. Maybe that's not going to help at all. Perhaps some kind of global note-stealing scheme would work better. Also a paradigm for recording and summing this final data would be nice. I'm currently listening to the fake cluster render a track I recorded at 1/16th the speed in terms of 16-cycle sinusoid wavelets.  8-cycle wavelets also sound pretty good, and reduce some strain on the synthesis nodes.  My theory is that the ring-mod-like distortions I was hearing in earlier experiments are the result of the synchronous triggers exceeding the server's control rate.  The absence of this distortion in a multi-server context suggests that a little diffusion (on the level of a few samples, perhaps) was all that was needed to clean the signal up.  While the smaller particles reduce strain on the servers, there still are a few glitches, especially in broad or rapidly changing spectra.  Maybe non-realtime is the only way to avoid the glitches? That would be incredibly lame. There must be a way to secure the fidelity of the resynthesis...
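
A stripped-down sketch of the selection step, assuming ~servers holds the booted Server objects and a \wavelet SynthDef lives on each of them (both names are mine, purely for illustration):

(
// pick whichever server is currently reporting the lowest average CPU
~pickServer = {
    ~servers.minItem { |s| s.avgCPU }
};
// send the next synthesis job to that server
~playWavelet = { |freq = 440, amp = 0.1|
    Synth(\wavelet, [\freq, freq, \amp, amp], target: ~pickServer.value);
};
)

(avgCPU is just whatever the server last reported in its status replies, so it's a cycle-by-cycle estimate, not gospel.)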

Keyboard Matrix

A few weeks ago I found the keyboard and mouse button component of a dell laptop in the trash. The things people throw out these days! I immediately brought it home and started taking it apart. When I peeled off each plastic button, and removed the housing they rest in, I found a plastic sheet folded onto itself with many busses and nodes. When a key is pressed, the two sides of the circuit join at two nodes and connect one bus to another. These intersections are sort of like a Cartesian space but with really convoluted rows and columns that twist around and do all kinds of nasty things. Appearances aside, we are talking about a two-dimensional matrix. Perfect for mixing a bunch of inputs to a bunch of outputs, for example, or for, say, typing on a keyboard.

In this floppy, thin, somewhat translucent form, the mixing matrix is ideal for sewing into clothing or mounting onto a sculpture. In addition, it's ideal for making feedback loops between many components, kind of like how I was using a commercial mixer with the Braxton ensemble so many years ago. The possibilities all seem to point toward an expressive, unusual instrument of some kind.

The only issue is, since the rows and columns are actually this really twisted knot, how do I know what will connect to what?? Well, the only way to figure it out is the old trial-and-error approach. Or I guess I could look it up... Nah this is more fun! My secret agenda for that standalone oscillator from the previous post was to use it as a test tone for this matrix. I was getting tired of using the guitar amplifier and my computer. This is safer and more reliable. No grounding fault from a 9-volt battery has ever killed anyone, at least to my knowledge. And lord knows you can't trust the power grid in these parts...

keyboard

When I find a match, I mark the node with its row and column.  Those three in the picture actually are the mouse buttons, and share their own column.  The keys are more complicated and require a letter in addition to a number to be identified.  Right now I have found all the a's.  The keyboard keys make up a 9 X 15 matrix, but not all points in the space are taken.  For example, only A1, A2, A3, A4, A5, A7, A10, and A11 exist.  And they are in no particular order on the board...

keyboard0

Practice

I'm currently trying to get my improvisation chops down in the context of both live coding and live circuit building. To this end, I have been practicing every day for at least 45 minutes using superCollider to perform with code as it is written. I have been studying the JIT (just-in-time) library of superCollider and hope to eventually master this dark art. Today I expanded into the analogue world: while I was practicing with sc3, I decided to put together a handheld pulse oscillator with a preamp so it could be operated from a single 9-volt battery and play through a speaker I found in the trash somewhere. I love standalone stuff like this. Of course, anytime I do something like this successfully I am not documenting it, and anytime I'm documenting it I'm either failing or I'm trying to reproduce something that happened spontaneously. Ah well. I'll eventually get better so I can produce pearls on demand. Anyway, here's a video of me presenting today's practice session while staring creepily at the webcam.

Feedbacker

This is loud.

You have been warned.

It's also somewhat large which is silly because the video footage isn't all that interesting.  But I think that adds to the humor factor.

It's a recording of feedback that happens when you have the wrong settings for audio I/O in iMovie, being altered by the angle of the computer screen, and also by a homemade pulse oscillator I whipped up.  I will be posting another improvisation with that thing shortly.

I thought it was cool that I could catch harmonics of the delay simply by changing the frequency and location of the oscillator's speaker.  I also like how the instrument is self-contained, requiring only a 9-volt to run, fitting in the palm of one's hand.  You'll be able to better see it in a later post, promise.

A Free-Body Diagram of a Trendy Ikea Desk Lamp

Lamp

It speaks for itself, really. Stylish, understated... Swedish.

diagram

I analyzed this thing as if there were no friction in the central gears and the whole thing balanced itself perfectly, which is obviously not true. I was hoping this would be more complex, but it actually involved no trig at all, sadly.

hacked cd player

i was supposed to be cleaning, but as soon as i found this cd player, i instinctually fired up the soldering iron and started taking it apart.  like most traditions of torture, i tried to keep the device "alive" for as long as possible, testing it with some awful funk cd i found in the player.  once that got boring, i started short-circuiting out components and separating them from the main board.  the laser was the only thing i botched, and i suspect it happened when i desoldered it wrong.  ah well.  the lcd display and two motors were consolation enough.  i'll just have to be more careful next time.

photo-50.jpg

all the components in a row...

photo-51.jpg

Headhunter!!!

the display definitely still works, and i'm considering making a sculpture with it.  not too solid on how it's going to work yet, but that will definitely be fun.

also i finally properly gutted my BBE case, and the top piece fits very nicely on a slant, allowing me access to whatever circuits i put inside.  also there are tons of perfectly sized holes for knobs and jacks and even a pretty little "on/off" switch!

photo-52.jpg

it's like a convertible!

photo-53.jpg

now all it needs is a new paint job...

jardin de senderos que se bifurcan

mediate !=== immediate

media !=== immedia

media != qualia

media extend qualia

<present, ineffable moment> = immediate qualities

media are an extension of that which is lost when the split between 'now' and 'not now' does not appear. this is not because of some esoteric math, it's because media (including conscious and subconscious languages, formal languages, and perhaps even anything this side of cognition) imply their negation. And from there it all explodes. Suddenly-- but not immediately-- we have them*n and them / n . The set makes room for the incredibly intricate, somewhat gooey (depending on your affect at the time) garden of bifurcating paths. Oops. also, this is not news. This has been happening and continues to happen, and regardless of the staggeringly large number of responses we (and everything else, including systems of this) somehow come up with, stuff in our particular band of nodes (or single node, depending on your affect at the time) to which we belong can see 'up' the tree. we see out and down (sometimes). many things see only down, and some seem to see not at all, and indeed some are invisible to us entirely as well. perhaps a substitution for 'see' could be 'to recompile the lexicon and the grammar and press "play"' (to 'axiomatize' in fewer words). i suspect this is why my friend Andy kept saying "no, it's prickles" to Jenny's constant "goo" onslaughts. until it would disappear from conversation. and what more is there to do but laugh at it and move on? or perhaps drift away into sleep. these are, of course, the polarity of the "robustness?" of the system we represent at a particular time, which is a collection of processes, similar in the sense they may inherit some things from us (or us from them, depending on the polarity of the "robustness?" of the system we represent at a particular time). in addition to this we have that which is mediate (d) .

every good language has an exit strategy.

actually, they all obviously have many. a staggeringly large number of them. but the best ones are able both to be really relevant and to refresh very quickly. NLP people will call this "reframing", I think, although I wasn't paying very close attention when it was explained to me and actually know very little about NLP at all. but i like the word because i think it's kind of poetic.

as human beings, much of what we currently call our "experience" is semiotic in nature. another way to say this is that when we construct our experience, beliefs tend to have the most effect on us. especially our own. however, a system is by its own definition self-referential. it always comes back to the root. we can't see up. well, to be somewhat pithy, we do; but from there everything starts to branch out again.

i have run into some excellent and terrifying times with my project. i am currently mortified that we are making something that is far more complex than it needs to be. i suppose the idea is that it must be very robust, so that it can accommodate a wider spread of applications. at the same time, the idea that an environment like supercollider can already make this happen pretty much all on its own is a somewhat frightening design "achilles heel". however, to generalize this process to the point where there is the possibility of running this off of a live-cd and have anyone using any OSC-aware environment (or actually anything at all) make digital (or haptic, etc) art with it is what fuels my interest at this point. mad of it.

i am interested in building a simple osc-powered neural network with the capacity of real-time performativity. emphasis on "simple". it will probably be the dumbest neural net in existence, but it will be way sexier than a lot of them.

one last thing about that "art" thing. our beliefs about "art" are entirely semiotic just like the rest of us. also, just like everything else, "art" or "aesthetics" is entirely self-referential. so it's somewhat dubious when practitioners of some field, like "neuroscience", "NLP", "anthropology", etc, offer smug "answers" to the "questions" of some other field (ie explain it away). there is a gap implied by the act of "speaking" (perhaps even in the sense that "my soul speaks to me") that we would have to supersede in order to resolve. since that isn't possible, we can't even say whether something out there already has "answered" the question of us. or what that means at all. These frameworks for understanding are best taken as dialogues, themselves being in dialogue with each other.

This place is intense, man.

I'm referring, of course, to Earth.  This stuff is wild.  Right now my life has been taken over by this thing, not to mention the four classes I'm apparently enrolled in.  I have to tell you, building this supercomputer is a lot like having a thesis nobody gives you credit for.  It is perhaps among the most important experiments of my life, and reaches into every aspect of my creative drive.  I have been developing paradigms for massively parallel sound applications involving microsounds.  This endeavor has been ongoing since I first heard about granular synthesis.  But now, I have developed a few systems I like to use in composition, and these systems are looking like prime candidates for use in the Arboretum.  Since it's so easy for me to max out my laptop's scsynth with these patches, distributed techniques for managing the data explosion are a no-brainer, really.  The method I'm really enjoying at the moment is to compose using a trainlet sequencer I built, and process those results with banded wavelet analysis using the newly honed constant-q transform.  This way I can both build structural frameworks (clouds?  waterfalls?  petri dishes?) and listen 'into' them by manipulating their bandwidth, region, or sensitivity.  Tone is the first composition I successfully recorded using this technique.  To the result of these analysis passes, I can assign any note event I want.  I am fond of mixing concrete sampled wavetables with synthetic ones, because while the synthetic ones can be quite good for accuracy, there are some visceral effects I can achieve with concrete material I would never have been able to predict.  A lot of times these phenomena sound like unintended distortion, either something wrong with the speakers or your ears.  Sometimes, these effects can be extremely bright, with enough spectral energy to really cause some damage if you're not careful.  Tamarack is a good example of this kind of thing.

So how do I plan to use this family of patches if they only work on a cluster like the one I'm building with Mike at ITP?  I got my own personal cluster!  Well, I got the computers for it at least.  It will be a four-node (+my macbook as head node) cluster using a similar open-source framework powered by OSC.  I'll be gigging on it once we get it up and running.   Mike's building one too, so we plan to book duos and have cluster battles very soon.

fallout from the blast

I am in PA right now, resting and eating. My face has been half paralyzed by some virus or something. I just found out it's definitely not lyme disease. I have been taking an insane regimen of steroids, antibiotics, antivirals, and vitamin B for the past few weeks. (Not to worry: it will subside eventually, at least that's what they tell me...) Weird things happen when you can't move your face. My left eye is unable to lubricate itself and I am in constant danger of corneal damage. I ate my first sandwich today. It was nearly impossible to avoid chomping my lip. At least I can taste with that side of my tongue again. That was rough.

Of course, I have also been composing a ton. It seems my breakthrough was more than just a manic delusion and now poses some very real threats to my compositional process. As a matter of fact, the paradigm shift I'm experiencing right now has really liberated me in the scariest and most relieving of ways.

Ok. So what the fuck is it? I'm still trying to come up with a way to explain it that preserves why it's important while at the same time demystifies it. I guess it's a two-parter. Part one: I have a much better understanding of how SuperCollider takes linguistic instructions and turns them into sounds. No, I cannot read machine code. But before compiling to that level, SuperCollider translates everything into OSC messages, which are these mysterious little things that were supposed to replace MIDI back in the day. OSC is a network protocol. It can be used to communicate between different applications on a computer or even between different computers (we'll get back to the ramifications of THAT in a second). The thing that makes SuperCollider cool, aside from the beauty of the language itself, and the myriad other things that keep that lovin' feeling going inside my pants, is that there is a Client application and a Server application, hidden and conjoined to look like it's one program called "SuperCollider". So, when we run a line of code in SuperCollider, that code is translated into an OSC message and sent from the Client to the Server. This happens every time there is a note. One cool stepping stone I found along the way to the still-mysterious breakthrough was the power and efficiency of using OSC messages directly to trigger individual particles of sound. This allows me a further level of control for the generic parameters that make up these particles, which get sprayed all over the place and make an incredible mess. So by doing this we gain a level of abstraction, because we don't have to know how the whole system will behave at all, we just have to change how each little quark behaves relative to the other.
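
To make that concrete, one of those raw messages looks roughly like this (the \grain SynthDef and its arguments are stand-ins of my own, not a fixed vocabulary):

// one grain per message: /s_new takes the def name, a node ID (-1 lets the server choose),
// an add action (0 = add to head), a target group (1 = default group), then control pairs
s.sendMsg("/s_new", "grain", -1, 0, 1, "freq", 880, "amp", 0.1);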

Part two of this little escapade is that I have found the incredibly simple way to pass a message from the server back to the client again. Let me elucidate. So generally speaking, when we set up some kind of "system" that makes sound in SuperCollider, we have to get messages into the server that tell it to make sounds at particular times. If this communication can only go one way, we are left with a very specific family of options, of which any type of elaborate sequencer, synthesizer, or effect unit could result. What happens if we want, as the result of some kind of analysis of the sound or internal state of the Server application itself, an event to happen that changes the architecture of the Server? You can see how immediately things begin to shift. Before, using text as opposed to graphics to control sound just made things look nicer (esp if you're using a filterbank of 400 filters, for example). Now we have moved into an MO where it is compulsory to use a computer language as opposed to a user interface. (of course, you can design your own interface when you're done.)
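
The round trip is surprisingly small once you see it; here is a minimal sketch, with the trigger condition and the names being mine rather than anything canonical:

(
// server side: when the tracked amplitude crosses a threshold, send a /tr message
// back to the language with the current amplitude as its value
SynthDef(\listener, { |in = 0, thresh = 0.2|
    var amp = Amplitude.kr(In.ar(in, 1));
    SendTrig.kr(amp > thresh, 0, amp);
}).add;

// client side: respond to those /tr messages by deciding what happens next --
// launching a pattern, rebuilding part of the graph, whatever
OSCdef(\onTrig, { |msg|
    var amp = msg[3];
    ("the server says: " ++ amp).postln;
}, '/tr', s.addr);
)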

So this is how that Iannis thing was made possible. Iannis' whole structure jiggles with the sounds of events that are the result of several levels of de-abstraction. So proce55ing sends SC a message like "ive just turned left and moved forward once". And SC responds with "launch this pattern!". Then SC responds to itself again (AHA!!) with "play these grains!". Then SC responds again with "I just played these grains, Proce55ing!". And proce55ing responds by jiggling.

Ok ok but this has affected SO... SOO much more about the way I compose than that. The above was a study in message passing. Wonderful. But what does this mean when I'm trying to *gulp*... write a song?

Well, for starters, I now understand what Ron Kuivila was always shouting at me: "Get INSIDE the note event!" I was always like "umm... maybe you should try tea instead of crystal meth?" But he, of course, was right. Much of the work that I've done in SuperCollider, while it sounds interesting, only deals with the surface of the sound. To get inside the decision-making process that leads to the sound is to free yourself from the particulars of it. In this way, I have moved past the signal processing approach to sound (having learned some valuable lessons) and into a systems approach.

So what is this looking like right now? I am currently composing with these systems just as feverishly as I had been discovering them. I am averaging about one composition per night. This is how it usually goes for me. Composerria. The way it has worked so far is to set up some underlying sequence of material that is either played explicitly or remains hidden, which serves as the stimulus for several layers of machine listening patches. These patches generally consist of a bank of hundreds of constant-q bandpass filters that listen for activity at each spectral band. When activity is measured, it can be at a low, medium or high dynamic state. Each state triggers OSC messages which consist of the dynamic state and the frequency. These are parsed and can be used to do... anything. At this point I've been using them to trigger individual wavelets. A wavelet in this context is just a grain whose length is inversely proportional to its frequency. It can look like anything, depending on the task. The best resynthesis wavelets have been synthetically derived from a sinewave (one derived from a filter, not a wavetable) and a von Hann envelope. The larger the wavelet (in cycles), the more distortion is introduced to the lower register, at least with this method. A way to get around this is to experiment with different window shapes, each with its own fortes and foibles. Exponentially decaying envelopes are great for convolution/reverb purposes. To some resynthesis wavelets I add a banded noise component, to round out the uncertainty. Of course, once I was pretty satisfied with resynthesis, I got bored of it and moved on.
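
One band of that listening bank, boiled down to its skeleton (the names, report rate and thresholds here are placeholders; the real patches duplicate this hundreds of times and swap wavelet shapes per dynamic state):

(
// one constant-q band: track its amplitude and report [freq, amp] back to sclang
SynthDef(\band, { |in = 0, freq = 440, rq = 0.02, rate = 30|
    var sig = BPF.ar(In.ar(in, 1), freq, rq);
    SendReply.kr(Impulse.kr(rate), '/band', [freq, Amplitude.kr(sig)]);
}).add;

// language side: fold the amplitude into three dynamic states and fire a wavelet per report
OSCdef(\bandResponder, { |msg|
    var freq = msg[3], amp = msg[4];
    var state = amp.explin(0.001, 0.5, 0, 2).round;  // 0 = low, 1 = mid, 2 = high
    // Synth(\wavelet, [\freq, freq, \amp, amp, \state, state]);  // hypothetical wavelet def
}, '/band');
)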

When a bandpass filter is placed before the analysis, the composer (or whatever controls this pre-filter) is permitted to explore inside the spectrum of the sound, and isolate any region thereof. Cage is famous for addressing time-domain filters (such as the particular 2nd-order butterworth constant-q bandpass I happen to use) by saying that they affect pitch, timbre and volume all at once, because in actuality these things do not exist independently of one another. One can palpably experience what this means when playing with a pre-filtered analysis patch! I find that once two or three layers of this process occur, the ear becomes inclined to allow the microsounds to coalesce. I have had the experience more than once of listening to a first pass and being grossed out, only to find the lacking component to be simply one more layer to fill out some other part of the spectrum. I have experimented with both synthetic and concrete wavelet shapes. The effects that these processes have can range from replicating bizarre speaker malfunctions, to playing a sound underwater, to impossible reverberations, to harmolodic aggregations. Spectral counterpoint! The hidden harmonic progression that resides within all sound! Vertical listening!

Things I am going to look at:

1) using analysis to drive patterns of notes

2) using analysis to recognize patterns of notes

3) using analysis to drive animations

4) self-directing pre-filter systems?

5) matching pursuit?

With that, I leave you, dear reader. For listening purposes, I direct you to my soundclick page (in the "About" section of this blog) where I have and will continue to post updates in mp3 form.

iannis video

so i've taken a break from sleeping / drinking porter / eating cheese to put this video out there. a video of tubby in full operation will be coming soon as well.

iannis

so sasu was really cute and organic looking.

that's great and all, but iannis is not. iannis is mean and smells like chemicals.

iannis, like sasu, uses a visually rendered Lindenmayer system to generate quasi-granular note events. GESUNDHEIT!

in english: iannis is a self-similar, 3 dimensional structure that grows according to rules first applied in biology, but also found in linguistics and mathematics. the rules are applied sequentially, like a cookbook, but much simpler, more like driving directions. "Take I-90 for 45.6 miles and get off on exit 38-b toward Hellertown." That is an event. Actually, it's two. You have the event "Take I-90 for 45.6 mi" and the event "exit 38-b twd Hellertown". (note: Hellertown as I understand is not anywhere particularly close to I-90. I have no idea what exit it would be, either, I just made that up.) Which Hellertown, you might ask? Exactly.

so right there we have two levels of orientation. i like to think of them as degrees of magnification. we have the single event and the whole structure, which is made of many events. visually speaking, since this structure grows (and slowly rotates :-0), we see it predominantly in terms of its entire timescale, or about 5 minutes as i recall. In the beginning we have nothing and at the end we have a very large, complex thing that resembles a wire sculpture. actually it disappears at the end I think. perhaps I should work on that. Curtis Roads, in his beautiful introduction to Microsound, calls this time scale or magnification "Meso Structure". It encompasses the entire composition from start to finish. thus, there is a persistent element-- "How I get to Hellertown"-- and a transient element, "turn left to get on the ramp (DON'T DRIVE LIKE THAT YOU'LL GET US KILLED!!) ".

the events, however, happen much faster. also there are many more of them. yes, this time i did not write a piece of music with just one note. that was an earlier version, when it would crash immediately and i smelled harsh synthetic vapor coming out of my macBook. jk.

side note: Roads is careful to let us know that one year ≈ 0.000000031 Hz.

i would classify the events in this case as residing comfortably within the "Micro" time scale, right between taking long enough to be able to distinguish each one like a musical 'note' or a 'sound object', and happening rapidly enough that they blend and become a texture, either noisy or pitched, or some wonderful combination of the two.

between these two poles i have set up (which indeed are just two landmarks on a continuum), there resides the world of the phrase and the world of the sound object. the beauty of iannis is perhaps most easily found in the structure of message-passing that makes up his syntactic skeleton, precisely because it begins with a 'sentence' or string of instructions and moves through the time scales until it reaches the micro-event, at which point it comes full circle.

in the beginning, the entire instruction set that will form the song is figured out ahead of time. then, the instructions are followed one by one in a manner similar to sasu and my previous experiments with the L-Systems class in Proce55ing. Actually without further blathering let me take this moment to thank Patrick Dwyer for making such an elegant and easily extensible class. as this part of iannis' visual component follows these instructions, it passes messages to another program, superCollider, which handles the sound.

if this were sasu, superCollider would make some notes that correspond to the event data. iannis is way too badass for that. so, iannis instead takes these events and, after classifying them, selectively determines which ones are phrase events and which ones describe the phrase event that is about to occur. it's as if "going forward" is the phrase event, and turning one way or the other describes what "going forward" means or how the phrase event will sound. iannis then generates patterns of messages with counters and ID numbers to keep track of them so I don't have to.

after this, the message is passed back to superCollider (from superCollider) for further mischief. each phrase event triggers a pattern of micro events that perhaps we can think of as notes, although they too are very densely layered and coalesced. which is part of the fun of doing all this anyway, right? i mean the better the fidelity of the record, the cooler it will sound after you pop it in the microwave for a minute or two. there's more to subvert.

these note events, as i said, trigger individual micro events-- at this point small copies of the same sample which is much less than a second long. it's a kick drum i made a while ago to sound like some of Pan Sonic's work. (perhaps the next one will be called mika or ilpo.)

ok so one last trick happens when superCollider actually gets to make the micro kick events, which i'm particularly happy about. the synth has embedded in it a message to be passed back to proce55ing, which jiggles the entire visual structure as though the sound is being played through it.

so iannis as an audiovisual entity displays sensitivity to each level of magnification intrinsic to the rules that make him up. ok enough i have to get back to him and make sure he didn't explode; you know how those Greeks are... ;-0

ps here's a picture of the kick drum sample's waveform. the whole waveform is one second. only the initial transient is used, however.

picture-3.png

since the duration of the microsound (kicklet?) is inversely proportional to its pitch, and there are only a few discrete pitches in the constellation, i can tell you that at most it lasts for 0.1454545454545... seconds, and at least it lasts for 0.018181818181818... seconds. the micro-pitches that make up a note event are, relative to one: 0.25, 0.25, 0.5, 0.75, 1.25, 2. these are the fibonacci numbers 1, 1, 2, 3, 5, 8, scaled by 0.25. also i fixed the end state so the picture stays. it was easy.
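
for the skeptical, those durations above check out if the transient lasts roughly 0.0364 seconds at its original playback rate (my estimate working backwards from the numbers, not a measurement), since duration scales inversely with the rate:

// duration of the kicklet at each relative pitch, longest to shortest
[0.25, 0.25, 0.5, 0.75, 1.25, 2].collect { |rate| 0.03636 / rate }.postln;
// -> roughly [ 0.1454, 0.1454, 0.0727, 0.0485, 0.0291, 0.0182 ]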

pps- no WAY am I posting a screenshot of this guy.  not until after tomorrow...

mcluhan

Medium = Message. This is a 'type' identity. The message or content of a particular piece of media is always another medium. A corollary to this could be that we are always extending our extensions, since McLuhan also defines "media" to be an extension of ourselves.

Language = Medium. Also a 'type' identity. This statement I see as an elaboration on McLuhan's discussion of light. Language, grooming, and fire I believe are some very early human media. I think if we were to really get down to it, I would say that we ourselves are extensions or physio-linguistic legacies and prosthetes of each other.

Hot media: a chimpanzee grooms her daughter.

Cold media: her daughter grunts, coos and swats back at her mother.

I'm pretty sure McLuhan mentions the fact that media cools and heats up depending on the generation or individual that re-interprets it. The temperature changes because strategies for appropriate interaction with the world change. Actually both of those statements are different ways of saying the same thing.

Ultimately I am of the mind that this process of prosthetics and amputation of our extensions is the essence of human drama. Also, I would say that the reincarnation/samsara story is a metaphor or explanation (redundant?) for this process. (of course it's redundant; you've done this a squillion times already!)

I would also go on to say that these extensions of ourselves are exactly the antithesis of material pleasure, because if you are extending something you're not experiencing it. So:

Medium != Present

Reality, as we tend to think about it, being a phenomenon (albeit an immersive one) described or defined by its emergent properties and its overarching stories/metaphors/"Laws" is just a concept. Actually it's a very abstract, sophisticated concept. Composite reality in terms of "what-is-happening" is in truth immediate. (note the etymological presence of the very word "MEDIA" in that one!) Another way to put it is that it is ineffable, unable to be uttered. Unable to be extended.

However, human culture is incredibly good at extending itself. To quote Mr. Watts "we are menu eaters, we eat the menu instead of the food".

automaton parade

here are some current re-workings of that infamous undead cat.

cat3

cat2

cat1 <----- CommLab Final

(

i have posted this under phys-comp not in error, but because i'm going to interface this application with tubby and my performance on thursday. i think it would be only fair to start by tuning/playing tubby and then moving on from there, so that it's apparent who does what.

)

breakthru

i am writing this post to set down that a breakthrough has occurred. i have written a patch that achieves a level of machine listening and autonomy of note event selection hitherto unattained in any of my previous work with computer music. this is the culmination of 2 years of struggling with conceptual limitations and hacks.

allow me to explain (or try minimally and fail). sorta like how tubby now knows how to speak, i have given similar autonomy to shrod. a large part of it is his improved filterbanks. now he's got 539 analysis bands, localized in time and frequency. he also has 3 stages of amplitude resolution, and the microsounds he triggers can be scaled accordingly or be completely different depending on these stages. I guess that's what I mean in terms of autonomy. The data coming in to this puppy gets categorized and analyzed in ways that have come out of the study of psychoacoustics. for example his analysis bands are logarithmically spaced (at least for now) at 50 bands/octave. what shrod does with these osc messages is entirely up to me as well, so i can do realtime transforms which range from attempts at reconstructing the data (a venture whose success hinges upon what the material is and how it relates to the reconstructing 'vocabulary') to pattern detection and counterpattern deployment (realtime software automata that can realize any number of strategies such as tone clusters, counterpoint, rhythmic motifs, etc). this thing has a view from within the sound and is able to not just change the stimulus but produce strategic responses. Eventually, he will use a bank not of constant-q filters, but of cq driven phase vocoders arranged to do multiresolution analysis across multiple time/frequency scales. although at that point each octave will be divided linearly, over each octave the stimulus will be analysed with an appropriate window and provide identically scaled time/frequency resolution. The phase vocoders will similarly not be driving cosine summation oscillators but triggering note events, so the strategy will be similarly variable. This will also give us data about the phase structure of the stimulus, making wavelet and matching pursuit operations extremely straightforward. at this moment i have attempted a few tricks with him, including reproducing him with a live updating wavelet shape derived from an envelope function and the current stimulus, transposed across the spectrum. i have also tested out assigning different wavelet shapes to each of the 3 levels of dynamic resolution. also i have used a few synthetic wavelet shapes such as banded whitenoise with a formant (always a favorite) and straight sinusoid tone pips with a von hann envelope.

I hope to use this patch as an automaton to trio with tubby and me, since tubby i see as his hardware hacker, physical computing, evil jamaican twin.

I am also experimenting with the idea of remixing (or something) this album I recorded in 2004 on a beautiful, impeccably mic'ed concert grand. It was called "Animals Should Wear Clothes", and it is largely solo piano with some hand percussion and bowing the strings, etc. I have already produced a few starting points, using shrod. there's actually a piece on the album called "Schroedinger's Cat" which I have started working with. It was unconscious I swear! One of these remixes was my CommLab final because I was so excited about this thing for the past few waking cycles. I'll post links to the music when I upload it in some fashion or another.

in other news, i've lost my mind.

More Booty

these are from alan's casio keyboard we killed while bending. if you like your gear, don't bend drunk. that's all i'm gonna say about THAT one.

a speaker! oh how lovely. this entire keyboard cost about a dollar. plus i stole it. how much do people usually pay for small speakers like this???

photo-46.jpg

yet another kind of amp IC! apparently it's designed for cassette players. OR HACKED METROCARD READERS?!?!?!

photo-47.jpg

also every time i look at a clock it's freakin 4:33!  help!

tubby vs schrod: round I

It's the dub spring champion versus the great wavelet hope.

Ok that wasn't very funny.

There's a new piezo oscillator on tubby, so one spring is doubled up. It's a crazy loud piezo that has three leads instead of two. Weird. It's also super fat.

Also in the video note my new workspace. I sprawl out on the floor to do my electronics stuff, so this way I can have my computer and soldering iron elevated and at a safe distance from one another. Also the giant magnet in the back of my amp is a comfortable distance from my computer now.

Schrod is a thing I'm working on that decomposes incoming sound into note events via a 165 node constant-q filterbank. The current note events are microsounds derived from banded noise and 16 cycles of a sinewave, tuned to the same frequency, with an expodec envelope. I might use some of the audio from this bout for reverb/feedback convolution purposes later. His name is Schrod because of the cat.
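
A sketch of what one of those microsounds might look like as a SynthDef (the name, the noise balance and the curve are placeholders of mine; the expodec shape is approximated with a steep negative curve):

(
SynthDef(\schrodlet, { |freq = 440, amp = 0.1, noise = 0.3|
    // 16 cycles of sine plus a narrow band of noise at the same frequency,
    // wrapped in an exponentially decaying (expodec-ish) envelope
    var dur = 16 / freq;
    var env = EnvGen.kr(Env.perc(0.0001, dur, 1, -8), doneAction: 2);
    var sig = SinOsc.ar(freq) + (BPF.ar(WhiteNoise.ar, freq, 0.05) * noise);
    Out.ar(0, Pan2.ar(sig * env * amp));
}).add;
)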

My initial intention was to make both units autonomous and separate. I think I will still aim for that, although that's not what just happened. What just happened? ...Oh yeah I made that video.

So in the video (note the tasteful 'merlot' colored velour jogging suit) I am sending Tubby vibrations with a three piezo oscillator. That's the circuit board with the three knobs and the disks that are all up in Tubby's dreads. The smaller circuit board whose power bus is connected to the oscillators' is the driver circuit for the fourth feedback piezo, whose input is an aux send from the mixer. This is basically the same as before but with a third oscillator.

Schrod simply analyses the sound coming from Tubby and tries to re-create it with his arsenal of microsounds. "Tries" is the operative word. He's really not very sophisticated right now (he ignores amplitude) but in terms of time and frequency domain accuracy he's better than anything I've ever coded. I'm also stoked that I am getting individual note events out of him, and that he's not a note event in and of himself. That's a conceptual thing that's taken me a really long time to arrive at. Expect to see more of Schrod, or at least some of his kin. He's the prototype for the first generation of cluster patches. Enough about that.

I also have Schrod running thru the mixer with Tubby. This gives me more freedom to control their relative levels, but additionally gave me the unexpected option of sending Schrod thru Tubby. Which I did. So in the first little bit of the video where I'm tapping the springs I'm trying to get that feedback coefficient just right.

Supercollider crashes a little before the end of the match, so I guess Tubby wins... THIS TIME. He's a tough motherfucker though; I don't see how mere software is supposed to outlast FULL POWER TUBBER-WAREZZ!!!

ps-- we were pretty loud.  you were warned.

tub luv

so i gave my tub some luvin today. i'm making him talk. the hex schmitts IC i put in did the trick. I initially had made a three oscillator piezo instrument with pots for each frequency but I got way more out of it when I put one of the piezos on a driver circuit coming out of an aux send on my mixer. now there are two pulse oscillator piezos, one for each of tubby's medium springs, and one feedback channel piezo on his largest spring. now he can talk just fine, and listen as well. I use the pots (my black metal bbe knob heads on a smoother pot) to change the frequency of the pulses, and when I find a sweetspot the feedback piezo goes nuts and tubby talks back. It is very much like a conversation and much less like an instrument or a sound effect. I think an interesting idea would be to use my newfound automaton as a model for a supercollider patch. This way, there would be three performers, where only one of them is human. Supercollider could make neat visualizations happen, too, if I chose to go there... at this juncture anyway. I have to remember this must be done in a week. Could happen tho.

I am suddenly ok with the reintroduction of a computer because I am thinking of the sculpture as a single entity, just as the computer would be, and just as schmitt and I are. It would be totally sweet if the mixer didn't have to be used, but I think it's the best way to keep noise in the system manageable. otherwise we'd have a repeat of the set at Jake's. Not that I didn't enjoy the feedback self-oscillating and not acting like a filter at all, but it would have been nice to have some say in the matter. anyway, that's where the idea came from, so whatever. that's what it's about, I guess.

It departs from Tudor and looks at Behrman. There are also several helpings of Nick Collins. I would like the computer element to perhaps comment on Roads or Xenakis. It would be nice if the visuals were some kind of A-Lifey fractal thing, but that might be pushing it for next week.

I like this process much better than how I've been going about things in the past. It all springs from trying shit out, not just arm-chairing it. It's sort of like user-testing, in a way, but I think the whole framework of design has a tendency to bleed out the poetry and anarchy. I would rather think of it as a deep listening / friendly experiencer / shut-the-f*ck-up-and-try-it kind of user-testing.

tubby and i took a shower

photo-45.jpg

here's the hot liquid glory

Narcissism and Media

the narcissist I would define as a specific subset of the modern condition wherein the individual is driven by the will to find herself to be the essential building block of matter. she expects that, hiding behind the particulars, within and among them, and among their categorization and behaviours, even the mere act of experiencing them is constructed from things within the self. the universe is made of selves just like you and me. the narcissist, to be consistent, must be able to find patterns in all phenomena that lead her back to herself. this is an incredibly narcissistic idea, hence the name. i'm pretty unhappy with it as a name, but i can't seem to find anything better at the moment. if one were to look at it linguistically, the self is an axiom: we look at any method of describing stuff and find ourselves to be the central building block. the narcissist is a twin. there is an implicit dualism in the very act of describing something, because you are using words to describe something to me. or to yourself. bah, it's no use, i keep saying the same thing!

i'm going to make tubby autonomous. i want to use a 74C14 (or perhaps several), possibly in conjunction with some kind of variable resistor for added autonomy. i want to be able to converse with tubby, or perhaps for tubby to converse with other tubbies. i think this was the initial driving force behind the music box thing, and i forgot why i liked it so much. i think some unpredictable, autonomous behaviour is essential to the idea of this guy. a music box is not as interesting if it is itunes, unless the visualizer is on and you're into staring at a screen for a really long time (ahem). a music box is cool because of all the little gears inside it. it was like a clock, the antebellum model for the universe (and thus the mind). now we have other things, like the idea that a) there is a consciousness, b) there is the negation of consciousness (the other, space, the unconscious), and c) both of these things are shaped like languages. the computer is our clock. the binary is this unconscious, almost naughty little drive or desire. to quote Robert Ashley:

"The fantasy is the distance, the reluctance, the reticence, the otherness."

of course, he was talking about masturbation. but i would say that all language is massage, and so either you're pulling bugs out of other people's fur, or you're preening yourself. you're either talking or thinking. you're either thinking or doing. that is also a narcissistic statement. so there's the binary between that which you are doing or aren't, but there's also that which you did or will do. there's also what you hope to do and what you hope not to do, what you wish you had done and what you regret. or perhaps there's also what you thought you would do or wouldn't do, and what you remember or don't remember. i think once we get to that level the bifurcations are way too many to hope to account for. but they are bifurcations.

what is the role of glossolalia in all this?

glossolalia i would define as language stripped of causality, or perhaps with causality relegated to the realm of the metaphysical. it's the unmoved mover. is it an axiom? what happens when it is?

we get things like the universal word or intonement. from logos to om. we get all of indian classical music, and we also get shakespeare and also freud. we get chomsky, we get pythagoras. of course there are problems. how good of a write-off is glossolalia? what's the signal-to-noise? if too much shit is discarded as noise, we've got blinders on. words are media, words are extensions of ourselves. thus words imply a binary between the client and server.

wow. ok. enough.

goodnight.