fallout from the blast

I am in PA right now, resting and eating. My face has been half paralyzed by some virus or something. I just found out it's definitely not Lyme disease. I have been taking an insane regimen of steroids, antibiotics, antivirals, and vitamin B for the past few weeks. (Not to worry: it will subside eventually, at least that's what they tell me...) Weird things happen when you can't move your face. My left eye is unable to lubricate itself and I am in constant danger of corneal damage. I ate my first sandwich today. It was nearly impossible to avoid chomping my lip. At least I can taste with that side of my tongue again. That was rough.

Of course, I have also been composing a ton. It seems my breakthrough was more than just a manic delusion and now poses some very real threats to my compositional process. As a matter of fact, the paradigm shift I'm experiencing right now has really liberated me in the scariest and most relieving of ways.

Ok. So what the fuck is it? I'm still trying to come up with a way to explain it that preserves why it's important while at the same time demystifying it. I guess it's a two-parter. Part one: I have a much better understanding of how SuperCollider takes linguistic instructions and turns them into sounds. No, I cannot read machine code. But before compiling to that level, SuperCollider translates everything into OSC messages, which are these mysterious little things that were supposed to replace MIDI back in the day. OSC is a network protocol. It can be used to communicate between different applications on a computer, or even between different computers (we'll get back to the ramifications of THAT in a second). The thing that makes SuperCollider cool, aside from the beauty of the language itself, and the myriad other things that keep that lovin' feeling going inside my pants, is that there is a Client application and a Server application, hidden and conjoined to look like one program called "SuperCollider". So, when we run a line of code in SuperCollider, that code is translated into an OSC message and sent from the Client to the Server. This happens every time there is a note. One cool stepping stone I found along the way to the still-mysterious breakthrough was the power and efficiency of using OSC messages directly to trigger individual particles of sound. This gives me a further level of control over the generic parameters that make up these particles, which get sprayed all over the place and make an incredible mess. So by doing this we gain a level of abstraction, because we don't have to know how the whole system will behave at all; we just have to change how each little quark behaves relative to the others.
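To make this concrete, here's a minimal sketch of the direct-OSC approach. The \grain def and its parameters are illustrative placeholders (not my actual patch); the point is that each particle is one /s_new message from client to server:

    // a tiny, self-freeing grain
    (
    SynthDef(\grain, { |freq = 440, amp = 0.1, dur = 0.05|
        var env = EnvGen.kr(Env.sine(dur), doneAction: 2); // frees itself when done
        Out.ar(0, (SinOsc.ar(freq) * env * amp) ! 2);
    }).add;
    )

    // each of these lines is a single OSC message from client to server:
    // /s_new defName, nodeID (-1 = let the server pick), addAction, target, params...
    s.sendMsg("/s_new", \grain, -1, 0, 1, \freq, 880, \amp, 0.2, \dur, 0.04);
    s.sendMsg("/s_new", \grain, -1, 0, 1, \freq, 1760, \amp, 0.1, \dur, 0.02);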

Part two of this little escapade is that I have found the incredibly simple way to pass a message from the server back to the client again. Let me elucidate. Generally speaking, when we set up some kind of "system" that makes sound in SuperCollider, we have to get messages into the server that tell it to make sounds at particular times. If this communication can only go one way, we are left with a very specific family of options, from which any type of elaborate sequencer, synthesizer, or effect unit could result. But what happens if we want some analysis of the sound, or of the internal state of the Server application itself, to trigger an event that changes the architecture of the Server? You can see how immediately things begin to shift. Before, using text as opposed to graphics to control sound just made things look nicer (especially if you're using a filterbank of 400 filters, for example). Now we have moved into an MO where it is compulsory to use a computer language as opposed to a user interface. (Of course, you can design your own interface when you're done.)
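Here's roughly what that return path looks like. On the server side, SendReply fires an OSC message whenever some analysis condition trips; on the client side, an OSCdef catches it and can do anything, including rebuilding the server's architecture. (The names \listener, \bandWatcher, and '/band', and the threshold value, are all made up for the sketch.)

    (
    SynthDef(\listener, { |inBus = 0, freq = 1000|
        var band = BPF.ar(In.ar(inBus, 1), freq, 0.05); // one narrow spectral band
        var amp = Amplitude.kr(band);
        // server -> client: report whenever the band crosses the threshold
        SendReply.kr(amp > 0.01, '/band', [freq, amp]);
    }).add;
    )

    // client side: listen for the server's reports and react
    OSCdef(\bandWatcher, { |msg|
        var freq = msg[3], amp = msg[4]; // [path, nodeID, replyID, freq, amp]
        ("server heard activity at " ++ freq ++ " Hz").postln;
        // ...or launch patterns, kill synths, rewire busses, etc.
    }, '/band');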

So this is how that Iannis thing was made possible. Iannis' whole structure jiggles with the sounds of events that are the result of several levels of de-abstraction. So proce55ing sends SC a message like "I've just turned left and moved forward once". And SC responds with "launch this pattern!". Then SC responds to itself again (AHA!!) with "play these grains!". Then SC responds again with "I just played these grains, proce55ing!". And proce55ing responds by jiggling.
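A rough sketch of that round trip, with made-up addresses and ports (and reusing the hypothetical \grain def from above). proce55ing sends '/moved' to sclang; sclang launches a pattern and sends '/jiggle' back:

    (
    var processing = NetAddr("127.0.0.1", 12000); // port proce55ing listens on (assumed)
    OSCdef(\moved, { |msg|
        // the avatar moved: launch a pattern of grains in response...
        Pbind(\instrument, \grain, \dur, 0.05, \freq, Pwhite(200, 2000, 20)).play;
        // ...and tell proce55ing to jiggle
        processing.sendMsg('/jiggle', 1);
    }, '/moved');
    )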

Ok ok but this has affected SO... SOO much more about the way I compose than that. The above was a study in message passing. Wonderful. But what does this mean when I'm trying to *gulp*... write a song?

Well, for starters, I now understand what Ron Kuivila was always shouting at me: "Get INSIDE the note event!" I was always like "umm... maybe you should try tea instead of crystal meth?" But he, of course, was right. Much of the work that I've done in SuperCollider, while it sounds interesting, only deals with the surface of the sound. To get inside the decision-making process that leads to the sound is to free yourself from the particulars of it. In this way, I have moved past the signal processing approach to sound (having learned some valuable lessons) and into a systems approach.

So what does this look like right now? I am just as feverish composing with these systems as I was discovering them. I am averaging about one composition per night. This is how it usually goes for me. Composerria. The way it has worked so far is to set up some underlying sequence of material, either played explicitly or kept hidden, which serves as the stimulus for several layers of machine listening patches. These patches generally consist of a bank of hundreds of constant-Q bandpass filters that listen for activity in each spectral band. When activity is measured, it is classified as a low, medium, or high dynamic state. Each state triggers OSC messages consisting of the dynamic state and the frequency. These are parsed and can be used to do... anything. At this point I've been using them to trigger individual wavelets. A wavelet in this context is just a grain whose length is inversely proportional to its frequency. It can look like anything, depending on the task. The best resynthesis wavelets have been synthetically derived from a sinewave (one derived from a filter, not a wavetable) and a von Hann envelope. The larger the wavelet (in cycles), the more distortion is introduced into the lower register, at least with this method. A way around this is to experiment with different window shapes, each with its own fortes and foibles. Exponentially decaying envelopes are great for convolution/reverb purposes. To some resynthesis wavelets I add a banded noise component, to round out the uncertainty. Of course, once I was pretty satisfied with resynthesis, I got bored of it and moved on.
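For the curious, a wavelet along these lines might sketch out like this. FSinOsc is SuperCollider's filter-based sine (no wavetable), and Env.sine is its von Hann-shaped envelope; the \wavelet name, the crossfade scheme, and the parameter ranges are all just illustrative:

    (
    SynthDef(\wavelet, { |freq = 440, amp = 0.1, cycles = 8, noise = 0|
        var dur = cycles / freq; // length inversely proportional to frequency
        var env = EnvGen.kr(Env.sine(dur, amp), doneAction: 2); // von Hann window
        var tone = FSinOsc.ar(freq); // sine derived from a ringing filter
        var hiss = BPF.ar(WhiteNoise.ar, freq, 0.1); // banded noise component
        Out.ar(0, (XFade2.ar(tone, hiss, noise * 2 - 1) * env) ! 2);
    }).add;
    )

    // a parsed analysis message (dynamic state + frequency) might map onto it as:
    s.sendMsg("/s_new", \wavelet, -1, 0, 1, \freq, 220, \cycles, 12, \amp, 0.3);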

When a bandpass filter is placed before the analysis, the composer (or whatever controls this pre-filter) is permitted to explore inside the spectrum of the sound and isolate any region thereof. Cage is famous for addressing time-domain filters (such as the particular 2nd-order Butterworth constant-Q bandpass I happen to use) by saying that they affect pitch, timbre, and volume all at once, because in actuality these things do not exist independently of one another. One can palpably experience what this means when playing with a pre-filtered analysis patch! I find that once two or three layers of this process occur, the ear becomes inclined to allow the microsounds to coalesce. I have had the experience more than once of listening to a first pass and being grossed out, only to find that the lacking component was simply one more layer to fill out some other part of the spectrum. I have experimented with both synthetic and concrete wavelet shapes. The effects of these processes range from bizarre speaker malfunctions and sounds played underwater to impossible reverberations and harmolodic aggregations. Spectral counterpoint! The hidden harmonic progression that resides within all sound! Vertical listening!
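In patch terms, that pre-filter is just one more synth in front of the analysis chain. A rough sketch, with made-up names and bus numbers (\listener is the hypothetical analysis synth from earlier):

    (
    SynthDef(\preFilter, { |inBus = 0, outBus = 10, freq = 1000, rq = 0.3|
        // the same 2nd-order Butterworth constant-Q bandpass, placed *before*
        // the analysis, so the listening layer hears only one spectral region
        Out.ar(outBus, BPF.ar(In.ar(inBus, 1), freq, rq));
    }).add;
    )

    x = Synth(\preFilter, [\inBus, 0, \outBus, 10, \freq, 800]);
    x.set(\freq, 2400); // sweep the window the machine listening is allowed to hear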

Things I am going to look at:

1) using analysis to drive patterns of notes

2) using analysis to recognize patterns of notes

3) using analysis to drive animations

4) self-directing pre-filter systems?

5) matching pursuit?

With that, I leave you, dear reader. For listening purposes, I direct you to my soundclick page (in the "About" section of this blog), where I have posted, and will continue to post, updates in mp3 form.
