Warm Jets

Warm Jets is a new VST plugin I've made. It provides three independently bypassable, high-quality effect units in a feedback network, allowing for a diverse range of transformations, from subtle psychedelia to bleary washes. Warm Jets can easily self-oscillate, making it a powerful synthesis tool in its own right.

Empirically Derived Distance Measures for the Z-Plane

In this post, I test the effects of infinitesimal, orthogonal movements of a single complex pole on mean squared error in residual estimation. The goal is to find and validate a distance metric that relates residual estimation to estimation of eigenfrequencies. My hypothesis, derived from informal inspection of large related datasets, is that such a distance metric will correspond to that of the Poincaré disc model.
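As a hedged illustration (not the validation experiment itself), the Poincaré disc distance between two poles inside the unit circle takes only a few lines; the pole values below are hypothetical stand-ins:

```python
import numpy as np

def poincare_distance(z1, z2):
    """Hyperbolic distance between two points strictly inside the unit disc."""
    num = 2 * abs(z1 - z2) ** 2
    den = (1 - abs(z1) ** 2) * (1 - abs(z2) ** 2)
    return float(np.arccosh(1 + num / den))

# the same radial step costs far more hyperbolic distance near |z| = 1
d_inner = poincare_distance(0.50 + 0j, 0.60 + 0j)
d_outer = poincare_distance(0.90 + 0j, 0.95 + 0j)
```

Under this metric, equal Euclidean perturbations of a pole are weighted more heavily as the pole approaches the unit circle, i.e. as damping decreases.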

In Defense of Tikhonov Regularization

The previous few research-oriented posts (e.g. 1, 2, 3) have been fairly critical of Tikhonov regularization in a specific machine learning application under development. This post explains the source of the problems with integrating this form of regularization into the algorithm, and demonstrates its successful application.

Tikhonov Regularization and Residual Estimation, part II

The previous two posts have demonstrated the effects of Tikhonov regularization on a polynomial regression algorithm. This post continues to explore its effect on out-of-sample residual and eigenvalue estimation. To provide a better analysis of the out-of-sample error, animated scatterplots were created of the eigenvalue estimates over a number of iterations of the learning algorithm, as the Tikhonov regularization parameter $\alpha$ swept from $0$ to $10^{-5}$ in steps of $10^{-6}$. This range for $\alpha$ was chosen after numerous trials suggested it could be an optimal range. In addition to trends in eigenvalue estimation, this experiment also plotted those in residual estimate error.
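For readers who want to reproduce the flavor of this sweep, here is a minimal sketch using synthetic data (the regression problem, dimensions, and noise level below are stand-ins, not the actual modal dataset):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 8))          # stand-in design matrix
w_true = rng.standard_normal(8)
y = X @ w_true + 0.01 * rng.standard_normal(200)

def ridge_fit(X, y, alpha):
    """Closed-form Tikhonov-regularized least squares."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(d), X.T @ y)

# alpha swept from 0 to 1e-5 in steps of 1e-6, as in the experiment
alphas = np.linspace(0.0, 1.0e-5, 11)
fits = [ridge_fit(X, y, a) for a in alphas]
```

Each element of `fits` would correspond to one frame of the animated scatterplots; increasing $\alpha$ shrinks the coefficient estimates toward zero.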

Tikhonov Regularization and Residual Estimation

In this post, I explore the effects of Tikhonov regularization on out-of-sample residual estimation using polynomial regression, as part of a continued effort to validate a machine learning algorithm for modal analysis. The algorithm estimates spatially distributed modes in a signal, and uses this information to estimate the residual, or forcing function, of a physical model. Today's experiments explore the frequency-domain effects of the regularization parameter $\alpha$ on residual estimates.

A Z-Plane View of Regularization

A z-plane analysis of Tikhonov regularization in polynomial regression algorithms shows that an increased weight decay factor $\alpha$ generally corresponds to eigenvalues of reduced radii. Such a strategy offers reduced out-of-sample error in regression models, effectively an insurance policy against overfitting. A number of experiments demonstrate the filterbank representation of this technique for a variety of values of $\alpha$.
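The shrinkage can be demonstrated directly. The sketch below (a synthetic, hypothetical example, not the experiment's filterbank) fits an AR(2) model to a noiseless damped sinusoid and inspects the estimated pole radius as $\alpha$ grows:

```python
import numpy as np

n = 500
t = np.arange(n)
x = (0.99 ** t) * np.cos(0.3 * t)           # single mode, pole radius 0.99

X = np.column_stack([x[1:-1], x[:-2]])      # lagged regressors
y = x[2:]

def pole_radius(alpha):
    """Max root magnitude of the ridge-fit AR(2) characteristic polynomial."""
    a = np.linalg.solve(X.T @ X + alpha * np.eye(2), X.T @ y)
    roots = np.roots([1.0, -a[0], -a[1]])   # z^2 - a0*z - a1 = 0
    return float(np.abs(roots).max())

r_small = pole_radius(1e-8)   # essentially unregularized: radius ~ 0.99
r_large = pole_radius(1e-1)   # heavier weight decay pulls the poles inward
```

Heavier regularization trades fidelity to the training data (the pole drifts inward, away from its true radius) for stability of the estimate.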

Forcing Function Estimates as a Function of Analysis Order

This post continues to explore the strengths and weaknesses of the modal analysis algorithm theorized in an earlier post. In this experiment, we will find the overall correlation between analysis order and forcing function estimation accuracy.

Forcing Function Estimates as a Function of Modal Damping

In a previous article, a method was described for the estimation of the residuals measured in a resonant system, given a small amount of memory of previous values in the timeseries. The previously supplied formulas are thought to be applicable to single as well as multiple timeseries. The accuracy of estimation is thought to be affected by several parameters of the analysis, and by the dynamic behavior of the measurements themselves. In this experiment, we attempt to correlate estimation accuracy with modal damping. My hypothesis was that, on average, estimation accuracy would be negatively affected by increased damping. This reasoning came from previous work, wherein the accuracy of eigenvalue estimation was observed to be much better when those eigenvalues had larger radii, and therefore less damping.

Forcing Function Estimation in Modal Analysis

In an earlier post, I described a technique for modal analysis which can estimate damped modes of vibration in an object, as measured at a variable number of points on its surface. I have applied this technique to field data from an array of laser microphones, measuring waves on the surface of the water. Now I will develop the mathematics and practical considerations behind estimating the forcing function in such recordings.

Regarding the Last 3 Posts

The last 3 posts were chapters from my PhD qualifying exam at UCSD, which I wrote over the course of 12 days (to the hour!) from Dec 12th to Dec 24th, 2013. It's a document I had to write so that I could start working on my dissertation. The stuff I wrote may make it into a publication or two, but in the meantime I thought it'd be nice to share it on the internet. In case my server doesn't display the equations right, you can grab the print version here. I recommend reading it in bed, as a cure for insomnia. 😉 Enjoy!

Wave Propagation and States of Vibration

"If you were trying to measure how a solid object vibrates, the question might be broken down into two sub-questions: how does the object vibrate in general, and what is its specific vibrational state at a given moment in time. What models have you found that might be useful for describing the vibratory behavior in general, and what are the prospects for picking up specific vibrational state information? Is it possible to predict under what conditions it will be possible to make these measurements on real objects, and/or how the number of sensors available might affect the quality of measurements that are possible?"

-Miller Puckette

Lear on the Second Floor

a few months ago, composer anthony davis contacted me about a chamber opera he was working on. that opera turned out to be "lear on the second floor," a re-imagining of shakespeare wherein lear is a neuroscientist suffering from alzheimer's. anthony wanted me to portray her madness and disorientation. i was happy to oblige, of course...

Odum~

for some time now, i've been interested in the work of howard t. odum, zoologist and pioneer in the field of systems ecology. h. t. odum's goal was to derive a language for the transformation of energy into different forms, which could represent such transformations as edges in a network. his notational system, called "energy circuit language", borrows from other systems-theory notations such as forrester diagrams or electrical schematics, but diverges from them in an attempt to describe systems in even more general terms. h. t. odum spent years honing this language by attempting to model the data he and his brother eugene odum, likewise a zoologist and ecologist, collected from various ecosystems. the two brothers are arguably responsible for the modern conception of the term 'ecosystem' itself.

Daubechies Wavelet Distortion / Delay VSTs

I have made a few VST plugins that use Daubechies 4 wavelet transforms for time/frequency-localized dynamics processing and multi-scale delay. Both plugins posted below were compiled for OS X 10.4 or later.

A Linux-Based Open Source Toolchain for the STM32F4

ST Microelectronics has released the STM32F4 "Discovery" evaluation board for the ARM Cortex-M4F processor. In addition to the Cortex-M4F itself, a pretty awesome chip boasting 32-bit hardware floating point and a vast array of assignable I/O capabilities, the board is loaded with sensors, USB hosting capabilities, and even a fully accessible ST-Link V2 programmer, which allows the user to program other Cortex-M4F microcontrollers. Check out this great demo / explanation here.

This Is Not Art: Australia, 2011

this fall, i was privileged with the opportunity to participate in two arts festivals in newcastle, australia. these festivals were "electrofringe," whose focus was on the practice of electronic arts, and "critical animals," whose focus was the critical theory underlying creative endeavors. both of these festivals are components of a larger meta-festival called "this is not art." i was invited to engage in a variety of artistic and pedagogical events, some of which were newly realized just for the festivals (viz. songbirds).

Sonic Shredder v0.2

This summer, I had the privilege of working on a software project with a client who trusted my aesthetics implicitly. He understood what I was about, and asked me to make him a software instrument in SuperCollider that could eat other software instruments for breakfast. In return, I gave him the Sonic Shredder.

OZ

I recently had the privilege and good fortune to travel to Australia again! This time, I got to play some music and even do some teaching, thanks to Estee and the other kind folks at Electrofringe and UNSW. In the process, I really enjoyed making field recordings of trains & birds (thanks, Sam!), taking pictures of some beautiful street art, and hanging out with friends both old and new. I was again reminded of how incredible the coffee is down there. That alone would be reason to go back! I also got to visit Melbourne this time, which is a truly lovely city, with some lovely people in it.

Set Them Free

This is a remix project. The initial 2 minutes are a presentation of the original track, recorded in 1973 by the People's Temple Choir, originally on the beautiful gospel album "He's Able." You may listen to the entire album here.

so much for the theory: live in san diego

thanks to everyone who came out, and thanks to kaiborg for sharing the stage with me. it was an honor.

food deserts

for this ep, my main tools were supercollider and logic. these tracks were first dropped in a performance at the loft in san diego, on 2/2/2011.

who is anybody?

On Wednesday, December 1st, I had the pleasure of sharing the stage with MC Justin Zullo (AKA Conundrum), Kimbridge Balancier (AKA sPeakLove), and Diana Cervera. I DJ'ed (from the command line!) for Conundrum for the first half, and played a solo set for the second. Due to a few technical problems, I was unable to record the set with Conundrum, but I was able to reconstruct the solo set. I've decided to release this set online as an EP, in a manner similar to "gosling_glider_gun."

give me everything.

on thursday, october 28th, this piece made its debut at brooklyn college for their biannual "international electroacoustic music festival." despite the fact that i couldn't be there in person, i couldn't pass up the opportunity to have it premiere in brooklyn, due to the subject matter. i'd been planning this piece for two years, and it was awesome to finally see it through to completion.

the thief (pitch -> midi app)

one of my clients is a sax player who also plays synths for his band. he wanted me to find some way for him to control his synths (hardware or software) with his saxophone. we discussed the ewi controller, and decided that this was unsatisfactory as a solution, due to his use of falsetto and multiphonics. he also felt that the ewi controller wouldn't be as comfortable as using his own familiar sax and reed. as a result, i wrote him a simple software solution that would read in pitch and amplitude values from the soundcard, and output midi values. the resulting software is super light-weight and ready to be deployed on a number of hardware configurations. it's also completely modular and networkable. the working title for this guy is 'the thief'.

requirements:
currently, this software only runs in osx. >_< . i may eventually port it to linux, but if you're running windoze, good luck to you. you will also need a soundcard (most computers these days have *some* kind of audio interface), and if you want to get fancy you should have a microphone.

dependencies:
this patch is not a single piece of compiled software that runs like a self-contained black box. rather, it is a system comprised of several programs networked together to provide a single service. so, in order to run this patch you will need the following (don't worry, they're all free):

occam, a lightweight midi client that speaks osc

supercollider, my development environment of choice for realtime audio

something to control with midi. this could be a hardware synth (if you have the appropriate cables), or ableton, logic, ardour, reason, whatever. choose your poison.

once you have these things installed and ready on your box, we can talk about the implementation. my source code is below. to run this code, paste it into a new window in supercollider, select the entire document, and press enter. note that enter is not return. you should see the little server window on the bottom left boot up, some crap print to the post window, a hideous grey gui pop up, and occam open up. on the top part of my hideous grey gui there are two black buttons. one has a green [>], and one has a red []. pressing the green [>] will turn it on, while pressing the red [] will turn it off. hopefully that convention is obvious. there is also a single long slider with a number box next to it. you may type into the number box or you may move the slider with your mouse. this controls the rate at which new midi notes are sent to occam.

this patch simply analyzes audio for pitch and amplitude at a regular interval, and sends this data out as osc to occam, which converts it into midi. i chose this model instead of something more complex (that could, say, encode arbitrary audio into discrete midi notes) because i was hired to come up with a specific solution to my client's problem, not a general solution to all midi-related problems. if you are so inclined, please take my code and do whatever you want with it, so that it will suit your needs. all that i ask is that you inform me and credit me.
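to make the mapping concrete, here's a tiny hedged sketch (in python rather than supercollider, and not the actual patch) of the pitch-to-note-number and amplitude-to-velocity conversions a patch like this performs:

```python
import math

def freq_to_midi(freq_hz, ref_hz=440.0, ref_note=69):
    # 12 semitones per octave, a4 = 440 hz = midi note 69
    return round(ref_note + 12 * math.log2(freq_hz / ref_hz))

def amp_to_velocity(amp, max_amp=1.0):
    # clamp a tracked amplitude into the 0..127 midi velocity range
    return max(0, min(127, int(127 * amp / max_amp)))
```

the real patch sends values like these as osc messages to occam, which turns them into midi events.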

and with that, here's the code.

* * *

a few lessons stick out to me from this first development cycle.

1) i am reminded not to think like an engineer.
no offense to engineers-- you guys are very important! i'm just not one of you. my methods are generally faster, more improvisational, and way less general. i spent considerable time banging my head against a wall trying to make my solution work in all cases. this was wrong.

2) working with clients can be rewarding in ways that doing your own work isn't.
i now remember the midi spec. i never use midi and honestly it makes me seasick. as the result of this project, however, i've learned a lot about implementing solutions that use it. a major lesson regarding midi is that supercollider's midi output implementation is a total joke. another one is that osculator, a fairly highly regarded osc-midi client, is also a total joke. occam is the lightest, fastest, simplest midi client i've found. another major lesson? i learned how to use the scary 'environment' window in logic. it totally looks like max! which makes me want to vomit. regardless, these lessons will definitely prove useful to my freelancing and for my own projects.

3) sometimes walking a noob through your algorithm is the best way to debug it.
one night, when i was particularly frustrated with trying to do the impossible with my app, i vented by trying to explain how horrible my life was to my roommate, who despite some familiarity with supercollider is definitely not a programmer. this turned out to be the most useful few hours i spent on the project, not because we happened upon a new algorithm, but because it gave me the confidence i needed to let go of the grand vision in favor of something more realistic. also debugging was thereafter a snap because i knew the algorithm like the back of my eyelids.

keep in mind this is the first iteration on this thing. i'll be working on some other features, such as microtonal tracking, and the now-infamous "arrogance" knob. when i've brought this project up in conversation, there have been a few musician / producer types who really perked up. i want you cats to try this out, and complain to me about it! the more you complain, the more likely i will be to listen to you. also if you are a musician / producer / developer type, and you want to steal my code, please do that! but let me know so i can steal your code too. ^_^ happy tweaking!

ps - i made a short track with this patch, just for laughs. it doesn't really demonstrate the patch very well at all. something more demonstrative will come after i have a meeting with my client and we record some user testing. it's the track from this post, using a fender mexican strat through a homemade fuzz pedal.

hello midi

i haven't used midi to do anything other than play an outboard hardware synth with a keyboard since 2006.

that being said, it certainly has its advantages as a protocol. actually, it only has one that i can think of: that everyone uses it. so far a few clients have asked me to make them things that use midi, so they can interface with other music software. the above track is a proof of concept working toward that project; yet another example of how projects i do for other people push me out of my comfort zone.

the code in supercollider is embarrassingly simple. like i said, it's a proof of concept. all it does is randomly play midi notes. the timing is scaled so that lower notes last longer. that's it.

slightly more complex was the process of setting up an IAC bus in OSX, and getting logic to listen to that bus.

in "audio midi setup," select the "midi devices" pane. from there, you should see your "IAC driver". double click it and turn it on by checking the "device is online" box. you can add ports in the menu on the bottom left. each port has 16 channels. i have never been able to get the bottom right number boxes to say anything other than "1", and it doesn't seem like this matters much at all. that's it for setting up the IAC; supercollider and whatever other audio software you might want to use will now recognize these ports and treat them as physical midi connections.

in "logic," go to the "environment" window. from the top left of that window, you should select "all objects," and under the "view" menu, de-select "by text". then make a new "physical input" object and connect the little triangle next to your IAC port to the "sequencer input" box.

(an aside, do you see why i hate graphical interfaces?? this could have all been done with a few lines of code and there would be way fewer opportunities for error. i spent most of my night tooling around ambiguously titled menus and windows... rawr)

anyhoo, i used a bass sound from garage band-- the only instrument from garage band i have on my computer. also, the only software synth on my computer. while i was in logic, i decided to practice my spectral compression chops, and give the sound some depth. there is also a touch of reverb.

so there! you don't need max for live, you can just do this and use a free program like pd or supercollider to check your email with your daw. because that's so useful.

glib, numb, smile

this may or may not be finished. i submit it to you anyway, reader, because feedback would potentially be cool.

it began as a patch to keep my bird company while i was out at work. it uses brownian motion (random walk behavior) for several of the controls. it's a stream of sawtooth grains, each one no more than a few cycles long. each grain's frequency is an exponential curve, so each grain is a little glissando. i also passed the signal through a vintage spring reverb i pulled out of a weird old mixer. i left this patch running for about 3 days to keep kapow company. a lot of the textures remind me of analog stuff, a good thing in my book. also, he seems to enjoy it. mission accomplished. (the first one, anyway, read on...)
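for the curious, the brownian control idea is easy to sketch. this python fragment (a hypothetical stand-in, not the actual supercollider patch) random-walks a grain frequency inside a clipped range:

```python
import random

def brownian_walk(n, start=440.0, step=20.0, lo=100.0, hi=2000.0, seed=1):
    # each value is the previous one plus a small uniform nudge,
    # clipped so the walk can't wander out of a usable range
    rng = random.Random(seed)
    out, x = [], start
    for _ in range(n):
        x = min(hi, max(lo, x + rng.uniform(-step, step)))
        out.append(x)
    return out

freqs = brownian_walk(1000)   # e.g. one frequency per sawtooth grain
```

running several of these walks in parallel, one per control, gives the slowly wandering texture described above.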

you can grab the code here.

so that explains the patch, but the recording you're listening to is a different thing altogether. i had two main goals with this, although i didn't originally set out with them in mind. the first one was to use only the algorithm as a base material, with no realtime gestural control at all --that will come later, be patient-- but to make it sound gestural with subtle mixing techniques. the other goal was to limit my palette to only those two textures (dry vs wet), while keeping the listeners' attention through, again, subtle mixing techniques. no "plug-ins" were used except eq and dynamics processing. i'm realizing now that these two goals are actually related. so what started as an exercise in algorithmic music turned into an exercise in mixing.

next steps are to turn this patch into an instrument, complete with realtime implementations of the mastering-like stuff (eg, eq + dyn processing), and some kind of an interface. there are a lot of different possible controls for this beast, but i'd love to have it be an intuitive, gestural sort of thing. my two options as i see them are buying a playstation-style usb controller with two joysticks, which is higher on ye olde $/time metric, or building one from all these linear faders i have lying around and an arduino, which is lower on the $/time metric. i have to decide which is more of a priority. currently i have neither, so it's moot.

keith's drone engine

i have been debugging this fm instrument i'm building for a client. (his name is keith o_0 ). i installed supercollider on his machine and updated his class library with the requisite changes (see previous post- nothing has changed in that regard). the demo was pretty successful- this project is turning out to be really fun. there's something really satisfying about watching someone interact with something you've built for them, especially if it's a creative tool. for me personally, the tool's interface and paradigm weren't terribly interesting, until i put it in keith's hands and saw his eyes light up. then the inevitable feature requests came, although honestly i think i made more of them myself than he did. ok, so i should qualify the above statement further: the demo was pretty successful *until* he tried to save and reload some previous settings. a bug showed up that i thought had been ironed out. i've now fixed it. the code may be found here. make sure, if you want to run this on vanilla supercollider, to make the changes documented in this file.

here's a screenshot:

the program consists of 16 oscillators, which can be routed into one another for frequency modulation. more rigorously, the synthesis process is phase modulation, since the audible effects are more sensible, at the expense of slightly weirder math. i like to think the sonic results are pretty intuitive. the interface itself is designed to look like a 16 channel mixer, where you can re-route any channel into any other channel. i know what you're thinking, but audible feedback is not allowed since each oscillator can go to only one output. the pre-amplitude of the input for each channel may be set with the "pres" row of number boxes. this value is kind of like a "trim" setting on the input of a mixer- all the incoming signal of the channel is multiplied by this value. among the feature requests i am entertaining for the next iteration cycle are ringing filters, noise generation, and delays.
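the core of each channel is simple. a hedged one-oscillator-into-another sketch in python (hypothetical, not the actual sc code) looks like this:

```python
import math

def pm_sample(t, carrier_hz, mod_hz, pre, sr=48000.0):
    # phase modulation: the modulator's output, scaled by the
    # "pre" amount, is added to the carrier's phase
    mod = math.sin(2 * math.pi * mod_hz * t / sr)
    return math.sin(2 * math.pi * carrier_hz * t / sr + pre * mod)

# one second of a 220 hz carrier modulated by a 110 hz oscillator
samples = [pm_sample(t, 220.0, 110.0, pre=2.0) for t in range(48000)]
```

with pre set to zero the modulator drops out entirely and you're left with a plain sine, which is part of why the routing stays intuitive.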

check out that previous post (linked above) for a delightfully blissed-out video demo! ^_^

up

click here to look at the code for generating the original pulse track.

click here to look at the code for turning the track into an irrational set.

this is a continuing experiment that extends my work in massive irrational rhythmic / harmonic sets. for some theoretical background, consult this earlier blog entry. i have been exploring the effects of taking many copies of a sonic event, retuning it to a large irrational set- generally some equal-temperament scale- and playing all of those copies simultaneously. the resulting copies begin perfectly aligned, but gradually move out of phase with one another and produce a doppler-like shift, with ever-increasing and expanding complexity since the sets never realign. i have tried other irrational sets, as you will see from my fairly messy code, but so far nothing has compared to the geometric series produced by many-toned equal-temperaments. the difference between this code and the earlier experiments is that instead of firing off events directly, i am retuning an entire track of audio to produce something like an irrational delay.
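as a concrete sketch of the sets in question (hypothetical python, not the performance code): an n-tone equal temperament is a geometric series, so every step except the octave is irrational, and copies retuned by these ratios never realign:

```python
def equal_temperament(n_tones, octaves=1):
    # geometric series: each step multiplies frequency by 2**(1/n_tones)
    return [2 ** (k / n_tones) for k in range(n_tones * octaves + 1)]

ratios = equal_temperament(12)
# e.g. the tritone, 2**(6/12), is sqrt(2): irrational, so a copy
# retuned by it can never come back into phase with the original
```

playing one copy of the track per ratio, all starting together, gives exactly the gradually dephasing doppler-like spread described above.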

the one issue i have with the process is that the waves fall out of alignment very perceptibly at first, and as the process progresses the changes become more and more subtle. the transformation essentially becomes less interesting as time goes on. i have found one way to combat this is to gradually speed everything up. this works to an extent, and i will continue looking into different curves that might aid this effect further. another set of experiments i have done in the past involved delaying the items in the set to cause the 'singularity point' to occur at a time other than the beginning. i would like to try that approach with this implementation, but that will have to happen later.

the original track, which i may also post later, is derived from a bizarre version of an excitation-response style percussion synthesizer. the filter responses themselves are irrational sets, which i have found produces a nice doppler-esque tail, similar to the inharmonic ringing of a piano string. the pattern itself repeats fairly quickly, and divides the pulse into fifths.

reptilian blues

this was partially improvised in supercollider. my goal is to start re-incorporating rhythmic elements into my work, while avoiding certain tropes of electronic dance music. ideally, i'd like to continue playing with things like tempo changes and metric subdivisions as i had been before. this piece also uses massive irrational pitch/rhythmic sets, although the main rhythm remains more or less stable. the two samples that i use are a brief recording of contact-mic'ed guitar strings and a single distorted kick sample i generated previously. neither of these sources is very apparent, as the samples are very harshly manipulated. the piece's meter, determined by the interaction between the source sample and the function that scrubs through it, is in some larger even multiple of 5, possibly 40.

scale calculator (there's >1 way to split an 8va)

click the screenshot for a browser-based demo. (once you calculate a scale, you will not be able to save it unless you download the app.)

this is a simple, geometry-based calculator intended to demystify some of the concepts in just intonation. the boxes correspond to possible notes in a scale, and the vertical lines correspond to positions of equal-temperament tones. these lines are visual guides, and the number of equal-temperament tones per octave is user-adjustable. the ratios become more "complex" (higher-valued divisors) towards the bottom of the window. to approximate an equal-tempered scale, simply select the number of tones you'd like, then choose the boxes whose *left* sides most closely line up with the vertical lines, taking into account the occasional trade-off between a simpler ratio and a closer approximation. when you are finished with a scale, you can either clear it by pressing "c" or return it by pressing "r". when the scale is returned, a small text file will be created with the date and time as its title, in the directory of the applet. this file contains the array of ratios you chose using the calculator.

KEYBOARD COMMANDS:
"+" - increase number of vertical bars (tones equal temperament)
"-" - reset number of vertical bars to 2
"c" - clear scale buffer
"r" - return scale buffer, printing to file named with date and time, located in app directory (file io won't work in browser version)

i provide this mostly for didactic purposes, but i also use this personally to obtain some of my scales and i thought it might help other people interested in breaking into microtonal theory.
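if you'd rather compute than click, the underlying comparison the calculator visualizes can be sketched like so (hypothetical python, not the applet's source):

```python
import math

def cents(ratio):
    # position of a frequency ratio within the octave, in cents
    return 1200 * math.log2(ratio)

def nearest_et_error(ratio, n_tones=12):
    # distance in cents from a just ratio to the nearest
    # equal-tempered tone, for an n-tone-per-octave temperament
    c = cents(ratio)
    step = 1200 / n_tones
    return abs(c - round(c / step) * step)

err_fifth = nearest_et_error(3 / 2)   # the just fifth vs 12-tet
```

the just fifth lands about two cents from its 12-tet neighbor, which is why its box lines up so convincingly with the seventh vertical bar.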

enjoy!

keith's function generator

linked above is a demo of the first iteration of this compositional tool i'm developing for a client. the admittedly lame working title is "keith's function generator." this will most likely change as the development cycle progresses.

we decided early on that the layout should look and feel a lot like a mixer, since it's a familiar paradigm for keith. incidentally, that choice also makes life simpler for me because the requisite GUI primitives are readily available in SuperCollider/Cocoa.

it was important that this instrument provided a wide array of routing possibilities, while remaining simple and robust to operate. this required a few interesting (at least to me) programming sleights of hand, to ensure that all routing possibilities were useful and not destructive.

the intended application of this tool is not a performance interface, it is a compositional interface. this means that keith will use the program to generate preset files, and during performances he will select between these presets, rather than twiddling the virtual knobs on the fly as i do in the video. regardless, the video demonstrates basic tweaking, routing, and saving operations in the GUI.

(N.B. if you want to run the code on your own machine, you should take a look at this log file, where i list all the changes i'm making to the vanilla SC class library.)

extant!=extinct

click here to listen to the aiff (or play in the widget above).

this is the first piece in an album i'm doing for National Solo Album Month 2009, with the current working title "sotiety!=society". the entire album will be finished by december 1st. it will include livecoding improvisations as well as homebrew analog stuff. this piece is a livecoding excerpt.

dresses like earths

click here to listen to the mp3 (or play in the embedded widget above).

i recorded this livecoding rehearsal on the bus back to new york from pennsylvania. it was done in supercollider, starting from a blank document. i am practicing with using convolution in livecoding contexts because i really love the flexibility afforded by the technique. here, i convolve a brief recording of contact mic'ed sculpture wire with band-limited impulses. as i improvise with the code, i eventually arrive at a configuration that uses feedback. ultimately this kills my soundcard and i have to stop, but i got a solid 37 minutes out of it before that happened. this rehearsal contributes to my livecoding practice as well as my general inquiry regarding convolution techniques. notice how i increase the server's memory size by several orders of magnitude to allow for such a massive convolution window.

the rhythms and formants are derived from the same pitch constellation : [1, 1.067, 1.422, 1.666, 1.875, 2].
in more traditional musical language this is : [tonic, minor second, sharp fourth, flat sixth, major seventh, octave].
the rhythms get more dense at some point because i multiply the array by itself (an outer product) to yield: [ [ 1, 1.067, 1.422, 1.666, 1.875, 2 ], [ 1.067, 1.138489, 1.517274, 1.777622, 2.000625, 2.134 ], [ 1.422, 1.517274, 2.022084, 2.369052, 2.66625, 2.844 ], [ 1.666, 1.777622, 2.369052, 2.775556, 3.12375, 3.332 ], [ 1.875, 2.000625, 2.66625, 3.12375, 3.515625, 3.75 ], [ 2, 2.134, 2.844, 3.332, 3.75, 4 ] ].
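that dense table is just the elementwise products of the constellation with itself (an outer product); a quick hypothetical sketch to regenerate it:

```python
ratios = [1, 1.067, 1.422, 1.666, 1.875, 2]
# entry [i][j] is ratios[i] * ratios[j], rounded to six places
table = [[round(a * b, 6) for b in ratios] for a in ratios]
```

each row of the table is the original constellation transposed by one of its own members, which is where the extra rhythmic density comes from.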

just in case you were curious. hopefully that is meaningful to someone. ^.^

squawk rawk

this song is dedicated to my parakeet, kapow. his favorite music is curtis roads. mostly he checks himself out in the mirror, but sometimes he will bite you if you put your finger in his cage.

savage altruism

click here to listen to the piece. (or press play in the widget below...)

in this improvisation, i was exploring the convolution of particulate sounds; looking for unique ways of generating impulses to convolve with responses to derive grains. one trick i stumbled upon is a way to produce harmonic distortion by convolving a sound with a low-passed, soft-clipped impulse. this effect emphasizes odd harmonics and sounds similar to decimation. another trick i played with was to dynamically change the size of the convolution window, to get different sized grains.
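here's a hedged numpy sketch of the first trick, with made-up parameter values (not the actual supercollider settings): build an impulse, low-pass it with a moving average, soft-clip it with tanh, and convolve the input with the result:

```python
import numpy as np

def distortion_kernel(length=64, lowpass=8, drive=4.0):
    # impulse -> moving-average low-pass -> tanh soft clip
    imp = np.zeros(length)
    imp[0] = 1.0
    smoothed = np.convolve(imp, np.ones(lowpass) / lowpass)
    return np.tanh(drive * smoothed)

# a plain 440 hz test tone run through the kernel
sig = np.sin(2 * np.pi * 440 * np.arange(4800) / 48000.0)
out = np.convolve(sig, distortion_kernel(), mode="same")
```

the nonlinearity lives in the kernel construction (the tanh stage); the convolution itself stays linear, so the kernel can be swapped or resized on the fly, which is the second trick mentioned above.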

for the interested / critical, i have a few words on why i sometimes do this kind of work. these pieces are studies. i am honing my skills for both my own satisfaction and ideally for the enjoyment of the interested. eventually, all this work will prove useful when it comes time to work on something else. while these pieces offer very little narrative on their own, they are usually inspired by events or emotions that are completely unrelated to the techniques they showcase. i attempt to hint at these more subtle origins with titles. while i recognize this activity to be essentially flawed or incomplete, i find most communicative activity to be so as well. the brief, shallow pleasure of 'self expression,' a phrase i have come to despise, is actually what drives these studies. video artist paul chan would call this 'the cheap thrill of understanding.'

'savage altruism' could be an act of kindness that is ultimately undertaken for the sake of the person who undertook it, ie flattery. it could also be an act of utter cruelty performed for the sake of the victim it is inflicted upon, ie euthanasia. either case requires a fair amount of self-deception on the part of the altruist.

i am unsure of the extent to which savage altruism comprises social behavior.

heat escapes

i made this one last night and promptly passed out while waiting for it to upload. i'll warn you that the code is messy and incomplete as this was a livecoding session. i didn't start recording at the beginning of the session, however, so i don't want to use the term 'livecoding' to describe what's going on here. perhaps the term 'improvisation' is more apt.

the sounds are made by causing filters to malfunction, and waveshaping the result. as you can see from the code, i'm using ringing and bandpass filters with frequencies around 1 hz. that's what i mean by 'causing filters to malfunction.' filters generally misbehave around these regions. i have quite a few in parallel (ie a filterbank) and two in series. the gamut of the filterbank, not that you can hear it as such, is based on a just tempered locrian mode.

a tender palsy

when i was livecoding in the park yesterday, i was experimenting with sets of impulses. as i have been interested in irrational rhythmic relationships recently, i was playing with an octave of 32-tone equal temperament, from 1 - 2 hz. i love the sound of filtered impulses, especially when i use convolution as the filter, but i'm often disappointed with windowed convolutions because they don't provide ample time domain resolution in the response. while this is fairly trivial in certain contexts, the lack of resolution is very pronounced when attempting a convolution with something close to an ideal impulse, which is what i'm doing here. it occurred to me to use several layers of convolution, with different window sizes, to allow for more resolution. i had tried this a few years ago with somewhat unsatisfactory results, because i was using nested filters to cross-over between resolutions. this improvisation does no such thing. there are no cross-overs implemented here, so i imagine there's a fair amount of combing.
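offline, the parallel idea checks out exactly: by linearity, convolving with each chunk of the response separately and delaying each result by its chunk's offset reconstructs the one-pass convolution. here's a hypothetical numpy sketch (real-time windowed convolvers add per-partition latency, which is where the artifacts creep in):

```python
import numpy as np

rng = np.random.default_rng(1)
signal = rng.uniform(-1, 1, 2048)
response = rng.uniform(-1, 1, 512)

# full convolution in one pass
full = np.convolve(signal, response)

# split the response into unequal partitions (small blocks first,
# for time-domain resolution, larger ones after), convolve each
# in parallel, and delay each result by its partition's offset
sizes = [64, 64, 128, 256]
parallel = np.zeros(len(signal) + len(response) - 1)
offset = 0
for size in sizes:
    block = response[offset:offset + size]
    part = np.convolve(signal, block)
    parallel[offset:offset + len(part)] += part
    offset += size
```

so in principle no cross-overs are needed at all; the trouble is purely in the scheduling of the real-time windows.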

the recording i have provided does not follow the standard rules for livecoding because it is a reconstruction of a livecoded event. my laptop had one of its rare freak-outs this morning just before i could save yesterday's work. saving work is one of those areas where my process needs to develop. grrrr. anyways, reconstructing the code was a good exercise because i could then expand on the idea. i think i've found some interesting territory. i'll definitely be refining this parallel convolution process, ideally with some kind of cross-over that doesn't suck. now that i know a little more about what i'm doing, i think it's possible.

the piece itself made me think of some of the sleep incidents i've had growing up. i suffer from sleep paralysis, a disorder often conflated with 'night terrors.' they aren't always bad, in fact sometimes they can be quite amorous, if you get my drift, but they always involve a large portion of my body being paralyzed, while i lie there completely alert. occasionally this is accompanied by the sensation of my head exploding. sometimes i'm able to move my head and talk, in which case i try to yell as loud as i can, in order to wake myself up. i remember, as a child, being able to predict one of these episodes coming on because i could hear it. the sound was a kind of whooshing noise, a bit like an off-balance drier, but more intense. this recording reminds me of that a little, hence the title 'a tender palsy.'

ps - i decided not to embed or link to the mp3 because lossy compression artifacts are absolutely wretched on material like this.

dictionary sex, part II

more sample level markov chain work.

sources used for this bricolage are mostly contact mic recordings of things like single drops of water, screwdrivers scraping wood surfaces, and music box tines. a pitch constellation was achieved by limiting the possible chain depth to only a few different values.

i'm fond of the dynamic range of this recording. this patch seems to create some pretty compelling textures. i hope i eventually put them to good use. for now, i definitely think of these pieces as studies, in case you were getting concerned.

dictionary sex

i was a bit concerned about using the above title for this post, because obviously it opens me up as a target for the spam hordes. however, i trust my filters, and the title is just too apt.

i have been working on some python code that makes music using a statistical model called a markov chain. with all the running around i have been doing of late, i have had little time to document the past few days' work.

in addition to the problem of racing against time, i have to contend with the added issue of documenting partially working or completely non-working code. this has been an issue with analog projects as well. i think my future strategy will be to document failures as well as successes, so other makers who read my blog (all two of you, one being my mom. hi mom!) have the opportunity to see me fail -- er, learn from my mistakes.

"dictionary sex" is an appropriate title for what i've been trying to achieve with this application. i have not quite gotten there yet, but i've only really been debugging it for the past two days (today will be the third). the idea is simple: analyze a sound file, and generate a data file from it. take two of these data files and make a third by mashing them together. use the third file to generate a new sound file that has equal properties from both parents. cute, huh? now if only i could get this to work and not take a million years, that would be great.

i have been looking at ways in python of saving data into files for later applications to use. the pickle library seemed like a reasonable place to start, and soon i moved to cPickle because it operates so much faster, being implemented in c. however, despite being up to 1000x faster than pickle, it's still a very long process to save such huge amounts of data into a file. depending on the type of analysis, these files can be much larger than the original wav file. this is a problem i'll attend to later.

so i started this post three days ago, and as i finish, it's no longer a sob-story. i have actually achieved dictionary sex. here is an example of what i'm talking about:

in this recording is a ménage à trois of transition spaces, each coming from a different sound source as well as different analysis parameters. from a single dictionary file, i can generate an arbitrarily long sound file. frankly, i don't remember what the different parts were anymore. here's the code: markovSamp_4.py and markovCat.py.

that the cessation of suffering is attainable

otherwise known as the third Noble Truth, one of the four Truths the Shakyamuni Buddha brought to us after his experience under the Bodhi tree. in Pali, this concept is summed up by a single word: nirodha.
in addition to pointing to this salient idea, nirodha is also the name of one of my computers. it is part of a cluster of four computers, the other three of which are named after the other Noble Truths. the cluster as a unit is called the cattri, Pali for "the Four." more information on my cluster can be found here. nirodha's hard drive failed during an installation of Sanction of the Victim, a LAN composition that involves running processes that are directly at odds with one another. since all the computers in the cattri are more or less identical in hardware, and since i have a ton of identical spare parts and identical spare computers lying around, i decided to ghost dukkha (that there is suffering), over ethernet, into a new machine, renaming it nirodha.

to begin, i used a live cd for knoppix v5.1.1 because later versions apparently don't come with sshd installed. since i still have no WAN connection for my cattri router, i needed to use a livecd that would be ready to go without using apt. plus i like knoppix.
while this version of knoppix came with sshd already installed, i needed to generate the host keys and start the daemon. i kept running into the following error: "could not load host keys." finally, i just went through the gui and started sshd using their menu option, but not before generating the host keys.
running off the livecd, i used dd over netcat to ghost dukkha's drive. the command pair i used was:
(target): nc -l -p 7000 | gzip -dfc | sudo dd of=[TARGET_HARD_DRIVE_NAME]
(source): sudo dd if=[SOURCE_HARD_DRIVE_NAME] | gzip -cf | nc [TARGET_IP] 7000 -q 10
i started the target code first, because it acts as a listener. there's no visual output (!) while this process happens. my pair did it in 1742.15 seconds, at a rate of 11.4 MB/s.
after rebooting nirodha into the ghosted drive, the machine was an exact clone of dukkha. well, almost exact. the ethernet interface had for some reason been renamed from 'eth0' to 'eth1.' evidently this is common with ghosting (most likely udev's persistent-net rules pinning the old interface's MAC address to eth0, so the clone's NIC gets bumped to eth1). check out post #14 on this board. using 'ifconfig -a' i found the device and renamed it in /etc/network/interfaces.
finally, to change the hostname from dukkha to nirodha, i ran the following (from this documentation):
hostname nirodha
sudo sh -c 'echo "nirodha" > /etc/hostname'
sudo /etc/init.d/hostname.sh start

and now i have a new computer! ^.^

redundancy

click here to listen to the mp3.

i added a new feature to my monstrous sample-level markov chain script. this one lets the user specify a number of repetitions per unit of the chain. the result sounds a bit like waveset stretching, but it's different because it's not based on zero crossings and because it generates new material rather than simply slowing down playback.

my source material comes from yesterday's field recording. i think they're shaping metal with some kind of pneumatic power tool.

sorry this is so short but i am so very exhausted. more documentation will come tomorrow.

singularities

this post continues the idea from my previous post about massive irrationally related sets. one impressive thing about these sets is that they extend infinitely and never repeat, despite starting out completely aligned. another way of saying this is that each set has only one singularity. we can assume that any set of irrationally related periodic signals, extended in both directions of time, will have one and only one singularity. but how can we predict when that singularity will occur? if our set of periodic signals happens to be an equal temperament, we can use the following formula to delay the singularity by one cycle of the lowest signal in the set:

for every signal x: delay_x = 1/f0 - 1/x

where f0 is the frequency of the lowest signal in the set, and x is the frequency of the signal to which we apply the delay.

to delay the singularity by more than one cycle, simply replace those 1's with the number of cycles.

we can now generate sequences of mostly periodic signals whose phase we occasionally manipulate to get singularities whenever we want them.
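a quick numeric check of that delay trick, using a 12-tone equal-tempered octave as the set (my own choice of gamut):

```python
# delay each signal so the shared singularity lands at one cycle of
# the lowest signal, per: delay_x = 1/f0 - 1/x
f0 = 1.0
freqs = [f0 * 2 ** (k / 12) for k in range(13)]  # one octave of 12-tet
delays = [1 / f0 - 1 / x for x in freqs]

# with these delays, every signal peaks at t = 1/f0: the peaks of
# signal x fall at delay_x + n/x, and n = 1 gives delay_x + 1/x = 1/f0
alignment = [d + 1 / x for d, x in zip(delays, freqs)]
```

the lowest signal gets a delay of zero, and every other signal is nudged back just enough that its first full cycle closes exactly when the lowest one does.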

to make this improvisation, i not only play with the definitions of the streams as they run, but also the synthdefs. among other things, i mess around with the probability that a grain will be muted. as you can see from the commented-out bits, i was trying to create fadeIn and fadeOut routines to automate this with reflexive code.

irrationality

ah, irrationality. possibly my favorite aspect of the human condition...

consider equal temperament. the geometric series that coincides with an n-tone equal temperament scale is highly irrational, with ratios based on the nth root of 2. since 2 is prime, its nth roots are irrational for any n > 1, so all equal tempered gamuts are based on irrational numbers. for a while now, people have been arguing over this compromise in music, specifically in the context of tuning. for reference, i direct the interested reader to google. i would say this is one of the great controversies in the field of music. personally, i tend to use just intonation in my work, because i love how it sounds. often, a piece written in just intonation simply pulls on my emotions in ways that equal temperament just doesn't. as a piano player, i also love the tuning system that grand pianos use, which contrary to popular belief is not equal temperament. generally, a well tuned grand piano uses a stretched octave due to the inharmonic overtone structure of piano strings.

another reason i often use just intonation in my music is that i have the tendency to re-apply ideas from one domain, such as tuning, to another, such as rhythm. harmonies are determined by whole number ratios in just intonation; the parallel in rhythm is the polyrhythm. but a year or so ago, i started playing with massive sets of equal tempered sines. the result is an infinitely expanding wavefront, made up of many smaller waves, which start out perfectly aligned and march inexorably into entropy.

click here to look at the code. (apparently from some kind of tutorial i was writing at some point)

so i've been thinking recently about massive, irrational, rhythmic structures. this has led me to the following patch:

it's made up of a set of 100 irrational relationships, each one mapping to a different place in a soundfile, and a different rate of occurrence. since these rates begin completely in phase and theoretically will never line up again (remember, computers can offer only finite representations of values...), we get the same richness, but in a different domain. more on this in the next hackpact entry...

the universe is also in brooklyn

this is yet another unfinished piece, so please keep that in mind. the main pulse is divided into fives. i also experiment with spatializing the sound using allpass filters. i have to wake up in 3 hours, so sorry if this seems curt. g'night.

recursion

i wrote much of today's patch on the bus back to NYC. i'd done some experiments in the past using sc3's JITlib to make recursive patterns, so i decided to start there and see where i ended up. i kind of like where that paradigm took me, especially with the drums. i'll definitely be revisiting it in the future.

* * *

here's a recording of a somewhat cleaner version of that patch. i removed some of the reverb from the low end and softer bits. (thanks to randall for the suggestion!) by adding a gate before the reverb, and then mixing the original (ungated) material with it, i was able to make the quieter parts drier. to remove some of the low end muck, i used a 2nd order high-pass filter before the control signal for the gate. this way, the low end has to work harder to open the gate. in sc3:

SynthDef(\master, {|in = 16, out = 0, wet = 0.125, thresh = 0.1|
    var input, signal;
    input = InFeedback.ar([in, in+1]);
    signal = Compander.ar(input, HPF.ar(input, 1600), thresh, 10, 1, 0.01, 0.01);
    signal = Mix.ar([
        FreeVerb2.ar(signal[0], signal[1], 1, 0.7, 0.9) * (wet ** 2),
        input * ((1 - wet) ** 2)
    ]);
    signal = Limiter.ar(signal, 0.9);
    Out.ar(out, signal)
}).store;

mulch, dust, talc

so in addition to working yet another 8 hour shift, on less than 3 hours of sleep, installing security cameras in a manhattan apartment building, today i also drove for 4.5 hours to get to my parents' house in pennsylvania, arriving just before midnight. still, i've made this hackpact, and i have every intention of keeping it. just please forgive me if today's seems a little smaller in scale.

</preemptive apology>

i have updated my markov_samp.py to allow for a randomly varying depth. this breaks up some of the synchronicity of the resulting mulchy textures. previous versions of the code only allowed the user to select a single value for the depth parameter, which sounds a lot more angular and choppy when applied to a sample-level markov chain. in the new alternate version, this value is a maximum. the patch is a bit computationally intensive, especially when analyzing larger soundfiles, so the single-second field recordings i've been messing with are pretty ideal fodder.
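the random-depth lookup might be sketched like this. the chains-by-depth layout and the depth-1 fallback are my assumptions, not how markov_samp.py actually stores its analysis:

```python
import random

def next_sample(history, chains, max_depth, rng):
    """pick a successor using a randomly chosen depth up to max_depth.

    `chains` maps each depth d to a dict of d-tuples -> list of
    successor samples (a hypothetical layout for illustration).
    """
    depth = rng.randint(1, max_depth)
    key = tuple(history[-depth:])
    successors = chains[depth].get(key)
    if not successors:
        # fall back to depth 1, which is dense for real audio
        successors = chains[1].get(tuple(history[-1:]), [0])
    return rng.choice(successors)
```

varying the depth per step is what softens the angular, choppy quality of a fixed-depth chain: shallow steps wander more, deep steps quote the source more literally.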

i made a short audio sketch out of some material i got from this patch. i'd like to revisit it sometime soon; i feel like i'm only scratching the surface of what it could do. the sketch uses a single second from that night in berkeley.

you can look at the code here.

you can listen to the piece here.