pillaged again!!

Stereo carnage porn shots:

my bbe rack unit died the other day.  it took me this long to tear it apart.  i think i just forgot it broke, honestly.

again there's a ton more stuff i got out of the unit but these are some highlights.

photo-43.jpg

look at that knob!  that long thing is a sweet button that clicks.  the boxy thing on the right of the photo is a 1/4" phone jack!  i have three now.  i'll start making pedals, um... IMMEDIATELY.  who wants one??

photo-44.jpg

yarrr!  check out these ICs i got out of the board too!  on the left are all quad op-amps.  (!!!!!)  on the right are some weird things i can't seem to find any info on that isn't entirely in Korean.  two of them say BBE Aiwa on them, so i assume those are sweet filter circuits.  the bottom one is some kind of memory circuit, from what i gathered from the data sheet.  i believe it has a microcontroller in it.  not sure.

those op-amp ICs are gonna be useful for my final, perhaps in conjunction with tubby.  who would have thought this thing would be so useful when i got it for $20?  it had been thrown out and re-appropriated once before that.  it's crazy that people just throw this stuff away when we can still make cool instruments and things out of it!

spring thing

i played a set on sunday in brooklyn.

it was largely through-composed material made in supercollider, as my previous sets have been.  the style reflected a lot of the aggression my music has been building up over the years.  i don't think it likes me very much, or you, for that matter.

anyway, the sound went through a mixer before going to the final mixer and my buddy's sound system, and on this pre-mixer was an aux channel linked to the sculpture thing i'm holding in the picture.  on the bottom inside of that tupperware base is a contact microphone, which i hooked into a channel on the pre-mixer.  the aux send went out through a hand-held piezo driver and disk, which i would touch to the springs at different locations, playing the sound through the sculpture in different ways.  it's kind of like spring reverb (kinda like dub), at least when it's not on stage subject to a large PA.  most of the sound i got was feedback, shaped by the springs.  which was cool, but clearly this thing can handle more.  i would like to try it again sometime soon, possibly solidifying the construction and giving it a name of some kind.  at this point i'm kind of into Tubby as a name, after the King, of course...

photo-42.jpg

MuBo

the unofficial title of my p-comp final with Natacha.

we are still playing with ideas, which feels good for this project. since this is a prototype for a much larger extracurricular project, i don't feel as worried about settling on This-One-Thing to look into and just executing it. instead of power-walking, we dérive. sorry, that was cheesy. ANYWAY-- the point is that we're improvising with several different design concepts for the construction of this project.

at this point we are very interested in magnetic tape as a medium. i was drawn to this not just for the frail physical nature of the form but also because it might free us from using the computer to generate sounds in realtime. also the sculptural element would be neat and might involve big clunky gears, which we also like. we are also interested in how sounds of non-synthetic origin work once transduced through our piezo situation. all this is not to mention the fact that if this thing could stand alone we might sell it and its siblings after the show... i really like the idea of using a balloon with some beads in it as our first object, but the idea of slashed-up distorted little paper speakers also seems kind of cool.

either way, this is what we've got in terms of building blocks:

  1. a working driver circuit and several piezos that definitely work with it.
  2. a few dismantled tape heads, one of which might still work, although i still have to hack it.
  3. several ideas for superCollider code that would still seem cool for playing through objects.
  4. a source for pre-fab boxes.

it seems like we just need to keep playing with our materials until we get something we like. i hope that doesn't take long, since i need something to demo on thursday. at the same time, i understand that this project (as many of my projects end up) exists outside of the classroom to a large extent, and while my classes inform me with techniques, theory and criticism, the timeline and restrictions that really count are imposed on me from the world outside.

that being said i'd rather not fail physcomp. so demo by thursday it is!

oh here's a driver circuit if anyone's interested. they're crazy easy to make.

photo-38.jpg

or alternatively (if the photo is hard to decipher):

photo-39.jpg

also while im at it, the coolest things happen when you dismantle a stereo...

photo-40.jpg

a tape head.  the other one didn't turn out as pretty.  i definitely felt like a turn-of-the-century dentist.

or a barber, i guess.

also there were some other cool looking components i managed to desolder and rip out of that box.

an incomplete picture of the booty. YARRR!

photo-41.jpg

just to give you a sense of proportion for that HUGE-ASS knob (it has both a motor and an led on it!), and also those MAMMOTH capacitors, i have put their adjectives in caps.   also very stoked about the rca ports...  mmmm...

L-Systems Project

so i have completed what i consider to be a working model of how to implement Patrick Dwyer's L-Systems class with OSC events feeding into supercollider.

here are my notes during the last stretch of the planning phase, which should give you an idea of my strategy.

we need a new function for L_System2 that will take a character as its input argument.

its sole job is to send appropriate OSC messages into supercollider based on the instruction character.

this function will be deployed immediately prior to the rendering for-loop which draws the entire shape.

this function will have the effect of passing the difference between the old + new frames in terms of the instruction used to generate it.

this way, both the visual and audio sentences can be generated in realtime, synchronized.

sound "consciousness" -- event based; moment-to-moment;
visual "consciousness" -- cumulative; persistent structural residue;

sound provides momentary richness and nuance to a gradually maturing visual structure

instructions have two 'dimensions', r & theta. a change in theta does not explicitly create a new event, but the change is stored for the next change in r.
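
to make this concrete, here's a minimal sketch of the superCollider side, assuming the new function sends each instruction character to a "/lsys" address (the address, synth, and characters are my own placeholders, not the actual patch, and OSCdef is the current-SC way to listen):

(
s.waitForBoot {
// a throwaway synth standing in for the real instrument:
SynthDef(\lsysPing, { |freq = 440|
var sig = SinOsc.ar(freq, 0, 0.2) * EnvGen.kr(Env.perc(0.01, 0.2), doneAction: 2);
Out.ar(0, sig ! 2);
}).add;

// fires once per instruction character as the rendering loop runs:
OSCdef(\lsys, { |msg|
switch(msg[1].asString,
"F", { Synth(\lsysPing) },    // forward move -> note event
"+", { "turn left".postln },  // turns just update state
"-", { "turn right".postln }
);
}, '/lsys');
};
)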

***

that last bit about r & theta only applies to the first trial. my newest strategy has moved to substituting lexical entries with classes in proce55ing that will produce animations and sounds of their own accord, with the L-System providing a structure for their deployment. I have already completed two musical L-Systems. The first one, simply titled "L-System2", is a simple triangle fractal where turns correspond to a change in parameter and a forward move corresponds to a note event. the note events are a simple fm instrument built from two sine oscillators. the parameter changes nudge either the carrier or modulator frequency by a very small amount (~1 Hz) depending on the degree of the turn: 30 degrees = 1 Hz. since the moves happen quickly this sounds a bit like a synthetic version of Alvin Lucier's "Silver Streetcar for the Orchestra", wherein he instructs the performer to play the triangle at varying speeds, gradually progressing the beater all the way around it. to put it another way for those who haven't heard that piece, it sounds like someone is hitting a bell in a slightly different spot each time at semi-regular intervals. needless to say it's pretty minimalistic in visual and auditory representation.
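
for reference, the note event is about this simple (my own reduction; names, envelope times, and starting frequencies are guesses, not the actual synthdef):

(
SynthDef(\lsysFM, { |car = 330, mod = 110, amp = 0.2|
// two sine oscillators: the modulator wiggles the carrier's frequency
var sig = SinOsc.ar(car + (SinOsc.ar(mod) * mod)) * amp
* EnvGen.kr(Env.perc(0.005, 0.15), doneAction: 2);
Out.ar(0, sig ! 2);
}).add;
)

// a 30-degree turn nudges the carrier (or modulator) by 1 Hz before the next note:
~car = 330;
Synth(\lsysFM, [\car, ~car = ~car + 1]);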

the second one is far more complex, and has some personality to it. I call him Sasu, after Vladislav Delay / Luomo / etc., which are all projects of Sasu Ripatti, a Finnish computer musician. Sasu (the L-System) looks a bit like a brown worm with long white proboscises coming out of him. these spokes each have blinky green pods on the ends, resembling eyes somewhat. with age they wither into his body and turn dark red. his body never actually moves, it just grows, so eventually he looks more like a plant than a worm. his spokes add up to look something like a dandelion seed-pod, and his body gets wiry and dark in the older parts. his sound is made up of little chirps and clicks. the high chirps occur when a pod forms. because of his complexity, proce55ing slows down as he grows, so that his song is in a constant state of ritardando. since his chirps are actually synchronous impulses and formlets, the slowing down produces a kind of phase effect as the tempo of the note events slides in its relationship to the frequency of the micro-events comprising the notes. also the occasional bass note is triggered to add weight, although i might end up taking it out because it sounds quite Autechre-ish and that may not be what i'm going for... that being said, obviously i'm a hopeless Autechre fan. have been since 1996, so don't try to stop me, it's too late. that's right kids, get your glowsticks out while you still can feel your hands!
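
the chirps boil down to something like this (a sketch with made-up rates and formant values, not Sasu's actual voice): an impulse train ringing a formlet, so as proce55ing slows the trigger rate down, its relationship to the fixed formant frequency slides around.

// slow the impulse rate down to hear the ritardando-phasing effect:
{ (Formlet.ar(Impulse.ar(8), 2400, 0.005, 0.04) * 2).dup }.play;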

ok enough of that.

with this new method i hope to be able to generate more complex graphics and sound at less cpu expense. since the events triggered by the l-system will be semi-autonomous and much richer by themselves, perhaps i won't need to work with as complex a terminal sentence. we shall see.

baby sasu

baby sasu. isn't he precious??

growing up

he gets fatter and fatter. such a big eater!

adolescence?

ooo! what a rebellious teenager! just blinking and chirping away...

adulthood

my how you've grown! soon you'll have your own place and a CAREER or two. deadbeat.

old man sasu

old man sasu. this is as old as i've grown him. he grew overnight, playing thru some piezos. he was still growing, albeit very slowly, when i closed the applet... see you on the other side, brother.

recess dismissed

i'm doing a series of tracks that are remixes of one another, exploring some of the new patches i've written recently. the most recent one is "recess dismissed". the one before that was "feast and resist", and prior to that it was "tease and be kissed". ONE of these things will eventually be named "cease and desist", which would be the title track to the album if i actually follow through with this idea. given the amount of crazy that's going to happen this coming week with regards to Red's class, i might just suffer a stroke instead. tonight i enjoy some of the last few moments i have to myself for the next week. wild. i should probably be sleeping. instead i'm cleaning my room and also my computer, backing my stuff up to dvd so i'll have more portable space to work with.

ANYWAY

the new patches i use on this one are as follows:

fractional waveshaper (see blog entry)

thrash distortion suite (see blog entry)

constant q transform

the constant q thing is a new concept for me. actually it's an old concept, perhaps 'the' concept, but it gets implemented in a way that is so simple and elegant i can't believe it didn't occur to me earlier. so the signal gets broken up by a logarithmically scaled filterbank, which basically means it measures the volume of frequencies that are spaced evenly across the perceptual field, unlike with FFT where it's linearly based. as a result, fewer approximating bands need to be summed for a similar quality result. that means i can do less computation and get apparently the same result. at this point in time i can also save samples of analysis data directly and manipulate them in realtime. granted, these things don't sound very "realistic", but their application extends beyond realism very quickly. currently the filter coefficients are stored as separate buffers for each layer, and the player reads them and outputs 43 sinewaves, at 16 bins / octave. notice how it ignores phase. also notice how it will inevitably produce harmonic spectra. while this is interesting, it's not necessarily universally desirable. eventually i plan to use this model of behavior, however, for applications to matching pursuit thesauri.
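
to make that concrete, here's the resynthesis half reduced to a skeleton (placeholder analysis data and a made-up base frequency, not my actual player):

(
// 43 bins at 16 per octave, log-spaced up from a base frequency;
// ~mags stands in for one frame of stored filterbank magnitudes:
var base = 55, bins = 43, binsPerOctave = 16;
var freqs = (0..bins-1).collect { |i| base * (2 ** (i / binsPerOctave)) };
~mags = Array.rand(bins, 0.0, 0.05);
// note it ignores phase, exactly as described above:
{ Mix(SinOsc.ar(freqs) * ~mags).dup }.play;
)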

SC3.1!!!

there's a new version of superCollider.

there are some pretty significant changes/improvements, so wise up.  two things that stood out to me within the first 5 minutes of operation:

1. accessing help files is different: instead of cmd ? it's cmd d.  cmd shift d will give you a cute gui for help.

2. new fft implementations are more cpu cost effective and also more extensible.  this is totally sweet for what i'm working on right now, by the way.  couldn't have come at a better time, honestly.

also if you do the Quarks thing, you have to execute that terminal command again to access svn.

so far, it looks awesome!  a big "thank you" to all who made this happen!

fractional waveshaper

I designed this superCollider applet after the method described in Roads, 1996 and De Poli, 1984.  It produces a transfer function in the form of a buffer that is the ratio between two Chebyshev polynomial sums.  On the left side is a table of 256 values corresponding to the polynomial orders of the numerator sum, and on the right is a table of 256 values corresponding to the polynomial orders of the denominator sum.  To update the buffer in realtime, first allocate it with the button, then press the same button again to start streaming into the buffer.  Press it a third time to stop streaming.  A popup menu to the right of this button denotes the buffer index number where the transfer function can be found.  Also included is a "plot" button, for a visual read-out of the current transfer function.

the initial screen:

startup

edit the amplitude tables, select the buffer to allocate with the popup menu to the right, and hit the "allocate" button:

draw with the tables

stream into buffer:

stream into buffer

stop streaming to conserve processor power:

pause the streaming process

plot the transfer function:

plot the transfer function

/*Fractional Waveshaper.  The graph on the left side is the numerator function, the graph on the right is the denominator.  Be sure to allocate the buffer before playing with this thing.  It will only update the buffer if you told it to start streaming into the buffer.  While in "streaming" mode, it will update the buffer twice a second.  This is to ensure the server doesn't lock up.  When you're not updating the values, stop the stream.  Take it easy on the MultiSliders; going too fast won't fuck up the server like in previous versions of the patch, but it does take up a lot of processing power and will probably freeze everything for a second if you get too excited with it.  If you're curious as to what the current waveshaper function looks like, press the "plot" button for a printout.  Enjoy!!!  (© 2007 Joseph Mariglio) */

(
var size, window, sliders, sliders1, init, init1, bufSelect, bn, button, buf, streamer, numerator, denominator, quotient, plotButton;

//make a window with two multislider views:

size = 256;
window = SCWindow("fractional waveshaper", Rect(620 , 450, 10 + (size * 4), 80 + (size * 2)));

sliders = SCMultiSliderView(window, Rect(0, 0, size * 2, size * 2));
sliders.action = {arg xb; ("index: " ++ xb.index ++" value: " ++ xb.currentvalue).postln};
sliders1 = SCMultiSliderView(window, Rect(size*2 +10, 0, size * 2, size * 2));
sliders1.action = {arg xb; ("index: " ++ xb.index ++" value: " ++ xb.currentvalue).postln};

//initialize multisliders with arrays:

init = Array.new;
size.do({arg i;
init = init.add(i/size);
});

init1 = Array.new;
size.do({arg i;
init1 = init1.add(i/size);
});

sliders.value_(init.reverse);
sliders1.value_(init1.reverse);

//some cosmetic considerations:

sliders.isFilled = false;

sliders.xOffset_(5);

sliders.thumbSize_(12.0);
sliders1.isFilled = false;

sliders1.xOffset_(5);

sliders1.thumbSize_(12.0);

sliders.valueThumbSize_(15.0);
sliders1.valueThumbSize_(15.0);

sliders.indexThumbSize_( sliders.bounds.width / init.size );
sliders.gap = 0;
sliders1.indexThumbSize_( sliders1.bounds.width / init1.size );
sliders1.gap = 0;

sliders.strokeColor_(Color.blue);
sliders.fillColor_(Color.yellow);
sliders.background_(Gradient(Color.new255(135, 206, 235), Color.new255(74, 112, 139)));

sliders1.strokeColor_(Color.blue);
sliders1.fillColor_(Color.yellow);
sliders1.background_(Gradient(Color.new255(74, 112, 139), Color.new255(135, 206, 235)));

//make a popup menu to select buffer number:

bufSelect = SCPopUpMenu(window, Rect((size*4/3)+15 + (size * 2) - 80,40 + (size * 2), 60, 20));
bufSelect.items = [
"b0",
"b1",
"b2",
"b3",
"b4",
"b5",
"b6",
"b7",
"b8",
"b9",
"b10",
"b11",
"b12",
"b13",
"b14",
"b15"
];

//make a button for the allocation and streaming into selected buffer, and another button to plot the buffer:

button = SCButton(window, Rect(size*4/3, 40 + (size * 2), 10 + (size * 2) - 80, 20));
plotButton = SCButton(window, Rect(size*2, 15+(size*2), 120, 20));
button.states = [
["allocate b#:", Color.white, Color.black],
["stream into b#:", Color.black, Color.green],
["stop streaming", Color.black, Color.red]
];
plotButton.states = [
["plot", Color.black, Color.white]
];
buf = Buffer;
streamer = Task({
loop({
//just refill the already-allocated buffer; re-allocating here every half second would spam server errors:
buf.loadCollection(quotient);
wait(0.5);
});
});
plotButton.action = {
buf.plot;
};
button.action = { arg butt;
butt.value.postln;
if( butt.value == 1,
{if(buf.isKindOf(Buffer), {buf.free}); //free the old buffer only if one was actually allocated
buf = Buffer.alloc(Server.local, 1024, bufnum: bufSelect.value);
"allocated buffer".postln;
bn = bufSelect.value});
if( butt.value == 2,
{streamer.start;});
if( butt.value == 0,
{streamer.stop;
numerator = Signal.chebyFill(1024, sliders.value);
denominator = Signal.chebyFill(1024, sliders1.value);
quotient = Signal.newClear(1024);
quotient.waveFill({|x, i| numerator[x]/(denominator[x]).max(0.0078125) }, 0, 1023);});
};

//now for the "fractional" part:

numerator = Signal.chebyFill(1024, sliders.value);
denominator = Signal.chebyFill(1024, sliders1.value);
quotient = Signal.newClear(1024);
quotient.waveFill({|x, i| numerator[x]/(denominator[x]).max(0.0078125) }, 0, 1023);

//everytime a slider is touched, update the equation with the new values:

sliders.action = {    numerator = Signal.chebyFill(1024, sliders.value);
quotient.waveFill({|x, i| numerator[x]/(denominator[x]).max(0.0078125) }, 0, 1023);

};
sliders1.action = { denominator = Signal.chebyFill(1024, sliders1.value);
quotient.waveFill({|x, i| numerator[x]/(denominator[x]).max(0.0078125) }, 0, 1023);

};

//show the window in front:

window.front

)

Running the above code in superCollider 3 will get you the applet.  To waveshape a sound, I recommend my ChebyShapers 2.0 applet, or you could just make your own; they're relatively simple.  Be sure to include a DC notch at the end of it, and possibly some amplitude normalizing.
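If you'd rather roll your own than grab ChebyShapers, the playback side can be this small (a sketch assuming the transfer function ended up in b0; note the buffer holds raw values, not wavetable format, so BufRd with the input mapped to an index does the lookup):

(
{
var in = SinOsc.ar(110, 0, 0.8);                        // any source works here
var shaped = BufRd.ar(1, 0, in.linlin(-1, 1, 0, 1023)); // index into the 1024-point table
LeakDC.ar(shaped).dup * 0.2;                            // the dc notch, plus crude leveling
}.play;
)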

This waveshaper provides extremely gruesome distortion.  Fractional Waveshaping has been found to be capable of such timbral effects as exponential spectra and spectra that resemble a damped periodic function (Roads 1996).

Feel free to use and distribute, but please keep the header commentary, including the copyleft info.

Chakra Energy Ball

Sorry I haven't had the time to post about this project.

The basis for the idea came from thinking about Muzak, specifically in terms of John Sterne's "The Non-Aggressive Music Deterrent". To summarize his critique, Muzak is a threat to community ethos because it represents a corporate agenda through the sound of spaces. How can Muzak be different? How can we change the concept of "public music" to represent a community's emotional/psychic state?

Another starting point was photography. We wanted to look into photographic techniques to find ways of photographing Auras, for representations of the psychic profile of a person. Eventually, these two ideas merged to form something altogether bizarre. What if there were a public fixture that represented the aura of its community through light and sound?

We started with ITP as our test community. ITP is extremely well-blogged and otherwise documented on the internet. So we wrote a text filter that would read the RSS feeds of all the blogs and listservs and come up with a psychic distribution between the seven chakras. This filter would run passively every 24 hours, looking only at new information.

The distribution file would be picked up by a Proce55ing applet, which interprets it into color and sound information. The colors were at first represented on the screen, and eventually sent over serial as RGB values for 1-watt leds to interpret. The sound is generated in SuperCollider, which Proce55ing sends OSC messages to.

At the time of this post, we have an orb-like fixture whose color changes based on the distributions and sound that comes through the computer running the applets.

More documentation will happen once we present the prototype in approximately 20 minutes.

ps

For those who are so inclined, here is the synthDef file for SuperCollider to interpret the OSC messages with. It should provide some information about what kinds of sounds this thing will be making. (For those less inclined to SC3: it will be producing binaural drones with difference tones of 2-5 Hz (theta range) related to the harmonic series. each chakra is correlated to a different harmonic. A ringing filter emphasizes the harmonic related to the color being shown. Also some upper harmonic material is generated by running white noise thru a resonant filter bank with coefficients corresponding to each of the seven Greek modes. these modes are also related to each of the seven chakra-states. I coded the modes in just temperament to reduce beating unrelated to the binaural effects we were shooting for.)

SynthDef("chakraDrone", {|out = 0, f0, f1, f2, f3, f4, f5, f6, amp = 1, fund = 32, nlvl = 0.1, color = 3, chVol = 0.0625|
var signal, signal0, noise, lvl, diff, harm, modes, chorus;
lvl = [f0, f1, f2, f3, f4, f5, f6];
diff = [2, 3, 4, 5, 6, 7, 8];
modes = [
[1, 1.125, 1.25, 1.422, 1.5, 1.667, 1.875], //lydian
[1, 1.125, 1.25, 1.333, 1.5, 1.667, 1.875], //ionian
[1, 1.125, 1.25, 1.333, 1.5, 1.667, 1.8], //mixolydian
[1, 1.125, 1.2, 1.333, 1.5, 1.667, 1.8], //dorian
[1, 1.125, 1.2, 1.333, 1.5, 1.6, 1.8], //aeolian
[1, 1.067, 1.2, 1.333, 1.5, 1.6, 1.8], //phrygian
[1, 1.067, 1.2, 1.333, 1.422, 1.6, 1.8] //locrian
];
chorus = DynKlank.ar(`[Select.kr(color, modes)*fund*8, nil, (0..6).collect{7}], WhiteNoise.ar(chVol));
signal = (0..6).collect{|i|
FSinOsc.ar([fund*(2*(i+1)), fund*(2*(i+1))+diff[i]], 0, lvl[i]);
};
noise = PinkNoise.ar(1)*nlvl;
noise = noise.dup;
signal0 = signal;
signal = Mix.ar(signal ++ noise);
harm = Ringz.ar(signal, Lag.kr((color+1)*fund, 16), 0.1);
signal = Mix.ar([harm, signal0, chorus.dup]);
signal = Limiter.ar(signal, 0.98, 0.01);
Out.ar(out, signal*amp)
}).store;

FTP full

so I've managed to fill up my allotted ftp space within one month.  i'm gonna see if i can get more space to finish uploading the second set, etc.  also most homework assignments get posted to my ftp, so there's a great excuse waiting to happen...

set the second

i am posting my second set, "1", here. i've encoded it to vbr mp3 so it's smaller but should be high enough quality for anyone who might want to hear what it is that i do. sometimes.

pep talk

I'm posting my email of inspiration to my fellow "chakra group" folks on my blog because I know some of them read my blog and not their email. also it provides a link to a beautiful composition by Ms. Radigue that I have placed on my ftp for reference/poetry/entrainment. additionally, i like to compile my personal rants (about natural resources, microtones, idiom, aliens, FOSS, oddmeter, wavelets, socialism, etc etc etc) so that people see just how paranoid i actually am. thus:

i feel like it's time for a pep talk.
i'm uploading a composition by Eliane Radigue to my sftp to give you guys an idea of what im thinking of for the sound. also you should check out the La Monte Young dreamhouse project that Hans mentioned. These are examples of simple sounds that produce extremely complex reactions to human bodies. I think both of these composers will offer a source of richness that perhaps goes beyond the "close your eyes and relax" nature of other music that might seem at first to be similar. I want to make this clear: I am a composer of experimental music. I am not interested in creating the sound for this thing only in terms of what people are going to be comfortable with. What we got with the demo last week was a palpable physical reaction; I say this is unequivocally a good thing. My discovery was that a 30 hz difference tone (the highest one) is way too high powered to be messing with in this project, so i'm sticking to theta waves. I want to stick to brainwaves because I feel that the literature on chakra relationships to sound are highly context dependent, whereas the relationships from chakra to brainwave and brainwave to sound are not. Thus when the literature tells you some arbitrary hertz value relates to a chakra, that person has already assumed the context of a particular key in music, in addition to 12 tone equal temperament. Why? Because that's convention. We are not here to be slaves to convention. 12 tone equal temperament is a fairly modern invention, and in my opinion, has very little to do with sound. It is a social phenomenon that happened when a bunch of westerners decided they figured out the secret to making music 'universal'. problem is, the math is so complex that our ears and bodies don't resonate with it. this is because when you divide an octave into twelve equal parts you get intervals based on the 12th root of two. this is an irrational number. the relationships are a compromise to allow for transposition into any key. this has nothing to do with Pythagoras, who had his own scale, based on simple relationships between notes. Also the reason why we have 7 notes in a typical scale is because of convention. the "we" in that statement mainly refers to the west. Chinese musicians discovered the heptatonic scale centuries before "we" stopped lopping each other's heads off and boiling the blood into pudding. They decided, however, that this system was not as pure as the pentatonic (5 note) scale. I could go on forever about tuning. The point is, I am a person who has studied sound almost all of his life. I have a very good understanding of how this project is going to go in terms of sound. I welcome criticisms and commentary and the like. But I don't think I could put my name on this when we're finished if I'm not satisfied that the sound is as good as it can be. people are often made uncomfortable by things they're not used to. perhaps in the field of visual design this happens less often because sound is a 'wet' medium. Sound has the ability to make people feel in ways that vision cannot. You cannot 'look away' from a displeasing sound like you can a displeasing painting. Sound vibrates your entire body and can change you. This is why our demo got people gasping and totally immersed in our experience. The whole thing would be much less convincing if we left the sound out or compromised it in a way that belittles its role. Also recall that everyone who commented about the demo commented about the sound. 
Our professor gave us a reference to a composer whose work can make people nauseated as well. This is not to say that nausea is a goal, but palpable physical experience is. And nausea sometimes results. my perspective is that people often respond negatively to things they aren't used to. Certainly, I would expect that no one attending ITP is having intense religious experiences every day. These are the kinds of things that shake people into health. Spectral surgery, if you will. Which are we looking for? Do we want Kenny G's greatest Christmas hits or do we want to align people's kundalini fields? To the extent that it's the latter, I'm in. To the extent that it's the former, I'm out.
JM

ps
(for your reading pleasure)
http://en.wikipedia.org/wiki/Eliane_Radigue
http://en.wikipedia.org/wiki/La_Monte_Young
http://en.wikipedia.org/wiki/Tony_Conrad
(quote from this article: "Conrad's most famous film, The Flicker (1966), is considered a key early work of the structural film movement. The film consists of only completely black and completely white images, which, as the title suggests, produces a flicker when projected. When the film was first screened several viewers in the audience became physically ill.")
http://en.wikipedia.org/wiki/Pauline_Oliveros

Matching Pursuit with Time-Frequency Dictionaries

I have just found this lovely article by Mallat & Zhang about granular decomposition based on a dictionary of particles.  This seems to be extremely relevant to my interest in wavelet and Gabor transforms.  A possibility for the distributed computing project?  Hmm... I've always wondered about the possibilities of a variable wavelet transform, where the mother wavelet to be transposed to fit a frequency quantum would be of variable position in the Daubechies series, based on context or other aspects of the analysis... Could the multi-resolution analysis required for such a transform be applied in realtime by a distributed process of sufficient efficiency?  Each layer could be analyzed by a different computer on the network, and the synthesis duties could be scheduled on a first-come, first-served basis, since the processing involved is embarrassingly parallel.  The problem becomes less parallel depending on how context informs the choice in Daubechies type, though...

diy filter squelchbox- thinger

i built an RC low-pass filter with variable cutoff. i used it to make gabba-inspired noisy stuff. i played it out of an old guitar amp. the audio version of the composition has two tracks overdubbed, both live and analog. the video version is totally live. i'll post a picture of my circuit tomorrow. i sort of passed out on the floor for a bit after doing this...

the circuit uses the arduino's pwm out to produce pulse waves. in the parts where the instrument sounds like a 'bass' (sort of) the frequency is fixed for the length of the note, whereas where the instrument sounds like 'drums' (again a loosely applied concept), the length of time between pulses increases during the note event. i also attached an led to the sound output so you get a visual cue as well as audio. originally this was going to do the work of automating certain parameters of the patch using a photocell, and it worked beautifully, but of course the damn photocell broke when i touched it in ways it didn't want me to. after shouting a few obscenities i put a potentiometer there instead. there are three pots on the total instrument: volume, pitch/duration (which was supposed to be controlled by photocell), and cutoff.
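for anyone wiring one of these up, the relevant math is tiny: the cutoff of a first-order RC low-pass is

fc = 1 / (2πRC)

so, to pick some hypothetical values (not necessarily the ones i used), a 10k pot with a 0.1 µF cap sweeps the cutoff down to about 160 Hz at full resistance, and higher as you back the pot off.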

here is the audio file.

lines

an example of the kind of thing I want to work on with the cluster, but designed way small-scale in proce55ing. i have rigged up a function that represents this picture as a moving stochastic spectrum, describing the attributes of a wavelet made up of a pure formant and a noise coefficient with a von Hann window (often called 'hanning', a silly name given how easily it gets confused with the distinct 'Hamming' window). here is a mute, internet-friendly version of this patch.

the spectrum is mapped so that the y axis is frequency (logarithmically warped) and the x axis is purity, with the left being sinusoid formants and the right being banded noise. at the moment the wavelets consist of 32 cycles of their fundamentals. this gives them enough time to overlap nicely, and each one can leave an impression of its spectral content. it also allows for more contrast between the poles of both the x and y axes.

the points are the solution to the intersection of two different clusters of 96 lines each. they are distributed relative to one another in a Fibonacci relationship, and they change direction when either endpoint hits the edge of the screen in an independent, 'bouncing ball' fashion. while this motion seems repetitive and uninteresting when you just look at the lines themselves, when you look at their intersections only, the pattern becomes much more varied. i found this to be ideal for mapping microsound parameters to.
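
stripped of the mapping, one of those wavelets comes down to something like this in superCollider (my own reduction; the parameter names are mine):

(
SynthDef(\grain, { |freq = 440, purity = 0.5, amp = 0.1|
var dur = 32 / freq;                                // 32 cycles of the fundamental
var env = EnvGen.kr(Env.sine(dur), doneAction: 2);  // hann-shaped window
var tone = SinOsc.ar(freq);                         // pure formant
var noise = BPF.ar(WhiteNoise.ar(2), freq, 0.1);    // banded noise
Out.ar(0, (XFade2.ar(noise, tone, purity * 2 - 1) * env * amp) ! 2);
}).add;
)

// one intersection point -> one wavelet:
Synth(\grain, [\freq, exprand(80, 4000), \purity, 1.0.rand]);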

Google class on Distributed Processing

http://code.google.com/edu/content/submissions/mapreduce-minilecture/listing.html

So a few fellow ITP students and I are working on a proposal for building a linux cluster. Mostly I am just along for the ride I guess, at least in terms of building the thing: I don't know anything about linux or networks. I imagine this will be one of those much-heralded 'learning experiences'. I anticipate learning quite a bit about the nature of this technology, which companies like Google evidently believe holds much of the future of computing. I'm personally interested in the affordances of OSC over a network because of a quite poetically written article from the OpenSoundControl.org website which seems to have disappeared (the article, not the website). my plan is to implement a parallel program that will render gigantic Xenako-Varèsian manifolds and careen them into each other. i also have schemes to use it to render crazy-resolution mandelbrots.

Thrash Distortion Suite

I have produced an applet in SC3 that does various types of nonlinear distortion, both of the constructive and destructive variety.  In its most basic form, you can feed it samples and record yourself messing with it in realtime.  Included in the multi-window interface are:

  1. variable nyquist
  2. soft clip
  3. phase distortion
  4. value quantization
  5. 128-band resonator bank
  6. 128th order chebyshev polynomial waveshaping
  7. global limiter

the waveshaper displays its values as a histogram that updates the shaper in realtime.  this is huge.  each box in the histogram represents the magnitude of a corresponding order of polynomial.  the realtime updating allows the user to tweak the controls of a system that can be somewhat counterintuitive.  originally the waveshaper updated every time you pushed a button.  i found this to be unsatisfactory because it doesn't take into account how crazy the system is.  it's just not possible to predict how it's going to behave sometimes, and i found all my waveshaping sounded the same because i never had the chance to do something specific to the spectrum in realtime.  i wrote this applet in response to a request by my friends Dave Mostoller (DJ 800 Death-Sticks) and Andy Aylward (DJ AndyChrist).
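
none of the suite's actual code here, but a sketch of a few of those stages chained together, with made-up parameter values, looks like:

(
{
var sig = SoundIn.ar(0);
sig = Latch.ar(sig, Impulse.ar(8000));   // "variable nyquist": sample-and-hold at 8 kHz
sig = (sig * 4).softclip;                // soft clip with some drive
sig = sig.round(2 ** -4);                // coarse value quantization
Limiter.ar(sig, 0.95).dup;               // the global limiter on the way out
}.play;
)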

to download this thing, click here.

make sure you are running SuperCollider 3 on OSX and your build is up to date.

Navaratri Festival

I'm so glad I got to see the main performances at the Wesleyan Navaratri Festival this year.  T. N. Krishnan is truly a master of the violin.  The night of the concert was his 80th birthday.  We gave him two standing ovations.  It was wonderful to see his son playing next to him; the communication was fierce and their chemistry was magical.  Also Trichy Sankaran was absolutely on fire.  He basically did everything but smash his mridangam on stage.  The tani he played at the end was astonishing.  I recognized some material, but he took what little I knew to places I didn't know existed.  In the tani he was slipping between khanda, tisra, and chatusra with expert precision and brilliance.

I ran into one of my past professors in the lobby of the world music hall, where the bhojanam was prepared.  I told him my plans to come back every year during this festival to soak in this beautiful culture.  It seems like everyone at Wesleyan just takes this kind of stuff for granted, but once you leave, you realize there are very few places in the world like it.  I unfortunately missed the installations by Alfredo Jaar (!), but I imagine I'll get another chance to see them.  I don't think a lot of my friends who still go to Wesleyan have taken the time out of their schedules to see the exhibit.  Shame!

Anyway enough of this silliness, I just resolved with Jenny that we would have to come back each year to CT to celebrate Navaratri with the folks who really know how to party.

Observation Assignment II

I was in Washington Square Park, playing around on my laptop.  It was pretty hot and humid.  I noticed the lawn sprinkler behind the tree a few minutes after I had sat down, and decided to move to a bench where I'd be safer from any liquid shrapnel.  From this distance I worked on some silly processing thing I don't think I took the time to save and checked my email.  Soon, however, this appliance became popular.  At first a young girl just moved the whole sprinkler so that its spray radius would occasionally intersect with the seating area.  I don't think this was intentional.  Still, I was glad I had moved.  The people close enough to be affected didn't seem to mind too much, as no one had laptops and they weren't being subjected to a tremendous amount of water.  And besides, it was pretty hot and humid.  Next, a young boy began to interact with the device by forcing it to hold its position, thus producing a steady stream of water.  This was to the delight of the girl and a few of their friends.  They held their hands in front of the spray and soaked their heads.  Surprisingly enough, none of the people sitting in the nearby benches had moved this entire time.  Additionally, a woman of somewhat indeterminate age, perhaps thirty, produced a video camera and began taping them as they played.  She asked their permission first.  The kids ran off eventually, their shirts soaking.  They left the sprinkler slightly turned so that it continued to gently mist the seating area.  I returned to my email.  Shortly thereafter, an older man and woman walked by the sprinkler, whereupon the man approached it and placed the bottom of his shoe in front of the stream, presumably to wash the dog shit from the bottom of his shoe.  It took him a few tries until he was satisfied.  I left somewhat abruptly to evade the increasing paranoia that the branches above me had become too moistened by the device and had started dripping.

Observation Assignment I

  • Alarm clock. Snooze.
  • Woke up and used a coffee maker to brew spiced black tea.
  • turned off ambient music coming from laptop.
  • Brushed teeth & massaged gums with electric toothbrush.
  • checked weather report on laptop & covered my nakedness appropriately
  • Took the elevator downstairs-- I live on the 5th floor.
  • I totally see why cellphones and driving are a bad mix-- cellphones and walking are bad enough! I narrowly escape the perils of other people's decreased spatial reasoning.
  • eschewed the elevator that goes to the ITP floor; i learned to distrust that thing long ago...
  • took notes from class on laptop-- it certainly beats my shit handwriting.
  • on the way home I found a truck that said "PENSKE T UCK RENTAL" on two separate faces. Took a picture from my phone.
  • went to the bank with Frank (hehe). he deposited a check and we went grocery shopping. The significant part here was the atm area (the bank itself was not open, as it was 9:00pm or so): I was debating doing the in-depth "Part II" to this assignment on the way this area was designed. I have since changed my mind, although I could put up a decent argument to the effect of "architecture is a form of technology". (language, too, but that's ancillary.) point is, the way the entrance went off to the side and the space opened up was very poetic for such a function-oriented space as an atm. The line of machines was neatly tucked to the other side, while the overall space maintained an openness, probably for security purposes. line of sight, even if the room were packed, would be preserved. Escalators would have whisked one up to the banking area had it been open. all a very economical use of space. bravo, citibank.
  • a poor use of escalators, in contrast, was the whole foods on 14th, where one starts on the first floor and works her way up. of course, the entrance is on the second floor, with no delineation of what the hell could be going on. so we wandered around looking for all the food. bad whole foods!
  • we had bought tons of food, and decided to interact with a taxi to get home.
  • used coffee maker to brew kava kava tea, whereupon I passed out listening to ambient music on my laptop...

MWUAHHAHAHAHAHAHAHA

OMG I AM SO STOKED I JUST GOT SUPERCOLLIDER TO OPEN AND BOOT FROM JAVA!!!!!!!!!

yeah that's right i am a dork. this is the most significant thing EVER.

Here's how i did it, in case someone searches for this problem:

//step one:

Runtime runtime = Runtime.getRuntime();

Process appProcess = runtime.exec("/Applications/SC3/SuperCollider.app/Contents/MacOS/SuperCollider");

//so that's for straight Java.

//for Proce55ing you want to put it into a try, catch statement, like so:

void setup(){

try{

Runtime runtime = Runtime.getRuntime();

Process appProcess = runtime.exec("/Applications/SC3/SuperCollider.app/Contents/MacOS/SuperCollider"); //this should be your path to SuperCollider. Be sure to open package contents of the SC3 Icon, because you have to go a little deeper to find the unix exec.

}

catch(IOException e) {

println("Error!" + e); //this in case something screws up.

}
}

//Step two:

make an rtf file at the path: /Users/<username>/scwork/startup.rtf

and place your startup instructions there. i started with a hello world post to test it, and after figuring out exactly where to put the file, I placed

s = Server.local;

s.boot;

This will boot the localhost server immediately on opening the SC3 app. Cool, right?! Now you can run cross-platform stuff without having to touch SuperCollider at all! i'm still getting used to server messaging techniques, because I've gotten pretty far without learning them, but I'm sure there's a way to clean up this process even more and boot the server through OSC alone. I'll post that when I figure that one out. For now this will do. I'm totally ecstatic and will probably not sleep for another few days as a result. ahhh the joys of mania.

commlab response: Blogs of War/Everquest

Re: Blogs of War

War is repugnant to me.  The internet is cool though.  War has definitely become progressively more mediated as telecommunications improved since WWII.  Hopefully this means that civilians will feel more connected to the war that's going on, and eventually war will go out of style... But that's doubtful.

Re: Everquest

I don't see how a system like Everquest could be considered "self-contained" in any way.  Yes, it's nice that everyone starts flat broke, but we are nesting this system within the larger economy.  And nobody starts out equal here.  So this Castronova guy seems to be barking up the wrong tree.  I think he would agree with me now that things have progressed as they have within the game world.   I have no idea what we're going to do when one of those game economies collapses.  But more importantly: what are we going to do about the larger economy when we run out of oil?

youtube vs ifc media lab

youtube seems a lot more streamlined.  I had to resize my video to get it on ifc, which is kind of an issue if I want to put my other animations up there, because the window size is an 'initial condition' that affects everything about the system.  I also occasionally don't really have a cast or crew, or win awards, or some of the other extraneous things ifc asked me about my animation.  honestly I prefer just throwing my stuff onto an ftp and then linking to it from some other source, like a blog or SNS, because then I can contextualize it how I want.  I understand why ifc might be a better resource for a film maker, but, to put it simply, I am not a film maker.

blip cell

so here's a video of a new generative piece I'm working on. the rules for each of the 32 cells are as follows:

  1. if both of your neighbors are going faster than you, halve your speed.
  2. if both of your neighbors are going slower than you, double your speed.
  3. if either of your neighbors is going the same speed as you, pick a new speed.

the initial conditions are randomized, with the exception of the first two cells. they remain at the fundamental and first harmonic, just to anchor things a bit. the piece does shift around, despite this attempt at holding it down. the 32nd cell picks a random initial frequency and does not undergo revision.
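
in sclang terms, the whole update rule fits in a few lines (my paraphrase of the rules above, ignoring the anchored first two cells and the free-running 32nd for brevity):

(
~speeds = Array.fill(32, { exprand(0.25, 8.0) });   // randomized initial conditions
~step = {
~speeds = ~speeds.collect { |v, i|
var left = ~speeds.wrapAt(i - 1), right = ~speeds.wrapAt(i + 1);
case
{ (left > v) and: { right > v } } { v / 2 }               // both faster: halve
{ (left < v) and: { right < v } } { v * 2 }               // both slower: double
{ (left == v) or: { right == v } } { exprand(0.25, 8.0) } // tie: pick a new speed
{ true } { v };
};
};
~step.value;
~speeds.postln;
)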

the sounds are bandlimited impulse trains ("trainlets") which are convolved (frequency-domain multiplied) with a kick drum. allow me to explain why this is cool. a bandlimited impulse is basically an impulse that is approximated by summing harmonically related cosines. so the more harmonics you use in a trainlet, the more it starts to look like a click instead of a sine. the way convolution works is that one sound becomes the impulse response for another. thus, when a sound is convolved with an impulse, it returns itself. so basically it's like playing with granular synthesis, except it's synchronous (there's no jitter on the frequency) and it produces trains of bandlimited copies of the grain. trainlet synthesis is nothing new to me personally; i've been developing an incredibly over-complicated composition/performance UI for this in supercollider for a little over a year. and it works for me, so whatever. what is new and cool about this patch is how simply it describes something so complex.
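
the core of the trick in superCollider terms (a sketch; the kick path is a placeholder you'd point at your own sample, with the server booted):

~kick = Buffer.read(s, "~/kick.wav".standardizePath);  // placeholder path

(
{
// a 6 Hz bandlimited impulse train with 40 harmonics, convolved with the
// kick so each impulse returns a bandlimited copy of the grain:
var train = Blip.ar(6, 40, 0.5);
Convolution2.ar(train, ~kick, framesize: 8192).dup * 0.3;
}.play;
)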

this is the first processing-supercollider patch that I really like aesthetically as well as conceptually. crcl_ln is neat, but way too frenetic. also with that patch the user must set a new window size to change the initial conditions, whereas this is done intuitively for you. also i was getting tired of the bouncing thing. this patch moves a bit slower. although there is more actual "randomness" involved, I feel the system looks pretty organic, and well grounded. I also like the colours. the layout is so simple, but the space is effective in my opinion. I left this thing on all night and experienced some beautiful drifts. a four-minute clip does it absolutely no justice. sometimes the cells would become so big that they'd take up the whole screen and their frequencies would be well beyond the human auditory capacity, but fifteen minutes later, sometimes more, sometimes less, the whole system would have moved somewhere else entirely. similarly, sometimes rhythmic structures emerge that feel so infectious, partially because of the contrast to the other moments in the patch, but also due to the fact that despite their changes, everything remains harmonically related.

birthday!

Jenny's birthday is September 30th-- this Sunday.  I can't wait.  I'm planning what should be a really pleasant night.  I don't want to say too much here because I'm pretty sure she reads this.  It's going to be under 20 people in a living room, all sockfoot and listening to ambient music, watching video installations, and cuddling.  I made invites using Processing just for the practice with image uploading.  You can pretend you were invited and click here, there's no shame.  Hopefully I will get some video of the event and post them as well.

photodrone

here's something I made in SC3 and Arduino using the sun as a light source. the photoresistor's resistance drives a chaotic function in SC, which is passed through a 32-band resonator tuned to harmonics related to the just-lydian scale.

click to download mp3
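
in the same spirit (not the actual patch), a chaotic oscillator into a resonator bank tuned to just-lydian ratios looks about like this, with MouseX standing in for the photoresistor and a few octaves' worth of partials instead of the full 32:

(
{
var lyd = [1, 1.125, 1.25, 1.422, 1.5, 1.667, 1.875];   // just lydian
var freqs = (lyd * 64) ++ (lyd * 128) ++ (lyd * 256);   // three octaves of it
var src = CuspL.ar(SampleRate.ir / 4, 1.0, MouseX.kr(1.8, 2.0)) * 0.02;
Klank.ar(`[freqs, nil, 1 ! freqs.size], src).dup;
}.play;
)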

Led Symphony

So I finally got SuperCollider to befriend my Arduino board. I have to say I couldn't have done it if Alex hadn't proved to me it was in fact possible. Sometimes it's good just to bounce things off of people so you can formulate the right question, and then by the time you do that it's all over and the problem is self-evident.

Oh yes! The thing I was doing wrong... I was trying to run the SC-side reading task before the Arduino knew what it was doing. Another common error results from mismatched baud rates. The arduino SMS example starts you off with 9600, while the SC example starts at 115200. Stick with the higher number in both.
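
for anyone googling this later, the SC side of the serial read is basically the helpfile pattern (the device name will differ; SerialPort.devices lists yours):

(
p = SerialPort("/dev/tty.usbserial", baudrate: 115200);  // match the Arduino sketch!
r = Routine({
var byte;
loop {
byte = p.read;   // blocks until the Arduino sends something
byte.postln;
};
}).play;
)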

I made a thing in SC to play a sinewave through a waveshaper. No big deal. I got a bunch of led lights to randomly blink so that one was always on. Also not a big deal. I aimed the led "chorus" at a photoresistor and sent the resulting resistance into SC to play degrees of the harmonic series with the sine/waveshaper patch. It, too, is not a terribly big deal, but man is it gratifying! I also was able to write the led randomizer sequence in SC, which is nice because I'm so comfortable with it... And you know, all this stuff came really easily once I successfully passed a message from the board to SC. The hardest thing to do was take the video in iphoto, with the built-in camera. Also not a big deal...

So yeah, I demonstrate the patch in the video. I try to show how light affects the patch by blocking the resistor occasionally. Also I change certain values from a gui in SC, which I tried to use my other monitor to display for the camera, but it became way too big a deal. Okay enough with that meme before it totally spirals out of control.

voltage regulator weirdness

ZZZZAARRKKKXX!!!  so right now i'm typing with one hand, not because it's the cool thing to do (which by the way it totally is) but because if i stop making contact with the top of the voltage regulator, the improvised extra cooling unit for my laptop will stop working. i have a metal clamp attached to the regulator, and as long as i touch that (the regulator itself is too hot), the shit works like gangbusters. i am somewhat afraid. but i can rationalize my way out of exposing myself to the risk of mild electric shock by saying it's all for the laptop. but what happens if i actually get zapped while i'm writing this? it would probably fry my computer anyway. well, that settles it then. burn baby, burn! i'm gonna try a fresh voltage regulator...

a selected (and abridged) email correspondence…

Hey Joe,

I'm trying to use an Evolution X-Session controller with SuperCollider.
Basically, it has 16 assignable knobs that I'd like to have controlling CVs
in Conductor. Do you have any examples / advice for how I'd go about
setting this up?

In other news, I like your "Harmonic0" project in Processing. I've been
getting into Processing myself. It's amazing how many cool designs can be
constructed with relatively simple code.

~David
Hey David,

I recommend you dig the MIDI helpfile, as well as the CCResponder helpfile. Unfortunately I don't have an appropriate example here with me (I'm working the night shift in the ITP equipment room), but I'm pretty sure you want to connect your Conductor to midi using a .task{ }; function. I'll be more specific if you still need help. And tomorrow.

Re: Harmonics0. Thanks! I'm just starting out with this shit but it's very similar to java, which I almost failed out of at Wes. But that's because the prof was an assclown. The specific design of the patch uses digital harmony (think Whitney Music Box). That's why it's called harmonics. It's actually the inverse of the harmonic series. Send me patches! For either environment... it's always good to see new code. I'm currently doing stuff in SC with wavelets. Once I found a solution to the Continuous Gabor Wavelet Transform (basically a multi-level FFT fed by a filterbank), things started really opening up for me.

Also you might have to tweak around with your X-Session to find what CC#'s are what... that was kind of a bitch on my somewhat obsolete novation. (...) anyway here's a patch i wrote using the CGWT to change the time/pitch independently. it's way cleaner than an FFT especially when you slow shit down, but there's a kind of distortion that gets introduced when you go too slow because it's only dealing with harmonic content. soon there will be banded noise coefficients for each layer, which will probably help that.

JM

yeah, if you have anything more detailed, that’d be awesome. I’ve also been talking to Jon Zorn about this – he did something similar with the doepfer pocket knob controller, and I think I’ve almost got something working (well, at least the crossfader is working – one out of 17 isn’t horrible, I suppose).

~David

Hi David,
(...) here's a simple little diagnostic task func that you could put into your Conductor. Obviously I have assumed you named it "con", but whatever.
////////////////////
con.task_(
{
MIDIIn.connect;
MIDIIn.control = { arg src, chan, num, val; [chan,num,val].postln;
};
}
);
////////////////////
(Also if this is redundant, I'm sorry, I don't know what you know.)

this thing will print out the channel, the CC#, and the value of your control messages as they happen.
you can easily see how you'd then connect your CC#'s to CV values within this func.
here's how I did "The King of Pentacles" (the piece with Devin on detuned guitar). obviously your specifics will vary.

////////////////////
con.task_(
{
MIDIIn.connect;
MIDIIn.control = { arg src, chan, num, val; [chan,num,val].postln;
case
{num == 5} { vol.input_(val/127)}
{num == 104} {fund.input_(val/127)}
{num == 105} {density.input_(val/127)}
{num == 106} {rate.input_(val/127)}
{num == 12} {zerox.input_(val/127)}
{num == 16} {rLow.input_(val/127)}
{num == 17} {rHi.input_(val/127)}
{num == 18} {pLeft.input_(val/127)}
{num == 19} {pRight.input_(val/127)}
{num == 108} {pLow.input_(val/127)}
{num == 109} {pHi.input_(val/127)}
{num == 110} {preAmp.input_(val/127)}
{num == 111} {postAmp.input_(val/127)}
}
}
);
////////////////////
does that help? I realize it's pretty elementary, but I don't really do much with midi.

Relatedly, I'm trying to get SuperCollider, Processing, and Arduino talking to each other using OSC commands. I got my first 'hello world' last night at around 3 am. right now I have a little white ball that bounces from side to side and triggers a default synth each time it changes direction. I'm wondering if you have any experience with OSC messages because I'm finding it very difficult to get the syntax right between two programs. it seems that i can tell the synth to turn on and off, but the "/n_set" argument changes are just not happening. of course, this is a drastic improvement from where i was a few days ago with the Arduino quark in SC: I had Arduino reading an analogue pot in and had SC reading the Arduino's serial input using SMS but nothing would happen within SC and then applications started disappearing (!!) from my toolbar. a very dramatic bug. (if you want code examples for the OSC shit that worked, lemme know and I'll hook you up, but I'm not making you any promises re: blowing your socks off)
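
(for reference, the raw server messages i'm trying to reproduce from Processing look like this when sent from sclang, which is a handy way to double-check the format:)

////////////////////
s.sendMsg("/s_new", "default", 2000, 0, 0);   // defName, nodeID, addAction, target
s.sendMsg("/n_set", 2000, "freq", 550);       // nodeID, then param/value pairs
s.sendMsg("/n_free", 2000);                   // clean up node 2000 when done
////////////////////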

anyway keep at the midi thing. it's a noble pursuit, and I'm sure you'll work it out.
JM

crcl_ln (demo)

play version one

play version two

this is a demo of a generative patch i made in processing and superCollider. the processing patch sends osc messages to superCollider to generate a pulsar train or individual pulsar each time a "ball" hits one of the sides and changes direction. each of the 32 balls travels at a harmonic ratio to the others, which also corresponds to their relative sizes. for example, the second largest ball travels twice as fast as the largest, and the third largest travels three times as fast as the first. and so on... lines are drawn connecting the balls to emphasize their spatial relationships as they shift from complex to "ordered" throughout the piece. the audio material is made up of 2-cycle sinewave shapes, with an exponentially decaying envelope. the longer trains are selected from the collisions based on the total number of collisions per side. there is no "random"ness to this patch at all. everything you see is the result of simple rules being played out by 32 actors. the patch doesn't repeat at the end point; in fact I have yet to stare at it long enough for the cycles to coincide.

I apologize for the poor quality of sound/video, but perhaps this will be an incentive to see this performed live. I figure this stuff makes a lot more sense to an audience of lay people than a dude sitting behind his laptop. you don't have to understand how this works to appreciate it, and perhaps to predict certain aspects of it. if i were hiding behind my computer while this happened, and you had no visual reference at all, you would get bored pretty quickly. similarly, if i were hiding behind a projector being filled with stock footage loops with no relationship to the music at all, perhaps being selected by a "V-J", you would probably either get bored as well or simply get distracted from one by the other. Here, in this system, the visuals represent a causality for the sound that demonstrates the order of the system and offers a peek behind the mathematical curtain which generates it. oh, and by the way, the screen flashes each time a longer pulsar train is selected. for some reason, this didn't show up in the screen capture.
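
for the curious, the grain itself is roughly this (my reconstruction, not the performance patch):

(
SynthDef(\pulsar, { |freq = 500, amp = 0.2|
var dur = 2 / freq;   // a 2-cycle sinewave shape
var env = EnvGen.kr(Env.perc(0.0001, dur, 1, -8), doneAction: 2);  // exponential-ish decay
Out.ar(0, (SinOsc.ar(freq) * env * amp) ! 2);
}).add;
)

Synth(\pulsar, [\freq, 800]);   // one ball hitting one wall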

55-word story

There was a nagging melody to the chatter of the chuckling people nearby who seemed to be watching a television show, or watching everyone as though it were television show while I sat bewildered watching everyone as though it were a television show on the train I got on last night to come back home.

Orality and Literacy: Criticism

Ong makes it very clear from the beginning of his book that he has an agenda. His thesis on the psychodynamics of oral versus literate cultures is at best mildly entertaining and at worst horribly offensive. He insists that there are two distinct classes of cultures: the primary oral and the primary literate. His commentary on A. R. Luria's research into differences in cognition between the literate and the illiterate is fascinating. His extrapolation, however, that these findings can be categorically assigned to some monolithic "oral folk" is trivializing and naive. There is significant intuitive evidence suggesting how an illiterate person may differ cognitively from a literate person, but not much of Ong's writing goes beyond the intuitive. In fact, while he is quick to cite Luria's research, I am left questioning what political baggage Luria was carrying around that prompted the research in the first place. The book reeks of misplaced nostalgia masquerading as scholarship.

Now that that's out of the way, there is some validity to what Ong is trying to say. The written word, or perhaps the analytical word, when used without accent or idiom, can only describe the negation of something. With the kind of language Ong so sloppily criticizes, it is only possible to draw an outline around an idea, a person, an experience. But let's not throw out the baby with the bathwater. Building a memory database by aggregating information into mnemonics can only get one so far. There are distinct advantages to the written word, and to that which, according to Ong's sweepingly broad thesis, emerged as a result of it.

I studied solkattu for three years in college, and by no means would I consider myself an expert in the tradition. Solkattu is a form of oral tradition used in South India to generate and commit to memory rhythmic phrases for use in drumming. My professor was an American who studied in India and in the States, and became somewhat famous in South India for devising notational systems that preserved the underlying structures of the tradition while providing a language to generate endless new material, all within the idiom of Karnatak music. Rote memorization is typically the method for transmitting this information in India, but I would never have learned it that way. As a result of studying with this man, I gained access to the music in a way that engaged me and my way of thinking. Traditionally, a teacher in South India would not provide her students with a means of 'understanding' the material; rather, she would expect her students to listen and repeat. And repeat. And repeat... And a typical South Indian student would not ask as many questions as a typical American student. The process of learning and engaging with ideas is completely different. So Ong has a point: oral traditions and literate traditions can also correlate with differences in mental modelling.

Ong's Achilles' heel is his unflinching nostalgia for something he has never experienced. I understand that his thesis is a remedial one, in criticism of the mainstream of its time (keep in mind he wrote this in 1982 or so), but I couldn't help but wince audibly at a few of his paragraphs. There are many solutions to the dilemma Ong describes, many of which have by now been widely adopted in the fields of anthropology and cognitive psychology. For example, in cognitive psychology, we have found that a purely phonics-based learning system does not do the whole job of teaching kids to read. The grapheme-to-phoneme rules are simply broken too often in English. A multi-faceted approach using both phonetic and orthographic strategies for word identification is necessary for a confident literate child. We have also discovered several regions of the brain dedicated to the recognition of symbols and patterns, regions that associate closely in function with language and auditory areas. While the inner mechanics of these regions are still the subject of debate, many cognitive psychologists believe they developed from other skilled, pattern-oriented tasks such as hunting. So there are a few counter-arguments to the claim that orality is authentic and literacy is synthetic.

shprd_tn: the never-ending glissando (frequency domain edition).

shprd_tn.png

shprd_tn (ctrl-click to download; click to preview)

Sorry for the lossiness: this editor seems to have trouble with flac files.

constructed from 0.25 seconds of source material. i chose the "ar" phoneme as sung by Kenny Hagood with the Miles Davis nonet (the first note in "Darn That Dream"). the patch is almost identical in praxis to the "tempus fugit" patch, except that it works on the frequency domain, leaving time intact. this concept is proven by the synchronous repetition of the phoneme, regardless of where along the 4-octave span any instance may be. the patch uses a 6-octave filterbank feeding 6 independent FFTs of layer-appropriate window size, effectively resulting in a 6-layer continuous Gabor wavelet transform (CGWT-6). the algorithm is improved re: amplitude scaling. for some reason i couldn't quite figure out how to make a hann window out of a linear attack and decay. (i spent hours playing with waveshapers for this task! talk about inefficient!) in the end it was easily fixed: all i had to do was let the amplitude curve be amp^(1/2). the time-domain patch has been similarly remedied.
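
to illustrate the amp^(1/2) trick (my guess at its shape, not the actual patch): linear envelope weights make the sum of overlapping layers dip in loudness where they cross, but taking the square root of each weight keeps the summed power roughly constant. a toy version with two sines:

////////////////////
(
// equal-power crossfade: x sweeps 0 -> 1; .sqrt on each weight
// keeps (1-x) + x constant in power terms, so no dip at the midpoint.
{
    var x = LFTri.kr(0.1).range(0, 1);    // slow crossfade position
    var a = SinOsc.ar(330);
    var b = SinOsc.ar(440);
    (((1 - x).sqrt * a) + (x.sqrt * b)) * 0.2 ! 2;
}.play;
)
////////////////////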

Hello World!

my first solder! It's amazing how gratifying a simple "hello world" response can be when you're new at something. Making that LED (pin 13!) light up or getting that first message printed to the screen feels so good you just want to do it again and again. Every time I write some new patch in SuperCollider, an environment I've come to be comfortable with, I still get that same rush when I've finally succeeded at debugging. I think that's really what fuels all the inquiries that follow.
So I soldered my first joint today. No, I did not smoke it. What a rush, though! I really can't wait to do it again. Now I'm thinking of building a little addition to my laptop stand that will house an extra cooling fan and a switch. Also, this BBE unit I came upon for <$20 was definitely up for grabs because there's a dirty connection in there... hmmm... Actually, looking around this room, there are a fair number of things I could really play with and probably destroy in the process...