Bovina Clovis

Shortly after their tour, the EYEBALL FRIENDS became infected with a strange and seductive sickness. They labored day and night, sleeping next to their tools, only to wake up in the same cold sweat, and with the same inexplicable drive to press onward. All told, the episode lasted a little under two weeks, but by the time it had run its course, a terrible and gorgeous new member of their band was born.


MLEF: the blended fetal eyeball paste tour

Last week, MOLTEN LAVA EYEBALL FIEND played a few shows in Texas and a show in New Mexico. We're calling this our first tour, although we really see it as a dry run for a much larger tour around SXSW. For the most part, the networking and brainstorming were invaluable to the EYEBALL FRIENDS. Also, it was really exciting to find new audiences that seemed to enjoy what EYEBALL FORNICATION is about. Several recordings exist that document this.


SPARKLEMATIC ADJUSTABLE

On June 3rd, Cooper Baker and I performed this piece for amplified steel bar. It involves grinding and cutting the metal and processing it with an array of quirky homemade analog effects units. It was very loud, and emitted constant showers of sparks. We had way too much fun doing this, and we will most certainly be doing it again soon.


MUTANT SEAL HOMICIDE

MOLTEN LAVA EYEBALL FORECAST is:

Adam Goodwin: Upright Bass, Contact Mics
Joe Mariglio: Loathing & Disgust

This time the EYEBALL FRIENDS also incorporated a laptop into their noise matrix. This angered the mutant seals.

BUSHWICK

MOLTEN LAVA EYEBALL FIEND is

Adam Goodwin: Double Bass, Contact Mics
Joe Mariglio: Analog Noise Network

SCHOOL NIGHT

MOLTEN LAVA EYEBALL FIEND is

Adam Goodwin: Double Bass, Contact Mics
Joe Mariglio: Terror and Revulsion

Creosota, Live on the Air

On Thursday, May 12th, Josh Redman was kind enough to host Creosota on his show "5 4 3 2 Fun" on KCSB 91.9 Santa Barbara! Adam and I talked quite a bit about the performances that would happen the next night at CEMEC (not as Creosota). We start playing around 8:25. Enjoy!

Bird Tree Shrine

Last Saturday night, Sean Leah Bowden, Thalia McCann & I made a performative installation involving an amplified tree shrine inhabited by insane bird creatures. Leah and Thalia did incredible work on the costumes and shrine. I offered as much help as I could with the shrine, and provided the following tape piece constructed from granulated field recordings of starlings. It played out of a small guitar amp at the base of the tree and an even smaller hidden speaker in the branches.


Songbird a

I am building a series of squelchy, tree-dwelling analog synths called 'songbirds.' Here's a picture of the prototype, whom I've nicknamed Tyagaraja, after the 18th century Karnatak composer. Songbirds are designed to react to ambient conditions. Tyagaraja responds to changes in light, because photocells are simple and cheap (at least, for now). He consists of a synth element, which can be calibrated using four knobs, an array of resonators (made from coffee cans), and a speaker. He runs on batteries. Future iterations will consider alternative sources of power- they don't need much current at all. The synth element has two audio-rate oscillators, two low-frequency oscillators, and several filters.
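For the curious, here's a very rough software caricature of that architecture in SuperCollider. The real songbird is analog hardware (and reacts to light, which this ignores), so treat every value here as invented:

(
// a caricature of the songbird voice: two LFOs driving two audio-rate
// oscillators into a filter. all frequencies and ranges are invented.
{
    var lfo1, lfo2, osc1, osc2, sig;
    lfo1 = LFSaw.kr(0.4).range(0.5, 8);          // slow lfo sets the squelch rate
    lfo2 = LFPulse.kr(lfo1).range(150, 900);     // second lfo, clocked by the first
    osc1 = LFPulse.ar(lfo2, width: 0.3);         // audio-rate oscillator 1
    osc2 = SinOsc.ar(lfo2 * 1.502);              // audio-rate oscillator 2, detuned
    sig = RLPF.ar(osc1 * osc2, lfo2 * 2, 0.15);  // a filter standing in for the coffee-can resonators
    sig * 0.2 ! 2
}.play
)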


OZ

I recently had the privilege and good fortune to travel to Australia again! This time, I got to play some music and even do some teaching, thanks to Estee and the other kind folks at Electrofringe and UNSW. In the process, I really enjoyed making field recordings of trains & birds (thanks, Sam!), taking pictures of some beautiful street art, and hanging out with friends both old and new. I was again reminded of how incredible the coffee is down there. That alone would be reason to go back! I also got to visit Melbourne this time, which is a truly lovely city, with some lovely people in it.


Wrestling Jacob

wrestling_jacob by 3_spds

a cover of a hymn by charles wesley.

performed live by adam tinkle and joe mariglio (me).

I have a recurring dream where I'm trying to squeeze the life out of a shape-shifting opponent. It's a frustration dream; I can never make my partner submit. If I loosen my grip, I find myself being overtaken. Either I or my consort will typically start whispering things like "I love you" as we suffocate each other.

"Wrestling i will not let thee go, till i thy name, thy nature know." - Charles Westley

who is anybody?

On Wednesday, December 1st, I had the pleasure of sharing the stage with MC Justin Zullo (AKA Conundrum), Kimbridge Balancier (AKA sPeakLove), and Diana Cervera. I DJ'ed (from the command line!) for Conundrum for the first half, and played a solo set for the second. Due to a few technical problems, I was unable to record the set with Conundrum, but I was able to reconstruct the solo set. I've decided to release this set online as an EP, in a manner similar to "gosling_glider_gun."


the universal language orchestra

in a project spearheaded by charles curtis, several ucsd grad students, including me, have embarked on an outreach project at the spring valley community center. we are working with children ages 8 to 12, building instruments, making sounds and listening. the project will eventually culminate in a performance of some kind, which the entire group will plan and execute, with my fellow students and me acting as facilitators.


give me everything.

on thursday, october 28th, this piece made its debut at brooklyn college for their biannual "international electroacoustic music festival." despite the fact that i couldn't be there in person, i couldn't pass up the opportunity to have it premiere in brooklyn, due to the subject matter. i'd been planning this piece for two years, and it was awesome to finally see it through to completion.


gosling_glider_gun

a recording of a short set i performed at the che in san diego, on saturday, september 25th. my primary tools were supercollider, matlab, and logic. i also used soundhack and audacity. all original unreleased material, slated for free digital release with abattoir records in new york sometime this year.


the thief (pitch -> midi app)

one of my clients is a sax player who also plays synths for his band. he wanted me to find some way for him to control his synths (hardware or software) with his saxophone. we discussed the ewi controller, and decided that this was unsatisfactory as a solution, due to his use of falsetto and multiphonics. he also felt that the ewi controller wouldn't be as comfortable as using his own familiar sax and reed. as a result, i wrote him a simple software solution that would read in pitch and amplitude values from the soundcard, and output midi values. the resulting software is super light-weight and ready to be deployed on a number of hardware configurations. it's also completely modular and networkable. the working title for this guy is 'the thief'.

requirements:
currently, this software only runs in osx. >_< . i may eventually port it to linux, but if you're running windoze, good luck to you. you will also need a soundcard (most computers these days have *some* kind of audio interface), and if you want to get fancy you should have a microphone.

dependencies:
this patch is not a single piece of compiled software that runs like a self-contained black box. rather, it is a system comprised of several programs networked together to provide a single service. so, in order to run this patch you will need the following (don't worry, they're all free):

occam, a lightweight midi client that speaks osc

supercollider, my development environment of choice for realtime audio

something to control with midi. this could be a hardware synth (if you have the appropriate cables), or ableton, logic, ardour, reason, whatever. choose your poison.

once you have these things installed and ready on your box, we can talk about the implementation. my source code is below. to run this code, paste it into a new window in supercollider, select the entire document, and press enter. note that enter is not return. you should see the little server window on the bottom left boot up, some crap print to the post window, a hideous grey gui pop up, and occam open up. on the top part of my hideous grey gui there are two black buttons. one has a green [>], and one has a red []. pressing the green [>] will turn it on, while pressing the red [] will turn it off. hopefully that convention is obvious. there is also a single long slider with a number box next to it. you may type into the number box or you may move the slider with your mouse. this controls the rate at which new midi notes are sent to occam.

this patch simply analyzes audio for pitch and amplitude at a regular interval, and sends this data out as osc to occam, which converts it into midi. i chose this model instead of something more complex (that could, say, encode arbitrary audio into discrete midi notes) because i was hired to come up with a specific solution to my client's problem, not a general solution to all midi-related problems. if you are so inclined, please take my code and do whatever you want with it, so that it will suit your needs. all that i ask is that you inform me and credit me.
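for a feel of what that model amounts to, here's a minimal sketch (not the listing below: the port, the osc addresses, and the scaling are placeholders i've made up, and occam's actual message format may differ):

(
// sketch: track pitch and amplitude at a regular interval, forward as osc.
// the port and both osc addresses are placeholders, not occam's real api.
s.waitForBoot {
    var occam = NetAddr("127.0.0.1", 9000);   // wherever occam is listening
    {
        var in = SoundIn.ar(0);
        var amp = Amplitude.kr(in);
        var freq, hasFreq;
        # freq, hasFreq = Pitch.kr(in);
        SendReply.kr(Impulse.kr(10), '/thief', [freq, amp]); // 10 readings per second
        Silent.ar
    }.play;
    OSCFunc({ |msg|
        var freq = msg[3], amp = msg[4];
        occam.sendMsg('/note', freq.cpsmidi.round, (amp * 127).clip(0, 127));
    }, '/thief');
};
)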

and with that, here's the code.

* * *

a few lessons stick out to me from this first development cycle.

1) i am reminded not to think like an engineer.
no offense to engineers-- you guys are very important! i'm just not one of you. my methods are generally faster, more improvisational, and way less general. i spent considerable time banging my head against a wall trying to make my solution work in all cases. this was wrong.

2) working with clients can be rewarding in ways that doing your own work isn't.
i now remember the midi spec. i never use midi and honestly it makes me seasick. as a result of this project, however, i've learned a lot about implementing solutions that use it. a major lesson regarding midi is that supercollider's midi output implementation is a total joke. another one is that osculator, a fairly highly regarded osc-midi client, is also a total joke. occam is the lightest, fastest, simplest midi client i've found. another major lesson? i learned how to use the scary 'environment' window in logic. it totally looks like max! which makes me want to vomit. regardless, these lessons will definitely prove useful to my freelancing and for my own projects.

3) sometimes walking a noob through your algorithm is the best way to debug it.
one night, when i was particularly frustrated with trying to do the impossible with my app, i vented by trying to explain how horrible my life was to my roommate, who despite some familiarity with supercollider is definitely not a programmer. this turned out to be the most useful few hours i spent on the project, not because we happened upon a new algorithm, but because it gave me the confidence i needed to let go of the grand vision in favor of something more realistic. also debugging was thereafter a snap because i knew the algorithm like the back of my eyelids.

keep in mind this is the first iteration on this thing. i'll be working on some other features, such as microtonal tracking, and the now-infamous "arrogance" knob. when i've brought this project up in conversation, there have been a few musician / producer types who really perked up. i want you cats to try this out, and complain to me about it! the more you complain, the more likely i will be to listen to you. also if you are a musician / producer / developer type, and you want to steal my code, please do that! but let me know so i can steal your code too. ^_^ happy tweaking!

ps - i made a short track with this patch, just for laughs. it doesn't really demonstrate the patch very well at all. something more demonstrative will come after i have a meeting with my client and we record some user testing. it's the track from this post, using a fender mexican strat through a homemade fuzz pedal.

the_interval_sessions

interval ylem:

chonyid bardo:

[photo: the two homemade amp / speaker circuits]

i played these two sets livecoding in supercollider on my eeepc running pure:dyne, played through two homemade amp / speaker circuits left over from the project steve and i just worked on.

the material is inspired by some meditation and late-night internet research into vajrayana cosmology, specifically surrounding the bardo thodol. i actually have a copy of the translation (not the robert thurman) somewhere around here. the titles reflect this. "ylem" is a term first used by cosmologists george gamow and ralph alpher to describe the primordial substance. in tibetan cosmology and linguistics this relates to the chikhai bardo (first interval), or the clear lights; an experience of complete merging with blissful liberation. after this, the bardo thodol describes lesser lights, or the chonyid bardo (second interval), a highway through various inner realms, riddled with possible exits into liberation or eventual rebirth.

i have clearly been listening to a lot of eliane radigue's beautiful electronic compositions-- and a healthy amount of gabber. most of the sounds are produced by fm and waveshaping. the material is largely simple and repetitive, not just because it's the result of livecoding, but also as a choice. i generally go for a more granular approach to sound, so the sound of smooth oscillators is kind of a treat for me. the warm tone of the amps is really what pushed me in that direction. to record this, i used a pair of binaurals clipped to the little 8ohm speakers you see in the picture above. in these sets i'm particularly fond of waveshaping by phase-indexing a sinewave or set of sinewaves with very low frequencies (1-24 hz). this technique adds a throbbing nonlinear distortion that can be both timbral and rhythmic.
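in supercollider terms, that waveshaping trick looks something like this (a sketch with invented values, not the actual set code):

(
// waveshaping by phase-indexing a sinewave with the signal, the index depth
// throbbing at a low rate (somewhere in that 1-24 hz band).
{
    var carrier, throb, shaped;
    carrier = SinOsc.ar([110, 110.25]);          // smooth source material
    throb = SinOsc.kr(6).range(0.2, 4);          // the low-frequency throb
    shaped = SinOsc.ar(0, carrier * throb * pi); // phase-index a sinewave
    shaped * 0.2
}.play
)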

crudspades ginormous thing at bent 2010!

[photos: left side, center, and right side of the installation]

the crudspades ginormous thing went up at bent last week! i had to restrain myself from putting documentation online too early, because i didn't want to spoil any surprises. steve litt and i had a great time working on this project, and i look forward to working with him in the future... we have been talking about recording an album of crudbox music. i'll post updates here, of course.

here is a link to the concept art and proposal. the installation consisted of a small ensemble of self-amplified trash sculptures, with steve's crudbox as the conductor. we ended up making 6 pieces and finding 2 prefab appliances. they were:

  • a large metal slinky with several metal beaters hanging inside it, shaken with a solenoid
  • a metal grate, previously used as a gong with mike clemow for our performance at the silent barn a few weeks ago (as the todd walker tabernacle choir), hit with a solenoid
  • two amplified cans hit with solenoids
  • a piezo-speaker feedback synth, activated with a relay
  • an electro-acoustic sculpture we came to call "richard." richard was made from a flat ribbon speaker i pulled out of the trash over a year and a half ago. on either side hangs a small electret microphone. the mics are ring-modded together, using the jar, and the amplified signal is fed back into the speaker. crudbox sequenced richard by turning a small, hidden led on and off, which was paired with a photoresistor to pulse the feedback on and off. when used in this way, richard mainly provided a low end "wump!" sound. however, running in stand-alone mode, independently from crudbox, richard could generate a huge array of tones, and was immediately playable. furthermore, the no-frills interface provided an opportunity for meaningful collaboration with others. i am curious to build out the other flat speaker i found and make a second richard. i feel like i could play an entire set on this instrument alone. if i made a second one, it could be interesting to have three people interacting with the system onstage. i will post video of richard running solo in a later post.
  • an old vhf analog tv, activated by a relay. steve and i were amazed at the beautiful shapes and patterns one could get simply by turning the television on and off. we even got it to change stations sometimes! steve is now obsessed with tvs.
  • an old turntable, activated by a relay. steve had tried this approach before during the mmix festival last fall. we used an lp of 500 lock grooves.

Some video, taken in my kitchen just before we installed in dumbo, may be seen here.

the return of ®


® is a duo project between guitarist devin drew connelly and me, which has been going on in the background for the past several years, although you probably haven't noticed yet. we are interested in accessing the trans-personal space through free improvisation and other occult practices. we often incorporate sigils, mantras, and cards into our performances. we borrow from many traditions, depending on the outcome we're going for.

following a brief hiatus, ® has reemerged with the following artifact. it has only been marginally edited, and is completely uncut from start to finish.

personnel:
devin drew connelly- guitar
joe mariglio- livecoding
kapow- parakeet

hello midi

i haven't used midi to do anything other than play an outboard hardware synth with a keyboard since 2006.

that being said, it certainly has its advantages as a protocol. actually, it only has one that i can think of: that everyone uses it. so far a few clients have asked me to make them things that use midi, so they can interface with other music software. the above track is a proof of concept working toward that project; yet another example of how projects i do for other people push me out of my comfort zone.

the code in supercollider is embarrassingly simple. like i said, it's a proof of concept. all it does is randomly play midi notes. the timing is scaled so that lower notes last longer. that's it.
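if you want to reproduce it, the whole thing amounts to something like this (a sketch: the endpoint index and the note / velocity ranges are assumptions):

(
// random midi notes; duration scales inversely with pitch. assumes the
// IAC bus shows up as the first midi endpoint (index 0).
MIDIClient.init;
m = MIDIOut(0);
Routine {
    loop {
        var note = rrand(24, 96);
        var dur = (108 - note) * 0.02;   // lower notes last longer
        m.noteOn(0, note, rrand(40, 100));
        dur.wait;
        m.noteOff(0, note);
    }
}.play;
)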

slightly more complex was the process of setting up an IAC bus in OSX, and getting logic to listen to that bus.

in "audio midi setup," select the "midi devices," pane. from there, you should see your "IAC driver". double click it and turn it on by checking the "device is online" box. you can add ports in the menu on the bottom left. each port has 16 channels. i have never been able to get the bottom right number boxes to say anything other than "1" and it doesn't seem like this matters much at all. that's it for setting up the IAC; supercollider and whatever other audio software you might want to use will now recognize these ports and treat them as physical midi connections.

in "logic," go to the "environment" window. from the top left of that window, you should select "all objects," and under the "view" menu, de-select "by text". then make a new "physical input" object and connect the little triangle next to your IAC port to the "sequencer input" box.

(as an aside: do you see why i hate graphical interfaces?? this could have all been done with a few lines of code and there would be way fewer opportunities for error. i spent most of my night tooling around ambiguously titled menus and windows... rawr)

anyhoo, i used a bass sound from garage band-- the only instrument from garage band i have on my computer. also, the only software synth on my computer. while i was in logic, i decided to practice my spectral compression chops, and give the sound some depth. there is also a touch of reverb.

so there! you don't need max for live, you can just do this and use a free program like pd or supercollider to check your email with your daw. because that's so useful.

glib, numb, smile

this may or may not be finished. i submit it to you anyway, reader, because feedback would potentially be cool.

it began as a patch to keep my bird company while i was out at work. it uses brownian motion (random walk behavior) for several of the controls. it's a stream of sawtooth grains, each one no more than a few cycles long. each grain's frequency is an exponential curve, so each grain is a little glissando. i also passed the signal through a vintage spring reverb i pulled out of a weird old mixer. i left this patch running for about 3 days to keep kapow company. a lot of the textures remind me of analog stuff, a good thing in my book. also, he seems to enjoy it. mission accomplished. (the first one, anyway, read on...)
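the guts look roughly like this -- a sketch of the same idea, not the patch itself (the real code is at the link below), with guessed parameter ranges and without the spring reverb:

(
// sawtooth grains, each a short exponential glissando; brownian walks
// (random walk behavior) drive the controls.
SynthDef(\gliss, { |out = 0, f0 = 800, ratio = 0.5, dur = 0.05, amp = 0.1|
    var freq = XLine.ar(f0, f0 * ratio, dur);             // exponential curve per grain
    var env = EnvGen.kr(Env.perc(0.002, dur), doneAction: 2);
    Out.ar(out, LFSaw.ar(freq) * env * amp ! 2);
}).add;

Pbind(
    \instrument, \gliss,
    \f0, Pbrown(200.0, 4000.0, 150.0, inf),   // random walk on start frequency
    \ratio, Pbrown(0.25, 0.9, 0.05, inf),     // and on glissando depth
    \dur, 0.04,
    \delta, Pbrown(0.01, 0.25, 0.03, inf)     // and on grain spacing
).play;
)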

you can grab the code here.

so that explains the patch, but the recording you're listening to is a different thing altogether. i had two main goals with this, although i didn't originally set out with them in mind. the first one was to use only the algorithm as a base material, with no realtime gestural control at all --that will come later, be patient-- but to make it sound gestural with subtle mixing techniques. the other goal was to limit my palette to only those two textures (dry vs wet), while keeping the listeners' attention through, again, subtle mixing techniques. no "plug-ins" were used except eq and dynamics processing. i'm realizing now that these two goals are actually related. so what started as an exercise in algorithmic music turned into an exercise in mixing.

next steps are to turn this patch into an instrument, complete with realtime implementations of the mastering-like stuff (eg, eq + dyn processing), and some kind of an interface. there are a lot of different possible controls for this beast, but i'd love to have it be an intuitive, gestural sort of thing. my two options as i see them are buying a playstation style usb controller, with two joysticks, which is higher on ye olde $ / time metric, or to build one from all these linear faders i have lying around and an arduino, which is lower on the $ / time metric. i have to decide which is more of a priority. currently i have neither, so it's moot.

kids' music workshops at unsound festival!

from last sunday, february 7th, to tuesday, february 9th, a small but amazing group of kids, aged 6-12, built electro-mechanical instruments and talked about sound / music in ways even some educated adults might have problems comprehending. we didn't talk about scales, notes, or staves. we explored the sounds you hear every day, like dishwashers, telephones, and traffic. we discussed the multiplicity of sounds-- how each and every sound is different and how some are similar to others. we thought about where sounds come from and why they sound the way they do. most of all, we listened and played.


keith's drone engine

i have been debugging this fm instrument i'm building for a client. (his name is keith o_0 ). i installed supercollider on his machine and updated his class library with the requisite changes (see previous post- nothing has changed in that regard). the demo was pretty successful- this project is turning out to be really fun. there's something really satisfying about watching someone interact with something you've built for them, especially if it's a creative tool. for me personally, the tool's interface and paradigm weren't terribly interesting, until i put it in keith's hands and saw his eyes light up. then the inevitable feature requests came, although honestly i think i made more of them than he did. ok, so i should qualify the above statement: the demo was pretty successful *until* he tried to save and reload some previous settings. a bug showed up that i thought had been ironed out. i've now fixed it. the code may be found here. make sure, if you want to run this on vanilla supercollider, to make the changes documented in this file.

here's a screenshot:

the program consists of 16 oscillators, which can be routed into one another for frequency modulation. more rigorously, the synthesis process is phase modulation, since the audible effects are more predictable, at the expense of slightly weirder math. i like to think the sonic results are pretty intuitive. the interface itself is designed to look like a 16 channel mixer, where you can re-route any channel into any other channel. i know what you're thinking, but audible feedback is not allowed since each oscillator can go to only one output. the pre-amplitude of the input for each channel may be set with the "pres" row of number boxes. this value is kind of like a "trim" setting on the input of a mixer- all the incoming signal of the channel is multiplied by this value. among the feature requests i am entertaining for the next iteration cycle are ringing filters, noise generation, and delays.
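for the uninitiated, the core routing move is just this (a toy two-channel version, nothing like the full 16-channel program):

(
// two "channels": channel 1 routed into channel 2 as a phase modulator.
// the pre value acts like the "pres" trim row described above.
{
    var pre = 2.5;                          // input trim on the modulation
    var chan1 = SinOsc.ar(110);             // modulator
    var chan2 = SinOsc.ar(55, chan1 * pre); // carrier, phase-modulated
    chan2 * 0.2 ! 2
}.play
)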

check out that previous post (linked above) for a delightfully blissed-out video demo! ^_^

hacksaw fuzz

i have been working on this fuzz / tremolo circuit on and off for a few weeks. i am very excited to put it all together, although at the time of the recordings below, it was not housed. i ran into some logistical problems with putting the circuit into the chassis of an old countryman di box, but i think i've figured it out at this point. its unhoused state was messy but totally functional:

[photo: the unhoused fuzz / trem circuit]

the fuzz comes from a variation on craig anderton's circuit with two gainstages. the difference between mine and anderton's is that i used a 2n3904 npn transistor in the feedback path of the first gainstage, in parallel with a silicon diode. the result is a smooth, creamy fuzz with tons of overtones and plenty of sustain. it's nearly identical to the fuzz used in this previous post. the difference is that now the tremolo is optical, resulting from a blinking led and photocell pair. the trem has a variable speed and can also be completely bypassed. since i can't play the guitar to save my life, i invited my friend kunal prakash over to try it out.

here's an audio sample of a fast trem setting.

here's an example of a slower trem setting.

here's an example with less fuzz.

i was very pleased with this pedal's sound on a guitar. also kunal can really shred. we played for a few hours after these samples were made, using my ring mod to create a hybrid texture between his guitar and my laptop. i was livecoding on my eeepc in supercollider, starting with a simple sine tone to test the effect, and eventually experimenting with lowpassed sawtooth tones. sadly, i had stopped recording at this point, so a description is the only documentation i can provide. suffice it to say, it was lots of fun.

this morning, after naming the pedal the "jack chop fuzz," i tried to fit the circuit into the di chassis. in order to fit everything in, i ended up taking a hacksaw to it. thus, i arrived at the perfect name for this pedal: the hacksaw. unfortunately, the archive pages still say "jack chop," so i will accept either name, although "hacksaw" is more appropriate, in my opinion. housed, it looks like this:

[photo: the housed hacksaw pedal]

i've already got one taker. who else wants one? ^_^

up

click here to listen to the sound file.

click here to look at the code for generating the original pulse track.

click here to look at the code for turning the track into an irrational set.

this is a continuing experiment that extends my work in massive irrational rhythmic / harmonic sets. for some theoretical background, consult this earlier blog entry. i have been exploring the effects of taking many copies of a sonic event, retuning it to a large irrational set- generally some equal-temperament scale- and playing all of those copies simultaneously. the resulting copies begin perfectly aligned, but gradually move out of phase with one another and produce a doppler-like shift, with ever-increasing and expanding complexity since the sets never realign. i have tried other irrational sets, as you will see from my fairly messy code, but so far nothing has compared to the geometric series produced by many-toned equal-temperaments. the difference between this code and the earlier experiments is that instead of firing off events directly, i am retuning an entire track of audio to produce something like an irrational delay.
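as a concrete toy version of the process: load a track, then play one copy per step of an equal temperament, all starting together (a sketch; the file name is a placeholder):

(
// retune 12 copies of a track to 12-tet ratios. the copies start aligned
// and drift apart forever, since the ratios are irrational.
b = Buffer.read(s, "pulse_track.aiff".standardizePath);  // placeholder file
{
    var rates = (0..11).collect { |k| 2 ** (k / 12) };   // the geometric series
    Mix(rates.collect { |r| PlayBuf.ar(1, b, r, loop: 1) }) * 0.1 ! 2
}.play
)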

the one issue i have with the process is that the waves fall out of alignment very quickly and perceptibly, and as the process progresses the changes become more subtle. the transformation essentially becomes less interesting as time goes on. i have found one way to combat this is to gradually speed everything up. this works to an extent, and i will continue looking into different curves that might aid this effect further. another set of experiments i have done in the past involved delaying the items in the set to cause the 'singularity point' to occur at a time other than the beginning. i would like to try that approach with this implementation, but that will have to happen later.

the original track, which i may also post later, is derived from a bizarre version of an excitation-response style percussion synthesizer. the filter responses themselves are irrational sets, which i have found produces a nice doppler-esque tail, similar to the inharmonic ringing of a piano string. the pattern itself repeats fairly quickly, and divides the pulse into fifths.

reptilian blues

click here to listen to the aiff file.

click here to look at the code.

this was partially improvised in supercollider. my goal is to start re-incorporating rhythmic elements into my work, while avoiding certain tropes of electronic dance music. ideally, i'd like to continue playing with things like tempo changes and metric subdivisions as i had been before. this piece also uses massive irrational pitch/rhythmic sets, although the main rhythm remains more or less stable. the two samples that i use are a brief recording of contact mic'ed guitar strings and a single distorted kick sample i generated previously. neither of these sources are very apparent, as the samples are very harshly manipulated. the piece's meter, determined by the interaction between the source sample and the function that scrubs through it, is in some larger even multiple of 5, possibly 40.

winter and bloom (dance set from 2007)

side one: winter.

side two: bloom.

i played this set in a basement in 2007. people seemed to like most of it, even the scary parts. i know it's ancient history, but i didn't have this blog back then, and steve wanted to hear it. so i've put it on archive.

set list:
//---winter---

pigs feet
a fern sullied
artichoke on the run
pay attention!

//---bloom---

blue bus (c'mon)
thingfetisher
equos onda
3 quarks for master mark (charmed mix) - feat. miles pearce on flute
piik

(180 bpm)

the set was made in supercollider, digital performer, and ableton live. it basically marked the end of my exploration of dance-music styles, mostly dark psy, happy hardcore, hard house, and drill n bass. in addition to these genres, compositionally i was (and continue to be) obsessed with microtonality, odd meters and polyrhythms. the sound palette i used was very influenced by the work of curtis roads and iannis xenakis. i also regularly did studies where i'd use less than one second of sound material to generate entire 15 minute works. since then, i have almost totally moved away from beat oriented music, but i could definitely see myself revisiting this in the future. so wise up!

what was great about this night in particular was the sense of a truly open, positive community dedicated to experimentation and collaboration. everyone involved had something really unique and beautiful to offer, and i don't just mean the dudes with the gear (although they were awesome too!). only a few times in my life since have i felt such a sense of collective pride and accomplishment. one such night, in recent memory, was the night chronotronic played monkeytown. i sincerely hope to have more nights like this in the future.

let's make it happen!

concept art for crudspds ginormous thing!!

crudspds ginormous thing will be installed at bent 2010!!! stay tuned for updates!!!

the crudspades ginormous thing is an interactive installation by steve litt and me. it will consist of eight self-amplified, electro-acoustic trash sculptures that are activated by steve's crudbox sequencer. since the crudspades ginormous thing derives its sound from amplified physical objects, the user can appreciate the source of the sounds and control them intuitively, creating a wide range of noises. the sound sculptures are constructed from recycled junk, both as a statement of resistance to the throw-away culture that created them, and to subvert their iconic visual language into objects of creative empowerment.

the ‘brain’ of the crudspades ginormous thing is the crudbox, an open-source diy step sequencer designed to turn other devices on or off. instead of playing sampled sounds or controlling a synthesizer, the crudbox works by simply sending power to one of eight outputs. plug any device into an output channel, and that device can be sequenced in a manner instantly recognizable to electronic music fans everywhere. two or more crudboxes can be synced over midi, for virtually endless possibilities.

the sculptures are each unique in look and behavior. they are all brightly colored, dumpster-dived, electro-acoustic instruments that either generate enough sound acoustically, or contain embedded amplifiers and speakers. while most of these objects come already set up, a few of them will be made available for the user to experiment with. this way, the installation will serve as a source not only of passive entertainment, but of active collaboration.

confinement

prototype

click here to listen to the wav file (or play above).

this is a prototype i've been passively tweaking on for a while. i think it's near the point where i'd like it to be. all that remains is a true bypass switch. it's a fairly simple circuit that uses an fsr as an expression surface. there is a switch that engages this function, and it happens after the fuzz. the fuzz is generated with a variant of craig anderton's circuit, with an npn transistor's collector and base bridging the input and output of the feedback bus, respectively. in parallel to this, there is a silicon diode with its ground side facing the output of the feedback bus. schematics will follow once the design is completely set.

the transistor gives a warm, rich fuzz tone and the silicon diode adds plenty of harmonics. the expression pad is engaged such that pressing on it brings the amplitude up. this made the most sense for performance.

the recording was made from two tracks of a fender strat playing through the pedal, one note per track. the rhythms are the result of tapping the expression pad. i expect this to be really cool with my electric piano...

sweet acceleration

click here to listen to the uncompressed aiff file (or play in embedded widget).

this composition was made from field recordings of the 7 train in long island city, and a few of the water treatment plant on the brooklyn side of the creek. i did the plant recordings in the fall, documenting the work in this post. the plant was recorded using a pair of coresound binaural mics, mounted to my head as i practiced my daily meditation on the roof. i recorded the 7 train this past week, late at night so there would be less wind and traffic noise. i used a pair of akg c1000's (cardioids) in an ortf configuration (17cm tip-to-tip, 110° apart). a big thank you to my housemate jake for loaning me the mics and for freezing with me while we stuck those suckers on a pole and chased some trains. after a few hours of utterly frigid conditions, we retired to the court square diner for milkshakes with whiskey in them.

the composition was done mostly in supercollider and sequenced / mixed in logic. it had been a long time (two years maybe?) since i had worked in any kind of daw, and it was interesting to revisit that style of working. i understand that tools like logic are good at doing very specific kinds of things, and the spirit of the piece called for a few of those things. i also used soundhack to perform strict convolutions between streams of particles and the field recordings to derive spectral granulation. this was more efficient (although non-realtime) than performing a partitioned convolution in supercollider, a technique i also used in places. many of the phrases in the piece were derived from pictures of maps of the surrounding area, although i certainly don't expect people to be able to hear this. i also used other formal systems like fractals and irrational sets. when i was looking for inspiration for gestural phrases, i took all of these formal techniques and tweaked them until they said something like what i wanted to say with the material. when this turned out to be insufficient, i drew the rest in by hand.
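for reference, the realtime (partitioned) version in supercollider goes something like this -- a sketch, with a placeholder file name and arbitrary sizes:

(
// partitioned convolution: convolve a stream of particles with a field
// recording used as the impulse response.
var fftSize = 2048;
Buffer.read(s, "field_recording.aiff".standardizePath, action: { |ir|
    var spec = Buffer.alloc(s, PartConv.calcBufSize(fftSize, ir), 1);
    spec.preparePartConv(ir, fftSize);
    { PartConv.ar(Dust.ar(60) * 0.5, fftSize, spec) * 0.3 ! 2 }.play;
});
)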

with this piece, i tried to accomplish a very different set of goals from what i'm used to and comfortable with. for one, i wanted to actually get down to telling a story. so much of my work only implies a narrative, typically an abstract or formal one, instead of telling a concrete story about concrete things. i was inspired by the work of trevor wishart and robert normandeau specifically. actually, if you're familiar with normandeau's work, you might hear some threads of his beautiful composition "tangram" in my piece. i haven't lifted them, obviously, but i had been listening to that piece on repeat during the production and planning of my own piece. i have also been reading wishart's book "on sonic art," which is simultaneously challenging and uplifting to read.

a narrative piece requires very different treatment than other forms. in order to successfully tell a story, the storyteller must play to the audience's willingness to suspend disbelief. this sort of charisma is ineffable and difficult to achieve. this is where audio fidelity weighs in for me. i don't necessarily want to reproduce the exact sound of a train passing the listener at a distance of 3 feet, but i want that option available to me. i want to be able to make it sound like a flock of trains, or a broken train struggling with each inch of track. if everything sounds like washed-out white noise with little dynamic clarity, then there won't be much disbelief to suspend. i believe this happened at the debut, where the sound system was fairly unsatisfactory and the room inappropriate for the kind of listening required to actually hear the piece. again, this is not a universal need. much of my music (and the music of my friends) loves to be compressed. for this particular piece, however, the effect was detrimental (see my previous post for details).

so with that, i leave you with the original recording as i mixed it. you may listen to it in any number of sub-ideal situations, if you like. or, if you want to determine if i'm trying to blame the sound system for my own dissatisfaction, you may listen to it on closer-to-ideal setups, if they're available to you. for a frugal alternative to monitors, i recommend decent headphones in a dark quiet room. enjoy!

chronotronic wonder transducer strikes again!!!1

chronotronic wonder transducer is some kind of weirdo experimental arts collective (i think).

we are more or less comprised of the following personnel:

lori napoleon, ted hayes, steven litt, amy koshbin, mike clemow, oliver rivera-drew, and me (i'm joe mariglio, usually).

last night we played a show at monkeytown, in williamsburg. it was a lot of fun. everybody's performances were very dynamic, and i love the fact that each act is unrelated but the whole show is coherent somehow. also, monkeytown is a really wonderful space, and i'm sad to hear they will be closing after this month due to a legal battle with their landlords. montgomery was very nice to us (for the most part), and everybody there was super helpful. their food looks amazing, although i can't afford anything on their menu. and their taste in music for the front bar was refreshing! when i walked in, they were playing off the tellus audio magazine issue about just intonation! XD

all this being said, i was pretty unhappy about the way my set in particular sounded. while i intended the piece to be short and sweet, i feel like it came off as having not enough material. this was partly a pacing issue and also partly a dynamics issue. i spent the last two days of composing this thing on the dynamics alone. there is a huge difference between the loud and soft bits, which allows for a sensation of progression and phrasing that just didn't come through from where i was seated on stage. on speaking to people who had performed there before, i gathered that the system was intended to be heard from the sides, and that in the middle everything was muddy. the fact that everyone else's set sounded great to me supports this. but i think that horrible mackie sub they have only plays 80 hz, regardless of what's going through it.

i have learned a few lessons that i hope to take with me from this night.

1) it makes sense to "perform" a tape piece (ie pre-recorded), if it is in the spirit of the composition. i stand by the piece.

2) when you do a piece that depends heavily on faithful sound reproduction, make sure you either do it in a space meant for that kind of listening, or bring your own rig.

3) the next experimental music concert i perform will be much more lo-fi friendly.

4) i need to write a composition about the j/z train now. (this will probably not be lo-fi friendly)

in the next post i will upload a link to the piece so you can decide for yourselves what it sounds like. i recommend headphones and closed eyes.

scale calculator (there's >1 way to split an 8va)

click the screenshot for a browser-based demo. (once you calculate a scale, you will not be able to save it unless you download the app.)

this is a simple, geometry-based calculator intended to demystify some of the concepts in just intonation. the boxes correspond to possible notes in a scale, and the vertical lines correspond to positions of equal-temperament tones. these lines are visual guides, and the number of equal-temperament tones per octave is user-adjustable. the ratios become more "complex" (higher-value divisors) towards the bottom of the window. to approximate an equal-tempered scale, simply select the number of tones you'd like, then select the boxes whose *left* sides most accurately line up with the vertical lines, taking into account the occasional trade-off between simpler ratios and a closer approximation. when you are finished with a scale, you can either clear it by pressing "c" or return it by pressing "r". when the scale is returned, a small text file will be created with the date and time as its title, in the directory of the applet. this file contains the array of ratios you chose using the calculator.

KEYBOARD COMMANDS:
"+" - increase number of vertical bars (tones equal temperament)
"-" - reset number of vertical bars to 2
"c" - clear scale buffer
"r" - return scale buffer, printing to file named with date and time, located in app directory (file io won't work in browser version)

i provide this mostly for didactic purposes, but i also use this personally to obtain some of my scales and i thought it might help other people interested in breaking into microtonal theory.
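if you'd rather compute than eyeball, the same trade-off can be searched by brute force. here's a sketch in supercollider (the denominator cap is an arbitrary choice):

(
// for each step of 12-tet, find the just ratio (denominator <= 16)
// that lands closest in cents.
var maxDenom = 16;
(0..12).do { |k|
    var target = 2 ** (k / 12), best, bestErr = inf;
    (1..maxDenom).do { |d|
        var n = (target * d).round.asInteger.max(d);  // keep the ratio >= 1
        var err = ((n / d).log2 * 1200) - (k * 100);  // error in cents
        if (err.abs < bestErr) { bestErr = err.abs; best = [n, d] };
    };
    "step %: %/%, off by % cents".format(k, best[0], best[1], bestErr.round(0.1)).postln;
};
)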

downloads: OSX WIN LNX
enjoy!

cold war fuzz

click here to listen to the wave file.

the above pedal is basically a squelchy octave-drop fuzz effect, with a fair amount of self-modulation and hard-clipping due to the silicon diodes used. the fuzz tone component and the octave-drop component both make use of cmos logic ic's, which is part of the reason why the octave drop is so easily fooled and modulates around.

the pedal has no bypass: the switch literally turns it on or off. at some point during testing, the unit was passing signal through without needing to be connected to power; however, this is no longer the case. the three knobs are, from top (big red) to bottom: input gain, ???, output gain. the pedal is a very nonlinear effect, and its knob settings interact with each other and the source material in unpredictable ways.

i have tested this beast on guitars, electric pianos, and synths. a previous version also sounded great on bass and vocals. i love the inhuman, 8-bit sound of the dropped octaves. the fuzz is surprisingly smooth and complex. the build is also pretty solid. who wants one??

keith's function generator

click here to watch the video.

linked above is a demo of the first iteration of this compositional tool i'm developing for a client. the admittedly lame working title is "keith's function generator." this will most likely change as the development cycle progresses.

we decided early on that the layout should look and feel a lot like a mixer, since it's a familiar paradigm for keith. incidentally, that choice also makes life simpler for me because the requisite GUI primitives are readily available in SuperCollider/Cocoa.

it was important that this instrument provided a wide array of routing possibilities, while remaining simple and robust to operate. this required a few interesting (at least to me) programming sleights of hand, to ensure that all routing possibilities were useful and not destructive.

the intended application of this tool is not a performance interface but a compositional one. this means that keith will use the program to generate preset files, and during performances he will select between these presets, rather than twiddling the virtual knobs on the fly as i do in the video. regardless, the video demonstrates basic tweaking, routing, and saving operations in the GUI.

click here to look at the source code.
(N.B. if you want to run the code on your own machine, you should take a look at this log file, where i list all the changes i'm making to the vanilla SC class library.)
click here to ogle over the screen shot.

dresses like earths

click here to look at the code.

click here to listen to the mp3 (or play in the embedded widget above).

i recorded this livecoding rehearsal on the bus back to new york from pennsylvania. it was done in supercollider, starting from a blank document. i am practicing with using convolution in livecoding contexts because i really love the flexibility afforded by the technique. here, i convolve a brief recording of contact mic'ed sculpture wire with band-limited impulses. as i improvise with the code, i eventually arrive at a configuration that uses feedback. ultimately this kills my soundcard and i have to stop, but i got a solid 37 minutes out of it before that happened. this rehearsal contributes to my livecoding practice as well as my general inquiry regarding convolution techniques. notice how i increase the server's memory size by several orders of magnitude to allow for such a massive convolution window.
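the skeleton of the setup, minus all the improvised evolution, is roughly this (a sketch; the file name, memory size, and frame size are placeholders):

(
// raise server memory before boot, then convolve band-limited impulses
// with a short recording of the sculpture wire.
s.options.memSize = 8192 * 128;   // kilobytes of realtime memory
s.waitForBoot {
    Buffer.read(s, "sculpture_wire.aiff".standardizePath, action: { |wire|
        { Convolution2.ar(Blip.ar(3, 400) * 0.5, wire, framesize: 16384) * 0.5 }.play;
    });
};
)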

the rhythms and formants are derived from the same pitch constellation: [1, 1.067, 1.422, 1.666, 1.875, 2].
in more traditional musical language this is: [tonic, minor second, sharp fourth, flat sixth, major seventh, octave].
the rhythms get denser at some point because i convolve the array with itself to yield: [ [ 1, 1.067, 1.422, 1.666, 1.875, 2 ], [ 1.067, 1.138489, 1.517274, 1.777622, 2.000625, 2.134 ], [ 1.422, 1.517274, 2.022084, 2.369052, 2.66625, 2.844 ], [ 1.666, 1.777622, 2.369052, 2.775556, 3.12375, 3.332 ], [ 1.875, 2.000625, 2.66625, 3.12375, 3.515625, 3.75 ], [ 2, 2.134, 2.844, 3.332, 3.75, 4 ] ].
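"convolving the array with itself" here just means taking its outer product, which you can verify in a couple of lines:

(
// the 6x6 table above is the outer product of the constellation with itself
var ratios = [1, 1.067, 1.422, 1.666, 1.875, 2];
ratios.collect { |a| (ratios * a).round(0.000001) }.do(_.postln);
)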

just in case you were curious. hopefully that is meaningful to someone. ^.^