Nuclear Rapture - 23:58

It’s two minutes to midnight. For the last 2.5 million years or so, we’ve lurched along a hairline equipotential in the vast, catastrophic universe. Rapture is so near you might even be able to peer over the ridge. This is where all that becoming has brought us. Finally, we will know instant solidarity. Annihilation unites us all.

Mechanical Jerk

"You know, a lot of great music was written during the Nixon regime. We're on the verge of a social movement not seen since then, so there's an urgent need for any music or noise that helps channel our collective energy and anger." - T.G.

"You got it." - me

Warm Jets

Warm Jets is a new VST plugin I've made. It provides three independently bypassable, high-quality effect units in a network with feedback, which allows for a diverse range of transformations, from subtle psychedelia to bleary washes. Warm Jets can easily self-oscillate, making it a powerful synthesis tool in its own right.

Etape

Creosota is an electro-folk project featuring Adam Tinkle on guitar and vocals, and Joe Mariglio on laptop & contact mics. Their live sound often incorporates complex rhythms, bluesy folk idioms, field recordings, and massive vocal textures. Skittering glitch-hop beats co-mingle with fingerpicked guitar and the sounds of distant (sometimes not-so-distant) trains.

Bovina Clovis

Shortly after their tour, the EYEBALL FRIENDS became infected with a strange and seductive sickness. They labored day and night, sleeping next to their tools, only to wake up in the same cold sweat, and with the same inexplicable drive to press onward. All told, the episode lasted a little under two weeks, but by the time it had run its course, a terrible and gorgeous new member of their band was born.

MLEF: the blended fetal eyeball paste tour

Last week, MOLTEN LAVA EYEBALL FIEND played a few shows in Texas and a show in New Mexico. We're calling this our first tour, although we really see it as a dry run for a much larger tour around SXSW. For the most part, the networking and brainstorming were invaluable to the EYEBALL FRIENDS. Also, it was really exciting to find new audiences that seemed to enjoy what EYEBALL FORNICATION is about. Several recordings document this.

MUTANT SEAL HOMICIDE

MOLTEN LAVA EYEBALL FORCAST is:

Adam Goodwin: Upright Bass, Contact Mics
Joe Mariglio: Loathing & Disgust

This time the EYEBALL FRIENDS also incorporated a laptop into their noise matrix. This angered the mutant seals.

BUSHWICK

MOLTEN LAVA EYEBALL FIEND is:

Adam Goodwin: Double Bass, Contact Mics
Joe Mariglio: Analog Noise Network

SCHOOL NIGHT

MOLTEN LAVA EYEBALL FIEND is:

Adam Goodwin: Double Bass, Contact Mics
Joe Mariglio: Terror and Revulsion

Wrestling Jacob

wrestling_jacob by 3_spds

a cover of a hymn by charles wesley.

performed live by adam tinkle and joe mariglio (me).

I have a recurring dream where I'm trying to squeeze the life out of a shape-shifting opponent. It's a frustration dream; I can never make my partner submit. If I loosen my grip, I find myself being overtaken. Either I or my consort will typically start whispering things like "I love you" as we suffocate each other.

"Wrestling i will not let thee go, till i thy name, thy nature know." - Charles Wesley

who is anybody?

On Wednesday, December 1st, I had the pleasure of sharing the stage with MC Justin Zullo (AKA Conundrum), Kimbridge Balancier (AKA sPeakLove), and Diana Cervera. I DJ'ed (from the command line!) for Conundrum for the first half, and played a solo set for the second. Due to a few technical problems, I was unable to record the set with Conundrum, but I was able to reconstruct the solo set. I've decided to release this set online as an EP, in a manner similar to "gosling_glider_gun."

give me everything.

on thursday, october 28th, this piece made its debut at brooklyn college for their biannual "international electroacoustic music festival." despite the fact that i couldn't be there in person, i couldn't pass up the opportunity to have it premiere in brooklyn, given the subject matter. i'd been planning this piece for two years, and it was awesome to finally see it through to completion.

gosling_glider_gun

a recording of a short set i performed at the che in san diego, on saturday, september 25th. my primary tools were supercollider, matlab, and logic. i also used soundhack and audacity. all original unreleased material, slated for free digital release with abattoir records in new york sometime this year.

sweet acceleration

click here to listen to the uncompressed aiff file (or play in embedded widget).

this composition was made from field recordings of the 7 train in long island city, and a few of the water treatment plant on the brooklyn side of the creek. i did the plant recordings in the fall, documenting the work in this post. the plant was recorded using a pair of coresound binaural mics, mounted to my head as i practiced my daily meditation on the roof. i recorded the 7 train this past week, late at night so there would be less wind and traffic noise. i used a pair of akg c1000s (cardioids) in an ortf configuration (17cm tip-to-tip, 110° apart). a big thank you to my housemate jake for loaning me the mics and for freezing with me while we stuck those suckers on a pole and chased some trains. after a few hours of utterly frigid conditions, we retired to the court square diner for milkshakes with whiskey in them.

the composition was done mostly in supercollider and sequenced / mixed in logic. it had been a long time (two years maybe?) since i had worked in any kind of daw, and it was interesting to revisit that style of working. i understand that tools like logic are good at doing very specific kinds of things, and the spirit of the piece called for a few of those things. i also used soundhack to perform strict convolutions between streams of particles and the field recordings to derive spectral granulation. this was more efficient (although non-realtime) than performing a partitioned convolution in supercollider, a technique i also used in places. many of the phrases in the piece were derived from pictures of maps of the surrounding area, although i certainly don't expect people to be able to hear this. i also used other formal systems like fractals and irrational sets. when i was looking for inspiration for gestural phrases, i took all of these formal techniques and tweaked them until they said something like what i wanted to say with the material. when this turned out to be insufficient, i drew the rest in by hand.
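
for reference, a partitioned convolution in supercollider looks roughly like this (a minimal sketch, not the patch from the piece; the fft size, and the bundled sample standing in for an impulse response, are my own stand-ins):

```supercollider
(
// load an impulse response; the bundled sample stands in for a real ir
~fftSize = 2048;
~ir = Buffer.read(s, Platform.resourceDir +/+ "sounds/a11wlk01.wav");
)
(
// PartConv wants the ir pre-chopped into spectra of fftSize-sample partitions
~spectrum = Buffer.alloc(s, PartConv.calcBufSize(~fftSize, ~ir), 1);
~spectrum.preparePartConv(~ir, ~fftSize);
)
(
// convolve a stream of particles (here, plain impulses) with the ir
{ PartConv.ar(Impulse.ar(2), ~fftSize, ~spectrum, 0.5).dup }.play;
)
```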

with this piece, i tried to accomplish a very different set of goals from what i'm used to and comfortable with. for one, i wanted to actually get down to telling a story. so much of my work only implies a narrative, typically an abstract or formal one, instead of telling a concrete story about concrete things. i was inspired by the work of trevor wishart and robert normandeau specifically. actually, if you're familiar with normandeau's work, you might hear some threads of his beautiful composition "tangram" in my piece. i haven't lifted them, obviously, but i had been listening to that piece on repeat during the production and planning of my own piece. i have also been reading wishart's book "on sonic art," which is simultaneously challenging and uplifting to read.

a narrative piece requires very different treatment than other forms. in order to successfully tell a story, the storyteller must play to the audience's willingness to suspend disbelief. this sort of charisma is ineffable and difficult to achieve. this is where audio fidelity weighs in for me. i don't necessarily want to reproduce the exact sound of a train passing the listener at a distance of 3 feet, but i want that option available to me. i want to be able to make it sound like a flock of trains, or a broken train struggling with each inch of track. if everything sounds like washed-out white noise with little dynamic clarity, then there won't be much disbelief to suspend. i believe this happened at the debut, where the sound system was fairly unsatisfactory and the room inappropriate for the kind of listening required to actually hear the piece. again, this is not a universal need. much of my music (and the music of my friends) loves to be compressed. for this particular piece, however, the effect was detrimental (see my previous post for details).

so with that, i leave you with the original recording as i mixed it. you may listen to it in any number of sub-ideal situations, if you like. or, if you want to determine if i'm trying to blame the sound system for my own dissatisfaction, you may listen to it on closer-to-ideal setups, if they're available to you. for a frugal alternative to monitors, i recommend decent headphones in a dark quiet room. enjoy!

a tender palsy

click here to listen to the aiff file.
click here to look at the code (messy, i'm warning you).

when i was livecoding in the park yesterday, i was experimenting with sets of impulses. as i have been interested in irrational rhythmic relationships recently, i was playing with an octave of 32-tone equal temperament, from 1 - 2 hz. i love the sound of filtered impulses, especially when i use convolution as the filter, but i'm often disappointed with windowed convolutions because they don't provide ample time domain resolution in the response. while this is fairly trivial in certain contexts, the lack of resolution is very pronounced when attempting a convolution with something close to an ideal impulse, which is what i'm doing here. it occurred to me to use several layers of convolution, with different window sizes, to allow for more resolution. i had tried this a few years ago with somewhat unsatisfactory results, because i was using nested filters to cross-over between resolutions. this improvisation does no such thing. there are no cross-overs implemented here, so i imagine there's a fair amount of combing.
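
the layering reads roughly like this in sc3 (a sketch of the idea, not the reconstructed patch; the kernel buffer, frame sizes, and impulse rates are placeholders):

```supercollider
(
// assumes ~kernel is a mono buffer already loaded with an impulse response,
// e.g. ~kernel = Buffer.read(s, Platform.resourceDir +/+ "sounds/a11wlk01.wav");
SynthDef(\parallelConv, { |out = 0, kernel, amp = 0.1|
    var imp, sig;
    // an octave of 32-tone equal temperament in the rhythmic domain, 1 - 2 hz
    imp = Mix.fill(32, { |i| Impulse.ar(2 ** (i / 32)) }) * 0.1;
    // the same stream convolved at three window sizes and summed with no
    // cross-overs, so some combing is expected; small frames keep
    // time-domain detail, large frames keep spectral detail
    sig = Mix([256, 2048, 16384].collect { |size|
        Convolution2.ar(imp, kernel, framesize: size)
    });
    Out.ar(out, (sig * amp).dup);
}).add;
)

// Synth(\parallelConv, [\kernel, ~kernel]);
```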

the recording i have provided does not follow the standard rules for livecoding because it is a reconstruction of a livecoded event. my laptop had one of its rare freak-outs this morning just before i could save yesterday's work. saving work is one of those areas where my process needs to develop. grrrr. anyways, reconstructing the code was a good exercise because i could then expand on the idea. i think i've found some interesting territory. i'll definitely be refining this parallel convolution process, ideally with some kind of cross-over that doesn't suck. now that i know a little more about what i'm doing, i think it's possible.

the piece itself made me think of some of the sleep incidents i've had growing up. i suffer from sleep paralysis, a disorder often lumped in with 'night terrors.' they aren't always bad; in fact, sometimes they can be quite amorous, if you get my drift, but they always involve a large portion of my body being paralyzed, while i lie there completely alert. occasionally this is accompanied by the sensation of my head exploding. sometimes i'm able to move my head and talk, in which case i try to yell as loud as i can, in order to wake myself up. i remember, as a child, being able to predict one of these episodes coming on because i could hear it. the sound was a kind of whooshing noise, a bit like an off-balance dryer, but more intense. this recording reminds me of that a little, hence the title 'a tender palsy.'

ps - i decided not to embed or link to the mp3 because lossy compression artifacts are absolutely wretched on material like this.

mornings sit on roofs

every morning i spend in greenpoint, i climb onto the roof and sit for about an hour. i have been doing this since i moved into the place in july. my practice is nothing fancy, mostly just counting my breaths up to 21 - if i make it that far - and starting over. sometimes i repeat specific phrases.

the neighborhood is rather industrial. largely it consists of auto-body garages, construction shops, and the water treatment plant. the first thing i noticed about the area was the beautiful swirling hiss that the plant emits. on some days, the various metalworking machines pound polyrhythms, their origins confounded by their reverberations through the concrete.

the recording was made on a pair of binaural microphones. they can very closely simulate the experience of being there listening to the sounds they record. i recorded for just over an hour straight, and i left the recording completely unedited and unprocessed. it won't stay that way, but i wanted to start by presenting the entire hour untouched. unfortunately, it was somewhat windy today and the wind-guards couldn't keep everything out.

i truly enjoy these mornings. it was a pleasure bringing you along with me this time.

hubris, for tape

i have been playing with this reel-to-reel, which i fixed for one of my clients, for the past few days. it has been an amazing experience working with tape. for the recording, i used only analogue equipment, most prominently an ancient and tortured fostex mixer that i used for "no input" mixing. additionally, i used a hacked small stone analogue phaser, a commercial distortion pedal, and two of my own pedals: a distorted bandpass kind of thing, and an arpeggiating octave divider. there is also some amplified spring in there. when i finished the improvisation, i digitized several versions of the tape, with slight variations in mix, tape speed, and additional slinky reverb. i arranged these different versions so they would line up at some points, creating a manual flange / delay effect. truly beautiful things happen to tape when you overdrive it. the tone is so dark and warm. i'm considering recording some computer music to tape and re-digitizing it, just for the saturation. i don't want to give it back!

(photo: fostex with 2025 and small stone)

yeah -  that one pedal's housing is from an old-school motion detector.  that's the octave arpeggiator, otherwise known as the lunar lounge 2025.

singularities

this post continues the idea from my previous post about massive irrationally related sets. one impressive property of these sets is that they extend infinitely and never repeat, despite being completely aligned at the outset. another way of saying this is that each set has one and only one singularity. we can assume that any set of irrationally related periodic signals, extended in both directions of time, will have one and only one singularity. but how can we predict when that singularity will occur? if our set of periodic signals happens to be an equal temperament, we can use the following formula to delay the singularity by one cycle of the lowest signal in the set:

for every signal x: x_delay = 1/f0 - 1/x

where f0 is the frequency of the lowest signal in the set, and x is the frequency of the signal to which we apply the delay.

to delay the singularity by more than one cycle, simply replace those 1's with the number of cycles.

we can now generate sequences of mostly periodic signals whose phase we occasionally manipulate to get singularities whenever we want them.
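
the formula above can be sketched directly in sc3 (a minimal sketch under the post's assumptions; f0, the set size, and n are free choices):

```supercollider
(
var f0 = 1.0;                    // frequency of the lowest signal in the set, hz
var rates = (0..31).collect { |i| f0 * (2 ** (i / 32)) };  // 32-tet octave, 1 - 2 hz
var n = 1;                       // push the singularity back by n cycles of f0
// for every signal x: x_delay = n/f0 - n/x
var delays = rates.collect { |f| (n / f0) - (n / f) };
delays.postln;  // the lowest signal gets no delay; faster signals get more
)
```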

click here to look at the code.
click here to listen to the mp3.

to make this improvisation, i not only play with the definitions of the streams as they run, but also the synthdefs. among other things, i mess around with the probability of a grain being muted. as you can see from the commented-out bits, i was trying to create fadeIn and fadeOut routines to automate this with reflexive code.

recursion

click here to listen to the mp3.

click here to look at the code.

i wrote much of today's patch on the bus back to NYC.  i'd done some experiments in the past using sc3's JITlib to make recursive patterns, so i decided to start there and see where i ended up.  i kind of like where that paradigm took me, especially with the drums.  i'll definitely be revisiting it in the future.

* * *

here's a recording of a somewhat cleaner version of that patch.  i removed some of the reverb from the low end and softer bits.  (thanks to randall for the suggestion!)  by adding a gate before the reverb, and then mixing the original (ungated) material with it, i was able to make the quieter parts drier.  to remove some of the low end muck, i used a 2nd order high-pass filter before the control signal for the gate.  this way, the low end has to work harder to open the gate. in sc3:

SynthDef(\master, {|in = 16, out = 0, wet = 0.125, thresh = 0.1|
var input, signal;
// stereo input from an adjacent bus pair
input = InFeedback.ar([in, in+1]);
// expander acting as a gate, keyed by a high-passed copy of the input
// so the low end has to work harder to open it
signal = Compander.ar(input, HPF.ar(input, 1600), thresh, 10, 1, 0.01, 0.01);
// crossfade the gated reverb against the dry (ungated) input
signal = Mix.ar([FreeVerb2.ar(signal[0], signal[1], 1, 0.7, 0.9)*(wet**2), input*((1-wet)**2)]);
signal = Limiter.ar(signal, 0.9);
Out.ar(out, signal)
}).store;

half-songs carved from intimacy

click here to listen to the piece.

click here to look at the code.

this piece is a further development of the material from this patch.  the samples i use here come from an extremely intimate source: the last night my partner and i spent together before she left me.  it's liberating, in a way, for me to use this material to make something beautiful, and it kind of shocks me that i actually ended up documenting the event in the first place.  it's a little like what i described in a previous post about the method i used to gather the samples: the recordings are like jars of earth that sailors keep for long treks across the sea.  here, the process is one of sculpture in the sense of chipping away at masses of the material to derive shapes.  you can hear incomprehensible bits of candid speech, quiet artifacts of movement, and the motors and birds from the street outside the window in the small berkeley apartment.  from these sources, a fragile little half-melody almost shows itself, and then vanishes into the everyday.