automatic error-checking over an osc network

so i have been writing a composition for a cluster of linux compute nodes running scsynth. in doing this i have learned quite a bit about the perils of networking with OSC over UDP. since UDP gives no delivery guarantees, packets get dropped all over the place, so i set up an elaborate system for catching such errors and dynamically restoring the node hierarchy on a remote server. the approach i settled on is to clear the dynamically allocated nodes while preserving the static nodes i'm using for mixing and routing. here is my error checker:
~error = OSCresponder(z.addr, '/fail', {|t, r, msg|
	// on an /n_set failure, ask the remote server for a snapshot of its node tree
	if(msg[1].postln == '/n_set', {
		z.sendMsg(57, 0, 0); // 57 = /g_queryTree on group 0, no control values
	});
}).add;

~tree = OSCresponder(z.addr, '/g_queryTree.reply', {|t, r, msg|
	// collect the dynamically allocated node IDs (everything >= 1000)
	~array = msg.select({|x| x.isInteger and: {x >= 1000}});
	~array = ~array.sort.postln;
	Routine{
		~array.do{|x| z.sendMsg(15, x, 'ovr', 0.5)}; // 15 = /n_set: lengthen each release
		0.125.wait;
		~array.do{|x| z.sendMsg(15, x, 'gate', 0)};  // close every envelope gate
		4.wait; // suppress positive feedback before restarting
	}.play;
}).add;

so ~error is a responder waiting for a failure message to come through. when one arrives, the responder sends a "query tree" message that asks for a snapshot of the node tree on the remote server. on receipt of the reply, the ~tree responder parses the message, finds all the dynamically allocated node IDs, lengthens the release of each node's envelope, and closes all the gates. then a 4 second wait counts off to ensure positive feedback is suppressed.

this method works swimmingly, most of the time. the only issue is that, since both incoming and outgoing packets can get lost, sometimes an error will occur and then the notification that the error has occurred will itself get lost. this means the error has the potential to go completely unreported. to combat this, i am deliberately throwing errors at regular points in the phrasing to ensure a fresh start for each phrase. i really dislike this, but there isn't much else i can do about it. errors happen often enough that i am considering making synthdefs that free themselves after a period of time, instead of this method of opening and closing envelope gates. gated envelopes give more explicit control over when layers occur, but the drawback is fragility in the face of network instability. for future compositions i will probably still use this error-checking mechanism, or something like it, but i will certainly change my control strategy once more. whereas this piece works almost entirely off of one thread in series, with only the error checker operating on a separate thread, i will be taking advantage of parallelization and forking in the next piece. i also think i can whittle down the number of messages i send over osc by finding redundancies in the code.
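the self-freeing idea might look something like this minimal sketch (the synthdef name, params, and sound are my own placeholders, not anything from the piece):

```supercollider
// hypothetical self-freeing synthdef: no gate to close over the network --
// the envelope's doneAction frees the node on the server when it ends,
// so a lost /fail reply can't leave nodes stranded
SynthDef(\selfFreeing, {|freq = 440, amp = 0.1, dur = 2|
	var env = EnvGen.kr(Env.linen(0.01, dur, 0.5), doneAction: 2);
	Out.ar(0, SinOsc.ar(freq, 0, amp * env));
}).send(z); // send to the remote server

// spawn one remotely: /s_new (9), server-assigned ID (-1), head of group 1
z.sendMsg(9, 'selfFreeing', -1, 0, 1, 'freq', 550);
```

the trade here is exactly the one described above: the node's lifetime is fixed at spawn time, so layers can't be held open or cut short later, but nothing about its death depends on a round trip over the network.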

also, i've been messing with performance situations that necessitate an external server. i have a track in progress right now that is basically a remix of the cluster piece, using a much more accurate version of the 'Codecs' spectral granular resynthesis patch (basically a time-domain wavelet transform). this sample-accurate analysis / resynthesis patch resides completely within a synthdef, and so can run across a network using osc messages only for higher-order parameters, such as spectral envelope shaping, variable wavelet envelopes, and frequency- and time-domain smearing, shifting, and erosion. since it is a realtime process, it lends itself well to use over a network.
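to illustrate what "osc commands only for higher-order parameters" means in practice (the node ID and control names below are hypothetical placeholders, not the patch's real controls), steering the running synthdef is just a trickle of /n_set messages:

```supercollider
// assuming the resynthesis synthdef is already running on the remote
// server as node 2000; all the heavy DSP stays server-side, and only
// these small parameter updates cross the network
z.sendMsg(15, 2000, 'smear', 0.3, 'shift', 1.5); // 15 = /n_set
z.sendMsg(15, 2000, 'envShape', 0.8);
```

a dropped packet here just means a parameter change is missed, not that the audio breaks, which is part of why this kind of process tolerates UDP so well.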

one of my future goals is to build an analogue routing matrix, controllable over the serial port (i.e. from within supercollider), that will connect all the nodes together. this would be good for chaotic feedback-based systems as well.

other thoughts about the future lead me to acoustic analogues of granular synthesis, more specifically the use of piezos and solenoids activating resonant chambers of different sizes to resynthesize the human voice.  actually it could all be accomplished with piezos fairly easily just using a massively multichannel audio interface.  if that's not a job for a supercomputer, then i don't know what is.
