A dully intelligent algorithm

Still pretty soft in the attic, this new load-balancing algorithm distributes CPU load among n servers by selecting the server with the lowest average CPU for that cycle and sending it the next synthesis job. It makes sound! It sounds pretty good! Only occasionally does it hiccup. This could be improved by letting the client keep track of some data about each synthesis job (i.e. how long it will last) and selecting the appropriate server for the job. I don't know, maybe that won't help at all; perhaps some kind of global note-stealing scheme would work better. A paradigm for recording and summing this final data would also be nice. (A rough sketch of the selection step follows below.)

I'm currently listening to the fake cluster render a track I recorded at 1/16th speed, in terms of 16-cycle sinusoid wavelets. 8-cycle wavelets also sound pretty good, and they reduce some strain on the synthesis nodes. My theory is that the ring-mod-like distortions I was hearing in earlier experiments are the result of the synchronous triggers exceeding the server's control rate. The absence of this distortion in a multi-server context suggests that a little diffusion (on the level of a few samples, perhaps) was all that was needed to clean the signal up.

While the smaller particles reduce strain on the servers, there are still a few glitches, especially in broad or rapidly changing spectra. Maybe non-realtime is the only way to avoid the glitches? That would be incredibly lame. There must be a way to secure the fidelity of the resynthesis...
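For what it's worth, here is a minimal Python sketch of that lowest-average-CPU selection step. The names (pick_server, cpu_history, the node ids) are hypothetical stand-ins, not taken from the actual client code; it just illustrates the idea of picking the least-loaded node each cycle.

# Sketch only: pick the node whose average CPU reading for the
# current cycle is lowest and hand it the next synthesis job.
def pick_server(servers, cpu_history):
    """servers: list of node ids; cpu_history: node id -> recent CPU readings."""
    def avg_cpu(s):
        readings = cpu_history.get(s, [])
        return sum(readings) / len(readings) if readings else 0.0
    return min(servers, key=avg_cpu)

# Example with made-up readings for one cycle:
servers = ["node-a", "node-b", "node-c"]
cpu_history = {
    "node-a": [0.62, 0.58],
    "node-b": [0.31, 0.35],
    "node-c": [0.47, 0.49],
}
print(pick_server(servers, cpu_history))  # -> "node-b", the least loaded node

The duration-aware variant mentioned above would just swap avg_cpu for a cost estimate that also folds in how long each node's queued jobs are expected to run.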
