tdat systems testing: a bottom-up approach
The vocal piece and its corresponding realization as an instrumental piece use the following algorithm:
Four players share an identical sourcetext, divided into four sections. Each player has all four sections.
Each player has a unique ruleset consisting of four trigger words, each linked to one of the four sections of the sourcetext.
The piece begins with one player reading from anywhere in the text. The player reads ten words from her current section aloud. Since a player may start anywhere within a section, she wraps around to the top of the section when she hits the end.
When a player hears one of the other three players say one of her trigger words, she must move to the section of the sourcetext associated with that trigger word in her ruleset, beginning anywhere within the section. Players do not trigger themselves. If a trigger word would move her to the section she is already reading, she ignores it.
An end state is reached when all four players have stopped reading.
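The mechanics above can be sketched in a few lines. This is a hypothetical Python illustration with names and data layout of my own choosing, not the piece itself (which is played by people):

```python
import random

# Sketch of the rules: each player has a ruleset mapping trigger words to
# section indices, a current section, and a position within that section.
def make_player(name, ruleset, sections):
    """ruleset: {trigger_word: section_index}; sections: list of word lists."""
    return {"name": name, "ruleset": ruleset,
            "sections": sections, "section": None, "pos": 0}

def speak(player, n=10):
    """Read n words aloud, wrapping to the top of the section at the end."""
    words = player["sections"][player["section"]]
    out = []
    for _ in range(n):
        out.append(words[player["pos"]])
        player["pos"] = (player["pos"] + 1) % len(words)
    return out

def hear(player, word):
    """On hearing one of my trigger words, jump to the associated section,
    starting anywhere within it; ignored if I'm already reading it."""
    target = player["ruleset"].get(word)
    if target is not None and target != player["section"]:
        player["section"] = target
        player["pos"] = random.randrange(len(player["sections"][target]))
```

A game loop would then interleave `speak` and `hear` across four such players until everyone stops.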
In the above description I refer to the smallest significant unit of text as words, but this clearly need not be the case. It is actually a much easier prospect to listen for longer events such as phrases or sentences. Scale is a huge factor. For the instrumental realization of this piece I am planning to have my performers listen for musical phrases or contours, not absolute pitch.
I wrote a small class library in SuperCollider for handling simulations of rulesets. There are two classes: a player and a game. In various implementations ( 0 1 2 3 ) I have been using them slightly differently to pull different kinds of data from them. I started by running a few lines of code that generate sourcetexts with pseudorandom numbers. I was able to collapse rulesets and sourcetexts into one axis by keeping the rulesets the same throughout. I used the ASCII characters [a-zA-Z] to stand for ‘words’, and the ruleset matrix remains
A B C D
E F G H
I J K L
M N O P
where the rows are players and the columns are the sections triggered by each character. Since all other ASCII letters trigger nothing, they are interchangeable; what is really significant is where each trigger word is placed within the text. To simulate playing the game without turns, I had each player fork off and make moves after pausing for an exponentially weighted random amount of time, spanning from 1x to 2x a base duration. I am working, then, with sourcetexts of 52 ‘words’, divided into 4 sections with 13 ‘words’ in each.
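That setup is easy to reproduce outside of SuperCollider. Here is a Python sketch of it, with the fixed ruleset matrix above and an assumed shape for the pause function (the exact weighting in my SC code may differ):

```python
import random
import string

# The fixed ruleset matrix: rows are players, columns are the sections
# each trigger letter points to.
RULESETS = [
    {"A": 0, "B": 1, "C": 2, "D": 3},  # player 1
    {"E": 0, "F": 1, "G": 2, "H": 3},  # player 2
    {"I": 0, "J": 1, "K": 2, "L": 3},  # player 3
    {"M": 0, "N": 1, "O": 2, "P": 3},  # player 4
]

def random_sourcetext(rng=random):
    """Shuffle the 52 letters a-zA-Z and split into 4 sections of 13 'words'.
    Since the rulesets never change, only the placement of trigger letters
    within the shuffled text distinguishes one system from another."""
    letters = list(string.ascii_letters)
    rng.shuffle(letters)
    text = "".join(letters)
    return [text[i * 13:(i + 1) * 13] for i in range(4)]

def pause(base, rng=random):
    """Exponentially weighted random pause spanning 1x to 2x a base duration
    (an assumed formulation of the timing described above)."""
    return base * 2 ** rng.random()
```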
When I first tested this piece out with people, I would start with a sourcetext that I liked, divide it up by hand, and perform word-frequency analysis to arrive at a ruleset. This was incredibly labour-intensive. I also found that despite certain top-down constraints I worked within, such as maintaining a balanced transition matrix or placing trigger words for each player in each section, the systems would sometimes misbehave: we’d get stuck in a loop, the system would end prematurely, or players would miss out on reading a particular section of material entirely. So many sensitive factors go into the initial conditions of this system that parameterizing them and coming up with some linear solving strategy seemed way out of my league. And initial conditions aside, the mere fact that these games are played asynchronously adds a further level of inconsistency. There was no way I could be sure that my intuitions in composing the rulesets were guiding the outcomes at all! Plus, how many times can you really find four people willing to try out this exercise, even in grad school?
Using this class library, I’m able to test rulesets at extremely high speeds and in high numbers, generating massive datasets from the outcomes of hundreds of games. I started by randomly generating about 50 systems and testing them for longevity, using a metric that makes sense given that we can’t measure time in synchronous turns. From this first round, I selected 13 systems whose longevity metrics were significantly higher than the rest and tested those again, this time also generating Markovian transition matrices to determine the probability of a particular rule being carried out at any time. A more balanced matrix suggests a more even distribution of sections read by players. Finally, I homed in on two systems whose longevity scores were an order of magnitude higher than the others, and ran tests on both to determine the average length of a game started by each player. During this process, I selected the system that will become my piece by averaging the longevity scores across all four starting players: tHlcqbunKdrifgojIPEDLXwxmNGMQVJRahZBFUkWsOeAyCpTYzvS. I’ve been calling it ‘tHl’ for short. The initial-condition preference weights end up thusly: [ 0.26285490520858, 0.45210917667393, 0.18168356048467, 0.10335235763283 ], so player 2 should start ~45% of the time and player 1 ~26% of the time.
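Turning those preference weights into a starting player is a simple weighted draw. A Python sketch (the function name and structure are mine, not from the SC library):

```python
import random

# Initial-condition preference weights for tHl, as reported above.
START_WEIGHTS = [0.26285490520858, 0.45210917667393,
                 0.18168356048467, 0.10335235763283]

def pick_starter(weights, rng=random):
    """Weighted choice of which player (index 0-3) begins the piece."""
    r = rng.random() * sum(weights)
    acc = 0.0
    for i, w in enumerate(weights):
        acc += w
        if r < acc:
            return i
    return len(weights) - 1
```

Over many performances, player 2 (index 1) should open roughly 45% of the time, player 1 roughly 26%.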
To programmatically allow for changes in density, I would like to try having certain non-trigger words resolve to silences. Because I decided this while I was running tests (a nice affordance calculating machines give us is to let us pay attention to how we’re interpreting them), I was keeping track of the number of times a player reached the end of their sourcetext before getting triggered to change to a new section.
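One way the silence idea might look, sketched in Python; the silence probability is an assumed free parameter, not something I have settled on:

```python
import random

def realize(word, triggers, p_silence, rng=random):
    """Resolve a 'word' to itself or to a silence (None).
    Trigger words must always sound, since the other players listen for them;
    only non-trigger words may drop out, thinning the texture as p_silence rises."""
    if word in triggers:
        return word
    return word if rng.random() >= p_silence else None
```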
Since I am to perform this piece (possibly as the thesis presentation itself!) in May, I have decided to stop there with the analysis / synthesis of systems and get on with replacing those ASCII letters with meaningful bits of stuff. However, with a more rigorously organized setup, I could see myself crunching a lot of numbers to come up with something better. With all that data, I would probably be able to come up with alternative realizations / infra-notations of all kinds. One particularly interesting realization would be to produce a statistically analogous outcome using a classical Markov chain based on the transition tables I’m getting from the analyses. That would mean very different things, since in the presentation of the piece I’m also trying to frame the performers’ dilemma for the audience. Because the outcomes could potentially be fairly similar, it raises some interesting points (at least to me) about the typical parameters that conventionally comprise an alternative realization.
P.S. I will be sonifying this system many times over; that’s why I decided to go with SC instead of Python.