Nick Didkovsky demonstrates Hell Cafe
Photo courtesy of the composer
Hell Cafe is constructed in a hierarchy. On the bottom level of the hierarchy are sound samples, pre-recorded using a drum machine and live musicians: Tyrone Henderson (voice), Anne LaBerge (flute), and Didkovsky himself (electric guitar). These pre-recorded samples were stored as .wav or .aiff files on Didkovsky’s hard drive. During the live performance of The Technophobe and the Madman, a “line in” to the computer was attached to Henderson’s mike, allowing the sound of his voice to be “cut up and rhythmicized” as if it were a sound sample.
Each sound sample in Hell Cafe is contained in a Hell Instrument. A Hell Instrument is responsible for converting numeric data (passed down to it by Hellable Music Shape, the next level up) into sound. The Hell Instrument triggers a JSyn circuit, which produces the sound. JSyn can load any .wav or .aiff sound file and play it back.
Each Hell Instrument is contained in a Hellable Music Shape. You could imagine a Music Shape as an improvising musician’s brain, holding the musical ideas and deciding when to pass the impulses on to the muscles that will blow the saxophone or play the piano keys. The generic pattern in JMSL works this way: a Music Shape sends its data on to an Instrument, the Instrument reports back a time and then passes the information on to an Interpreter like JSyn or MIDI. An Instrument can also interpret the data in some creative way on its own — as a series of colors or shapes, for example.
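The pattern described above can be sketched in a few lines of Java. This is a conceptual illustration only: the interface and class names mirror JMSL's ideas but are hypothetical, not the actual JMSL API.

```java
// Hypothetical sketch of the MusicShape -> Instrument pattern described above.
// Names are illustrative, not the actual JMSL API.
interface Instrument {
    // Interpret one row of numeric data starting at startTime,
    // and report back the time at which the event ends.
    double play(double startTime, double[] data);
}

class MusicShape {
    private final double[][] data;       // one row of numbers per event
    private final Instrument instrument; // the "muscles" that realize the data

    MusicShape(double[][] data, Instrument instrument) {
        this.data = data;
        this.instrument = instrument;
    }

    // The "brain": hand each row to the Instrument in turn,
    // advancing by whatever end time the Instrument reports back.
    double launch(double startTime) {
        double t = startTime;
        for (double[] event : data) {
            t = instrument.play(t, event);
        }
        return t; // when the whole shape has finished
    }
}
```

Because `Instrument` has a single method, any interpreter (a JSyn-style synthesizer, a MIDI sender, or a graphics renderer drawing colors and shapes) can be plugged in interchangeably, which is the point of the pattern.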
The data in each Hellable Music Shape is stored in a series of “virtual boxes” called an array. The array contains six of these boxes: one for duration, or how long the sound “event” (whether sound or silence) lasts; one for “rateFactor,” which affects the playback speed, and consequently the pitch level, of the sample; one for amplitude; one for stereo panning position; one for “startFrameFactor,” which determines where in the sample playback begins; and one for “lengthInSeconds,” which determines how much of the sample is played. There are a variety of ways in which algorithms can be used to generate this data.
Backing up for a minute: when the data in this array is passed to Hell Instrument, Hell Instrument converts the values into numbers that JSyn can understand. For instance, if startFrameFactor is "0.5" and there are 10,000 frames in the sample, then Hell Instrument translates that value into "5,000," and JSyn starts playing the sample at the 5,000th frame.
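That translation step is simple arithmetic, and can be sketched as below. The class and method names here are hypothetical illustrations, not the actual JMSL or JSyn API.

```java
// Hypothetical sketch of how a Hell Instrument might translate normalized
// array values into the frame counts JSyn needs. Names are illustrative,
// not the actual JMSL/JSyn API.
class HellInstrumentMath {
    // A startFrameFactor of 0.5 in a 10,000-frame sample means
    // playback begins at frame 5,000.
    static int startFrame(double startFrameFactor, int totalFrames) {
        return (int) (startFrameFactor * totalFrames);
    }

    // lengthInSeconds combined with the sample rate determines
    // how many frames of the sample get played.
    static int framesToPlay(double lengthInSeconds, double sampleRate) {
        return (int) (lengthInSeconds * sampleRate);
    }
}
```

For example, `startFrame(0.5, 10000)` yields `5000`, matching the case in the text.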
So, to review, at the lowest level of the Hell Cafe “hierarchy” is a JSyn circuit that plays back a sample. Above that is a Hell Instrument that tells JSyn the specifics of what to play. Above that is a Hellable Music Shape that contains the numeric data and schedules it.
Up to ten Hellable Music Shapes, each containing its own Hell Instrument and its own sample, are contained in Hell Collection. Hell Collection is what is called a “parallel collection.” Didkovsky compares a parallel collection to a phrase in a multi-instrument composition. Take, as an example, a phrase in which a violin, a viola, and a cello are all playing simultaneously; the phrase itself would be the “parent” parallel collection, and the instruments its “children.”
Hell Collection is a sort of “master scheduler,” to which the children report back with the timings of their sonic events. Hell Instrument reports the time of its events back to Hellable Music Shape; Hellable Music Shape reports the time of its events back to Hell Collection. It’s as if Hell Collection were the leader of a jazz band, waiting for all of his or her musicians to finish their respective phrases, and then deciding whether to repeat a chorus (which could, of course, contain huge differences the next time around) or to stop.
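The band-leader behavior can be sketched as follows: launch every child from the same start time, wait for the slowest one to report back, and only then decide whether to go around again. This is a conceptual sketch; the names are hypothetical, not the actual JMSL API, and real JMSL runs children concurrently rather than in a simple loop.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the "parallel collection" idea described above.
// Names are illustrative, not the actual JMSL API.
class ParallelCollection {
    interface Child {
        // Launch at startTime; report back the time this child's phrase ends.
        double launch(double startTime);
    }

    private final List<Child> children = new ArrayList<>();

    void add(Child c)    { children.add(c); }
    void remove(Child c) { children.remove(c); }

    // Run one repeat: launch all children from the same start time and
    // return the end time of the longest phrase -- the moment the band
    // leader can decide to repeat the chorus or stop.
    double repeatOnce(double startTime) {
        double latest = startTime;
        for (Child c : children) {
            latest = Math.max(latest, c.launch(startTime));
        }
        return latest;
    }
}
```

The parent never interrupts a child mid-phrase; it simply waits for all the reported end times, which is why changes to the collection only take effect on the next repeat.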
The Hellable Music Shapes belong to what is known as a “subclass” of Music Shape. A subclass is a modified version of the generic class (Music Shape). In this case, Hellable Music Shape can do a few additional things that Music Shape can’t, like build and re-build itself using information that is passed on to it by the composer/performer.
The composer/performer passes that information on to the Hellable Music Shapes through a GUI (graphical user interface) panel. When Didkovsky performed The Technophobe and the Madman, he had ten such GUI panels on the screen in front of him. Each panel displays “upper-level” information about a given Hellable Music Shape, such as whether or not it is active, the total number of pulses in the repeating pattern, and other data that it uses to “build itself.” With the exception of the mute checkbox, this information can be entered in text boxes by the composer/performer.
Once the composer/performer deselects the mute checkbox and clicks the “build” button, the GUI sends the data he or she has entered to the associated Hellable Music Shape, and this Music Shape gets added to Hell Collection if it is not already there. In addition, that Hellable Music Shape’s “build” flag gets set to true. Because of this flag, the next time Hell Collection repeats and the Hellable Music Shape launches, it will construct or re-construct itself based on the values sent to it by the GUI. When the mute box is checked again, that particular Hellable Music Shape is removed from Hell Collection.
The addition or removal of a Hellable Music Shape is not reflected until the next repeat of Hell Collection. Hellable Music Shapes added during a cycle don’t get heard until the next repeat (after all the others have finished), and Hellable Music Shapes removed during a cycle finish their jobs and simply do not get launched the next time through.
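The build-flag mechanism amounts to staging new values and deferring the rebuild until the shape next launches. A minimal sketch of that idea, with hypothetical names rather than the actual JMSL API:

```java
// Hypothetical sketch of the deferred "build" behavior described above:
// GUI values are staged immediately, but the shape only re-constructs
// itself the next time the parent collection launches it.
// Names are illustrative, not the actual JMSL API.
class HellableShape {
    private int numPulses;          // current length of the repeating pattern
    private int pendingPulses;      // value staged from the GUI text boxes
    private boolean buildRequested; // set when the "build" button is clicked

    // Called from the GUI: stage the new values and raise the flag.
    void requestBuild(int pulses) {
        pendingPulses = pulses;
        buildRequested = true;
    }

    // Called by the parent collection at the start of each repeat.
    void launch() {
        if (buildRequested) {
            numPulses = pendingPulses; // re-construct from the staged values
            buildRequested = false;
        }
        // ... schedule numPulses events here ...
    }

    int getNumPulses() { return numPulses; }
}
```

Staging the values this way keeps the GUI responsive without ever disturbing a pattern that is mid-cycle, which matches the behavior the text describes.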
For the Technophobe performance of Hell Cafe, Didkovsky saved 16 “mixes” – preset configurations of the GUI panels described above. That way, rather than getting wrapped up in data entry during performance, he was able to focus on accompanying the singer sensitively, switching from one mix to another using the mute checkboxes.