Freeze Frame: A Snapshot of Music Making on the Internet
Once, every sound had a distinct source. A door slammed shut, a horn was blown, a guitar string was strummed. Audio came from a discrete event; it was tied to a discernible action.
Networked music challenges this notion by displacing sound from its origin, moving audio freely from one location to another, giving it a presence in and of itself. John Cage brought this quality into modern music with his 1939 piece, Imaginary Landscape No. 1. A performance that combined turntables and radio broadcasts, this work introduced networked interactivity into music making. Cage mixed into his performance various transmissions that came over the airwaves, and with them created an entirely new composition. Sound separated from its source in this manner becomes a “free floating signifier,” to borrow a phrase from Roland Barthes. The musical elements are liberated from a specific time and place, allowing them to be recontextualized in the final composition.
Robert Rauschenberg pursued something similar in the mid-1960s with his interactive, sound-emitting sculpture, Oracle. Rauschenberg’s collaborator on the project, Bell Labs engineer Billy Kluver, described Oracle as “a sound environment made up of five AM radios, where the sounds from each radio emanates from one of the five sculptures. The viewer can play the sculpture as an orchestra from the controls on one of the pieces, by varying the volume and the rate of scanning through the frequency band. But they cannot stop the scanning at any given station. The impression was that of walking down the Lower East Side on a summer evening and hearing the radios from open windows of the apartment buildings.”
By the early 1970s, as the technology became more accessible, more artists began to explore the potential of networked media—both audio and video—to create unique forms of interactive expression. These artworks grew from the notion that meaning would emerge from media as it circulates freely within a network—and that meaning can be enhanced through strategic interventions by the artist or audience. Douglas Davis’s 1971 performance, Electronic Hokkadim, produced at the Corcoran Gallery, was based on the interactions between telephone callers and broadcast television. Nam June Paik pursued what he referred to as a “cybernated art,” based on the transmission of information through video and audio networks. Paik’s 1973 television broadcast, Global Groove, stands as a landmark event in this trajectory. Fragments of performances by artists of various traditions—Western and Eastern, popular and elitist, traditional and modern—were strung together in a frenetic, continuous flow across the screen. Paik himself “performed” the broadcast as a live mix, choosing his streams as a DJ does, manipulating images through a video synthesizer, using rhythm as the underlying principle of composition.
Enabling and manipulating the continuous flow of information was a principal concern behind the design of the networked personal computer. But before the mid-1980s, bandwidth constraints and limited processing power made the use of these tools prohibitively expensive for artists. However, it was long apparent to the pioneers of networked media—such as Davis, Paik, and Roy Ascott—that their artistic explorations with satellites and local wired networks would lead to computer-based work, once the technology had caught up to their vision.
Among the first musicians to dedicate themselves to the potential of networked computing was The Hub, perhaps the world’s first “computer network band,” which was founded at Mills College in 1985. One of the members describes their method as follows: “Six individual composer/performers connect separate computer-controlled music synthesizers into a network. Individual composers design pieces for the network, in most cases just specifying the nature of the data which is to be exchanged between players in the piece, but leaving implementation details to the individual players, and leaving the actual sequence of music to the emergent behavior of the network. Each player writes a computer program which make musical decisions in keeping with the character of the piece, in response to messages from the other computers in the network and control actions of the player himself. The result is a kind of enhanced improvisation, wherein players and computers share the responsibility for the music’s evolution, with no one able to determine the exact outcome, but everyone having influence in setting the direction. The Javanese think of their gamelan orchestras as being one musical instrument with many parts; this is probably also a good way to think of The Hub ensemble, with all its many computers and synthesizers interconnected to form one complex musical instrument. In essence, each piece is a reconfiguration of this network into a new instrument.”
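The decision rule described above—each player's program responding to the pitches arriving over the network while keeping its own character—can be illustrated with a toy sketch. This is an assumption of how such a program might behave, not The Hub's actual software; the function name and the simple "drift toward the group" rule are invented for illustration.

```python
# Toy sketch of one network player's decision rule (not The Hub's code):
# pick the next note by drifting toward the average pitch the other
# machines in the network have just sent.

def next_note(received_pitches, my_last_pitch, max_step=4):
    """Move at most `max_step` semitones toward the network's average pitch.

    `received_pitches` are MIDI note numbers from the other players;
    with no incoming messages, the player simply holds its last note.
    """
    if not received_pitches:
        return my_last_pitch  # nothing heard yet: hold the note
    target = round(sum(received_pitches) / len(received_pitches))
    # Step toward the group, but only within the allowed interval.
    step = max(-max_step, min(max_step, target - my_last_pitch))
    return my_last_pitch + step
```

Because every player runs some such rule against everyone else's output, no single program determines the result—the music emerges from the exchange, as the ensemble describes.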
Implicit in this approach is the idea that, within the network, a kind of intelligence is in circulation. David Wessel, at the University of California at Berkeley, has been working with his colleagues along these lines since the late 1980s, bringing together the fields of computer music and neural networks. Could an instrument become intelligent, and adapt in an automated manner to a musician’s playing style? Using the Max programming environment, Wessel began to experiment with musicians in a network context. “We have obtained reliable recognition of complex guitar strumming gestures and limited numbers of spatial gestures,” he wrote. “With such procedures and much more research, we might conceivably move towards adaptive, personalizable instruments…one will have to decide when to standardize or fix the instrument and let the musician learn the appropriate gesture and when to let the instrument adapt to the specialized approach of a player. How to rig the training harnesses on ourselves as players and on our instruments as expressively responsive musical tools will be a question of scientific, aesthetic, and social concern.” Once meaningful information is circulating within a computer network, the opportunity emerges for a relevant interaction. As Wessel suggests, networked computer tools will lead musicians to make choices about their performance that never previously had to be made, such as: how “smart” do I want my instrument to be?
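The two halves of Wessel's program—recognizing a gesture and letting the instrument adapt to the player—can be sketched in miniature. The following is my own minimal illustration, not Wessel's system: a nearest-prototype classifier whose stored prototypes drift toward the player over time, so the instrument "learns" that player's strum. All names and the feature representation are assumptions.

```python
# Minimal sketch of gesture recognition for an adaptive instrument
# (illustrative assumption, not Wessel's actual system).

def classify_and_adapt(gesture, prototypes, rate=0.1):
    """Return the label of the nearest stored prototype, then nudge
    that prototype toward the observed gesture.

    `gesture` and each prototype are equal-length feature vectors
    (e.g. onset times and amplitudes of a strum); `prototypes` maps
    label -> list of floats and is updated in place.
    """
    def dist(a, b):
        # Squared Euclidean distance between two feature vectors.
        return sum((x - y) ** 2 for x, y in zip(a, b))

    label = min(prototypes, key=lambda k: dist(prototypes[k], gesture))
    # Adaptation step: move the winning prototype toward the player.
    prototypes[label] = [p + rate * (g - p)
                         for p, g in zip(prototypes[label], gesture)]
    return label
```

The `rate` parameter is exactly the trade-off Wessel names: at zero the instrument is fixed and the player must learn it; raised higher, the instrument adapts to the player's specialized approach.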
The notion that music can emerge from an intelligent, interactive environment has led some contemporary composers to pursue forms of music making that would be inconceivable without telecommunications technology. One example is Atau Tanaka’s 1998 installation, Global String. The work consists of a physical string, 15 meters long, that stretches diagonally from the floor to the ceiling of a room. At the ceiling, the string is connected to the Internet.
“It is a musical instrument wherein the network is the resonating body of the instrument through the use of a real-time sound-synthesis server,” writes Tanaka. “The concept is to create a musical string (like the string of a guitar or violin) that spans the world. Its resonance circles the globe, allowing musical communication and collaboration among the people at each connected site.”
Ping, a site-specific sound installation by Chris Chafe and Greg Niemeyer, takes a similar approach. Ping has been described as “a sonic adaptation of a network tool commonly used for timing data transmission over the Internet. As installed in the outdoor atrium of SFMOMA,” for the millennial exhibition 010101, “Ping functions as a sonar-like detector whose echoes sound out the paths traversed by data flowing on the Internet. At any given moment, several sites are concurrently active, and the tones that are heard in Ping make audible the time lag that occurs while moving information from one site to another between networked computers.” In effect, Ping makes music out of the data flow of the Net—the constant motion of digitized fragments in real time is given an aesthetic form.
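The core gesture of Ping—making the time lag between networked computers audible—amounts to a mapping from round-trip time to tone. The sketch below is a hypothetical illustration of that idea, not the artists' code; the function name, the frequency band, and the linear mapping are all my assumptions.

```python
# Illustrative sketch of Ping's central idea (not Chafe and Niemeyer's
# code): map a network round-trip time to an audible tone, so that the
# lag between sites becomes something heard.

def latency_to_frequency(rtt_ms, low_hz=110.0, high_hz=880.0, max_ms=500.0):
    """Map a round-trip time in milliseconds to a tone frequency in Hz.

    Short lags (nearby or fast sites) yield high tones, long lags low
    tones, clamped to the band [low_hz, high_hz].
    """
    # Clamp the measured lag to the expected range.
    rtt = max(0.0, min(rtt_ms, max_ms))
    # Linear interpolation: 0 ms -> high_hz, max_ms -> low_hz.
    return high_hz - (rtt / max_ms) * (high_hz - low_hz)

# Simulated echo times from several concurrently active sites (ms).
sample_rtts = [12.0, 80.0, 250.0, 500.0]
tones = [latency_to_frequency(r) for r in sample_rtts]
```

With several sites probed at once, as in the SFMOMA installation, the result is a chord whose voicing is the shape of the network at that moment.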
The composer and theorist Randall Packer has explored this line of telematic composition in a number of pioneering collaborative installations. For Mori, an “Internet based earthwork” first mounted in 1999 by Packer with Ken Goldberg, Wojciech Matusik, and Gregory Kuhn, the trembling movements of California’s Hayward Fault are picked up by a seismograph, converted into digital signals, and sent over the Internet to the installation. This data stream triggers a series of low-frequency sounds that vibrate through the installation, viscerally connecting the visitor to the moment-by-moment fluctuations of the earth’s actual movement.
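The pipeline Mori describes—seismograph readings arriving as a data stream and driving low-frequency sound—can be sketched as a simple mapping. This is an assumed illustration, not the installation's software; the full-scale value, the frequency range, and the function name are invented for the example.

```python
# Illustrative sketch of Mori's data-to-sound step (an assumption, not
# the installation's code): scale a raw seismograph reading into
# frequency and gain values for a sub-bass drone.

def seismic_to_drone(reading, full_scale=1000.0, base_hz=20.0, depth_hz=15.0):
    """Map one seismograph sample to (frequency_hz, gain).

    Stronger ground motion pushes the drone lower and louder; readings
    are normalized against an assumed full-scale instrument value.
    """
    level = min(abs(reading) / full_scale, 1.0)  # normalize to 0..1
    frequency = base_hz - depth_hz * level       # 20 Hz down toward 5 Hz
    gain = level                                 # louder with larger motion
    return frequency, gain
```

Run continuously over the incoming stream, a mapping like this ties the sound in the room, moment by moment, to the fault's actual movement.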
In what he has referred to as “artistic research projects,” Packer has further explored the possibilities of interactive, telematic musical works. One such installation, Telemusic, was staged by Packer and his collaborators Steve Bradley and John P. Young at the Sonic Circuits VIII International Festival of Electronic Music and Arts in St. Paul, Minnesota, in November 2000. Telemusic brought together live performers, the audio processing of their performances, and real-time participation by the public through a Web site. As the performers read from a script, their delivery was altered by audio processing triggered by the mouse clicks of visitors to the Web site. The final mix in the room was then streamed back to the Web site, so a visitor could hear the final musical composition that she had contributed to by clicking a mouse. In order to create this direct form of interactivity, Packer’s team had to develop an interface between impulses captured over the Internet and a server hosting Max software. This circular experience, in which the listener is also a participant in the making of a musical work, is indicative of the direction the Internet suggests music should go—as the distinction between “artist” and “audience” begins to slip away, and we find ourselves dipping into the data flow, listening to the music that it makes, and that we make with it.
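The bridge Telemusic required—turning click events arriving from a Web site into control messages for an audio-processing environment—can be suggested with a small sketch. Everything here is hypothetical: the event fields, the parameter names, and the mapping are my assumptions, not the interface Packer's team actually built for Max.

```python
# Hedged sketch of a Web-click-to-audio-control bridge, in the spirit
# of Telemusic (names and message format invented for illustration).

def click_to_control(click):
    """Translate one Web click event into (parameter, value) messages.

    `click` is a dict with the page coordinates of the mouse click;
    here x is mapped to an effect-mix amount and y to a filter cutoff,
    the kind of parameters a processing server might expose.
    """
    mix = click["x"] / click["page_width"]  # 0..1 wet/dry mix
    cutoff = 200.0 + 5000.0 * (click["y"] / click["page_height"])
    return [("reverb_mix", round(mix, 3)),
            ("filter_cutoff_hz", round(cutoff, 1))]
```

Each visitor's click thus becomes a small intervention in the live mix, which is then streamed back to the very page the click came from.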
From Freeze Frame: A Snapshot of Music Making on the Internet
By Ken Jordan
© 2002 NewMusicBox