Musicians Working with Video: A Primer
You can easily find great music with a great image added, or a great film with great music added, but there are fewer examples of perfect blends between the two media in which something new is created. The groundbreaking multi-media composer Jaroslaw Kapuscinski likens this “new” experience to being able to perceive the third dimension (i.e., depth) when looking at a two-dimensional image. There are famous examples in popular culture of these audio-visual blends, such as the last space-travel scene in Kubrick’s 2001: A Space Odyssey and the return of the theme song, with lyrics, during the last image of Gilliam’s Brazil. But, for the most part, the perfect synthesis of moving image and sound exists in the work of less commercial artists such as Oskar Fischinger, Norman McLaren, Bill Viola, and the Quay Brothers.
I began working in this area almost ten years ago and recently premiered Cosmicomics, a work for narrator, chamber ensemble, multiple video, and electronics, created with Peter Nigrini (projection designer). This piece provided a chance to further develop ideas and techniques of audio-visual integration, including counterbalancing visual dominance by emphasizing the music, creating a counterpoint between music/video/story, and orchestrating main ideas across each of the three media.
Similar to the Florentine Camerata’s creation of the opera from music and theatre, the future for multi-media art—creating a fluid interchange between music, image, story, performers, and ever-evolving technology—opens up all sorts of new territory. What follows are a few notes from my own explorations into these new possibilities, drawn both from my own work and from experiencing performances of my colleagues’ ensemble/video compositions.
I. Types of moving images.
Since music performance and moving image create vastly different experiences, it is important to generalize what they might have in common and match the right type of image with the right type of music.
I deal with three different types of visual images:
- Realistic footage, such as a man walking down the street.
- Abstract footage, where realistic footage is processed or filmed in such a way as to not clearly represent anything in the real world. This footage moves in a life-like way, but has no identifiable subject, and most closely resembles chamber music.
- Computer generated (or non-filmed) footage. This most closely resembles non-performed, non-acoustic (electronic) sounds.
There are also musical categories, including realistic sounds (jackhammers, birds), abstract sounds (chamber music) and generated sounds (not performed, can be real or artificial).
Each of these categories has different degrees of identity. Joining realistic footage with abstract sounds means the video has a stronger identity, and hence, might dominate the music. Furthermore, since different members of the audience interpret abstract music differently, the realistic footage might limit someone’s experience of a multi-faceted score. Likewise, realistic sounds can bring a stronger identity to abstract images (imagine different sized dots appearing on the screen, each with a different animal sound), limiting the multi-faceted nature of the abstract image while heightening the created “realism.”
II. Creating the “meta-instrument”
Conceiving an audio-visual piece should not be like Kierkegaard’s leap of faith, jumping into the unknown with the belief that it will work. There need to be strong reasons why audio and visuals are synchronized. Many hours of experimentation are usually necessary, of course, to arrive at the full concept.
Equally important is conceiving the technological setup behind multi-media works.
I liken this to creating a meta-instrument of video, instruments, playback setup, stage setup, lighting design, etc., that you use to realize the concept of your piece. You have an idea, and in much the same way that you choose an orchestration to realize a musical idea, you have to carefully choose video equipment, controllers, and stage setup to build the perfectly matched meta-instrument. This requires time to get to know this new instrument, how it behaves, breathes, gets excited, looks bad, looks good, etc. It is similar to a solo percussionist having to lay out and learn a new instrumental setup for every piece.
Different technology sensitizes the audience to different types of perception.
Picking the wrong technology can make the work unfocused or unfulfilling just as easily as can the wrong materials or execution. Imagine writing a soft, fast piece for large orchestra, or a passionate requiem for solo piccolo. A spontaneous multi-media piece using click-track or a rhythmically precise multi-media piece where the real-time video is lagging will feel similarly disjunctive.
There are millions of unique ways to technically present visuals with live music.
For this article I’ll limit the discussion to an ensemble or soloist playing a notated score alongside a moving image presentation. But it’s worth mentioning that working with improvising musicians is a great way to elegantly create a meaningful dialogue between the disparate media.
There are theatrical, stage, and lighting elements to consider.
This might or might not include ascribing characters to the musicians and deciding how the musicians will ‘dialogue’ with the technology. Where onstage are the musicians? Are they visible? Are the musicians functioning like a pit orchestra, or are they characters in the piece? Where is the screen? How many screens? How is the stage lit? How are the music stands lit? Is the video rear or front projected?
In Cosmicomics, our meta-instrument allowed two video streams to be cued live, so the conductor was free to perform the entire piece without ever looking up at the screen while conducting. This kept the emphasis on the musical performance, and the musical sensitivities that both musicians and audience members expect. It challenged us to create video footage with flexible durations, and to conceive the music/video experience with a broader sense of synchronization. Short video clips were easily coordinated with this setup. However, the last section of footage, which runs a continuous six minutes, was too long to just hope the music (full of fermatas and slow tempos) would end exactly in the same place every time. Therefore it had to be carefully cut, extended, and overlapped so as to provide flexibility of duration without showing these edit points.
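As a rough sketch (my own abstraction, not the actual Cosmicomics edit), the flexible-duration idea can be modeled as a clip with a fixed head and tail plus a loopable middle section that repeats until the music arrives:

```python
import math

# A rough sketch of flexible-duration footage: a clip with a fixed head and
# tail, and a loopable middle that is repeated (at an invisible edit point)
# to cover however long the musicians actually take.

def plan_loops(head, loop, tail, target):
    """Return how many times to repeat the loopable middle so that
    head + loops + tail covers at least `target` seconds.
    All arguments are durations in seconds."""
    fixed = head + tail
    if target <= fixed:
        return 0  # the fixed material alone is already long enough
    return math.ceil((target - fixed) / loop)
```

For a 10-second head and tail around an 8-second loop, covering a 60-second performance needs `plan_loops(10, 8, 10, 60)`, i.e. five repetitions; a real edit would also crossfade the loop seams so the repeats stay hidden.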
III. How to synchronize
Here are some technical options (with my impressions of each). The first synchronizes the music to the video; the other two synchronize the video to the music.
The click-track is a stable way of syncing studio musicians to pre-edited film or video footage. For live music, however, it removes a crucial level of musical thrust that any great conductor/musician instinctively gives when in front of an audience. It also makes the music sound less relaxed, leaving it subservient to the visuals.
With so many other options available for synchronizing musicians with video, why sacrifice the musical performance? There is also the possible technical glitch of the audience hearing the headphone click, not to mention musicians who really don’t look good or happy wired up. The click-track works great for rhythmic music that synchronizes every beat to the video, a style that was popular in the ’80s and early ’90s. Nowadays we are exposed to many more subtle video techniques, which make the click-track seem out-of-date and inelegant.
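The rigidity of the click-track comes from the fact that every beat is computed in advance from a tempo map, so each beat lands on a fixed video timecode no matter what happens in the room. A minimal sketch of that computation (mine, not taken from any particular film-scoring tool):

```python
# A minimal sketch of how a click-track locks music to fixed video time:
# given a tempo map, compute the absolute timestamp of every beat, so each
# beat corresponds to a predictable video timecode.

def click_times(tempo_map, total_beats):
    """tempo_map: list of (start_beat, bpm) pairs, sorted by start_beat,
    beginning with beat 0. Returns timestamps (in seconds) for beats
    0 .. total_beats - 1."""
    times = []
    t = 0.0
    for beat in range(total_beats):
        # find the tempo in effect at this beat
        bpm = next(b for s, b in reversed(tempo_map) if s <= beat)
        times.append(t)
        t += 60.0 / bpm  # duration of one beat at the current tempo
    return times
```

For example, `click_times([(0, 120), (8, 60)], 12)` places the first eight beats half a second apart and the rest a full second apart; the performers must then match these precomputed times exactly, which is precisely what flattens the live performance.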
Live video can be a great option if professionally realized. The biggest problem is control: How much control of the image do you need to achieve your concept? The more control you need, the fewer variables the resultant image can provide. For example, if you want to avoid a shaky moving image, you require a fixed camera, which limits you to one subject for the length of the piece. I’ve used live video with a fixed camera on the keys of a piano, alongside two pre-recorded videos of the same image (playing the same piece, “Ruby My Dear” by Thelonious Monk). Both the live performer and the audience’s understanding of the piece were dependent on and reacting to the technology, making the technology in this piece truly interactive.
Real-time interactive software
With laptop CPU power and hard drive space close to the level needed for stable playback of uncompressed, full-size video, it is only a matter of time before the majority of video playback in theatre, dance, and major shows switches to real-time interactive video software. The best current option is Max/Jitter. There are other options developing, such as Miller Puckette’s Pd, as well as hardware-based systems like Watchout.
More importantly, once you choose a real-time environment as part of your meta-instrument, you need someone to make all of those cues in real-time. One inventive solution is an interactive piece in which the musician controls both the music and the video. David Wessel at CNMAT (UC Berkeley) developed a very elegant score-following ‘meta-instrument’ for orchestra with real-time technology, in which a musician in the orchestra (following the conductor) plays a part for MIDI keyboard whose only function is to trigger the cues for the video software to realize.
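The cue-triggering idea can be sketched as a simple mapping from (silent) keyboard notes to video cues. The note numbers and cue names below are invented for illustration, and `dispatch` stands in for whatever bridge (OSC, MIDI, etc.) actually sends cues to the video software in a real setup:

```python
# A hypothetical sketch of cue triggering: a keyboard part whose notes are
# never heard, but are mapped to video cues. Note numbers and cue names
# are invented for illustration.

CUE_MAP = {
    60: "start_clip_A",    # middle C launches the first clip
    62: "crossfade_to_B",  # D triggers a crossfade
    64: "fade_to_black",   # E ends the section
}

def handle_note_on(note, velocity, dispatch):
    """Translate a note-on message into a video cue.
    `dispatch` is whatever sends the cue onward to the video software."""
    if velocity == 0:
        return None  # note-on with velocity 0 means note-off; ignore it
    cue = CUE_MAP.get(note)
    if cue is not None:
        dispatch(cue)
    return cue
```

Because the cue part is written into the score, the keyboardist simply follows the conductor like any other player, and the video stays in sync with the live tempo rather than the other way around.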
IV. How to rehearse
No matter how well you conceive, prepare, and internally visualize/hear your work, if it is experimental or new to your way of working, it is all but impossible to completely predict how everything will look and sound. Fading to grey instead of black or keeping a low bass note throughout a video passage might make the difference between a good piece and a perfect piece. So, a completely practical and necessary part of the creative process is adequate time to stage your piece in advance of the rehearsals and performance.
This is where the world of academia provides a huge advantage over professional performances—the opportunity to test, retest, and refine your work before presenting it publicly. Professionally, the ideal situation is to have a full run-through with musicians, edited video, and staging (including lighting) a month or two in advance of the performance to fine-tune the qualities most difficult to predict at the composing table. If a full run-through is not possible, leave as much flexibility as you can realistically keep track of in the score and video. If the musicians are willing to change, you can test and immediately decide on different possibilities during the rehearsal process.
Introducing video into a rehearsal is very tricky, as it changes the dynamic of the rehearsal and can be distracting. I like the IRCAM model for rehearsing with technology: the conductor should be able to run a rehearsal with the technology the same way he/she does without it. This requires having many start points in the video that sync up with rehearsal letters in the score. More importantly, as any player in the ensemble does, the video engineer needs to rehearse playing back the video to perfection before bringing it into the rehearsal. These requirements should be discussed with the ensemble/musicians at the very beginning, since they usually require extra rehearsal(s) and rental of video equipment for rehearsals as well as the concert, which affects the budget.
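The start-point idea amounts to a simple lookup from rehearsal letter to video timecode, so the engineer can jump to “letter C” as quickly as the players can. The letters and timecodes below are invented for illustration:

```python
# A minimal sketch (assumed, not the actual IRCAM implementation) of
# rehearsal-letter start points: each letter in the score maps to a
# timecode in the video, so rehearsal can restart from any letter.

REHEARSAL_MARKS = {  # letter -> video timecode in seconds (invented values)
    "A": 0.0,
    "B": 45.5,
    "C": 112.0,
    "D": 208.25,
}

def seek_point(letter):
    """Return the video timecode for a rehearsal letter."""
    try:
        return REHEARSAL_MARKS[letter]
    except KeyError:
        raise ValueError(f"no video start point for rehearsal letter {letter!r}")
```

In practice the video engineer would pre-build one such start point per rehearsal letter while editing, then verify each one alone before the first ensemble rehearsal.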
V. Interested in making an interactive piece?
If you’re interested in beginning multi-media work, here’s some advice.
I’m presently in Dar es Salaam, Tanzania, where walking around and seeing the movement of life on streets without sidewalks is mesmerizing. It reminds me of the experiences I had living in Holland in the mid-nineties that first got me interested in capturing visual perspective on video. The Dutch were less suspicious of my camera than the Tanzanians will be: time will tell whether that turns out to be the inspiration for my next piece.
Richard Carrick writes chamber music, multi-media works, performs as a pianist, and directs Either/Or. He lives in Manhattan.
Cosmicomics was premiered by the Sequitur Ensemble on January 12, 2005, at Merkin Hall. Video stream and complete information at www.richardcarrick.com/cosmic.html.