Music / Wine

Vinfonies, Nessun Dorma, and Gastromorphology


Written By

Ben Houge

A bottle of wine and a wine glass on a table alongside an electronic keyboard, music notation paper, headphones, and a computer screen.

(Photo by Ben Houge)

At just about every technology-oriented conference I go to, I bump into some representative of the Music Technology Group of the Universitat Pompeu Fabra in Barcelona. Probably best known for developing the Reactable, the interactive music control system prominently used by Björk, the MTG is one of the most well-regarded centers for music technology in Europe. I believe it was at the Game Developers Conference in San Francisco back in 2012 that I first met the MTG’s Jordi Janer, who has been conducting fascinating research into voice analysis and audio source separation, work that he has made available commercially via his side company, Voctro Labs.

Jordi has another side project as one of the founders of Vinfonies, a series of events pairing wine and music that goes back to 2009. Jordi himself has done a number of sound installations related to various aspects of viticulture, developing pieces based on resonating cava bottles, an interactive grape press, and the sonic byproducts of fermentation. With my food opera events I’ve sought to bring the sounds of the farm into an urban restaurant, but Vinfonies goes in the other direction, using the festival as a way to bring wine-themed new media art to rural settings. The events happen annually at harvest time in Vilafranca del Penedès (part of the well-regarded Penedès winemaking region, one of Spain’s sixty-five or so regions that have earned the Denominación de Origen appellation), about 50 km from Barcelona. The festival includes concerts, sound installations, and wine tasting sessions. Similar to the work of Spanish sound artist Francisco López, participants experience the wine and music pairings blindfolded, eliminating visual distractions to focus on taste, smell, touch, and sound.

While I was living in Spain, I had a chance to reconnect with Jordi at Music Hack Day Barcelona, sponsored by the MTG as part of the huge Sónar electronic music festival. (Check out my brainwave sonification hack from 2014.) The MTG partnered with Berklee to bring computer music pioneer John Chowning to Spain for a series of events last summer (including his first visit to Valencia since he docked there in 1953, back when he was a drummer in the U.S. Navy), and in the planning conversations, Jordi and I realized we shared an interest in music and gastronomy. He invited me to write a piece for the 2015 edition of Vinfonies, which took place last September. Over the summer, he mailed me a bottle of Azul y Garanza 2012 Garciano (the name, which might perplex some oenophiles, derives from its composition: 50% Garnacha, 50% Graciano), and I set to work composing my musical accompaniment.

BTW, a playlist of previous Vinfonies compositions (including one by Edwin Van Der Heide, whose fascinating sound installation “Spectral Diffractions” took over Mies van der Rohe’s Barcelona Pavilion as part of Sónar in 2014) is available on SoundCloud.

Composing music to accompany a taste-based experience presents some unique challenges, particularly when it comes to synchronization. One of the epiphanies that sparked my food opera project was the recognition of a meal as a time-based art form. It’s easy to recognize several time scales to a meal, from the succession of courses (even simply saving dessert for last) to the entropy that occurs as a hot dish cools or a frozen dish melts to the succession of individual bites.

Recognizing these time scales is straightforward, but synchronizing music to them is a much trickier proposition. Typically, restaurants don’t even try, and in this they resemble far too many video games that simply loop the same piece of music over and over or shuffle a playlist: the only point of coordination between the music and the restaurant is that while you’re in this space, this is the music that’s playing. (See the paper I presented at Invisible Places Sounding Cities in Portugal for a more detailed discussion.) A few restaurants that explicitly stress the multisensory angle go further: Ultraviolet or the various Kitchen Theory dinners will present a track of music synchronized with each course, but for this to work, everyone has to be eating at the same time. This approach—when the dish is served, someone presses play on the CD player—works best when there are a greater number of relatively small dishes that don’t take too long to eat.

To coordinate music with individual bites is still trickier. One clever solution is the Tasteful Turntable, devised by Lars and Nikolaj Kynde to synchronize several small bites with key moments in a soundtrack presented to four diners at a time. Another, somewhat less practical approach is Naoya Koizumi’s Chewing Jockey, which uses a photoreflector sensor and a bone conduction speaker to alter the sound of chewing. But when the dining experience is opened up to multiple diners eating asynchronously (i.e., the way they usually eat), the logistics of synchronization become much harder to address. One solution is to give diners headphones, as Heston Blumenthal does in his famous Sound of the Sea dish, or in performance artist Marina Abramović’s collaboration with chef Kevin Lasko, Volcano Flambé. Headphones work as a personal experience, potentially a profound one, but they also cut people off from the environment and those around them. The other solution is to put a speaker at each seat, broadcasting sound unfettered into the restaurant space, and then (this is the hard part) to avoid cacophony by finding a way to incorporate the bleed of music from adjacent tables into the experience, so that all the sound in the restaurant is coordinated. This is what I’ve done in my food operas, and from my observation, this is the only solution that scales to an evening-length experience.

This solution requires all of the musical compositions written for a food opera to be modular, so that they can be put together in any combination and still sound harmonious. At the events I organized with chef Jason Bond in his restaurant Bondir, there were five courses, and diners chose between two possible dishes per course, so up to ten different pieces of music could be playing at the same time, each at a different point in its composition, from the twenty-six seats in the restaurant. I wrote custom software to generate a new version of each musical texture and play it on the correct speaker every time a diner was served the corresponding dish; because the music was generated on the fly, the software could also ensure that all of these individual streams of music were coordinated in harmony and rhythm. To make this coordination as apparent as possible, I limited myself to a diatonic scale (slowly transposing over the course of the evening, with all of the music in the restaurant programmed to conform to the current key) and a steady underlying reference pulse of 188 beats per minute.
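
To make that coordination concrete, here is a hypothetical Python sketch (the actual software was written in Max/MSP, and every name and number below except the 188 BPM pulse is invented for illustration). The essential move is that every per-seat stream quantizes its pitches to a shared scale and its onsets to a shared pulse, so the streams stay coordinated no matter when each dish is served:

```python
import random

# Hypothetical sketch, not the actual food opera software: each seat gets its
# own generative stream, but all streams snap their pitches to a shared
# diatonic collection and their onsets to a shared pulse.

DIATONIC = [0, 2, 4, 5, 7, 9, 11]   # scale degrees in semitones above the tonic
PULSE_BPM = 188                      # shared underlying pulse from the article
BEAT_SEC = 60.0 / PULSE_BPM

class SharedClock:
    """Global musical state: current tonic and a common beat grid."""
    def __init__(self, tonic=60):     # start on middle C (MIDI 60)
        self.tonic = tonic

    def transpose(self, semitones):
        """Slowly move the whole restaurant to a new key over the evening."""
        self.tonic += semitones

    def quantize_pitch(self, midi_note):
        """Snap an arbitrary pitch to the current diatonic collection."""
        rel = (midi_note - self.tonic) % 12
        nearest = min(DIATONIC, key=lambda d: abs(d - rel))
        return midi_note - rel + nearest

    def quantize_time(self, seconds):
        """Snap an onset time to the shared beat grid."""
        return round(seconds / BEAT_SEC) * BEAT_SEC

class DishStream:
    """One seat's musical texture, started whenever the dish is served."""
    def __init__(self, clock, register=(60, 84)):
        self.clock = clock
        self.register = register

    def next_event(self, now_seconds):
        pitch = self.clock.quantize_pitch(random.randint(*self.register))
        onset = self.clock.quantize_time(now_seconds + random.uniform(0.0, 2.0))
        return {"pitch": pitch, "onset": onset}

clock = SharedClock()
streams = [DishStream(clock) for _ in range(10)]   # up to ten dishes at once
print([s.next_event(now_seconds=12.3) for s in streams[:3]])
```

Because every stream consults the same clock object, a single transposition propagates to all seats at their next quantized event.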

However, with this Vinfonies commission as a stand-alone piece, I had no such constraints, so I made it an objective to take full advantage of that freedom. I also wanted to avoid some of the techniques I had used in the past that now struck me as a bit “too easy”: I eschewed steady drones, for example, along with the overt diatonicism and rhythmic grid of my previous work.

When I’m developing music to accompany some gustatory stimulus, my process isn’t so different from how I’d approach a video game or a choreography: I sample the experience of the thing I’m scoring (or at least a description or concept art or whatever state of completion the thing is in), evaluate my response, and try to capture what I hear in my mind’s ear. In this, the composer has a great advantage over the researcher; whereas academic studies might try to compare the appropriateness of different types of wine to existing music (as in a recent study contrasting Debussy with Rachmaninoff), a composer can home in on and express an ideal imagined sound without having to choose between existing examples. (This may lead to less quantifiable data, but in some ways it resembles another study, in which participants were asked to find the single note on a keyboard that best corresponded to a scent stimulus.)

An interesting point came up in Oxford last February, when I collaborated with researcher and wine expert Janice Wang on an event at Alistair Cooper’s 1855 Oxford Wine Bar. We presented three pairs of wines alongside three pairs of musical textures and asked participants to determine which musical texture best matched each wine. In the ensuing conversation, it became clear that some people chose music to evoke the wine, whereas others chose music to complement the wine. This may be frustrating to the researcher, but it’s exciting to the composer, as it shows how much room there is for creativity in devising music/food pairings. Music really can serve as another kind of seasoning, flexible in the same way that an ingredient like mint might be equally at home in a sweet or savory dish. We can recognize this phenomenon from the world of film scoring; it’s an exercise we even give to students at Berklee, to compose different soundtracks to the same film clip to give it a different emotional spin, to make it happy or sad, wistful, nostalgic, or ominous.

The Azul y Garanza 2012 Garciano provided a great opportunity to stretch out a bit. It’s a complex wine, and working at the intersection of food and music, I’m sometimes bemused by the fact that, while it’s common for people to decry complexity in music, complexity in wine is pretty universally considered a positive attribute. Jordi Janer shares an interesting observation on this point; he tells me that when pairing music and wine at the Vinfonies events, he’s found participants much more receptive to challenging, experimental soundtracks than they might ordinarily be. Unlike the whisky project I wrote about earlier, my objective here was not to translate aspects of the wine into sound so much as to create a sonic context for it, and in this case I envisioned a kind of mysterious, mystical setting.

I decided to stick mostly to an octatonic scale, allowing the music to float for long stretches without reinforcing a particular key, and I sought a harmonic language more dissonant than what I had used in the past. The spiky, disjunct marimba patterns evoke a bit of the spicy quality of the wine. The music has no pulse, evoking a kind of timelessness. I wanted a density to the music that reflected the wine’s complexity, and the delay lines help achieve that, providing a sense of depth; I imagine myself peering into the wine as though it were a dense forest. I tend to prefer delay lines to reverberation in these kinds of pieces, and I think there’s also something liquid in the rolling echoes. The lengths of my delays change periodically, which produces a bit of pitch shifting during each transition, and this subtle detuning evokes a kind of tartness. (I’m not sure if this kind of detuning has been subjected to a formal study, but I wonder if there isn’t a link to the expression “a sour note.”) Periodically, as a kind of refrain, a jutting theme outlines an augmented chord, stretching out over a tenth, and alters the mode, swapping a D sharp for a D natural, serving as a kind of punctuation. At these moments, a sustained bed based on cello sounds makes a rare excursion into bass clef territory; in some of my previous work, low cello phrases have been successfully linked to tannins in wine, so I included cello here for the same reason. In general, though, the music sits in a high register, reflecting the sweet and sour fruit components I notice in the wine. Musical ideas float along as objects of consideration, not developed but juxtaposed for the listener to parse and contemplate. Overall, the music is peaceful, emphasizing stability and resolution, reflecting this well-balanced wine.
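
For the curious, the detuning works like this: while a delay line’s length is changing, the read position momentarily moves slower or faster than real time, which is heard as a slight pitch shift. Below is a minimal NumPy sketch of the principle; the piece itself uses Max/MSP delay objects rather than Python, so treat this only as an illustration:

```python
import numpy as np

def variable_delay(signal, delay_samples):
    """Read a signal through a delay line whose length changes over time.
    While the delay is lengthening, the read position falls behind real time
    and the echo sounds slightly flat; while it shortens, slightly sharp."""
    out = np.zeros_like(signal)
    for n in range(len(signal)):
        pos = n - delay_samples[n]            # fractional read position
        if pos < 0:
            continue                          # nothing in the delay line yet
        i = int(pos)
        frac = pos - i
        if i + 1 < len(signal):
            out[n] = (1 - frac) * signal[i] + frac * signal[i + 1]  # linear interpolation
    return out

# Example: a 440 Hz tone echoed through a delay that sweeps from 200 ms to 250 ms.
sr = 44100
t = np.arange(2 * sr) / sr
tone = np.sin(2 * np.pi * 440 * t)
delay = np.linspace(0.200 * sr, 0.250 * sr, len(tone))   # delay length in samples
echo = variable_delay(tone, delay)                        # heard slightly below 440 Hz
```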

(If this description is a bit tedious, note that one of my original reasons for pairing music with food was the recognition of how poor words are for describing music and food, suspecting that they might do a better job of describing each other.)

The instrumentation comes from manipulated recordings of myself (whistling and playing piano strings with mallets) and my Berklee Valencia colleagues Victor Mendoza (marimba master) and Sergio Martínez (percussion savant), with Berklee alum Ro Rowan’s cello recurring periodically.

In the above description, you may readily detect the influence of a composer who has deeply shaped my thinking about real-time musical systems and video games—Olivier Messiaen. Much of his music exists in a continuous present, which is exactly what game music must do, as it waits for the next event to signal a transition. By making this piece almost a kind of homage, I sought to link my work to an ongoing musical tradition, something I think is important when venturing into new territory, especially in light of some of the cultural crosstalk I mentioned in my previous post.

My compositional process was idiosyncratic. First I determined the scale and the types of sonorities and simultaneities I wanted to use, and I sketched these out on staff paper, along with some thematic ideas. Then I took some of the instrumental recordings I had prepared to use as source material and turned them into playable instruments in Max/MSP. I wrote a program that would take any note I played on the keyboard and build one of my previously determined chords, in any approved inversion, on top of it; in this way I generated the “mallet piano” part, allowing me to manually control the range, rhythm, and dynamics of that part, while letting the computer generate the chords. Then I wrote a program that would look at the notes of the most recent two chords played by the mallet piano and choose from among them to generate a melody to play on the marimba; whenever I pushed a button, a new melody would be generated according to parameters I specified, and a different button allowed me to repeat a phrase with subtle rhythmic variations. The whistle part, except for the recurring theme (which was played manually), was also generated by looking at the most recent mallet piano chords and choosing a note from among them, although in this case I was determining the octave transposition, rhythm, volume, and duration manually by playing the keyboard. The rattle and cello parts were performed or input manually. To sequence everything into the requested time frame, I used Ableton Live, embedding my Max patches as instruments using Max for Live.
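
To make the process a bit more concrete, here is a hypothetical Python sketch of the chord-building and note-choosing behaviors described above. The real versions were Max/MSP patches, and the chord voicings listed here are placeholders rather than the sonorities I actually used:

```python
import random

# Hypothetical sketch of the behaviors described in the text; the interval
# sets below are placeholders, not the actual approved chords.

APPROVED_CHORDS = [
    [0, 3, 6, 10],    # placeholder sonority 1 (semitones above the played note)
    [0, 4, 8, 14],    # placeholder sonority 2
    [0, 3, 9, 13],    # placeholder sonority 3
]

def mallet_piano_chord(played_note):
    """Build one of the pre-approved chords on top of whatever note is played,
    so the performer controls register, rhythm, and dynamics while the program
    supplies the harmony."""
    intervals = random.choice(APPROVED_CHORDS)
    return [played_note + i for i in intervals]

def marimba_melody(last_two_chords, length=7):
    """Choose marimba pitches from the notes of the two most recent mallet
    piano chords, so the melody always agrees with the current harmony."""
    pool = sorted(set(last_two_chords[0]) | set(last_two_chords[1]))
    return [random.choice(pool) for _ in range(length)]

def whistle_note(last_two_chords, octave_shift=12):
    """Pick a single pitch from the recent chords; octave, rhythm, volume,
    and duration would still be controlled manually from the keyboard."""
    pool = last_two_chords[0] + last_two_chords[1]
    return random.choice(pool) + octave_shift

# Example: two keyboard presses, then a generated marimba phrase and whistle note.
chord_a = mallet_piano_chord(62)
chord_b = mallet_piano_chord(55)
print(marimba_melody([chord_a, chord_b]))
print(whistle_note([chord_a, chord_b]))
```

The point of the sketch is the division of labor: the performer supplies gesture (register, rhythm, dynamics) while the program supplies pitch material drawn from a constrained harmonic vocabulary.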

But wait: since what was requested for this Vinfonies commission was a linear recording of about three minutes’ duration, I could have composed and sequenced everything linearly. So why this weird process involving programmed behaviors?

It comes back to the idea of synchronization. In most cases, eating and drinking are activities that can continue for an indeterminate duration, so I’ve been very interested in composing music that can continue indefinitely. (This is the same problem we face in video games, and much of my creative effort since starting in the game industry in 1996 has been devoted to it.) It’s possible that someday I might want to use this music again and present it in a real-time, indeterminate-duration form. If I do, having these algorithmic processes already in place will make it easy to adapt; by reducing the input into the system (playing single notes or pushing buttons instead of playing full chords or phrases), I make it easier to replace my manual input with some automated mechanism. I also wanted to fit this piece into my growing body of “food opera” work, using instruments that I’ve built from acoustic samples in Max as my orchestra, organized around generative or procedural processes to accommodate the constraints of modular, real-time deployment discussed earlier. And I have to admit, composing algorithmic music in this way is simply an interesting challenge. So, since much of my work hovers around this idea, it just made sense to apply the techniques on my current workbench to this new commission. But more than any of these considerations, I feel that this approach to composing—using algorithms to develop textures of indeterminate duration—brings to the fore not just what things sound like, but how they work, and I fundamentally think there’s a real link between the way this music works and the way eating or drinking works.
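
The “reduced input” idea is easiest to see in code: if a generative behavior only needs a single trigger, then a button press and an automatic scheduler are interchangeable. Here is a hypothetical sketch (again in Python rather than Max, with invented names) of the same phrase generator driven either manually or on its own clock:

```python
import random
import time

# Illustrative stand-in, not the code behind the piece: the generator takes a
# bare trigger, so the trigger's source (human or machine) is swappable.

def generate_phrase(pitch_pool, max_len=6):
    """One of the generative behaviors, reduced to: a trigger in, a phrase out."""
    return [random.choice(pitch_pool) for _ in range(random.randint(3, max_len))]

def on_button_press(pitch_pool):
    """Manual mode: push a button, get a new phrase."""
    return generate_phrase(pitch_pool)

def run_indefinitely(pitch_pool, interval_range=(4.0, 10.0), duration=None):
    """Automated mode: the same generator, triggered on its own schedule,
    can keep a texture going for as long as the meal lasts."""
    start = time.monotonic()
    while duration is None or time.monotonic() - start < duration:
        yield generate_phrase(pitch_pool)
        time.sleep(random.uniform(*interval_range))

# Example usage:
# for phrase in run_indefinitely([60, 63, 66, 69, 72], duration=20):
#     print(phrase)
```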

Wine, perhaps more than any other comestible, brings the question of time to the fore. One of the commonly discussed attributes of a wine at a tasting is its finish, how long it lingers on the palate, which I’ve heard some sommeliers describe in very precise measurements. But thinking about how a sensation changes in the mouth is not limited to wine. After I gave my presentation to the R&D team at Heston Blumenthal’s Fat Duck restaurant in Bray, England, last June, I sat around with some of the chefs and played some of my musical textures while we tasted caramel and graphed flavor profile changes over time on paper.

When I was studying computer music back in grad school at the University of Washington in Seattle, I remember encountering Dennis Smalley’s concept of spectromorphology. It was fascinating to think of a method for categorizing the different ways a sound could transform over time. Electronic musicians synthesizing sounds are familiar with the concept of an envelope, a shape that describes how a parameter changes over time (the most common variant being the Attack-Decay-Sustain-Release envelope, which lets a synthesist specify how a sound’s level rises, settles, holds, and fades away), and game audio designers are familiar with the concept of a real-time parameter curve (RTPC) that links such parameters to real-time input. I have often wondered about a taste-based equivalent. Is there something about our rich musical language for working with time that can apply to the rhythms of the kitchen? How precisely can we quantify and categorize the way that taste sensations change over time? Maybe we could call it “gastromorphology.”
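
For readers who haven’t worked with these tools, here is a minimal Python sketch of both concepts: an ADSR amplitude envelope and a piecewise-linear real-time parameter curve. A “gastromorphology” curve could plausibly take the same form, with minutes since the first bite, or the temperature of the dish, as the real-time input; the numbers below are arbitrary examples:

```python
# Minimal sketch of an ADSR envelope and an RTPC mapping; all values are
# arbitrary illustrations, not taken from any particular synth or game engine.

def adsr(t, attack=0.05, decay=0.2, sustain=0.6, release=0.5, note_off=2.0):
    """Amplitude (0..1) at time t for a note released at note_off seconds."""
    if t < attack:                                   # rise to full level
        return t / attack
    if t < attack + decay:                           # fall to the sustain level
        return 1.0 - (1.0 - sustain) * (t - attack) / decay
    if t < note_off:                                 # hold while the note is on
        return sustain
    if t < note_off + release:                       # fade out after release
        return sustain * (1.0 - (t - note_off) / release)
    return 0.0

def rtpc(value, points):
    """Piecewise-linear curve: map a real-time input (distance, health, or
    minutes since the dish was served) to a parameter such as filter cutoff."""
    points = sorted(points)
    if value <= points[0][0]:
        return points[0][1]
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        if value <= x1:
            return y0 + (y1 - y0) * (value - x0) / (x1 - x0)
    return points[-1][1]

# Example: map 0-10 minutes after serving to a low-pass cutoff sweeping 8 kHz -> 500 Hz.
print(rtpc(3.0, [(0.0, 8000.0), (10.0, 500.0)]))
```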

This issue comes up at the end of one of NPR’s recent features on Charles Spence. First Spence is quoted:

[For example,] a dark chocolate or coffee-tasting dessert, then something like Pavarotti’s [performance of Puccini’s] “Nessun Dorma,” making much more low-pitched sounds, seem to be the perfect complement to help bring out those bitter tastes in the dark chocolate or the coffee.

Then the author of the article gets in a little zinger of a last word:

Of course, ‘Nessun Dorma’ gets a little more high-pitched near the end—so there are still challenges in finding the perfect sound for a constant flavor experience.

This is exactly the problem I’m working to solve. “Nessun Dorma” does change at the end, building to a glorious climax; like most music, it evolves. This is the form of the work, and determining the form is traditionally a big part of the composer’s job. When composing for dance or film or setting text, that evolution may be hitched to another structure, but the end result is a fixed trajectory over time. And this is also why it may be problematic to use a finished piece by Debussy or Rachmaninoff in evaluating taste-to-music correspondences. Not only are there many parameters to track, but the rate at which these parameters change (what we might call morphology, or, at a higher level, simply musical form) is likely very different from the rate at which the tasting experience changes.

So I return to Messiaen as a useful reference point. I sometimes find myself in the semantically awkward situation of using the antonyms “static” and “dynamic” to refer to the same thing. Sound by its nature is motion, vibration; it is dynamic. But when I talk about static music, I mean music that isn’t going anywhere, music that is nonteleological, music that is not progressing towards a specific goal. I often talk to students of game music about the challenge of taking the pre-rendered dramatic trajectory out of a piece of music they’re writing, so that the game can put it back in. Messiaen exemplifies this kind of stasis.

This project draws on a lot of ideas: Messiaen’s approach to static music, video game music that responds to user input, algorithmic or generative processes, crossmodal sensation, multimedia pairings, the rhythmic profile of a meal. Taken together, they suggest a flexible, dynamic approach to composition, and the applications are not limited to wine, or to a meal. In fact, a real-time, responsive system such as I’ve described could be equally put to use to create a customized soundtrack for any of the unpredictable events of daily life, opening up a whole new arena for creative work.