
Your Computer is Listening. Are you?

Due to the rate of growth and development of A.I. technology, #resistanceisfutile. Which is to say that computer-composed music is here, and the conversation needs to change.

Written by Noah Stern Weber

Six years ago, I wrote an article stemming from a lively discussion I had with a few friends about David Cope’s artificial intelligence composition program “Emily Howell.” My intention was twofold: to examine the philosophical challenge of our society accepting music that originates from an extra-human source, and to ask whether “Emily Howell’s” work met the definition of a composed piece, or whether extraordinary human effort shaped the final product.

This inquiry will take a very different approach.

We begin with the hypothesis that, due to the rate of growth and development of A.I. technology, #resistanceisfutile. Which is to say that computer-composed music is here, and the conversation needs to change.

Need proof? When I wrote the article six years ago, there were roughly two or three A.I. composition programs, mostly theoretical and almost exclusively confined to academic institutions. In the two weeks between agreeing to write this article and sitting down to flesh out my notes, a new program built on Google’s open A.I. platform was released. In the week and a half between writing my first draft and coming back for serious revisions, another A.I. music system was publicly announced with $4 million in venture capital funding. The speed at which new technology in this field is developed and released is staggering; the question is no longer whether it will change the musical landscape, but how we will adapt to it.

Advances in the capacity and ease of use of digitally based media have fundamentally changed the ways that creators and producers interact with audiences and with each other, and, in many ways, they have bridged some of the gaps between “classical” and “popular” music.

Ted Hearne introduced me to the beauty and artistic possibilities of Auto-Tune in The Source (digital processing design by Philip White). After seeing a demo of Kamala Sankaram’s virtual reality operetta The Parksville Murders, I programmed a session at OPERA America’s New Works Forum, bringing in the composer, producers (Opera on Tap), and director (Carri Ann Shim Sham) to introduce their work to presenters and producers of opera from around the country. While still a beta product, it led to a serious discussion about the capacity of new technologies to engage audiences outside of a more traditional performance space.

The Transactional Relationship 

In the tech world, A.I. is equated with the Holy Grail, “poised to reinvent computing itself.” It will not just automate processes, but continually improve upon itself, freeing the programmer and the consumer from constantly working out idiosyncrasies or bugs. It is already part of our daily lives, powering Google’s search function, Siri, and credit card fraud detection. This intuitive learning will be essential to mass acceptance of self-driving cars, which could save tens of thousands of lives annually.

So why is A.I. composition not the next great innovation to revolutionize the music industry? Let’s return to the “Prostitute Metaphor” from my original article. To summarize, I argued that emotional interactions are based on a perceived understanding of shared reality, and if one side is disingenuous or misrepresenting the situation, the entire interaction has changed ex post facto. The value we give to art is mutable.

A.I.’s potential to replace human function has become a recurring theme in our culture. In the last 18 months, Westworld and Humans have each challenged their viewers to ask how comfortable they are with autonomous, human-esque machines (while Lars and the Real Girl explores the artificial constructs of relationships with people who may or may not ever have lived).

We want to feel a connection to the people who move us, as partners and as musicians. Can A.I. do this? Should A.I. do this? And, as a segue to the next section, what does it mean when the thing that affects us (the perfectly created partner, the song or symphony that hits you a certain way) can be endlessly replicated?

Audiences are interested in a relationship with the artist, living or dead, to the point that the composer’s “brand” determines the majority of the value of the work (commissioning fees, recording deals, royalty percentages, etc.), and the “pre-discovery” work of famous creators has been sought after as an important link to the creation of the magnum opus.

Supply and Demand

What can we learn about product and consumption (supply and demand) as we relate this back to composition in the 21st century?

If you don’t know JukeDeck, it’s worth checking out. It was the focal point of Alex Marshall’s January 22, 2017, New York Times article “From Jingles to Pop Hits, A.I. Is Music to Some Ears.” Start with the interface:

[Image: Two JukeDeck screenshots. The first shows a list of genres: piano, folk, rock, ambient, cinematic, pop, chillout, corporate, drum and bass, and synth pop. The second shows a list of moods: uplifting, melancholic, dark, angry, sparse, meditative, sci-fi, action, emotive, easy listening, tech, aggressive, and tropical.]

Doesn’t it seem like an earlier version of Spotify?

[Image: Two smartphone screenshots from an earlier version of Spotify. The first features an album called Swagger with a shuffle play option and four of its songs: “Ain’t No Rest for the Wicked,” “Beat The Devil’s Tattoo,” “No Good,” and “Wicked Ones.” The second features an album called Punk Unleashed with a shuffle play option and five of its songs: “Limelight,” “Near to the Wild Heart of Life,” “Buddy,” “Not Happy,” and “Sixes and Sevens.”]

“Spotify is a new way of listening to music.” That was the company’s catchphrase (see the Wayback Machine capture from 6/15/11), dropped once streaming became the primary way that people consume music. Curation can be taken out of the consumer’s hands: not only is it easier, it is ostensibly smarter, and the consumer should feel worldlier for learning about new groups and hearing new music.

The problem, at least in practice, is that this was not the outcome. The same songs keep coming up, and with prepackaged playlists for “gym,” “study,” “dim the lights,” etc., the listener no longer needs to engage; the music becomes a background soundtrack instead of a product to focus on.

My contention is not that the quality of music has decreased, but that the changing consumption method devalues each moment of recorded sound. The immense quantity of music now available makes the pool larger, and thus each individual work (song, track, piece) inherently carries less value.

We can’t close the Pandora’s box that Spotify opened, so it is important to focus on how consumption is changing.

A.I. Composition Commercial Pioneers

Returning to JukeDeck: what exactly are they doing and how does it compare to our old model of Emily Howell?

Emily Howell was limited (as of 2011) to exporting melodic, harmonic, and rhythmic ideas, requiring a person to render them playable by musicians. JukeDeck is more of a full-stack service: the company has looked at monetization and determined that producing finished digital-instrument audio, in lieu of any notated music, offers the immediate gratification that audiences increasingly expect.

I encourage you to take a look at the program and see how it creates music in different genres. Through my own exploration of JukeDeck, I felt that the final product was something between cliché spa music and your grandparents’ attempt at dubstep, yet JukeDeck is signing on major clients (the Times article mentions Coca-Cola). While a composer might argue that the music lacks any artistic merit, at least one company with a large marketing budget has determined that it gets more value out of this than from a living composer (acknowledging that a composer will most likely charge more than $21.99 for a lump-sum royalty buyout). In this situation, ease of use and cost outweigh the creative input.
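To make the full-stack distinction concrete, here is a minimal sketch, in Python, of the shape of that transaction from the buyer’s side. To be clear: JukeDeck does not publish an API of this form, and every name and field below is hypothetical, invented only to illustrate the model, which is to pick a genre and a mood, then receive finished audio and a lump-sum buyout price, with no score and no musicians anywhere in the loop.

```python
from dataclasses import dataclass

# Hypothetical sketch only: JukeDeck's real product is a web interface, not
# this API. The point is the shape of the transaction, not its details.

@dataclass
class TrackRequest:
    genre: str          # e.g. "corporate", "chillout", "drum and bass"
    mood: str           # e.g. "uplifting", "melancholic", "sci-fi"
    duration_sec: int   # finished length of the deliverable

@dataclass
class TrackResult:
    audio_url: str           # rendered audio: no score, no musicians
    buyout_price_usd: float  # lump-sum royalty buyout

def generate_track(req: TrackRequest) -> TrackResult:
    """Simulated stand-in for a full-stack generation service."""
    # A real service would run its generative model here and render audio.
    filename = f"{req.genre}-{req.mood}-{req.duration_sec}s.mp3"
    return TrackResult(
        audio_url=f"https://example.com/tracks/{filename}",
        buyout_price_usd=21.99,  # the lump-sum figure cited above
    )

if __name__ == "__main__":
    track = generate_track(TrackRequest("corporate", "uplifting", 90))
    print(track.audio_url, track.buyout_price_usd)
```

The design point is the absence of any human step between the menu and the deliverable; that absence is precisely what the $21.99 buys.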

The other company mentioned in the article that hopes to (eventually) monetize A.I. composition is Flow Machines, funded by the European Research Council (ERC) and coordinated by François Pachet (Sony CSL Paris / UPMC).

Flow Machines is remarkably different. Instead of creating a finished product, its intention is to be a musical contributor, generating ideas that others will then expand upon and make their own. Pachet told the Times, “Most people working on A.I. have focused on classical music, but I’ve always been convinced that composing a short, catchy melody is probably the most difficult task.” His intention seems to be to draw on the current pop music model of multiple collaborators/producers offering input on a song that often will be performed by a third party.

While that may be true, I think that the core concept might be closer to “classical music” than he thinks.

While studying at the École d’Arts Américaines de Fontainebleau, I took classes in the pedagogy of Nadia Boulanger. Each week focused on a different canonical composer. We would study each composer’s tendencies, idiosyncrasies, and quirks through a series of pieces, and were then required to write something in their style. The intention was to internalize what made them unique and to let that inform some of our own writing, if only by expanding our musical language. As Stravinsky said, “Lesser artists borrow, greater artists steal.”

What makes Flow Machines or JukeDeck (or Emily Howell?) different from Boulanger’s methodology? Idiosyncrasies. Each student took something different from that class. They would remember, internalize, and reflect different aspects of what was taught. The intention was never to compose the next Beethoven sonata or Mahler symphony, but to allow for the opportunity to incorporate the compositional tools and techniques into a palette as the student developed. While JukeDeck excludes the human component entirely, Flow Machines removes the learning process that is fundamental to the development of a composer. By offering a shortcut to the origination of new, yet ultimately derivative, ideas or idioms, such tools may leave composers less capable of making those decisions themselves. The long-term effect could be a generation of composers who cannot create, only expand upon an existing idea.

What would happen if two A.I. programs analyzed the same ten pieces with their unique neural networks and were asked to export a composite? Their output would be different, but likely more closely related than if the same were asked of two human composers. As a follow-up: if the same ten pieces were run through the same program on the same day, would it export the same product? What about a week later, after the program had internalized other materials and connections in its neural network?
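Neither company publishes its internals, but a deliberately crude stand-in can make the thought experiment concrete. The Python sketch below is my own illustration, a first-order Markov chain over pitches, far simpler than any real neural network: it learns nothing but transition statistics from a toy corpus, different random seeds yield different yet closely related “exports,” the same seed reproduces the same piece exactly, and retraining on an expanded corpus shifts every subsequent output.

```python
import random
from collections import defaultdict

# A deliberately crude stand-in for style-learning systems: a first-order
# Markov chain over MIDI pitches. The "style" it captures is nothing more
# than transition statistics culled from the training pieces.

def train(pieces):
    """Count pitch-to-pitch transitions across a corpus of melodies."""
    transitions = defaultdict(list)
    for piece in pieces:
        for a, b in zip(piece, piece[1:]):
            transitions[a].append(b)
    return transitions

def generate(transitions, start, length, seed):
    """Sample a melody; the seed controls every stochastic choice."""
    rng = random.Random(seed)
    melody = [start]
    while len(melody) < length and transitions.get(melody[-1]):
        melody.append(rng.choice(transitions[melody[-1]]))
    return melody

# Five toy "pieces" (MIDI pitch numbers) standing in for the ten analyzed works.
corpus = [
    [60, 62, 64, 65, 67, 65, 64, 62, 60],
    [60, 64, 67, 72, 67, 64, 60],
    [62, 64, 65, 67, 69, 67, 65],
    [60, 62, 60, 64, 62, 67, 65, 64],
    [67, 65, 64, 62, 60, 62, 64],
]

model = train(corpus)
print(generate(model, 60, 8, seed=1))  # one "export"
print(generate(model, 60, 8, seed=2))  # different, yet drawn from the same statistics
print(generate(model, 60, 8, seed=1))  # identical to the first: same data, same seed
```

Two such models trained on the same ten pieces would differ only in their sampling; two of Boulanger’s students differ in what they choose to internalize.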

What makes Flow Machines unique is the acknowledgment of its limitations. It is the Trojan Horse of A.I. music: it argues that it won’t replace composition, but will help facilitate it with big-data strategies. If we were discussing any non-arts industry, it might be championed as a “disruptive innovator.” Yet this becomes a slippery slope. Once we accept that a program can provide an artistic contribution instead of facilitating the production of an existing work, the precedent has been set. At what point might presenters begin to hire arrangers and editors in lieu of composers?

No one can effectively predict whether systems like Flow Machines will be used by classical composers to supplement their own creativity. Both recording and computer notation programs changed the way that composers compose and engage, each offering accessibility as a trade-off against some other technical element of composition.

I could foresee a future in which multiple famous “collaborators” input a series of musical ideas or suggestions into a program (e.g., a playlist of favorite works), and the musically literate person becomes an editor or copyist, working in the background to make it cohesive. Does that sound far-fetched? Imagine the potential for a #SupremeCourtSymphony or #DenzelWashingtonSoundtrack. The “collaborators” could come on stage after the performance and discuss their “musical influences,” as one might expect from any post-premiere talkback.

So what does it all mean?

In the short term, the people who make their living creating the work that is already uncredited and replicable by these programs may be in a difficult situation.

A classically trained composer who writes for standard classical outlets (symphony, opera, chamber music, etc.) will not be disadvantaged any further than they already are. Since Beethoven’s death in 1827 and the deification/canonization/historical reflection that followed, living composers have been in constant competition with their non-living counterparts, and even occasionally with their own earlier works. It will (almost) always be less expensive to perform something known than to take the risk to invest in something new. There may be situations where A.I.-composed music is ultimately used in lieu of a contemporary human creation, if only because the cost is more closely comparable to utilization of existing work, but I suspect that the priorities of audiences will not change quite as quickly in situations where music is considered a form of art.

Show me the money

I focused on JukeDeck and Flow Machines over the many other contributors to this field because they are the two with the greatest potential for monetization. (Google’s Magenta is a free-form “let’s make something great together” venture, possible only with the funding of Google’s parent company Alphabet behind it, and various smaller programs are working off of this open-source system.)

Monetization is the key question when considering a future outside of academia. The supposed threat of A.I. music is that it will eliminate the (compensated) roles that composers play in the 21st century; the counter-perspective asks how to create more paying work for these artists.

Whether it is a performing arts organization looking to strengthen its bottom line or composers trying to support themselves through their work, acknowledging shifts in consumer priorities is essential to ensuring long-term success. We need to consider that many consumers are seeking a specific kind of experience in both recorded and live performance, and that those expectations have diverged more in the last 15 years than in the preceding 50.

It is cliché, but we need more disruptive innovations in the field. Until we reach the singularity, A.I. systems will always be aggregators, culling vast quantities of existing data but limited in their ability to create anything fundamentally new.

Some of the most successful examples of projects that have tried to break out of the confines of how we traditionally perceive performance (in no particular order):

  • Hopscotch, with a group of six composers, featuring multiple storylines presented in segments via limousines, developed and produced by The Industry.
  • Ghosts of Crosstown, a site-specific collaboration between six composers, focusing on the rise and fall of an urban center, developed and produced by Opera Memphis.
  • As previously mentioned, Ted Hearne’s The Source, a searing work about Chelsea Manning and her WikiLeaks contributions, with a compiled libretto by Mark Doten. Developed and produced by Beth Morrison Projects (obligatory disclaimer – I worked on this show).
  • David Lang’s anatomy theater, an immersive experience (at the L.A. premiere, the audience ate sausages while a woman was hanged and dissected) that attempts not just to restage a historical game of grotesque theater, but to recreate the mass hysteria that surrounded it. (The sheer number of people who were “unsettled” by the work seems an accomplishment in itself. Once again, while I did not fully develop this show, I was a part of the initial planning at Beth Morrison Projects.)

Craft is not enough. Quoting Debussy, “Works of art make rules but rules do not make works of art.” As we enter this brave new world of man versus machine, competing for revenue derived not just from brawn but increasingly from intellect, composers will ultimately be confronted, directly or indirectly, with the need to validate their creations as something more than an aggregate.

I am optimistic about the recent trend toward deeper discussion of who our audiences are and how we can engage them more thoroughly. My sincere hope is that we can continue to move the field forward, embracing technologies that allow creators to grow and develop new work, while finding ways to contextualize the truly magnificent history that extends back to the origins of polyphony. While I am doubtful that computer origination of ideas will upend the system, I am confident that we can learn from these technological innovations, and from their incorporation into our lives, to understand the changes that need to be made to secure the role of contemporary classical music in the 21st century.