From Computer Models of Musical Creativity

Excerpted from Computer Models of Musical Creativity by David Cope.

Written By

David Cope

The following excerpts are reprinted from Chapter Three, “Current Models of Musical Creativity” of the book, Computer Models of Musical Creativity by David Cope, pp. 51-60. Copyright (c) 2005 by the MIT Press. Used with permission of the publisher.

    ***
    Principle: Creativity should not be confused with novelty or complexity.

    Randomness

    My afternoon walks often take me past a friend’s home. On one such walk, I found him busy scrubbing his driveway. We nodded greetings, and he complained about the horrible mess his oil-leaking car had made on his driveway, and how impossible it seemed to remove the stain. Rather than agreeing with him, I mentioned that I no longer had that problem. He eagerly asked what kind of detergent I used. I replied that what I used was better than detergent. By this time I had his undivided attention. I told him that I had decided to appreciate my oil stains as art—admiring the oil on my driveway like paint on a canvas. At first, he looked at me quizzically, to determine whether I was pulling his leg. I assured him that my change in attitude was real, and with the right light there were many extraordinary hues and images to be seen in these oil stains. Recently I passed his house and discovered him loosening the oil cap under his car. He told me that his detergent had mangled his “art,” and that he needed to seed it again.

    I recount this simple story because I believe it demonstrates how creativity can circumvent logic in order to solve problems. This story also demonstrates how important perspective is to creativity—surely many would argue that my solution here is simply illogical, not creative. They would be wrong. In fact, the programs I describe in this section should be gauged against this model, to see if they pose any true opportunities for creative output.

    Most research on creativity ignores the confusion that randomness poses to its recognition. On one hand, distinguishing a creative solution to a problem from a random solution to that same problem should be easy, and it usually is easy in scientific research. On the other hand, especially in the arts, random output often competes with creativity, at least in terms of novelty and surprise. Using computer programs to imitate creativity further confuses the issue, since computers process data so quickly and accurately that to some, their output is magical. In fact, the distinction between creativity and randomness had little relevance prior to the computational age, because random behavior and creative behavior seemed so apparently distinct. Since most computer programs that claim to create use randomness in some way, it is very important that we clearly define this term to determine whether or not what we are experiencing is the result of creativity, or simply a “random” shot in the dark. To complicate matters, we often use words such as random, unpredictability, chance, and indeterminacy rather freely in our speech and writings, without giving much thought to what they actually mean. Please indulge me, therefore, as I discuss in detail what I think random means, and whether it truly exists. In so doing, a number of important features of computational creativity will, I believe, become clearer.

    Clearly, we must not rely simply on perception for our definition of randomness, since not only will perception differ from one person to the next, but perception is often far from reality. Like creativity, randomness is a process, not a thing, a process that cannot always be discerned from output alone. Even scientists use the word random to mean several different things:

    …applied for instance to a single string of a thousand digits, random means that the string is incompressible. In other words, it is so irregular that no way can be found to express it in shorter form. A second meaning, however, is that it has been generated by a random process, that is, by a chance process such as a coin toss, where each head gives 1 and each tail 0. (Gell-Mann 1994, p. 44)

    One method scientists use to help define randomness is to test processes under identical conditions.

    The key to thinking about randomness is to imagine such a system to be in some particular state, and to let it do whatever that particular system does. Then imagine putting it back into exactly that initial state and running the whole experiment again. If you always get exactly the same result, then the system is deterministic. If not, then it’s random. Notice that in order to show that a system is deterministic, we don’t actually have to predict what it will do; we just have to be assured that on both occasions it will do the same thing. (Stewart 2002, p. 280)

    Of course, the fallacy with this description of randomness is its use of the word exactly. While all of the known variables may appear to be exactly the same, the unknown variables may not be the same. For example, when quantum theorists have spoken about absolute randomness—believing that they know the exact state of all of the variables—they have ignored the as-yet unproven, and thus incalculable, existence of dark matter and dark energy, important cosmological components necessary to account for the composition of the known universe. As well, according to many philosophers and scientists, no two initial states can have exactly the same conditions, since at least one known variable—time—will have changed, no matter what state the other conditions occupy. In short, no two experiments can ever have exactly the same initial conditions.

    What many people mean when they use words such as “random” is simply that conditions are too complex for them to understand what is occurring. The actual position, speed, and direction of a single atom in an ocean wave, for example, result from such incredibly complex competing and reinforcing processes that calculating these parameters seems impossible. This complexity may or may not actually involve randomness (I will speak about chaotic behavior momentarily). For most of us, then, using the word “random” really means that a process is simply too complex to sort out. We do not mean, at least in this instance, that the atom moves about without any cause and effect resulting from the energies and other atoms that surround it.

    Another common interpretation of randomness is without pattern. The numbers that follow the decimal point in π seem random to us simply because they lack repeating patterns—at least as far as humans have been able to ascertain. The number π is fixed, however, and represents (as Gell-Mann says) the shortest form in which it can be expressed. Likewise, the cosine function output in figure 3.11 apparently lacks repeating patterns and thus seems random. However, each time the formula that produced it is run with the same input, the same numbers appear in the same order as output.
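    The exact formula behind figure 3.11 is not reproduced in this excerpt, but the point can be made with a minimal Python sketch of my own devising, assuming any simple cosine iteration: the output looks patternless, yet the same input always yields the same numbers in the same order.

    ```python
    import math

    def cosine_sequence(seed, length):
        """Iterate a simple cosine-based formula (illustrative only).
        The output looks irregular but is fully determined by the seed."""
        values = []
        x = seed
        for _ in range(length):
            x = math.cos(x * 1000.0)  # arbitrary scaling, chosen only for illustration
            values.append(round(x, 6))
        return values

    # Run twice with the same input: the "patternless" output is identical both times.
    print(cosine_sequence(0.123, 8))
    print(cosine_sequence(0.123, 8))
    ```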

    Programming languages provide so-called random processes that produce unpredictable results. However, computer randomness (often called pseudo randomness) is actually deterministic. The reason for this is that deterministic algorithms—the basis for all computation—produce deterministic outcomes. Whenever a programmer calls upon a computer language’s random function, that programmer is depending on the irrelevance of the data chosen to provide the sense of randomness. Given enough time and provided with the generating algorithm, programmers could accurately predict each new datum produced by computer pseudo randomness.
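    A minimal sketch of this point, using Python's standard-library generator (any seeded pseudo-random generator would behave the same way): once the algorithm and its starting state are known, the entire "random" stream is reproducible.

    ```python
    import random

    # Two generators seeded identically produce exactly the same "random" stream,
    # illustrating that computer pseudo-randomness is deterministic.
    a = random.Random(42)
    b = random.Random(42)

    print([a.randint(0, 11) for _ in range(10)])
    print([b.randint(0, 11) for _ in range(10)])  # identical to the line above
    ```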

    In all three of the cases I have just described—complexity, lack of patterns, and irrelevance—apparent randomness arises not out of a lack of determinism, but rather out of a lack of perceivable logic. Indeed, Sir Isaac Newton, whose third law of motion describes determinism (Actioni contrariam semper et aequalem esse reactionem, in Newton 1726, p. 14; translated as “to every action there is always opposed an equal and opposite reaction”—Cajori 1934, p. 13), would argue that ocean waves, π, and computer randomness all result from very deterministic causes.

    According to Newtonian mechanics, when we know the state of a physical system (positions and velocities) at a given time—then we know its state at any other time. (Ruelle 1991, p. 28)

    Therefore, according to Newton at least, none of what I have described here represents randomness; it is just that we perceive these various actions as random because we cannot, will not, or do not want to actually follow the deterministic processes that produced them.

    From the above examples, then, we could say that when we typically use the word “random,” we do not actually mean “without regard to rules” (Webster’s Collegiate Dictionary, 1991, p. 1116); rather, we are simply expressing our lack of comprehension of the determinism present in the system we are encountering. Given enough time, we could predict the result of any process, no matter how apparently unpredictable it may seem. If this is so, then creativity is also predictable, for one assumes that it derives from a deterministic system, no matter how imposingly complex, lacking in perceivable pattern, or irrelevant the output of that system may be.

    However, I have just begun to describe the controversies surrounding randomness. The sciences of quantum physics and of chaos have recently provided arguments for the underlying indeterministic nature of the universe. Many scientists believe that randomness exists at the quantum level—the world of the very small. Richard Feynman, one of the proponents of such randomness, and an articulate spokesperson for QED (quantum electrodynamics), describes the quantum phenomenon thus:

    Try as we might to invent a reasonable theory that can explain how a photon “makes up its mind” whether to go through glass or bounce back, it is impossible to predict which way a given photon will go. Philosophers have said that if the same circumstances don’t always produce the same results, predictions are impossible and science will collapse. Here is a circumstance—identical photons are always coming down in the same direction to the same piece of glass—that produces different results. We cannot predict whether a given photon will arrive at A or B. All we can predict is that out of 100 photons that come down, an average of 4 will be rejected by the front surface. Does this mean that physics, a science of great exactitude, has been reduced to calculating only the probability of an event and not predicting exactly what will happen? Yes. (Feynman 1985, p. 19)

    However, there are two different views as to what has actually occurred here. According to the first, the photon is a part of an ensemble of photons, all of which are distributed through space. The overall intensity of this group of photons corresponds to our usual interpretation of groups of similar events: a probability distribution no more mysterious than an actuarial table or a human population census giving the distribution of ages or genders. If this viewpoint is correct, then the lack of predictability again describes only our ignorance, and nothing more, and the photon to which Feynman refers here is still behaving deterministically. However, a second view is also possible. According to this second view, we are not ignorant of anything, and quantum mechanics is complete in its description of individual events. The photons decide to enter the glass or bounce off it without cause, and prediction beyond probability distribution is now and forever impossible.

    Feynman sums up the apparent randomness of electron motion in this way:

    Attempts to understand the motion of the electrons going around the nucleus by using mechanical laws—analogous to the way Newton used the laws of motion to figure out how the earth went around the sun—were a real failure: all kinds of predictions came out wrong. (Feynman 1985, p. 5)

    Murray Gell-Mann agrees, adding that

    …the probabilistic nature of quantum theory can be illustrated by a simple example. A radioactive atomic nucleus has what is called a “half-life,” the time during which it has a 50% chance of disintegrating. For example, the half-life of Pu²³⁹, the usual isotope of plutonium, is around 25,000 years. The chance that a Pu²³⁹ nucleus in existence today will still exist after 25,000 years is 50 percent; after 50,000 years, the chance is only 25 percent; after 75,000 years, 12.5 percent, and so on. The quantum-mechanical character of nature means that for a given Pu²³⁹ nucleus, that kind of information is all we can know about when it will decay; there is no way to predict the exact moment of disintegration…. (Gell-Mann 1994, pp. 132–133)

    Of course, what Gell-Mann recounts here could be seen as testimony for human inadequacy, not testimony against determinism.
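    For reference, the percentages Gell-Mann cites follow directly from the half-life relation; a minimal sketch, assuming his rounded 25,000-year figure:

    ```python
    def survival_probability(years, half_life=25_000.0):
        """Chance that a single nucleus has not yet decayed after `years`."""
        return 0.5 ** (years / half_life)

    for t in (25_000, 50_000, 75_000):
        print(t, f"{survival_probability(t):.1%}")  # 50.0%, 25.0%, 12.5%
    ```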

    It is also quite possible that objects which appear to disintegrate randomly actually move as the result of undetected internal pressures or delayed reactions to previous external actions that we cannot yet detect, or that we have overlooked because these objects are so small. We would not, for example, suggest that an amoeba moves randomly simply because there is no observable external action/reaction process involved in that motion.

    In the early 1950s, David Bohm led the chorus of those who followed Newton’s principles in a revival of the search for hidden variables as a cause for the apparent randomness we perceive. Using statistics, Bohm pointed to a key difference between classical and quantum mechanics called the quantum potential (see Wolf 1981, p. 200). In Bohm’s theory, the laws of physics are totally deterministic.

    Quantum indeterminacy is not a sign of anything irreducibly probabilistic about the universe, but a sign of the inescapable ignorance of the observer—human or otherwise. (Stewart 2002, p. 342)

    German physicist Werner Heisenberg’s 1927 uncertainty principle grew out of the notion that simply observing quantum-level mechanics disturbs the accuracy of any measurements of its mechanisms. In other words, observation itself may be the cause of the apparent randomness at atomic and subatomic levels. Brian Greene explains Heisenberg’s principle:

    Why can’t we determine the electron’s position with an “ever gentler” light source in order to have an ever decreasing impact on its motion? From the standpoint of nineteenth-century physics we can. By using an ever dimmer lamp (and an ever more sensitive light detector) we can have a vanishingly small impact on the electron’s motion. But quantum mechanics itself illuminates a flaw in this reasoning. As we turn down the intensity of the light source we now know that we are decreasing the number of photons it emits. Once we get down to emitting individual photons we cannot dim the light any further without actually turning it off. There is a fundamental quantum-mechanical limit to the “gentleness” of our probe. And hence, there is always a minimal disruption that we cause to the electron’s velocity through our measurement of its position. (Greene 1999, pp. 112–113)

    This uncertainty principle means that no matter how accurately we measure the classical quantities of position and momentum, there will always be errors in our measurements.
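    In its usual quantitative form (not quoted in this excerpt), the principle bounds the product of the two measurement uncertainties from below:

    $$\Delta x \,\Delta p \;\ge\; \frac{\hbar}{2}$$

    where Δx is the uncertainty in position, Δp the uncertainty in momentum, and ħ the reduced Planck constant.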

    Predicting or determining the future of atomic objects would be impossible under these circumstances. This was called the Heisenberg Principle of Uncertainty or the Principle of Indeterminism. It had little relevance in the world of ordinary-sized objects. They were hardly bothered by disturbances produced through observation. But the uncertainty principle was serious business when it came to electrons. Indeed, it was so serious that it brought the very existence of electrons into question. (Wolf 1981, p. 115)

    Many quantum physicists counter these arguments by suggesting that randomness exists at the quantum—atomic and subatomic—levels, while cause-and-effect exists at larger-size levels as probabilistic certainties; thus, in a sense, they are arguing for both randomness and Newtonian (classical) mechanics. The problem with this dual model, of course, is that size—the very essence of such a model—is an arbitrary standard. From our perspective the atom is small and the universe is very large. To a being the size of an atom, the universe (if even observable) would seem monstrous and the quantum world would seem normal.

    Chaos theory appears at first glance to support the case for a kind of deterministic randomness. Chaos is the study of turbulent behavior in which some feel that incredible complexity makes predictability at the level of the very small scale impossible. James Gleick observes that chaos

    …brought an astonishing message: simple deterministic models could produce what looked like random behavior. The behavior actually had an exquisite fine structure, yet any piece of it seemed indistinguishable from noise. (Gleick 1987, p. 79)
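    A standard textbook illustration of Gleick's point, not one of the systems discussed in this chapter, is the logistic map: a one-line deterministic rule whose output looks like noise at a glance, is exactly reproducible from the same starting value, and diverges dramatically when that value is nudged.

    ```python
    def logistic_map(x0, r=4.0, steps=12):
        """Iterate x -> r * x * (1 - x), a simple deterministic rule."""
        xs = [x0]
        for _ in range(steps):
            xs.append(r * xs[-1] * (1.0 - xs[-1]))
        return xs

    print([round(x, 4) for x in logistic_map(0.2)])     # irregular-looking
    print([round(x, 4) for x in logistic_map(0.2)])     # yet exactly reproducible
    print([round(x, 4) for x in logistic_map(0.2001)])  # tiny change, very different tail
    ```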

    At first glance, this version of chaos resembles the notion of randomness occurring as a result of our inability to predict events when faced with great complexity. However, Stephen Kellert notes that

    …chaotic systems scrupulously obey the strictures of differential dynamics, unique evolution, and value determinateness, yet they are utterly unpredictable. Because of the existence of these systems, we are forced to admit that the world is not totally predictable: by any definition of determinism that includes total predictability, determinism is false. Thus begins the process of peeling away the layers of determinism that are not compatible with current physics, impelling us either to revise our definition of determinism or reject it as a doctrine. (Kellert 1993, p. 62)

    Chaos theory, however, does involve prediction. This prediction occurs as the result of something called the calculus of probabilities, long considered a minor branch of mathematics. The probabilities of chaos enlighten us to the predictability of events such as attractors, patterns that, given the right initial conditions, can be foreseen and even measured in advance of their occurrence.

    A central fact of the calculus of probabilities is that if a coin is tossed a large number of times, the proportion of heads (or the proportion of tails) becomes close to 50 percent. In this manner, while the result of tossing a coin once is completely uncertain, a long series of tosses produces a nearly certain result. This transition from uncertainty to near certainty when we observe long series of events, or large systems, is an essential theme in the study of chance. (Ruelle 1991, p. 5)
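    Ruelle's transition from uncertainty to near certainty is easy to watch in a small simulation (seeded here so that the experiment itself can be rerun identically):

    ```python
    import random

    rng = random.Random(0)  # fixed seed: the "experiment" is repeatable

    # A single toss is unpredictable, but the proportion of heads settles near 0.5.
    for n in (10, 1_000, 100_000):
        heads = sum(rng.random() < 0.5 for _ in range(n))
        print(n, heads / n)
    ```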

    The word “utterly” used by Kellert, and the phrase “completely uncertain” in Ruelle’s comments, characterize what I consider flaws in the arguments for randomness: arrogance. While I agree that we truly do not know why photons move in the way that they do, this lack of knowledge should not necessarily lead us to the conclusion that we therefore can never know, or that an entire canon of physics should be revoked.

    The bottom line for my own research is that randomness is not an engaging mystery, but a simple reflection of ignorance. Aside from the possible exception of quantum physics, randomness refers to behavior that is either too complex, too patternless, or too irrelevant to make prediction possible. None of these features seem to me to be associated in any way with creativity. In fact, while much of what we call creativity is also unpredictable, creativity often turns out in hindsight to be the most rational way to have proceeded. Reverse engineering even the most complex creative processes demonstrates this rationality. Randomness, on the other hand, perpetuates or even complicates problems—and should never be confused with creativity.

    Music Programs and Research

    Rather than describe individual algorithmic music-composing programs, many of which may no longer be available by the date of this book’s publication, I have opted to discuss the basic principles of algorithmic composing programs and the degree to which each principle allows for, or models, creativity. The approaches I discuss here include rules-based algorithms, data-driven programming, genetic algorithms, neural networks, fuzzy logic, mathematical modeling, and sonification. Though there are other ways to program computers to compose music, these seven basic processes represent the most commonly used types.

    Before describing these basic program types, however, it may be useful to define the term “algorithm” as I will use it in this book. Algorithms are recipes, sets of instructions for accomplishing a goal. An algorithm typically represents the automation of all or part of a process. Importantly, there is nothing inherently inhuman about algorithms. To understand this, one has only to remember that deoxyribonucleic acid (DNA)—the genetic basis of life—is an algorithm. Algorithms simply make tasks easier and, often, more bearable. If we had to repeatedly step through the processes of, for example, breathing, making our hearts beat, or blinking our eyes (all algorithmic processes), we would have no time to think about or do anything else.
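    As a purely hypothetical illustration of "algorithm as recipe" in musical terms (it is not one of the program types discussed below), the following few lines always turn the same starting pitch into the same short phrase:

    ```python
    def stepwise_phrase(start=60, length=8):
        """Build a phrase of MIDI pitches by a fixed recipe:
        two steps up, then a leap down, repeated."""
        phrase = [start]
        for i in range(1, length):
            phrase.append(phrase[-1] + (2 if i % 3 else -5))
        return phrase

    print(stepwise_phrase())  # same input, same phrase, every time
    ```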

    It is also important to differentiate between composers who use algorithms and algorithmic composers. While such a differentiation may seem polemical, it is critical to the computer modeling of creativity. Composers who use algorithms incorporate them to achieve a momentary effect in their music. In contrast, algorithmic composers compose entire works using algorithms, thus dealing with important issues of structure and coherence. The differences between these two seemingly comparable views resemble the division between so-called aleatoric and indeterminate composers of the mid-twentieth century. As Morton Feldman remarked: “You can see this in the way they have approached American ‘chance’ music. They began by finding rationalizations for how they could incorporate chance and still keep their precious integrity” (Schwartz and Childs 1967, p. 365). While I do not share Feldman’s vehemence in separating the two camps of aleatorism and indeterminacy, I do feel strongly that the differences between composers who use algorithms and algorithmic composers are substantial. In this book, I refer almost exclusively to algorithmic composers.

    Figure 3.1
    A sixteenth-century print reflecting a competition between an algorist, on the left, and an abacist, on the right.

    Figure 3.1 is a sixteenth-century print possibly reflecting similar differences—a kind of competition between an abacist on the right and an algorist on the left. The abacist uses an abacus, an ancient tool designed as a kind of simple slide rule. By physically sliding beads (numerical representations) various ways, abacists can add, subtract, and so on. The algorist, on the other hand, manipulates standard mathematical equations or algorithms to compute the same results. If facial expressions are any indication here, the algorist holds the upper hand in this competition. For those believing that using algorithms to create music somehow removes imagination, inspiration, and intuition from the composing process, know that defining a good algorithm requires as much imagination, inspiration, and intuition as does composing a good melody or harmony. Neither good algorithms nor good musical ideas grow on trees.

    The musical examples that demonstrate the processes described in the following sections have been reduced to simple keyboard notation in order that they may be usefully compared with one another. Obviously many, if not all, of these examples could be far more elaborate, and each lasts significantly longer than shown. However, I did not want the effectiveness or lack of effectiveness in emulating creativity in some of the examples to overshadow the effectiveness of other examples, especially when the choice of which music was used was solely my own. Readers are encouraged to use the programs available on my Web site and listen to the related MP3 files, in combination with reading the descriptions of those programs, for a more “musical” interpretation of the techniques presented here.

    Many of the principles of the processes described here overlap. For example, mathematical models can be seen as sonifying abstract formulas. Cellular automata resemble mathematical models—and hence sonification—in that they graphically represent mathematical computations. Fuzzy logic is a type of mathematics. Neural network output results from the mathematical calculations in collaborating hidden units. As well, virtually any process can be described as some form of Markov chain. The processes described below also have substantial differences, as their definitions demonstrate. I have arranged these algorithmic processes into these particular categories to help delineate their different approaches to producing musical output.
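    For instance, a minimal Markov-chain sketch (with a hypothetical transition table of my own, not drawn from any of the programs discussed below) shows the kind of state-to-state process referred to here: each next pitch is chosen according to probabilities conditioned only on the current pitch.

    ```python
    import random

    # Hypothetical first-order transition table: current pitch -> (next pitch, probability)
    TRANSITIONS = {
        "C": [("D", 0.5), ("E", 0.3), ("G", 0.2)],
        "D": [("E", 0.6), ("C", 0.4)],
        "E": [("G", 0.5), ("C", 0.5)],
        "G": [("C", 0.7), ("E", 0.3)],
    }

    def markov_melody(start="C", length=8, seed=1):
        rng = random.Random(seed)
        melody = [start]
        for _ in range(length - 1):
            choices, weights = zip(*TRANSITIONS[melody[-1]])
            melody.append(rng.choices(choices, weights=weights)[0])
        return melody

    print(markov_melody())
    ```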