Oram

Digital to Analog: The Needle and Thread Running Through Technology

A deep-cut exploration of the nature of technology, our relationship with it, and how decisions about it in one place and time shape attitudes in another place and time.

Written By

Matthew Guerrieri

Daphne Oram making hand-drawn inputs to the Oramics apparatus. (Via.)

This is a picture of Daphne Oram, demonstrating the technology she invented: Oramics. Oram (1925-2003) learned electronics as a studio engineer at the BBC in the 1940s. She composed the first all-electronic score broadcast by the BBC—in 1957, for a production of Jean Giraudoux’s Amphitryon 38—and, a year later, co-founded the BBC Radiophonic Workshop. A year after that, dismayed at the BBC’s lack of enthusiasm for her work (which may be sensed in the fact that the Workshop was not allowed to use the word “music” in its name), she struck out on her own and began to develop Oramics.

Oram’s conception was a radical union of audio and visual. It was a synthesizer, but one in which the input was hand-drawn patterns on strips of 35mm film. The strips of film rolled past photoelectric sensors, and the resulting currents were converted to sound. The avant-garde possibilities of sound-on-film had been explored previously—by Oskar Fischinger, for example, or Arseny Avraamov and his Soviet counterparts (the latter well-chronicled in Andrey Smirnov’s essential study Sound in Z)—but Oramics was more ambitious, more innovative. Oram’s machine ran up to ten strips of film at once, controlling not only pitch, but amplitude, waveform, and various filters. Sound-wise, it was miles ahead of the voltage-controlled analog synthesizers of the time.
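The signal path Oram built—a drawn curve scanned by a photoelectric sensor, producing a current that shapes sound—can be caricatured in a few lines of code. This is a deliberately minimal sketch, not Oram's actual calibration: the height-to-frequency mapping, the sample rate, and the function name are all illustrative assumptions.

```python
import math

def oramics_sketch(drawn_curve, sample_rate=8000, seconds_per_point=0.05):
    """Synthesize a tone from a hand-drawn pitch contour.

    `drawn_curve` stands in for the line an operator draws on a film
    strip: each value (0.0-1.0) is the line's height at one point,
    mapped here linearly to frequency. The mapping is hypothetical.
    """
    samples = []
    phase = 0.0
    for height in drawn_curve:
        freq = 110.0 + height * 440.0  # drawn height -> pitch, 110-550 Hz
        n = int(sample_rate * seconds_per_point)
        for _ in range(n):
            # Accumulate phase so pitch changes don't cause clicks
            phase += 2 * math.pi * freq / sample_rate
            samples.append(math.sin(phase))
    return samples

# Four drawn points -> a rising glide, 0.2 seconds of audio
tone = oramics_sketch([0.0, 0.25, 0.5, 1.0])
print(len(tone))  # 4 points x 0.05 s x 8000 Hz = 1600 samples
```

In the real machine, of course, separate strips ran in parallel for amplitude, waveform, and filtering—each a drawn curve read the same way.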

This is a picture of my daughter playing with the Oramics app, an iOS-based simulation. It was released in 2011, to coincide with a special exhibition at London’s Science Museum. Oram’s original apparatus was on display—now behind glass, no longer functional. Oram had stopped working on Oramics in the 1990s, after suffering a pair of strokes; by then, the advance of electronic music had left her and her machine behind. To consider why is to, perhaps, get close to something about the nature of technology, our relationship with it, how decisions about it in one place and time shape attitudes in another place and time.

Fair warning: this article is going to take the scenic route getting to its destination—more suite than sonata. But that I should feel compelled to give such a warning is not irrelevant. Because the real question is why some things are at the center, and why some things are peripheral, and how those things get to where they are. And a good starting point for answering that question is another technology: clothes.

*

The Great Masculine Renunciation, M division.

One of the great geologic-level events in the history of fashion was first named by the British psychoanalyst J. C. Flugel in his 1930 book The Psychology of Clothes. The event was, as Flugel put it, “the sudden reduction of male sartorial decorativeness which took place at the end of the eighteenth century”:

[M]en gave up their right to all the brighter, gayer, more elaborate, and more varied forms of ornamentation, leaving these entirely to the use of women, and thereby making their own tailoring the most austere and ascetic of the arts. Sartorially, this event has surely the right to be considered as ‘The Great Masculine Renunciation.’ Man abandoned his claim to be considered beautiful. He henceforth aimed at being only useful.

Flugel attributed the Great Masculine Renunciation to the spread of democratic ideals in the wake of the French Revolution: with all men now theoretically equal, male fashion converged on a kind of universal neutrality. In other words, according to Flugel, the more utilitarian style of fashion spread outward from the middle class, mirroring the rise of middle-class economic power.

Flugel was, perhaps, too optimistic. David Kuchta, in his book The Three-Piece Suit and Modern Masculinity: England, 1550–1850, traces the origins of the Great Masculine Renunciation much further back, to the 1666 introduction of the three-piece suit by Charles II. Sobriety in dress was first a symbol of masculine, aristocratic propriety. Only later would the style be adopted by the middle class, in order to criticize aristocratic wealth and assert their own political power; in turn, the upper class would re-embrace the style in their own defense. Both sides, at the same time, accused the other of being insufficiently modest in their dress, of embodying not masculinity and prudence, but effeminacy and indulgence.

And note: it is entirely a parley between middle- and upper-class men. Kuchta concludes:

The great masculine renunciation of the late eighteenth and early nineteenth centuries was thus less the triumph of middle-class culture; rather, it was the result of middle-class men’s appropriation of an earlier aristocratic culture, of aristocratic men’s appropriation of radical critiques of aristocracy, and of a combined attempt by aristocratic and middle-class men to exclude working-class men and all women from the increasingly shared institutions of power. (emphasis added)

What really solidified the Great Masculine Renunciation was the great geologic-level event in the history of technology: the Industrial Revolution. What was once a symbol of judiciously wielded privilege now became a symbol of efficiency, of diligence, of devotion to productivity. The uniform of economic and political power could also signify a complete congruence of work and life. Anthropologist David Graeber, in a recent article, put it this way:

[T]he generic quality of formal male clothing, whether donned by factory owners or functionaries, makes some sense. These uniforms define powerful men as active, productive, and potent, and at the same time define them as glyphs of power—disembodied abstractions.

Dress for the cog in the machine you want to be.

A couple of months ago, I was at the annual Fromm Foundation concerts at Harvard University, featuring the International Contemporary Ensemble. The group opted for outfits that, while realized in individual ways, still hewed close to standard new-music-ensemble dress. In fact, the few nods in the direction of rebellion—some bright leggings here, some gold-studded boots there, ICE founder Claire Chase’s metallic silver jacket—mostly just reinforced how closely the performers still orbited the standard all-black contemporary music uniform.

I’m not sure exactly when it became standard (a day of hunting through a few decades of archived newspaper reviews yielded precious little record of what performers were wearing—something, I realize, that might very well be symptomatic of what this article is discussing), but that all-black uniform has held sway for at least thirty years, which is not insignificant. Concert dress had long since conformed to the ideals of the Great Masculine Renunciation, so it makes sense that avant-garde concert dress would go even further in realizing those ideals: more stark, more neutral, more sober. And 20th-century avant-garde music, to an unprecedented extent, was a process-based movement—serialism to minimalism and everything in between—so one might expect its interpreters to take their fashion cues from the similarly streamlined and orderly world of the factory and the assembly line. But there’s something else going on with that parade of all-black, I think, and it is a bit of fallout from technological advance. And advance isn’t really the right word, in this case. We think of technological innovation as always being expansive, opening up possibilities and dimensions. But technological innovation also contracts dimensions. And the shadow of one of those contractions survives in all those black clothes.

One of the most sweeping changes wrought by audio recording and broadcasting technology was that, for the first time ever, music was no longer, by necessity, a visual as well as an aural experience. Music had always been only heard in live performance—which meant the listener was there, looking as well as hearing. (Even exceptions—Vivaldi’s female choristers singing behind a screen or Wagner’s enclosed pit orchestra or the like—were more like unusual variations of the visual context.) But with recordings and radio, the visual portion of musical performance disappeared. All one had was the sound. The technology decoupled eye and ear.

It is, actually, akin to the Great Masculine Renunciation. The process is the same: reduce a given medium—and remember, as Marshall McLuhan was fond of pointing out, clothes are just as much a form of media as any other—to its discrete components, isolate what is essential, streamline it into its most basic, direct form, cast away everything else. In this case, you have two media changing in tandem: concert dress evolved toward this extreme neutrality in order to better mimic the non-visual experience of music that recordings and radio increasingly made the norm. You could even argue that the music itself started to amplify this evolution, ever more focused on sound, how the sound is organized and produced, techniques and presentation styles following the sonic impetus toward abstraction. It echoed the favored toolbox—scientific, industrial, political—for making sense of what was turning out to be a very complicated world: divide and conquer.

*

Pythagoras; woodcut from the Wellcome Library, London.

The purest expression of philosophical allegiance to the sound-only experience was and is acousmatic music. The term was invented by Pierre Schaeffer, the French musique concrète pioneer, to describe the experience of hearing musique concrète, or any other sonic experience in which the source of the sound was hidden. The goal of acousmatic experience was to stop thinking about how the sound was produced and start noticing the sound itself, qualities and textures that might be elided or ignored in an audio-and-visual presentation. Schaeffer likened it to Pythagoras, the ancient Greek philosopher, supposedly lecturing from behind a veil in order to focus his students’ attention on the substance of his teachings. Thus, Schaeffer insisted, the modern technology of electronic sound reproduction was simply a recreation of ancient experience: “[B]etween the experience of Pythagoras and our experiences of radio and recordings, the differences separating direct listening (through a curtain) and indirect listening (through a speaker) in the end become negligible.”

Does it change the nature of Schaeffer’s thesis to note that the Pythagorean veil probably didn’t exist? The earliest references to it come long after Pythagoras’s time and make the veil more allegorical than real—an exclusionary implication, dividing Pythagoras’s followers into those who really got what he was teaching and those who didn’t. (Brian Kane has unraveled the Pythagorean veil—and much else—in his book Sound Unseen: Acousmatic Sound in Theory and Practice.) Then again, Schaeffer’s real, acknowledged philosophical reference point wasn’t Pythagoras. It was the phenomenology of Edmund Husserl.

Phenomenology is not an easily summarized thing, but at its core is the act of examining what exactly we perceive in order to bring to light ways we organize and narrate our perceptions. One of the better descriptions of the phenomenological process was given by Husserl’s disciple Maurice Merleau-Ponty, in his 1945 Phenomenology of Perception:

It is because we are through and through compounded of relationships with the world that for us the only way to become aware of the fact is to suspend the resultant activity, to refuse it our complicity…. Not because we reject the certainties of common sense and a natural attitude to things — they are, on the contrary, the constant theme of philosophy—but because, being the presupposed basis of any thought, they are taken for granted, and go unnoticed, and because in order to arouse them and bring them to view, we have to suspend for a moment our recognition of them…. [Phenomenological] reflection… slackens the intentional threads which attach us to the world and thus brings them to our notice[.]

It’s easy to see how Schaeffer’s acousmatic idea transfers this process into the realm of sound, veiling the relationship between a sound and its production in order to reveal how much of the sound’s nature gets lost in our compulsion to categorize it.

Sounds like a great idea, doesn’t it? But beneath that bright, objective surface is a nest of problems that can reiterate the sorts of presuppositions that phenomenology is meant to exorcise. Feminist interpretations of phenomenology, for instance, face the difficulty of Husserl’s idea of intersubjectivity, the assumption that other people will perceive and classify the objective world in much the same way I will. As it turns out, the “I” in that sentence is not incidental. As scholar Alia Al-Saji has written:

The consciousness that results is not only an empty, pure ego, it is also a universalized (masculine) consciousness that has been produced by the exclusion of (feminine) body, and hence implicitly relies on the elision of sexual difference. The phenomenological method’s claim to “neutrality” thus appears rooted in a form of double forgetfulness that serves to normalize, and validate, the standpoint of the phenomenological observer.

Johanna Oksala, similarly, acknowledges the suspicion “that the master’s tools could ever dismantle the master’s house.”

This might seem far away from the actual experience of music. But the thing to keep in mind is that to make some definition of the “actual experience” of music is, almost always, to make a claim of neutrality—to privilege one aspect of music (usually the sensual and aesthetic sense of timbre and rhythm and syntax) over another (usually the ramifications of the societal conditions under which the music is created or performed). And it runs into the same problem: who decides what’s essential? Every single categorical division I’ve been talking about—plain and fancy, sound and vision, parts and whole, past and present, musical and extra-musical—is similarly implicated. We call some kinds of dress sensible and some ostentatious because long-dead men (and only men) were locked in competition for who would be in and out of favor, and broadcast their convictions via the media of clothes. We analytically divide every human activity into component parts because the mechanical demands of industrial development got us in the habit. We separate aspects of musical performance by sense because a particular form of technology first did it for us, decades ago. We make divisions along lines that we never laid down.

*

From Daphne Oram, An Individual Note of Music, Sound and Electronics (1972).

Daphne Oram was temperamentally disinclined to make such divisions. Her work on Oramics turned into something resembling a new-age quest, a search for enlightenment at the boundaries of technology.  In 1972, Oram published a short book called An Individual Note of Music, Sound and Electronics. It is, on the one hand, a chatty, primer-like overview of basic ideas of sound synthesis and electronic music, but one that, at every possible opportunity, analogizes and anthropomorphizes its subject on the grandest possible scale:

In every human being there will surely be, as we have said, tremendous chords of wavepatterns ‘sounding out their notes.’ Do we control them by the formants we build up… by tuned circuits which amplify or filter? Are we forever developing our regions of resonance so that our individual consciousness will rise into being—so that we can assert our individuality? In this way does the tumult of existence resolve itself into a final personal waveshape, the embodiment of all one’s own interpretations of the art of living?

What emerges over the course of the book is that Oramics was conceptually inseparable from Oram’s critique of technology itself—but that technology could, indeed, dismantle and rebuild its own house.

If the machines, which replace the human interpreters, are incapable of conveying those aspects of life which we consider the most human, then… the machines will thwart the communication of this humanity. But need machines be so inhuman? Could we so devise a machine that, in the programming of it, all those factors which are deemed to be the most ‘human,’ could be clearly represented?

Her positive answer was the development of Oramics. Her vision of technology was—to put it as she might—one of additive, not subtractive, synthesis.
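Her phrase borrows literally from synthesis technique: additive synthesis builds a timbre up by summing simple sine partials, where subtractive synthesis starts from a harmonically rich source and filters material away. A minimal, hypothetical sketch of the additive side (the function name and parameters are my own, not Oram's):

```python
import math

def additive(partials, sample_rate=8000, duration=0.1):
    """Additive synthesis: sum sine partials, each (frequency, amplitude).

    The timbre is built up from nothing, one partial at a time—
    the opposite of carving a sound out of noise with filters.
    """
    n = int(sample_rate * duration)
    return [
        sum(amp * math.sin(2 * math.pi * freq * t / sample_rate)
            for freq, amp in partials)
        for t in range(n)
    ]

# A tone assembled from a fundamental plus two quieter harmonics
tone = additive([(220, 1.0), (440, 0.5), (660, 0.25)])
print(len(tone))  # 0.1 s x 8000 Hz = 800 samples
```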

Oram ended up on the margins of the perceived mainstream of innovation, even as she pursued her uniquely holistic conception of technology. One can speculate as to why. She was too far ahead of her time for the BBC, and, perhaps, too far out of time for the electronic music community at large. Her machine was never finished. (“It is still evolving all the time,” she wrote, “for one lifetime is certainly not long enough to build it and explore all its potential.”) She had an all-or-nothing attitude—toward her work, her employers, her colleagues. She could be exacting, stubborn, single-minded, and other adjectives that would probably sound somewhat less pejorative if she had been a man.

But Oram also never got her due because she was singular, in a way that all the technocracies that make up society, explicit and implicit, couldn’t quite encompass or process. (“My machine does not really fit into any category,” she admitted.) For all her technological prowess, Oram was the opposite of what gets assigned technological value. She was integral. She was non-repeatable. She was non-modular. She was indivisible.

In the first article in this series, I wrote:

I’ve found that one really fascinating question to ask myself while listening to music that utilizes technology—old technology, new technology, high technology, low technology—is this: What’s being hidden? What’s being effaced? What’s being pushed to the foreground, and what’s being pushed to the background?

Oram is a reminder that it’s not just what gets pushed to the background. It’s also who.