Brian Eno’s Recording Studio as Sackbutt

When teaching the history of electronic music, I naturally spend a fair amount of time talking about the technologies of the fifties and sixties, and even the forties and before. UNIVAC computers, punch cards, and 78 rpm disc-cutters are all technologies I had heard about but never seen, let alone worked with. Almost all of my undergraduate students, born at the end of the 1980s, have never heard of, let alone seen, a reel-to-reel tape recorder or even a reel of tape. So it becomes a bit of a challenge to explain the hybrid splicing techniques of James Tenney or the multiple looping process that gave birth to Steve Reich’s It’s Gonna Rain, as a fair amount of cognitive leaping is required on the part of the kids. But I had a real shock when I met with a group of first-year students, all from the media department of the School of Information Technology at the university where I teach, for an introductory class last week. While talking about Morton Subotnick’s Silver Apples of the Moon—which, after all, was the first work commissioned specifically for the medium of the LP—I discovered that almost none of them had ever seen a record up close and in person. In fact, some had only the vaguest idea how a record and turntable work. It all made me feel quite old.

I don’t mean to suggest that the LP record is a dead technology, lest Christian Marclay, followed by a thousand DJs, come after me to put my head on a spindle and watch it turn around at 33 1/3 rpm. Yet as more and more iPod scratchers and Wii musicians emerge, its passing may come sooner than one might think, and the divide will surely fall along generational lines. Analog tape’s death as a creative tool certainly seems unexaggerated—the only person I know scratching and performing with it live these days is Joseph Hammer. Still, the paradigms of the analog recording studio inform music of the digital age—looping is just one example. Plug-ins now on the market are designed to deliberately reintroduce the artifacts and flaws that people once worked quite hard to remove. I’m surprised no one has invented a looper that adds the slight—and sometimes not so slight—speed variations that occurred as a long tape loop traveled around the circumference of an electronic music studio, threaded around microphone stands, chairs, and whatever else was handy (a rough sketch of what I mean appears below). But this is just a kind of sonic nostalgia, as the processes for composing and performing evolve forward without looking back. It’s interesting to observe how young musicians think about time and process. Loops, tape, even CDs are linear and time-based. Young musicians today use flash memory, random access, broadband, hand-held devices. The LP record will fade out in the creative community, and the compact disc won’t be all that far behind.
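To make that looper idea concrete, here is a minimal, hypothetical sketch in Python with NumPy (my own illustration, not any existing plug-in). It simply rereads a mono loop several times while letting the playback speed drift slowly around normal, the way a long tape path threaded around a room would waver:

```python
import numpy as np

def wobbly_loop(loop, sr, passes=4, depth=0.01, drift_hz=0.3):
    """Repeat a mono loop several times, letting playback speed drift
    slowly around 1.0 (here +/- 1%), roughly the way a long tape loop
    wavers as it travels around a room. Purely illustrative."""
    out = []
    idx = np.arange(len(loop))
    for p in range(passes):
        t = idx / sr + p * len(loop) / sr   # continuous time across passes
        speed = 1.0 + depth * np.sin(2.0 * np.pi * drift_hz * t)
        # integrate the drifting speed to get a wobbling read position
        read_pos = np.clip(np.cumsum(speed) - speed[0], 0, len(loop) - 1)
        out.append(np.interp(read_pos, idx, loop))
    return np.concatenate(out)

# Example: a 2-second, 220 Hz sine "loop" at 44.1 kHz
sr = 44100
loop = np.sin(2 * np.pi * 220 * np.arange(2 * sr) / sr)
print(wobbly_loop(loop, sr).shape)  # four slightly wavering passes
```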

It goes without saying that the means of composition affect, consciously or not, the results. In computer music, today it’s hard to imagine having to wait days before hearing the results of your meticulous program, carefully punched on IBM cards and submitted to the batch processing center at your university—results that were probably one or two seconds long, at most. Since the advent of analog synthesizers, the computer music composer, the programmer, the engineer, and the performer have merged into one, while taking on the role of the audience as well, using the instantaneous feedback the medium provides in a way that was unimaginable in the era when the biggest sound you could make came from an orchestra. It’s interesting that Brian Eno, who postulated the idea of the recording studio as an instrument, has expressed discomfort with the computer, considering that it has encapsulated and extended the functions of said studio, and put it into the hands of hundreds of thousands of artists.

Not sure exactly what point I’m trying to make here—maybe I’m just smarting from the realization that such a gap would exist between a teacher who somehow still likes to pick up and hold his music, and his students, to whom a song, a track, a tune, a composition, is as ephemeral as an infrared signal from a game controller. I’d love to hear your thoughts.

20 thoughts on “Brian Eno’s Recording Studio as Sackbutt”

  1. Chris Becker

    Some of Eno’s reservations and observations about the computer as a recording tool can be found in this archived article from Wired magazine. http://www.wired.com/wired/archive/7.01/eno.html

    It’s an article I’ve circulated quite a bit (being someone who uses the laptop computer and digital technology a lot).

    I do wish more composers understood that technology as it is introduced to the marketplace isn’t necessarily an improvement upon what came before.

  2. rtanaka

    Personally I don’t really put much emphasis on the differences between analog and digital recordings — they both point to something that might have happened at some point in time or another, but neither is ever going to be quite as “real” as a live performance.

    Heh, come to think of it, I haven’t even really listened to much music lately unless it’s done live. (I commute to work every day but usually keep the radio off.) Maybe my ears have become spoiled.

  3. Chris Becker

    When I first heard “I am the Walrus” as a very (very) young music student, I wondered how the Beatles managed to notate all of those parts, cue and conduct them and finally record them in one take…

    Ryan, don’t forget the studio is an instrument and can be used compositionally to create worlds you can’t create live. It’s another medium…another animal altogether.

    This is also how I feel about analog tools and digital tools…they are their own instruments. And I have my preferences, and I do believe we have a generation of young composers who really don’t know what tape hiss, microphone bleed, tube amplifiers and / or “live” takes sound like when it comes to recordings. The data is there…but not everyone is listening.

    However, THAT said…if you talk with anyone who masters music for a living (and actually knows what the hell they’re doing…) you’re not going to get a glib response like “I can’t tell the difference between analog and digital…” Unless they are trying to take your money. Which many are…

  4. Chris Becker

    I am the Eggman
    Oh lord…I’m now wondering if anyone reading this thread has actually heard “I Am The Walrus” (the track from Magical Mystery Tour…not the new version with Bono and The Secret Machines…)???

  5. rtanaka

    Oh, I didn’t mean to imply that there was no difference between digital and analog. I used to study engineering and I took a couple of courses in acoustics, so I’m well aware that there are major differences in their function and construction. The resultant sounds, though, are always analog, because the output still remains the same — AC current converted into vibrations output through a speaker. There is a difference, but except for audiophiles and people who have extensive experience with the stuff, most people can’t hear it. I think it helps to keep this in mind, at least.

    I have a soft spot for electronic and electroacoustic music since what got me started into composition was a sequencer, but lately I’ve been more interested in writing music for acoustic instruments. I’ve found it very difficult to do developments of musical ideas without the result sounding like a distorted or warped version of the original source material. Processed sounds have a tendency to sound…well, processed.

    Someone mentioned to me that doing electronic music is “hard” because with the technology changing so fast the medium has never really had the time to develop a performance practice around itself like with the other instruments. (Where’s its notation, for example?) I know it can be done and done well, but it’s just a personal decision to stay away from the stuff for now.

  6. carlstone

    rtanaka (and anyone else who would care to chime in), what would you say about the differences in composing with analog systems vs. digital, as opposed to listening?

  7. rtanaka

    A few years ago I read a book called Artificial Intelligence: The Very Idea by John Haugeland. The book obviously touches on other subjects, but he has a section in there where he attempts to make a distinction between analog and digital. I found it to be pretty illuminating.

    The way he described the differences between the two was that analog systems are “medium-dependent,” while digital systems are “medium-independent.” He made a contrast between board games and sports as an example.

    Chess games are medium-independent because the size, shape, and materials of the pieces themselves are largely incidental. The function of the game doesn’t change whether it’s a carved marble set, a pocket chess set, or a computer game. Whereas in sports (let’s say basketball for now), the game is medium-dependent because the size of the ball and the physical makeup of the players themselves make a big difference in the outcomes. You couldn’t reduce the basketball to a handheld size and still call it the same game, for example.

    Using these categories you can sort of see that some art mediums tend to lean toward one or the other — writing and composition tend toward “digital,” while dance and the performance arts tend toward “analog.” The closest thing I can think of as being an “analog system” is improvisation, because the outcome is very reliant on the medium itself, which is the person. Of course, most things we do are always a mixture of the two, both in art and in life, and the intersections between them are what make things interesting on some level, I think.

    So for me, music composition is “digital” because it was always digital to begin with. I don’t make a distinction between writing on paper or writing on the computer because, in terms of functionality, notes are largely the same type of abstract representation as chess pieces and the alphabet. I use Sibelius mostly because of its fast editing capabilities and a playback function that lets me double-check what I’ve just written. It’s useful only because it’s faster and more efficient.

  8. carlstone

    So you would say that, aside from the speed and efficiency, the result is the same for you. But I wonder whether the process influenced the results, if for no other reason than that speed and efficiency have their own consequences for your composing — in other words, if you had started with the same “inspiration” and written on paper instead of using Sibelius, would the results have been identical? Of course it’s impossible to prove, because there’s no way to really test it, but I believe they probably wouldn’t be.

  9. Colin Holter

    Not to be a stickler, but I have to point out: There’s nothing “analog” about writing music on paper that isn’t also the case with notation software. In both cases we’re making symbols that represent (analogically, if you will) the desired result of a player doing certain things with his or her instrument.

    Having composed on magnetic tape as well as with computers/digital media, I’d say that composing in “analog”–i.e., with tape–is something I never ever ever want to do again. That said, like Ryan, I also notate (instrumental/vocal music, that is) at the computer; I work with extensive plans/graphs/verbal scribblings, but I don’t usually write notes per se by hand.

  10. rtanaka

    The process definitely affects the result, but in my opinion, in ways that don’t differ much from what one does in actual practice as a composer. Composers write pieces, play them back on the piano as a way to get an idea of how they sound, do lots of editing, and often revise pieces upon hearing them performed. (Mahler often reorchestrated his pieces after rehearsals, for instance.) Playback just allows composers to “double-check” their output in this manner, but at a much faster rate.

    One thing that might be interesting to point out is that “medium-independent” systems like chess technically don’t actually exist. If you really think about it, chess is just a set of rules, with symbols acting as placeholders for certain bits of ideas…in a lot of ways very ethereal, because it’s not something that has any physical manifestation. We only recognize it as being there because we agree that it’s there. (Say, I could replace the pawn with a rock and the game would function exactly the same, as long as we both “agreed” that the rock replaces the pawn. This works because the game isn’t reliant on its medium.)

    I tend to think of music in the same way, for the most part. We agree that certain symbols mean certain things, and use this metaphysical connection in order to communicate with each other. It’s quite remarkable what we do with these things, considering that they don’t actually have any physical manifestation — they don’t really exist anywhere except in our own minds.

    Like Colin said, there’s no such thing as an analog composition, because music notation itself is inherently digital. All digital really means is a method of data storage that utilizes discrete values, which goes back to older technologies like the abacus and the printing press. (Metric pulsation, for instance, is a form of binary, with its own on and off values.) I think it’s common to think that digitization started with the computer, but that’s not really the case — it was always there as far as anybody can remember; nowadays it’s just done a hell of a lot faster.

  11. rtanaka

    Oh, I forgot to mention that paintings and sculptures can be considered “analog” artworks, because the result is very much dependent on its medium. Seeing pictures of paintings and sculptures isn’t quite the same as looking at them directly, because you miss out on a lot of the textures and effects that come with the materials themselves. Haugeland actually mentions this in the book.

    It’s sort of interesting, because the way painters and sculptors make a living is fundamentally different from us musicians: they put a very strong investment into the creation of one object, and receive a large sum of money in exchange for their one unique work. Composers, on the other hand, tend to make their living by publishing, i.e., duplication. Digital systems are more duplication-friendly, and I think this idea is very much reflected in the way we use computer-related things today as well.

  12. Chris Becker

    I am an active composer who uses both the medium of recording and my relationships with musicians – many of them improvisers – to create both prerecorded work and hybrid works for live performance. I rarely notate music or create charts – and many musicians I work with do not read music – but notation and charts are a part of what I do. Although much of my music comes out of a process, preplanning and conceptual ideas are also a part of what I do (see Saints & Devils or my current Shanty Town Suite).

    For me, the recording process is a part of the composition. It is a part of the compositional gesture that separates one piece from another in this medium. I am very interested in the creative dialogue between the recording studio and live performance.

    People’s ears are in fact able to hear a wide range of subtleties. I do not believe that listeners simply “don’t hear” the differences between an mp3 and a vinyl record, or a digital recording vs. an analog recording.

    I’ve found that with digital recording, one has to work very hard at the editing and mixing stage to get some sense of space, dynamics and warmth in your music. I’ve spent six to eight months mixing a single piece of music in the digital realm. Time isn’t really an issue – it’s a part of the process. Taking that time is part of the deal if you want to explore and strive for certain results. But that said, I also like working fast, recording first or second takes only and keeping things “rough.” Sometimes you have to be punk rock about it if you want things to sound good.

    The tools and your relationship with them are a part of the creative process. One tool – even a consumer-level piece of software – won’t give each person identical results. Not if you dig in and push the tool to its limits.

  13. rtanaka

    Yeah, I agree. It can be done well in the right hands (I’ve met some laptop players who were really amazing at what they do), but it’s largely for personal reasons that I stay away from the stuff for now. After tinkering around with different kinds of mediums, I just found it most comfortable to write for more traditional types of instrumentation. (I’m doing lots of things with functional harmony now, so I figure it would be better to use live instruments.) I do enjoy improvising with electronic musicians, however.

    I think the danger of using technology is that it becomes very easy to do mediocre work precisely because its capabilities are so powerful. Nowadays there are programs that will pretty much let anybody with two hands “produce” a techno song with a few touches of a button. Likewise, the playback function of Sibelius and Finale can often cause the composer to develop a dehumanized perspective on the instruments they’re writing for, since it can play just about anything. I think in order to do something creative with the tools available, you also need a strong background in musical fundamentals and idiomatic writing; otherwise the tool ends up dictating your output rather than the other way around.

    The piano (which is a technology in itself) has had similar developments and criticisms thrown against it, oddly enough. Because it became so “easy” to produce a note, a lot of people thought it would become the end of all other instruments. That did not quite happen, of course, although the keyboard seems to have become an instrument of choice for many composers at this point in time. It’ll be interesting to see how computers might be integrated into the music community in the near future…if you think about it, home computers have only been around for a few decades, so it’s mostly uncharted territory at this point.

  14. Chris Becker

    Ryan, no disrespect here. But if you don’t actually work with electronic music and in fact prefer acoustic instrumentation, then why are you posting so many comments related to Carl’s initial questions?

    Maybe you have some questions for Carl (or me) or other composers who have worked a lot with these tools?

    I disagree with a lot of your points – but they seem to indicate a lack of experience with the medium, which isn’t such a big deal. For instance, I don’t see why you can’t address functional harmony in a solo or ensemble that includes, say, a laptop or tape…who told you you couldn’t?

    Have you heard “I Am The Walrus”? Or Carl’s music?

    Not trying to pick on you…

  15. rtanaka

    Like I said, I started writing music on the sequencer (mostly bad techno music at first), so I have some familiarity with working with computers. I did some projects involving Max/MSP, including one with a real-time animated score, and a patch that harmonized the sounds of the piano depending on how fast the right and left hands were playing in relation to one another. (With the help of another programmer who was more adept in that area than myself.)

    The most interesting thing I’ve done with the computer, though, is probably my Autonomous Fantasies that uses Stephen Malinowski’s “Music Animation Machine” to visualize MIDI data. The “piece” itself exists as a MIDI file, but different “renditions” can be generated by the user changing the MIDI drivers themselves. A lot of musicians tend to think of music in visual terms, so I thought maybe this could bridge that gap a little bit for audience members.

    The last piece in particular pointed out to me that, in terms of what’s being heard, I was mainly interested in pitch, rhythm, volume, and spatialization — all of which were squarely within the boundaries of MIDI file capabilities, so I never really bothered trying to add anything fancier than that. And in a lot of ways MIDI is actually much more efficient and flexible than recording formats, because I managed to cram 40 minutes of music and several hundred hours of work into a file less than a megabyte in size, and the piece theoretically has endless ways of being interpreted, since the MIDI drivers can be changed at the whim of the user. Adhering to a rigorous standard actually managed to open up a lot of new possibilities, ironically enough.
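    For a rough sense of why note data stays so small, here is a minimal, hypothetical sketch (in Python, assuming the third-party mido library is installed — it is not the piece described above). Each note is just a pair of tiny event messages rather than a stream of audio samples:

```python
# Hypothetical illustration of MIDI's compactness, not any actual piece.
# Assumes the third-party `mido` library (pip install mido).
import mido

mid = mido.MidiFile()
track = mido.MidiTrack()
mid.tracks.append(track)

# One quarter-note middle C: a note_on and a note_off event, a few bytes each.
track.append(mido.Message('note_on', note=60, velocity=64, time=0))
track.append(mido.Message('note_off', note=60, velocity=64, time=480))

mid.save('sketch.mid')

# By comparison, 40 minutes of raw 16-bit stereo audio at 44.1 kHz is about
# 40 * 60 * 44100 samples * 2 channels * 2 bytes, i.e. roughly 423 MB before
# compression, while 40 minutes of note events can easily fit under a megabyte.
print(40 * 60 * 44100 * 2 * 2 / 1e6, "MB of raw audio")
```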

    But the visual aspect of the piece was mostly for educational purposes and maybe a different way to “experience” the music…sonically, though, there wasn’t really any reason why the piece couldn’t have just been written as an acoustical work. So that’s why lately I’ve been working on my chops in writing for more standard instrumentations.

    I guess my question for musicians working within technologically oriented mediums would be this — what exactly does it mean to push electronic mediums “to their limits”? Noise and distortion? High volume levels? Generating new timbres? Lots of information crammed into a short duration? How do these gestures relate to the idea of being “progressive”? I suppose there’s the “colliding of the worlds” aspect that can be achieved by layering foreign sounds together, but this, too, can be done acoustically by evoking certain modal structures and utilizing certain instruments…

    Having lived in and traveled around Japan a number of times in my life, I know all too well that its technological advancements have done little to nothing to actually change its predominantly monoethnic and patriarchal society. Scientific progress does not equal social progress, and since finding out that I’m more interested in the latter, I’ve abandoned a lot of the notions that were rooted in scientific and technical approaches to music-making. (Which meant having to steer away from a lot of modernist philosophies in general.) Again, this is mostly a personal decision and I don’t expect everyone to agree with it — maybe others have had different experiences with technology…

    Sorry for the long post and shameless plug, but I thought I might have had to explain myself in more detail…

    (Oh, I’ve heard The Walrus before, but I can’t say that I have much affinity for it. I don’t mean this in a derogatory way…it’s just that I come from a different kind of background, and rock music just hasn’t ever been a significant part of my life. Haven’t had a chance to listen to Carl’s music though, maybe I will!)

  16. carlstone

    Please do! Lots of free goodies can be found at http://www.sukothai.com/v.2/CSMusic.html, as well as the usual music commerce sites.


    what exactly does it mean to push electronic mediums “to their limits”?

    Usually the cliche is “electronic media have no limits.” But trying to answer your question, well, I don’t know what it means, but to the extent there might be an answer out there, I don’t think it lies within the choices you listed.

  17. rtanaka

    Usually the cliche is “electronic media have no limits.” But trying to answer your question, well, I don’t know what it means, but to the extent there might be an answer out there, I don’t think it lies within the choices you listed.

    I don’t think it does either, which is probably why I decided that the medium probably isn’t for me. I’m more interested in the linguistic aspects of music, and my choice to use MIDI clearly reflects this, I think.

    I do have a lot of respect for people who’re willing to work with such a relatively new medium, however, and I’ve met a few laptop players who rivaled traditional instrumentalists in terms of their sensitivity and awareness. These were all in improvisational contexts, however — another problem you run into with electronic mediums (other than the usual tech-related nightmares) is that it’s fairly difficult to do live performances where a good amount of precision is needed. Turns out that I mostly want certain notes played at certain times, in certain rhythms, in certain relations with other instruments, with a certain amount of flexibility…and well, this can also be done with electronic musicians, but you gotta admit that there aren’t many of them around who’re going to be willing to practice their instrument in such a way.

    I’ve listened to some of your tracks Carl and it’s pretty cool stuff, by the way. Love the rhythms in there. :)

  18. dalgas

    Ryan wrote:

    what exactly does it mean to push electronic mediums “to their limits”?

    Carl replied:

    Usually the cliche is “electronic media have no limits.” But trying to answer your question, well, I don’t know what it means, but to the extent there might be an answer out there, I don’t think it lies within the choices you listed.

    There’s the simple limit that gets expanded with each new technology. There’s also the supposed “limit” of technology that’s deemed just too retro, but plenty of folks from Yoav Gal to the “speak & spellers” tell a different story.

    The biggest limit I see is not what composers have available, but what they make with whatever they do have. Part of the problem is the same lack of the artistic vision and skill we prize no matter the “instruments.” But another part is superficial understanding; there are only so many dimensions of great art possible when there’s no intimate understanding, command, and nuance of the medium.

    The limits of any particular technological “bubble” aren’t just at the edge; they’re also scattered all over inside, all kinds of them still unrealized.

    Steve Layton

  19. rtanaka

    The biggest limit I see is not what composers have available, but what they make with whatever they do have.

    That’s basically it, I think. I’m genuinely interested in what sorts of things “new” instruments have to offer, and since computers are obviously here to stay, what artists can make of them will be something of importance in the future, I think.

    But it’s a tool, after all, so I think it’s better to have a very sober attitude toward the instrument itself. Computer scientists would say that software capabilities are always strictly limited by hardware capabilities, so the abilities of computers are by no means anywhere near infinite, no matter what sorts of tasks they may be given.

    Most of the good performers I’ve met tend to have a certain level of contempt for their instrument in a lot of ways — they know that the instrument is just an object, and having dealt with the object for such a long period of time, they’re intimately aware of its limitations. They’ll maintain and take care of it to the best of their ability, of course, but they know that it’s always going to filter out most of what’s in their head by the time it reaches anybody’s ear, so the instrument itself isn’t what’s to be revered. Brass instruments are kind of crude to begin with, but all I can see my horn as nowadays is a piece of piping that just so happened to gain some social legitimacy.

