Insights on Blindness and Composing
It has taken me decades to be able to say that I’m a composer. When I was young, I used to say that I was a pianist, then a jazz pianist, then a songwriter. I only settled on considering myself a composer a few short years ago. I know that many of you who are reading this have taken similar journeys. What makes mine unusual is the role that a lifetime of physical disability has played in that journey. It is through the lens of that disability that I wish to present my story to all of you.
I was born legally blind. While I could see colors, shapes, and movement, I could not see in depth. I couldn’t read or write in print without great difficulty and I lived in a world that lacked the clarity of image that the fully sighted take for granted. For much of my life, I thought that the greatest impediment that poor vision presented was the inability to drive a car in the suburbs. I had little or no real appreciation for the constraints that blindness enforced upon my career choices, my creative development, and my entire musical evolution because, having never seen the world as the fully sighted do, I had no way of imagining any other world than the one in which I lived.
I began playing the piano very young. From my earliest days, I found that I had a singular aptitude for memorizing all kinds of music ranging from classical pieces to the hits of the day. I realized that a classical education was important, but I couldn’t live in a world where one missed note could compromise a career. I also couldn’t imagine myself being happy if all I did was interpret the original ideas of others. In fact, I was happiest when I was creating my own stuff. Unfortunately, I couldn’t write anything down and I forgot almost all my ideas right after playing them. Also, while I found that I could easily reduce complex orchestral textures to one piano and two hands, I just as quickly discovered that I couldn’t do this trick in reverse. I couldn’t expand what I heard in my head beyond what one piano and two hands could do. Again, without some means of notating or at least preserving what I was hearing and thinking, if I couldn’t play it myself, I was out of luck.
When I was seven, I met the great jazz pianist George Shearing. Shearing became a lifelong friend and provided me with a critical way forward for the creative person I was, even then. It was clear to me that jazz, with its emphasis on the ability to improvise over small, repeatable, and easily memorizable forms would be the means by which I could realize my ambition to become a creative musician and not just an interpretive one. With George’s guidance, I began to take greater and greater joy in the spontaneous creativity of the improviser’s art. I was experiencing instant composition with every solo and that was good enough, or so it seemed, for a while.
When I was eleven, I auditioned for the man who would become my primary classical instructor, Mischa Kottler. Kottler was a Ukrainian classical pianist and teacher who had taught several competition-caliber classical performers including Ruth Meckler. But within some twenty minutes of listening to me improvise at the piano, he determined that he was teaching a composer rather than a player. As a result, he set about making sure that I had a thorough grounding in all the major style periods in order to facilitate the education of the composer he was sure I would become. As I look back on it, I’m still amazed that he could figure out in mere minutes what it would take me decades to fully understand.
I wrote my first truly top-quality song in 1975. I was seventeen and I was finally beginning to get a handle on how melodic shape and form worked in the genre of the popular song. When I finished this song, a light seemed to go on in my head that said, “You know how to do this at the professional level.” Unfortunately, while I would write a few more songs of that quality in succeeding years, another two decades would pass before I could get myself to the point where I could turn the tap on and off and create on command. Once again, the problem was that I had no means of committing my ideas to any durable medium. I did have a cassette recorder and that prevented me from losing absolutely everything. But a cassette machine did not make it possible for me to develop my skills the way they needed to be developed.
When I started college as a theory major at the University of Michigan, I took a class in composition for non-majors and was given the services of a doctoral candidate as transcriber for my final project. The man was a jazz trumpet player and he patiently tried to take dictation of my multi-part score. Unfortunately, we quickly discovered a basic problem. After I dictated, say, thirty-two bars of a flute part, the transcriber would have to ask whether the next instrument was continuing from where the flute had left off, or being brought up even with the flute music he’d just taken down. The need to keep orienting the transcriber to his place in both vertical and horizontal score time destroyed the creative flow. It also compromised the quality of my ideas because I was more focused on remembering where everything was, than where it might eventually go. Needless to say, this was not a glowing success.
The next year I transferred to New England Conservatory as a jazz piano major. There, I discovered a new compositional vehicle, electronic music. I took two classes with Robert Ceely and spent hours in the electronic music studio splicing together segments of quarter-inch tape containing material I’d improvised on the Arp 2600 they had, interspersed with material I’d improvised on a Steinway at a local recording studio. The class didn’t involve notation. I had an “Editall” block, razor blades, a grease pencil, and quarter-inch tape. The editing block is a tactile experience. The tape lies in a groove that you can feel, and I could feel where the sections of tape could be joined together. Also, when you move the tape slowly across the heads, you can hear where you want to cut. The result was a piece I called “Concerto for Piano in Tape,” because all the piano parts were surrounded by the electronic ones.
The piece was well received when it was played and I found myself feeling proud of what I’d done. For the first time, I was able to realize a compositional idea by myself and have it come out right. Unfortunately, this was still new and expensive technology and my good feelings were short lived. After I left school, I couldn’t afford the gear.
So I went back to attempting to notate music by hand. Starting in college, I tried to make the shapes of notes with a pen or pencil. By my later college years, I was simply at the point where I had no choice but to do my best with the poor penmanship of which I was capable. My vision had not changed. I just decided to struggle with it.
My next challenge was to prepare a big band arrangement. However, I realized that my only hope for preparing the assignment was to skip writing a score and cut immediately to preparing the horn parts one at a time. This was something like playing chess without a board. I did have the advantage of working in a tonal framework and in a small repeatable form. So, I wrote the lead trumpet, lead alto, and bass trombone parts first and then filled in the harmonies using the logic that the outer lines provided. I played the piano part myself and explained what I wanted to the drummer so that I could conserve time and minimize eyestrain. While the NEC big band was filled with excellent sight readers, my handwriting was so bad that the music I heard was a crushing disappointment. I knew it could sound better than what I was hearing and there had to be a better way.
The world began to change slowly when I became acquainted with the first IBM PC with CGA graphics in 1981. I’ll never forget standing before the computer as it drew and filled in a circle. Instantly, I understood that if the machine could do that, it could draw and fill in a notehead. If that was possible, we were on our way. I reached out to some programmers at IBM to see if they’d be interested in developing a music notation program for their new computer. Unfortunately, I was told that, in their opinion, there was no money in it. So, I waited until music notation software came on the market.
In 1986, I bought my first PC and my first notation package. With the aid of telescopic glasses, which I had acquired for the purpose of being able to read the computer screen, I began to write a book of chord charts that bass players could use when I had jazz trio gigs. At the time, I felt great because, for the first time, I was hearing ideas that were more fully fleshed out than I’d ever heard before. But eventually, I would find myself dissatisfied. It was one of those, “the more you have, the more you want,” situations and, like any addict, once hooked, I continually wanted more.
While the 1980s brought us MIDI, the 1990s would eventually bring us two pivotal innovations. The first was the graphical user interface. The second was text-to-speech conversion software. GUIs would eventually bring us a new generation of composing tools like Sonar and Finale, while text-to-speech software would provide the missing link that made it possible for the blind to take advantage of this new world of creative possibilities: a totally blind user could know exactly what was on the computer screen without having to actually look at it.
I first connected Cakewalk, the precursor to Sonar, to a MIDI keyboard in 1994. Not since that electronic composition class at NEC had I experienced this kind of instant gratification and creative freedom. There was no more frustration with listening to live players get the accidentals wrong because of sloppy pencil calligraphy, no more searching in vain for players willing to struggle with my illegible parts. Similarly, there was no more being limited to hearing just piano, or piano and bass. But most importantly, I would soon discover that I didn’t have to rely on blowing over changes to lengthen my ideas. That realization meant that jazz was not the only creative game in town and that new musical forms were now in play for the first time. With these new tools, I could create something that was more completely structured from beginning to end than ever before. Not only that, I could hear the beginnings of the orchestra that I had previously only imagined, because the quality of synthetic timbres began to more accurately reflect the sounds of the symphony orchestra. And, as the quality of sampled instruments improved, I found that the quality of my ideas did likewise. I found myself pretending that I was a violinist, or a clarinetist, or whatever I needed to be to hear that next idea. This ability to morph into the identities of various acoustic instruments other than piano has made a critical difference in the shape, form, and content of my music.
Technology was not the only ally in my quest for compositional liberation. Beginning in 1995, I started a second career creating music in the New Age genre under the name Kevin Kern. The music I created as Kern, while intentionally simple for commercial reasons, provided me with an indispensable opportunity to create a substantial quantity of music on demand. While I still had the benefit of working within the familiar confines of the song form, meeting this challenge over the course of eight CDs helped me to develop a set of musical muscles that hadn’t been fully exercised before. Utilizing my growing arsenal of MIDI instruments and my specially adapted copy of Sonar, I found myself now able to do what I knew my peers were already doing, namely, creating music when it was needed, not just when the inspiration struck.
In 2003, I became aware of Sibelius. Unlike Finale, Sibelius, with its exhaustive supply of keyboard commands, seemed designed for users, particularly blind and visually impaired users, who didn’t always use a mouse for everything. I contacted David Pinto, a gifted software developer who had previously made Cakewalk’s Sonar accessible to the blind user, and asked him if there was anything he could do to make Sibelius accessible for the blind composer. He took a demo version of Sibelius and loaded it on his laptop and began to work. Within 45 minutes, he had Sibelius talking. In that instant, I knew my world would never be the same. All the tools were finally in place. Now, it was up to me.
With my new version of Sibelius installed on my computer, I decided that the first test of my newfound possibilities would involve preparing my own parts for an upcoming series of concerts I was playing in Korea as Kern. The shows would feature songs that had become popular in Korea and throughout the world through their inclusion in a television miniseries called Autumn in my Heart. On the original CD from 1998, the songs had powerfully romantic solos for clarinet, violin, and cello which were played by live players using parts that were prepared by an L.A. copyist named Ron Hess. I played synth beds to compensate for the string orchestra for which I could not write. The challenge was to notate not only the original solos, but also the synth parts so that all I had to do was play the piano parts on stage. I also had to make sure that the parts were crystal clear in the event that a language barrier prevented me from explaining what I wanted in the limited rehearsals I would have.
When I arrived, I found that the violinist was a Queen Elizabeth Prize winner. The woman reading the synth book was an accomplished jazz pianist who had studied with Kenny Barron in New York, and the cellist and clarinetist were of similar musical pedigree. Since the music was not technically challenging, it was clear that the quality of the parts would make the difference between a successful performance and a failure. As I sat at the piano and cued the introduction, I felt expectant, not nervous. But nothing prepared me for the electric jolt I felt when I heard these amazing musicians bring my music to life with all the emotion and musical eloquence I could have asked for. While I’ve had this experience several times since, the raw emotional power of it is always the same for me. Hearing my own music played back in the moment like that is something I never thought I’d live to experience, and I don’t think that feeling will ever change.
In 2005, I attended a concert in San Francisco featuring a new work by jazz pianist/composer Fred Hersch. Hersch had been one of my instructors at NEC and he was and is definitely one of my heroes because his jazz playing shows a world where lines almost appear to move in four directions at once. The performance I heard presented Fred’s setting of sections of Walt Whitman’s poetry, featuring an ensemble of some twelve players including two singers. As I listened to the performance, I was amazed at the number of blowing sections there were in the piece. I’m not saying they didn’t work. But I found myself wondering why a man who could see well enough to write down anything he could hear would choose to do that. As I left the theater, it was as if a bolt of lightning hit me—yes, I mean this—right between the eyes. It was clearly not the choice I would have made if I had the resources to assemble all those players, but Hersch chose to write this because he wanted to! That’s when it hit me that my entire creative experience was driven more by what I had to do than by what I wished I could do.
Then in 2008, I took the opportunity provided by the American Composers Forum to audit the various seminars presented by the Minnesota Orchestra Composers Institute. That was the clincher for me. Attending the Composers Institute brought me back into contact with modern art music for the first time since my college days almost thirty years before. The critical difference was that as I listened to the constructive criticisms provided to the composers whose works were being performed by the Minnesota Orchestra, along with listening to the rehearsals and the performance of the completed works, I realized for the first time that I had the potential to do what these people were doing. The Composers Institute also made it clear that I needed to expand the forms in which I composed to include vehicles beyond the song form in which I’d made a living for decades.
That realization has inspired me to actually prepare written scores and submit them for inclusion in various concert situations outside of jazz. With each new effort, the quality of my notation improves. First, it’s far easier to read than any handwriting could ever be. Second, I’m learning more and more each day about notational conventions that I never had to deal with when I was memorizing music from recordings. And as my notation has improved, my confidence has grown, and I look forward to creating ever more expansive musical works.
Today, through a combination of top quality soft synths, Sonar, Sibelius, and a screen reader called JAWS, I’m able to create in a diverse collection of genres governed only by my own musical taste and the professional opportunities that present themselves.
Kevin Gibbs is a pianist and composer based in Minneapolis, Minnesota. His concert music compositions have been performed by groups including the contemporary music ensemble Zeitgeist and the Dallas Wind Symphony. His jazz pieces have been recorded by pianist George Shearing and vocalist Meredith d’Ambrosio. In addition to his growing output as a jazz and classical composer, Kevin also enjoys a parallel career recording and performing in the New Age genre under the pseudonym Kevin Kern. Beginning with the release of his debut album In the Enchanted Garden in 1996, a series of eight CDs containing over eighty original compositions has spawned an international following and an active performing schedule in the Far East. Kevin has also prepared two books of piano solo reductions of music from his first and seventh Kern CDs and additional piano reductions of individual pieces not contained in these books are available for digital download through Musicnotes.com.