A survey of the field

 

As part of the project “New creative possibilities through improvisational use of compositional techniques – a new computer instrument for the performing musician”

 

Øyvind Brandtsegg, research fellow.

NTNU, Trondheim.

Program for Research Fellowships in the Arts

 

A survey of the field

Introduction

Background

Improvisation

Instruments

About composition versus improvisation

The relationship between composition and improvisation in the jazz tradition

Composition

Computer software that has influenced the composition process

About sound synthesis, choice of techniques in the scholarship work

Other artists’ work

Types of algorithmic composition

Composers

Improvisers

Theoretical inspirations

Closing remarks

References

 

Introduction

The project explores the artistic potential found in the crossover between real-time composition and improvisation; to this end, a computer-based instrument that facilitates the improvisational exploitation of composition techniques is being developed. In this document, I attempt to draw lines to other work to which the scholarship project may be related.

 

The scope of this survey is rather wide, as it deals with artistic, theoretical and technical issues that are relevant to my scholarship project. My primary concern is to present examples from the areas touched upon, rather than to provide an exhaustive and complete review. How the concrete elements of the scholarship work relate to the examples appears implicitly from the context in which the examples are used in the survey. The wide scope of the survey is partly explained by the fact that the work is interdisciplinary by nature, and that the artistic impulses are collected from many fields. In part, the scope is also wide because I myself do not have the distance required to position the work clearly within an aesthetic or stylistic trend. The Program for Research Fellowships in the Arts is assumed to have a utility value beyond the concrete projects that are carried out; this is also considered a good reason to draw parallels on a broader basis.

This document aims at placing my project into the context of other work it relates to. To put an artistic expression in a historic context is the field of expertise of the arts critic or theorist, not the artist. It is natural that the artist accounts for his sources of inspiration and the historic trends that have influenced the artistic process and its result. At the same time, it is not a matter of course that the artist may or should be required to present scientifically verified documentation of any and all sources. This is in part explained by the fact that an analysis of the work and its context requires a certain distance, something an art historian or musicologist can be assumed to have. Partly, it is also because the artist often works by internalizing knowledge, a process in which ideas have been through a long and diverse reworking, with the result that the source of an idea can no longer be determined unambiguously. Bjørn Rasmussen focuses on some of these issues in his essay “Kunst er viten” (Art is knowledge) (Rasmussen 2005).

One of the implicit objectives of my scholarship project is to find new approaches to improvisation, and the survey therefore has its starting point in an improvisational perspective.

 

Background

Improvisation

Generally, one may assert that the early history of Western art music contains many examples of a combination of improvisation and composition. The beginnings of polyphony in the Middle Ages are assumed to be the result of partly improvised harmony voices; later, in the baroque age, we find the sketched notation of the basso continuo. This type of notation not only opens for improvisation, it requires that the performer adds details to the music. Later examples are found in the 19th-century concerto form, in which the soloist performs improvised cadenzas as a culmination of his equilibristic efforts. A lot of folk music contains strong elements of improvisation, for example the flamenco of Spain as well as the music of India. The music of India has different philosophical implications in the Northern and Southern parts of the country, and this also influences how the musicians relate to the concept of improvisation. Basically, this survey relates to improvisation as it is found in the genre of jazz, even if the music produced in my project does not necessarily fall under the jazz genre.

Improvisation is, to a great extent, an orally taught and practically based tradition, and in most circumstances based on non-academic knowledge. Derek Bailey describes in his book “Improvisation: its Nature and Practice in Music” (Bailey 1992) the difficulties in describing and evaluating improvisation.

”For the musical theorist there seems to be no description or evaluation without technical analysis which in turn relies on transcription and dissection. For the description – or evaluation – of improvisation, formal, technical analysis is useless.

Firstly, it is not possible to transcribe improvisation.” … ”Even when man’s senses are supplemented by such devices as the oscillator and the frequency analyzer the result is only a more exact picture of the irrelevancies.” … ”When the object of examination is improvisation, transcription, whatever its accuracy, serves only as a misrepresentation.

The improvisors I spoke to, in almost all cases, did not find any sort of technical description adequate.” (Bailey 1992, p. 15)

Bailey’s attitude to describing improvisation, and to commenting on it, is rather categorical. However, I must admit that, to a great extent, I agree with his views, based on personal experience from many years of practice. Further, his clear emphasis may provide perspective on, and a sound skepticism towards, the truth content of any documentation, analysis or review of improvisation. This also includes the present survey.

”I couldn’t imagine a meaningful consideration of improvisation from anything other than a personal point of view. For there is no general or widely held theory of improvisation and I would have thought it self-evident that improvisation has no existence outside of its practice.” (Bailey 1992, introduction chapter)

The resistance against arresting the moment by documenting and analyzing improvisation must, in my opinion, not be taken as support for the myth that improvisation is the art of the moment, performed without preparation and relating to nothing but itself. One would rather say that an improvising musician has spent his whole life preparing for a given performance, and that this performance relates to everything the musician has ever listened to. The consequence is rather that the musician can best learn about improvisation by experiencing the phenomenon in practice, optimally through his own practice.

 

During the 1950s and 60s, we saw a development within jazz in which several approaches to European art music were investigated. On a general level, one may say that the 60s brought about important changes in jazz, to the extent that the genre was split into a number of subgenres. Naturally, the changes that became evident in the 60s had their predecessors in activity some years earlier, from the end of the 40s onward. The following summary refers mainly to Michael Budds’ “Jazz in the Sixties” (Budds 1990). The genre “Cool Jazz”, represented by for example Lennie Tristano and Lee Konitz, employed a more intellectual approach to the music than was the case in, for example, the bebop genre. The tempo was reduced somewhat and the focus was on long melodic lines, whereas the harmony had a certain impressionistic character. Further, an attempt was made to bring jazz and European art music together within the genre “Third Stream”. Important performers and composers within this genre were John Lewis, Gunther Schuller and Jimmy Giuffre. Lewis’ ensemble “Modern Jazz Quartet” is one of the most profiled groups of this genre. They tried to use classical forms, for example the canon, fugue, rondo, and even the sonata. Further, imitative counterpoint was used, and more emphasis was put on the development of motifs than had been the case in previous jazz genres. The experiments in Third Stream may seem like a compromise between the characteristics of the musical styles it mixed, and it resulted in a less flexible framework for improvisation. The result became less spontaneous. The genre made use of alternating sections, in which composed parts were followed by looser parts with greater opportunities for improvisation. Serial techniques were also employed in compositions; an early example of this is Leonard Feather’s “Twelve Tone Blues” from 1959.
Here, the serial techniques are not used for improvisation, but the composer sketches a whole tone scale as a possible framework. Gunther Schuller’s “Conversation” from 1960 is a serially composed string quartet which, during performance, is brought into dialogue with collective improvisation by the Modern Jazz Quartet. Further, we find Schuller’s “Abstraction” from 1961, in which serially composed themes form the background for “atonal” improvisation by Ornette Coleman. In 1964, John Benson Brooks described a “system” for serial improvisation; this system tried to describe the degree of freedom an improviser may have in the intuitive use of a tone series. Brooks states that it is permissible to repeat parts of the series, or to jump to a randomly selected note in the series, as long as the series is followed strictly until the next such repetition or jump. The system may have served just as much to create confusion as to provide practical help in the exploration of serial improvisation. All in all, this technique has not been used much, with sporadic exceptions. “The Music Improvisation Company”, which included Derek Bailey and Evan Parker, to some extent employed serial techniques in improvisation, whereas a somewhat better known example is Bill Evans’ “Twelve Tone Tune” (TTT) from 1971. Evans constructs a harmonic basis that to a certain extent relates to the twelve tone melody (the composition’s main theme), and the improvisations relate more closely to this harmonic progression than to the twelve tone series. Improvisation in a somewhat stricter serial style is found in the compositions “Madrigal” and “Rosebush” by the group “Krøyt” (Brandtsegg/Asbjørnsen/Dahl 1997-1), and in “Sjakktrekk for Trio” by the group BEN-L (Brandtsegg 1997-2). In these examples, fragmentary repetition of the tone series occurs, in a way theoretically compliant with Brooks’ system of 1964.
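Brooks’ rules, as paraphrased above, are concrete enough to be sketched in code. The following is a minimal illustration, not a reconstruction of Brooks’ own notation: the series is followed strictly, but with some probability the position jumps to a randomly selected note in the series (which also covers repetition of earlier segments). The probability value and the pitch-class representation are my own assumptions.

```python
import random

def serial_improvisation(series, length, jump_probability=0.2, rng=None):
    """Generate 'length' notes from a tone series following Brooks-style rules:
    follow the series strictly, but occasionally jump to a randomly selected
    position in the series, after which strict order resumes."""
    rng = rng or random.Random()
    notes = []
    i = 0
    while len(notes) < length:
        notes.append(series[i % len(series)])
        if rng.random() < jump_probability:
            i = rng.randrange(len(series))  # jump (or repeat) within the series
        else:
            i += 1                          # otherwise follow the series strictly
    return notes

# Example: a twelve-tone row given as pitch classes 0-11
row = [0, 11, 7, 8, 3, 1, 2, 10, 6, 5, 4, 9]
phrase = serial_improvisation(row, 24, jump_probability=0.15)
```

With `jump_probability=0.0` the function simply cycles through the row, i.e. strict serial order; raising the probability moves the output towards the freer, fragmentary repetition described above.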

 

A general trend in the 60s may be said to be a movement away from the traditional concepts of form within jazz, where the archetype is the “theme with variations” and the harmonic progressions are repeated during the improvisations. One of the methods used to get away from this strict form was to employ modal scales and to discard harmonic progressions altogether. Miles Davis’ album “Kind of Blue” represents the best known example of modal jazz. The composition “So What” from this album employs a classical (classical even within jazz) ABA form, in which the A parts relate to D Dorian, whereas the B part is played in E flat Dorian. The transposition a semitone up to E flat in the B section represents the classical change of mood usually found in the middle section of a number of jazz standards. Another composition from the same album, “Flamenco Sketches”, is simply a structured sequence of modal scales. The composition does not have a fixed theme, and the duration of each harmonic level is not predetermined; the soloist gives a signal when he wants to go on to the next level. It may be remarked that the freedom the open form provides is not fully exploited, as the musicians still use 4- or 8-bar periods as the duration of each harmonic level. Later, Miles Davis used modal techniques in the rock-related jazz of the late 60s, as heard on the album “Bitches Brew” from 1969. John Coltrane also took the modal technique further, for example in “My Favourite Things” from 1960. He employed notes foreign to the prevailing scale at musically important places in the phrases, thus erasing the listener’s perception of the underlying scale. He developed modal playing even further, exceeding the limits of what it in principle could contain, and in this respect disintegrated the modal concept. Another example of the disintegration of tonality is Herbie Hancock’s bi-tonal approach in “Maiden Voyage”.
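The modal material mentioned above can be made concrete: a Dorian scale is obtained by applying a fixed pattern of whole and half steps to a root. The following is a minimal sketch, with pitch classes numbered 0–11 from C; the note spellings are a simplification (E flat rather than D sharp, etc.) and not taken from any of the recordings discussed.

```python
# Dorian mode as a pattern of semitone steps from the root: 2-1-2-2-2-1-2
DORIAN_STEPS = [2, 1, 2, 2, 2, 1, 2]
NOTE_NAMES = ['C', 'C#', 'D', 'Eb', 'E', 'F', 'F#', 'G', 'Ab', 'A', 'Bb', 'B']

def dorian(root_pc):
    """Return the seven pitch classes of the Dorian mode on the given root (0 = C)."""
    pcs, pc = [root_pc % 12], root_pc % 12
    for step in DORIAN_STEPS[:-1]:   # the last step returns to the octave
        pc = (pc + step) % 12
        pcs.append(pc)
    return pcs

d_dorian = dorian(2)    # the A sections of "So What": D E F G A B C
eb_dorian = dorian(3)   # the B section, a semitone higher
```

The semitone relation between the two scales is then visible directly: every pitch class of the B-section scale is one step above the corresponding pitch class of the A-section scale.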
Further, one may mention the free collective improvisations represented by Ornette Coleman, Archie Shepp and others; here we can hear melodic lines containing tonal fragments, each relating to different tonal centers. In this context, we may speak of a kind of pan-tonality. The movement away from the traditional concepts of form in jazz thus developed both in the direction of experiments with strictly composed forms and, simultaneously, towards totally free, i.e. intuitively composed, forms. Ornette Coleman reportedly once said that he bases his improvisations more on his “impression” of a piece of music than on harmonic or metric details.

 

Another concept of form that became widely used in the 60s was the bass ostinato, in which a short melodic theme is repeated. Somewhat more complex forms were constructed by joining several ostinatos and varying between them in the different parts of the form. This technique came from rock, in which it forms a well-known basis for most compositions. The use of this type of ostinato in jazz and rock may be traced directly back to the baroque use of bass figures.

 

Instruments

The use of electric instruments within jazz increased considerably in the 60s. Perhaps the most prominent example is Miles Davis’ band on records like “Bitches Brew” and “In a Silent Way”, on which the characteristic sound of the electric piano (Herbie Hancock, Chick Corea, Joe Zawinul) with a mild distortion is prominent. Electric amplification of acoustic instruments was also used, partly for the practical purpose of amplifying the volume, but also to achieve variations in tonal quality. Effect units such as phaser, octave divider, varitone, distortion and wah wah were used. An important tool in the development of improvisation techniques was the echo machine, for example the Echoplex. This created the possibility of playing a canon with oneself, as a strict form of counterpoint. The improvisational use of echo effects can be seen as a direct continuation of the guitarist Les Paul’s experiments with Ampex tape machines in 1949 (http://music.columbia.edu/cmc/courses/g6630/recordproduction1.html). This use of echo and tape loops was developed further in the 70s by Robert Fripp and Brian Eno in the form of “frippertronics” (LaFosse 2004, Peters 2004). Fripp also represented an approach combining popular music with European art music and improvisation through his work with the group King Crimson. Other prominent groups from the 70s that combine rock with inspiration from the art music tradition are Yes, Genesis, Rush, Pink Floyd and U.K.
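The canon effect obtained with an echo machine can be illustrated schematically: a melody is mixed with one or more delayed, attenuated copies of itself, so that a single played line becomes a canon at the unison. The event representation (onset time, pitch, amplitude) and the gain factor below are assumptions made for the sake of illustration; this is not a model of any specific device such as the Echoplex.

```python
def echo_canon(events, delay, repeats=1, gain=0.7):
    """Mix a melody with delayed copies of itself, as a tape echo would.
    'events' are (onset_time_sec, midi_pitch, amplitude) tuples; each delayed
    copy is attenuated by 'gain' per repeat, turning one line into a canon."""
    out = list(events)
    for n in range(1, repeats + 1):
        for onset, pitch, amp in events:
            out.append((onset + n * delay, pitch, amp * gain ** n))
    return sorted(out)  # merge voices into one time-ordered stream

# A short phrase, echoed twice at a two-second delay
melody = [(0.0, 62, 1.0), (0.5, 64, 1.0), (1.0, 65, 1.0)]
canon = echo_canon(melody, delay=2.0, repeats=2)
```

Choosing the delay time relative to the phrase length determines whether the result is heard as a strict canon (delay longer than the phrase) or as overlapping counterpoint with oneself.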

 

Purely electric instruments also became more widespread late in the 60s. Previously, the Hammond organ had been used in 1950s rhythm & blues, but the Hammond organ can in many respects be considered more a continuation of the church organ than a new kind of instrument. The electronic synthesizer, however, provided opportunities that may classify it as a new instrument. Robert Moog, Don Buchla and Tonus Inc. (the Arp synthesizers) are important instrument makers who spearheaded the development of the new instrument. An important reason for the previously limited use of the synthesizer is that the instruments were made for use in studios, typically academically based electronic music studios or radio studios. The introduction of a traditional keyboard as found on a piano or an organ, as well as new portable versions of the instruments, made them attractive for “live” use. Among the early outstanding performers on the instrument we must mention Joe Zawinul and Jan Hammer. Zawinul is known for his work in the group “Weather Report”, exemplified by the album “I Sing the Body Electric” (1971). One of Jan Hammer’s early engagements was as a contributor to the “Mahavishnu Orchestra”, exemplified by the album “Birds of Fire” (1973).

 

The development of improvisational craftsmanship through the history of jazz has taken place more in the night clubs than within the universities. This is also true for genres such as electronica and techno; these are used and developed to a greater extent in clubs than in academia. Both techno and jazz contain subgenres that may be considered more as art than as commercial music, in that certain performers explore the limits of the genre and cross it with other genres. In the last few years, we have also seen laptop performers emerge (exemplified by Kim Cascone, Boris Hagbart, Lasse Marhaug, Jeff Carey) who perform their music on an instrument mainly based on a portable computer. To talk about a laptop as a musical instrument may, however, give rise to misunderstandings or confusion, because the use differs so much from performer to performer. The degree of freedom during the performance may vary within such a vast scope that one may speak of totally different instruments. In the simplest versions, the computer is used as a playback machine with quick access to a great selection of sound files. This is not unlike the functionality a disc jockey has when using vinyl records. In more complex situations, the performer has access to a great number of parameters governing the musical expression: tonal variation, sound synthesis, automated composition techniques, arrangements, instrumentation etc.

 

An example of Frank Zappa’s use of self-produced instruments for the mixing of sound samples is provided by this story told by John Kilgore (jkilgore@masque-sound.com):

“Back in the dark ages when I was a young whippersnapper of an apprentice engineer, I worked in a studio (Apostolic) where Frank Zappa made four albums (We're Only in it for the Money, Lumpy Gravy, Reuben and the Jets and Uncle Meat). When Frank was building ...Money, we used this thing called the Apostolic Blurch Injector. Frank would fill up the Scully 12 track with snippets of his old albums (varispeeded, of course), interviews with guys who were trying to get him to drop acid (Frank's only vices were Coffee, Kools and CocaCola), chopped up snippets of stuff the censors wouldn't let him use (no kidding - and this was 1968) and mics planted to catch what the cops said when they came to bust us in the middle of the night cause we were keeping the neighbors awake. All of this would be mixed down to a single track and put on a new fresh 12 track tape which he would fill up with these collage tracks. The Blurch injector was a keyboard made up of twelve switches which were patched in line between the 12 track outputs and the console. Then he would play the 12 track, which he called the BROWN NOISE master, and wail away on the keyboard. This is how he made, in part, Nasal Retentive Calliope Music and other stuff of that ilk.”

 

About composition versus improvisation

A common perception about composing is that one may take advantage of working in non-real time, and thus work with structures and ideas in another manner than is possible in improvisation. In my opinion, this view is based on a misunderstanding of what improvisation means in practice. The lexical meaning of the word improvisation is “something that is not planned” (http://www.answers.com/topic/improvisation), whereas intuitive means something that spontaneously emanates from a natural tendency, something that emanates from intuition, in contrast to something that is analytically thought through (http://www.answers.com/topic/intuitive). This is in contrast to the use of the concept of improvisation in a musical context. In musical practice, improvisation is most often exercised within frames for which it has been thoroughly prepared. This view is also reflected in Bjørn Alterhaug’s “Improvisation on a triple theme” (Alterhaug 2004). The preparation may be said to consist in the performer internalizing knowledge related to the genre, as well as idiomatic characteristics that apply to the musical situation at hand. Paradoxically, this also applies to so-called “free improvisation”, a genre that developed in the 1960s against the background of free jazz and European art music. Free jazz (http://en.wikipedia.org/wiki/Free_jazz) is characterized by a reduced emphasis on the formal frames of composition compared to the previous genres of jazz. The genre was developed in the 1950s and 60s by Ornette Coleman, Cecil Taylor and Paul Bley, to mention a few. An obvious forerunner of free jazz is Lennie Tristano’s “free form”, documented by the recordings “Intuition” and “Digression” in 1949 (Tristano 1949). Free form is described in Tristano’s own words as:

”…playing without a fixed chord progression; without a time signature; without a specified tempo. I had been working with my men in this context for several years so that the music which resulted was not haphazard or hit and miss.”

The difference between free jazz and free improvisation may be said to be that free jazz still relates to idiomatic characteristics of jazz, whereas free improvisation, to a greater extent, tries to free itself from the conventions of previous genres. Important representatives of free improvisation are Derek Bailey, Evan Parker, Peter Brötzmann and Elliott Sharp. Derek Bailey has suggested the term “non-idiomatic improvisation” as an alternative name for the genre. Both free jazz and free improvisation may be said to have developed genre idioms, with expectations that are fulfilled or broken. All performance of music may be said to be a play with expectations: creating expectations in the audience, and then fulfilling them or surprising. The improviser is, on the basis of his musical competence, well prepared in a general sense. The development of this competence takes place when he plays, but also, to the highest degree, through reflection in the time when he does not play. This is analogous to the reflection the composer exercises through non-linear work with a piece of music. The improviser reworks and restructures his ideas by improving his improvisation in the next performance. This reflection may concern improvisation related to a specific piece of music, a musical situation, or more general or basic characteristics. Tor Dybo (1999) compares improvisation to driving a car, in the sense that internalized knowledge is used to make intuitive choices in situations in which the surroundings and the assumptions are in continuous change. The improviser is trained in recognizing musical scenarios, or situations, and the reflection when not playing serves to build up a repertoire of solutions for dealing with these situations.
Generally speaking, it is difficult to find musical situations that may be composed but not improvised, with the possible exception of very complex mathematical relations within the musical structure. This depends, to a high degree, on the specific training the improviser has and the technical tools at his disposal. Improvisation is by all means prepared.

 

The use of the term improvisation is context dependent. In the context of the scholarship work, improvisation is used in a jazz-related sense, even if the resulting music does not necessarily fall under the jazz genre. An ordinary approach to improvisation is through preparation; the performer prepares for musical situations that make up the frames for later improvisation. In this context, the concept “intuitive” is used about decisions the performer takes at the spur of the moment, i.e. in the performance situation. The improvisational context requires that the performer makes a great many decisions per second, for example the selection of individual pitches and rhythms. The performer does not have time for in-depth reflection in connection with his choices. In order to obtain sufficient speed in making the choices the improviser faces, it is necessary to learn a technique in which many of these choices are made on a subconscious level. This is what is referred to as the “intuitive” musical choices in improvisation. George Lewis, in his article “Improvisational Music after 1950: Afrological and Eurological Perspectives” (Lewis 1996), describes improvisation as a phenomenon in Afro-American and Euro-American music of the modern age. Lewis also describes methods of developing a hermeneutics around improvisational music, using the music of John Cage and John Coltrane as his starting point. Lewis further discusses the relationship between composition and improvisation, and refers to the theoretician Carl Dahlhaus’ rather narrow criteria for defining a piece of music as a composition (Dahlhaus 1979). These criteria state that the piece is a unified structure in itself, fully prepared and fixed in writing, and further that the aesthetic object constituted in the performance is mainly contained in the prepared and notated representation of the work.
This definition of composition is in conflict with the general view within the improvised genres, a fact Lewis also points out in his discussion of the theme. Evan Parker defines improvisation as a composition discipline, in his attack on what he calls a false antithesis in which improvisation is considered an activity distinctly different from composition (Lake 1997). He defends his view by means of this statement:

”After all, whether music is played directly on an instrument, read or learnt from notes made on paper beforehand or constructed from algorithms or game rules operating directly on the sound sources or controlling the players, the outcome is music which in any given performance has a fixed form.” (Parker cited in Lake 1997).

In my conversations with the composer Bertil Palmar Johansen, it became apparent that, in his view, composition work involves a lot of improvisation, and that this is an opportunity given to the composer through modern technology represented by sequencers and samplers.

 

Christopher Dobrian has written down some thoughts on the relationship between composition and improvisation (Dobrian 1991). He points out that the two concepts each cover a group of activities connected to making music, that these groups contain overlapping elements, and that there are clear similarities as well as dissimilarities. The different processes used for music making each represent a different set of values. One of the issues Dobrian mentions is that compositions are written down, whereas improvisations are not. Here, he touches on notation as one of several storage media for music. Notation is a stable medium, in that a notated representation does not change over time. On the other hand, the process of notation is exposed to loss of information, when musical ideas are coded into notation and later interpreted by a performer. In improvisation, the performer himself stores the musical ideas in the form of memorized mental sound images. This is a technique that has the potential for a much more precise and complete representation of the music, but which is also exposed to changes (loss of information, possibly also gain) over time. In the performance of a notated work these processes merge, as the performer’s interpretation adds musical elements that are either improvised on the spot or practiced and memorized. One of Dobrian’s assumptions is that the point of notating music is to make it possible to recreate the work on a later occasion, and further that the notation also acts as documentation of the process leading up to the completed work. A sound recording of an improvisation, for example, documents only a short creative activity, while the preparation for the performance is not captured to the same extent in this type of documentation.
This difference in form of documentation suggests, to a certain degree, that a composed work that is written down has better chances of being taken seriously than the more transitory improvised work. Dobrian also argues that the composition process contains a potential for constructing more complex musical structures, as the work does not have to take place in real time. This is considered one of the main differences between composition and improvisation. Here, improvised use of a computer tool with built-in algorithms is considered a form of composition, because the computer tool’s composition properties necessarily must have been “composed” beforehand. This point is, in my view, debatable, as general musical preparation for improvisation entails reviewing potential musical situations and finding possible solutions to them. Moreover, the point concerning complexity is only true to a certain degree, as improvisation, particularly when several performers interact, may assume a complexity that surpasses our capability to analyze it precisely and completely. An analysis of the interaction between performers (and between the performers and the audience) ought, in addition to the purely musical parameters, also to include reflections of a socio-psychological and anthropological nature. This is because the basic communicative elements do not belong exclusively to music, but have parallels in other forms of inter-human interaction.

In an article in “Jazzforschung” (Dybo 1999), Tor Dybo discusses elements of the analysis of interaction, both specifically in an analysis of “Blow Away Zone” by the Jan Garbarek Quartet, and in a more general context by means of references to other research in the area. He also refers to several alternative forms of documentation as aids in the analysis, among them Peter Reinholdsson’s use of video recordings of a jazz group’s rehearsal process and preparation for a performance (Reinholdsson 1998).

Peter Tornquist, also a research fellow of the Program for Research Fellowships in the Arts, has researched the use of improvisation as a generator of musical material for composition (Tornquist 2006). His work touches upon some of the same issues as my own scholarship project, but approaches them, in a sense, from the opposite direction: he is a composer utilizing improvisation methods, while I would define my own work more as that of an improviser utilizing composition methods.

 

The relationship between composition and improvisation in the jazz tradition

As a perspective on the relationship between composition and improvisation, it might be appropriate to look at some details of a specific musical example from the jazz genre. Here, the composed theme is called “the composition”, even if many jazz composers and performers would consider the whole performance, improvisations included, as a unified composition. This unity is called “the performance” in the next few sections. The issue of what is included in the concept of composition has been debated over a long period of time, in Norway also between jazz composers and the copyright organization TONO. The use of the concept of composition in this section does not take a position in that debate, but is used analytically in order to distinguish the various parts of the performance.

The example is Mike Stern’s “D.C.” from the album “Odds or Evens” (Stern 1991). The example has been chosen because it clearly illustrates a common practice regarding the relationship between composition and improvisation. Here, a number of clearly compositional methods have been used, and these are discussed to the extent that they are reflected in the improvisation within the same performance. In the following sketched analysis, I refer to minutes and seconds in the recording. It is a conscious choice not to transcribe the music, but rather to refer to precisely indicated places in the recording. The composition is built up with a melodic focus on relatively large intervals, particularly in the most prominent theme, which first appears at 0:12. In this connection, a minor sixth is regarded as a relatively large interval because jazz improvisation traditionally has had a clear gravitation towards (preference for) stepwise melodic contours. The focus on large intervals is pursued only to a small extent in Stern’s improvisation (which starts at about 2:05 in the recording). We hear the interval structure from the composition again from 2:12 to 2:20, from 2:36 to 2:39, and again from 2:57 to 2:59. The improvisation lasts until about 4:07 (3:45 if improvised fills over the thematic break that occurs here are not included). Elsewhere in the improvisation, large intervals occur as a function of a change of register, in that the melody moves from one register to another. This works more as a dramaturgic effect than as a melodic compositional device. An example of this is found at 2:27 – 2:29, where a phrase division also coincides with the large interval. An example of a large interval used as a dramatic effect in the middle of a phrase is found at 3:37.
With this, I intend to point out that the melodic guidelines incorporated in the composition are not pursued to the same extent within the improvisation, where the emphasis is put on stepwise and linear movements. The next solo (Jim Beard, piano, from 4:12 to 5:55) could be analyzed in the same way. The introduction includes a part in which the large intervals are prominent to a certain degree (4:14 to 4:27), whereas they are only sporadically emphasized later in the improvisation. Beard’s piano solo nevertheless has a more fragmentary character than Stern’s guitar solo, but this is due just as much to his use of longer pauses and chord-based breaks. Stern’s clear focus on long melodic lines draws the total impression in the direction of a linear and stepwise focus.

 

Composition

One may say that a significant part of the art music from about 1900 until the present time has shown an increasing tendency towards formalization. Examples of this tendency are serialism (Schönberg 1944), indeterminism (Cage 1951), and stochastic music (Xenakis 1956). The increasing use of computers in the composition process has also made its mark. The work “Illiac Suite” (Hiller and Isaacson 1957) is considered the first work composed by means of a computer. Since this work was written, the field has undergone an explosive development. Methods that were not feasible in the 1950’s and 60’s have become available on modern computers with higher performance. The computer has been used as a tool in the process of composing for traditional instruments, and it has been used for realizing the instruments with which a work is performed. The computer has facilitated new types of sound synthesis and processing (e.g. Roads 1996 and 2001), and these have been used for synthesizing sound offline (by storing to file) and, during the last few years, in real time (e.g. Boulanger 2000).

 

Formalism in music has deep historical roots and is thus not a new feature that has emerged with the technological innovations. At the same time, computers are well suited to handling many types of algorithms, and it is therefore natural that we have seen an explosive development in this area during the latter half of the 20th century. The term algorithmic composition is most often used in settings where computers are involved in the composition process. Kristine H. Burns (Burns 2004) has compiled a point-by-point chronological survey of the history from Pythagoras and the harmony of the spheres to the modern age. Here, algorithmic music, automatic instruments, and electronic and computer-generated music are all included. Burns’ survey is rather complete up to the second half of the 1980's, whereas the more recent history deals more exclusively with commercial technical equipment. The inclusion of technical innovations in such a survey is relevant because the tools available at any given time have left their mark on the artistic results. For the period before 1980, the survey compares the new tools with notable works of the same period. Another good historical survey is the Electronic Music Foundation’s “The Big Timeline” at http://emfinstitute.emf.org/bigtimeline/1900s.html.

Electronic music has a history that goes back to the end of the 19th century, with individual experiments in the construction of new instruments. It is worth noticing that the development of electronic music has run parallel with the development of new instruments and tools throughout. In part, the new instruments have come into being in order to cover concrete musical requirements; in part, composers and musicians have made use of technological innovations that were not originally constructed for creative purposes. From about 1950, electronic music acquired a substantially larger scope and clear stylistic trends developed. Examples of these trends are Pierre Schaeffer’s work within musique concrète in Paris from 1948 and Karlheinz Stockhausen’s work within pure electronic music in Cologne from 1952 onwards. Further, work was done within computer-assisted composition, with the aforementioned “Illiac Suite” from 1957. In line with the scholarship project’s involvement with live sampling, it is also worth mentioning Mauricio Kagel’s “Transición II” from 1958, in which tape recorders are used as an integrated part of the work, both for live recording of fragments and for playback of previously recorded fragments.

 

“Live electronics” is a concept with a fairly wide meaning, as it may include almost all electronic equipment that can be used expressively as an instrument in a performance. In its most extreme sense, one may say that the radios used during the performance of Cage’s “Imaginary Landscapes” are an example of live electronics, even if it is more common to think of the filters and ring modulators used in Stockhausen’s “Mikrophonie” (I/II) as early examples. This type of instrument has no standard form of notation and is therefore, in many works, given a greater degree of interpretational freedom than traditional instruments. In free improvised music, the use of live electronics appears in the late 1960’s, with Hugh Davies in the ensemble “The Music Improvisation Company” (MIC). In addition to Davies, Evan Parker, Jamie Muir and Derek Bailey participated in this ensemble. At this point in time, live electronics as an instrument was noticeably limited and more primitive than present instruments. In “MIC”, one could notice a greater tonal variation in Parker’s saxophone than in Davies’ electronics. The use of the electronic instrument pointed onwards by bringing new instrumental timbres into the ensemble playing, but backwards in that the performer was not able to obtain the same instrumental freedom and thus did not command the same expressive register (Bailey 1992).

In the 1980's and 90's, the technological development had come far enough to make audio production equipment and computers much more available, even outside the electronic music studios. This situation made experimentation with the medium accessible to a greater number of composers. The introduction of MIDI as a standard for communication between electronic instruments made real time automation and processing of event based compositions possible. In many ways, MIDI events correspond to notes in standard music notation, and are nothing new in principle. On the other hand, real time automation and manipulation of these events, combined with a standard playback protocol, give new opportunities for structural and expressive models. An example of this is Tod Machover’s “Bug Mudra” (Machover 1990), in which MIDI instruments are used to perform notated progressions while the conductor may shape expressive variations in the sound of the instruments by means of a computer glove. Such a glove registers the movements and gestures of the hand and translates these into digital signals that control sound production units connected to the performers’ MIDI instruments. Another example of this type of glove is Michel Waisvisz’ “The Hands” from the early 1980’s, used in various contexts as a performative instrument (Krefeld 1990).

A combination of several techniques is found in Ivar Frounberg's “What did the Sirens Sing, as Ulysses Sailed by?” (Frounberg 1987-89, 1996). Here, pre-recorded electronic parts are used as individual events, and the performer controls the playback of the parts. Further, Markov chains are used in interactive playing, where the performer and the computer together navigate through a network of chords. The decisions relating to the choice of direction alternate between man and machine.
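As an illustration of the general principle, a Markov chain over a network of chords can be sketched as follows. The chords and transition probabilities below are purely illustrative assumptions and are not taken from Frounberg’s work.

```python
import random

# Sketch of Markov-chain navigation through a network of chords.
# Each chord maps to possible successors with transition probabilities.
# The chord vocabulary and the weights are illustrative assumptions only.
TRANSITIONS = {
    "Cmaj7": {"Dm7": 0.5, "Am7": 0.3, "Fmaj7": 0.2},
    "Dm7":   {"G7": 0.7, "Fmaj7": 0.3},
    "G7":    {"Cmaj7": 0.8, "Am7": 0.2},
    "Am7":   {"Dm7": 0.6, "Fmaj7": 0.4},
    "Fmaj7": {"G7": 0.6, "Dm7": 0.4},
}

def next_chord(current, rng):
    # Choose the next chord according to the transition probabilities
    choices = TRANSITIONS[current]
    names = list(choices)
    return rng.choices(names, weights=[choices[n] for n in names])[0]

def walk(start="Cmaj7", length=8, seed=0):
    # Navigate the network: a seeded generator makes the walk repeatable
    rng = random.Random(seed)
    path = [start]
    for _ in range(length - 1):
        path.append(next_chord(path[-1], rng))
    return path

print(walk())
```

In an interactive setting of the kind described above, the choice at each step could alternate between such a weighted random draw (the machine) and a decision made by the performer.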

 

The introduction of MIDI was also followed up by mechanically automated acoustic instruments, among these the Yamaha Disklavier, an acoustic piano able to record a performance in terms of which keys are activated at any given time. The recorded notes may thus be played back on the same instrument, or the performance data can be generated artificially, e.g. in software. This provides an opportunity for performances in which the physical limitations of the performer no longer impose a practical limitation on the music playable on the instrument. As an example, a single pianist may perform musical passages that require more than ten fingers, or play musical progressions that are faster and more precise than the performer himself is able to carry out. This was exploited in Tod Machover’s “Bounce” (Machover 1993) and in Jean-Claude Risset’s “Eight Sketches: Duet for one pianist” (Risset 1989). The use of the Disklavier in a free improvised musical dialogue is exemplified in my own “FollowMe -97” (Brandtsegg 1997 -3).

 

The introduction of IRCAM’s ISPW in 1989 (Puckette 1991 – 1, 1991 – 2) as a tool for real time sound synthesis and processing was an important event, in that it offered substantial flexibility and access to a great library of functions on one single platform. The ISPW was by no means the only tool able to carry out these tasks, but it was widely used because of its flexibility and because it could be integrated with the graphical programming tool Max. The use of the ISPW led to a number of compositions of the type “acoustic instruments processed in real time by a computer”. An example of this is Cort Lippe’s “Music for sextet and ISPW” (Lippe 1993), in which analysis of the acoustic instruments’ pitch, amplitude and frequency content controls parameters in the electronic part of the sound image. In principle, this is in many ways a direct continuation of “acoustic instruments accompanied by electronic sounds on tape”, with the important difference that the electronic sound image may be influenced during the performance by the sound from the acoustic instrument, and that the acoustic sound of the performance may be used as the basis for the electronic elements of sound. We may term this type of technique instrument processing because it is based on an input sound signal that is transformed: in its simplest form, for example, by just adding reverb; in advanced situations, by transforming the spectral content of the sound. In addition to Lippe’s work, Pierre Boulez’ “Répons” (Boulez 1984) and Kaija Saariaho’s “Lichtbogen” (Saariaho 1986) are examples of this technique. Conceptually, this type of instrument processing differs from live sampling in that instrument processing, to a greater extent, is based on a (gradually varied) linear relationship in time between the acoustic input sound and the processed sound output.
In this specific situation, a linear relationship is defined by the fact that the electronic sound comes as a direct consequence of the acoustic sound, and that any variation in the time difference between the two events often takes place by means of gradual changes. Live sampling as a technique has greater opportunities for abrupt and non-linear transformations of the formal structures in the acoustic sound, for example by means of segmentation. Examples of the use of live sampling are found in Lawrence Casserley’s and Walter Prati’s instruments from the 1990’s and in my own “ImproSculpt” (Brandtsegg 2002 – ).

 

A combination of event based composition and instrument processing may be found in Horacio Vaggione’s “KITAB” (Vaggione 1992), in which pre-recorded samples are combined with acoustic instruments in the performance. The samples are recorded from the same acoustic instruments used during the performance, and this affects the interplay between the electronic and the acoustic components to such an extent that we experience a unified musical situation rather than a dialogue between contrasting elements. Further, one may imagine that combinations of event based composition and live electronics have an even greater potential for development when live sampling is used; the conceptual and practical difference is then that the sampling takes place in real time during the performance of a work.

 

Computer software that has influenced the composition process

A great number of computer software systems have been used for composition and sound synthesis. The resulting music has been clearly influenced by the technical tools used in composition and realization. As previously described, new technology had an especially evident influence on the works produced in the 1980’s and 90’s, but technology has influenced composers and performers throughout the history of music. Therefore, it is natural to mention a few selected computer programs as references alongside the literature. The list is not exhaustive, but provides examples of various techniques and programs that are particularly relevant for the scholarship project. Music 1, written by Max Mathews in 1957, is considered to be the first digital sound synthesis program; it has later been further developed in a number of incarnations, among these Csound (Vercoe, ffitch, et al. 1986 -, Boulanger 2000). Csound has also been extended to include functions for algorithmic composition (Gogins 2003, -) and real time audio and video processing (Maldonado 1997, -). The earliest incarnations of the Music programs were not able to calculate sound in real time, and the storage media for configuration files and sound files were of a different character than those we are used to now. Configuration and program files were stored on punched cards, and the output data could be written as digital audio on magnetic tape. These magnetic tapes were not recording tapes, but digital computer tapes that had to be transported to a separate laboratory for conversion to analog audio. A similar development has taken place for other sound and composition programs, but the Music/Csound family of programs serves as a good example because it has the longest history.
Other important tools that may be used for sound synthesis and composition are Max (Puckette 1988, Puckette and Ziccarelli 1991, cycling74 -), PD (Puckette 1996, Puckette -), and SuperCollider (McCartney 2002). A particularly interesting quality of SuperCollider is that sound synthesis and parameter control are separated. This means that these processes may be run on two different computers, or as separate processes on one and the same computer. This allows better utilization of processing power, because audio synthesis may be given higher priority than control functions. It also facilitates a higher degree of operational reliability: if one of the processes should (God forbid) crash, the other process may still continue.
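The design principle of separating control and synthesis into independent processes can be sketched in miniature. This is not SuperCollider’s actual architecture (which uses OSC messages between a language client and an audio server), only an illustration of the general idea; the message format and parameter names are assumptions, and the “synthesis” process merely records parameter changes instead of producing audio.

```python
import multiprocessing as mp

def synth_process(q, results):
    # Stand-in for a synthesis engine: in a real system this loop would
    # run audio computation at high priority; here it only applies
    # incoming (name, value) parameter changes.
    params = {}
    while True:
        msg = q.get()
        if msg is None:          # shutdown signal from the control side
            break
        name, value = msg
        params[name] = value
    results.put(params)

if __name__ == "__main__":
    q, results = mp.Queue(), mp.Queue()
    proc = mp.Process(target=synth_process, args=(q, results))
    proc.start()
    # Control side: send parameter changes, then shut down.
    # If this side crashed, the synthesis process would keep running.
    q.put(("freq", 440.0))
    q.put(("amp", 0.5))
    q.put(None)
    final_params = results.get()
    proc.join()
    print(final_params)          # {'freq': 440.0, 'amp': 0.5}
```

The point of the separation is visible in the structure: the two sides share no state except the message queue, so either can be scheduled, prioritized, or restarted independently of the other.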

 

The previously mentioned ISPW (IRCAM Sound Processing Workstation) was designed as specially adapted hardware in order to carry out audio processing in real time. The ISPW was used together with the Max software, where Max controlled the system. There are many examples of this type of specially adapted hardware, among these Kyma/Capybara (Symbolic Sound -) and “Extended Csound” (Vercoe 1996). In 2000, Øyvind Brandtsegg and Soundscape Studios in Trondheim initiated a cooperation project with Analog Devices in Boston with the purpose of developing generic hardware for sound processing based on Extended Csound. The project was, however, abandoned after a year and a half because of conflicting interests regarding the market segment targeting of the product. Today, a common view is that standard laptop or desktop computers constitute a fast enough platform for audio processing. Specially adapted hardware will still be able to process faster, but standard hardware provides substantially greater flexibility in terms of upgrades etc. The development of hardware is expensive and must be carried out quickly if one is to fully utilize the processing potential before new and more efficient components become available. By using standard hardware, it is easier to take advantage of the general technological development, and at the same time participate in larger development cooperations in the form of use and distribution of open source code. For research based projects, this flexibility and interaction is essential, and it seems appropriate to focus on the development of software rather than hardware.

 

A number of programs have been developed in order to perform more specialized operations, among these “AthenaCL” for composition within set theory (Ariza -), “Blue” for object based composition (Yi -), “Recycle” for reorganization of sound phrases (Propellerheads -), “Live” for loop based improvisation (Ableton -), “GenJam” for improvisation assisted by genetic algorithms (Biles 2002), and “Scheherazade” for automatic generation of accompaniment by means of Lindenmayer systems (DuBois 2003). Max Mathews’ “GROOVE” from 1970 is also historically interesting. This program is performance-oriented software for the playback of pre-composed sequences, in which one can control tempo, dynamics and tonal balance. Later, Mathews also developed an electronic baton, the RadioBaton (Mathews & Boulanger 1997). I have also been inspired by Orm Finnendahl’s (http://icem-www.folkwang-hochschule.de/~finnendahl/) programs and instruments for live performance, but these have not been made public to date. The knowledge about these systems stems from personal meetings with Finnendahl. In addition, traditional montage programs, for example ProTools (Digidesign -), Logic (Emagic -) and Cubase (Steinberg -), are of interest.
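As a minimal illustration of how a Lindenmayer system can generate musical material of the kind mentioned in connection with “Scheherazade”, the following sketch expands a rewrite rule and maps the resulting symbols to pitches. The rules and the pitch mapping are illustrative assumptions only, not taken from DuBois’ program.

```python
# Classic Lindenmayer rewrite rules: every generation, each symbol is
# replaced by its expansion, producing a self-similar symbol string.
RULES = {"A": "AB", "B": "A"}

def expand(axiom, generations):
    s = axiom
    for _ in range(generations):
        s = "".join(RULES.get(ch, ch) for ch in s)
    return s

# Map each symbol to a MIDI pitch (a hypothetical mapping for illustration)
PITCH = {"A": 60, "B": 67}

def to_pitches(s):
    return [PITCH[ch] for ch in s]

seq = expand("A", 4)
print(seq)              # "ABAABABA"
print(to_pitches(seq))  # [60, 67, 60, 60, 67, 60, 67, 60]
```

The musical interest lies in the self-similarity: motifs recur at several time scales, which makes L-systems attractive for generating accompaniment patterns that are varied yet coherent.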

 

About sound synthesis, choice of techniques in the scholarship work

Particle based sound synthesis will be an important element both technically and aesthetically. Historically, the basis for this technique was first described by Einstein in 1907, when he predicted that ultrasonic vibrations could occur at the atomic level (Cohran 1973 in Roads 2001) in the form of phonons, sound particles. Later, the theoretical basis was developed by the physicist Dennis Gabor (Roads 2001, p. 57 ff.). Xenakis developed a separate variant of the technique, as exemplified in his “UPIC”, a drawing board for sound. Using UPIC, the user can visually draw partial tones and thereby define the spectral content of a sound particle; when these sound particles are positioned in sequence along the axis of time, a dynamically changeable sound object is composed. Curtis Roads’ “Microsound” (Roads 2001) provides a thorough review of the various techniques included in the concept of particle synthesis. This technique is more often called granular synthesis, and the word “grain” means the same as “particle” when speaking of audio synthesis. Variants of this technique are found in most computer programs for sound synthesis, often in specialized editions. However, the particle technique can be generalized to create a sound generator capable of performing all time domain variants of this type of sound synthesis. Following my specifications derived from “Microsound”, diploma students (Thom Johansen and Torgeir Strand Henriksen) at NTNU/Acoustics implemented such a general sound particle generator in the spring of 2005. Artistically, this technique provides the opportunity to move freely within a great number of dimensions of timbral design. One of these dimensions ranges from natural or recorded sound at one end of the scale to manipulated and fragmented playback of the same sound material at the other.
It seems absolutely necessary to explore every part of this continuum so that the potential timbral spectrum can appear as a coherent and linear continuum in all dimensions.
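A minimal sketch of time domain granular synthesis, the basic case of the particle techniques discussed above, may look as follows. The grain duration, density and random scattering used here are illustrative assumptions, not the specifications of the generator implemented at NTNU.

```python
import numpy as np

SR = 44100  # sample rate in Hz

def granulate(source, out_dur=2.0, grain_dur=0.05, density=200,
              rng=np.random.default_rng(0)):
    # Cut short windowed "grains" from the source signal and scatter
    # them onto an output timeline. Density = grains per second.
    out = np.zeros(int(out_dur * SR))
    glen = int(grain_dur * SR)
    window = np.hanning(glen)                    # smooth grain envelope
    n_grains = int(out_dur * density)
    for _ in range(n_grains):
        # pick a random read position in the source
        pos = rng.integers(0, max(1, len(source) - glen))
        grain = source[pos:pos + glen] * window
        # scatter the grain onto a random position in the output
        start = rng.integers(0, len(out) - glen)
        out[start:start + glen] += grain
    return out / np.max(np.abs(out))             # normalize

# Usage: granulate one second of a 220 Hz sine tone
t = np.arange(SR) / SR
src = np.sin(2 * np.pi * 220 * t)
y = granulate(src)
```

The continuum mentioned above corresponds to how the parameters are set: long grains read in order from the source approach unaltered playback, while short grains scattered at random positions produce fragmented, cloud-like textures.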

 

Other artists’ work

In the preceding, references have been made to some historically significant works within both composition and improvisation, as well as to technological elements that have influenced the process of creating music. In the following, some of the more current references the scholarship work relates to are discussed. For the references that relate to the field of composition, this section serves as an extended summary, whereas for the field of improvisation, some new performers that form a relevant background for the work are mentioned. In most instances, the technical aspects are comparable, whereas the artistic choices relative to genre and sound image diverge.

 

Types of algorithmic composition

Firstly, however, a review of a few techniques for algorithmic composition might be appropriate. Existing work within algorithmic composition may be divided into three main categories:

1.    Automated music machines; the composition is carried out by a computer program or a machine without intervention from the composer after the configuration of the system.

2.    Traditional composition methodology; the algorithm adds new ideas that the composer assumes he would not be able to obtain in any other way. The result of the algorithm is used as material in the further composition process, and the material may be subjected to further algorithmic processing. The analogy to traditional composition method lies in the fact that the composer specifies a material, and that this material goes through a (possibly nonlinear) process before the composer assesses and processes the material further.

3.    Interactive systems, compositionally enabled instruments. Here, the algorithms are used as a tool of automation, as an extension of the performer’s normal abilities. The scholarship work is focused on this category.

 

The first category often employs various forms of artificial musical intelligence. At times, this may present a high level of musicality, as exemplified in David Cope’s software EMMY (Cope 2001). As a general remark, however, it could be said that the artificial intelligence is not sufficiently intelligent, or lacks some of the musical intuition and taste required to create convincing and new music. The result is often theoretically correct music, but never groundbreaking.

 

In the second category, the composer’s musical control is more focused on the planning stage, i.e. in the choice of algorithms, as well as in the work of adjustment for further iterations. This may be conceived as analogous to the classical method of composing, in which the work is notated, then played, and subsequently may be reworked into a new notated version. This is a fairly common method of working for a composer in our time; as one of several examples, I could mention the Norwegian composer Anders Vinjar and his use of algorithms to generate and process musical material.

 

In my scholarship work, I want to enable the composer (the improviser) to influence the algorithm interactively, based on musical intention and intuition during the performance of the music. The algorithms may contain elements of artificial intelligence. Earlier works in the third category have been made by, for example, Curtis Bahn, Dan Trueman, Lawrence Casserley, and Tim Blackwell (http://www.timblackwell.com). The research network “Live Algorithms for Music” (http://www.livealgorithms.org) represents a collection of resources related to the same area.

A comprehensive account of techniques and algorithms may be found in "Opportunities for Evolutionary Music Composition" (Brown 2002), and on the web site algorithmic.net (Ariza -).

 

The division into three categories above may be supplemented by another type of division, based on the type of material that forms the basis for the decisions taken by the algorithm. Here, the division is between:

1.    Systems based on rules. A set of rules is defined either on the basis of music theoretical issues (harmony etc.), on the basis of a given mathematical system, or as a formal compositional decision made on a free basis.

2.    Systems based on databases. An analysis of numerous existing works forms the basis for a database containing potential musical solutions to concrete issues.

 

The advantage of rule based systems is that one may easily change the rules and observe the changes, whereas the advantage of database based systems is that they most often embody a greater degree of human musicality, because all material originates from existing composed works. The disadvantage of rule based systems is that the results may be perceived as mechanical, whereas the disadvantage of database based systems is that one is not able to intervene manually and change the rules. Further, a relatively large database is required to prevent motifs, or even whole sections of the analyzed works, from being recognizable in the generated musical material.
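A small sketch of a rule based system may illustrate how easily the rules can be changed and the results observed. The rules below (stepwise motion preferred over leaps, stay within a C major scale) are illustrative assumptions only.

```python
import random

# A rule-based melody generator: the "rules" are the scale constraint
# and the weighting that favors stepwise motion. Changing either rule
# immediately changes the character of the output.
SCALE = [60, 62, 64, 65, 67, 69, 71, 72]   # C major, one octave (MIDI)

def next_degree(current, rng):
    # Rule: allow steps and small leaps, weight steps 3x heavier,
    # and reject moves that leave the scale.
    candidates = [current + step for step in (-2, -1, 1, 2)
                  if 0 <= current + step < len(SCALE)]
    weights = [3 if abs(c - current) == 1 else 1 for c in candidates]
    return rng.choices(candidates, weights=weights)[0]

def generate(length=8, seed=1):
    rng = random.Random(seed)
    degree = 0
    melody = [SCALE[degree]]
    for _ in range(length - 1):
        degree = next_degree(degree, rng)
        melody.append(SCALE[degree])
    return melody

print(generate())
```

The mechanical quality mentioned above is audible in such output precisely because every note follows the same small rule set; a database based system would instead draw its transitions from analyzed repertoire.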

 

Another, parallel method of division may be based on whether the composition and production technique is intended to process individual musical events, or whether real time instrument processing of the audio signal is used. Somewhat simplified, one may say that individual events are what we associate with notes in the traditional score, whereas a traditional image of instrument processing is the expressive interpretation a performer adds to the tone when it is played. The potential for manipulation of both types of data is greatly extended by the use of modern techniques. Event based music had a sharp upturn in the 1980’s, when MIDI was introduced as a standard for communication between computers and other musical instruments. At the same time, we should note that event based composition has been the norm, rather than the exception, throughout the European history of music.

 

Composers

The use of generative systems for music is found with Cope, Miranda, Vinjar, Schönberg, Cage, Xenakis and others. Cope (2001) tries to teach the algorithm to copy a known musical genre, and this coincides with an objective of the scholarship work, as it is desirable to find algorithms that satisfy a partly known musical situation. In the scholarship work, I want to try to make the algorithm flexible, in order to be able to adjust the output according to taste (in real time). This openness in the use of the technique is found with Miranda (Miranda 2001). A comparison of my own approach with that of Vinjar[1] is possible in the sense that he uses a number of algorithms, often entwined in each other, so that one algorithm creates an individual link in a chain controlled by another algorithm. The difference from Vinjar is that he, by his own account, wants to let the algorithm set his personal preferences aside. In my work, I want to adapt the algorithms to comply with my personal musical preferences. Still, in certain cases I let myself be surprised, and thus led on by the result. Computer music has been criticized for being intellectual, cold and calculated music. This was a particularly widespread view in the early phases of the field; the work “Illiac Suite” (Hiller 1957) got a very mixed reception, and within the classical audience it was not even accepted as music until long after its first performance. The scholarship work, with its intent of using algorithmic improvisation, puts a greater focus on the human aspects of computer based music, on interpretation and the choices made in the moment of performance.

 

Cage’s use of indeterminism within defined frames is comparable to some of my intentions, represented by the element of improvisation, whereas Cage’s aesthetic view that “everything is music” is not valid in this context. Likewise, large parts of Cage’s production follow a timbral ideal that is not pursued in the scholarship work. An exception is Cage’s “16 dances” (Cage 1951, 1994). Xenakis’ stochastic methods represent limited or controlled chance. These elements are found to a limited degree in the scholarship work, most often as a selection mechanism between outcomes of equal musical value (according to the selection rules of an algorithm). Xenakis’ aesthetics as to the timbral ideal will not necessarily be reflected in my scholarship work. In the liner notes (Toop 1995) to the CD “Iannis Xenakis 2: La Légende d’Er”, it is claimed that, of all electro acoustic composers, Xenakis is the one who has most distinctly refrained from the “cosmetic temptations” the medium offers. I tend to disagree with this, and hold the view that the “cosmetic temptations” do not only carry negative connotations, but may be used as one of several efficient tools in composition. The cosmetically attractive is a natural part of the full expressive register. “La Légende d’Er” is, however, historically significant also for technical reasons, as it is one of the first works in which Xenakis uses his graphic “drawing board” for composition, UPIC. It is also the first work in which stochastic sound synthesis is used at sample level (Toop 1995).

 

Both Xenakis and Cage have worked with techniques where the global parameters of a work are specified by the composer, whereas the control of details is transferred to an algorithm, statistical selection or chance. This is in contrast to the methods of serialism, in which each event is specified in detail on the basis of the serial system, as represented, for example, by Schönberg’s, Berg’s and Webern’s works from 1920 onwards. The serial technique is continued in a more comprehensive manner in the 1950’s, exemplified by “Le marteau sans maître” (Boulez 1951) and Stockhausen’s “Mantra” (Stockhausen 1970). Boulez also explores indeterminism and improvisation, but within a serial paradigm. This may be conceived as a paradox. The free aspect in Boulez’s music may be exemplified by his use of collective versus individual conceptions of time, as well as the use of individual “cadenzas” in “Structures pour deux pianos” (Boulez 1961, Piencikowski 1989). One of the American pioneers of electronic music, Milton Babbitt, has worked extensively with serial techniques, such as in “Semi-simple variations” for piano; he also exploited the precise control the electronic medium offers for further exploration of serial principles, for example in “Occasional Variations” (Babbitt 1971, 2003).
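The basic row operations of serial technique (transposition, inversion, retrograde) can be expressed compactly in code. The example row below is arbitrary and not taken from any of the works cited.

```python
# The classic twelve-tone row operations, working on pitch classes 0-11.

def transpose(row, n):
    # Shift every pitch class up by n semitones (mod 12)
    return [(p + n) % 12 for p in row]

def inversion(row):
    # Mirror each interval around the first pitch class of the row
    first = row[0]
    return [(first - (p - first)) % 12 for p in row]

def retrograde(row):
    # The row backwards
    return list(reversed(row))

row = [0, 11, 7, 8, 3, 1, 2, 10, 6, 5, 4, 9]   # arbitrary example row
assert sorted(row) == list(range(12))           # all twelve pitch classes

print(inversion(row))
print(retrograde(row))
print(transpose(row, 5))
```

Combining these three operations with the twelve transpositions yields the familiar 48 row forms that serial works draw their material from.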

 

Joel Chadabe has been involved in a number of groundbreaking projects, from the 1970’s until the present day. The work “Ideas of Movement at Bolton Landing” from 1971 represents one of the earliest examples of interactive composition. Interactive composition is a technique that encompasses both composition and performance, and it entails a reciprocal influence between the instrument and the performer. Often, the composer is also the person that performs the work. Compositional guidelines are built into the design of the instrument, which in most cases is electronic. Chadabe developed this technique further in “Solo” from 1978 and “After Some Songs” from 1988. Chadabe also participated in the development of the computer program “M” together with David Zicarelli and Anthony Widoff in 1986. “M” is considered one of the forerunners of the computer program “Max”. In Chadabe’s later works, live sampling and processing have also become important elements, as seen in “Many Times” (Chadabe 2001). The work is performed in various versions with various performers, based on the interplay between a performer on an acoustic instrument and the electronic instrument that processes the audio signal from the acoustic performer. The algorithms that control the sound processing are based on stochastic methods.

 

Improvisers

Curtis Bahn works in the intersection between electronic composition and improvisation, and has developed his own instruments that implicitly embody composition techniques. In part, these are algorithmic methods of composition and sound processing, and he also enables the use of pre-composed segments during improvised performance (Bahn 2000). The violinist Dan Trueman has cooperated with Bahn in the group “Interface”. Trueman also works with electronic extensions of the acoustic instrument, and he calls this type of instrument technology “compositionally enabled instruments”. Trueman has also worked with folk music instruments (hardanger fiddle) in the group “Trollstilt” with Monica Mugan. This duo plays compositions that have not been notated and that often include improvisation and open form (Trueman -).

 

Elliott Sharp (Sharp 1998, Sharp 1999) has been a pioneer within electronic improvisation. By exploring the possibilities of the (at the time) new technology, he played a part in laying the groundwork for what we today call electronica. As a somewhat stripped-down description, one may say that electronica is a genre based on the use of electronic instruments, that it processes inspiration from pop music in an experimental manner, and that it to a greater or lesser extent includes elements of improvisation. In the period from 1979 to about 1988, Sharp was among the front runners in using computers and digital samplers on the New York musical underground scene. Sharp was also one of the first to exploit this technology in improvisation. The limitations and shortcomings of the technology were used as a potential, for example in that the connection between the guitar and the computer included certain inaccuracies. This means that the performer does not have full control over the instrument, but also exploits its more “capricious” aspects in a creative manner. The concept of exploiting such technical shortcomings as musically meaningful material has later been developed further in the electronic music genre “glitch”. Sharp has also cooperated with Zeena Parkins (electric harp) in projects based on improvisation, for example in “Psycho~acoustic > Blackburst” (Sharp/Parkins 1996). Like Sharp, Parkins is considered a very interesting performer within improvised electric music. Further, Sharp has cooperated with experimentally oriented disc jockeys, among them DJ Soulslinger (Sharp 1998 b); here tonally and rhythmically based improvised soundscapes are explored.

 

The trombone player George Lewis has written “Voyager”, an interactive instrument/computer program that listens to improvised input via a microphone. The program makes decisions as to melodic line, harmony, orchestration, etc. on the basis of analyses of the input data. The computer program does not generate sound itself, but outputs performance data in the form of MIDI messages that are sent to an external synthesizer or sampler. “Voyager” has been performed in a number of manifestations, for example on the CD “Voyager” (Lewis 1992).

 

Matt Ingalls (http://www.sonomatics.com/matt.html/about.html), clarinet player and programmer, has explored improvised interplay between acoustic instruments and computer in several implementations from the mid 1990’s onwards. In some of his more recent work, sonic gestures on the clarinet are pre-sampled and loaded as a library into the software. He has not focused much on live sampling because, in his own words, “I already know what I’m going to play anyway”[2]. In discussing this subject, he also states that he appreciates the superior audio quality attainable when preparing the samples in the studio, as opposed to live sampling during performance. Ingalls is also one of the developers of Csound for the Macintosh platform.

 

Saxophone player, pianist and composer Anthony Braxton is associated with the jazz genre, but he moves clearly beyond the admittedly diffuse limits of what may be called jazz. Part of Braxton’s music is inspired by John Cage and Karlheinz Stockhausen, in both formal and tonal aspects. He has written compositions in which the performers’ physical location in relation to each other during the performance is included as an integrated part of the composition. This may, to a great extent, be said to influence the conditions set up for collective improvisation, because the opportunities for interaction between the performers have been determined by the composer. Braxton’s compositions, for example “Composition no.100” (Braxton 1993), show a very clear compositional consciousness in terms of collective improvisation, in which the ensemble interacts with each other and with Braxton’s composed structure. This way of playing together suggests several years’ effort in an ensemble (or a very clear conductor/composer), where a common compositional idea is pursued while each of the performers in principle is free to bring in new material and new ideas during each individual performance. The inspiration from contemporary music has led other black jazz musicians (for example Wynton Marsalis) to distance themselves from Braxton’s music, on the basis that they consider contemporary art music in the European tradition to be “white” music. Similar objections to being influenced by white music can also be found in George Lewis’ articles (see e.g. Lewis 1996).

 

Saxophone player and composer John Zorn combines free jazz with a number of other genres, often in the form of distinct “blocks”. Here, a limited part of a composition will relate very clearly to one genre, followed by a contrasting part in another and often very dissimilar genre. Jewish music, rock and thrash metal are among the genres Zorn combines. Some of Zorn’s compositions are characterized as “game pieces”, in which the performers improvise within structurally defined rules. Zorn states in an interview with Derek Bailey:

“What I was really fascinated with was finding a way to harness these improvisers’ talents in a compositional framework without actually hindering what they did best – which is improvising. An improvisor wants to have the freedom to do anything at any time. For a composer to give an improvisor a piece of music which said ‘play these melodies – then improvise – then play with this guy – then improvise – then play this figure – then improvise’, to me, that was defeating the purpose of what these people had developed, which was a very particular way of relating to their instruments and to each other. And I was interested in those relationships.” (Zorn cited in Bailey 1992, p. 75)

Compositions such as Zorn’s “game pieces” may be reminiscent of indeterministic compositions with clear references to John Cage. Conceptually and practically, however, they function in another manner, in that quite specific musical intentions may be built into the work by using knowledge about the performers and their improvisational practice. In such musical settings, it is quite common to handpick musicians for an ensemble on the basis of personal musical characteristics rather than on the basis of a specific required instrumentation.

 

Lawrence Casserley has worked with improvisation and electronics since the early 1970’s. During the 80’s, he prepared some prototypes for electronic instruments particularly intended for improvisation. In the early 1990’s, when IRCAM’s “Signal Processing Workstation” (ISPW) became available, he found the technical basis for implementing his ideas on a broader scale. The main focus has been real time processing of sounds from one or several fellow musicians, and in this manner, Casserley’s instruments are dependent on the sound from an acoustic instrument as input to the system. In this context, the fellow musicians that provide the source material for processing are also called source musicians. Casserley points out certain issues that indicate problems and potential solutions (Casserley 1997). He considers the performance technique of playing a computer instrument a particularly difficult field. This field includes parameter mapping, i.e. the translation of gestural input to expressive parameters, a function that is commonly both complex and intuitive when playing acoustic instruments. The preparation of functional and intuitive mappings forms an important aspect of the design of electronic instruments. At the same time as this is a problem area, he also sees potential in the fact that the mapping is not fixed, as it is with acoustic instruments. Further, the relationship between acoustic and computer based performers is discussed, as well as the ambivalence as to which performer contributes which sounds in performance. The choice of sounds and the choice of processing methods for these sounds is an important focus for all electroacoustic music; in studio work one has the opportunity to adjust these elements, whereas in improvised situations, it is important to find ways to handle these choices in the moment of performance.
Because Casserley works with processing of live input, he finds it appropriate to divide the various musical processes into categories according to which performer has the main articulation control. The division is based on whether the source musician or the computer musician has the main articulation control, whereas a third category creates a field in which the initiatives from both players are combined and the expression forms an overall unity. Casserley uses this division as a basis for the construction of his instrument. Further, he chooses to base the instrument on the use of delay lines, where the choice is between sampling (recording with storage) and a delay line (recording without storage). This is because a characteristic property of the improvised context is that it is the music of the moment, and it is therefore more natural to work with sounds that have only recently been played than to recreate sounds that were played several minutes ago. Another of his arguments is that in the case of sampling, one would constantly have to decide which sounds are to be recorded, and this is considered a distraction in performance. Casserley says that the first instrument was built through empirical testing and development, and that it would therefore be difficult to extend it significantly. The development of a more generic framework in which experimental modules could be tested is mentioned as an important focus for further work.
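
Casserley's distinction between sampling (recording with storage) and a delay line (recording without storage) can be illustrated with a minimal sketch. This is my own generic illustration of the technique, not a reconstruction of his instrument:

```python
# A delay line continuously overwrites its own past, so only the most
# recent `length` samples are ever available: "recording without storage".
class DelayLine:
    def __init__(self, length):
        self.buf = [0.0] * length   # circular buffer, initially silent
        self.pos = 0                # current write position

    def process(self, x):
        """Write one input sample, return the sample from `length` samples ago."""
        y = self.buf[self.pos]      # the oldest sample is read out here...
        self.buf[self.pos] = x      # ...and immediately overwritten: no storage
        self.pos = (self.pos + 1) % len(self.buf)
        return y

dl = DelayLine(4)
out = [dl.process(x) for x in [1, 2, 3, 4, 5, 6, 7, 8]]
print(out)  # [0.0, 0.0, 0.0, 0.0, 1, 2, 3, 4]
```

The first four outputs are the buffer's initial silence; after that the input reappears delayed by four samples, and nothing older than the buffer length can ever be recalled, which matches Casserley's "music of the moment" argument.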

 

Walter Prati also works with live electronics and sound processing, and since 1993 he has developed the MARS workstation, a tool for algorithmic composition, live sampling and transformation (Lake 1997). MARS is an abbreviation for “Musical Audio Research Station” and is a real-time hardware and software system. It is related to other audio processing software like MAX/MSP and Csound in that it constitutes a collection of routines that may be assembled as required for different applications. Since ordinary consumer computers have become fast enough that such processing may be carried out in real time on a laptop, the system in itself is no longer particularly interesting. But Prati’s use of it in performance with Evan Parker shows innovative approaches to live sampling and sound processing.

 

Richard Zvonar worked with live processing together with Diamanda Galas in the period 1981-85. He did live effect processing by means of delay, reverb and harmonic effects. Manual control of parameters was used with direct mapping, i.e. every physical switch controlled one individual parameter. Later, he has worked with MAX, focusing on spatialization and sound diffusion.

 

The cello player Hugh Livingston has been involved in several projects under the title “Strings and Machines”, where the interaction between the cello and the electronics is explored (Livingston 2000). Various approaches to musical composition are used, and the projects also explore which elements constitute “the composition” (as opposed to what constitutes the interpretation of the composition). Sound processing in real time is an important element in Livingston’s projects of this type, and live sampling is used as a compositional structural element. Improvisation, in the sense of interaction between the instrumentalist and the electronics, is used as a technique to explore compositional potential, for example in the composition “Qwfwq”, which has been developed in cooperation between Livingston, Mark Danks and Michael Theodore. The use of live sampling in this composition may, historically speaking, be traced back to Mauricio Kagel’s use of tape machines in the piece “Transición II” (1958).

 

Iain McCurdy (http://iainmccurdy.org/) has worked with live sampling and improvisation. His work with the group subfusc (http://www.subfusc.com, with Saul Rayson) incorporates the use of fruit and vegetables as controlling devices, connected to the computer via “do-it-yourself” type sensor interfaces. He uses elaborate forms of realtime granular synthesis, with live sampled audio segments as source material for the process.
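
The granular technique mentioned above can be sketched in a few lines. This is a generic, hypothetical illustration of granular synthesis over a sampled buffer, not a reconstruction of McCurdy's actual real-time instrument:

```python
# Granular synthesis sketch: short windowed "grains" are read from random
# positions in a sampled buffer and overlap-added into an output "cloud".
import math, random

def granulate(source, n_grains, grain_len, out_len, seed=1):
    random.seed(seed)
    out = [0.0] * out_len
    for _ in range(n_grains):
        start = random.randrange(len(source) - grain_len)  # read position in source
        onset = random.randrange(out_len - grain_len)      # write position in output
        for i in range(grain_len):
            # Hann window fades each grain in and out, avoiding clicks
            w = 0.5 - 0.5 * math.cos(2 * math.pi * i / (grain_len - 1))
            out[onset + i] += w * source[start + i]
    return out

# a plain sine tone stands in for a live-sampled audio segment
src = [math.sin(2 * math.pi * 0.05 * n) for n in range(2000)]
cloud = granulate(src, n_grains=50, grain_len=100, out_len=4000)
```

Varying grain length, density and read position is what gives the technique its range, from recognizable fragments of the sampled material to abstract textures.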

 

The N-collective (http://n-collective.com) explores various compositional paradigms inspired by compositional methods of contemporary music. This is a pool of musicians, performing in various constellations for different events. Electronics and computers are used by some of the performers, but more interesting in the specific scope of the scholarship project is their investigation of group improvisation related to specific compositional ideas. An example of such a technique is their improvisations in a distinct “pointillist” style, where musical phrases are divided among the musicians, each providing a single note (or small group of notes) to a collectively improvised phrase.
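
The pointillist technique described above can be given an algorithmic caricature. The round-robin assignment below is my own hypothetical formalization; the collective's actual practice is a performance convention, not a program:

```python
# Notes of one phrase are dealt out among the players, each sounding
# only their own notes; rests keep the parts time-aligned so the
# phrase emerges from the ensemble as a whole.
def distribute(phrase, players):
    """Round-robin assignment of each note to one player."""
    parts = {p: [] for p in players}
    for i, note in enumerate(phrase):
        chosen = players[i % len(players)]
        for p in players:
            parts[p].append(note if p == chosen else "rest")
    return parts

parts = distribute(["C", "D", "E", "F", "G"], ["flute", "trumpet", "cello"])
print(parts["flute"])  # ['C', 'rest', 'rest', 'F', 'rest']
```

In improvised practice the assignment is of course negotiated by ear rather than computed, which is precisely what makes the result musically interesting.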

 

Theoretical inspirations

Stockhausen has made a number of basic compositional reflections on themes that touch upon the scholarship project. These relate to the form and structure of music, to the spatial design of a sound image, and to performative aspects. The article “…how time passes” (Stockhausen 1957) discusses how rhythm relates to pitch, arguing that the fundamental frequency of a note and the tempo of a rhythm belong to one unified and coherent dimension. This is also elaborated in Curtis Roads’ reflections on musical time in “Microsound” (Roads 2001, pp. 3–40), where the whole range of the time scale from “supra” (eternity) to microsecond level is discussed. I regard these reflections on time in music as very relevant for the scholarship project. In relation to the use of time in musical form, Stockhausen’s “moment form” adds a useful perspective. This is a form without a dramatic curve of tension, but one that pursues musical situational images or moments that follow in sequence, one after another. Wave crest follows wave crest more or less continuously throughout the composition. The work is concluded either when the composer decides that the material has been exhausted, or after a predefined time has elapsed. “Kontakte” (Stockhausen 1960) is a work that exemplifies this form concept. This way of thinking about form is to some extent taken up by performers within free improvisation. Chris Koenigsberg writes on the theme:

You know, it (Stockhausen's paper, and my exigesis etc.) all relates to that
wonderful moment in "Kontakte" when the "buzzing airplane" sound slows down
and decomposes into a series of individual pulses, each with the same pitch as
the overall stream had in the beginning....
(Koenigsberg 1996)
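
The continuity between rhythm and pitch that Stockhausen and Koenigsberg describe can be made concrete with a small arithmetic sketch (the figures are my own, chosen for illustration):

```python
# A pulse at a musical tempo is a very low frequency; repeatedly doubling
# the rate ("raising it by octaves") carries it into the audible pitch range.
bpm = 120
rate_hz = bpm / 60.0              # 120 BPM = 2 pulses per second
for octave in range(7):
    print(octave, rate_hz * 2 ** octave)
# after 6 octave doublings the 2 Hz pulse runs at 128 Hz: no longer
# perceived as a rhythm, but as a low pitch (roughly the C below middle C)
```

This is exactly the transition Koenigsberg hears in “Kontakte”, run in the opposite direction: a tone slowed down until it decomposes into countable pulses.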

 

Theoretical reflections on the use of algorithms and artificial intelligence can be found in Douglas Hofstadter’s writing, both in “Virtual Music” (Cope 2001) and earlier in ”Gödel, Escher, Bach” (Hofstadter 1979). Hofstadter both reviews examples of use and poses philosophical questions about the consequences of artificial intelligence used in creative and constructive work. David Cope’s work with the computer program “EMMY” (Cope 2001) has created a storm of opinions around the true nature of the classical creative genius, and as a consequence, Cope has had great difficulties getting the works created with this tool performed. In some instances, he has kept the creation process of a work concealed, with the consequence that the work in fact has been performed and received good reviews. Cope’s work with “EMMY” is, to a great extent, based on the recombination of elements from a database, where the database has come about through analyses of existing works. One of the applications of EMMY has been to create new works in well known composers’ styles, for example Bach’s chorales and inventions. In part, this has been so successful that even expert musicologists have been unable to tell which works were written by Bach and which were written by Cope/EMMY. The debate around EMMY has mainly been concerned with the fact that many consider it an insult to the pride we take in the human creative force that a computer may accomplish something similar. Cope emphasizes that EMMY is not able to create anything truly original, but recombines material from the database.
Jan Brockmann [3] has offered the following perspective on this issue: if Bach’s music contains such clear characteristics of Bach that one may cut it into fragments and let a computer assemble them and still have music that sounds like Bach, this says more about Bach’s genius in the form of the fragments’ integrity than it says about the computer’s genius in the form of re-composition.
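
The recombination principle described above can be caricatured in a few lines. The fragment database and the chaining rule below are entirely hypothetical and far simpler than EMMY's actual analysis:

```python
# Toy recombination: fragments from a "database" of existing phrases are
# chained so that each new fragment begins on the note where the previous
# one ended, preserving local continuity between the recombined units.
import random

database = [["C", "E", "G"], ["G", "B", "D"], ["D", "F", "A"],
            ["A", "C", "E"], ["E", "C", "G"]]

def recombine(db, length, seed=0):
    random.seed(seed)
    out = list(random.choice(db))
    while len(out) < length:
        # only fragments that begin on the current last note may follow
        candidates = [f for f in db if f[0] == out[-1]]
        out.extend(random.choice(candidates)[1:])
    return out

melody = recombine(database, 10)
print(melody)
```

Even this toy version illustrates Brockmann's point: whatever coherence the output has comes from the integrity of the fragments, not from the trivial chaining rule.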

 

Bentley (2002) has collected a number of articles on the use of algorithms in creative and artistic processes, whereas Strogatz (2003) and Barabási (2002) focus on interdisciplinary relations and thus aim at showing the algorithms' general validity. This is done by focusing on analogies between fields of expertise and how algorithms and function descriptions act in similar ways in various professional contexts. An example that Strogatz focuses on is the seemingly spontaneous synchronization that may be observed in settings as varied as fireflies’ synchronous light flashes, the synchronization of a great many electronic oscillators, the tendency of audience applause to synchronize and accelerate after a successful concert, and the various parameters that affect our periodic sleep and wake cycle. Barabási discusses the reciprocal influences we find between nodes in social, financial, physical or electronic networks. It is assumed that such nodes may, and possibly should, also be found in an algorithmic composition tool with a unified design.
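
The spontaneous synchronization Strogatz describes is commonly modeled with the classic Kuramoto model of coupled oscillators. The sketch below is my own minimal illustration of that model, not an example taken from the book:

```python
# Kuramoto model: each oscillator is nudged toward the mean phase of the
# others; with coupling strong enough relative to the frequency spread,
# the initially scattered phases lock together (like fireflies flashing).
import math, random

def kuramoto(n=20, coupling=2.0, steps=2000, dt=0.01, seed=3):
    random.seed(seed)
    phases = [random.uniform(0, 2 * math.pi) for _ in range(n)]
    freqs = [1.0 + random.uniform(-0.1, 0.1) for _ in range(n)]  # near-identical oscillators
    for _ in range(steps):
        new = []
        for i in range(n):
            pull = sum(math.sin(p - phases[i]) for p in phases) / n
            new.append(phases[i] + dt * (freqs[i] + coupling * pull))
        phases = new
    # order parameter r: 0 = scattered phases, 1 = perfect synchrony
    r = abs(sum(complex(math.cos(p), math.sin(p)) for p in phases)) / n
    return r

print(round(kuramoto(), 2))  # close to 1.0: the ensemble has synchronized
```

The same dynamic of mutual adjustment toward a common pulse is, of course, familiar to any ensemble of improvising musicians settling into a shared tempo.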

 

The music theoretical works of Allen Forte (Forte 1977) have been used as inspiration for compositional algorithms within the scholarship work. Forte’s ideas on interval vectors contain very interesting elements with reference to musical tension, but also have clear weaknesses. One weakness, for example, is that the theory greatly reduces the number of interval classes, so that interval configurations that traditionally would have been designated as major or minor triads appear as equivalent within the vector classification. Regardless of the “correctness” of the vector theory, I view the technique as a potent generator of musical material. It is my firm belief that nothing in music is absolutely true anyway, and appropriateness can only be judged by interpretation and context. The following citation might be used as a perspective:

“If it is easy to define melody, it is much less easy to distinguish the characteristics that make a melody beautiful. The appraisal of a value is itself subject to appraisal. The only standard we possess in these matters depend on a fineness of culture that presupposes the perfection of taste. Nothing here is absolute except the relative.” (Igor Stravinsky 1942)
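
The equivalence of major and minor triads under Forte's classification, mentioned above, can be verified directly from the standard definition of the interval-class vector:

```python
# The interval-class vector counts the six interval classes (1..6)
# among all pairs of a pitch-class set. Major and minor triads yield
# the same vector, illustrating the reduction discussed above.
from itertools import combinations

def interval_vector(pcs):
    vec = [0] * 6
    for a, b in combinations(pcs, 2):
        ic = min((a - b) % 12, (b - a) % 12)  # interval class, 1..6
        vec[ic - 1] += 1
    return vec

major = interval_vector([0, 4, 7])   # C major triad: C E G
minor = interval_vector([0, 3, 7])   # C minor triad: C Eb G
print(major, minor)  # [0, 0, 1, 1, 1, 0] twice: indistinguishable
```

Both triads contain one minor third, one major third and one perfect fourth/fifth; since the vector ignores interval ordering and inversion, it cannot tell the two apart.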

 

George Russell’s book “Lydian Chromatic Concept” (Russell 1959) represents one of the first formulated theories of jazz improvisation. The theory can be said to have been groundbreaking in that it has inspired a number of prominent improvisers, for example the modal playing of Miles Davis and John Coltrane (Berendt 1992). The general validity of the music theoretical elements that are used to construct the Concept may well be an object of discussion. As an example, Russell insists on always relating to the Lydian mode as a starting point for all analyses. This is drawn to an extreme in that, in addition to the Lydian scale, he constructs a number of augmented and diminished scales. By combining these scales, a chromatic scale can be constructed in which all 12 tones are defined as members. Russell’s theory sets any chromatically based phrase in relation to a scale and a fundamental, and it insists that the chromatic scale is derived from the Lydian mode and its extensions. The concepts “ingoing” and “outgoing” melodies are used to designate melodic phrases that move within or outside one of the defined scales. The theory clearly relates to traditional tonal jazz harmony, but at the same time points towards such a degree of chromaticism that one may start talking about a type of atonality. The theory’s weakest point, scientifically speaking, is that the framework for analysis is partly based on “our own aesthetic judgment” as a starting point for the analysis. I quote the definition of an “Ingoing Horizontal Melody” in order to highlight the situation:

”Ingoing Horizontal Melody: A scale (Major, Blues, Auxiliary, Diminished) of the Lydian Chromatic Scale determined by the resolving tendency of two or more chords, the key of the music or our own aesthetic judgement, used as a frame for absolute or chromatically enhanced melodies.” (Russell 1959)

This seemingly approximate treatment of theoretical facts shows that a theory of improvisation may be fruitful in practice, even if it, in scientific terms, is ambivalent to an extent that borders on being self-contradictory. I consider the criterion “our own aesthetic judgment” to be contradictory to the criteria that relate to the objective determination of tonal fixed points and gravitation. However, our own aesthetic judgment is the only truly important factor in determining whether something works in music. Still, it would be hard to construct a theory based upon it.
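
Russell's derivation of the chromatic scale from the Lydian mode and its auxiliary scales can be sketched as simple set arithmetic. The pitch-class sets below are my own reading of the Concept and should be checked against Russell's tables before being cited:

```python
# Pitch classes on C (0 = C, 1 = C#/Db, ...). The union of the Lydian
# scale with Russell's auxiliary scales accounts for all 12 tones,
# which is the sense in which the chromatic scale is "derived" from Lydian.
LYDIAN         = {0, 2, 4, 6, 7, 9, 11}     # C D E F# G A B
AUX_AUGMENTED  = {0, 2, 4, 6, 8, 10}        # whole-tone scale
AUX_DIMINISHED = {0, 2, 3, 5, 6, 8, 9, 11}  # whole-half diminished
AUX_DIM_BLUES  = {0, 1, 3, 4, 6, 7, 9, 10}  # half-whole diminished

chromatic = LYDIAN | AUX_AUGMENTED | AUX_DIMINISHED | AUX_DIM_BLUES
print(sorted(chromatic))  # all 12 pitch classes are present
```

Whatever one thinks of the theoretical grounding, this kind of set-based formulation is what makes the Concept usable as raw material for compositional algorithms.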

 

Closing remarks

In this survey, some references to previous works within composition, technology and improvisation have been reviewed. Within the field of composition, I have referred to some works that may be said to connect with my scholarship project, and that may show the ongoing work as a natural continuation of the development over the last 20 years, with roots back to the middle of the 20th century. Within the field of improvisation, it is difficult to point out very clear references, partly because of the field's inherent reluctance towards precise description and documentation. I have referred to some important conceptual trends, mainly from the 1960’s. Further, I have used some examples of more recent performers and their approaches to formal, structural and performance related aspects. Some theoretical aspects of algorithms, composition and improvisation have also been set forth, and some references made as to how these have affected the scholarship work.

 

References

The references are to be found in a separate document here.



[1] Private conversations and e-mail communication between Vinjar and Brandtsegg in the spring of 2005.

[2] My transcription (from memory) of what Ingalls said during a lecture at the Sounds’ Electric conference in Dundalk, Ireland November 2007.

[3] Private conversations between Brockmann and Brandtsegg July 2005.