By Jenifer Jaseau
Pre-MIDI paper
Submitted Winter 2010
Box of Cards Jaseau 2
Introduction
The development of computer music began with the innovations of Max Mathews at Bell Telephone Laboratories. His work on the controlled generation of sound led to the Music-N series, an early family of computer sound synthesis programs. The first efforts at digital synthesis were non-real-time, and after many trials by composers and modifications by technicians, a need for real-time operation emerged. The GROOVE system grew out of the Music-N series in response to that need, becoming a template for live computer music systems today. These two architectures became the basis for the computer music systems we use today, including Max/MSP and Kyma.
The Beginning
Max Mathews came to Bell Telephone Laboratories in New Jersey after graduating from M.I.T. with a doctoral degree in electrical engineering, and began developing computer equipment to study telephones. His task was to create a listening test for telephones in order to judge the quality of a sound received. He constructed a device that would convert an analog representation of sound into digital form, enabling the sound to be stored in a computer, and a companion device to get the sound back out of the computer and into an analog representation of sound that a listener could hear. In this way, he developed a way to store and manipulate sound
inside a computer. Sound reaches a listener as variations of pressure in the air; how a sound is heard depends on how that air pressure varies over time. If a computer can generate a pressure function, then it can generate sound. All that is needed for a computer to make sound is a way to turn its numbers into a smoothly varying voltage driving a loudspeaker; this combination of devices comes very close to being a universal source of pressure functions.
The devices he invented became the fundamental seed of all computer music: the analog-to-digital converter and the digital-to-analog converter (ADC and DAC respectively). He is now known as the "Father of Computer Music" because of his many contributions toward making sounds with a computer and for applying digital techniques first to speech transmission problems and then to music.
DAC
The DAC takes a binary number (such as 01101) and expands it as a sum of its digits, each weighted by a power of two. The input part of the converter represents the five-digit number as voltages on five lines going into the switch controls. The digit "1" corresponds to a positive voltage, and the digit "0" to a negative voltage. A switch control sends a positive value when its switch is closed, and a negative value when it is open. The switches were transistors[1], replacing earlier, slower switching devices.

[1] A term coined by John Pierce; a semiconductor device commonly used to amplify or switch electronic signals.
More digits were needed to obtain finer samples, which could easily be done by adding more switch branches.
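The weighted-sum conversion described above can be sketched in a few lines of Python. This is an illustration of the principle only, not Bell Labs' circuitry; the ±1-volt contribution per switch is an assumed value:

```python
def dac_output(bits: str, v: float = 1.0) -> float:
    """Convert a binary string the way the switch DAC does: each digit
    contributes +v (for "1") or -v (for "0"), weighted by its power of two."""
    total = 0.0
    for position, bit in enumerate(reversed(bits)):
        weight = 2 ** position                     # place value of this digit
        total += weight * (v if bit == "1" else -v)
    return total

print(dac_output("01101"))  # 1 - 2 + 4 + 8 - 16 = -5.0
```

Adding a sixth switch branch would simply extend the loop by one more power of two, which is why resolution was easy to increase.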
ADC
The ADC was the equivalent of a DAC with the exception that it needed a
feedback mechanism and a programmer, which was a small computer. The ADC had
three essential components: the comparer, the programmer and the DAC.
The ADC would convert sound pressure variations into a series of discrete numbers. First, a voltage would be applied to the analog input terminal; then the programmer would set its calculations to zero for each of the five branches. After a cycle of five decisions from the comparer, one per digit, the five branches would output a digital equivalent of the analog input.
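One way to read the comparer/programmer loop is as successive approximation: the programmer tries each digit from the heaviest weight downward, and the comparer reports whether the DAC's feedback voltage has overshot the input. The following Python sketch is an interpretation of that description, not documented Bell Labs behavior; the bit count and voltage scale are assumptions:

```python
def adc(analog_in: float, n_bits: int = 5, full_scale: float = 32.0) -> str:
    """Successive-approximation sketch: one comparer decision per digit,
    each decision consulting the internal DAC once."""
    bits = 0
    for position in reversed(range(n_bits)):          # heaviest digit first
        trial = bits | (1 << position)                # tentatively set the bit
        dac_out = trial * full_scale / (1 << n_bits)  # DAC feedback voltage
        if dac_out <= analog_in:                      # comparer's decision
            bits = trial                              # keep the bit
    return format(bits, f"0{n_bits}b")

print(adc(13.0))  # → "01101"
```

Because the loop consults the DAC once per digit, this reading also explains why the ADC ran n times slower than the DAC.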
The ADC was n times slower than the DAC (n representing the number of digits). Together with the DAC, it became an essential tool of computer music. Recalling digital data within the limits of the system required a steady sampling rate, for any variation in the sampling rate would produce fluctuations similar to wow and flutter in the analog domain. The number of samples was greater than a computer's magnetic core memory could hold, so the computer would store samples in bulk. Samples were stored and retrieved in sequence on digital magnetic tape, which recorded in groups called records, and these records
[2] Mathews, Technology of Computer Music, 26
would not store continuous data. A small core memory, or buffer, had to be inserted between the tape and the converter in order to maintain a constant sampling rate, with the converter under the control of an external oscillator. Eventually the digital tape transport was associated directly with a computer: core memory served as the buffer, and the functions of the control circuits were accomplished with a program. This allowed the same computer that synthesized the sounds to communicate directly in sound with the external world, that is, with human listeners.
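The buffering idea can be sketched as follows: records arrive from tape in bursts, while the converter drains one sample per tick of its steady clock; the buffer decouples the two rates. This is a modern analogy in Python with invented record and tape sizes, not the original control circuitry:

```python
from collections import deque

RECORD_SIZE = 512        # samples per tape record (illustrative value)
# A "tape" of 8 records holding the samples 0..4095 in order
tape = [list(range(i, i + RECORD_SIZE)) for i in range(0, 4096, RECORD_SIZE)]

buffer = deque()
played = []

while tape or buffer:
    if len(buffer) < RECORD_SIZE and tape:
        buffer.extend(tape.pop(0))      # bursty refill from the tape record
    played.append(buffer.popleft())     # converter drains one sample
                                        # per tick of its steady clock
```

Because the converter only ever reads from the buffer, its output rate stays constant even though the tape delivers data in irregular bulk records.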
Once Mathews had a way to make a computer produce sound, the next logical step was to write programs that could play music. He made several versions of sound synthesis software that gradually improved, using less computation and offering a more user-friendly environment for the composer's use. This family of programs is commonly referred to as the Music-N series. It opened up many areas and opportunities for other programmers and composers to make new sounds and to build environments for electronic composition.
Music-N
Music I, the first of the Music-N series, was the first attempt to apply a computer to musical goals; it was developed in 1957 on the IBM 704 computer[4]. The IBM 704 was
[3] Chadabe, Electric Sound, 108
[4] The very next model after IBM's first ever computer, the IBM 701
located at IBM's world headquarters in New York City. Mathews would run his music program there, obtain a digital magnetic tape, and then bring that tape back to Bell Labs to convert it into sound using the DAC. Bell Labs was the only place with the right DAC hooked to a digital tape transport that could play a computer tape. This first sound-generating computer program had only a single voice and one waveform, every note had the same rise and decay characteristics, and only three expressive parameters could be changed: pitch, loudness, and duration. The first piece ever composed digitally was a seventeen-second piece by psychologist Newman Guttman entitled In the Silver Scale. Mathews said the piece was quite terrible, and the program was terrible, yet it was the first[5]. The later Music-N programs were written on an IBM 7094, notable as one of IBM's first transistorized computers and very effective for work in speech processing and visual signal processing. Some notable early operating systems, such as Bellsys 1 and 2, were developed for the IBM 7094.[6]
With Music II, an addition of three voices, totaling four independent voices, had been made. Sixteen waveforms were stored in memory, thus introducing the wavetable oscillator, a new concept that allowed a user to select from arbitrary waveforms such as sinusoidal, square and sawtooth waves.
The oscillator works by stepping through the table by the size of an increment and reading the wavetable content at that position. The wavetable is a list, stored in memory, of the sample values for one cycle of the waveform, which the computer first calculates from a mathematical formula. After calculation, the samples are sent to the DAC. To generate a periodic sound, the program reads through the wavetable again and again, sending the samples it reads to the DAC in order to convert the waveform to sound.

[5] Chadabe, 109
[6] Roads, Music Machine, 6
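The process can be sketched as a table-lookup oscillator. This is a generic modern formulation rather than the Music II code; the table size and sampling rate are assumed values:

```python
import math

TABLE_SIZE = 1024
SAMPLE_RATE = 44100

# One cycle of a sine wave, precomputed from a mathematical formula
table = [math.sin(2 * math.pi * i / TABLE_SIZE) for i in range(TABLE_SIZE)]

def oscillate(freq_hz: float, n_samples: int):
    """Yield n_samples of a periodic wave by rereading the table forever."""
    phase = 0.0
    increment = freq_hz * TABLE_SIZE / SAMPLE_RATE  # table steps per sample
    for _ in range(n_samples):
        yield table[int(phase) % TABLE_SIZE]        # truncating table lookup
        phase += increment

samples = list(oscillate(440.0, SAMPLE_RATE))       # one second of A440
```

Changing the increment changes the pitch, and swapping the table's contents (square, sawtooth, and so on) changes the timbre without touching the oscillator itself.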
Music III followed in 1960. Mathews wanted to create a set of universal building blocks that gave a musician the freedom, and the task, of putting the blocks together into his or her own instrument. He thought that if someone knew how to patch a Moog synthesizer, then they could patch together unit generators[7] in a computer. This version introduced the concept of the unit generator. A note list was also implemented, listing notes in order according to their starting times, and each note could carry its own parameters.
Shortly after Music III, word got out that there was a computer able to make music. The news excited many people: a few composers and a collection of folks from Princeton University, such as Hubert Howe, Godfrey Winham, and Jim Randall, were curious about the new development of computer-based music and visited Mathews at Bell Labs. But Bell Labs was not the only place that had a computer capable of making music.
In 1961, John Pierce, director of Bell Labs and a crucial supporter of Mathews' work, heard of the piece Illiac Suite for String Quartet and was interested in the work
[7] Software modules for signal processing. Languages based on the concept of unit generators have been the foundation of most research in digital sound synthesis algorithms to date. Roads, Foundations of Computer Music, 370
being done by the programmers and researchers there. Since Pierce did not know the names of the researchers, he addressed his letter to the Illiac computer at the University of Illinois as "Dear Computer," expressing his interest in the computer that was composing music. The researcher at Illinois, Lejaren Hiller, then invited Pierce for a visit; while there, Pierce met Harry Partch and graduate student James Tenney.
James Tenney was the first composer to take a serious interest in computer music; he came to Bell Labs in 1961 to work in computer music and remained until 1964, becoming one of the first composers to work extensively in the area of digital synthesis. During his time there, Tenney created a series of tape pieces in addition to writing software that used a computer for complex, task-oriented musical functions in ways that had not yet been explored in music.
Mathews' software made music in a traditional way, starting from a printed score that was encoded on punch cards. Tenney had a different approach: he worked on realizing ideas drawn from information theory, stochastic music, and the theories of John Cage. Tenney created several works while at Bell Labs, such as Analog #1: Noise Study (1961), inspired by traffic noises inside the Holland Tunnel; Dialogue (1963), exploring parameters controlled according to probability distributions; Phases (for Edgard Varèse) (1963), a piece said not
to have been made by a computer or a human, but by a hybrid of the two, and Ergodos II
(for John Cage) (1964) created as a series of parameters and probabilities that change
over time.
In 1963, Mathews was joined by Joan Miller to work on Music IV. It was a little more convenient to use, although not more musically powerful than the previous version, being written with the help of a macro-assembly program that had been recently invented. The same computer, an IBM 7094, was also available at nearby Princeton University, giving the people there easy access to run Music IV. They made improvements in user-friendliness and called their version Music IVB.
John Chowning, then a graduate student at Stanford University, was one of the first people to work with Music IV. He had read Mathews' article "The Digital Computer as a Musical Instrument" in Science magazine, and in 1964 went to meet Mathews at Bell Labs. Chowning came home with Music IV and a box of cards. Chowning and his friend David Poole began working on Music IV, for they had an IBM 7094 in their facility, and the Artificial Intelligence laboratory (AI lab) had a PDP-1 used to convert samples into sound. Later, in 1966, the AI lab bought a PDP-6, and Chowning and Poole moved their work to it.
In 1965 there was a great proliferation of different computers, which called for a non-machine-specific language. Music V was written in the high-level language FORTRAN so that it could run on many different computers. The previous Music-N programs had been written in low-level, machine-specific assembly language, which meant that a program could not run on any computer other than the one it was written on. The goal of Music V was to be a universal program, using FORTRAN as its compiler language; FORTRAN had already been implemented on most machines. Several people worked on Music V, including Miller, in addition to some new faces such as F. Richard Moore, Jean-Claude Risset, and Vladimir Ussachevsky.
The fundamental concepts of Music V were the unit generators and the stored functions. Music V was a modular system of software-defined unit generators. It had OSC acting as a waveform generator, AD2 as a two-input adder, and RAN as a random number generator, all of which could be linked together to create a new instrument by connecting unit generators in a variety of ways. The stored functions became the score, analogous to a traditional musical score except that this score specified all the acoustic properties of each instrument, including information about the discrete sounds called notes. Each note contained an instrument definition, an algorithm for the instrument's unit generators, a starting time, and a duration.
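Music V's patching idea can be imitated with Python generators. OSC, AD2 and RAN are the names from the text, but these toy implementations are illustrative assumptions, not the original FORTRAN:

```python
import math
import random

SAMPLE_RATE = 44100

def OSC(freq, amp):
    """Waveform generator: an endless sine at the given frequency."""
    phase = 0.0
    while True:
        yield amp * math.sin(phase)
        phase += 2 * math.pi * freq / SAMPLE_RATE

def RAN(amp):
    """Random number generator: uniform noise in [-amp, amp]."""
    while True:
        yield random.uniform(-amp, amp)

def AD2(a, b):
    """Two-input adder: sums two signal streams sample by sample."""
    for x, y in zip(a, b):
        yield x + y

# "Patch" the unit generators into an instrument: a tone plus faint noise
instrument = AD2(OSC(440.0, 0.8), RAN(0.05))
samples = [next(instrument) for _ in range(1000)]
```

The point of the design is visible in the last two lines: the same small set of generators can be rewired into entirely different instruments without changing the generators themselves.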
The program required only a small amount of new coding to put it on new and different computers. It was efficient and ran rapidly. It was the means to spread computer music into the world. The first main users of these music programs were Guttman, Pierce and Mathews, who were all fundamentally scientists. Mathews wanted musicians to try the system and see if they could learn the computer language and express themselves with it. Mathews and Pierce approached several composers, including Aaron Copland, who declined their inquiry, hoping someone would be willing to embark on the journey of composing with a computer. Some composers replied, and Music V was sent out to them with two boxes of punch cards and a note saying "Good Luck!"
There were no live performances; all compositions were disseminated as recordings on tape, because the computers were far too large to be carried into concert halls. What was the attraction of using a mainframe computer to compose music? How did composers use computers to compose? Chowning reflects, "Generality was a great part of the attraction," meaning that any sound-generating algorithm could be tried, tested, tweaked and perfected in software.[8] This was true for all software synthesis approaches.
Work at Bell Labs went on, continuing to develop timbres and the use of random noises of various sorts. After Tenney left, Risset was invited from France to work as a composer in residence on his thesis, analyzing the timbre of a trumpet's sound. His work led to new techniques in which analysis became analysis-for-synthesis, the most powerful tool for analyzing natural musical sounds.[9]
Music V had progressed into a flexible sound synthesis program. The only rigid aspect was the input process, which led to the need for a graphical display system for interaction. Mathews wanted to broaden, and make easier, the techniques for specifying compositions. He felt that the Music V language was more important than the program
[8] Chadabe, 127
[9] Roads, The Music Machine, 8
itself. One attribute of the graphical system was that you could describe an accelerando with a single line, the value of the line being the tempo. This is very similar to the graphics tablet of Xenakis's UPIC instrument, in that one could draw a musical function directly.
Several 'Music' versions spawned from Music V, such as SCORE, Music 360 and Music 11. SCORE came out of Music 10, also at Stanford, using the then-new PDP-10. In 1974, John Chowning's discovery of frequency modulation synthesis was licensed to Yamaha (the patent was issued in 1977), an integral part in the founding of the Center for Computer Research in Music and Acoustics (CCRMA). There was some success with real-time programs, such as the Samson Box, which used the PDP-10 to run a real-time version of Music IV. IRCAM (Institut de Recherche et Coordination Acoustique/Musique) was also involved: they worked with the PDP-10 and the PDP-11 in 1977, and in 1976 they created sounds with Music V, Music 10 and other software programs from CCRMA. Barry Vercoe, while at Princeton, developed Music 360 between 1968 and 1971; Music 360 ran about five times faster than Music IVBF. In 1973 at MIT, Vercoe developed Music 11, a sound synthesis language for a DEC PDP-11 computer with a floating-point processor, oriented toward real-time work. Tests were run at a 40 kHz sampling rate using 1024-word function tables. The last of this line of development took place at the Computer Audio Research Laboratory at the Center for Music Experiments at the University of California at San Diego (UCSD) in 1979. After Music-N was discontinued, Moore wrote software that
was called cmusic[10]. By the end of the 1970s, most of the foundational work in computer music had been started, worked on, and brought close to maturity, thanks to contributions from people at many institutions. Mathews later spoke about the early reactions people had to computer music: people were skeptical and fearful, and lacked comprehension of it. Composers were the most interested, while performers were not interested at all in the new developments. By 1980, many computer music centers had been established and were providing training. The computer programs were used to specify a composition in order to create it.
Box of Cards
Producing a piece required long runs of calculations, meaning that you had to wait before ever hearing a sound. Music V worked in several steps: first, sound was generated by the main computer, which would accumulate an inaudible table of numbers known as samples and store those samples as digital data; then the samples were converted to sound and stored on a normal audiotape. People from Princeton University would drive to Bell Labs to convert their paper cards onto a magnetic audiotape. It was primarily Moore's job to convert these samples to sound. This process would take about two weeks. It was common to be surprised by the end result of the tape, for the sound was not as the composer had expected. Batches were sent in
[10] An acoustic compiler program
[11] Roads, The Music Machine, 8
30-second segments because the composers had to be sure they worked. Then the process would start all over again: using the computer program to punch audio information onto the paper cards, then sending the cards to Bell Labs to be converted, in hope that this time the sound would be representative of what the composer had tried to specify. Barry Vercoe, who was working at Princeton in 1968 and would continually drive to Bell Labs, recalled:
It was the only working converter and it was a long trek… one had to go to Bell Labs to convert the sound and drive through this dreadful traffic, and you could only play it when you got back to Princeton and think, 'My God, that's not what I wanted at all'[12]
This angst of driving and waiting to hear the result of the box of cards greatly increased the desire for real-time performance capability. Although the alterations were great strides compared to Music I, the program still lacked some intelligence; it was not intuitive in any way, and it was not able to operate in real time. In 1967, Mathews and Moore began developing the GROOVE system, achieving real-time editing, control and performance of musical scores by 1970. Commercial hybrid synthesizers had appeared, using a "keyboard scanner" program to record information as it was played and relay that information to the synthesizer[13]. Although many attempts were made, such as the Illiac computer in 1967 at the University of Illinois, or the PIPER 1 from the University of Toronto in 1965, it was again Max Mathews who pioneered the first fully developed hybrid system.
[12] Chadabe, 113
[13] Roads, Computer Music Tutorial, 614
GROOVE
GROOVE stood for Generated Real-time Output Operations on Voltage-Controlled Equipment. It was a hybrid system, meaning that it was a digital computer used to control an analog synthesizer. F. Richard Moore worked with Mathews to develop the system. Its program was linked to a real-time generation system that allowed a composer to be in direct contact with the processes of digital synthesis. The great advantage of the GROOVE system was that it had real-time editing, control and performance of musical scores. It had many input devices, such as a joystick and knobs, for conducting a score that had been prepared beforehand and input into the computer.
The system treated concepts such as time, notes, duration, and velocity as building blocks. These building blocks could then be put together to create a 'score', a list of defined and undefined parameters that would change over time, either from stored instructions or from performer input, before being passed on to the synthesis system. The computer in use was a Honeywell DDP-224 minicomputer; attached to it were a large auxiliary disk drive, a digital tape drive, an interface for the analog device that incorporated twelve 8-bit and two 12-bit DACs, and sixteen relays for switching functions. Information was updated every one-hundredth of a second. There was also an additional set of converters that provided the horizontal and vertical coordinates for a cathode ray display unit that displayed visual representations of
control functions for the analog synthesizer. Functions of time specified the way frequencies and the other parameters one controls in an analog synthesizer would change over time to make music. This graphical monitoring system had software that generated a linear time-scale on the horizontal axis. The time span was ten seconds and acted as a timing block, or page; the screen would automatically clear itself at the end of the page to display the next page, and ten different functions could be displayed without crowding. The monitoring system, linked with the real-time generation system, allowed an intimate connection between the composer and digital synthesis. Instructions could be entered at a terminal using a mnemonic code that was translated by the computer into sequential
control values for the analog device interfaces. Since air pressure varies over time to give a listener the perception of sound, so must a computer-generated sound change over time. It is not just a command that changes a sound, but a gesture from a human, occurring over a span of time, that allows music to breathe as if coming from a living performer.
There were supporting input devices that provided varying functions during the performance of a computer score. The computer score contained a list of control data that changed over time and allowed specified parameters to be modified in real time. These support devices consisted of a 24-note keyboard (the ancestor of today's MIDI triggers), four rotary knobs, and a 3D joystick. The voltage outputs from the knobs and joystick were
digitized, while the keyboard was connected directly to a 24-bit binary register, allowing each key to control the state of a bit. The output of control data was regulated with a variable-rate clock pulse generator. The computer acted as a control device in the hybrid system, not as a direct source of audio signals. Adjusting the clock rate would vary the rate of change of events, not the events themselves; a performance could therefore be halted at any given time.
The analog equipment required the manual specification of the basic controls associated with its circuits. All interconnections for the whole system were routed via a five-hundred-element central patch field.
The synthesizer was assembled from devices built out of components that were lying around, or the team would build new ones. It had a patching system of patch boards that plugged into a holder, allowing users to change connections rapidly. It also had a display system that showed a subset of 14 lines of control information as a cursor moved across the monitor. This made for continuous control of parameters, as opposed to event-based control, so one could record a performance on the
[14] A transmission system that carries two or more individual channels over a single communication path.
synthesizer by storing its performance information into memory in the form of functions
of time for each synthesis parameter. The functions could be edited to change the
performance.
The software served musical as well as research applications. It recorded time functions in sampled form at 100-200 Hertz (Hz), storing this information on a disk. The program played back the functions and combined them with other functions of time generated by a performer playing on the sensors of the instrument; these combined functions were used to control the analog synthesizer. The program had editing facilities for changing stored functions: a change could be made to one sample of one function without affecting anything else. One could then get a printout of the functions, allowing musicians to edit their improvisation. The functions were displayed on a scope that could move to a particular sample in the function. One could then hear a sustained sound, flip the editing switch on, change the value of the function, and hear what it was doing to the sound.[15] The data being edited was the stored time function on disk, not the live data coming into the computer from the various input devices.
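The workflow just described, sampled functions of time combined with live input and editable one sample at a time, can be sketched like this; the control rate is from the text, but the function values are illustrative:

```python
CONTROL_RATE = 100                  # control samples per second (100-200 Hz)

# A stored function of time read back from "disk": one second, constant 0.5
stored = [0.5] * CONTROL_RATE

# A function generated live by a performer on the instrument's sensors
live = [0.1] * CONTROL_RATE

# Playback combines the stored and live functions to drive the synthesizer
control = [s + p for s, p in zip(stored, live)]

# Editing changes one sample of one stored function, and nothing else
stored[42] = 0.9
```

Because edits target the stored function rather than the live stream, a musician could refine a recorded improvisation sample by sample and replay it.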
People of GROOVE
A few individuals chose to work with the GROOVE system, such as Laurie Spiegel and Emmanuel Ghent, each working in different ways. Spiegel began working in 1973 and was interested in the real-time interaction process and compositional logic. She made several pieces, such as Appalachian Grove (1974), Drums (1975) and The Expanding
[15] Roads, The Music Machine, 8
Universe (1975), among many others. Drums was composed using a bank of resonant (high-Q) filters, built by Mathews, which oscillate if pulsed; she sent the amplified sound of literally turning a digital bit on and off through the filters[16]. In other compositions she used analog and digital input and output, hardware modules, library-resident code modules that had been written by previous users, and gizmos built by Max Mathews.[17] In 1974 she wrote software to synchronously compose music and animated video, calling it VAMPIRE[18]. She thought of it "not as sonic art, but as the art of composing abstract pattern of change within time." (Spiegel, 191) This program allowed for control of sound based on the GROOVE system, with the addition of a drawing program that had not existed before. VAMPIRE died in 1979 when its CORE was dismantled, "the digital equivalent of having a stake driven through its heart." (191)
Spiegel recalled being one of the first people to work with computers as a way to generate and control sound:
Back then it was incredible amounts of work to get the simplest things to happen
musically and it really generally sounded pretty awful compared to today's refined
highly-controllable sounds. There was a lot of excitement but a lot of frustration
too. People gave the few of us who were into computers for doing music a lot of
grief and flack because computers were still viewed by the general public
(including just about everyone in the arts) as "dehumanizing" - cold clinical, to be
feared, in general inheriting the characteristics of those who tended to own
computers. Access was extremely limited. And at Bell Labs we had to pretend not
to be doing music and were always afraid of getting caught because it was not
really permitted under the "regulated monopoly" status that BTL had up until they
broke the company up. But you would have loved the sense of discovering sounds
and ways of doing music that were altogether new, even if it did sometimes take a
whole 6 months work to get something like a computer-controlled reverb to work,
including not knowing if it ever would or not.
[16] Correspondence with Spiegel
[17] Chadabe, 158
[18] Video And Music Program for Interactive Real-time Exploration
The most notable person working with the GROOVE system was Ghent. He was already interested in rhythmic and tempo relationships and in interacting with algorithms while composing; Ghent was looking for a device that could represent any rhythmic relationship he could imagine. He recalled:
I remember many occasions working with Laurie all night at this and thinking
what the computer was doing was incredible. It would produce lines that were
musically so interesting, but who ever would have thought of writing a musical
line like that? We had the sense that here we had hired the computer as a musical
assistant and it was producing something that we never would have dreamed of.19
His most notable piece was Phosphones (1971), the most interesting because the Mimi Garrard Dance Company used the piece for a performance, and because Garrard and her husband James Seawright developed a system for synchronized theatrical lighting based on the technology of the GROOVE system.
This lighting system was called CORTLI[20] and was funded by a grant from the NEA. The piece required complex and rapid stroboscopic changes in lighting. The realization of this "polyphony of music and lighting" in relation to the dance is extremely dramatic: as the placement, intensity, and color of the lighting are precisely programmed and change with great rapidity in relation to the music, a subtlety of interaction results.[21]
The sound source for Phosphones consisted almost entirely of a special group of resonator circuits dubbed Resons, designed by Mathews for use in his electronic violin. When tuned and adjusted, they ring when pulsed, producing an array of resonant tones.
[19] Chadabe, 162
[20] Acronym for Composing and Outputting Real Time Lighting Information
[21] Computer Music Journal, Vol. 32, No. 4, Winter 2008
Of the first showing of Phosphones, Ghent wrote: "Nothing quite like it had ever happened before. Actually, nothing quite like it has ever happened since." Garrard reflects on the difference between the original performance and one held 30 years later: "The initial reaction ranged from amazement to hostility." Thirty years on, she noted, the piece was received quite differently.
In 1977, Ghent began a series of twenty-nine studies called Program Music, based on algorithmic modes of interaction. "They were studies for what was to come."[22] The series never got a chance to grow, for Ghent received notification from Bell Labs that the DDP-224 computer was to be removed from service, thus ending the life of the GROOVE system. Mathews did not recreate the program for other computers, because by that time newer systems had overtaken it.
Computer Architecture
What computer architecture is ideal for digital audio synthesis? There is no single answer, yet there are four general approaches, ranging from stock microprocessors to systems of custom modules. Kyma is a great example of how to get around the issue of communication between synthesis hardware and a graphical display.
[22] Chadabe, 163
One approach is microprogramming a collection of suitable primitive elements, such as memory, shift registers, and multipliers. Max/MSP lets users define their own parameters for how devices change a musical element over time by programming their own 'patch', which leads to a variety of successes and errors in microprogramming, depending on the skill of the user. Pipelining allows high throughput to be achieved with a small amount of hardware, yet it is difficult because of the need to determine where each datum lies within the pipeline, and because it makes re-patching an instrument on the fly difficult. The way around these difficulties is to flush the pipeline after every sample, which is a great deal of work for a processor and therefore not a good approach.
For many years, digital synthesis was performed with a 16-bit or an 8-bit microprocessor, until the appearance of the 32-bit microprocessor with built-in floating-point instructions. This was a great advantage because the number of representable values between 0.0 and 1.0 is almost limitless, allowing finer specification of musical parameters that can modulate slowly and accurately over time in small, precise steps. This is useful for smoothly controlling the parameters of an amplifier or generator.
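The point about precision can be illustrated directly: an 8-bit parameter has only 256 distinct levels between 0.0 and 1.0, while floating point gives millions of distinct intermediate values for smooth modulation (the step counts below are arbitrary illustrations):

```python
# 8-bit control: only 256 distinct amplitude levels between 0.0 and 1.0
eight_bit_ramp = [n / 255 for n in range(256)]

# Floating point: a vastly finer ramp over the same range
FINE_STEPS = 1_000_000                  # an arbitrary illustration
float_ramp = [i / FINE_STEPS for i in range(FINE_STEPS + 1)]

print(len(set(eight_bit_ramp)), len(set(float_ramp)))  # prints: 256 1000001
```

A slow amplitude glide over the coarse ramp moves in audible stair-steps; over the fine ramp the same glide is effectively continuous.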
In many languages, such synthesis operations can be packaged as named subroutines, known as functions in languages like FORTRAN and BASIC. Roads recommends this capability "because it seems to mirror something about the way we think" and therefore makes a program easier to construct.
A system that grew out of GROOVE was PLAY, the first software sequencer, built in 1977 at the Electronic Music Studio at the State University of New York at Albany by Joel Chadabe and Roger Meyers. Chadabe and Meyers based the PLAY system on a process model of music with functions and timers similar to the modules derived from analog synthesis. PLAY was developed from the knowledge of GROOVE, the Conductor program, and the Buchla Series 500 electric music box, and ran on a PDP-11. It used a small portable computer that controlled an external synthesizer in real time, with the capability of interaction. It had two stages: the first is design, and the second is operation.
In the design phase, the composer designs a specific compositional process using any modules. The design phase has three steps. First, the composer specifies data generators from some type of device and determines how they will affect the attributes of sound, such as pitch, rhythm, envelope shape, and loudness. Second, the data generators are organized as interconnected modules. Third, the composer sets the rate of the system clock and the timing for each individual module.
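The three design steps can be sketched as a toy model. All names below are invented for illustration; this is not the actual PLAY program, only a sketch of its design phase under the assumptions described above.

```python
# Hypothetical sketch of the PLAY design phase (invented names).

class Module:
    """A data generator wired to one attribute of sound."""
    def __init__(self, name, generator, attribute, rate_hz):
        self.name = name
        self.generator = generator  # step 1: data generator (e.g. a knob)
        self.attribute = attribute  # step 1: attribute of sound it affects
        self.rate_hz = rate_hz      # step 3: timing for this module

def design(modules, clock_rate_hz):
    # Step 2: interconnect the modules into a patch;
    # step 3: fix the system clock rate.
    return {"clock_hz": clock_rate_hz, "patch": modules}

# A knob drives pitch; a decaying envelope drives loudness.
patch = design(
    [Module("knob1", lambda t: 440 + 10 * t, "pitch", rate_hz=100),
     Module("env1", lambda t: max(0.0, 1.0 - t), "loudness", rate_hz=50)],
    clock_rate_hz=1000,
)
print([m.attribute for m in patch["patch"]])  # ['pitch', 'loudness']
```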
In the operation stage, the composer's process plays back and the composer interacts with the playback according to the design. The composer controls, in real time, the parameters specified during design, and changes take effect without discontinuity in playback.
The three fundamental concepts derived from the PLAY system that we still use today are: function, such as specifying a data list; patch, defining modules; and playback.
The importance of the GROOVE system lies in the fact that it was the first truly interactive system, comprising many devices such as modules, sensors, and a graphical monitor. The modules allowed a user to define their own sound digitally from an analog synthesizer. One could control musical parameters over time by moving a sensor, or by adjusting the sensor's position over time. Nowadays sensors include infrared sensors and virtually anything that sends out data. The use of sensors and modules that could change over time made composing music with a computer less like using a machine, since the system became responsive to human gestures, but it also led to difficulties. These difficulties were the problems faced by early composers and technicians working with these systems.
The systems built in Max/MSP are very similar to the system developed by the creators of GROOVE in terms of their interactive capabilities.
GROOVE founded the system of design and playback we use today. One could design a score and then play it back while modulating specific parameters of the analog synthesizer. The difference between then and now is that then the storage medium was paper cards, and now the storage medium is a combination of a buffer and the computer's random-access memory.
GROOVE survives in disguise today in the use of sensors to change a score over time, and in the ability to go back and manipulate which parameters were changed during a performance.

Developed by Carla Scaletti and Kurt Hebel at the University of Illinois, Kyma is made up of many objects that reflect GROOVE's modular system. Instead of using hardware modules,
Kyma uses software objects. These objects can be placed together and connected using virtual patch cords, linking the module objects in various ways. An object can be
played back by itself, or in combination with other objects. This is the first step of
designing a real-time sound synthesis piece. The second step after defining all the
objects is to place these objects on a ‘timeline’. The timeline will then play back the
parameters of control for each of the modules. A user can choose to fix the settings of some objects, or to modulate the objects over time, either with some kind of sensor or by using an envelope as a function of time that changes defined parameters by following the value of the line set by the composer. This is fundamentally the same structure that the GROOVE system used: a sequence of events that becomes the predetermined score, and an array of devices to change the content within a specified parameter, thus becoming an interactive, real-time system.
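The envelope-as-function-of-time idea can be sketched directly. The code below is a minimal, hypothetical example (not Kyma's actual API): a breakpoint envelope returns a parameter value for any point on the timeline, which is what lets a line drawn by the composer drive a module's parameter.

```python
# Hypothetical sketch: a breakpoint envelope as a function of time.

def linear_envelope(breakpoints):
    """Return f(t) that linearly interpolates (time, value) breakpoints."""
    def f(t):
        if t <= breakpoints[0][0]:
            return breakpoints[0][1]
        for (t0, v0), (t1, v1) in zip(breakpoints, breakpoints[1:]):
            if t0 <= t <= t1:
                return v0 + (v1 - v0) * (t - t0) / (t1 - t0)
        return breakpoints[-1][1]
    return f

# Loudness rises to full over 1 s, then decays to zero by 4 s.
loudness = linear_envelope([(0.0, 0.0), (1.0, 1.0), (4.0, 0.0)])
print(loudness(0.5))  # 0.5 (halfway up the attack)
print(loudness(2.5))  # 0.5 (halfway down the decay)
```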
Conclusion
Most computer music systems today are built from the basic functions and developments of the Music-N series and the GROOVE system. Mathews and his colleagues formed the common foundation for what a composer can design and monitor.
The basic concept of input and output is still the fundamental seed for the designing of a
computer music system, yet the path between input and output can form many different
journeys, which composers can freely design and manipulate. With systems such as
Max/MSP and Kyma, computer music today presents a composer with a world of
limitless possibilities for sound reproduction and synthesis, and the ability to control them in real time. Mathews, still alive today, must be reveling in astonishment at what he began:
“It opened opportunities that had been unthinkable—it enabled me to try all kinds of
ideas, listen to them in real time, modify them in real time, and thereby get a chance to
experiment in ways that would be prohibitive using standard methods like paper and
pencil and human musicians.” – Emmanuel Ghent23

23 Chadabe, 163.