
Joe Higham

MUSI2M 2013-2014

SÉMINAIRE DE RECHERCHE HIST. EN MUSICOLOGIE:


LMUSI2861
Professors: John Van Tiggelen & Jean-Lambert Mercier

MUSIQUE ET SES SUPPORTS:


THE DEVELOPMENT OF PURE DATA
AS A MUSIC MEDIUM

Academic Year 2013-2014


Université Catholique de Louvain
Faculté de philosophie, arts et lettres
Département d'histoire de l'art, archéologie et musicologie
Louvain-la-Neuve

From Organs To Computer Music: A Brief Historical Overview

If we accept that the Banū Mūsā's water organs existed, the possibility of generating music mechanically has been with us since the 9th century. There may have been earlier water organs, such as the hydraulis of the 1st century, although there seems to be little proof that they were fully automated. Early automatons, such as organs, musical clocks, polyphon disc players or pianolas, to name just a few, all produced musical sounds from flutes, bells, pianos or organ pipes. Some machines went as far as recreating whole orchestras (Johann Nepomuk Mälzel's panharmonicon, built in the early 19th century), whilst others tried to mimic animal vocal effects (those devised by Jaquet-Droz, for instance, whose bird watches are still considered masterpieces of their genre). The Componium of Dietrich Nikolaus Winkel, built in the early 1800s, has similarities to Mälzel's panharmonicon, with the added feature of aleatoric composition. However, when looking at the development of automated musical instruments, what is equally interesting is that, although mechanical systems have evolved, there are many similarities between modern-day systems or software and those of previous eras. In his treatise Musurgia Universalis, Kircher [1] writes about automated musical instruments. These devices used cylinders with precisely placed pins that would trigger an action and generate music. At a later stage, punched cards replaced cylinders, as these were cheaper to produce and probably easier to sell. Indeed, one could suggest that, as early as the fourteenth century, such devices put the idea of storing data in a system into practice. In a brief article, Michael Kassler (University of Sydney) reminds us that, as early as 1840, Ada Lovelace, who worked with Charles Babbage, suggested that an analytical machine 'might act upon other things besides number' [...], the computer not having been invented as such: 'Supposing, for instance, that the fundamental relations of pitched sounds in the science of harmony and of musical composition were susceptible of such expression and adaptations, the engine might compose elaborate and scientific pieces of music of any degree of complexity' [2]. Others also saw the possibility of computers generating melodies: R.C. Pinkerton's experimental remarks suggested that a computer could compose melodies 'in accordance with stylistic criteria' [3]. He inspired Lejaren Hiller and Leonard Isaacson, who composed the Illiac Suite for String Quartet, in 1957, on the ILLIAC computer.
[1] George J. BUELOW, "Athanasius Kircher", Grove Music Online, <http://www.oxfordmusiconline.com/subscriber/article/grove/music/15044>, consulted 16 May 2014.
[2] Michael KASSLER, "Computers and Composition", The Musical Times, Vol. 121, No. 1643 (1980), p. 14.
[3] John MOREHEN and Ian BENT, "Computer Applications in Musicology", The Musical Times, Vol. 120, No. 1637 (1979), p. 563.

Although we cannot discuss here the full history behind the computer, or its debt to the inventors of the automatons, it is clear that automatons and computers both come from the same lineage. It is
important to add that it is not only the decoding of data that shares common roots, but also sound synthesis. Jean-Claude Risset's [4] comments suggest that the organ was the first information machine:
The organ's keyboard was the first in history, appearing long before the alphanumeric
keyboard of typewriters and computers. In fact, the organ key may have been the first
switch. Four centuries before Fourier, organ makers implemented Fourier synthesis in
so-called mutation stops: timbre was obtained by adding the sound of pipes tuned as
harmonics of the intended tone. [5]
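By way of illustration, and not as anything drawn from Risset's text, the short Python sketch below performs this kind of additive synthesis: a timbre is built by summing sine waves at whole-number multiples (harmonics) of a fundamental, much as mutation stops add pipes tuned to harmonics of the intended tone. The amplitude values are arbitrary.

# Minimal additive-synthesis sketch: sum sine waves at harmonics of a fundamental.
import math

SAMPLE_RATE = 44100
fundamental = 220.0                                # the intended tone, in Hz
harmonics = {1: 1.0, 2: 0.5, 3: 0.33, 4: 0.25}     # partial number -> relative amplitude

def sample(t):
    """One output sample at time t (in seconds): the sum of the partials."""
    return sum(amp * math.sin(2 * math.pi * fundamental * n * t)
               for n, amp in harmonics.items())

# Render one second of the composite tone.
tone = [sample(i / SAMPLE_RATE) for i in range(SAMPLE_RATE)]
print(len(tone), "samples; first values:", [round(v, 3) for v in tone[:4]])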
Since the development of electricity, the processes used to record and decode data, on media ranging from magnetic tape and vinyl or shellac discs to the most recent digital formats, have progressed considerably, paving the way for a whole new area of sound synthesis, music production and composition.

Manipulating (musical) data has become much easier with the development of digital media
Starting in the early 1990s, the main advance has been the availability of desktop and laptop computers, which give artists and the general public access to complex musical processing systems without their having to work in the confines of a studio or sound laboratory. In addition to the computer revolution, the development of the synthesizer also has to be taken into account: this instrument enabled those working on producing sounds, often imitations of existing instruments, to have at their fingertips a wide range of electronically produced sounds, removing the need for large pipes, plucked strings and other instruments requiring physical action.
[4] Adrian MOORE, "Jean-Claude Risset", Grove Music Online, <http://www.oxfordmusiconline.com/subscriber/article/grove/music/23518>, consulted 16 May 2014.
[5] Jean-Claude RISSET, "Sculpting Sounds with Computers: Music, Science, Technology", Leonardo, Vol. 27, No. 3, Art and Science Similarities, Differences and Interactions: Special Issue (1994), p. 257.

In 1964, Robert Moog produced the first commercial synthesizer [6], making sound synthesis something that could be experimented with while being portable enough to be used outside studios, although one had to wait until the 1970s for commercial synthesizers to become more widely available, and even then only to the lucky few with the finances to purchase such an instrument [7]. It is of interest to note, however, that even state-of-the-art technologies have difficulty reproducing certain sounds, such as those of wind instruments.

[Figure: The Moog Minimoog, the first fully integrated synthesizer and one of the most important developments in electronic music. Its original price in 1971 was $1,495.]
Since the late 1970s, the possibility of communicating via MIDI and of applying digital signal processing techniques has changed the way we work with music [8]. Recent developments in these technologies, combined with the computer, and especially the laptop, have produced what could be acknowledged as modern-day automatons: after data has been entered into a DAW [9], a single button on a keyboard can trigger multiple events, yet no instruments are visible. Could we argue that a computer should be classified as both an instrument and a medium or support? Like any automaton, it needs instructions to run and generates sound on command.

Data Conversion Software: DAWs


It goes without saying that computers cannot produce any sound or action without the help of a script or program. There are several ways of producing musical sounds with a computer.
[6] Hugh DAVIES, "Robert A. Moog", Grove Music Online, <http://www.oxfordmusiconline.com/subscriber/article/grove/music/19054>, consulted 16 May 2014.
[7] <http://www.factmag.com/2014/02/28/the-14-synthesizers-that-shaped-modern-music/>, consulted 16 May 2014.
[8] Sterling BECKWITH, "The Well-Tempered Computer", Music Educators Journal, Vol. 62, No. 7 (1976), pp. 32-36.
[9] DAW = Digital Audio Workstation.

In this paper, we will take a brief look at two classes of system that can produce sound, either as a compositional tool or as a sound manipulator. It is important to highlight that both have similarities with automatons, in that they need to read data in order to produce sound, and both allow fine control of parameters such as volume, pitch and tempo, to name just a few examples.
The first class of software is the DAW, described by the educator Don Muro as a portable music studio. Not only does it operate as a studio, it also plays the roles of a synthesizer, a drum machine, an effects device and a sequencer, capable of reading, recording and processing MIDI [10] data. It is also a sampler, and can control sounds that have been digitally recorded [11]. Nowadays there are three types of DAW: those working with both audio and MIDI (Pro Tools, for instance) [12], those working only with audio (Audacity [13]) and a third category that uses only MIDI; the latter, often found in drum machines and small workstations, could be classed as sequencers [14].
The development of DAWs in the 1970s and 1980s was limited, partly because of the cost of storage and slow processing speeds. In 1978 the first digital audio workstation, the Digital Editing System, was developed by Thomas Stockham [15] of Soundstream. This bulky system was a mixture of computer and digital tape machine which could edit sounds and provide effects such as crossfades. By the 1980s, computers had started handling digital audio editing, when combined with some of the earliest systems such as Macromedia's SoundEdit or Digidesign's Sound Tools. These could be used in conjunction with sampling keyboards and utilised as simple two-track audio editors [16]. Such systems offered non-destructive editing, which means that when, for instance, an edit is undone or pasted, there is no waiting for the audio file to be rewritten [17] (a minimal sketch of this principle follows this paragraph). Soon afterwards, with the introduction of Pro Tools [18], most studios went digital. By 1993, Steinberg had introduced Cubase Audio, which had built-in digital signal processing (DSP) effects but was only an eight-track audio recording/playback system. Eventually, in 1996, Cubase VST (Virtual Studio Technology) was released: a fully integrated 32-track digital audio studio with no need for external DSP effects. This program had a visual presentation with a tape-like interface, mixing desk and effects racks, mirroring an analog studio's layout; it was very quickly copied and updated by all the other firms, and it is what present models are all based upon.
[10] MIDI = Musical Instrument Digital Interface: a protocol for communicating and controlling information between electronic instruments.
[11] Don MURO, "Technology for Teaching: The Music Workstation: A New Tool for Teaching", Music Educators Journal, Vol. 76, No. 5 (1990), pp. 10-13.
[12] Product website: <http://www.avid.com/FR/products/family/Pro-Tools>, consulted 17 May 2014.
[13] Product webpage: <http://audacity.sourceforge.net/>, consulted 17 May 2014.
[14] A full list of DAWs can be found at: <http://www.synth.tk/daw/>, consulted 17 May 2014.
[15] <http://www.aes.org/aeshc/docs/recording.technology.history/stockham.html>, consulted 18 May 2014.
[16] "Products of Interest", Computer Music Journal, Vol. 20, No. 3 (1996), p. 105.
[17] Ibidem.
[18] Op. cit. Pro Tools, cf. n. 12.

There are many examples of digital audio workstations today, amongst which the most well-known are Pro Tools (developed by Evan Brooks and Peter Gotcher from 1984), Cubase and Nuendo (both developed by Steinberg, Cubase dating from 1989) and Logic Pro (developed by C-Lab programmers in 1993 and later made compatible with Apple's Mac OS X platform). Interestingly, Garage Band (developed in 2004 under Dr. Gerhard Lengeling) comes pre-installed on Apple's Macintosh computers, making it readily available to anyone who purchases an Apple computer. Audacity was launched in May 2000 by Dominic Mazzoni and Roger Dannenberg at Carnegie Mellon University (Pittsburgh); it is one of the few free programs, but it works only with audio. More recently, Reaper [19] has been developed; it offers an advantage over the other systems in that it makes it possible to program music with the Python programming language.

Even though these workstations have many similarities, they are often incompatible, and data cannot be exchanged between models. Looking at the screenshots above, we notice the similarities between Garage Band (left) and Logic Pro (right), both in recording mode. Although a long way from automatons such as the Serinette, these systems use a virtual form of the organ-cylinder or punched-card technology: MIDI information can be entered via two very similar systems known as the Piano Roll and the Hyper Editor. These visualisations are not so far removed from the barrel-pinning principles first described by Kircher in Musurgia Universalis, but they use the more sophisticated dynamics of the Reproducing Piano [20] system.
[19] Reaper = Rapid Environment for Audio Production, Engineering, and Recording.
[20] Frank W. HOLLAND, "Reproducing piano", Grove Music Online, <http://www.oxfordmusiconline.com/subscriber/article/grove/music/52058>, consulted 18 May 2014.

Although musical data can be entered via note input, parameters such as pitch, note length, hyper draw (notes and other events) and velocity can be controlled via the Piano Roll window. In the Hyper Editor, many further parameters can be influenced and controlled, including volume, panning (left-right positioning), modulation, pitch bend, channel pressure, polyphonic pressure and velocity, amongst many other possibilities.
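To make these parameter streams concrete, here is a minimal Python sketch, assuming the third-party mido library, that writes a short MIDI file containing the kinds of events a Piano Roll or Hyper Editor manipulates: notes with velocities, a volume controller, panning and a pitch-bend message. The note numbers and values are arbitrary illustrations, not taken from the programs discussed.

# Hypothetical illustration of the event types a Piano Roll / Hyper Editor edits.
# Requires the third-party library mido: pip install mido
import mido

mid = mido.MidiFile()
track = mido.MidiTrack()
mid.tracks.append(track)

track.append(mido.Message('control_change', control=7, value=100, time=0))   # channel volume
track.append(mido.Message('control_change', control=10, value=32, time=0))   # pan, left of centre
track.append(mido.Message('pitchwheel', pitch=2048, time=0))                 # pitch bend upwards

# Two notes with different velocities (the loudness of the attack)
track.append(mido.Message('note_on',  note=60, velocity=90, time=0))
track.append(mido.Message('note_off', note=60, velocity=0,  time=480))       # note length in ticks
track.append(mido.Message('note_on',  note=64, velocity=60, time=0))
track.append(mido.Message('note_off', note=64, velocity=0,  time=480))

mid.save('piano_roll_example.mid')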
The role that these programs play in performing and composing music, their availability and their ease of use cannot be overstated. Although little is left of the mechanical aspect of automatic musical machines, today's digital audio workstations can reasonably be seen as the logical offspring of earlier automatons, which, in turn, could be regarded as early forerunners of the computer.
However, other systems have been developed since the mid-1980s to interface with computers and work in the domains of DSP, FFT and other areas of sound manipulation. During a conference, Max Mathews was asked how he became involved in developing computer music systems. He replied that he had played the violin as a child but 'could never play it very well, so I wanted to make an instrument that was easier to play' [21]. His work has opened new paths for the music scene and enabled the appearance of new systems that are powerful alternatives to synthesizers, capable of operating in real time, working with digital sound files and recording sound. Furthermore, random parameters can be introduced. The most popular of these systems are Csound [22], Max/MSP [23], SuperCollider [24] and Pure Data [25].

Musical Programming Systems


Since the 1980s, several systems have been developed with a view to combining the potential of a synthesizer with the processing power of a computer, creating music from the analog-style sounds of oscillators, with sine waves, saw waves, noise generators and other DSP possibilities, in real-time audio synthesis. The history and development of these systems is rather convoluted, as the pace of change has been very quick in recent years. The forerunner of these programs was 4CED, developed at IRCAM by Curtis Abbott in the late 1970s. 4CED was made up of a text compiler and converter which could translate text files into control commands, allowing users to control the digital processor in great detail. This early text-based program served as a blueprint for Max Mathews, who transformed the idea and remodelled it into his RTSKED program, written with Joseph Pasquale in 1981 [26] and, according to Miller Puckette, the main source of inspiration for the development of Max/MSP [27].
[21] Eric LYON, Max MATHEWS, James MCCARTNEY, David ZICARELLI, Barry VERCOE, Gareth LOY and Miller PUCKETTE, "Dartmouth Symposium on the Future of Computer Music Software: A Panel Discussion", Computer Music Journal, Vol. 26, No. 4, Languages and Environments for Computer Music (2002), p. 21.
[22] Csound website: <http://www.csounds.com/>
[23] Max/MSP website: <http://cycling74.com/products/max/>
[24] SuperCollider website: <http://supercollider.sourceforge.net/>
[25] Pure Data website: <http://puredata.info/>

Several other developments took place around the same period: in the mid-1970s, Max Mathews' pupil Barry Vercoe designed a new environment, named Music 11, for real-time computer systems, that used a graphic interface but with text input. After a few modifications, this was to become today's Csound (1987). More importantly, Music 11 was the first such program written for minicomputers rather than for mainframe systems.
Miller Puckette's Max/MSP was developed from around 1982, under Vercoe's tutelage. Combining features from Music 11 and RTSKED, it offered a graphical interface that functioned with modules. In the course of time, Max/MSP was developed as a commercial product by Cycling '74. Puckette later made a free version of the system called Pure Data, similar, indeed nearly identical, to Max, even though it introduced newer features to overcome the data-structure limitations of Max. Because of its open-source status, the program has developed at a fast rate, with additions regularly being made to the original version.
In the 1990s, James McCartney's SuperCollider brought a new aspect to DSP. His program updated real-time audio synthesis with a text-based environment that could work with externals and build GUIs [28]. His aim was to improve on Max/MSP and correct its weaknesses: 'Max [...], which is a different programming language, provides an interesting set of abstractions that enable many people to use it without realising they are programming at all. [...] The Max language is also limited in its inability to treat its own objects as data, which makes for a static object structure' [29]. Since then, the rise in university computer engineering courses has brought forth a wide range of new environments, even though most are rooted in these original programs.
These systems, or environments, have several advantages over the analog synthesizer: not only are they portable, they are also very powerful programs, able to process far more than a synthesizer from the 1980s could. They can work with externals and so can be connected to a keyboard or MIDI controller, making them practical systems with enormous potential for experimentation. Furthermore, being code-based environments, they are able to read data in various formats, tackling voice synthesis, MIR [30] and sonification as well as musical functions. In this section, I will briefly describe the environments that have been developed and the difference between the two kinds of system, one being modular, the other text-based. Then, as an example, I will outline some of the basic functionalities of a Pure Data program that I wrote with a view to creating a simple data-driven machine.

[26] Miller PUCKETTE, "Max at Seventeen", Computer Music Journal, Vol. 26, No. 4, Languages and Environments for Computer Music (2002), p. 31.
[27] Ibidem.
[28] GUI = Graphical User Interface.
[29] James MCCARTNEY, "Rethinking the Computer Music Language: SuperCollider", Computer Music Journal, Vol. 26, No. 4, Languages and Environments for Computer Music (2002), p. 61.
[30] MIR = Music Information Retrieval.
The various systems mentioned above can be divided into two basic categories, graphical and text-based, each having advantages and disadvantages that we do not have space to detail here. The graphical systems include Max/MSP, Pure Data and, more recently, Sensomusic Usine [31] and Integra Live [32] (an interesting modular system developed at Birmingham University in 2007). Graphical systems work by placing objects/modules on a canvas to create data-flow charts, in these cases with a musical objective.

The two screenshots above show examples of modular environments: Max/MSP (left) and Integra Live (right). Both systems work in much the same way, even though Max/MSP (like Pure Data) is a little more skeletal. Integra Live has tried to simplify the process, so that non-'computer musicians' can easily understand how the modules are used; these can be connected and adjusted with little or no understanding of DSP.
The other systems are text-based; examples are Csound, SuperCollider and, more recently, ChucK [33], developed by Ge Wang and Perry Cook (Princeton University, 2002). These environments compile code written as text and all have similar interfaces. Unlike modular systems, they require the user to be able to write and understand basic code in languages whose syntax ranges from C++ to Java. Some users claim that writing code is easier and quicker than building a patch [34]. Declaring variables can be done easily, but a modular system has the advantage of presenting a physical image, with sliders and knobs, similar to an analog studio with its cables and sound modules.

[31] Official webpage: <http://www.sensomusic.org/usine/>
[32] Official website: <http://www.integralive.org/>
[33] Official webpage: <http://chuck.cs.princeton.edu/>
[34] 'Patch' is the term used in modular systems for the canvas (page) on which modules are placed in graphical representation.

The two screenshots show that SuperCollider (left) and ChucK (right) differ little. Both systems are operated through lines of code, and both have terminal windows which update automatically, telling the user what is happening at any given moment. SuperCollider uses its server window to show that the compiled code is running, whereas ChucK has a separate window from which any piece of running code can be switched off, or removed, without stopping the others.

A brief example using Pure Data


To demonstrate how a graphical environment works, I have chosen to take a closer look at Pure Data; it is easiest to explain how such a programming environment works by writing and explaining a simple program. Although not a perfect programming language, Pure Data has interesting abstractions which enable people to use it without realising they are programming, as James McCartney points out [35]. Along with the 'hands-on' feel conjured up by the graphical environment, modules can be connected to each object by cables/lines, almost as in an analog studio. Pure Data began as an extension of Max: in the words of Puckette, '[...] an attempt to make a screen-based patching language that could imitate the modalities of a patchable analog synthesizer' [36].

[35] Op. cit. MCCARTNEY.
[36] Miller PUCKETTE, "Pure Data", Proceedings of the International Computer Music Conference, San Francisco: International Computer Music Association (1997), pp. 43-46.

The program presents two windows: the main window, a terminal which prints out information and error messages, and a canvas or program patch, where the program is written. In the screenshots on the previous page, the terminal window is on the left and, on the right, the blank canvas or program window where patches are built. The term 'patch' is the name given to a program written on the canvas. To understand the connection between Pure Data, the computer and the automaton, I built a small patch using some of the random elements found in Winkel's Componium and the cylinder of an organ such as the Serinette. Although quite simplistic in sound and build, this program illustrates some of the ways Pure Data controls and processes data, as any automatic machine would.
The Main Patch
The main window of the patch is divided into three key areas. 1) The first area controls the tempo; to follow the idea of introducing random aspects, the tempo is a random number between 0 and 1000. I have linked the tempo to the computer keyboard so that every time the letter 'B' is pressed, another number/tempo is called. 2) The second area controls the sound, which is generated inside a sub-patch called pd music_box. 3) The third area is where the digital signal is converted to audio output, hence the object 'dac~' (digital-to-analog converter). All objects dealing with audio signals are followed by a tilde (~); examples on the main canvas include send~, which sends audio signals to the dac~, and freeverb~, which is an audio reverb unit.
The Sub-Patch
A sub-patch is a program within a program. Pure Data uses this system to: a) make a program easier to read, or simply keep the patch tidy; and b) allow sections to be duplicated and altered without affecting other parts of a patch, saving on the computer's processing power (CPU). It is important to note that a sub-patch has inlets (top) and outlets (bottom), through which the rest of the canvas/program can be connected to these individual units. The outlets allow data (notes, in this case) to be sent out of the sub-patch. Inlets and outlets can communicate either data, without a tilde, or audio, with the tilde (for example 'outlet~').
The Sub-Patch Window
In this program, the main areas of the sound sub-patch consist of the sound generators. At the top of the patch (1) we find four inlets: stop/start, speed or tempo, volume control and envelope speed. In this case, the envelope is a simple on/off switch. A metronome object (2), metro, is attached to the tempo inlet. It is set at 400 (milliseconds), but the tempo can be controlled by changing the slider on the main window. The metronome activates the shuffle 60 70 object (3), which randomly selects numbers between 60 and 70 but never repeats a number until all of them have been used. These are sent to the mtof~ object (5), which converts MIDI note numbers to frequencies in Hz, and the frequencies are passed immediately to the osc~ object (a sine-wave oscillator). A Hann window envelope (6) helps to prevent 'clicks': a sine wave oscillates constantly between 1 and -1, and if a signal is switched on whilst the wave is not at zero we hear a click, much as when an amplifier is switched on after a CD has already started, causing a surge in volume. A simple envelope (7), activated via the inlet at the top of the canvas, sets the decay time. The envelope is triggered by the metronome (2), which sends a message from s bang_3 to r bang_3 [37]. This, in turn, opens a gate and sends the sound to (8), which transfers the message back to the main patch.
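The note-generating logic of this sub-patch can be approximated outside Pure Data. The following Python sketch (an illustration of the logic only, not the patch itself) mimics a metro driving a non-repeating shuffle of the MIDI notes 60-70 and the mtof conversion to frequency, f = 440 x 2^((m - 69)/12).

# Minimal sketch approximating the sub-patch logic: metro -> shuffle 60-70 -> mtof.
# Illustrative only; the real patch runs inside Pure Data with osc~ and an envelope.
import random
import time

def mtof(midi_note):
    """Convert a MIDI note number to a frequency in Hz (what Pd's mtof does)."""
    return 440.0 * 2 ** ((midi_note - 69) / 12)

def shuffled_notes(low=60, high=70):
    """Yield notes endlessly, never repeating one until all have been used."""
    while True:
        pool = list(range(low, high + 1))
        random.shuffle(pool)
        for note in pool:
            yield note

tempo_ms = 400          # the metro interval; a new random value could replace this
notes = shuffled_notes()

for _ in range(12):     # twelve 'metro' ticks
    note = next(notes)
    print(f"note {note} -> {mtof(note):.2f} Hz")
    time.sleep(tempo_ms / 1000)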
Final stage
At the final stage, sound from the sub-patch is picked up in the main window by r~ sound_1 (2 or 3). It is sent to the dac~, passing via an effects object, freeverb~, which simply adds reverb, changing the colour of the notes from short marimba-like sounds to long metallic chimes. There is also a small recording patch for recording the program's output.
[37] s and r = send and receive.
The possibility of mixing tempos, random notes and simple sound synthesis gives an idea of how, with few elements, it is possible to create a programmable machine that produces simple stochastic compositions built on repetitive lines. It would also be possible to develop more control over each element and to feed in data, such as financial figures or statistics, by means of graphs and arrays.

Recordings
I have included a selection of recordings from the patch, demonstrating the various elements
and variations possible by changing simple parameters, such as random speed, reverb/decay or
room size. The results created, although always simplistic, seem to have elements that encompass
the process music of Steve Reich, stochastic music of Xanakis, or facets of Harry Partch even.
Music Box
1) music_box1.wav (15 sec.)
An extract using just one sound.
2) music_box2.wav (20 sec.)
An example of running two sounds together. Notice the interesting rhythmic changes created
by the two separate patterns.
3) music_box3.wav (20 sec.)
Same idea as #2, except running the three different modules. In the section The Main Patch (above) you will notice three main sound-producing modules. Each one divides or multiplies the tempo differently: one divides the tempo by 2, one multiplies it by 2 and the last multiplies it by 3.
4) music_box4.wav (20 sec.)
Same idea as #3, but with changes in decay times. In the section The Main Patch (above) you will notice two sliders, orange and purple. These are changed slightly whilst pressing 'B' on the computer keyboard to produce random changes in tempo.
5) music_box5.wav (20 sec.)
Same idea as #4, running the three different modules. This version has subtle changes in room size, dry sound and reverb/decay. The results give a steely sound, somewhere between a gamelan orchestra and the repetitive music of Steve Reich.

DIR or Sonification using Pure Data


I include five short examples of sonification using a programming environment. Although I have not included the patch details, for reasons of space, I thought it interesting to show some examples using a similar type of patch, in this case one that reads text files into a graph. I have used one week's weather data (temperatures in the morning, at midday and in the evening) from four major cities: Brussels, Delhi, Addis Ababa and Brasilia. These are placed into four different graphs/arrays and read by a phasor (saw wave). The sounds are modified slightly to give different timbres.
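As an illustration of the principle rather than the actual patch, the following Python sketch assumes a hypothetical file temperatures.txt with one reading per line, maps the values onto a frequency range and renders each reading as a short sine tone, much as the Pd arrays are scanned and turned into pitches.

# Minimal sonification sketch: map a week of temperature readings to pitches.
# 'temperatures.txt' is a hypothetical file with one reading (in degrees C) per line.
import math
import wave
import struct

with open("temperatures.txt") as f:
    temps = [float(line) for line in f if line.strip()]

lo, hi = min(temps), max(temps)
# Map each temperature linearly onto a frequency range (here 200-800 Hz).
freqs = [200 + (t - lo) / (hi - lo or 1) * 600 for t in temps]

rate, note_len = 44100, 0.25          # sample rate and seconds per reading
samples = []
for f0 in freqs:
    for n in range(int(rate * note_len)):
        samples.append(0.3 * math.sin(2 * math.pi * f0 * n / rate))

with wave.open("sonification.wav", "w") as w:
    w.setnchannels(1)
    w.setsampwidth(2)
    w.setframerate(rate)
    w.writeframes(b"".join(struct.pack("<h", int(s * 32767)) for s in samples))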
1) sonification1.wav
Brussels
2) sonification2.wav
Brussels + Addis Ababa
3) sonification3.wav
Brussels, Addis Ababa + Brasilia
4) sonification4.wav
Brussels, Addis Ababa, Brasilia + Delhi
5) sonification5.wav
All four cities, but with added random elements, such as a tempo governed by the speed of the phasor (which is reading the files) and pitch changes produced by multiplying or diminishing the frequencies read from the graphs.
These short extracts, from both the Music Box and the sonification examples above, give an idea of the possibilities of using a program such as Pure Data. The same results could easily have been achieved using other systems such as SuperCollider, Max/MSP or Csound.


Conclusion
In my research it became apparent that automatons were, to a certain extent, early relations of the computer. This in turn gives us an intriguing viewpoint on music technology, which developed both through the computer and, in part, under the inspiration of the musical automatons of earlier periods. What makes this most interesting is that, when working with recent software such as Pure Data or SuperCollider, we are still connected to the technology of those automatic machines: the Serinette, the pianola, or even the musical clocks of Jaquet-Droz. Furthermore, the development of these technologies has given access to many people who, whilst programming a computer, even in a graphical environment, are working in an area originally exclusive to the inventor or visionary, such as Johann Nepomuk Mälzel. With these new programs we are able to produce the sounds of many instruments, and control their parameters, all within the space of a small room. If so, the evolution of the automaton may have had a more far-reaching effect on the development of modern composition techniques than might at first be thought.


Bibliography:
BECKWITH, Sterling, "The Well-Tempered Computer", Music Educators Journal, Vol. 62, No. 7 (1976), pp. 32-36.
BEN-TAL, Oded and BERGER, Jonathan, "Creative Aspects of Sonification", Leonardo Music Journal, Vol. 37, No. 3 (2004), pp. 229-232.
BUELOW, George J., "Athanasius Kircher", Grove Music Online, <http://www.oxfordmusiconline.com/subscriber/article/grove/music/15044>, consulted 16 May 2014.
COLLINS, Nicolas, "Introduction: Noises off: Sound beyond Music", Leonardo Music Journal, Vol. 16, Noises Off: Sound Beyond Music (2006), pp. 7-8.
DAVIES, Hugh, "Robert Arthur Moog", Grove Music Online, <http://www.oxfordmusiconline.com/subscriber/article/grove/music/19054>, consulted 16 May 2014.
GERZSO, Andrew, "Paradigms and Computer Music", Leonardo Music Journal, Vol. 2, No. 1 (1992), pp. 73-79.
HOLLAND, Frank W., "Reproducing piano", Grove Music Online, <http://www.oxfordmusiconline.com/subscriber/article/grove/music/52058>, consulted 18 May 2014.
KASSLER, Michael, "Computers and Composition", The Musical Times, Vol. 121, No. 1643 (1980), p. 14.
KOETSIER, Teun, "On the prehistory of programmable machines: musical automata, looms, calculators", Mechanism and Machine Theory, Vol. 36 (2001), pp. 589-603.
LIPPE, Cort, "Real-Time Interactive Digital Signal Processing: A View of Computer Music", Computer Music Journal, Vol. 20, No. 4 (1996), pp. 21-24.
LYON, Eric, MATHEWS, Max, MCCARTNEY, James, ZICARELLI, David, VERCOE, Barry, LOY, Gareth and PUCKETTE, Miller, "Dartmouth Symposium on the Future of Computer Music Software: A Panel Discussion", Computer Music Journal, Vol. 26, No. 4, Languages and Environments for Computer Music (2002), pp. 13-30.
MATTIS, Olivia, "Max Mathews", Grove Music Online, <http://www.oxfordmusiconline.com/subscriber/article/grove/music/47039>, consulted 16 May 2014.
MCCARTNEY, James, "Rethinking the Computer Music Language: SuperCollider", Computer Music Journal, Vol. 26, No. 4, Languages and Environments for Computer Music (2002), pp. 61-68.
MOORE, Adrian, "Jean-Claude Risset", Grove Music Online, <http://www.oxfordmusiconline.com/subscriber/article/grove/music/23518>, consulted 16 May 2014.
MOREHEN, John and BENT, Ian, "Computer Applications in Musicology", The Musical Times, Vol. 120, No. 1637 (1979), pp. 563-566.
MURO, Don, "Technology for Teaching: The Music Workstation: A New Tool for Teaching", Music Educators Journal, Vol. 76, No. 5 (1990), pp. 10-13.
ORD-HUME, Arthur W. J. G., "Cogs and Crotchets: A View of Mechanical Music", Early Music, Vol. 11, No. 2 (1983), pp. 167-171.
ORD-HUME, Arthur W. J. G., "Mechanical instrument", Grove Music Online, <http://www.oxfordmusiconline.com/subscriber/article/grove/music/18229>, consulted 18 May 2014.
POLLI, Andrea, "Heat and the Heartbeat of the City: Sonifying Data Describing Climate Change", Leonardo Music Journal, Vol. 16, Noises Off: Sound Beyond Music (2006), pp. 44-45.
"Products of Interest" (DAWs), Computer Music Journal, Vol. 20, No. 3 (1996), pp. 101-113.
"Products of Interest" (SuperCollider), Computer Music Journal, Vol. 24, No. 3 (2000), pp. 97-103.
PUCKETTE, Miller, "Combining Event and Signal Processing in the MAX Graphical Programming Environment", Computer Music Journal, Vol. 15, No. 3 (1991), pp. 68-77.
PUCKETTE, Miller, "FTS: A Real-Time Monitor for Multiprocessor Music Synthesis", Computer Music Journal, Vol. 15, No. 3 (1991), pp. 58-67.
PUCKETTE, Miller, "Something Digital", Computer Music Journal, Vol. 15, No. 4, Dream Machines for Computer Music: In Honor of John R. Pierce's 80th Birthday (1991), pp. 65-69.
PUCKETTE, Miller, "Pure Data", Proceedings of the International Computer Music Conference, San Francisco: International Computer Music Association (1997), pp. 43-46.
PUCKETTE, Miller, "Max at Seventeen", Computer Music Journal, Vol. 26, No. 4, Languages and Environments for Computer Music (2002), pp. 31-43.
REES, Mina, "Digital Computers", The American Mathematical Monthly, Vol. 62, No. 6 (1955), pp. 414-423.
RISSET, Jean-Claude, "Sculpting Sounds with Computers: Music, Science, Technology", Leonardo, Vol. 27, No. 3, Art and Science Similarities, Differences and Interactions: Special Issue (1994), pp. 257-261.

