
Journal of Mathematics and Music

Vol. 5, No. 2, July 2011, 63–82

A few more words about James Tenney: dissonant counterpoint and statistical feedback
Larry Polansky^a*, Alex Barnett^b and Michael Winter^c

^a Music Department, Dartmouth College, Hanover, NH 03755, USA; ^b Department of Mathematics, Dartmouth College, Hanover, NH 03755, USA; ^c Department of Media Arts and Technology, University of California, Santa Barbara, CA 90021, USA

(Received 4 November 2009; final version received 6 November 2010)

This paper discusses a compositional algorithm, important in many of the works of James Tenney, which
models a melodic principle known as dissonant counterpoint. The technique synthesizes two apparently
disparate musical ideas – dissonant counterpoint and statistical feedback – and has a broad range of
applications to music which employs non-deterministic (i.e. randomized) methods. First, we describe the
historical context of Tenney’s interest in dissonant counterpoint, noting its connection to composer/theorist
Charles Ames’ ideas of statistical feedback in computer-aided composition. Next, we describe the algorithm
in both intuitive and mathematical terms, and analyse its behaviour and limiting cases via numerical
simulations and rigorous proof. Finally, we describe specific examples and generalizations used in Tenney’s
music, and provide simple computer code for further experimentation.

Keywords: James Tenney, Charles Ames, Ruth Crawford Seeger, Henry Cowell, Carl Ruggles, dissonant
counterpoint, statistical feedback, algorithmic composition, random, Markov chain, compositional balance

1. Tenney, dissonant counterpoint, and statistical feedback

Carl Ruggles has developed a process for himself in writing melodies for polyphonic purposes, which embodies a
new principle and is more purely contrapuntal than a consideration of harmonic intervals. He finds that if the same
note is repeated in a melody before enough notes have intervened to remove the impression of the original note,
there is a sense of tautology, because the melody should have proceeded to a fresh note instead of to a note already in
the consciousness of the listener. Therefore Ruggles writes at least seven or eight different notes in a melody before
allowing himself to repeat the same note, even in the octave. [1, pp. 41–42]

Avoid repetition of any tone until at least six progressions have been made. [2, p. 174]

1.1. Tenney and dissonant counterpoint

The music of James Tenney often invokes an asynchronous musical community of collaborators
past and present. Many of his pieces are dedicated to other composers, and poetically re-imagine
their ideas. In these works, Tenney sometimes expresses connections to another composer’s music

*Corresponding author. Email: larry.polansky@dartmouth.EDU

ISSN 1745-9737 print/ISSN 1745-9745 online


© 2011 Taylor & Francis
http://dx.doi.org/10.1080/17459737.2011.614732
http://www.tandfonline.com

via sophisticated transformations of the dedicatee’s compositional methods. For example, Bridge1
(1984), for two pianos, attempts to resolve the musical and theoretical connections between
two composers – Partch and Cage – both of whom influenced Tenney [3,4]. The resolution, or
‘reconciliation’ [3], in this case, the marriage of Cage-influenced chance procedures and Partch-
influenced extended rational tunings, is the piece itself. The construction of that particular bridge
made possible, for Tenney, a new style with interesting genealogies.
As a composer, performer, and teacher, Tenney took musical genealogy seriously. He seldom
published technical descriptions of his own work,2 but he wrote several innovative theoretical
essays on the work of others [4,5]. Some of these explications are, in retrospect, transparent
theoretical conduits to his own ideas. Many of the historical connections in Tenney’s work (to
the Seegers, Partch, Cage, Varèse, even Wolpe) appear in or are suggested by his titles. However,
Tenney’s formal transformations of other composers’ ideas are less well understood. The few cases
in which he wrote about his own pieces [for example 6] demonstrate the amount of compositional
planning that went into each composition.
In Tenney’s theoretical essay on the chronological evolution of Ruggles’ use of dissonance [7],
he used simple statistical methods to explain Ruggles’ (and by extension, the Seegers’) melodic
style. For example, he examined how long it took, on average, for specific pitch-classes and
intervals to be repeated in a single melody. Tenney visualized these statistics in an unusual way:
as sets of functions over (chronological) time – years, not measures. The article consists, for the
most part, of graphs illustrating the statistical evolution of Ruggles’ atonality along various axes.
In general, the x-axis is Ruggles’ compositional life itself. Tenney considered the data, such as
the ways in which the lengths of unrepeated pitch-classes increased over time (and if in fact they
did). Focusing on this specific aspect of Ruggles’ work allowed Tenney to explain to himself, in
part, what was going on in the music.
In the two decades that followed that paper, Tenney widened his theoretical focus to include
the music and ideas of what might be called the 1930s ‘American atonal school’, including Rug-
gles, Henry Cowell, Charles and Ruth Crawford Seeger, and others. These American composers
employed their own pre-compositional principles distinct from, but as rigorous as, European 12-
tone composers of the same period. In the article on Ruggles, Tenney had described a way to
statistically model an aspect of these composers’ compositional intuitions. In the 1980s, he began
to formally and computationally integrate Seeger’s [2,8] influential ideas of dissonant counter-
point into his own music. What seems at first to be a series of titular homages (in pieces like
the Seegersongs, Diaphonic Studies, To Weave (a meditation), and others) is in fact a complex,
computer-based transplantation of dissonant counterpoint into the fertile soil of his own aesthetic.
In each of these pieces, and several others, Tenney employed a probabilistic technique which we
call the dissonant counterpoint algorithm. Seegerian dissonant counterpoint encompassed a wide
range of musical parameters (rhythm, tonality, intervallic use, metre, even form and orchestration).
In this paper, we focus on an algorithm Tenney devised to make a certain kind of probabilistic
selection (mostly pitch, but sometimes other things as well). This algorithm was in part motivated
by Tenney’s interest in the ideas of dissonant counterpoint. To our knowledge, he never published
more than a cursory description of this technique [6]. One of the goals of our work is to present
the algorithm and explore some of its features in a mathematical framework.

1.2. Statistical feedback: probability versus statistics

Along with backtracking, statistical feedback is probably the most pervasive technique used by my composing
programs. As contrasted with random procedures which seek to create unpredictability or lack of pattern, statistical
feedback actively seeks to bring a population of elements into conformity with a prescribed distribution. The basic
trick is to maintain statistics describing how much each option has been used in the past and to bias the decisions in
favour of those options which currently fall farthest short of their ideal representation. [9]

The dissonant counterpoint algorithm is a special case of what the composer and theorist Charles
Ames calls statistical feedback:3 the current outcome depends in some non-deterministic way upon
previous outcomes.4 Tenney’s algorithm is an elegant, compositionally motivated solution to this
significant, if subtle compositional idea. Statistical feedback is another form of reconciliation – that
of compositional method with musical results – and it has ramifications for any kind of computer-
or anything-else-aided-composition or art form. We first give a general (and mathematically
simple) introduction to this idea.
Imagine that we flip an unbiased coin N = 1000 times. We might end up with 579 heads and 421
tails. This is close to the equal statistical mean we might expect, want, or intend. With N = 10,000
flips, we would do better, in the sense that although the numbers of heads or tails will most likely
have larger differences from their expectation of 5000 than before, the fraction of heads is likely to
be closer to 1/2 than before. This illustrates the law of large numbers: the average of N outcomes
from an independent and identically distributed (iid) random process converges (almost surely)
to its expected value as N increases. For example, baseball wisdom holds that ‘any given team
on any given day can beat any other given team’. But because the number of games played in a
season, 162, is a pretty large number of trials, things generally (but not always) work out well for
better teams.
But what about a more local observation of a small number of trials, or frame? For example,
some run of ten flips might yield:
HTHHHHHTHT

This statistical frame contains, not surprisingly, something worse: seven Hs, three Ts.5 Nothing
in our method suggests that we want that: the act of flipping an unbiased coin most likely (but not
unequivocally) suggests that we desire a uniform distribution of outcomes. The random process
creates a disjunct between compositional intention and statistical outcome.
Composers have long used probability distributions, but have not often worried about the
conformance of observed statistics to probabilistic composition method over short time frames,
what Ames calls ‘balance’.6 This is perhaps due to the typically small populations used in a
piece of music, or because of a greater focus on method itself. Ames’ work suggests a variety of
ways to gain compositional control over this relationship. Statistical feedback ‘colors’ element
probabilities so that over shorter time frames, the statistics (results) more closely correspond to
the specified probabilities. A scientist might call this variance reduction; we will analyse this in
Section 2.3.
Let us return to the ten coin flips. We had seven Hs, three Ts. Using statistical feedback, we
can compensate so that our frame of, say, twenty trials is statistically better. The obvious thing to
do is positively bias the probabilities of depauperate selections. For instance, we might now use
for the eleventh toss p(H) = 0.3 and p(T) = 0.7, favouring the selection of a T. To paraphrase
Ames, we use the preceding statistics as an input to the generating probability function.
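As a minimal sketch of this idea (our own illustration in MATLAB/Octave, not Ames’ code), the following fragment flips a coin whose probability of heads at each toss is nudged towards whichever side currently falls short of its ideal share; the feedback-strength parameter k is an assumption introduced here for illustration (k = 0 recovers ordinary iid flips).

T = 20; k = 1;                   % number of tosses; feedback strength (k = 0 gives ordinary iid flips)
nH = 0; seq = '';                % running count of heads; record of the tosses so far
for t = 1:T
  deficitH = (t-1)/2 - nH;                           % how far heads falls short of its ideal share
  pH = min(max(0.5 + k*deficitH/max(t-1,1), 0), 1);  % biased probability of heads for this toss
  if rand < pH
    nH = nH + 1; seq(end+1) = 'H';
  else
    seq(end+1) = 'T';
  end
end
fprintf('%s  (%d heads out of %d)\n', seq, nH, T);

With k = 1, after seven heads and three tails the probability of heads on the eleventh toss is 0.5 − 2/10 = 0.3, matching the example above.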

2. The dissonant counterpoint algorithm

2.1. Informal description

Tenney often discussed his interest in ‘models’, and his criteria were based on how the brain, the ear,
the human accomplished something. His theories and compositional algorithms generally placed
a high priority on the success of the theory or algorithm in elucidating some cognitive process.
The algorithm described here reflects this design goal in an efficient and elegant way. A list of
n values is maintained, one for each pitch element, which will be interpreted as relative selection

probabilities. We initialize these to be all equal, then proceed as follows:

(1) Select one element from the list randomly, using the values as relative probabilities for
choosing each element,
(2) Set the selected element’s value to zero,
(3) Increase the values of all the other elements in some deterministic way, for instance by adding
a constant,
(4) Repeat (go back to step 1)

The algorithm is deceptively simple. Note that once selected, an element is temporarily removed
from contention (its probability is zero). That element and all other unselected elements become
more likely to be picked (their probabilities climb) on successive trials or ‘time steps’7 of the
algorithm. The longer an element is not picked, the more likely it is that it will be picked.8
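As a concrete illustration of steps (1)–(4), take n = 3 elements and the add-a-constant rule (adding 1 at each step). The values start at (1, 1, 1), so each element is chosen with probability 1/3. Suppose element 2 is chosen: the values become (2, 0, 2), and element 2 cannot immediately recur. If element 1 is chosen next, the values become (0, 1, 3), so that element 3 (unchosen the longest) is now three times as likely as element 2, while element 1 is temporarily excluded.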
Tenney’s use of this algorithm is an extension and abstraction of one particular aspect of the
compositional technique of Ruggles and/or Crawford Seeger, that of non-deterministic non-
repetition of pitch-class or interval-class. Figure 1 provides the simplest possible example: only
pitch-classes are chosen by the algorithm. There is no explicit control of intervallic distribution
(which would, of course, be of concern to Ruggles or Crawford Seeger). We give a more complex
musical composition using this basic algorithm in Figure 2.
Let us say we run the algorithm for 100 time steps, and then re-run it from the same initial values
again for 100 time steps: due to the randomness in step 1, we will get different sequences (but with
similar statistics). There is an important difference between sequences produced in this way and
those produced by an ‘un-fedback’ random number generator with statistically independent (iid)
trials. The statistical feedback reduces the sequence-to-sequence fluctuations (i.e. variance about
the expectation) in the total number of occurrences of any given element over the 100 time steps,
when compared with that of the un-fedback case. In Section 2.3, we will also show that, depending
on the increment rule, the generated sequences exhibit different kinds of quasi-periodicity and
‘memory’, as measured by decay of the so-called autocorrelation, over durations much longer
than the time to cycle through the n elements.

Figure 1. A simple melody generated by the algorithm, using a linear growth function (see next section), with n = 12
pitch elements lying in one octave. (Accidentals carry through the measure unless explicitly cancelled.)

Figure 2. A short example piece, a trio for any three bass instruments. Each instrument plays one of four pitch-classes,
which are selected by the linear version of the dissonant counterpoint algorithm. Durations are chosen by a narrow
Gaussian function whose varying mean follows a curve which begins quickly, gets slower, and then speeds up again. A
simple set of stochastic functions determine the likelihood of octave displacement for each voice.

2.2. Formal description

By constructing a slightly more general model than sketched above, we open up an interesting
parameter space varying from random to deterministic processes, that includes some of Tenney’s
work as special cases. For the remainder of Section 2, we assume more mathematical background.

For each element i = 1, ..., n we maintain a count c_i describing the number of time steps
since that element was chosen. We define a single growth function f : Z^+ → R^+ which acts on
these counts to update the relative selection probabilities following each trial. Usually, we have
f(0) = 0 (which forbids repeated elements), and f non-decreasing. We also weight each element
i = 1, ..., n with a fixed positive number w_i whose effect is to bias the selection probabilities
towards certain elements and away from others. We now present a pseudo-code which outputs a
list {a_t}_{t=1}^T containing the element chosen at each timestep t = 1, ..., T.

Dissonant counterpoint algorithm

input parameters: weight vector {w_i}_{i=1}^n, growth function f
initialize: c_i = 1, i = 1, ..., n
for timestep t = 1, ..., T do
    compute probabilities: p_i = w_i f(c_i) / Σ_{k=1}^n w_k f(c_k), i = 1, ..., n
    randomly choose j from the set 1, ..., n with probabilities {p_i}_{i=1}^n
    update counts: c_j = 0, and c_i = c_i + 1 for i = 1, ..., n, i ≠ j
    store chosen note in the output list: a_t = j
end for

The normalizing sum in the denominator of the expression for p_i merely ensures that the total
probability is 1.
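The following MATLAB/Octave fragment is a minimal, single-run transcription of this pseudo-code, a simpler (un-vectorized) counterpart of the code given in the Appendix; the particular settings n = 12, T = 100, uniform weights, and a linear growth function are illustrative choices of ours, not values taken from any of Tenney’s pieces.

n = 12; T = 100;                  % number of elements and of time steps (illustrative)
w = ones(n,1);                    % weights (uniform here)
f = @(c) c;                       % growth function (the linear case)
c = ones(n,1);                    % counts: time steps since each element was last chosen
a = zeros(1,T);                   % output list of chosen elements
for t = 1:T
  fc = w .* f(c);                 % unnormalized selection values w_i f(c_i)
  p = fc / sum(fc);               % selection probabilities
  j = find(rand < cumsum(p), 1);  % draw one element by inverse-CDF sampling
  c = c + 1; c(j) = 0;            % reset the chosen count to zero, increment all others
  a(t) = j;                       % store the chosen element
end
disp(a)

Substituting other growth functions for f (see Sections 2.3 and 3) changes the temporal behaviour of the output without altering the rest of the loop.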
One simple but flexible form for the growth function is a power law,
f(c) = c^α   (1)

for some power α ≥ 0. For now, we also fix equal weights w_i = 1 for all i. Note that α = 1 gives the
linear case where the relative selection probabilities are the counts themselves. A typical evolution
of the counts c_i that form the core of the algorithm, for this linear case, is shown in Figure 3(a). A
typical output sequence a_t is shown graphically in Figure 3(b) (also see melody Figure 1). A large
reduction in variance of element occurrence statistics, relative to uniform random iid element
sequences, is apparent in Figure 3(c).

Figure 3. Simulation of the dissonant counterpoint algorithm for the simplest linear case (power α = 1), and uniform
weights w_i = 1 for i = 1, ..., 4. (a) Greyscale image showing counts c_i for each element i = 1, ..., 4 versus time t
horizontally, with white indicating zero and darker shades larger counts. (b) Graph of elements selected a_t versus time t.
(c) Distribution of N_1(500), the total number of occurrences of element 1 in a run of 500 time steps, shown as a histogram
over many such runs; shaded bars are for the dissonant counterpoint algorithm, white bars are for random iid element
sequences with uniform distribution p_i = 1/4 for i = 1, ..., 4 (note the much wider histogram).
Large powers, such as α > 5, strongly favour choosing the notes with the largest counts, i.e.
those which have not been selected for the longest time. Conversely, taking the limit α → 0 from
above gives a process which chooses equally among the n − 1 notes other than the one just selected;
because of its relative simplicity this version allows rigorous mathematical analysis (Section 2.4).
Observe that α < 1 leads to concave functions, i.e. with everywhere negative curvature (extending
f to a function on the reals we would have f′′ < 0), and that α > 1 gives convex functions, positive
curvature (f′′ > 0). These cases are illustrated by Figure 4, and their output is compared in the
next section. In Figure 5, we give a more complex musical example in which the power α, and
hence the sonic and rhythmic texture, changes slowly during the piece.

2.3. Curvature of the growth function

In this section, we investigate the effect of varying α, and introduce the autocorrelation function.
We will take the weights wi to be all equal. Figure 6(a) shows, for the case n = 4, sequences
of algorithm outputs for a logarithmic range of α values. (Behaviours for other numbers of
elements are qualitatively very similar.) The linear case, where α = 1, lies half-way up the figure.


Figure 4. Illustration of three types of growth function f , depending on choice of power law α in Equation (1).

Figure 5. In this short quartet, 20 elements (percussion sounds) are selected by the algorithm, and distributed in the
hocket fashion to the four instruments. Durations are selected by the algorithm independently from a set of nine distinct
values. Durations and elements are selected independently, by different power functions of the form (1). The power α is
interpolated over the course of the piece from 1 to some very high power (or vice versa). In the case of durations, the
exponent begins high (maximum correlation) and ends at 1 (little correlation). In the case of the percussion elements, the
interpolation goes in the other direction. An exponentially decreasing weight function is used for durations, favouring
smaller values.

Note that the larger α becomes, and consequently the more positive the curvature of the growth
function, the more temporal order (repetitive quasi-periodic structure) there is in the occurrence
of a given element. The sequences become highly predictable, locking into one of the n! possible
permutations of the n elements for a long period of time, then moving to another closely related
permutation for another long period, and so forth. We may estimate9 this typical locked time
period, when it, and n and α, are all large, as e^{α/n}. Thus, to achieve the same level of temporal
order with more elements, α would need to be increased in proportion to n.
What is the effect of varying α on the resultant statistics of a particular element? The number
of times the element i occurs in a time interval (run) of length T is defined as

N_i(T) := #{t : a_t = i, 1 ≤ t ≤ T},   (2)

Figure 6. Dependence of periodic correlations on power law α. (a) Occurrence of element 1 (black) for n = 4 elements,
versus time 1 ≤ t ≤ T = 500 horizontally, for 100 values of α spanning logarithmically the vertical direction from low
(concave) to high (convex). Results are similar for the other elements. (b) Standard deviation of the number of occurrences
of element 1 in the run of length T = 500 (shown on a linear scale), as a function of α on the same vertical axis as in a).

N_i(T) varies for each run of the algorithm, and is therefore a random variable, with mean (in
the case of equal weights) T/n and variance Var[N_i(T)]. In Figure 6(b), we show its standard
deviation (Var[N_1(T)])^{1/2} (indicating fluctuation size) against α, again for n = 4 and T = 500.
This was measured using many thousand runs of this length. The standard deviation is large for
low powers (concave case), and decreases by roughly a factor of 10 for α = 10 (convex case).
By comparison, the standard deviation for an iid uniform random sequence is larger than any
of these: since N_i(T) then has a binomial distribution with p = 1/n, it has a standard deviation
√(Tp(1 − p)) = 9.68...
The change of ‘rhythmic patterning’ seen above can be quantified using the autocorrelation in
time. Consider the autocorrelation of the element signal a_t, defined as

C_aa(τ) := lim_{T→∞} [Σ_{t=1}^T (a_t − ā)(a_{t+τ} − ā)] / (T Var[a]),   (3)

where ā = (n + 1)/2 is the mean, and Var[a] is the variance of the element signal. Figure 7
shows C_aa(τ) plotted for four types of growth function, now for n = 6. For α = 0, we obtain the
max-1 rule discussed in the next section: it has almost instantaneous decay of correlations, i.e. its
‘memory’ is very short. For α = 1, the linear model shows anti-correlation for the first few time
steps, and virtually no memory beyond τ = n. For a high α (convex growth function), the tail of the
autocorrelation extends much further (longer memory) up to τ = 50 and beyond. Periodic peaks
which soften with increasing τ also indicate quasi-periodic structure with a dominant period n.
Figure 7. Autocorrelation functions measured from the dissonant counterpoint algorithm (with n = 6 elements, and
equal weights) for four choices of growth function: three power laws, and an exponential law used in Section 3.1.
Panels: (a) α = 0 (max-1); (b) α = 1 (linear); (c) α = 8 (highly convex); (d) f(c) = 2^c (exponential).

For this n, the exponential function (9) discussed in Section 3.1 gives correlation decay roughly
similar to that of a power law model with α ≈ 4.
On a technical note, a well-known mathematical result from stochastic processes (related to
Einstein’s fluctuation–dissipation relation in physics [10, Equation (2.1)]) equates the rate of
growth with T of Var[N_i(T)] to the area under (or sum over τ of) the autocorrelation function
graph.10 It is surprising that the above results on variance of N_1(T) imply that for large α, the
signed area under the autocorrelation graph is actually smaller than for small α, despite the fact
that the tails extend to much longer times τ . It would be interesting to find an explanation for this.

2.4. Vanishing power: ‘max-1’ version and the effect of weights

When α = 0, the growth function (1) becomes the function

f(c) = 0 for c = 0, and f(c) = 1 for c = 1, 2, ...   (4)
This max-1 rule is the linear algorithm truncated at a maximum value of 1. As discussed above,
its element statistics have a large variance and an almost instant decay in the autocorrelation (i.e.
almost no memory). The algorithm chooses between all n − 1 notes other than the current one,
weighted only by their corresponding weights w_i. For convenience, in this section, we assume
these weights have been normalized, thus

Σ_{i=1}^n w_i = 1.   (5)

Since no account is taken of the number of counts each eligible note has accumulated, the algorithm
becomes what is known as a Markov chain [11], with no explicit memory of anything other than

the current selected element. Its n-by-n transition matrix M then has non-negative elements

M_ij = w_i/(1 − w_j) for i ≠ j, and M_ij = 0 for i = j,   (6)

which give the probability of element i being selected given that the current element is j. By using
Equation (5), one may verify the required column sum rule Σ_i M_ij = 1 for all j.
Recall that weights w_i were included in the algorithm to give long-term bias towards various
elements. So, how do the long-term frequencies of elements depend on these weights? The
relationship is not trivial: frequencies are not strictly proportional to weights. The relative
frequencies tend to the Markov chain’s so-called steady-state probability distribution p := {p_i}_{i=1}^n
(whose components p_i are normalized so that Σ_{i=1}^n p_i = 1), for which we can solve11 as follows.

Theorem 1  The max-1 rule with normalized weights {w_i}_{i=1}^n and Markov transition matrix (6)
has a unique steady-state distribution given by

p_i = w_i(1 − w_i) / Σ_{j=1}^n w_j(1 − w_j),   i = 1, ..., n.   (7)

Proof  Consider a candidate distribution vector v ∈ R^n \ {0}. We multiply v by a non-zero scalar
to give it the more convenient weighted normalization Σ_{i=1}^n v_i/(1 − w_i) = 1. Using this and
Equation (6), we compute the ith component of (M − I)v as follows:

(Mv − v)_i = w_i Σ_{j≠i} v_j/(1 − w_j) − v_i = w_i [1 − v_i/(1 − w_i)] − v_i = w_i − v_i/(1 − w_i).

The condition that this vanish for all i = 1, ..., n, in other words v_i = w_i(1 − w_i), is equivalent
to the statement that v is an eigenvector of M with eigenvalue 1 and therefore an (unnormalized)
steady state vector. Hence the eigenvector is unique up to a scalar multiple, i.e. this eigenvalue
is simple. Finally, normalizing (in the conventional sense) this formula for v_i gives the
expression (7). □

In other words, with weighted versions of the algorithm, it turns out that statistical differences
in element frequencies are less pronounced than the corresponding differences in weights.
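Theorem 1 is easy to check numerically; the following MATLAB/Octave sketch (with an arbitrary illustrative weight vector) builds the transition matrix (6), extracts its eigenvalue-1 eigenvector, and compares it with formula (7).

n = 4; w = [0.1; 0.2; 0.3; 0.4];          % arbitrary normalized weights (an illustrative choice)
M = (w * (1./(1 - w))') .* (1 - eye(n));  % transition matrix (6): M_ij = w_i/(1 - w_j), zero diagonal
[V, D] = eig(M);
[dmin, k] = min(abs(diag(D) - 1));        % locate the eigenvalue equal to 1
p_eig = real(V(:,k)); p_eig = p_eig / sum(p_eig);   % steady state from the eigenvector
p_thm = w.*(1 - w) / sum(w.*(1 - w));     % steady state from formula (7)
disp([p_eig p_thm])                       % the two columns agree

For these weights, formula (7) gives p ≈ (0.129, 0.229, 0.300, 0.343): the resulting frequency ratios (roughly 1 : 1.8 : 2.3 : 2.7) are indeed flatter than the weight ratios 1 : 2 : 3 : 4.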
How frequently may one of the n elements occur? Even if we push one of the weights towards
1 at the expense of the others (which must then approach zero), the corresponding element may
occur no more than 1/2 of the time. Intuitively, this follows since repeated elements are forbidden,
so one element can be chosen at most every other timestep. Rigorously, we have the following.

Theorem 2  Let n ≥ 3. Then the max-1 rule with positive weights {w_i}_{i=1}^n has a steady-state
distribution whose components obey p_i < 1/2 for i = 1, ..., n.

Proof  Fixing i, we have, using the fact that 1 − w_j > w_i for all j ≠ i,

Σ_{j=1}^n w_j(1 − w_j) = w_i(1 − w_i) + Σ_{j≠i} w_j(1 − w_j) > w_i(1 − w_i) + w_i Σ_{j≠i} w_j.

The result then follows from Σ_{j≠i} w_j = 1 − w_i and Theorem 1. □

Figure 8. Greyscale image of counting function produced for the power-law with α = 6 and n = 12 pitch-classes, with
highly unequal weights w_i = i^5. T = 500 time steps are shown. This is discussed in Section 2.4.

For the case n = 2, the possibility p_i = 1/2 must also be allowed. The result carries over to the
general growth functions f with f (0) = 0, again for the simple reason that repetition is excluded.
If, however, the ‘drop-down’ value f (0) (the value to which a selected element is reset prior to
the next selection) is greater than zero, repetition becomes possible. In some of Tenney’s music
(such as the piece about which he first published a description of this algorithm, Changes), he
specifies a ‘very small’ drop-down value (Section 3).
Finally, we note that a wide variety of complex behaviours can result from combining the
convex (large-α) power law with unequal weights. This seems to result in a competition between
the tendency for locked-in permutations of all n elements due to the large α, and the strong bias for
heavily weighted elements. For example, Figure 8 illustrates a selection process among n = 12
elements, with weights strongly biased towards ‘high’ elements. The resulting behaviour consists
of disordered clusters of arpeggiated sequences. It is striking that even with such extremely unequal
weights (element 12 is 12^5 = 248,832 times more preferred than element 1), element 12 only
occurs a few times more often than element 1.

3. Examples from Tenney’s work12

Tenney’s interest in the ideas of dissonant counterpoint dates back to the 1950s, as evidenced
by pieces like Seeds (for ensemble) (1956; revised 1961) and Monody (for solo clarinet) (1959)
[12]. These pieces, while through-composed without the use of a computer, show his nascent
fascination with achieving what he later refers to – with respect to the early electronic works – as
‘variety’:
If I had to name a single attribute of music that has been more essential to my aesthetic than any other, it would be
variety. It was to achieve greater variety that I began to use random selection procedures in the Noise Study (more
than from any philosophical interest in indeterminacy for its own sake), and the very frequent use of random number
generation in all my composing programs has been to this same end. [13, p. 40]

Tenney began using the computer for his compositions in 1961. These works, produced at Bell
Laboratories, are among the first examples of computer music. Most dealt primarily with both the
new possibilities of computer synthesis and his ideas of hierarchical temporal gestalt formation
[14,15]. Yet he recognized that randomly generated events without memory of prior events would
not produce the variety (Cowell: non-tautology) that he wanted.
While the early computer pieces predate a formalization of the dissonant counterpoint algorithm,
the ‘seeds’ of this idea are clear in his description of an approach to pitch selection:
Another problem arose with this [Stochastic String] quartet which has led to changes in my thinking and my ways
of working, and may be of interest here. Since my earliest instrumental music (‘Seeds’, in 1956), I have tended to
avoid repetitions of the same pitch or any of its octaves before most of the other pitches in the scale of 12 had been
sounded. This practice derives not only from Schoenberg and Webern, and 12-tone or later serial methods, but may
be seen in much of the important music of the century (Varèse, Ruggles, etc).
In the programs for both the Quartet and the Dialogue, steps were taken to avoid such pitch-repetitions, even
though this took time, and was not always effective (involving a process of recalculation with a new random number,
when such a repetition did occur, and this process could not continue indefinitely). In the quartet, a certain amount
of editing was done, during transcription, to satisfy this objective when the computer had failed. [13]

Tenney continued to explore this process throughout his life, and began using the algorithm
described in this paper as early as the 1980s. The first published description of it occurs in a
sentence in his article on Changes (1985):
Just after a pitch is chosen for an element, [the probability of] that pitch is reduced to a very small value, and then
increased step by step, with the generation of each succeeding element (at any other pitch), until it is again equal to
1.0. The result of this procedure is that the immediate recurrence of a given pitch is made highly unlikely (although
not impossible). [6, p. 82].

Note that the above description seems to describe a linear model with a small positive ‘drop-
down’ value, and truncation, i.e.

f(c) = min[ε + ac, 1]   (8)

for a > 0 and some small ε > 0. The fact that ε is positive allows pitches to be repeated.
From the composition of Changes in 1985 until 1995, Tenney wrote a number of computer-
generated pieces, including Road to Ubud (1986; revised 2001), Rune (1988), Pika-Don (‘flash-
boom’) (1991) and Stream (1991), each of which warrant further investigation with respect to
the use of the dissonant counterpoint algorithm. However, many of Tenney’s works after 1995
implement the dissonant counterpoint algorithm explicitly including: Spectrum 1–8 (1995–2001);
Diaphonic Study (1997); Diaphonic Toccata (1997); Diaphonic Trio (1997); Seegersong #1 and
#2 (1999); Prelude and Toccata (2001); To Weave (a meditation) (2003); Panacousticon (2005);
and Arbor Vitae (2006).13
At a certain point, the dissonant counterpoint algorithm simply became Tenney’s de facto
pseudo-random element chooser. He used it to determine pitches (Seegersongs and others),
timbre/instrumentation (Spectrum pieces, Panacousticon), and register (To Weave), and even
movement through harmonic space (Arbor Vitae). Early drafts of computer programs written to
generate Spectrum 6–8 are labelled with the word diaphonic. Tenney used that term to refer to
most of his computer code after about 1995. Specific titles notwithstanding, he may have consid-
ered many or all of these works as ‘diaphonic’ studies after Ruth Crawford Seeger’s four studies
from the early 1930s (taking their name from Charles Seeger’s earlier use of the term to mean,
roughly: ‘sounding apart’ [8]). Over time, the algorithm’s role seems to change from that of a
principal formal determinant (as in the Seegersongs and To Weave) to an embedded, deep-level
selection technique which was combined with and modulated by larger formal processes.

3.1. Seegersongs

Seegersong #1 and #2 are perhaps the clearest examples of Tenney’s use of the algorithm.
These pieces exemplify Tenney’s integration of the algorithm with larger formal concerns. Both
Seegersongs used the convex growth function that Tenney most commonly employed in these
later works, in this case by repeated doubling, thus

f(c) = 2^c.   (9)

Seegersong #1 and #2 explicitly model Ruth Crawford Seeger’s approach to dissonant coun-
terpoint in the avoidance of pitch-class repetition. However, they also suggest other aspects of
her work, such as the technique of ‘phrase structure’ discussed by Charles Seeger in the ‘Manual
…’ [2] and exemplified by Ruth Crawford Seeger in Piano Study in Mixed Accents (1930/31)
as well as by Johanna Beyer in her two solo clarinet suites (1932) [16–18]. As with Seeger’s
Piano Study in Mixed Accents, both Tenney’s Seegersongs consist of delimited phrases (or, in

Tenney’s terminology, gestalt sequences). Each phrase has an associated ascending or descend-
ing pitch-trajectory over some duration. This is achieved using a generalization of the dissonant
counterpoint algorithm of Section 2.2, in which the weights w_i are allowed to change with time
in a prescribed fashion, and therefore are labelled w_{i,t}. That is, the probabilities in the above
algorithm are computed according to

p_i = w_{i,t} f(c_i) / Σ_{k=1}^n w_{k,t} f(c_k),   i = 1, ..., n.   (10)

Tenney used weights w_{i,t} that decrease linearly with pitch distance from a pitch centre, giving
a triangularly shaped weight vector which reaches zero at a pitch distance of a tritone from the
centre. The centre itself moves in time in a piecewise linear fashion, with each linear trajectory
being a phrase. The resulting weights are shown as greyscale density in Figure 9. The moving
weights are used by the dissonant counterpoint algorithm to follow the desired registral trajectory.
The interpolation points defining the linear trajectories are themselves chosen randomly within a
slowly changing pitch range illustrated by the grey region in Figure 10(b).
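A sketch of how such a triangular, time-varying weight vector might be computed is given below (our reconstruction from the description above, not Tenney’s code; the pitch range and centre value are illustrative, and pitches are numbered in semitones so that a tritone spans 6 steps).

pitches = 60:96;                  % candidate pitch numbers in semitones (illustrative range)
centre_t = 78.5;                  % current value of the piecewise-linear centre trajectory
tritone = 6;                      % the weights reach zero a tritone (6 semitones) from the centre
w_t = max(0, 1 - abs(pitches - centre_t)/tritone);   % triangular weight vector w_{i,t}
% w_t then replaces the fixed weights when computing probabilities as in Equation (10)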
The large scale form of Tenney’s Seegersongs resembles Ruth Crawford Seeger’s Piano Study
in Mixed Accents, in which the registral profile similarly ascends and then descends (Figure 10(a)).
However, in Tenney’s re-imagining, the range’s upper limit follows the positive part of a smoothly
distorted cosine function peaking at the golden mean division of the piece’s duration. The lower
limit of the pitch range remains constant (Figure 10(b)).

3.2. The Spectrum pieces

The pitches in the Spectrum series are derived from a harmonic series with a fixed fundamental
[19]. In these pieces, Tenney also used the algorithm to determine non-pitch parameters. In the
Spectrum works that use percussion, the algorithm selects, for those instruments, from a set of
pitched and unpitched sounds. When the algorithm selects a pitch that cannot be played accurately
by a pitched percussion instrument, a number is returned indicating an unpitched percussion sound.
In this case, ‘accurate’ is defined as a pitch in equal-temperament that is more than 5 cents from its
cognate harmonic. In the instructions, Tenney states that ‘numbers in place of note-heads, denote
non-pitched sounds or instruments to be freely chosen by the player’ [20].


Figure 9. Seegersong #2 excerpt showing pitch profile (solid lines) and dynamic weights w_{i,t} (grey density: white is
zero and darker larger positive values) used in the dissonant counterpoint algorithm (see text for how w_{i,t} is generated).

Figure 10. Pitch profiles of (a) Ruth Crawford Seeger’s Piano Study in Mixed Accents (less than a minute and a half
long, about a six octave range), compared against (b) Tenney’s Seegersong #2 (12 min long, about a three octave range).
In each case time is horizontal (seconds) and pitch vertical (semitones, where 60 is middle C). The grey region shows
the time-dependent pitch range used; see text for discussion of the algorithms. In (b) the upper bound is proportional to
25 − 13 cos[2π(t/t_max)^{1.44}], and the vertical lines show the start and end of the excerpt in Figure 9.

In the Spectrum pieces with piano and/or harp, those fixed-pitch instruments are retuned
and thus not subject to the process described above. However, because these instruments are
polyphonic, the counts for more than one pitch (in the selection process) are reset to zero simulta-
neously. That is, all the notes for the chord are treated as having been selected. Tenney chooses the
number of chord tones stochastically, based on a function of upper and lower density limits over
time. All other parameters of the Spectrum pieces such as duration, loudness and pitch (which
integrates the dissonant counterpoint algorithm) are determined in similar ways (for more on
parametric profiles see [15]).
For the note selection algorithm, the Spectrum pieces use a growth function similar to that of
the Seegersongs, but with a larger base for exponential growth:

f(c) = 4^c.   (11)

Each part in each Spectrum piece is generated individually. Since the larger the base, the more
convex the function (this being similar to the effect of a larger power explored in Section 2.3),
note selection will tend to be more correlated than in the Seegersongs. Perhaps because there is
a large variety of instruments in the Spectrum pieces, Tenney might have used the higher base in
order to give individual voices a more coherent, even melodic character.

Figure 11. To Weave (a meditation) score excerpt.

3.3. To Weave (a meditation)

In To Weave (a meditation), for solo piano, the selection algorithm determines not only pitch-
class but also register, or what can be called ‘voice’. Three such voices are used in the piano part
(Figure 11), with the max-1 algorithm of Section 2.4.
The algorithm determines voices note by note, where a voice is defined as one of three registers
(low, middle or high). Each voice is thus a possible element for selection. For each note, the
two voices not selected for the previous note become equally probable and the selected voice’s
probability is set to 0. In other words, if a pitch occurs in the low register, then the next pitch
must occur within one of the two other registers (middle or high). The stochastic pitch sequence is
woven ‘non-tautologically’ into a three-voice virtual polyphony (thus the titular pun on the name
of the pianist, Eve Egoyan, for whom it was written).
The growth function (2^c) is the same as the Seegersongs, but in To Weave pitch-class probabilities
are incremented both globally and locally (for each individual voice). These two values are
multiplied to determine the pitch-class probabilities ultimately used to select a pitch-class. Once
selected, that pitch-class is placed within the range of the currently selected voice. As in the
Seegersongs, the ranges of the three voices change over time, peaking at the golden mean point
of the piece. According to Tenney:
Waves for Eve, wave upon wave, little waves on bigger waves, et cetera, but precisely calibrated to peak at the
phi-point of the golden ratio. To weave: a three-voice polyphonic texture in dissonant counterpoint, with a respectful
nod in the direction of Carl Ruggles and Ruth Crawford Seeger. [21]

3.4. Panacousticon

In Panacousticon for orchestra, the algorithm selects both pitch-class and instrument. As in the
Spectrum pieces, the pitches are derived from the harmonic series on one fundamental. Both
implementations of the algorithm (for pitch-class and instrumentation) use linear growth functions
with an upper bound of the form

f(c) = min[c, 5].   (12)
Thus after an element is chosen, its probability reaches a maximum if it is not chosen within
the next five selections.14 For each note, the dissonant counterpoint algorithm is combined with
another procedure that determines the register of the chosen pitch-class and which instruments
are available to play the pitches (i.e. the instruments that are not already sounding and whose
range covers the determined pitch).
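For experimentation, the growth functions cited so far in this section can each be written as a one-line anonymous function and substituted for f in the Appendix code (a sketch; the drop-down value eps0 and slope aslope for the Changes model are illustrative placeholders, since Equation (8) leaves them unspecified).

eps0 = 0.05; aslope = 0.1;                 % illustrative drop-down value and slope for Equation (8)
f_changes  = @(c) min(eps0 + aslope*c, 1); % Changes, Equation (8): truncated linear, repeats possible
f_seeger   = @(c) 2.^c;                    % Seegersongs, Equation (9): exponential, base 2
f_spectrum = @(c) 4.^c;                    % Spectrum pieces, Equation (11): exponential, base 4
f_pana     = @(c) min(c, 5);               % Panacousticon, Equation (12): linear with upper bound 5
f_max1     = @(c) double(c > 0);           % max-1 rule of Section 2.4 (the power law with alpha = 0)

As written, Equations (8), (9) and (11) give f(0) > 0, so immediate repetition is unlikely but not impossible, while Equation (12) and the max-1 rule give f(0) = 0 and forbid it.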

3.5. Arbor Vitae

In his last work, Arbor Vitae for string quartet, Tenney uses the algorithm to explore complex
harmonic spaces using ‘harmonics of harmonics’, in perhaps his most unusual usage of this
selection method. The pitch material, or the harmonic space [4] of Arbor Vitae is complex.
However, ratios are generated via reasonably simple procedures, in a manner which recalls Lou
Harrison’s idea of ‘free style’ just intonation (such as in A Phrase from Arion’s Leap, Symfony in
Free Style, and At the Tomb of Charles Ives; see [22–26]).
The title is a metaphor for the work’s harmonic structure [27]. The dissonant counterpoint
algorithm first calculates roots, which are harmonics of a low B flat. These are treated as temporary,
phantom fundamentals. Next, the algorithm calculates branches which terminate in the pitch-
classes for sounding tones. Lower harmonics are biased for root calculation by assigning initial
probabilities (p_i proportional to 1/√i, where i is the harmonic number of the possible root).15 The
growth function depends on harmonic number i. The selection algorithm can be summarized as
that of Section 2.2, but with the probabilities computed via

p_i = f_i(c_i) / Σ_{k=1}^n f_k(c_k),   (13)

where f_i(c) now depends both on element i and counts c, as follows:

f_i(c) = 0 for c = 0;   f_i(c) = i^{-1/2} for c = 1;   f_i(c) = i^{-1/4} for c > 1.   (14)

Initial counts c_i are all set to 1, and counts are only updated for elements that have been selected
at least once.
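A short MATLAB/Octave sketch of the root-selection probabilities (13)–(14) follows (our reading of the description above, not Tenney’s code; the set of candidate harmonic numbers and the example counts are arbitrary illustrative choices).

harm = [1 2 3 5 7 9 11 13];              % candidate root harmonic numbers (illustrative set)
c = [1 1 4 1 0 2 1 1];                   % example counts; harmonic 7 was just chosen (count 0)
fvals = zeros(size(harm));               % growth function (14), which depends on the harmonic number i
fvals(c == 1) = harm(c == 1).^(-1/2);    % f_i(1) = i^(-1/2)
fvals(c > 1)  = harm(c > 1).^(-1/4);     % f_i(c) = i^(-1/4) for c > 1; f_i(0) stays 0, so no immediate repeat
p = fvals / sum(fvals);                  % selection probabilities of Equation (13)
disp([harm; round(100*p)])               % each candidate root and its selection chance, in percent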
For branch selection, the set of possible elements are the primes – 3, 5, 7, 11 – of a given
root. Owing to the use of negative powers, the growth function becomes a kind of harmonic
distance measure (with higher primes less favoured). This tendency towards ‘consonance’ is
reflected in the bias towards selected roots which are closer to fundamentals, as well as in selected
branches which are closer to those roots. Tenney seems to be imposing a kind of natural evo-
lutionary ‘drag’ on the tendency of the harmonic material in the piece to become too strongly
disassociated with its fundamentals. This ensures a kind of tonality, albeit a sophisticated and
ever-changing one.
In other ways, compositional procedures of Arbor Vitae resemble those of Seegersongs, Pana-
cousticon, To Weave (a meditation), and the Spectrum pieces. Final pitch determinations use
time-variant profiles of pitch ranges. Instrument selection, as in some of the other pieces, is
performed by the dissonant counterpoint algorithm. As in Panacousticon, the software first deter-
mines whether an instrument’s range can accommodate the selected pitch. In contrast to To Weave
(a meditation), the growth function for instrument selection in Arbor Vitae has f(0) = f(1) = 0:
in general no instrument may be chosen until two other instruments have played. For a more
detailed discussion of this piece, see [27].

4. Conclusion

The dissonant counterpoint algorithm is, in some respects, just a simple method for choosing
from a set of elements to give a random sequence with certain behaviours. In other respects, it is
an ingenious way of marrying both an important historical style (that of Ruggles/Seeger/Cowell
‘dissonant counterpoint’, or ‘diaphony’) with a more modern and sophisticated, but poorly under-
stood set of ideas from computer-aided music composition (Ames’ statistical feedback). The
algorithm elegantly embeds the latter idea in a manner characteristic of Tenney’s interest in the
idea of a model (a method that reflects how humans do or hear something).
We have analysed the algorithm mathematically, explaining how a convex/concave choice of
the growth function controls the correlation in time (rate of memory loss) of the sequence. This can
lead to surprising musical effects, such as quasi-rhythmic permutations with long-range order. We
included a formula explaining how weights determine the statistical selection frequencies (in the
case of the max-1 growth function). We illustrated a few of the algorithm’s wide variety of musical
possibilities with two example compositions. Combined with our discussion of the algorithm’s
role in Tenney’s work, this work suggests ways in which it might be used further, in experimental
and musical contexts.
For example, none of the versions of the algorithm presented above consider an ‘ordered’ set
of elements in which the proximity of one to the other is significant. Simple examples of this
are pitch-sets, registral values, durations, and so on. One might want to select from meaningful
regions of the set of elements (e.g. ‘shorter durations’, ‘higher pitches’). In a simple extension
to the algorithm, one of the authors (Polansky) has implemented what he calls ‘gravity’: not
only is the chosen element’s probability affected post-selection, but so are the probabilities of
surrounding elements. By defining the shape of the ‘gravitation’ (the width of the effect, the
slope of the effect curves, whether the ‘gravity well’ is negative or positive) one can increase or
decrease the probabilities of neighbourhood selection in a variety of ways. Polansky has used this
extensively in his piece 22 Sounds (for percussion quartet) [29].
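A minimal sketch of one possible ‘gravity’ rule is given below; the Gaussian well shape and its width and depth parameters are our own illustrative assumptions, not Polansky’s actual implementation. After element j is selected, the values of nearby elements in the ordered set are also pulled down, by an amount that falls off with distance from j.

n = 24; values = ones(1,n);             % an ordered element set (e.g. durations or registers)
sigma = 2; depth = 1;                   % width and depth of the gravity well (assumed values)
j = 10;                                 % index of the element just selected
well = depth * exp(-((1:n) - j).^2 / (2*sigma^2));   % Gaussian-shaped well centred on j
values = max(values .* (1 - well), 0);  % the chosen element drops to zero; its neighbours are reduced
% a negative depth would instead raise the neighbourhood of j (a 'positive' gravity well)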
The algorithm, in its simplest form, is meant to ensure a kind of maximal variety with a
minimum amount of computation. However, by varying explicit parameters, it can produce, in
both predictable and novel ways, a continuum of behaviours from completely non-deterministic
to completely deterministic.

Acknowledgements
Thanks to Dan Rockmore for his important help in describing the relationship of the mathematics to the aesthetic ideas
here. Kimo Johnson made many valuable suggestions, and helped work on some of the software. Thanks to Lau-
ren Pratt and the Estate of James Tenney for making valuable material available to us. AHB is funded by NSF grant
DMS-0811005.

Notes

1. All Tenney scores referenced in this article are available from Smith Publications or Frog Peak Music. Most of the
works mentioned are recorded commercially as well.
2. Tenney’s personal papers, however, include many descriptions and notes pertaining to his pieces. Most of his music
after about 1980 was written with the aid of a computer (after a long hiatus in that respect), and the software itself
is extant. In addition, Tenney frequently described his compositional methods to his students.

3. For a good explanation of this, see [28]. However, several of Ames’ other articles discuss it as well, including
[30–35].
4. Ames [28], in discussing his early compositional use of this technique, describes it more simply as ‘the trick of
maintaining statistics detailing how much each option has been used in the past, and of instituting decision-making
processes which most greatly favour those options whose statistics fall farthest behind their intended distribution’.
Also see [36], which surveys Ames’ work (up until 1992), and contains an alternate mathematical formalization of
statistical feedback (p. 35).
5. And, from a more local, musical perspective, there are five Hs in a row, an example of what Ames refers to as
‘heterogeneity’ or ‘dispersion’ [28], and Polansky refers to as ‘clumping’ [37,38]. This is a slightly different, though
related, problem to that of monitoring the global statistics of a probability distribution, but the solution also may
employ statistical feedback. These terms refer to the difference between the following two, equally well-distributed
coin toss statistics:
HHHHHTTTTT and HTHTHTHTHT
6. See, for example [28,30,31].
7. We use ‘time steps’ to refer to repetitions of the algorithm; note that this does not necessarily imply regular time
intervals in a musical sense.
8. Note that to fully ‘avoid repetitions of any tone until at least six progressions have been made’ (as suggested by
Seeger), an element’s probability would have to remain at zero for six trials after its choice. In Section 2.2, we show
how this idea is incorporated in the general description of the algorithm, using growth functions with a high power
to produce a similar result.
9. This is done by estimating the ratio between the selection probabilities of the elements with n − 1 versus n − 2
counts as [(n − 1)/(n − 2)]^α ≈ (1 + 1/n)^α ≈ e^{α/n}.
10. Strictly speaking, the relevant area is Σ_{τ∈Z} C_ii(τ), where C_ii is the autocorrelation of the binary signal which is 1
when element i is selected, and 0 when any other element is chosen. We have checked that C_ii(τ) and C_aa(τ) look
very similar.
11. Some intuition as to why an analytic solution is possible here is that M may be factored as the product of three
simple matrices: M = diag{w_i}(1 − I)diag{(1 − w_i)^{-1}}, where 1 is the n-by-n matrix with all entries 1.
12. Please see [41–47] for a selected discography of the works discussed in this section.
13. For these latter pieces (unlike those between 1985–1995), the computer code is available. Other pieces may use the
algorithm in some way that is not yet known.
14. Generally, the effect of such an upper bound on f is to equalize the probabilities of all elements not chosen within
the upper bound number of time steps. This may be viewed as negative curvature as in Figure 4, and it serves to
reduce the already small autocorrelation of the resulting sequence at large times. An upper bound of 1 would be the
same as a power law α = 0, and is the max-1 rule of Section 2.4.
15. In fact, rather than handling counts as in Section 2.2, Tenney directly updated relative probabilities pi , normalizing
them to sum to 1 whenever random selection was needed.

References

[1] H. Cowell, New Musical Resources, Something Else Press Edition, New York, (reprinted 1969), 1930.
[2] C. Seeger, Manual of dissonant counterpoint, in Studies in Musicology II: 1929–1979, A. Pescatello, ed., University
of California Press, Berkeley (republished 1994), 1930.
[3] J. Tenney, Liner notes to James Tenney: Bridge and Flocking, CD, hat ART CD 6193, 1996.
[4] J. Tenney, John Cage and the theory of harmony, in Soundings, P. Garland, ed., Vol. 13, Soundings Press, Santa Fe,
1983.
[5] J. Tenney, Conlon Nancarrow’s Studies for Player Piano, Soundings Press, Conlon Nancarrow: Selected Studies for
Player Piano, Berkeley, California, 1977.
[6] J. Tenney, About ‘Changes’: Sixty-four studies for six harps, Perspect. New Music 25(1–2) (1987), pp. 64–87.
[7] J. Tenney, The chronological development of Carl Ruggles’ melodic style, Perspect. New Music 16(1) (1977),
pp. 36–69.
[8] C. Seeger, On dissonant counterpoint, Mod. Music 7(4) (1930), pp. 25–31.
[9] C. Ames, Tutorial on automated composition, Proceedings of the ICMC, International Computer Music Association,
Urbana, IL, 1987, pp. 1–8.
[10] L.A. Bunimovich, Existence of Transport Coefficients, Encyclopaedia Math. Sci., Vol. 101, Springer, Berlin, 2000,
pp. 145–178.
[11] D. Stroock, An Introduction to Markov Processes, Springer, Heidelberg, 2005.
[12] L. Polansky, The early works of James Tenney, in Soundings, P. Garland, ed., Vol. 13, Soundings Press, Santa Fe,
1983.
[13] J. Tenney, Computer Music Experiences, 1961–1964, Electron. Music Rep. 1(1) (1969), pp. 23–61.
[14] J. Tenney and L. Polansky, Hierarchical temporal gestalt perception in music: A metric space model, J. Music Theory
24(2) (1980), pp. 205–241.
[15] J. Tenney, Meta + Hodos, Frog Peak Music, Oakland, CA, 1964 (republished 1986).
[16] M. Boland, Johanna Beyer: Suite for Clarinet Ib, Frog Peak Music (A Composers’ Collective), Annotated performance
ed., Hanover, NH, 2007.

[17] M. Boland, Experimentation and process in the music of Johanna Beyer, VivaVoce, Journal of the Internationaler
Arbeitskreis Frau and Musik (in German) 76 (2007). Available at http://www.archiv-frau-musik.de.
[18] L. Polansky, Liner notes to ‘Sticky Melodies’: The Choral and Chamber Music of Johanna Magdalena Beyer, CD,
New World Records 80678, 2008.
[19] R. Gilmore, Liner notes to James Tenney: Spectrum Pieces, CD, New World Records 0692, 2009.
[20] J. Tenney, Spectrum 8 (for Solo Viola and Six Instruments), Frog Peak Music (A Composers’ Collective), Hanover,
NH, 2001.
[21] J. Tenney, Liner notes to Weave: Eve Egoyan, CD, EVE0106, 2005.
[22] L. Harrison, A Phrase for Arion’s Leap, Recording on Tellus #14: Just Intonation, (re–issued recording, 1986),
1974.
[23] L. Polansky, Three Excellent Ideas, 2009, Talk given at the Drums Along the Pacific Festival, Cornish Institute
for the Arts, Seattle. Available at http://eamusic.dartmouth.edu/ larry/misc_writings/talks/three_excellent_ideas/
three_excellent_ideas.front.html.
[24] L. Polansky, Item: Lou Harrison as a speculative theorist, in A Lou Harrison Reader, P. Garland, ed., Soundings
Press, Santa Fe, 1987.
[25] L. Polansky, Paratactical tuning: An agenda for the future use of computers in experimental intonation, Comput.
Music J. 11(1) (1987), pp. 61–68.
[26] L. Miller and F. Lieberman, Lou Harrison: Composing a World, Oxford University Press, New York, 1998.
[27] M. Winter, On James Tenney’s ‘Arbor Vitae’ for string quartet, Contemp. Music Rev. 27(1) (2008), pp. 131–150.
[28] C. Ames, Statistics and compositional balance, Perspect. New Music 28(1) (1991), pp. 80–111.
[29] L. Polansky, 22 Sounds (for Percussion Quartet), Frog Peak Music (A Composers’ Collective), Hanover, NH, 2010.
[30] C. Ames, Thresholds of confidence: An analysis of statistical methods for composition, part 2: Applications, Leonardo
Music J. 6 (1996), pp. 21–26.
[31] C. Ames, Thresholds of confidence: An analysis of statistical methods for composition, part 1: Theory, Leonardo
Music J. 5 (1995), pp. 33–38.
[32] C. Ames, A catalog of sequence generators, Leonardo Music J. 2 (1992), pp. 55–72.
[33] C. Ames, A catalog of statistical distributions: Techniques for transforming random, determinate and chaotic
sequences, Leonardo Music J. 2 (1991), pp. 55–70.
[34] C. Ames, Automated composition in retrospect, Leonardo Music J. 20(2) (1987), pp. 169–185.
[35] C. Ames, Two pieces for amplified guitar, Interface: J. New Music Res. 15(1) (1986), pp. 35–58.
[36] M. Casey, HS: A symbolic programming language for computer assisted composition, Master’s thesis, Dartmouth
College, 1992.
[37] L. Polansky, No replacement (85 verses for Kenneth Gaburo), Perspect. New Music 33(1–2) (1995),
pp. 78–97.
[38] L. Polansky, More on morphological mutations, Proceedings of the ICMC, International Computer Music Association,
San Jose, 1992, pp. 57–60.
[39] The MathWorks, Inc., MATLAB Software, Copyright (c) 1984–2009. Available at http://www.mathworks.com/
matlab.
[40] J.W. Eaton, GNU Octave Manual, Network Theory Limited, 2002. Available at http://www.octave.org.

Selected discography of cited Tenney compositions

[41] Q. Bozzini, Arbor Vitae, CD, CQB 0806, 2008.


[42] E. Egoyan, Weave: Eve Egoyan, CD, EVE0106, 2005.
[43] M.P. Ensemble, Pika-don, CD, hat(now)ART 151, 2004.
[44] M. Lancaster, io, CD, New World Records 80665, 2009.
[45] E. Radermacher, G. Schneider, M. Werder, and T. Bächli, James Tenney: Bridge and Flocking, hat ART CD 6193,
1996.
[46] M. Sabat and S. Clark, Music for Violin & Piano, CD, hat(now)ART 120, 1996.
[47] J. Tenney, James Tenney: Selected Works 1961–1969, CD, New World Records 80570, 2003.
[48] The Barton Workshop, James Tenney: Melody, Ergodicity and Indeterminacy, CD, Mode 185, 2007.
[49] The Barton Workshop, James Tenney: Spectrum Pieces, CD, New World Records 80692, 2009.

Appendix. Matlab/Octave code examples

Here we give the code that we use to simulate the dissonant counterpoint algorithm and collect statistics. It may be run
with MATLAB [39] or its free alternative Octave [40]. To measure autocorrelation accurately, we need many (e.g. 10^5)
samples. We generate N realizations of length T simultaneously, since this is equivalent to, but (due to vectorization) much
more efficient than, generating a single long realization of length NT . This also allows us to see ‘vertical’ correlations
between different runs launched with the same initial conditions (as in Figure 6).

n = 12;              % # notes or elements
N = 500;             % # simultaneous realizations
T = 500;             % # timesteps
f = @(c) c.^4;       % fourth power law growth function (convex)
w = ones(n,1);       % selection bias weights (column vector)
a = zeros(N,T);      % histories of which notes chosen
c = ones(n,N,T+1);   % histories of counts for all notes
for t=1:T
  fc = repmat(w, [1 N]) .* f(c(:,:,t));          % feed c thru func & bias
  p = fc ./ repmat(sum(fc,1), [n 1]);            % selection probabilities
  cum = cumsum(p,1);                             % cumulative probs
  x = rand(1,N);                                 % random iid in [0,1]
  a(:,t) = sum(repmat(x, [n 1]) > cum, 1) + 1;   % chosen notes
  c(:,:,t+1) = c(:,:,t) + 1;                     % increment counts
  c(sub2ind(size(c), a(:,t)', 1:N, (t+1)*ones(1,N))) = 0;  % reset chosen
end
figure; imagesc(a); colorbar; xlabel('t'); ylabel('run'); title('notes');
To compute autocorrelation of the element sequences a we use,

M = 100;                                             % max correlation time to explore
ma = mean(a(:)); zma = a - ma; ca = zeros(1,M+1); l = 1:(T-M);
for t=0:M, ca(t+1) = mean(mean(zma(:,l).*zma(:,l+t))); end
figure; plot(0:M, ca./ca(1), '+-'); xlabel('\tau'), ylabel('C_a(\tau)');
Finally, to generate an audio file output of realization number r we use,

dt = 0.125; fs = 44100; Tsong = 60;    % Tsong: song length (sec)
fnot = 440*2.^((0:n-1)/n);             % list of note freqs (Hz)
t = single(1:floor(fs*Tsong)-1)/fs;    % list of time ordinates
wavwrite(0.9*sin(2*pi*fnot(a(r,1+floor(t/dt))).*t)',fs,16,'out.wav');
