ABSTRACT
This paper focuses on the need to design a template for an
A.I. system that analyzes computer music and simulates
the audience's emotional reaction. Such a system may
aid computer music composers in creating pieces
that appeal to wider audiences. The conceptual
design of the proposed system is based on recent
findings in the understanding of how the human brain
processes musical patterns.
1. INTRODUCTION
Music, unlike language, is a fuzzy concept. If you hear
the words "Puntius tetrazona" and are unaware that they
refer to a kind of tropical fish, your brain will not respond
at all, making a bivalent decision. On the other hand,
essentially all music evokes at least some meaning in the
brain.
Even music that we have difficulty
comprehending, such as Inuit singing or highly
original computer music, will sound like something to
the Western-trained ear-mind. Listening to this
kind of music amounts to processing a fuzzy concept with a high
noise capacity, that is, a high degree of fuzzy entropy. Thus,
in order to increase human interest in any kind of
information, one has to decrease its noise capacity. Let us
see why this is particularly important to computer
music composers.
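The paper does not formalize its notion of "fuzzy entropy." One standard candidate, offered here only as a sketch, is the De Luca and Termini measure, under the assumption that a listener assigns each musical event x_i a membership degree mu_A(x_i) in a fuzzy concept A (e.g., "recognizable pattern"):

```latex
H(A) = -K \sum_{i=1}^{n} \Big[ \mu_A(x_i)\,\ln \mu_A(x_i)
       + \big(1-\mu_A(x_i)\big)\,\ln \big(1-\mu_A(x_i)\big) \Big]
```

H(A) is maximal when every membership degree equals 1/2 (the listener cannot decide whether events belong to the concept) and zero when all memberships are crisp, which matches the intuition above: decreasing noise capacity means pushing events away from maximal ambiguity.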
2. CREATIVITY AND MUSIC
It is logical to assume that humans intelligently create
musical pieces rather than producing them as random (high
noise capacity) innovations. Yet a look at the state of
affairs in computer music shows more examples of
innovation than of creation. Let me clarify the distinction
between the two. The quality of computer music, and for that
matter the quality of anything, does not depend exclusively
on its structural organization; it is rather rooted in the
transaction that occurs between the music and the
audience. To evaluate music is to assess the quality of the
transaction between the musical configuration and its
cultural response. If that response is positive, meaning
In strictly neural terms, the synapses that are kept are the
ones that are active, and the connections that are not used are
eliminated. [LeDoux 2002]
Since we know that humans constantly judge by
comparison, and our judgment of any item depends upon
what we are comparing it to at that moment, let us be wise
and use this knowledge in composing music. If a
pattern recurs throughout a musical
composition, it will create a memory trace in the brain, and that
trace will become a "point of reference" to which future
transformations of the same pattern may be compared.
If humans are able to compare, then they will in turn
also be able to evaluate. If this evaluation process keeps
going, it probably means there is a growing interest
in what is going on. This still does not mean
that a musical composition containing recurring patterns
and their transformations is a guaranteed
recipe for successful music. It simply
means that without a point of reference, which may
be just about anything previously digested and
recognizable to the ear-minds of the audience, there is a
significantly lower chance that the resulting musical
piece will catch on and make people react to it
with positive feelings. How successfully one plays
with musical patterns will, in the end, always remain
a matter of human musical talent.
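The "point of reference" idea can be made concrete: a recurring pattern is detectable as a repeated n-gram of pitch intervals. The sketch below is an illustration only, not part of the proposed system; the function names and the toy melody are hypothetical, and intervals (rather than absolute pitches) are used so that a motif restated in another key still counts as a recurrence.

```python
from collections import Counter

def interval_ngrams(pitches, n=3):
    """Count all windows of n consecutive pitch intervals in a melody.

    Using intervals instead of absolute pitches makes the count
    transposition-invariant: a motif repeated in another key
    still produces the same interval pattern.
    """
    intervals = [b - a for a, b in zip(pitches, pitches[1:])]
    return Counter(tuple(intervals[i:i + n])
                   for i in range(len(intervals) - n + 1))

def reference_points(pitches, n=3, min_count=2):
    """Return interval patterns recurring at least min_count times,
    i.e., candidate "points of reference" for the listener."""
    return {pattern: count
            for pattern, count in interval_ngrams(pitches, n).items()
            if count >= min_count}

# Toy melody (MIDI note numbers): a four-note motif stated twice.
melody = [60, 62, 64, 65, 60, 62, 64, 65]
print(reference_points(melody))  # {(2, 2, 1): 2}
```

A melody of pure random pitches would typically yield an empty result here, which mirrors the argument above: without a recurring pattern there is nothing for the listener's comparisons to anchor on.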
Research by Aniruddh D. Patel of The Neurosciences
Institute in San Diego shows that interactions between
human brain regions are greatest when subjects listen
to melody-like (patterned) structures rather than when
listening to random structures.
Ongoing timing patterns in the neural signal showed a
strong dependency on the structure of the tone sequence. At
certain sensor locations, timing patterns covaried with the
pitch contour of the tone sequences, with increasingly
accurate tracking as sequences became more predictable. In
contrast, interactions between brain regions (as measured by
temporal synchronization), particularly between left posterior
regions and the rest of the brain, were greatest for the tone
sequences with melody-like statistics. This may reflect the
perceptual integration of local and global pitch patterns in
melody-like sequences. [Patel 2000]
See illustration attached at the end of the paper.
Figure 1. (a-d): Example of stimulus and brain response over 4 seconds: (a) Tone frequencies; (b)
Stimulus waveform; (c) Waveform detail (150 msec), showing constant modulating frequency (41.5
Hz, blue line) overlaid on changing carrier frequency; (d) Neural signal from one sensor. (e)
Successive 2-second spectra of neural signal. The brain signal shows an energy peak at 41.5 Hz,
whose phase (inset arrow) varies with carrier frequency. (f-i): Phase-tracking of individual tone
sequences. Pitch-time contours (thin lines) illustrate the four different stimulus categories.
Associated neuromagnetic phase-time series (thick lines) from a single sensor during one trial in
one subject were randomly drawn from the top 10% of sensor correlation values for each stimulus.
The correlation between the resampled pitch-time series and the neuromagnetic phase-time series is
given in the inset to each graph.