
Jaseau 1

Introduction
Gesture-based sensor technology is a relatively new field in electronic music performance. Alternative performance interfaces are largely based on recent and developing technologies such as serial communication and Open Sound Control (OSC). One can use microcontrollers (small computers on a single integrated circuit containing a processor core, memory, and programmable input/output peripherals) to control virtually anything, including music, robotics, 3D animation and, in recent popularity, wearable electronics. This advancement in physical computing makes it possible to translate or convert human energy into electrical energy.
In acoustic music, sound is produced by a physical action on the instrument, such as depressing the keys of a piano or plucking the strings of a violin. Today music can also be produced by alternative methods, such as tracking the movements of a body in space with a sensor or video camera, or measuring the force exerted by a human gesture on a tangible object. The purpose of such musically unusual activities is to generate data streams that create music and shape sound. Gestures that were once foreign to the musical production of sound are now being used in a musical fashion. There are now options beyond striking, hitting, plucking or bowing; we can implement other ways of triggering sonic events. As stated by Nick Collins, a computer music lecturer and author from Sussex, “with appropriate sensors, new digital music instruments can be caressed, squeezed, kissed, licked, danced, hummed or sung.”1 This is gestural control.
Gestural control can be executed with ready-mades such as the Wiimote and the iPod touch, or with handmade do-it-yourself (DIY) controllers. In this paper, the focus is on the DIY category, which requires an artist both to design and build the hardware and to design and program the software.
In electronic music performance, sensors can provide the user with an array of tangible objects to interact with, creating data streams that have a direct relationship (mandated by the programmer) with the applied human action. This conversion of energy

1 (Collins, & d'Escrivan, 2007)



from human action at the sensor to electrical charge creates options for finer resolution, and thus has potential for greater expression.
The process works as follows: human action is translated into electrical energy by the sensor, a step referred to as data acquisition. That data is then mapped to the appropriate characteristics of the synthesis algorithm, following what are called mapping strategies. This is where technical and creative decisions must be carefully considered to maintain a useful relationship. In regard to mapping, the sensor type must be considered. For example, a digital sensor would probably not be appropriate for amplitude adjustment, since it has no resolution beyond the Boolean “on” or “off”; it would be better suited as a switch or gate to select between a few data streams. Each sensor has characteristics, such as resolution, that must be weighed and deemed appropriate for various uses.
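The point about matching sensor resolution to the musical parameter can be sketched in code. This is a minimal, hypothetical illustration (the function names and the 10-bit range are assumptions, not from any specific system): a Boolean digital sensor works as a gate between data streams, while a continuous analog reading suits amplitude control.

```python
# Hypothetical sketch: matching sensor type to a musical parameter.
# A digital (Boolean) sensor yields only 0 or 1, so it suits gating;
# an analog sensor yields a continuous range suited to amplitude.

def gate_stream(switch_state, stream_a, stream_b):
    """Use a Boolean sensor reading to select between two data streams."""
    return stream_a if switch_state else stream_b

def to_amplitude(analog_value, max_value=1023):
    """Scale a raw analog reading (e.g. from a 10-bit ADC) to 0.0-1.0."""
    return analog_value / max_value

print(gate_stream(1, "lead synth", "pad"))  # the switch picks a stream
print(to_amplitude(512))                    # roughly half amplitude
```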
The advantage of working with analog sensors is that they usually have a greater resolution range (1024 values is common), and they help open the computer up to the outside world. A mouse or keyboard provides only a narrow window through which the computer can receive human action: one can either point and click, or hit a key. With the addition of these new gestural controllers, the window becomes wider. The computer can receive information about a human action through touch, action or effort applied to an analog sensor.
This alternative way of working with touch, action and effort allows the translation of physical energy into electrical energy, which then drives a synthesis algorithm based purely on human gesture. The computer becomes an interpreter and intermediary between the human body and the sound production. Sensor technology is like tapping into a source of pure energy, and thus provides an opportunity to shape and control music in new ways.
In this paper I examine the use of gestural control in live electronic music through case studies of three musical artists: Michel Waisvisz, Kevin Patton, and Jaime Oliver. The questions that I address are:
1) What is gesture, and why is gesture important in the real-time control and
performance of live electronic music?
2) Why is the selection of sensors important when utilizing gestures?

3) What are some of the problems and challenges of building a gestural controller?
4) How is the data acquired through the use of sensors utilized and mapped?
5) How does the selection of sensors influence the relationship between the
performer and the audience?
6) What type of gestural control classification does each artist implement and how
does this choice serve the musical work?

Classifications and Definitions


Gestural controllers fall into the broad category of Human Computer Interaction (HCI), where interaction takes place between a user and a computer. Digital Musical Instruments (DMI) are a subset of HCI, and gestural controllers, among others, are in turn a subset of DMI. Here we will look at the breakdown from broad to specific classifications in reference to gesture and gestural controllers.

Human Computer Interaction (HCI) is the study of systems involving the simultaneous control of multiple parameters, timing, rhythm and user training. Gestural control of computer-generated sound is a specialized class of HCI: many parameters are controlled, and the human operator has an overall view of what the system is doing and how it is operating. In HCI, the “human operator is totally in charge of the action.”2 Often, the user of such a system is the same person as its designer.

Digital Music Instruments (DMI) are instruments that contain a gestural control interface separate from a sound generation unit.3 A gestural controller is the input portion of the DMI, where physical interaction takes place and is then sent via mapping to a synthesis algorithm. The separate sound generation unit contains the synthesis algorithm and the relevant controls. An example is STEIMʼs LickMachine, used by Michel Waisvisz.

2 (Wanderley & Battier, 2000)


3 Ibid.

In recent history, there has been a surge of three main types of DMI controllers: augmented musical instruments or hybrid controllers, instrument-like gestural controllers, and alternate gestural controllers.4
Hybrid controllers are acoustic instruments that have been augmented with extra sensors. This category includes the meta-sax, the hyper-flute (IRCAM) and several others, such as Laurie Andersonʼs tape-bow violin.
An instrument-like controller is an input device that reproduces features of a previously existing acoustic instrument. Today we see MIDI controllers shaped like pianos, guitars, saxophones and drums. Instruments such as the Yamaha WX7 MIDI saxophone and the Akai EWI (Electronic Wind Instrument) are of this type.
Alternate controllers follow a design that is not based on an established acoustic instrument. Examples include Laetitia Sonamiʼs Ladyʼs Glove, built by Bert Bongers in 1991, the Wacom graphics tablet, Donald Buchlaʼs Lightning and Max Mathewsʼ Radio Baton.
The basic structure of an alternate or gestural controller includes the following five components:
1. an input device, such as a sensor, and a data acquisition subsystem to capture the
gestures of the performer
2. a mapping algorithm to translate gestural data into musical information
3. a real-time sound synthesis software or hardware where sound synthesis parameters
are modulated by live input
4. a compositional structure that defines the form or the musical progression of the piece
5. an output system that includes output channel configuration and digital-to-analog
converters
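The five components above can be pictured as one processing chain. The following is an illustrative sketch only, with hypothetical function names and made-up parameter values; real systems would run each stage in real time.

```python
# Sketch of the five-component structure as a simple pipeline (all names
# and numbers are hypothetical, for illustration only).

def acquire(raw_sensor_value):             # 1. input device / data acquisition
    return raw_sensor_value / 1023.0       # normalize a 10-bit reading

def map_gesture(normalized):               # 2. mapping algorithm
    return {"cutoff_hz": 200 + normalized * 4000}

def synthesize(params):                    # 3. real-time synthesis (stub)
    return f"tone with cutoff {params['cutoff_hz']:.0f} Hz"

def compose(section, sound):               # 4. compositional structure
    return f"[{section}] {sound}"

def output(signal, channels=2):            # 5. output system / DAC stage (stub)
    return f"{signal} -> {channels}-channel DAC"

# Walk one sensor reading through all five stages.
print(output(compose("A", synthesize(map_gesture(acquire(512))))))
```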

4 Ibid.

Gesture is a movement of a part of the body, such as the hand or head, to express an idea or meaning. Gesture is the main non-verbal communication channel and points to many aspects of movement in the different domains of communication: visual, auditory and haptic. To call a movement a genuine gesture, the movement must in some way be a carrier of expression and meaning.
In the context of musical performance, gestures are movements made by a performer to control the musical instrument and create music. Miranda and Wanderley5 categorize musical gestures into three categories. Performance gestures are actions produced by the instrumentalist during performance. Effective gestures are gestures that generate sound, the means of action on the physical world. Other gesture classifications include ancillary or accompanist gestures, which are included in a performance but do not produce sound, relating instead to the musical context. These include nodding to cue in a performer, a shoulder twist that expresses a musical line, or the general tapping of the foot while listening to music.
Musical gesture is the fundamental connection between movement and music, reflecting the expression of engagement with the music. Robert Hatten, author of Interpreting Musical Gestures, suggests that musical gesture is a “significant energetic shaping through time.”6 The study of musical gestures is reshaping our concepts of music and sound in general, because the cause-and-effect relationship between movement and sound has been pertinent throughout history.
This cause-and-effect relationship encodes physical gestures as sonic information that the audience decodes during a performance. The encode-decode relationship is a very important component of the performer-audience connection. Alternate controllers, together with physical gestures, provide opportunities to create meaningful music-based interactions that build a relationship between computer and user and a connection between performer and audience.

5 Ibid.
6 Hatten, R.S. (2004). Interpreting Musical Gestures, Topics, and Tropes (Godøy, R.I. (2010)) (page 18)

Gesture Acquisition and Mapping is the next step after the gestures and their characteristics, or types of motion, have been decided upon. The task is then to devise a system that will capture the characteristics of the gesture for use in an interactive system. According to Miranda and Wanderley there are three modes of acquisition: direct, indirect and physiological.
Direct acquisition is where one or more sensors are used to monitor the performerʼs actions. Usually a different sensor is needed to capture each gestural physicality. The sensor signals isolate individual physical features of a gesture: pressure, linear displacement, and acceleration. Direct acquisition is the most common approach in gestural controllers.
Indirect acquisition is where gestures are isolated from the structural properties of the sound produced by the instrument, usually on the basis of signal, physical or perceptual information. “Unlike direct acquisition, indirect acquisition provides information about performer actions from the evolution of structural properties of the sound being produced by the instrument.” A gesture may not have a direct musical consequence; the data acquired may instead be routed to other parameters. This method requires signal processing techniques to derive performer actions from analyses such as the frequency of the sound and its spectral envelope. Say I have a sensor that measures light, and an array of LEDs that turn on when an accelerometer passes a specified threshold. The light sensor will receive information about the change of state, and this data can either have a direct musical response or effect, or make adjustments to a musical parameter or control change. The latter option is indirect acquisition.
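The light-sensor example above can be sketched as follows. Everything here is hypothetical (the threshold, the ambient level, the 0-127 control range are all assumptions): the gesture trips the accelerometer, the LEDs change the light level, and the derived light reading adjusts a control parameter rather than triggering a note.

```python
# Sketch of the indirect-acquisition example (all values hypothetical):
# an accelerometer past a threshold lights LEDs; a light sensor reads
# the change of state and routes it to a control parameter.

THRESHOLD_G = 0.8  # assumed trigger level, in g

def leds_on(acceleration_g):
    """The LEDs light when the accelerometer passes the threshold."""
    return acceleration_g > THRESHOLD_G

def light_reading(leds_lit, ambient=0.2):
    """The light sensor sees ambient light plus the LEDs, if lit."""
    return ambient + (0.7 if leds_lit else 0.0)

def control_change(reading):
    """Indirect acquisition: the derived data adjusts a parameter (0-127)."""
    return min(127, int(reading * 127))

print(control_change(light_reading(leds_on(1.2))))  # gesture past threshold
print(control_change(light_reading(leds_on(0.1))))  # gesture below threshold
```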
Physiological acquisition is the analysis of physiological signals and is most common in bio-signal instruments such as the BioMuse by Atau Tanaka. Some commercial systems have been developed that measure muscle tension or track eye movements. These systems capture the essence of a movement, but make it difficult to extract the meaningful parts of a human gesture. Since brain-wave data is continuous, the emphasis of a specified gesture, such as a facial expression, cannot be identified or isolated in this type of acquisition, since there is no ʻlookingʼ per se.

Gestural Controllers are where human interaction is received as input and then sent via mapping to a synthesis algorithm. With gestural controllers being a subset of the DMI alternate classification, there are some additional characteristics to consider. Axel Mulder7 suggests three classifications for alternate controllers based on their characteristics.
Most alternate controllers that expand the gestural range require a performer to touch the controller surface in order to make sound. These are referred to as touch controllers. Their control surfaces can be either mobile or fixed in space. What is unique and interesting about this class of controllers is that it provides the user with a haptic representation.
The word haptic comes from a Greek term meaning “able to lay hold of,” and the haptic sense operates when a human feels things with the body or its extremities.8 The haptic system is thus an apparatus by which the individual gets information about both the environment and his or her body: the human feels an object relative to the body, and the body relative to an object. A human can feel what the object is, such as its location in space, texture and weight, or feel how the object indents the skin and pushes back against the body. In haptic representations, touch is a two-way street. Haptic controllers will be further explored later in this paper.
Expanded-range controllers use limited or no physical contact but have a limited range of effective gestures. Here the performer is able to escape from the control surface, that is, to make a physical move that has no musical consequence and produces no sound. A haptic representation may be reduced or absent due to the lesser physical contact.
Immersive controllers have few restrictions on the performerʼs movements and are best suited for adaptation to the specific gestural capabilities and needs of the performer. The user is always in the sensing field and thus cannot escape from the control surface: every move the performer makes has some kind of musical consequence.

7 (Miranda & Wanderley, 2006)


8 Gibson, J. (1966)

Sensor Technology
Bert Bongers, a well-known designer of alternate controllers, describes sensors as follows: sensors are the sense organs of a machine. Sensors convert physical energy (from the outside world) into electricity (into the machine world). There are sensors available for all known physical quantities.9
A sensor is akin to a transducer; both convert one type of energy into another. But whereas a transducer can only convert one type of energy to another, such as air pressure to voltage, or voltage to binary digits, a sensor captures the phenomena of action via electrical signals. A sensor responds to external stimuli (human action) by providing electrical signals to an internal system (a computer). A sensor can convert types of energy (wind, water, light) into electrical energy and can be used as an input device that produces a usable electrical output in response to a specific physical quantity.
Several scholars in physical computing and electronic music research agree on a set of necessary specifications for sensors: sensitivity, stability and repeatability, in addition to linearity and selectivity of the sensor output and ambient conditions. Garrett10 adds that accuracy, error, precision, resolution, span, and range must be considered when designing a new gestural controller.
Precision matters because measurement systems in sensors are characterized as either static or dynamic. Static describes the behavior of a measurement system under slowly varying input signals and includes attributes such as accuracy, error, resolution, linearity, reproducibility, and sensitivity. Dynamic describes the behavior of a measurement system under variable input signals and includes dynamic error and speed of response. The type of action and the type of sensor, dynamic or static, have a synergistic relationship based upon action and response.
Human actions can be sorted into areas such as force, position, velocity and acceleration. It is important to sort sensors according to the type of body action they can sense, such as movement or pressure. A proper sensor must be chosen to

9 Bongers, B (2000)
10 P.H. Garrett, author of Advanced Instrumentation and Computer I/O Design

suit the musical intentions and the gesture. This matching of sensor to gesture is important for the aural-visual relationship between gesture and sound that communicates meaning and intention between performer and audience. The action of the gesture, the sensor receiving the action, and the musical result produced from the sensor by that action must all work in tandem, creating the complex system by which an alternative controller produces music.
There is a variety of sensors available for musical use. The sensors mentioned here are those relevant to the case studies of the musical artists and the most common choices in the field. I am not covering the broad array of digital sensors, such as tilt switches or push buttons, but their uses will be explained later in the case studies. There are many more types of sensors, ranging from relatively simple to highly complex, and this technology is growing.
Sensor Selection:
Force-sensitive resistor (FSR): a tactile sensor that is non-linear, meaning that the data can skip values. FSRs are easy to use and widely available, but rather fragile. The measurements of human action are non-linear and qualitative due to drifting resistance.
Flex (bend): a tactile sensor whose resistance increases when it is bent. It can be easily attached to the hand or body, but breaks easily with excessive bending.
Infrared (IR): works with light at frequencies below the visible red. An IR sensor responds on the order of nanoseconds (versus milliseconds for other sensor technologies). IR is used either in reflection mode (emitter and receiver in the same device) or in direct measurement mode (a pair of IR devices). The advantages of IR are its relative simplicity and low cost; disadvantages include its sensitivity to visible light, the need for a direct line of sight, and low resolution.
Accelerometer: a device that measures the vibration or acceleration of a motion or force. The force may be static, such as the constant force of gravity, or dynamic, responding to an applied action or motion from a user. Accelerometers measure the amount of static acceleration due to gravity, and so provide information about the angle at which the device is tilted with respect to the ground. An accelerometer can work in

a few different ways. One is the piezoelectric effect, in which the microscopic crystal structure inside the accelerometer is stressed by accelerative forces and generates a voltage. Another is to sense changes in capacitance: if two structures sit next to each other and one changes position, the change in capacitance can be converted into a voltage with the right circuitry. There are both analog and digital accelerometers; the appropriate selection depends on the hardware one is interfacing with. Analog accelerometers output a continuous voltage proportional to acceleration. Digital accelerometers usually use pulse-width modulation (PWM) for their output: there will be a square wave of a certain frequency, and the amount of time the voltage is high will be proportional to the amount of acceleration.
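The PWM relationship can be made concrete with a small worked sketch. The scale factors here are assumptions for illustration (a 50% duty cycle taken as 0 g, a full scale of ±3 g), not the specification of any particular part.

```python
# Hypothetical decoding of a digital accelerometer's PWM output:
# the fraction of the period the voltage is high (the duty cycle) is
# taken as proportional to acceleration. Scale factors are assumed.

def pwm_to_acceleration(high_time_us, period_us, full_scale_g=3.0):
    """Map duty cycle to acceleration; 50% duty is assumed to mean 0 g."""
    duty = high_time_us / period_us
    return (duty - 0.5) * 2 * full_scale_g

print(pwm_to_acceleration(5000, 10000))  # 50% duty -> 0.0 g
print(pwm_to_acceleration(7500, 10000))  # 75% duty -> 1.5 g
```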

[Images: force-sensitive resistor, flex resistor, infrared sensor pair, 3-axis accelerometer]
Sensor Types:
direct vs. indirect: a direct sensor converts energy from an input directly into electrical signals, whereas an indirect sensor works like a chain of transducers, passing the input stimulus through other forms of energy before arriving at electrical signals.
passive vs. active: a passive sensor, also referred to as a self-generating sensor, provides its own power, deriving energy from the act of being measured. These sensors have two wires: ground and signal. A piezoelectric sensor is an example, because it converts mechanical vibrations into an electrical signal without any additional support. Active sensors need energy from external sources in order to operate, and commonly have three wires: power, ground and signal.
absolute vs. relative: an absolute sensor detects a stimulus in reference to an absolute physical scale that is independent of measurement conditions. Relative sensors produce a signal that relates to some special case.

contact vs. no contact: a contact sensor needs to be in physical contact with a source of energy for it to be converted into electrical signals. A no-contact sensor does not need to be in contact with a human in order to produce data.
analog vs. digital: analog sensors provide a continuous electrical signal with some resolution greater than 2, commonly 1024 values with 5 V applied. A digital sensor works in discrete steps, in binary: off or on (0, 1).
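The figure of 1024 values follows directly from a 10-bit analog-to-digital converter; a short worked example makes the resolution concrete. The 10-bit, 5 V figures match the common case described above; other converters would change the numbers.

```python
# Worked example of analog resolution: a 10-bit ADC with 5 V applied
# divides the range into 2^10 = 1024 discrete values (0-1023).

BITS = 10
VOLTS = 5.0
steps = 2 ** BITS           # number of distinct values
step_size = VOLTS / steps   # volts per step

print(steps)                        # 1024
print(round(step_size * 1000, 3))   # ~4.883 mV per step
```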

Mapping can be considered the liaison strategy between the outputs of a gestural controller and the inputs of the synthesis algorithm. A mapping strategy that consists of a single layer means that a change of gestural controller or synthesis algorithm requires an entirely different mapping; the instrument is then limited to one composition and unavailable for others. A way to get a greater variety of works out of a gestural controller is a mapping strategy with two independent layers, allowing more from the synthesis algorithm: first, the mapping of the control variables to intermediate parameters, and second, the mapping of intermediate parameters to synthesis variables. A goal in designing a gestural controller would be for the instrument itself, the hardware configuration, to act as one component, and for the sound-making portion, the software programming, to act as another. That way one is not limited to a single piece of music, but could play many pieces with the same instrument simply by changing part of the mapping algorithm. It would be silly to see an instrumentalist perform only one musical work on a given instrument, so why should gestural controllers be any different?
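The two-layer idea can be sketched in code. The parameter names below are invented for illustration: layer 1 turns raw control variables into instrument-neutral intermediate parameters, and only layer 2 is swapped to retarget the same controller at a different synthesis algorithm or piece.

```python
# Sketch of a two-layer mapping strategy (hypothetical parameter names).
# Layer 1: control variables -> intermediate parameters.
# Layer 2: intermediate parameters -> synthesis variables.

def layer1(raw):
    """Normalize raw controller values into instrument-neutral parameters."""
    return {"brightness": raw["bend_amount"] / 255,
            "energy": raw["pressure"] / 1023}

def layer2_fm(inter):
    """Map the same intermediates onto FM synthesis variables."""
    return {"mod_index": inter["brightness"] * 10,
            "carrier_amp": inter["energy"]}

def layer2_subtractive(inter):
    """Swap only this layer to drive a subtractive synth instead."""
    return {"cutoff_hz": 200 + inter["brightness"] * 8000,
            "gain": inter["energy"]}

raw = {"bend_amount": 128, "pressure": 512}
print(layer2_fm(layer1(raw)))           # one piece
print(layer2_subtractive(layer1(raw)))  # another piece, same controller
```

The design point is that layer 1 belongs to the hardware configuration and never changes, while layer 2 changes per piece.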
The process of data acquisition entails measuring physical phenomena to obtain electrical signals suitable as input to control a computer or music system. Simply stated, data acquisition is the gathering of information about a body movement or force.
Once the gesture has been defined, the hardware built, the software programmed, and the mapping strategies explored, it is finally time to hear and explore what the gestural controller can do. Each of the artists that follow uses his instrument in new ways that do not fall into just one category, but instead challenge the categories previously mentioned.

People in Performance
Michel Waisvisz - The Hands

Figure 1: Michel with the first version of The Hands

Michel Waisvisz was a well-known developer and virtuoso of live systems, including The Hands, a performance device made up of a variety of sensors mounted on a pair of ergonomically shaped aluminum plates strapped to the performerʼs hands. The combination of many different sensors captures the movement of the hands, the fingers, and the arms. This concept was unique at the time and made The Hands one of the most refined and musical MIDI controllers in electronic music history.
The Hands were built in collaboration with artist and programmer Bert Bongers and were greatly influenced by Waisviszʼs timbral conceptions. Waisvisz described himself as a “composer of timbres,”11 focusing his work on the creation of electronic musical instruments as part of the compositional process.
“The way a sound is created and controlled has such an influence on its musical
character that one can say that the method of translating the performerʼs gesture
into sound is part of the compositional method. Composing a piece implies
building special instruments to perform it as well.”12

11 Krefeld, V. (1990)
12 Ibid.

Waisvisz began his work at STEIM in Amsterdam in 1973 with advances in what is now known as “circuit bending.”13 He developed the CrackleBox (or kraakdoos, see Figure 2), which uses touch to close a circuit and thus creates unrestrained sounds that are uncontrollable and unpredictable, emphasizing spontaneity. I have seen these boxes used in a live performance of Mark Applebaumʼs game composition 5:3 for 8 cracklebox players and 2 dice rollers (2005). The sounds were almost human-like, and responded to the level of energy (heat or sweat) in the performersʼ bodies. Watching the dress rehearsal and the performance, I noticed significantly more sonic activity during the performance than in the rehearsal. I attribute that increase to the physical chemistry of the human performers interacting with the object: added nerves equate to added heat, and this phenomenon lends itself to an increase in wild sonic activity. This is probably what Applebaum had in mind.

Figure 2: kraakdoos

Waisviszʼs personal philosophy includes the notions of Effort and Touch. Effort is an important consideration because the exertion of physical energy shows the audience your connection to, and focus on, what you are doing. A performer must exert some energy, thus relating the audienceʼs perception of action to its aural perception. In Waisviszʼs view, this means opening up to fear in order to find pleasure.
“Effort is something abstract in a computer but a human performer radiates the
psychological and physical effort through every move. the creation of an electronic
instrument shouldnʼt just be the quest for ergonomic efficiency...[easier, faster, and
more logical doesnʼt improve the musical quality of the instrument]....Iʼm afraid itʼs
true one has to suffer a bit while playing; the physical effort you make is what is
perceived by listeners as the cause and manifestation of the musical tension of the
work” (Krefeld, 29)

13 Short-circuiting of electronic devices, usually low-voltage or battery-powered devices such as childrenʼs toys, to create new musical instruments or sound generators.

As a “pioneer of touch,” Waisvisz became a creator of personal, physical musical instruments requiring effort. Touch, in the context of gestural controllers, allows the performer to move toward the control of sound synthesis with human gesture and away from the ancillary motions associated with the keyboard.
“It is the touch - closing a circuit with your body - that makes the music; the conductor of electricity and the conductor of the musical experience are one. One can then touch the electricity inside the instrument and thereby become a thinking component of the machine.”14
As a founder of “Physical Philosophy,” a science in which axioms (accepted or general truths) are replaced by physical objects such as handmade instruments, Waisvisz strongly believed in appropriating tools and instruments through modification or complete custom builds. The motto? “If you donʼt open it, you donʼt own it.” This is now an important philosophy at STEIM:15 music makers are encouraged to play an important role in the design and building of authentic live electronic performance instruments.
The Hands was the first real alternative instrument to contribute a new model for gestural control. The instrument could generate, control and play back MIDI information for use in an electronic music performance. The Hands were not built upon a previously existing acoustic instrument and thus established a new design for gestural controllers.

Versions
The Hands (see Figure 3) were originally created in 1984 and recreated in 1989. The first version was used in concert for the first time in 1984 at the Concertgebouw in Amsterdam, three months after the release of MIDI. At that concert, The Hands remotely controlled three Yamaha DX7s programmed with “special, very responsive sounds.”16 Johan den Biggelaar developed the first version and prototype of The Hands in combination with the SensorLab, the computer that The Hands utilize. Wim Rijnsburger was also involved in the software design of the early SensorLab.

14 Dykstra-Erickson, E & Arnowitz,J (2005)


15 STudio for Electro Instrumental Music located in the Netherlands
16 Michel Waisviszʼs personal account, written 2006: http://www.crackle.org/TheHands.htm

The SensorLab is a small computer, worn on the performerʼs back, that translates sensor data into MIDI codes. The device is programmable, so that for each work a unique relationship between the performerʼs gestures and the musical output can be defined.17 The Hands have been rebuilt and reprogrammed many times, linking to a plethora of MIDI instruments such as the Yamaha DX7 and Emu samplers, and to STEIM software products such as the Lick Machine and LiSa.

Figure 3: The Hands, 1984-1989

Figure 4: a closer look


The Hands II (see Figure 5) was built in collaboration with Bert Bongers. Improvements included a single wooden frame as the main body for attaching the various sensors, plus better components and a more reliable wiring system (see Figure 6). The software also underwent a transformation and was rewritten completely

17 www.crackle.org, accessed May 8, 2010



using the Spider programming system.18 In order to use The Hands as a conducting instrument, new software was developed to manipulate strings of MIDI events, allowing the artist to use MIDI controllers to conduct an electronic orchestra through controllable sequences. At STEIM, this became the Lick Machine.
The Lick Machine uses a performance setup in which a user can trigger and drive a group of pre-recorded sequences called licks. With one press of a MIDI button, a complex series of musical events can be triggered. Other MIDI events may control the performance parameters of that lick, such as tempo, velocity, starting point, length, transposition, note density within the lick, pitch and time deviation. The Lick Machine contains a sequencer in which licks can be recorded and edited. After recording a lick, it can be assigned to a triggering key. While playing, the performance parameters of that lick can be controlled by other MIDI events defined in the ʻkey infoʼ window.

Figure 5: The Hands II, 1989-2000

Figure 6: Closer look at Bongers improvements


Specifications

18 A programming language by Tom Demeyer in which mapping algorithms were deployed on the SensorLab hardware.

    The Hands were designed to include distance, speed, and tilt sensing along with a
set of multi-function switches and potentiometers. The system consists of three parts: the
two hands and an analog-to-MIDI converter. Each hand has three rows of four keys that
control note on/off within one octave. Four mercury tilt sensors are aligned on the
cardinal directions, defining a conical space. These tilt sensors, in combination with
movement, provide eight combinations that are mapped to an eight-octave range.
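To make the octave logic concrete, here is a minimal, hypothetical sketch of how four cardinal tilt switches could select among eight octaves. The combination table, key layout, and base note below are assumptions for illustration, not Waisvisz's actual firmware logic.

```python
# Each mercury switch reads True when the hand tilts past it (N, E, S, W).
# The eight recognized combinations each select one octave.
TILT_COMBOS = [
    (False, False, False, False),  # level
    (True,  False, False, False),  # tilted north
    (False, True,  False, False),  # tilted east
    (False, False, True,  False),  # tilted south
    (False, False, False, True),   # tilted west
    (True,  True,  False, False),  # north-east
    (False, True,  True,  False),  # south-east
    (False, False, True,  True),   # south-west
]

def octave_from_tilt(n, e, s, w):
    """Return an octave index 0-7 for a recognized tilt combination."""
    try:
        return TILT_COMBOS.index((n, e, s, w))
    except ValueError:
        return 0  # unrecognized combination: fall back to the base octave

def key_to_midi_note(key_index, n, e, s, w, base_note=24):
    """Offset a one-octave key (0-11) into the octave selected by tilt."""
    return base_note + 12 * octave_from_tilt(n, e, s, w) + key_index
```

The point of the sketch is the hierarchy: the twelve keys only ever address one octave, and tilt alone decides which octave that is.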
    The left hand contains a sonar transmitter that is pointed at a sonar receiver
on the right hand. The distance between the hands is measured by comparing the
arrival delay of ultrasonic pulses between the hands to the original pulse time. This data is
assigned to key velocity, mapped to a separate oscillator in an FM sound generation
environment.19 The left hand also carries three buttons that select among three MIDI
channels (1, 2, 3, or all three); these channels can be accessed by both hands. The
right hand contains press buttons for program choice: one button steps up, the other
steps down. The thumb has access to a wheel potentiometer that controls pitch bend
and also allows access to chords that could not otherwise be played, since the keys
sound only within the single octave determined by the cardinal position of the tilt
sensors. In addition, a push button turns the "scratch" function on and off, allowing for
the "bowing" of sounds (Waisvisz's personal favorite component of The Hands). The
sonar transducers, pitch bend wheel, resistors, and diodes are all placed on the bottom
side of the instrument.
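The pulse-delay distance measurement and its mapping to key velocity can be sketched roughly as follows. The speed of sound, maximum range, and linear scaling curve are illustrative assumptions; only the one-way ultrasonic delay principle comes from the description above.

```python
SPEED_OF_SOUND_M_PER_S = 343.0  # in air at roughly 20 degrees Celsius

def hands_distance_m(pulse_delay_s):
    """One-way distance from the left-hand transmitter to the
    right-hand receiver, derived from the measured pulse delay."""
    return SPEED_OF_SOUND_M_PER_S * pulse_delay_s

def delay_to_velocity(pulse_delay_s, max_distance_m=1.5):
    """Scale the inter-hand distance into a MIDI velocity (1-127).
    Wider hands -> higher velocity; the linear curve is an assumption."""
    d = min(hands_distance_m(pulse_delay_s), max_distance_m)
    return max(1, round(127 * d / max_distance_m))
```

Because the transmitter and receiver sit on opposite hands, the delay corresponds to a single traversal rather than an echo, so no halving of the path is needed.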
    The software was written in 6502 assembly language on an Apple IIe under the
ProDOS operating system and stored on EPROM. The program runs an endless loop that
scans the key matrix and mercury switches, reads the pitch bend information, measures
the sonar distance, and maps this data through tables to MIDI command sequences. This
purpose-built software routes information from The Hands through a pattern of
conditions that is usable during a live performance.
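In pseudocode-style Python, that endless loop might look like the sketch below. The function names, the state dictionary, and the table-lookup interface are hypothetical stand-ins for the 6502 routines, not the actual firmware.

```python
def scan_once(read_keys, read_tilt, read_pitch_bend, read_sonar, lookup):
    """One pass of the polling loop: gather sensor state, then map it
    through a lookup table to a list of MIDI command messages."""
    state = {
        "keys": read_keys(),        # 3x4 key matrix per hand
        "tilt": read_tilt(),        # mercury switch states
        "bend": read_pitch_bend(),  # thumb wheel potentiometer
        "sonar": read_sonar(),      # inter-hand distance
    }
    return lookup(state)

def run(scan, send_midi, running):
    """The endless loop: scan, map, transmit, repeat."""
    while running():
        for message in scan():
            send_midi(message)
```

The firmware version of this loop never terminates; `running` is included here only so the sketch can be exercised in isolation.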
    A Control Signal Processor (CSP) increases the quantity of information obtained
from The Hands. Simple finger movements are mapped to a network of relationships, such
as wave or data tables that are read out by pointers with manual scan patterns. When a

19 Waisvisz (1)

key is pressed, it starts a series of pointer movements through a sequence of
vectors. These vectors are mapped to control inputs that together equate to a sound
sequence determined by the algorithm of a pointer path and the reading speed of the
pointers. This is an example of "indirect" mapping: the pointer outputs become inputs for
the live reprogramming of the pointer scanning algorithm.
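A toy version of such indirect mapping, in which each value read out also retunes the pointer's own scan speed, might look like this. The table contents and the feedback rule are invented for illustration, not Waisvisz's actual CSP logic.

```python
def pointer_scan(table, steps, start=0, step=1):
    """Walk `table` for `steps` readings, letting each value read
    nudge the scan speed (the output reprograms the scanner)."""
    outputs = []
    i, s = start, step
    for _ in range(steps):
        value = table[i % len(table)]
        outputs.append(value)
        # feedback: a loud value speeds the scan up, a quiet one slows it
        s = max(1, s + (1 if value > 0.5 else -1))
        i += s
    return outputs
```

Even this tiny rule already produces non-obvious read-out orders, which is the musical appeal of letting outputs feed back into the scanning algorithm.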
    Waisvisz utilizes four main performance algorithms: GoWild, GoOn, T.E. (That's
Enough), and Effort. The GoWild algorithm mutilates the original control signal only if a
threshold is passed due to heightened and/or continuous action. This allows the
extension of effort and activity to trigger the application of randomness. Again, this plays
into Waisvisz's notion of Effort and the idea that performing should be in some way
difficult and cause the performer to exert energy. The GoOn algorithm does not present
any changes; it only asks four questions: What did I do? What do I want? Was it good?
What do I do next?20 According to the answers to these questions, the algorithm
moves on to another pattern of conditions. The T.E. algorithm decides at algorithmically
random moments that things have become boring and stops the execution of the current
state. This algorithm is particularly sensitive during the GoOn algorithm, responding to
the performer and inducing real-time interactivity between device and performer,
surprising the performer while still maintaining the program structure.
This interaction provides spontaneity and freshness to the composition, or to the
real-time creation of the composition. The Effort algorithm produces artificial "friction"
that can be mechanical (such as slow-reacting buttons, heavy hands, or the need for
wider arm movements) or compositional (such as a sudden change of control data from
the CSP, or the generation of new data derived from the same device).
“As listeners we are interested in spiritual effort made by the composer/performer.
These spiritual efforts can be seen only through the physical efforts the composer/
performer makes during the performance. High controller-ergonomics provide
effortless composer/performer actions and uninteresting music is the result. There
is a musical need for artificial friction in the concept of new controller design.”21

20 This relates to Waisvisz's self-built Oracle program. Due to space and topic constraints, this topic will not be
expounded upon in this paper.
21 Waisvisz (5)

    Michel Waisvisz offered a new kind of alternate controller that has influenced
artists such as Edwin van der Heide of Sensorband.22 Van der Heide plays the
MIDIconductor, a pair of hand-held controller machines that send and receive ultrasound
signals, measuring the hands' rotational positions and relative distance. This is very
similar to Waisvisz's ultrasonic transmitter/receiver pair; although the two instruments are
uniquely different, they share qualities such as distance measurement and buttons on the
fingerboard. Waisvisz offered original ways of controlling synthesizers with gesture in a
way that deeply incorporates touch and effort. This legacy lives on and takes on new
forms in regard to touch and gestural control.

Kevin Patton - Ambidextron


    Kevin Patton, originally from Houston, Texas, is a doctoral candidate at Brown
University, where he received his Master of Arts degree. He also earned a Master of
Music degree at the University of North Texas prior to Brown. As a guitarist, his first
explorations in music were in punk rock and jazz; he discovered electronic music later in
his career while studying at UNT.
    Patton felt inspired to touch and shape sound with his hands, and so built a
device enabling him to manually manipulate parts of sound synthesis. The Ambidextron
Double Glove System is a handmade sensor-based glove pair used for controlling
electronic sound with physical gesture in live performance. The gloves 'read' the physical
gestures of a performer, and those gestures "literally generate music."23 His gloves are
part of the next generation of hand-like controllers, such as the Lady's Glove, below, by
Laetitia Sonami.

22 http://www.ataut.net/site/Sensorband accessed June 9, 2010


23 www.lajunkielovegun.com accessed May 19, 2010

    The Ambidextron is an immersive controller that primarily uses two accelerometers
(one for each hand) and six flex (or bend) sensitive resistors sewn on top of the index
fingers, middle fingers, and thumbs of the glove pair. The bend sensors are used to
modify some of the sounds with real-time effects such as reverb decay and various
filters. "Some of the difficulties in building this immersive controller were learning how to
sew, and the actual programming of the micro-controller," said Patton in an email
interview. The sewing was challenging because the flex sensors need room to flex but
also need to be attached securely to the gloves. Another large challenge was the
methodological rigor required when building one's own gestural controller. The process
includes thresholding the accelerometers and learning how to play those
thresholds.
The key lies in gaining a physical sense of how the mapping responds. To go from
sensing the outside world to generating sound requires a series of choices, some
of which are pure engineering decisions and others creative.
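A common way to "play a threshold" is hysteresis: a trigger fires when the reading crosses an upper bound and re-arms only after the reading falls below a lower one, so a shaky hand does not retrigger. The sketch below is a generic illustration with made-up band values, not Patton's actual micro-controller code.

```python
class Threshold:
    """Fire a discrete trigger on each upward crossing of a threshold,
    with a hysteresis band to suppress jitter around the boundary."""

    def __init__(self, on=0.6, off=0.4):
        self.on, self.off = on, off  # upper and lower bounds of the band
        self.active = False

    def update(self, reading):
        """Return True exactly once per upward crossing."""
        if not self.active and reading >= self.on:
            self.active = True
            return True
        if self.active and reading <= self.off:
            self.active = False  # re-arm for the next gesture
        return False
```

Tuning `on` and `off` against one's own arm movements is, in effect, the "learning how to play those thresholds" described above.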
    Again we see the theme of linking physical gestures to the cause-and-effect
relationship of sound. In Patton's world, this translates to playing with ideas about
source-bonding (the natural tendency to relate sounds to supposed sources and causes,
and to relate sounds to each other because they appear to have shared or associated
origins)24 and the relation between the motion required to make a sound and the sound
itself. He reflects that some of this source-bonding belief can be created at the beginning
of a piece, and at other times the user can make physical correlations to help the process
of mapping, for example by mapping physical velocity to an increase in volume. He
mentions that one of the liberating things about this new practice of gestural control is
that one can also attempt to frustrate the expectations of the audience or musician.
    Patton composed Creaking the Air25 for the Ambidextron. The piece is about
achieving a trance-like state through physical motion, similar to the practices of Sufi
music. I asked him how he felt about the idea that this piece can, at present, only be
performed by him, and how that notion influences his concept of designing a new
controller instrument. His response:

24 Ears: Electro-acoustic Research Site http://www.ears.dmu.ac.uk accessed June 7 2010


25 http://www.youtube.com/watch?v=zS4rLkNkrl4&feature=player_embedded accessed May 19 2010

The positing of this question presupposes a value in the classical notion of
durability and repeatability. These are definitely not my values. Each performance
is unique to itself, no matter the means of delivery to a performer. Music is
essentially ephemeral and music that acknowledges this and embraces it, for me,
functions in a more interesting and honest way. Even if a specific piece of electronic
music is not specifically repeatable in the same way as a Brahms string quartet,
the basic activity of creating new, interesting, sound-worlds from the materials at
hand –no matter the era – will continue as a necessary human activity. At any rate,
recording technologies certainly can function as archives.

    The notion and act of touch is a main theme in the creation and exploration of
sensor-based technology because gestural controllers require touch. "When I say touch I
am thinking about continuous controllers that one can access (through) muscle memory
and control." Touch is contact with a surface, and thought of this way, touching happens
at a much greater degree of resolution than a button press. Pressing a button may
technically be touch, as with a mouse, but a mouse is fixed to a surface and pressing its
button lacks luster. The mouse functions as a trigger and does not respond to changes in
the quality or duration of the touch. Operating a mouse requires no finesse, and finesse
is needed in order to convey greater meaning through sound. Although there have been
some virtuosic performers who truly know how to bring out the essence of a mouse in
performance, most uses of a mouse on stage leave nothing for the audience to relate to;
for all the audience knows, the performer could be checking email! Continuous
controllers can be programmed to act as buttons, thus requiring some amount of finesse
and 'body English' to influence the feeling that leads up to crossing a threshold. Gestural
controllers that are not fixed "allow a greater amount of subtle body motion to influence
the music creation experience in real time. This is definitely what I am after."
    Technology leads in many directions, and I asked Patton where he thinks
gesture-based electronic music performance will lead. His response:
To be honest I have no idea. It could be to enhance the gaming experience or turn
concerts into group performances. It could also remain an obscure research topic
in academic music….

Jaime Oliver - Silent Construction 1


    Jaime Oliver is a doctoral candidate at the University of California, San Diego,
where he also received his Master's degree under the direction of Dr. F. Richard Moore.
He was born in Lima, Perú and received his undergraduate degree at the Universidad
Peruana de Ciencias. In his studies, he has developed the Silent Construction series, its
main motivation derived from his dreams in which electronic devices saw gestures and
converted them into sounds.

Figure 7: The Silent Drum Controller


    Silent Construction 1 is a piece and device exploring hand gestures, developed in
2006 by Jaime Oliver in collaboration with percussionist Matthew Jenkins at CRCA.26
The Silent Drum Controller (see Figure 7) was initially conceived to be hit with mallets
within a percussion setup, but was later developed for hand technique, allowing for
greater subtlety in the live morphing of hand gestures. Human gestures transform
pre-recorded percussion sounds, creating a fully embodied live electronic sound world
and providing the audience with a large amount of gestural information. Oliver is greatly
interested in the haptic feedback of a system.

26 The Center for Research in Computing and the Arts, an organized research unit of the University of
California at San Diego.

    A haptic system concerns an individual's sensibility to the world adjacent to the
body, experienced through the use of the body. Oliver uses this idea to compose both his
instruments and his music. Haptics is the science of applying tactile sensation to human
interaction with computers. A haptic device is one that involves physical contact between
the computer and the user, usually through an input/output device that senses the body's
movements. By using a haptic device, the user can both feed information into the
computer and receive information from the computer in the form of a felt sensation.
    Upon researching this device and composition, I recognized that silence, in
addition to haptic response, is an essential component of his work in electronic music. I
asked him why silence was important to him and why he came to explore silence in
electronic music. He supplied two responses: (1) "Any muscle we move (except the vocal
system) is essentially silent; we can't move our bodies fast enough to make sounds,
that's why we use instruments. The silent part in silent drum points to that: what the
camera sees are silent gestures," and (2) "I didn't want an interface that made sounds
since it would interfere with the ones the computer made… a practical reason." In
contrast to other interfaces (glove-like ones in particular), this device enables one to
physically detach completely from the object to produce silence. The use of long
processed sounds (for example, drones) leaves very little room for silence. "A practical
difficulty in incorporating silence in electronic music performance is that in
loudspeaker-only music, silence signifies the performance is over, even though this may
not be the case."
    I inquired about the process of making the device: what came first, the device or
the ideal sound? He responded that he makes his instruments in a dialectic process, in
the nature of logical argumentation: "The interface suggests sounds and the sounds
suggest improvements for the interface." He mentions that programming provides a field
of opportunities and that through exploration one can begin to make changes. It is clear
he is invested in sounds that continuously change, not only in one aspect of music but in
many.

“The compositional process is not strictly the design and organization of sounds,
but is, or ought to be, a composition of environments or sound spaces that are
brought to a concrete morphology by performance gestures.”

    The Silent Drum Controller, with an elastic drumhead working as a force-resistant
haptic boundary for the hand, allows Oliver to analyze a complex system and obtain
multiple interdependent variables, which are then mapped onto multiple variables of
sound. In regard to the control variables, the Silent Drum Controller produces a large
number of variables only if there is a complex enough input. Its design reflects a simple
but effective principle: if there is no hand, there are no fingers; that is, it follows a
hierarchical logic. Sound events are bounded by discrete variables, which are used for
score control, triggers, mapping changes, etc. Continuous variables are used to shape
sound morphologies.27
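The hierarchical split between discrete events and continuous shaping variables could be sketched like this. The frame format and field names are assumptions, standing in for Oliver's actual video analysis.

```python
def analyze(frame):
    """Split a (hypothetical) analysis frame into discrete events, used
    for score control and triggers, and continuous variables used to
    shape sound. Hierarchical logic: no hand, no fingers."""
    discrete, continuous = [], {}
    hand = frame.get("hand")  # deepest deformation of the drumhead
    if hand is None:
        return ["OUT"], continuous  # nothing else can exist without a hand
    discrete.append("IN")
    continuous["depth"] = hand["depth"]
    continuous["x"] = hand["x"]
    # finger-level detail only exists once a hand does
    for i, finger in enumerate(frame.get("fingers", [])):
        continuous[f"finger{i}_depth"] = finger["depth"]
    return discrete, continuous
```

The hierarchy is what makes the "complex enough input" requirement concrete: richer hand shapes expose more continuous variables for the mapping layer to use.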
    The sound material used for this composition consists of a set of pre-recorded
percussion sounds: a tam-tam, a wind gong, a Chinese opera gong, vibraphone bars, a
set of Thai gongs, and a Bangkok wooden frog rasp. These samples undergo several
processes, including playback operations, amplitude modulation, frequency and pitch
shifting, time stretching, FFT timbre-stamping, filtering, and additive synthesis, creating a
preset morphology for Oliver to transform in real-time performance.

Figure 8: Oliver manipulating the haptic environment

27 For a thorough and well-documented demo, see http://www.youtube.com/watch?v=2kLVqgUMGSU
(accessed May 26, 2010).

    In Silent Construction 1, Oliver uses human gestures to transform those
pre-recorded, morphed percussion sounds. He takes concrete percussion samples that
already contain a morphology and transforms the sounds using a phase vocoder28 to
explore their temporal morphology. Additionally, he may send some samples into a
feedback network or use FFT vocoding, exploring the audio spectrum in time and
frequency. These processes inspire a lot of transformation, "...but the important thing is
that you start to feel that you are in a way touching the sounds, molding them."
    The camera analyzes a video stream that "sees" the manipulation of the haptic
environment (the black cloth). What is most important is that multiple analysis
parameters control multiple sound parameters. Because so many parameters operate at
once, conscious control is not possible, so the correspondence between device and
human is learned by practice and discovery rather than by intellectual choice.
    His device is unique in that the sensor that acquires data is separate from the
tactile object that receives touch: the camera acts as the sense organ of the computer.
Through the computer's vision, it can see and report any change of placement within the
Silent Drum.
    As for the synthesis algorithms in the composition, there are many. Phase
vocoders are used for the temporal space, and FFT filters for the vertical space. A
"complex enough input" is necessary here: if the computer only sees one finger pressing,
the result is only a couple of parameters, whereas if Oliver uses his hands more freely
and in more ways than one, more complicated relationships result from the overlapping
algorithms.
    In the performance of this piece,29 there is no clear operation of recalling a preset,
yet the sound environment drastically changes its general range of sounds through
different sections of the piece. There are a few presets; he presses the spacebar on his

28 A type of vocoder that can scale both the frequency and time domains of audio signals using phase
information. The algorithm allows frequency-domain modifications to a digital sound file, usually
time expansion/compression and pitch shifting.
http://www.panix.com/~jens/pvoc-dolson.par (accessed May 1, 2010)

29 http://www.youtube.com/watch?v=LTytHbZG0p8 (accessed April 27, 2010);
http://www.jaimeoliver.pe/workobra/silent-construction-series/sc1

keyboard with his foot only two or three times in the piece. Most of the time, he says, he
changes mappings by entering or leaving the analyzed image: "When nothing is pressing
the drum and something enters the frame I get something like IN, and when I take my
hand out and nothing is seen pressing the drum I get OUT." He uses these events to
progress through the score. He can also enter and exit from different points on the
surface, which determines different outcomes as well. This mapping strategy lends itself
easily to the instrument doing the composing: the instrument can compose because its
algorithm makes decisions based on the amount of input received from physical gestures
manipulating the haptic environment. When composing this piece, his main goal was to
create a 'composed instrument'.
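The IN/OUT score progression described here can be modeled as a tiny state machine. The section names and the advance-on-OUT rule below are hypothetical simplifications, not Oliver's actual Pd patch.

```python
class Score:
    """Step through named score sections on IN/OUT events from the
    camera analysis: leaving the frame advances to the next section."""

    def __init__(self, sections):
        self.sections = sections
        self.index = 0
        self.inside = False  # is something currently pressing the drum?

    def event(self, name):
        """Process an IN or OUT event; return the current section."""
        if name == "IN":
            self.inside = True
        elif name == "OUT" and self.inside:
            self.inside = False
            self.index = min(self.index + 1, len(self.sections) - 1)
        return self.sections[self.index]
```

Guarding the advance on `self.inside` means spurious OUT events (noise in the video analysis while no hand is present) cannot skip sections.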

Figure 9: The inner working of the Silent Drum

    "The Silent Construction Series are a process of discovery; composing is actually
determining what the instrument is..." There is a Silent Construction 2, entitled the MANO
controller; so far the plans for it are the same as for the Silent Drum: both use video
tracking to follow hand gestures.

    As for specific technical questions, the programming of the analysis algorithms
was done by Oliver in C, as externals to GEM30 and Pure Data (Pd). Oliver's approach is
to obtain a network of parameters that provide significant information: "It is inevitable that
the compositions require me to modify parts of the analysis. The sound part is
completely open. I change and add things all the time. Composing and programming are
inevitably linked…"
    I asked him: if we consider that art-making includes a creator and a viewer, how
does the Silent Construction series affect electronic music or the layperson?

Well, I think that seeing gestures and hearing sounds are things we have trained
for all of our lives. Seeing a performer on stage is helpful to understand how the
sounds are being created and you donʼt need to be an expert to find immediately
after seeing a gesture how the sound is being created. In the same sense that
one doesnʼt need to be a physicist to understand how a violin works.
Seeing someone make the sounds with their bodies makes it a familiar setting.
The performer uses his hands, which we all have!

    Reactions from the public after the performance of Silent Construction 1 included
curiosity about the technology. Few people noticed that a webcam was doing the job
until they inquired and saw the computer screen, which surprised most of them. Other
than that, most people focused on the gestures and their interaction with the sound as a
totality. I asked him what he hoped people would get out of observing him perform the
Silent Construction series. His response:

To have an aesthetic experience and to break the concept of a musical instrument
as a fixed timbre with varying pitch. To achieve all the possibilities of electronic
sound creation with an extremely complex control system: our hands…

    The domains of sound-shapes and hand-shapes closely resemble each other in
that both have several parameters that combine to provide a matched system and
exponential options for shared morphology between the hands and the sounds. Hands
and sounds do share a resemblance and are a well-matched pair for shaping and
molding the non-tangible entity that is sound.

30 Open-source software used here for motion tracking.



Conclusion
    Musical communication is fundamentally driven by movement; therefore, careful
consideration of the role of gesture must go into the performance of new gestural
controllers. Gestural controllers provide the opportunity to devise new ways of driving
music with movement. Sound arises from movement because it is produced by the
motion of air particles; now the movement of the body, too, can be a fundamental
component of sound generation.
    The study of musical gestures is reshaping our concepts of music and sound as
new controllers are invented and used in the live performance of electronic music. The
line of cause and effect is starting to resurface as the connection between gesture and
sound is reevaluated as an important and integral component of the performance of
electronic music.
    All three of these artists find gestural significance in touching, shaping, and
controlling sound, which may become a common standard for the next generation of
electronic music performers.
    The use of gestural controllers creates a new kind of music that arises from the
full form of the body, as gesture becomes a vehicle for new meaning and new ways of
expression. This is an astounding advancement over the limited use of a mouse or
keyboard. We can now tap into the pure energy of electronic manipulation by using our
bodies as a medium to communicate with a computer.
    Gestural control provides an opportunity to build relationships between the
performer and the computer, and between the performer and the audience. I expect that
electronic music performance will become increasingly innovative and exciting as more
alternate controllers are built utilizing gesture. As more artists from different disciplines
collaborate, works will be created that will most likely include multi-media performances
where light controls sound, sound controls video, video controls sound, and sound
controls light, maintaining a synergistic relationship. This web of interactivity provides a
greater medium for electronic artists to develop unique worlds of interaction and
responsiveness that connect the performers to the audience and create greater meaning
for the work and for developments in the electronic arts.