
The Meaning of Music

Ciarán Everitt

May 2016

In partial fulfillment of the requirements for the Diploma in


Logotherapy and Existential Analysis

Viktor Frankl Institute of Ireland


The power of music lies in its ability to speak to all aspects of the human being: the animal, the emotional, the intellectual, and the spiritual.

Everything is Connected: The Power of Music

- Daniel Barenboim

CONTENTS

Acknowledgements

Abstract

Dimensional Ontology

Chapter 1. Music and the Soma

Chapter 2. Music and the Psyche

Chapter 3. Music and the Nous

Conclusion

Bibliography

ACKNOWLEDGEMENTS

I would like to thank my lecturer and advisor Dr. Stephen J. Costello for his expert guidance,
full support, and encouragement in the writing of this thesis. I would also like to thank my
family and friends for their support.

ABSTRACT

The purpose of this research is to examine the complex interaction music has with the human
person using Frankl's dimensional ontology: soma, psyche, nous. The interaction of music
with the soma will include an investigation into the myth of Dionysus as described by
Nietzsche, the human brain, human physiology, stress, genetics, and children. The second
chapter will investigate the interaction of music with the human psyche, in which we will
examine the myth of Apollo according to Nietzsche, and the psychological processes of
perception, language, and emotion. Lastly, we will discuss music's interaction with the noetic
dimension by examining the transcendental nature of music, how meaning can be found in
music, music and spirit, and lastly the application of music to the technique of dereflection. We
conclude by stating that music has the power to entrain the soma, manipulate the emotions, and
transcend the individual. I argue for the use of music in Logotherapy as Music Logotherapy:
the use of music as a supplement to Logotherapy to help with clinical and personal issues.

DIMENSIONAL ONTOLOGY

According to Frankl (2004c) in his book On the Theory and Therapy of Mental Disorders,
humans are integrated beings with three dimensions: the somatic (body), the psyche (mental),
and the noetic (spiritual). Frankl (2004c) refers to this as his dimensional ontology. This thesis
will discuss music and its relation to each of the three dimensions.
The first chapter will discuss the first dimension, the soma or body, the biology of the
person. In this chapter, we will first discuss Friedrich Nietzsche's description of Dionysus in
his book The Birth of Tragedy to set the tone for the rest of the chapter, because Dionysus is
the ambassador for the soma. After Dionysus, we will focus primarily on the anatomy of the
soma, such as the human brain, physiology, stress, genetics, and children, and their interaction
with music, drawing on scientific research. These subjects give us an idea of how music
interacts with and manipulates the human body.
In the second chapter we will examine the second dimension of Frankl's (2004c)
dimensional ontology, the psyche. The psyche, according to Frankl (2004c), is the mental level of
psychological processes. The psychological processes we will investigate are perception,
language, and emotion, in order to draw out the salient features of music in its relationship to the psyche.
This chapter will focus on Friedrich Nietzsche's description of Apollo from his book The Birth
of Tragedy to set the tone for the rest of the chapter, as Apollo symbolises the psyche. We will
examine the perception of the aesthetics of music according to Roger Scruton, the relationship
between language and music, and the relationship between emotions and music. These
elements give us a deeper understanding of how music interacts with the human psyche.
The third and final dimension of Frankl's (2004c) dimensional ontology is called the
noetic (spiritual) dimension or the noological dimension. The noetic is often referred to as the
spiritual, but the term "spiritual" has religious connotations in English whereas in German it
doesn't. The noological dimension is the anthropological rather than the theological dimension
(Frankl, 1988). The noetic dimension bears a similarity to the psychic dimension but is
distinguished from it in several ways:

1. Freedom and responsibility exist only in this dimension. We are determined at the
somatic and psychological levels, but Logotherapy recognises that we have the ability
to take a stand toward our fate and the things that can determine us at any given time.
2. Conscience operates at this level and is the organ for perceiving meaning.
3. This dimension is exclusively human; it distinguishes humans from animals.
4. A person as a spiritual being cannot become ill. Only the somatic and the psychological
dimensions can become ill.
5. This dimension interacts with the somatic and psychological dimensions. A lack of
meaning in one's life can contribute to the development of a neurosis. The operation of
this dimension can be affected by a biologically caused mental disorder such as bipolar
disorder, which can leave a person unable to express themselves fully and unable to
perceive values rightly.

Attributes associated with this dimension are responsibility, authenticity, love, creativity,
choices, values, will to meaning, ideas, and ideals. This chapter will investigate the
transcendental nature of music, meaning in music, music and spirit, and the application of
music to the Logotherapeutic technique of dereflection. By examining the transcendental
nature of music we will discuss music in society and how we surrender to the power of music
and transcend not only ourselves but our psyche. In meaning and music we will be looking
briefly at culture and how music is used as a tool to heal (music therapy), also looking at how
meaning can be actualised according to Frankls (1973) values (creative, attitudinal,
experiential). In music and spirit we will be examining the spiritual aspects of music and its use
in religion. We will lastly be investigating how music can be applied to the Logotherapeutic
technique of dereflection which is orientating a persons attention away from hyper-reflection.

Chapter 1

Music and the Soma

And those who were seen dancing were thought to be insane by those who could not hear the music.

- Friedrich Nietzsche

Introduction

We will now begin by discussing Dionysus and how he pertains to the somatic dimension.

Dionysus and Music

The Greeks ascribed certain psychological traits to their gods. Each god is a projection of a part
of the human psyche. Nietzsche discusses a dichotomy of man in his book The Birth of
Tragedy, in which he examines the Greek gods Apollo and Dionysus. Dionysus represents the
soma and Apollo represents the psyche. We will discuss Apollo in greater detail in the next
chapter. Nietzsche brought these two psychological dimensions of art together. Who is
Dionysus, and why does he represent the dimension of the soma?
Dionysus is known as the god of drunkenness and ecstasy and is also known as the god
of wine and fertility. Dionysus represents the primitive, instinctual part of man. Dionysus
symbolises "untamed drunken frenzy that destroys forms and the ultimate abandonment we
sometimes sense in music" (Kaufmann, 1974, p. 128). Dionysus opposes all that is Apollonian
and represents untamed and wild creativity. We could distinguish the Dionysian interest in
physiological pursuits from the Apollonian interest in psychological pursuits. Dionysus pursues
the pleasures of the body. This pursuit of pleasure is no less an escape from a "yearning
lamentation for an irretrievable loss" (Nietzsche, 1995, p. 6). The Dionysian part of man comes
out most when we are engulfed in an existential crisis. We seek out hedonistic activities like
alcohol, sex and drugs to ease the boredom we feel. We do this in many ways, such as going to
nightclubs and losing ourselves to the music played there. Dionysus has a connection to a kind
of music known as dithyrambic music.
The dithyramb is an ancient Greek hymn sung and danced in honour of Dionysus. The
dithyramb is wild and ecstatic. It is not possible to attribute dithyrambic music to any genre of
music, as it is simply how music is expressed: a wild build-up of emotions in a cathartic
display. Nietzsche opines, "It is under the influence of narcotic draught, which we hear of in the
songs of all primitive men and peoples [. . .] that these Dionysian emotions awake, which, as
they intensify, cause the subjective to vanish into complete forgetfulness" (Nietzsche, 1995, pp.
3-4). Dionysus is associated with alcohol, and with alcohol comes a unity of people in song.
Nietzsche calls this the "Primordial Unity". Nietzsche poetically describes the Primordial
Unity below:
In song and in dance man expresses himself as a member of a higher
community; he has forgotten how to walk and speak; he is about to
take a dancing flight into the air. His very gestures bespeak enchantment.
[. . .] He is no longer an artist, he has become a work of art: in these
paroxysms of intoxication the artistic power of all nature reveals itself to
the highest gratification of the Primordial Unity (Nietzsche, 1995, p. 4).

Dithyrambic music seems to come from feeling and emotion. It is not intellectualised but is
felt in the body. The Dionysian musician feels the music; it comes from deep inside him. The
very essence of Dionysian music according to Nietzsche is "the emotional power of the tone,
the uniform flow of the melos, and the utterly incomparable world of harmony" (1995, p. 7).
Nietzsche discusses the Primordial Unity of Dionysus, which is the unity of people through
their collective pain and can be found in such places as concerts, raves, festivals, nightclubs,
bars, etc. It is in these venues that one can lose oneself in the common experience and
transcend individual suffering, at least for a short time. There is an appearance of joy that is
contradicted by the primordial pain writhing beneath. The individual disappears only to
be taken over by the music. The Dionysian musician according to Nietzsche is pure "primordial
pain and its primordial re-echoing" (1995, p. 15). The Dionysian musician is not a person but a
work of art that has surrendered his subjectivity to nature. Nature is expressed symbolically
through the body (Nietzsche, 1995).
When the individual disappears he essentially becomes a slave to his body, craves
physiological satisfaction, and engages in what Nietzsche calls "sexual licentiousness" (1995, p. 6).
The body is a tool for its own end, and this sexual licentiousness is, as Jung points out, "the
libido in instinctive form, [which] takes possession of the individual as though he himself were an
object and uses him as a tool or an expression of itself" (1991, p. 139). Man has become a
work of art by Nature. Man becomes ruled by his body. If man is ruled by his body then he
seeks out pleasure to satiate the lust that has beholden him. This idea would fit very well with
Freud, as Freud believed that the goal of man was to release tension from himself. Freud himself
was unmusical, though the visual arts and sculpture were very important to him, and the arts and
literature were, according to Freud, a sublimation of unsatisfied libido (Storr, 1997). "If this were
fully true then in an ideal world in which everyone reached full sexual maturity, the arts would
have no place" (Storr, 1997, p. 151). Freud's focus here is that the goal of man is to reduce sexual
tension within himself. This is a reductionistic view of man. Freud mistakes a by-product of
existence for the goal. It is only partly true that some music is made from an unsatisfied libido;
at most it is one motivating factor. There is a parallel between Dionysus and Freud's id, and
between Apollo and the ego/superego.
The duality of Apollo and Dionysus has roots in physiology according to Paglia
(1991), who theorises that this separation is between the higher cortex and the older limbic and
reptilian brains. This is interesting as it shows the connection between our physiology and our
experiences in the world. We will now review the scientific research on music and the brain.

The Human Brain and Music

When it comes to music there is no specific part of the brain that corresponds with musical
experiences. Many parts of the brain are activated and work in conjunction with each other
when we experience music. When the music we listen to is emotionally charged, the brain
processes it in two ways, via "slower" and "faster" routes (Theorell, 2014). The faster route
(direct pathway) operates when an emotional stimulus, such as music, is presented: it is relayed
to the thalamus, then on to the amygdala (and other parts of the emotional brain), and the
amygdala forms an emotional response. The reason it is so fast is that it works on incomplete
information which has not yet reached the cognitive brain. The slow route (indirect pathway)
runs from the sound stimulus to the thalamus, then on to the sensory cortex, and then to the
amygdala, where an emotional response is formed. The amygdala is responsible for the perception
of emotions and is a primitive part of our brain which is also found in most animals. The cortex
of the brain thoroughly analyses the music, helping to form a response. These two processes
happen at the same time, though there is a time lag in the indirect pathway; this can be
beneficial when we are faced with danger, as the faster/direct pathway can save us from
possible death. The faster route (direct pathway) is a direct instinctual reaction while the
slower route (indirect pathway) is processed thoroughly by analytical thought, from which a
response is formed (see Fig. 1 below). If we look at this process using Apollo and Dionysus as
described by Nietzsche in his aforementioned book, we will be quick to realise that the
fast/direct route is, as stated, instinctual, which is reminiscent of Dionysus, and the
slower/indirect route is a cognitive response, which is reminiscent of Apollo. We have thus
described the physiological roots of the myths of Apollo and Dionysus as postulated by Paglia
(1991).
The hippocampus, a part of the brain associated with memory, is also involved in the
processing of music. The hippocampus only becomes activated when we listen to music that
we are familiar with, music with which we have memory associations (Levitin, 2006; Schneck
& Berger, 2006). The primary responsibility of the hippocampus is to tag the audio input,
giving it spatial and temporal significance, and the hippocampus is able to alter the route "from
amygdala-centred neural networks associated with emotional fear responses, to
hippocampus-centred networks associated with more rational, cognitive responses" (Schneck
& Berger, 2006, p. 133).

Fig 1: Direct (fast route) and indirect (slow route) pathways to the amygdala
Source: www.oecd.org/edu/ceri/aprimeronemotionsandlearning.htm

The oldest part of the human brain, the cerebellum or "reptilian brain" as it is often
referred to, is crucial to musical timing. The cerebellum's main functions include the control of
voluntary movement, posture, balance, speech, coordination, and musical timing (Levitin, 2006).
Paglia (1991) has related Dionysus to this part of the brain, and for good reason: Dionysus is
associated with dance. The timing of the music gives rise to dance, as we cannot dance without
timing; the cerebellum aids in keeping a consistent rate of timing. Research into the cerebellum and
music found that it is activated when music is present but not with noise. It appears that the
cerebellum is involved with tracking the beat of the music, which is not present in noise
(Levitin, 2006). It was long understood that the main function of the cerebellum was balance, but the
work of Jeremy Schmahmann has shown that the cerebellum contains "massive connections to
emotional centers of the brain: the amygdala, which is involved in remembering emotional
events, and the frontal lobe, the part of the brain involved in planning and impulse control"
(Levitin, 2006, p. 175). This research is important in understanding the connection the
cerebellum has with emotion; essentially, what is the connection between movement and
emotion? Levitin (2006) posits that this connection is evolutionary: if we hear a sound we are
startled and assess whether it is a threat. For example, a rat sleeping in a hole in the
ground hears a loud noise, which could be a falling branch or a predator, and is startled. If the
intensity and frequency of the sound change, this signals that the environment has changed and the
rat needs to notice the change (see Fig. 1 above). But could the connection between movement and
emotion be a reason why we dance? It's a complex question and not easily answerable, as of
course the elements of music such as rhythm come into play, but then again, haven't we
all seen someone bounce around the dance floor with no rhythm, absorbed by the music?
Then again, this bouncing around is itself rhythm, albeit an irregular rhythm.
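To make the idea of tracking the beat a little more concrete, the short sketch below is a toy illustration only (the sample rate, tempo, and noise level are invented, and this is not a model of the cerebellum): it shows how a regular pulse can be recovered from a rhythmic signal by autocorrelation, whereas unstructured noise produces no comparable peak.

# A minimal sketch: recovering a regular beat from a toy onset envelope by
# autocorrelation. All numbers are illustrative.
import numpy as np

fs = 100                                  # envelope sample rate in Hz
tempo_bpm = 120
beat_period = 60.0 / tempo_bpm            # 0.5 s between beats at 120 bpm

n = fs * 10                               # 10 seconds of signal
envelope = np.zeros(n)
envelope[::int(fs * beat_period)] = 1.0   # a click on every beat
envelope += 0.05 * np.random.rand(n)      # a little background noise

# Autocorrelate and pick the strongest peak at a plausible beat lag
# (0.3-1.0 s, i.e. roughly 60-200 bpm).
ac = np.correlate(envelope, envelope, mode="full")[n - 1:]
lags = np.arange(n) / fs
mask = (lags > 0.3) & (lags < 1.0)
best_lag = lags[mask][np.argmax(ac[mask])]
print(f"estimated tempo: {60.0 / best_lag:.0f} bpm")  # ~120 for this periodic signal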
The process of the brain when we listen to music is a complex one that involves various
parts of the brain including the newer frontal lobes and the older parts of the brain such as the
cerebellum. The processes literally move from the front of the brain to the back of the brain.
Music is related to certain functions of the brain such as memory: as we listen to music it reminds us of
certain times in our lives and we can become nostalgic. Levitin gives an account of the process
that happens in the brain when we listen to music which I have reprinted below:
Listening to music caused a cascade of brain regions to become activated in a particular
order: first, auditory cortex for initial processing of the components of the sound. Then
the frontal regions, such as BA44 and BA47, that we had previously identified as being
involved in processing musical structure and expectations. Finally, a network of regions
(the mesolimbic system) involved in arousal, pleasure, and the transmission of opioids
and the production of dopamine, culminating in activation in the nucleus accumbens. And
the cerebellum and the basal ganglia were active throughout, presumably supporting the
processing of rhythm and meter. The rewarding and reinforcing aspects of listening to
music seem, then, to be mediated by increasing dopamine levels in the nucleus
accumbens, and by the cerebellum's contribution to regulating emotion through its
connections to the frontal lobe and the limbic system (Levitin, 2006, p. 191).

It is important to note that both hemispheres play a role in music: the left side has more to do
with cognition while the right side is more important to integrated, total interpretation
(Theorell, 2014; McGilchrist, 2009). If we look at the hemispheres we see that the left
hemisphere has a lot to do with rhythm as research has shown that patients with damage to the
left hemisphere can lose the ability to produce and perceive rhythm but can perceive meter,
while patients with damage to the right hemisphere have shown the opposite. This not only
shows that the left and right hemispheres play different roles in music but also shows that meter
extraction and rhythm are very different when it comes to brain processes. Also, lesions to the
right temporal lobe affect the perception of melodies more so than lesions to the left temporal
lobe (Levitin, 2006). If we look at the differences in the hemispheres we see that the left
hemisphere is responsible for words, logic, analysis, linearity and sequences, while the right
hemisphere is responsible for spatial awareness, imagination, day dreaming, holistic awareness
and dimension. Music is often associated with the right hemisphere but research has found that
professional musicians develop analytical processes in the left hemisphere while amateurs
process music in their right hemispheres. The average passive listener of music is more right
brained and the professional musician is more left brained (Jun, 2011). The reason for the
difference is the learning of music theory (left hemisphere) and how it is applied to playing
music (right hemisphere). Listening to music is an holistic experience while composing it is
more analytical. Iain McGilchrist states that the left hemisphere yields "clarity and the power to
manipulate things that are known, fixed, static, isolated, decontextualised, explicit, general in
nature but ultimately lifeless". The right hemisphere in contrast yields "a world of individual,
changing, evolving, interconnected, implicit, incarnate, living beings in the context of the lived
world" (The RSA, 2011). These are essentially perspectives on how we approach music.

Music has quite a complex effect on brain processes and thus, in turn, on physiology. We
will now review research on how music interacts with physiology.

Physiology and Music

There are moments when we listen to music and it's the right music at the right time; our
bodies respond by giving us chills. We know then that the music has affected us in some way.
Panksepp (1995), and Blood and Zatorre (2001), have researched this. Panksepp (1995) found
that sad and depressing music is the most common type of music to trigger the reaction of
chills, and it may be because of evolution that we experience chills. Could it be because of
threatening situations, or because we feel vulnerable and sad? The answers are just not known. Blood
and Zatorre (2001) played 10 students a piece that they knew would give rise to chills and
found, using PET scanning, that during the chills there was increased blood flow in the parts
of the brain associated with feeling good and reward. They also found that there is a parallel
between music that gives us chills and the reactions in the brain during eating enjoyable food
and during sexual activity. Research has found that the sacculus (a part of the inner ear's
vestibular system) responds to low-frequency sounds above 90 dB (Todd & Cody, 2000; Todd,
2001). The sacculus is connected to the vestibular nerve, which then stimulates a part of the
brain known as the hypothalamus, which is associated with hunger, sex, and hedonistic
experiences. So listening to loud, bass-filled music produces pleasure in the individual much
like the pleasures of food and sex. Of course there is a threshold of pleasure, which can
easily become painful if the music becomes too loud. Loud music synchronises the brains of
listeners and promotes group cohesion, and group cohesion is very important in our
evolutionary heritage (Blesser, n.d.). Loudness is pleasurable up to a certain point and can
easily cross into the threshold of pain. If we look at modern pop music that is prevalent in
festivals, raves, and clubs, with its loud, bassy sounds, it seems that it is tailored specifically for
pleasure. The research seems to conform to Nietzsche's description of Dionysus. To reiterate,
Dionysus is the god of music and ecstasy; he represents pleasure untamed. So it
seems that Nietzsche was on point with Dionysus.
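As a brief aside on what a figure like 90 dB means: sound pressure level is a logarithmic scale, calculated as 20 times the base-10 logarithm of the ratio between the measured pressure and a reference pressure of 20 micropascals (roughly the threshold of hearing). The small sketch below illustrates the conversion; the pressure value chosen for the 90 dB line is simply one that yields that level, not a measurement.

# Converting sound pressure to decibels (dB SPL). The reference pressure p0 is
# the standard 20 micropascals; the example pressures are illustrative.
import math

p0 = 20e-6  # reference pressure in pascals

def spl_db(pressure_pa):
    """Sound pressure level in dB SPL for an RMS pressure given in pascals."""
    return 20 * math.log10(pressure_pa / p0)

print(spl_db(20e-6))   # 0 dB: the threshold-of-hearing reference itself
print(spl_db(0.632))   # ~90 dB: the level above which the sacculus is said to respond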
Theorell (2014) noted that the physiological reactions to music were sometimes
inconsistent in some people. It is assumed that stimulating music increases heart rate while
relaxing music decreases the heart rate. What he found was that in some people there was no
decrease in heart rate while listening to relaxing music. He concluded that highly educated
people were good at picking out stimulating music but quite unskilled at picking relaxing
music. Is this because of our fast-paced lifestyles and the abundance of fast-paced music in the
mainstream?
Music is often used for stimulation and vitalisation more so than for relaxation. Some people
may have a hard time picking relaxing music for various reasons. Jabr suggests that a reason for
this may be that "one should also consider the memories, emotions and associations that
different songs evoke. For some people, the extent to which they identify with the singer's
emotional state and viewpoint determines how motivated they feel. And in some cases, the
rhythms of the underlying melody may not be as important as the cadence of the lyrics" (2013,
para. 4). Some research described by Jabr (2013) even found that some people have an innate
preference for rhythms of 120 bpm: when asked to tap their fingers or walk, most people
unconsciously settle into a rhythm of 120 bpm. Is it innate, or is it because most songs between
1960 and 1990 had a tempo of 120 bpm? Well, our bodies operate according to circadian
rhythms of upswings and downswings. Mannes (2011) notes that some researchers believe that
how we react emotionally and physically to music depends on the particular time of day. It
looks like the circadian rhythm plays a part in how we react to music; we might not be as
relaxed listening to relaxing music during one of our up times compared to one of our down
times. Our bodies and music synchronise together. We entrain to a rhythmic beat, and as this
entrainment occurs, rhythmic cycles synchronise. Mannes (2011) gives a great example of two
clock pendulums swinging at different rates: when they are put together they eventually swing
at the same rate. This is called entrainment and it is what our bodies do when we listen to
music. We can see entrainment at work when we tap our feet and hands in time with a song.
This entrainment can have an effect on our physiology by increasing or decreasing the stress
levels of an individual, which in turn can affect our health.
How does our physiology interact with music under stress?
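The pendulum picture of entrainment can be made concrete with a toy pair of coupled oscillators (a Kuramoto-style model; the frequencies and coupling strength below are invented for illustration and are not physiological): two rhythms with slightly different natural tempos settle into a fixed phase relationship once they are allowed to influence one another.

# A minimal sketch of entrainment: two weakly coupled oscillators with slightly
# different natural frequencies pull into a common rhythm. Numbers are illustrative.
import numpy as np

freq = np.array([2.0, 2.2]) * 2 * np.pi   # natural frequencies in rad/s (detuned)
coupling = 0.8                            # coupling strength
phase = np.array([0.0, 1.5])              # arbitrary starting phases
dt = 0.001

for _ in range(20000):                    # simulate 20 seconds
    d0 = freq[0] + coupling * np.sin(phase[1] - phase[0])
    d1 = freq[1] + coupling * np.sin(phase[0] - phase[1])
    phase += np.array([d0, d1]) * dt      # each oscillator is nudged toward the other

# The phase difference settles to a constant value: the two rhythms have entrained.
print("final phase difference:", (phase[1] - phase[0]) % (2 * np.pi))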

Stress and Music

Our bodies respond to music in various ways. Music can vitalise us when it is the right music at
the right time. This leads to increased heart rate and concentration of stress hormones such as
cortisol. Music can also activate the release of endorphins (the body's morphine), raise blood
pressure, help the body form clots, and increase the activity of some parts of the immune
system. When relaxing music is played, a lowered heart rate and blood pressure have been
observed (Theorell, 2014). This shows that music can act as both a stimulating and a relaxing agent. A
study by Khalfa et al. (2003) found that the concentration of cortisol (stress hormone) in saliva
decreased after a stress test in the laboratory when the subjects listened to music compared to
when they did not. In another study, conducted by the Free University of Berlin, healthy
individuals were exposed to three types of music and their blood cortisol levels were
examined. The music that was used was a waltz by Johann Strauss, which has a
regular rhythm, a song by H.W. Henze which has a very irregular rhythm, and finally a
meditative piece by Ravi Shankar which has no rhythm. What the researchers found was that
the levels of cortisol only decreased when the subjects listened to the Ravi Shankar piece. We
could assume that lack of rhythm may have had an influence in the decrease of cortisol but it is
hard to say if rhythm was a decisive factor. What we can gather from the research is that music
influences the hormones in the body (Mannes, 2011). Music's influence on cortisol levels is
also noted by Miluk-Kolasa et al. (1994), who found that music helped decrease cortisol levels
in patients who had received news that they were to undergo surgery. Music can also have the
opposite effect and raise cortisol levels in healthy individuals. Mannes (2011) describes a study
in which fast-paced music, slow sedative music, and no music were tested on trained and
untrained runners; the fast-paced music raised cortisol levels in the untrained runners. So fast-
paced music is helpful in the training of athletes. Other research (Jabr, 2013) found that music's
role in fitness is key to motivation and endurance as it keeps us awash in strong emotions.
These strong emotions, if we identify with the singer's emotions or perspective, can
influence us to become more motivated while exercising. Aside from listening to music,
interacting with music, such as by singing, increases the concentration of oxytocin, which reduces
anxiety and pain (Theorell, 2014). Oxytocin is one of many hormones that are involved in the
immediate biological reactions to music, and these hormones increase when singers sing in a
choir, which has health-promoting effects (Theorell, 2014). It seems that musical ability helps
with lowering stress levels and helps the body to protect itself against adverse effects of stress
by "repairing and restoring worn-out cells" (Theorell, 2014, p. 106).
This is important in understanding the health benefits of playing music. If we look
closely at musical ability, how much of it is genetic, how much is influenced by environment,
or is it both?

Genetics and Music

There is very little research in the area of genetics and music; it is a relatively new field.
The research of the medical geneticist Irma Järvelä has been important in finding the
genetic basis of music and how much of musical ability is genetic and how much is learned.
Järvelä and her colleagues analysed 224 family members who were musicians or related to
musicians. The subjects were given standardised musical tests which included the ability to
discern differences in the duration of two tones, and differences in pitch. What the researchers
found was a heritability of 50 percent. This is interesting considering that some subjects with no
musical training scored at a professional musical level (Wright, 2008). Genetics gives us an
insight into how much of music is heritable. Irma Järvelä says that heritability alone is not the
key to musical genius and that exposure to music is important, as Mozart came from a musical
family. Some people may have genes that give them the ability to perform music better than
others, but these genes need to be triggered by exposure to music (Torres, 2015). A study by the
University of Helsinki (2009) found a strong genetic component for the creative functions of
music: composing, improvising, and arranging. The study compared high music test scores with
creative functions in music. This was done because creativity is a multifactorial genetic trait
which involves a complex network composed of genes and environment. It seems that musical
ability is a complex interaction between genes and environment: it helps if you are
genetically predisposed to musical ability, but all in all, there needs to be environmental
exposure.
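For readers unfamiliar with how a figure such as a heritability of 50 percent is arrived at, the toy calculation below illustrates one classical approach, Falconer's twin-comparison formula. This is not the method of the Järvelä family study, and the twin correlations used here are invented purely for illustration.

# Falconer's formula: heritability is estimated as twice the difference between the
# trait correlations of identical (MZ) and fraternal (DZ) twins. Values are invented.
r_mz = 0.70   # hypothetical correlation of musical-test scores in identical twins
r_dz = 0.45   # hypothetical correlation in fraternal twins

heritability = 2 * (r_mz - r_dz)
print(f"estimated heritability: {heritability:.0%}")   # 50% with these example numbers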
Research has shown that exposure to music from an early age can aid in cognitive
development and increase mathematical and language skills (Theorell, 2014). Music
programmes, such as those by Kindermusik in playschools and crèches, are based on current
research in child development and music, and show effectiveness in helping develop
cognition, mathematics and logic, literacy and language, social-emotional development, and
also physical, creative, and musical development. We will now review some of the research on
music in children.

Children and Music

To understand music in the infant we must look at how the infant learns to develop language as
music and language are closely related (we will go into more detail in the Psyche section). By
looking at the development of language in newborns we see that the musical aspects of
language come first, in the form of babbling and cooing. Babbling occurs in the first
year of life, usually between six and nine months. Babbling is when phonemes are produced and
take the form of consonants and vowels such as "ma" and "da". Babbling occurs at around the
same age in all babies regardless of culture, and even deaf babies of deaf-mute parents show a
kind of sign-babbling (Petitto, 1988). Newborns are already sensitive to language as we can
see from infant-directed speech. Infant-directed speech is how adults communicate intuitively
through regulating the pitch and timbre of the vocalisations. Infant-directed speech is the music
of speech, which helps the child develop language and we see that intonation, phrasing and
rhythm develop first while syntax and vocabulary develop later. Research has shown that
newborns can distinguish the intonation and timbre of their mother's voice, and prefer it to any
other voice (McGilchrist, 2009). McGilchrist argues that the above capacity to distinguish the
characteristic inflections of language is not necessarily a characteristic of an inborn talent for
language, as he states that:
They rely on aspects of right-hemisphere holistic processing capable of making fine
discriminations in global patterns and having little to do with the analytic processing of
language by the left hemisphere [. . .] These processes, then in newborns have more to do
with the activation of areas of the brain which subserve the non-verbal, the musical, aspects
of speech (McGilchrist, 2009, p. 103).

What McGilchrist is saying is that music is ontologically older than language but we will look
at that in more detail in the next chapter on the psyche. By looking at research from the areas of
child psychology we find that music has innate properties (Trevarthen and Malloch, 2000). The
child uses babbling and cooing to communicate feelings to the parent: that it is hungry, sleepy,
and so on. The parent also communicates with the child via infant-directed speech, as discussed
above. This infant-directed speech is intuitive and is referred to as "Communicative Musicality",
which refers to "those attributes of human communication, which are particularly exploited in
music . . . [and which] is vital for companionable parent/infant communication" (Darnley-
Smith and Patey, 2003, p. 6).
Research into child development has found that mothers seem to instinctively know
that their babies prefer lullabies to any other music and that infants prefer to hear a female
voice more than that of a male. Infants also prefer live music as compared to recorded
music (Coleman et al, 1997; Trehub, 2000). Infants and newborns can discriminate between
high and low pitches. The low pitches elicit more of a response than the high pitches in utero as
well as in newborns. This seems to make sense as we look at the nature of low pitches which
can penetrate the womb and travel much more deeply into the tissue compared to high pitches
which dissipate quite rapidly. If we look at timbre we find that neonates and infants also prefer
the lower-pitched sounds compared to higher-pitched sounds. Though women's voices have a
higher timbre than men's, it seems that this preference is due to the nurture of an intimate
relationship with the mother (Schneck and Berger, 2006).
Some researchers have found that music's function in the developing child is to aid
with complex cognitive function and social activities, as music helps with the demands of
language and social interaction (Levitin, 2006). Levitin states that "Music processing helps
infants to prepare for language; it may pave the way to linguistic prosody, even before the
child's developing brain is ready to process phonetics. Music for the developing brain is a form
of play, an exercise that invokes higher-level integrative processes that nurture exploratory
competence, preparing the child to eventually explore generative language development
through babbling, and ultimately more complex linguistic and paralinguistic productions"
(2006, p. 262). This confirms McGilchrist's assertion that music is ontologically older than
language and aids in the development of a child's cognition.
Research has pointed toward a preference for consonance over dissonance and found
that babies listened to consonant music more than to dissonant music (Mannes, 2011). But this
preference could be due to learning. Duly note that a preference doesn't mean they liked it
better. Research by Sandra Trehub, meanwhile, found that babies didn't mind atonal music. It seems
from this research that a preference for consonant music over dissonant music is learned.
Western music promotes consonant music, while certain cultures have a preference for
dissonant music (Mannes, 2011). In the debate of biology vs. culture, Sandra Trehub is very
much on the side of biology: she believes in an innate preference for consonant music over
dissonant music, a preference that is evolutionary because dissonance gives rise to a sense of fear
(Mannes, 2011). Other research found that babies passively listening to music didn't care
whether the music was tonal or atonal, while babies actively engaged with music preferred tonal
music. It seems that preference is learned (Mannes, 2011).
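One common (though contested) account of why consonance might be preferred appeals to frequency ratios: consonant intervals lie close to simple whole-number ratios, while dissonant intervals do not. The short sketch below computes the equal-tempered ratio for a few intervals alongside the traditional just-intonation fraction; it illustrates the ratio argument only, not the developmental findings discussed above.

# Equal-tempered interval ratios (2 ** (semitones / 12)) next to the simple
# just-intonation fractions traditionally associated with them.
intervals = {
    "unison":        (0, "1/1"),
    "perfect fifth": (7, "3/2"),     # consonant
    "major third":   (4, "5/4"),     # consonant
    "minor second":  (1, "16/15"),   # dissonant
    "tritone":       (6, "45/32"),   # dissonant
}

for name, (semitones, just_ratio) in intervals.items():
    ratio = 2 ** (semitones / 12)
    print(f"{name:14s} {ratio:.4f}  (just intonation ~ {just_ratio})")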
If we look in utero we see that fetuses selectively respond to familiar songs even before
birth. So, fetuses are learning songs and reacting to them in the womb (Mannes, 2011). But if
we look at brain development in the first few months we see that the infant brain is unable to
distinguish the source of sensory inputs, as certain regions of the brain, such as the auditory
cortex, the visual cortex, and the sensory cortex, are functionally undifferentiated, which
leaves the infant in a state of complete "psychedelic splendour" (Levitin, 2006).
The question of whether we are born with an innate sense of rhythm has been answered
in the affirmative by a joint European study, which found that "the babies' brains actually
produced an electrical response after each deviant rhythm, indicating they expected to hear that
downbeat in the regular place" (Mannes, 2011, p. 48). There is a connection between much of
today's music and the human heartbeat, as the tempo of most music ranges within the heartbeat
range of 60 to 150 beats per minute. This is found in all societies, from primitive drums to modern music.

Mannes (2011) gives an interesting account of a music therapist at work in a neonatal intensive
care unit. The music therapist tapped a wooden gato drum, an African instrument that
mimics the fetal heartbeat. As she tapped, the baby's heartbeat began to beat in rhythm with the
drum, which could be seen on the heart-rate monitor.
We are all aware of the "Mozart effect", where it is believed that listening to classical
music in utero and in early infancy makes babies more intelligent. Is there any truth to this or is
it just a myth fuelled by the media? The original study, by the psychologist Frances Rauscher,
was published in Nature in 1993. The study consisted of 36 college students who listened to
either ten minutes of Mozart, relaxation music, or silence while doing spatial reasoning tasks.
In one of the tasks the college students who listened to Mozart showed a significant improvement
in their performance. After this study was released, people started buying classical CDs to
improve their child's IQ. Even governors in America were promoting the use of classical music
in crèches. In 1999 the psychologist Christopher Chabris performed a meta-analysis of 16 Mozart
effect studies to assess its overall effectiveness. He found an improvement of only half an IQ
point, and only in the paper-folding task. Rauscher herself believes the Mozart effect to be a
myth. The media took the research and created hype based on an insignificant increase in IQ
(Swaminathan, 2007). Research has shown an increase in cognitive ability, though an increase
in IQ is a myth (Theorell, 2014).

Summary

This chapter has discussed how music interacts with the soma. We discussed Dionysus and
how he relates to the biology of the person. We also examined elements of the soma such as
the human brain, physiology, stress, genetics, and how music interacts with children and
babies. This chapter has given us an in-depth description of music and how it interacts with the
somatic dimension of the human person. We will now continue with our investigation into
music and its interaction with the dimensional ontology of the person by examining the second
dimension: the psyche.

Chapter 2

Music and the Psyche

The only truth is music.

- Jack Kerouac

Introduction

The previous chapter discussed the somatic dimension and we will now examine the dimension
of the psyche by investigating how Apollo symbolises the psyche.

Apollo and Music

The psyche is focused on conceptualising music as opposed to feeling it, while also being concerned
with aesthetics and structure. We will look at how the psyche perceives music using the
Greek god Apollo, who is the ambassador of the psyche as described by Nietzsche in his book
The Birth of Tragedy.
The Greek god Apollo is the god of sculpture, music, poetry, truth and prophecy, and
sunlight. Many of the characteristics associated with Apollo are associated with the personality
trait of introversion according to Jung (1991) and with the dream-like state described by Nietzsche (1995).
Though Apollo is associated with music alongside Dionysus, his view of music is very different
from that of Dionysus. He is associated with reason, intellect, structure, self-knowledge,
restraint, and harmony. Apollo, as described by Paglia, is "the hard, cold separatism of western
personality and categorical thought" (1991, p. 96). He is the divine image of the principium
individuationis. He stands away from the crowd; he represents a freethinking, introspective
individual. Apollo demands "know thyself" and "nothing overmuch"; consequently "pride and
excess are regarded as the truly inimical demons of the non-Apollonian sphere" (Nietzsche,
1995, p. 11).
Apollo is depicted as a handsome man, a portrayal of the aesthetic beauty that is
important to him. So, in terms of music, aesthetics, form, and structure are important.
Apollo objectifies (Paglia, 1991): he steps back from the crowd and watches, preserving
his individuality. He is serious about himself and is ruled by intellect and reason. He is
creativity controlled, compared to the uncontrolled creativity of Dionysus. He despises
Dionysian frenzy. He is often seen with a golden lyre, which invokes feelings of harmony and
serenity. These feelings of harmony and serenity are based upon technique, as Apollo sculpts
sound. It is the psyche (Apollo) that sculpts the sound that the soma (Dionysus) feels.
The duality of Apollo and Dionysus is essentially the harmony of reason versus the
experience of ecstasy. Apollo plays the music and sculpts it with his psyche, while Dionysus is
played by the music in ecstatic frenzy. Apollo finds himself while Dionysus is lost. Without the
Apollonian, the Dionysian lacks the form and structure to make a coherent piece of music, and
without the Dionysian, the Apollonian lacks the necessary vitality and passion. This duality of
music is in us all. Dionysus represents the soma and Apollo represents the psyche. They are
opposed but intertwined. Music is physically and emotionally based, i.e. it is rooted in the
soma and is Dionysian, but it needs to be shaped and organised by the Apollonian techniques of the
psyche (Storr, 1992). Instead of the soma and psyche trying to dominate each other, there
should be a harmonising flow between the two.
We have looked at the Apollonian perspective of the psyche to set the tone for this
section. The Apollonian sphere is associated with psychological pursuits, as Kaufmann (1974)
points out, and also with aesthetics. Aesthetics are important to Apollo, and since he represents the
psyche we will look at what the philosopher Roger Scruton has to say about the aesthetics of
music. Roger Scruton gives us an in-depth view of the aesthetics of music as his approach is
Apollonian and relates to the psyche.

Perception of the Aesthetics of Music

Roger Scruton (1997) and Philip Ball (2010) believe that music is intrinsically human and in
this section we will investigate the human perception of music by examining the aesthetics of
music. The aesthetics of music are organised by the perception of the human musical
experience (Scruton, 1997). Scruton (1997) gives an in-depth analysis of the aesthetics of
music which we will now investigate.
Roger Scruton (1997) talks about the primary and secondary qualities of objects. For
example, if I ask you to point out the colour orange, you point to an orange highlighter: the
colour orange is a secondary quality of the highlighter, while its physical properties, such as its
shape, are primary qualities. One cannot point to a colour without pointing to an object
that bears that colour. With music this is entirely different. Sound has no object
and thus is not a primary or secondary quality of anything. One can identify a sound without having to
identify its source.
Is sound a physical object? Sound emanates from physical objects but it is not physical,
in the same way that a rainbow is visible and occupies physical space but is not a physical object.
Rainbows, like music, are not qualities at all, as Scruton puts it. Sounds can be identified without
identifying their source. If we look at the physics of sound we see that sounds are produced by
waves: vibrations communicated to the ear. When I hear a sound I know that the sound
waves are reaching my ear from the direction in which I locate the sound. This is the physical
reality which explains how we hear (Scruton, 1997).
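To make this physical description concrete, the sketch below synthesises a pure tone as nothing more than a sine wave: a pressure oscillation at a single frequency, sampled at the conventional audio rate (the 440 Hz pitch and 44.1 kHz sample rate are standard illustrative choices, not anything Scruton specifies). Everything Scruton goes on to say about tone and music concerns how we hear and organise such waves, not the waves themselves.

# A pure tone is just a sine wave: here, one second of a 440 Hz tone.
import numpy as np

sample_rate = 44100                     # samples per second
duration = 1.0                          # seconds
frequency = 440.0                       # Hz (concert A)

t = np.linspace(0, duration, int(sample_rate * duration), endpoint=False)
tone = 0.5 * np.sin(2 * np.pi * frequency * t)   # amplitude 0.5 leaves headroom
# `tone` could be written to a WAV file or played back; as data it is only the
# vibration described above, not yet "music" in Scruton's sense.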
Sounds exist independently of ourselves and are not qualities of things, and our perception
of sound is what Scruton (1997) calls "phenomenal sound". Phenomenal sound is how we
perceive sound and is always the result of sound waves. The objective reality of sound is
phenomenal but also intrinsic. When you take all of this into account, it is almost strange to say
that we can count and individuate sounds! Sound only lasts for a certain amount of time and
then vanishes, but "its spatial properties are indeterminate or vague, and even its temporal
boundaries may be unclear until fixed by convention" (Scruton, 1997, p. 8). Scruton (1997)
describes sound as a "pure event" as opposed to an event. He gives an example of a car crash and
how there are many events involved in this scenario, but in the case of a sound that I hear being
produced by something, I am presented in hearing with the sound alone. The thing that produces the
sound, "even if it is something heard, is not the intentional object of hearing, but only the
cause of what I hear" (Scruton, 1997, p. 11). Scruton speaks of a "sound world" in which sound
only occurs. The pure event is not found in vision as visible objects cut each other off from the
eye. With sound this is not the case as sound drowns out other sounds and saturates our hearing
to the point where we cannot distinguish what is there. This sound world, however, contains
only events and processes, with no people or substances. Scruton describes the sound
world as "inherently other, and other in an interesting way: it is not just that we do not belong
in it; it is that we could not belong in it: it is metaphysically apart from us. And yet we have a
complete view of it, and discover in it, through music, the very life that is ours. There lies the
mystery, or part of it" (1997, pp. 13-14).
Sound and music are different. Music is an art of sound and a special kind at that.
Music is organized sound. When we hear sound as music we order it. The sound of a rusty
hinge is not music but if it is ordered, it can become musical. When it comes to music we have
a tacit knowledge of it, in a similar way to how we have a tacit knowledge of grammar. Sound exists in a
sound world and we are able to perceive it and organize it with our minds, creating an art of
sound that we call music.
When discussing tone we must be careful not to confuse tone with sound. Tone cannot
be reduced to sound as tone arises from music. Music, unlike sound, is "an intentional object of
musical perception" (Scruton, 1997, p. 78). Scruton says that when we hear music, we do not
"hear sound only; we hear something in sound, something which moves with a force of its own"
(1997, pp. 19-20). This something is called tone. A tone is any sound considered with reference
to its quality, pitch, strength and source. Tone according to Scruton is purely the product of
imagination; the tone occupies some imaginary musical space. Tone, then, is not a material thing
but an idea. Our experience of sound as music depends on our imagining the sound as something
other than sound. Perception plays a huge part in our experience of music and is a natural
epistemological power of the organism, which depends on no social context for its exercise.
The musical experience, however, is not merely perceptual. It is founded in metaphor, arising
when unreal movement is heard in imaginary space. Such an experience occurs only within a
culture, in which traditions of performance and listening shape our expectations (Scruton,
1997, p. 239). It seems then that metaphor is essential rather than contingent to musical
experience. Metaphorical descriptions are a fundamental aspect of understanding sound as
music. If you take metaphor away, you cease to be able to describe music, only sound, because
metaphor describes the intentional object of the musical experience.

Scruton describes tonality as the central metaphor for describing our experience of
music. The metaphor of tonality is crucial because it allows us to experience music spatially.
Tonality gives us the impression of social harmony while atonal music is the opposite and
Scruton describes atonal music as partly negative and filled with anxiety. Atonal music suffers
from a poverty of organizing metaphors as found in tonal music. Tonal music is essentially a
description of music that sounds good while atonal music sounds unpleasant. Social harmony
in tonal music is merely a response to the metaphor of tonality while anxiety is a response to
the metaphor of atonality in music.
Now that we have looked at the aesthetics of music from an Apollonian perspective, we
will investigate music in respect of language, because these two systems share a similar
architecture according to McGilchrist (2009), and by examining them side by side we see the
underlying complexity of the relationship we have with music and language.

Language and Music

We are born into a world with two distinct sound systems: language and music (Patel, 2008).
Language is an important part of the human psyche as language is how we communicate with
each other via a set of arbitrary symbols (Gross, 2009). The language system consists of
vowels, consonants, and pitch contrasts while the musical system consists of timbres and
pitches of a culture's music (Patel, 2008). We will first investigate the origins of both as they
have an interwoven history that is highly debated. We will then examine the aesthetics of both
as we have mentioned above, Apollo is the ambassador of the psyche and one of the traits
associated with him is aesthetics. Aesthetics will be important for understanding the shared
architecture of music and language: language is a primary form of communication, whereas music is
believed to communicate the emotions, which we will investigate in the next section.
It is not known when either of these systems developed but it is speculated that
language may have developed approximately 40,000 years ago. There is evidence from cultural
artefacts that arose around 80,000 years ago that early man practised ritualised
burial of the dead, for which there would have to have been some sort of communication (McGilchrist,
2009). The archaeologist Nicholas Conard found a flute with five finger holes which was dated
to between 35,000 and 40,000 years ago. Experts who examined it found that it was more
technologically advanced than modern instruments. The scale that the flutes play is
remarkably similar to the scale played on modern instruments (Mannes, 2011). These flutes
show that music was an integral part of human existence. There is very little agreement when it
comes to the understandings of the origins of music and language. There are three likely
scenarios:
1.) Some believe that music is a spin-off of language.
2.) Some believe that language developed from musical communication.

3.) Finally, there are those who believe that music and language developed independently
but alongside each other. This is called "musilanguage".

There is a close relationship between music and language and we can only postulate as to
which came first phylogenetically, but the evidence of the fossil record, which is in itself
rather convincing, seems to point toward music developing before language. McGilchrist cites
Salomon Henschen as saying the following: "The musical faculty is phylogenetically older than
language; some animals have a musical faculty, birds in a high degree. It is also ontologically
older, for the child begins to sing earlier than to speak"
(2009, p. 103). According to McGilchrist (2009) the control of voice and respiration came into
being before language developed, and if we pay close attention to the development of language
in children we will see the musical aspects of language develop first: intonation, phrasing, and
rhythm come first, and language follows as syntax and vocabulary occur later. Baby-talk or
infant-directed speech shows that newborns are sensitive to the rhythms of language. To help
newborns acquire language, parents raise the pitch, slow down the tempo and emphasise the
rhythm of their speech (McGilchrist, 2009). We discussed language acquisition and music in
greater detail in the Soma section on Children.

Aesthetics of Language and Music

We will now look briefly at the aesthetics of music and language, as language is the
primary form of communication for most of humanity (Patel, 2008) but music is also
considered a form of communication in that it can communicate the emotions, i.e. music is
considered a "language of emotions" (Thompson, 2009). Patel states that "the central role of
music and language in human existence and the fact that both involve complex and
meaningful sound sequences naturally invite comparison between the two domains" (2008, p.
3). One of the commonalities of language and music is connection. We try to connect to people
through the use of language and music as both convey something and they define us as human
(Patel, 2008). As previously mentioned in the last chapter, music aids in the development of
language. These two sound systems have a similar architecture which we will now examine.
Research has shown that there are connections between music and language in terms of
cognitive and neural processing (Patel, 2008). We will focus on rhythm, melody, and syntax, and
touch briefly upon meaning. Rhythm in music has a regularly timed beat with which one can
synchronise periodic movements such as taps. When it comes to rhythm in language there
are three approaches which researchers have taken: typological, theoretical, and perceptual.
The typological approach looks at the differences and similarities among human languages. For
example, it would look at stress-timed languages such as English, Thai, and Arabic, and
syllable-timed languages such as French, Hindi, and Yoruba. The theoretical approach looks to
uncover "the principles that govern the rhythmic shape of words and utterances in a given
language or languages" (Patel, 2008, p. 118). The perceptual approach looks at the role rhythm
plays in the perception of speech.

Music and language are both rhythmic and have important similarities and differences.
They are similar in grouping structure: for example, tones and words are grouped into
units such as phrases. A difference, however, is temporal periodicity, which
occurs in musical rhythm but is lacking in speech rhythm.
According to Patel (2008), melody is an intuitive concept and one that is hard to define.
It is usually defined by pitched sounds or a sequence of notes arranged in musical time. A
difference between musical melody and linguistic melody is that musical melodies are built
around a stable set of pitch intervals while linguistic melodies are not. Another difference is a
phenomenon known as declination, which is unique to speech. Declination is "the gradual
lowering of the baseline pitch and narrowing of pitch range over the course of an utterance"
(Patel, 2008, p. 184). Despite the differences between linguistic and musical melody there are
similarities in structure and processing. An example of this would be the statistics of pitch
patterning, which can be reflected in instrumental music via a composer's native language.
Neuropsychological research (Patel, 2008) has shown that musical and linguistic melody
overlap in the brain.
Linguistic syntax is so complex that it sets itself apart from any known nonhuman
communication system. With linguistic syntax comes a strong relationship with meaning.
Changing the structure of a sentence can alter its meaning. This sets linguistic syntax apart
from other syntactic systems such as whale song and bird song. An important part of linguistic
syntax is that words can take on abstract grammatical functions which are "determined by their
context and structural relations rather than by inherent properties of the words themselves"
(Patel, 2008, p. 244). However, with human music, rearranging chord sequences (given that
tension and resolution is taken as a kind of musical meaning) can have a strong impact on the
meaning of the music through the influence of tension-relaxation patterns (Patel, 2008).
Despite the differences between musical and linguistic syntax (each having domain-specific syntactic
representations), there is "overlap in the neural resources that serve to activate and integrate
these representations during syntactic processing" (Patel, 2008, p. 297).
Meaning in language refers to semantic reference and predication. However, if we use
this definition of meaning in relation to music we will find that music and language will have
very little in common (Patel, 2008). Language, according to Patel (2008), transmits three basic types
of meaning between individuals; it can:

1. Refer to concepts and predicate things about them
2. Ask questions and express wishes
3. Make metalinguistic statements

However, music does not transmit these types of meanings. What meaning means in
language does not work with music. Music has no concrete meaning whereas language does.
The assertion that instrumental music expresses emotion is highly debated, as certain people can
enjoy a piece of music without feeling any emotion in response to it. According to McGilchrist, music is
"the communication of emotion, the most fundamental form of communication, which in
phylogeny, as well as ontogeny, came and comes first" (2009, p. 103). If music came first, then
shouldn't it have some sort of meaning? Does music express emotion? We will now move our
investigation on to emotion and music in order to resolve this problem.

Music and Emotion

Music and emotion have been discussed for thousands of years. Philosophers including Plato
and Aristotle expressed the importance of music and how it can affect our emotions (Ball,
2010). Aristotle suggested that changes in rhythm and scales can lead to an emotional
response (Ball, 2010). However, he warned the youth against the Phrygian mode as it was
exciting and would elicit Bacchic frenzy (Thompson, 2009). It is only recently that
musicologists and music psychologists have started studying the effects of music on
emotion. What we have found is that music is a powerful tool for regulating emotions, as
music is "an effective device for conducting the emotional work that is required to maintain
desired states of feeling and bodily energy (relaxation, excitement), or to diminish
undesirable emotional states (e.g., stress, fatigue)" (Thompson, 2009, p. 170). Research into
music and emotions focuses on perceptive and cognitive processes because researchers
believe that they are universal (Thompson, 2009; Deliège & Davidson, 2011). Subjective
emotional responses are unreliable and are determined by culture and social norms.
Before we begin discussing the theories of emotion and music we will first define
what we mean by emotions. Juslin, a prominent music psychologist, proposes a broad
definition of emotions as:
Emotions are relatively brief, intense, and rapidly changing reactions to potentially
important events (subjective challenges or opportunities) in the external or internal
environment - often of a social nature - which involve a number of subcomponents
(cognitive changes, subjective feelings, expressive behaviour, and action tendencies)
that are more or less synchronised during an emotional episode (Juslin, 2011, p. 114).

There have been various theories to describe the unique power music has to arouse emotions
(Thompson, 2009) and we will discuss four of those theories: Melodic Cues,
Morphology of Feeling, ITPRA Theory, and Multiple Mechanisms Theory. These four theories
give a broad and comprehensive picture of the discussion and current research on music
and emotions. We will begin by looking at the first theory in our discussion - Melodic Cues.

1.) Melodic Cues

One of the theories of music and emotion is that proposed by Deryck Cooke (1959) in his book
The Language of Music, in which he discussed the idea that melodic intervals and patterns have
their own emotional qualities (Thompson, 2009). He implies, as the title of the book suggests,
that music is a language of emotion. In the book he proposes that an ascending major third
suggests joy or triumph, an ascending major sixth a longing for pleasure, a minor
sixth anguish, and an augmented fourth (tritone) hostility and disruption, while a major
second is less suggestive and depends on context. Cooke (1959) holds that the relationship
between musical intervals and emotions is universal and not merely a language for Western
music.
Cooke used musical examples to back up his theory, many of which
have vocal accompaniments. Cooke connects the words with an interval or pattern, for example in
The Simpsons theme. The leap from "The" to "Sim-" is an augmented fourth,
which has an emotional association of hostility and disruption (Thompson, 2009). What Cooke
has demonstrated is that there is a link between certain melodic intervals and emotional
responses across a range of styles, though his claim that they are universal is questionable and
ethnocentric (Thompson, 2009). Thompson explains that "composers might associate the major
third with joy simply because other composers have done so in the past, and listeners have
learned those associations through repeated exposure to music" (2009, p. 178). There is
evidence that major thirds are perceived as more joyful than minor thirds (Thompson, 2009).
So it seems that the link between certain melodic intervals and emotional responses may be
learned behaviour, as the cognitive basis for such associations remains unclear. The philosopher
Susanne K. Langer rejected Cooke's theory and we will now consider her views on music.

2.) Morphology of Feeling

In 1941 Susanne K. Langer in her book Philosophy in a New Key discussed the significance of
music using her theory Morphology of Feeling. Langer rejected the idea that music
represented specific emotions unlike Cookes theory above. She states that music has no
specific vocabulary linking musical intervals and patterns to emotions. Music and emotions are
two separate systems that have similar dynamic patterns. The similar dynamic patterns of
music and emotion include: motion and rest, agreement and disagreement, fulfillment, tension
and release, excitation, and sudden change (Thompson, 2009). Langer states that music
acquires its meaning through its natural resemblance to the dynamic forms of emotional life.
Music and emotion resemble each other, but this does not mean that they express each other.
For Langer, the fact that they resemble each other does not place them in a symbolic or semantic relation to
each other. We as people tend to mistake this resemblance for expressiveness and so it is we
who give music its meaning. Langer makes this clear by arguing that music does not reference
emotion through arbitrary signs in a culture-specific manner, as words refer to objects.
For example, the word "mouse" is an arbitrary sign for an actual mouse and bears no resemblance to
the mammal (Thompson, 2009). In short, musicians use the aesthetics of music (rhythm,
melody, harmony, timbre), which are culture-specific, to express an idea. As Langer argues, we
simply mistake the resemblance between emotions and music for expressiveness.
Music, according to Langer (1941), does not have an inherent meaning and is non-
representational. Meaning is expressed through an underlying idea given by the composer, and
the listener interprets the meaning using learned, culture-specific cues of music.
Langer's philosophical discussion of music and emotion leads us to a cognitive
understanding to "unmask this deeper significance of music" (1941, p. 167). We will now

look at the work of David Huron who developed the ITPRA theory, which focuses on the
underlying cognitive mechanism of expectancy.

3.) ITPRA Theory

Another theory of emotions and music is the ITPRA theory developed by David Huron in
2006. The theory focuses on expectancy to understand the emotional qualities of music. Huron
points out that focusing on expectancy is only one of many ways to understand music and
emotions, and that expectancy helps prepare the individual for the future (Thompson, 2006).
ITPRA stands for imagination, tension, prediction, reaction, and appraisal.
These are the five categories associated with expectancy, and they are broken down into pre-
outcome and post-outcome responses. Pre-outcome responses involve the categories of
imagination and tension, as these are feelings that occur prior to an event, expected or
unexpected. The post-outcome responses include prediction, reaction, and appraisal. The five
types of expectancy can operate simultaneously, creating a complex emotional experience of a
musical event (Thompson, 2006).
Imagination involves imagining a future event and acting in a way that makes the event
more likely if it is positive and less likely if it is negative. Tension is a physiological
preparation for an event that is about to happen. An example of this would be the fight-or-flight
reaction to a stimulus. This response is activated just before an anticipated moment of outcome.
We now move on to the post-outcome responses. Prediction involves states of reward or
punishment in response to an expectation. Accurate predictions are rewarded with positive
emotional responses while inaccurate predictions are punished with negative emotional
responses. Reaction occurs automatically and activates physiological responses; these are usually
instinctive reflexes, though some such reflexes can be learned. Appraisal is the conscious
assessment of an outcome. Huron states that expectancy is only one source of tapping into
musical emotions (Thompson, 2009). The emotions function to:
Reinforce accurate prediction, promote appropriate event readiness, and increase the
likelihood of future positive outcomes. Positive feelings reward states deemed to be
adaptive, and negative feelings discourage states deemed to be maladaptive. Music
making taps into these primordial functions to produce a wealth of compelling emotional
experiences (Thompson, 2009, p. 185).

The ITPRA theory can help explain a wide variety of emotional experiences of music, though
research seems to suggest that these emotional experiences are learned from
repeated exposure to Western tonal music and that "such regularities are stored as mental schemata,
which function to interpret new sensory input and generate expectations for events to follow"
(Thompson, 2009, p. 186).
We will now look at more underlying mechanisms of music and emotion by looking at
the Multiple Mechanisms Theory.

4.) Multiple Mechanisms Theory

The Multiple Mechanisms Theory (Juslin, Liljeström, Västfjäll, & Lundqvist, 2010) describes
the underlying mechanisms which induce emotions. There are seven mechanisms: brain
stem response, rhythmic entrainment, evaluative conditioning, emotional contagion, visual
imagery, episodic memory, and musical expectancy.
The first mechanism is the brain stem response, whereby music induces an emotion
because an acoustical characteristic of the music signals an important or urgent event to the
brain stem. Loud and dissonant sounds would trigger this response and increase arousal.
The second mechanism is rhythmic entrainment and is a process by which music
induces an emotion because the rhythm of the music interacts with an internal rhythm of the
body such as the heart rate.
The third mechanism is evaluative conditioning and is when an emotion is induced
upon hearing music because the music has been paired with a positive or negative event - for
example, a piece of music played when you meet a friend who was away travelling. The piece
of music would remind you of how you felt when you met your friend.
The fourth mechanism is emotional contagion and refers to perceiving an emotion from
a piece of music and then mimicking that emotion.
The fifth mechanism is visual imagery and refers to emotions induced when the listener
conjures up inner images while listening to music. The emotions experienced come from the
interaction of the music and these images. The nature of this process needs to be explained
further, but listeners appear to conceptualise the musical structure "through a metaphorical
non-verbal mapping between the music and image schemata grounded in bodily experience"
(Juslin, Liljeström, Västfjäll, & Lundqvist, 2010, p. 622).
The sixth mechanism is episodic memory and is when an emotion is induced in a
listener because the music evokes a personal memory or event in the listener's life. When we
listen to music, a memory can be evoked which in turn induces the emotion associated with that
memory. For example, a song may evoke a memory of a specific moment or event in a
person's life.
The seventh mechanism is musical expectancy and is when an emotion is induced when
a piece of music violates or confirms the listener's expectations. The expectations are based
upon repeated exposure to the same style of music.
The seven mechanisms described afford a comprehensive account of emotional
responses to music, and our emotional experiences of music are typically the outcome of
several mechanisms operating simultaneously (Thompson, 2009, p. 170). The Multiple
Mechanisms Theory explores the interaction of music and multiple psychological mechanisms
(as there is no single mechanism that can account for all musical emotions), which can help us
understand humans as individuals as well as a species, though more research into the seven
mechanisms is needed to understand this fully.
We will move away from the theories and examine how we respond to music
emotionally, the properties of music that lead to an emotional response, as well as the context of

musical emotions. Discussing the properties of music and their relationship to emotional
response can be problematic for various reasons but we will examine what the research has
found regarding the relationship between the two.
Gabrielsson & Juslin (1996) conducted research on the relationship between the music
created by the performer and the listener's experience of the music. What they found is that
bright timbre, fast tempo, and exaggerated rhythmic contrasts were used by performers to
express happiness. They also found that performers used slow tempo, slow and deep vibrato, as
well as a soft dynamic level to convey sounds of sadness. Performers used loud dynamics,
harsh timbre, dissonance, and rapid tempo to express anger. Musicians use a variety of
techniques as well as sound dynamics to convey various emotions. A study by Ile & Thompson
(2011) found that changing the pitch and tempo of music can influence emotional responses
to it. It is hard to discern whether emotional meaning in music is universal or culturally
specific, as most of the studies in the field of music psychology are based around Western tonal
music, and music involves "both culturally specific cues to emotion and basic psychophysical
cues that transcend cultural boundaries" (Thompson, 2009, p. 202).
Thompson and Balkwill (2010) devised a model called the Fractionating
Emotional Systems (FES) model to determine the balance of universal and culture-specific cues
by looking at phylogeny and ontogeny. The FES model assumes that at birth and in early
development emotional responses to music depend on "emotional cues that reflect primitive
operations of the auditory system" (Thompson, 2009, p. 202). The phylogenetic base tries to
understand the psychophysical cues to emotion. The ontogenetic process tries to understand the
modalities of music and prosody across cultures. Phylogeny is universal and every culture in
the world shares this type of musical sensitivity, while ontogeny covers culture-specific, learned
emotional cues to music. Differentiating phylogeny from ontogeny helps us to
understand the relationship between music and emotions much better.
There seems to be considerable debate as to emotional meaning in music. There are two
views on how listeners respond to music: one is the cognitivist position and the other is the
emotivist position. The cognitivist position argues that listeners understand the emotions
conveyed by music but do not experience them. The emotivist position
argues that music gives rise to an emotional response in listeners. The causal connections
between the two are unclear and beg the question: does our mood determine the type of
music we listen to, or does the type of music determine our mood? It seems that both are
possible (Thompson, 2009). There is a great deal of support for the emotivist position as
experiences of intense emotion have been widely reported by researchers (Thompson, 2009).
One factor to help us understand how we respond emotionally to music is context.
Context of musical emotions is important to understand because musical emotions
occur in a complex interplay of listener, music, and environment. What we do know is that we
cannot predict emotions from the characteristics of music alone, that people react
differently to the same piece of music, and that the same person reacts differently to the same piece
of music in a different context (Juslin, 2011). Some responses to music are conditioned by
context, and context might be key to an emotional response (Ball, 2010). Where do most people

experience musical emotions? According to Juslin, the "most common activities during musical
emotions are focused music listening, travel, movie or television watching, work/study, social
interaction, and relaxation" (2011, p. 118). Common sources of music listening are iPods,
mobile phones, laptops, and television. Musical emotions occur mostly at home and outdoors,
as people spend most of their time there, but they can occur in a variety of places, which
indicates that place alone is not all that important (Juslin, 2011). Although music often occurs
in social settings, a significant proportion of musical emotions occur when the listener is alone.
Whether the listener is alone or with others influences which emotions are induced. Emotions
that occur in social settings include happiness-elation, pleasure-enjoyment, and anger-irritation,
while other emotions such as calm-contentment, nostalgia-longing, and sadness-melancholy
occur in a solitary setting (Juslin, 2011). Understanding the context of listening to music helps
us to understand a person's relationship with music in a variety of social settings as well as
the inner world of the individual in relation to music.

Summary

In this chapter we discussed key elements of the psyche - perception, language, and emotion -
and how these psychological processes helped us to understand how music operates within the
psychological dimension of the human person. We will now continue our investigation into
music and its interaction with the dimensional ontology of the person by examining the third
and final dimension - the noûs, or the noological.

Chapter 3

Music and the Noûs

Music expresses that which cannot be said and on which it is impossible to be silent.

- Victor Hugo

Introduction

After examining how music interacts with the human mind, we will now begin our
investigation into the noetic dimension by first discussing the transcendental nature of music.

The Transcendental Nature of Music

Music is magical in its own rite and it reminds us that we are not autonomous, alienated
beings, but integral members of an infinitely larger and more wondrous universe (Bird, 2008,
p. 17). Music is something that we engage in out there in the world. Music is omnipresent and
in great abundance. It is played in shops, cars, public transport, pubs, elevators, supermarkets,
etc. We have music at social events such as weddings, birthdays, funerals, religious worship,
parties, etc. Music is the main focus of these events and it is a consuming passion that brings us
together. We spend a lot of time and money on music, listening to it on stereos in the
car, at home, and on portable devices in our own time. There are hundreds of different types
and styles of music; we bond over them in groups of people who like the same music, and
from this bond a subculture can be born (North &
Hargreaves, 2008).
Subcultures emerged decades ago from different genres of music: for example, Beatlemania
among groups of young girls in the 1960s, punk rock in the 1970s, goths and the rise of heavy
metal in the 1980s, and rap in the 1990s. Each subculture wears clothes that symbolise its
allegiance to a musical style or group. Subcultures are greatly influenced by the
music they follow, which can be a good thing or a bad thing. The musicians themselves are role
models for the subcultures, especially the youth, who can be influenced by their favourite
musicians. Musical subcultures serve as a badge of affiliation which communicates attitudes,
values, and ideas to others (North & Hargreaves, 2008). Different types of music, be it heavy
metal or rap, despite the fact that some people find them offensive at times, can actually empower
others who sorely need to understand themselves as something other than helpless and
victimized (Bird, 2008, p. 17). These subcultures serve as a community in which we bond
over music and share our commonalities and differences of being human (North & Hargreaves,
2008).
Despite the various subcultures and their differences, what we all have in common with
music is that we surrender to its power. In an interview by Evans (2013), the famous musician
Brian Eno discusses the idea of surrender in music. Surrender according to Eno is about letting
go of the ego and being absorbed by the music. We surrender our egos in place of something
other than ourselves and by doing this we let go of control and transcend ourselves
momentarily.
According to Frankl (1973) the essence of human existence is self-transcendence as
being human is directed at something other than oneself. We can transcend ourselves by
serving others, doing a good deed, loving, creating, helping. By doing this we are living
authentically. Authenticity of one's being obtains only when it is lived in terms of self-
transcendence (Frankl, 1988). When we transcend ourselves, we move beyond the somatic and the
psychic dimensions and into the genuinely human dimension called the noological dimension
(Frankl, 1973). The true meaning of life is to be discovered out there in the world and not
within oneself (Frankl, 2004a). Man must actualise himself, yet this self-actualisation is not an
aim but a by-product of self-transcendence; if one made self-actualisation the goal, then one
would surely miss it (Frankl, 2004a). Within the noological dimension man has the capacity to
detach himself from himself (Frankl, 1973). If a person has a psychological neurosis such as
depression, then that person has the ability to detach from the depression by the use of
music, either by listening or by playing. When we surrender to music we actualise ourselves by
transcending ourselves. We can detach from our mind-set by surrendering to the music, and a
side-effect of playing music is that it can raise a person's self-esteem, thus
leading to increased motivation and self-efficacy (Broh, 2002); it can also increase emotional
sensitivity (Resnisow et al., 2004). Music can help us forget our problems, help us to
become better than we are, and free us from our mind to help us become more human. In
the depth of a depression the right song at the right moment can help lift the depression and
detach us from it. We are not our depression. The gift of music is that we can relate to it
and it can move something within us. It can also move us beyond ourselves. Through the
power of music we can transcend the mundane, if only for a short time. The conductor and pianist Daniel
Barenboim opines that music:
is this intangible substance that is expressible only through sound. [...] It has to do with the
condition of being human, since the music is written and performed by human beings who
express their innermost thoughts, feelings, impressions and observations. This is true of all
music [...] (2009, p. 12).

The purpose of music according to the musician and composer John Cage "is edifying, for
from time to time it sets the soul in operation. The soul is the gatherer-together of the disparate
elements [...], and its work fills one with peace and love" (2009, p. 62). All knowing and
loving is directed to something outward, and as humans we do not exist for the sake of self-
observation but to give ourselves away and to devote ourselves to something other than
ourselves; this is how we transcend our being (Frankl, 2004c). When we transcend ourselves
we are likely to find meaning in music, which we will now discuss.

Meaning and Music

Music occurs in all cultures throughout the world; even cultures that do not have a written language have
music (McGilchrist, 2009; Storr, 1997). Music has many functions, including forming new
bonds and friendships, while ethnomusicologists have found that music expresses emotion,
induces pleasure, accompanies dance, validates rituals and institutions, and promotes social
solidarity (Ball, 2010). In many African cultures music is an outlet for positive and negative
emotions. Rage, protest, and apathy are some of the emotions enacted through music and dance
in a constructive way. Certain ancient cultures such as the Egyptians associated music with
healing, and it was used by the Hebrews to help cure physical and mental disturbances - an early
form of music therapy (Ball, 2010). The World Federation of Music Therapy (WFMT) defines
music therapy as:
The use of music and/or elements (sound, rhythm, melody and harmony) by a qualified
music therapist, with a client or group, in a process designed to facilitate and promote
communication, relationships, learning, mobilisation, expression, organisation, and other
relevant therapeutic objectives, in order to meet physical, emotional, mental, social and
cognitive needs. Music therapy aims to develop potentials and/or restore functions of the
individual so that he or she can achieve better intra- and interpersonal integration and
consequently a better quality of life through prevention, rehabilitation or treatment
(Darnley-Smith & Patey, 2003, p. 7).

The definition is quite broad, encompassing the variety of methods of practice within the
field of music therapy. Music therapy can be applied in two ways, firstly, for its inherent
restorative or healing qualities, and secondly, as a means of interaction and self-expression
within a therapeutic setting (Darnley-Smith & Patey, 2003). In the first instance music takes
primary importance and the relationship between therapist and client is secondary, for example
the use of Vibroacoustic therapy in which the vibration of sound is used to heal physical and
psychosomatic disorders. In the second instance music is used as a means of interaction and
self-expression; examples would be community music therapy and improvisational
music therapy. Expressing ourselves using music is ideal in some settings, as sometimes what
we are feeling cannot be put into words but can be expressed via the universal language of music.
The psychotherapeutic approach to music therapy was pioneered by Mary Priestley, in
which music is used as a way of exploring the unconscious with an analytical music therapist
by means of sound expression. It is "a way of getting to know oneself possibly as a greater self
than one had realised existed" (Darnley-Smith & Patey, 2003, p. 27). The use of music in
psychotherapeutic sessions helps clients to share their inner life openly, for example,
responses to dreams or concerns about family life at home, and to use "the musical instruments
to express their current thoughts" (Darnley-Smith & Patey, 2003, p. 26).
of playing music and talking can lead the client and therapist to a deeper understanding of the
underlying psychological conflicts which may have contributed to the illness (Darnley-Smith
& Patey, 2003, p. 42).
In the clinical setting a method that is used is improvisation. Improvisation allows the
therapist and client to make music as part of the therapeutic process. The improvisation aims to
allow:
spontaneous sounds or musical elements become the means and focus of communication,
expression and reflection between therapist and client [and thus] . . . the therapist listens and
responds, also upon the musical instruments, and so the therapeutic relationship begins. The
use of free musical improvisation means that, on the whole therapists do not prepare what
musical material they are going to use. Instead, they wait to see what music emerges between
them and their clients when they start to play or sing together: the music is made up there
and then (Darnley-Smith & Patey, 2003, p. 42).

The relationship between the therapist and the client is a very important one, but in this
instance music, rather than language, becomes the form of communication. The therapist listens
and responds through music. Sometimes we cannot express ourselves through words, and so the
power that music creates helps greatly in letting it all out. In this sense, music can be quite
meaningful to us.
The will to meaning is a central tenet of Logotherapy, and humans are characterised by
their reaching out for meaning (Frankl, 2010). Man's primary concern in life is to discover
the meaning of his own existence, as Frankl (1973) asserts, and this is contrasted with the Freudian
idea that the goal of life is pleasure and the Adlerian idea that the goal of life is power (Frankl, 1973).
Pleasure and power are by-products of the fulfillments of life and can be destroyed if they are
made goals. The more we aim at pleasure and power the more we miss them. Meaning is the
goal of existence and, according to Frankl (2010), is not some general, abstract meaning in life
but the meaning of one's own life. Frankl (2010) asserts that there is no universal meaning of
life, but rather the unique meaning of individual situations - the meaning of the moment.
uniqueness of meaning, human beings too are unique in terms of their existence and essence,
and each one of us cannot be replaced (Frankl, 2010). Music is a human phenomenon (Scruton,
1997; Storr, 1997) in which we can find meaning. Meaning is something that is not given but
something that is discovered, and it can be discerned through values, of which there are three
according to Frankl (1988, 2004b). Meaning can be sought in creative, experiential, and
attitudinal values. There are times when life demands that we realise one or even all of these
values. Sometimes we need to enrich the world through our actions or enrich ourselves through our
experiences. Creative values are realised when we give something to the world through self-
expression, such as creating music. By creating we transcend ourselves momentarily by
realising our creative potential.
Another way we can realise meaning is through experiential values, which are realised
when we are receptive to the world, when we surrender to the beauty of music. We have
experiences of music in concert halls, festivals, nightclubs, and other music events. Music can
also open up new experiences, such as travelling to a new country for a music festival and experiencing a
new culture. For example, the Basongye people of Africa consider music to be inseparable
from feelings of happiness. For them, listening to and performing music would be an experience
of positivity (Rice, 2014).
Meaning can also be found by realising attitudinal values, which are realised when an individual
is faced with a situation that is unalterable and is forced to change his or her attitude toward the
situation. This is a way of self-transcending and finding meaning in unavoidable suffering.
Music has the ability to help change our attitudes toward life, for example through song lyrics that resonate
with an individual's current life situation. By surrendering to the beauty of music we can find
meaning in our lives, which can alter our attitude to a given situation. For most
people, hearing a familiar song will elicit memories, whether happy or sad, of some
time in their lives when they listened to that song. People extol the virtues of listening to music,
describing how it saved them or helped them through a traumatic event in their lives.
The philosophers Schopenhauer and Nietzsche both adored music (Blumenau, 2015).
Schopenhauer considered music to be the highest of the arts and an escape from the cruel order
of life; for Nietzsche, in contrast, it was something that could reconcile us with life rather than
detach us from it (Storr, 1997). We find solace in music and it has the power of
communicating, announcing, and confirming one's existence, while "a man who is enjoying
supreme artistic pleasure [...] never doubts for a moment that his life is meaningful" (Frankl,
2004b, p. 55). The gift of music is that we can relate to it and it can move something within us.
It can move us beyond ourselves. Through the power of music we can transcend the mundane
in the service of something higher than ourselves, and this we can do in religious music.

Music and Spirit

Music encompasses the spiritual life of the person, not least in religious practice, as most religions
use music in ceremony to promote and maintain their respective doctrines
(Bird, 2008). Religious music has a strong and very long tradition in the history
of man (Theorell, 2014). Music also helps believers to become closer to God, as can be seen in
the mystical tradition of Islam called Sufism, where dance and music are seen as paths to God (Storr,
1997; Ball, 2010). Music has an intangible nature, much like one's relationship with God or the
numinous. In the fifth century St. Augustine was worried about the emotionality of music even
though he enjoyed it: he feared that the singing might move worshippers more than
what was sung (Ball, 2010; Storr, 1997). Religious music addresses our collectivity, our
spirituality, and our experiences of the world (Theorell, 2014). Two types of religious music that
do this are Gospel choir music and Gregorian chant. These types of music are powerful
mediators of emotion and cohesiveness (Theorell, 2014; Hiley, 2013). The experience of both
of these types of religious music is a powerful one, and one need not be a believer to partake in
and enjoy them. As the musician Brian Eno opines, the power of Gospel music is that it is:

always about the possibility of transcendence, of things getting better. It's also about the
loss of ego, that you will win through or get over things by losing yourself, becoming part
of something better. Both those messages are completely universal and are nothing to do
with religion or a particular religion. They're to do with basic human attitudes and you can
have that attitude and therefore sing gospel even if you are not religious (Morley, 2010).

Religion is a human and existential phenomenon whose purpose is salvation (Frankl, 1988;
2011; Costello, 2015). One does not have to be a believer to be healed by religious music.
For believers, religious music is about being closer to God (Unity) and transcending one's being, if
only for a few minutes, while for the non-believer it is about the cohesive nature of music which,
religious or not, brings us closer to each other in the service of our fellow human beings. We
have forgotten that the natural state of the human being, only exists when our spiritual nature
is awakened, for we are, in essence, spiritual beings (Browne, 2009, p. 337). Music has
the power to awaken our natural state - our noetic core. Music is spiritual according to
Fitzpatrick (2013) who opines:
Music is one of our most powerful gateways to connect to our spiritual nature - our divine
source - the unseen, as well as to the universe around us and those other divine beings that
inhabit it with us. I know of no other medium that can transport us as immediately, on all
levels of our existence, beyond the limits of our intellect and physical body to a higher, often
blissful and inexplicable state. Music has the unique ability to transform us independently of
our thinking mind, to a place uninhibited by the judgments, doubts, fears that too often
dictate the narration of our thoughts and self-limiting beliefs (Fitzpatrick, 2013).

As music permeates the somatic, psychical, and spiritual (noetic) dimensions of the
individual, we will now examine how music can be useful in the Logotherapeutic setting by
using it as a dereflection technique to stop hyper-reflection, which is excessive conscious
reflection. We will begin by elaborating upon dereflection.

Clinical Application of Music in Logotherapy: Dereflection

Dereflection is characterised by a person's fight, and this fight is not against but rather for
something (Frankl, 1978). The more a person aims at something the more he or she misses it. When
one pays too much attention to the task in hand, it is called hyper-reflection. Hyper-reflection is
excessive self-observation, and this can cripple an individual. In the case of a violinist who tried
to play as consciously as possible, even the simple act of placing the violin on his shoulder was done
in great detail and with full awareness. The violinist performed in complete self-
reflection, which led to an artistic breakdown (Frankl, 2011). The violinist was hyper-reflecting on
his ability to play, which ultimately crippled him. To help the violinist with his hyper-reflection
Frankl (2011) employed his technique of dereflection. Dereflection is a tenet of Logotherapy
and is about averting a persons attention away from hyper-reflection and toward something
else (Frankl, 1978). The use of this technique moved the violinist away from his excessive

conscious reflection and gave him back trust in his artistic conscience. Dereflection liberates
creativity from the unconscious, and the goal here is to make the unconscious conscious and then
unconscious again. We must turn an unconscious potentiality into a conscious act and restore it
to an unconscious habit (Frankl, 2011).
Creation and performance are essential tasks of the musician. These tasks must occur
unconsciously but if they occur consciously then the musician will face an artistic block.
Excessive conscious reflection is harmful to the musician. Hyper-reflection can become a
severe handicap for the creative individual. Frankl (2011) asserts that to produce on a
conscious level is doomed to failure as the creative process is unconscious. If we reflect upon
the creative process on a conscious level it will ultimately fail. Reflection is something that
occurs afterward.
Relaxation techniques can be used as a form of dereflection. They can
significantly aid concentration, enrich creativity, and help decrease anxiety, tension, stress,
and panic attacks that one may be suffering from (Ortiz, 1997). One such relaxation technique,
described by Ortiz (1997), is centering, which is about finding inner calm and relaxation.
Playing an instrument is a great way to become centered and become one with the instrument.
The exercise that Ortiz (1997) outlines is replicated below:

If you have access to a piano, try the following:

1. Sit at the piano and choose a note, any note that sounds good, or that happens to feel
particularly pleasing. Select a tone in your instrument that happens to resonate with
your particular vibrational need at the moment.
2. Having found the right note, gently place your finger on it. Close your eyes, and
breathe regularly, and naturally. With your finger (or thumb) on the selected note press
down on the key and hold it.
3. As the piano string reverberates, simply listen to the sound until it is no longer audible.
4. As you hold your finger down on the key, feel the emerging oscillations as these
quiver up your finger, through your arm, and up into your body.
5. Continue to stroke the same piano key, to play the same note, until you are able to
follow the emerging vibrations into your very core.
6. Continue to resound (i.e., play) this note until it escorts you to what you feel is a place
of inner serenity, a place of internal tranquility where silence is so deep that it actually
drowns out all other sounds. (Ortiz, 1997, pp. 224-225).

The above centering exercise is useful in therapy sessions with musicians and can be used with
many instruments. The exercise dereflects the musician's attention and also relaxes the body
and eases the conscious mind. It puts the musician into a mindful state, which helps liberate
creativity from the unconscious. It is useful for unblocking musicians struggling to create and
helping them listen to the voice of their artistic conscience. Exercises like these can be used in the
clinical setting to help supplement the Logotherapeutic technique. Exercises like the one above
have been used by Ortiz (1997) for clinical and personal issues such as self-esteem, depression,
stress, anger, insomnia, and relaxation. The power of music "enables the observing self to reach
new levels of awareness while providing the guiding structure for inner, personal exploration"
(Ortiz, 1997, p. 41). One common clinical issue is depression. Ortiz (1997) developed a
practical exercise for helping lift a depressed mood. It is called modifying affect and is
replicated in part below:
If you are having difficulty expressing feelings which may be underlying or causing
your depression, select music that will function as a catalyst to assist you in releasing these
feelings. For example, if your feelings of depression are tied to feelings of anger,
animosity, or annoyance at someone, but you feel it would be unwise, inappropriate or
politically incorrect to demonstrate these feelings directly (to your boss) you may want to
select some very upbeat, loud, fast, and frenzied, turbulent, or energetic sounding music
from your favorite genre, such as rock (classic rock, heavy metal, punk, hard rock,
alternative), classical (a vigorous piece with explosive crescendos), new age (up-tempo,
highly rhythmic), or big band (highly charged, energized). []
Play this music at a loud, but not uncomfortable, volume. As it plays, either sing or
emote along with it (act it out!). Allow yourself to dance, exercise, or simply respond to
the beat. If you choose, allow the music to give you permission to release your feelings
and express them while in the privacy and comfort of your safe haven by (virtually)
hollering at your boss (neighbour, partner, child), effectively airing any pent-up feelings
you may have of wrath or indignation. Allow yourself to let go as you become one,
revitalized through the energy generated by the rage and contained within the music (Ortiz,
1997, p. 9).

The above exercise can alter a person's perception and mood to a more positive outlook.
Exposure to cheerful and uplifting music can entrain the body and modify a person's mood
(Ortiz, 1997). Examples of music that can help to do this are Beethoven's Moonlight Sonata
or R.E.M.'s "Shiny Happy People" (Ortiz, 1997).
Listening to happy music won't immediately have much of an effect in lifting a
depressed mood; rather, according to Ortiz (1997), the lifting needs to be done in four stages.
The first stage is called "joining with your depression". This stage requires a person to
listen to music that expresses how he or she feels. We need to sit with our depression for a time to
feel it and accept it. Examples of music provided by Ortiz (1997) for this stage include Pink
Floyd's "Comfortably Numb" and The Temptations' "I Wish It Would Rain". Each person is
encouraged to personalise his or her own selections.
The second stage is "moving up or out of your depressive state". This stage is the
beginning of a person's ascension out of a depressed state. Examples of music for this stage
include Led Zeppelin's "Stairway to Heaven" and Tom Petty's "I Won't Back Down".
The third stage is "entering a brighter, more up-beat mood state". This stage is a
person's entry into a happier state. Song examples include Led Zeppelin's "Rock & Roll" and
The Beach Boys' "Good Vibrations".
The fourth and final stage is called "turning the transition into positive energy". This
stage focuses on moving on from a depressed state. Music examples include Bruce Springsteen's
"Born in the USA" and The Beatles' "All You Need Is Love".
These stages help an individual to sit with their depressed mood, confront it, and move
on. As the above exercises show, music can be used as a therapeutic tool. Many of us think that
music is some sort of background embellishment that we listen to while we go about our daily
tasks. We don't realise the power of music and how it is used to manipulate our moods,
thoughts, and behaviours. We are unaware of how music is used to manipulate our spending
habits in supermarkets and shops, and even in nightclubs and pubs, and that it is meticulously
selected by experts to regulate our moods and behaviours (North & Hargreaves, 2008; Ortiz,
1997). We must make use of such a powerful tool to help in the therapeutic process.
Music as a therapeutic tool can be used to help supplement the Logotherapeutic
technique of dereflection. I would argue that music has a future within Logotherapy and
Existential Analysis not as music therapy but as what I would call Music Logotherapy. Music
Logotherapy would employ the above exercises to aid in the dereflection technique and also
make use of the exercises as detailed in The Tao of Music: Sound Psychology by John M. Ortiz.
In no way is Music Logotherapy designed to cure any mental health problem or supplant any
technique. It is merely a tool to aid Logotherapeutic techniques, as it can help us to move our
attention away from hyper-reflection. Frankl opines that "human persons are wholly human only
when they are absorbed in some thing or are completely devoted to another person. And only
those who forget themselves are completely themselves" (2004c, p. 233). Music has the power
to make us wholly human.

Summary

This chapter discussed the transcendent nature of music, how meaning can be found in music,
music and spirituality, and the application of music to the Logotherapeutic technique of
dereflection, as well as the proposed use of music to supplement Logotherapy - Music Logotherapy.
We will now conclude this thesis by drawing together what we have found.

CONCLUSION

This thesis has examined the interaction of music within the framework of Frankl's
dimensional ontology of soma, psyche, and noûs.
In the first chapter we discussed how music interacts with the somatic dimension. We
orientated our discussion around how music interacts with the human brain, the physiology of
the person, stress, genetics, and children. We also examined how Dionysius represents the
somatic dimension according to Nietzsche's description in his book The Birth of Tragedy.
We first discussed the myth of Dionysius as described by Nietzsche in his
aforementioned book. What we found is that Dionysius relates to the soma through
physiological pursuits. Dionysius is wild creativity untamed. He is a work of art for which his
soma is the canvas. Dionysius represents the primal part of man, as Paglia (1991) relates him to
the older reptilian brain. From here we moved on to the brain.
We examined the complexity of the brain processes involved with processing music.
We found that the cerebellum (older reptilian brain) is crucial to musical timing and also
contains massive connections to emotional centres of the brain such as the amygdala. The
connections music has to various parts of the brain are also involved with other processes such
as emotion, which is why music elicits emotion. Various parts of the brain are activated when
we listen to music such as the auditory cortex, basal ganglia, cerebellum, BA44, BA47, and the
mesolimbic system. All of these are involved with emotion, music structure, melody, rhythm,
harmony, and timing. There seems to be no particular part of the brain involved specifically
with music but a wide network of brain processes. We also dispelled the myth
that musicians are right-brained, as professional musicians process music in their left
hemisphere while amateurs process music in their right hemisphere. This has to do with the
learned and analytic processes of the left hemisphere. The experience of music for professional
musicians is more cognitive while amateurs would have a more holistic experience, which is
based in the right hemisphere. Our investigation then moved on to physiology.
Our investigation found that the circadian rhythm of our bodies plays a part in how we
react to music, and that our bodily rhythms match up with the
rhythm of the music. Simply put, our bodies entrain to the beat of the music. We then moved
on to look at the effect music has on stress.
Music can increase the heart rate and concentration of the stress hormone cortisol.
Music can also activate the release of endorphins, raise blood pressure, help the body form
clots, and increase the activity of some parts of the immune system. We also found that fast-
paced music increases cortisol levels in untrained athletes but not in trained athletes. Research
has also shown us that musical ability helps lower stress levels by helping to repair worn-out
cells. We then moved on to genetics and its relation to musical ability.
What the research in genetics has shown is that the heritability of musical ability is 50 percent. The main
researcher in the field of genetics and music, Irma Järvelä, has asserted that heritability is not the
key to musical genius, as exposure is just as important. Musical ability is a complex interaction
of genes and environment. Early exposure has been shown to aid in the cognitive development
of the child.
By examining the ontogenetic development of music in children we concluded that
music is older than language. We recognised this because music and language have a lot in
common and are somewhat intertwined. From the research we have seen that music has innate
properties, as the child uses babbling and cooing to communicate whether it is hungry, sleepy, etc. We
looked at research that has shown that babies prefer lullabies to any other music, and also that
lower pitches elicit more of a response than higher pitches. The Mozart Effect was a
controversial issue, as it was believed that listening to classical music made babies
more intelligent. This is not the case; the media hyped a slight increase in IQ, as a study by
Christopher Chabris showed. Research has, however, shown that music helps in cognitive development.
Educational systems have been set up, such as those by Kindermusik, to promote music to aid in
the development of the child.
Having investigated the various aspects of the relationship between music and the
soma, we have concluded that music's interaction with the soma is quite extensive and
complex, which can help us understand the relationship we have with music and the somatic
dimension. We shall now discuss what we have found in musics interaction with the next
dimension psyche.
The next chapter investigated music in the psychological dimension. The psychological
processes we investigated were perception, language, and emotions and their relationship to
music as these elements gave us a deeper understanding of the relationship between the human
psyche and music. We also discussed the mythic god Apollo as described by Nietzsche in his
book The Birth of Tragedy as Apollo symbolises the psyche.
Nietzsche's description of Apollo helped us understand the nature of the human psyche
and helped set the tone for the rest of the chapter. Apollo relates to the psyche through intellect
and reason and is associated with aesthetics, the arts, and the psychological traits of reason,
structure, and self-knowledge, which makes him the perfect ambassador of the human psyche.
In the section on the perception of the aesthetics of music we examined Roger
Scruton's (1997) description of the aesthetics of music and how we perceive music. He
differentiates between sound and music and calls music an art of sound - music is organised
sound. Scruton asserts that metaphor is essential to musical experience and is a
fundamental aspect of understanding sound as music. If you take metaphor away, you cease to
be able to describe music, only sound, because metaphor describes the intentional object of the
musical experience. Sound occurs in a sound world and music is fashioned by man.
focused our investigation on language and music.
We first examined the origins of language and music, as they developed around the
same time, but there is much debate as to which occurred first and whether they developed
side by side. What we do know from looking at young children is that music develops first
ontogenetically, as the child begins to sing earlier than to speak. Music and
language are intertwined, so we looked at the aesthetics of music and language to ascertain the
differences and similarities by looking at rhythm, melody, syntax, and meaning. What our
investigation found in rhythm is that in music it is a regularly timed beat but in language it is
more complex, as there are three approaches to rhythm in language. A similarity between music
and language is that they both use a grouping structure. We looked at meaning in language, which
refers to semantic reference and predication, and meaning in music; of course the two are
quite different, as music has no concrete reference. We then moved on to further the
discussion by looking at music and emotions, of which we discussed four theories.
One of the theories (Melodic Cues) proposed that music is a language of emotion and
that musical intervals and patterns portray certain emotions. We contrasted this with Susanne K.
Langer's Morphology of Feeling theory, which asserts that music and emotion have no direct
connection but resemble each other, and that this resemblance gets mistaken for expression. Moving
away from philosophical discussion, we examined what the research into music psychology was
postulating. Huron's ITPRA Theory looked at the underlying mechanism of expectancy using the
five associated categories of imagination, tension, prediction, reaction, and appraisal.
Huron's theory is an advancement in the understanding of emotional experiences associated
with music from a Western tonal music point of view. We moved on to the Multiple
Mechanisms Theory, which is a very broad attempt to look at as many underlying mechanisms
of musical emotion as possible. The theory looks at seven underlying mechanisms, each of
which we discussed - brain stem response, rhythmic entrainment, evaluative conditioning,
emotional contagion, visual imagery, episodic memory, and musical expectancy. The
underlying mechanisms offer a comprehensive account of emotional responses, as there is no
single mechanism that can account for all musical emotions. We
moved our discussion on to the properties of music and emotional response, how we respond
emotionally to music, and a factor which isn't spoken about much - context. To
express happiness musicians used bright timbre, fast tempo, and exaggerated rhythmic
contrasts, and to express sadness they used a slow tempo and slow and deep vibrato.
Discerning emotional meaning in music is problematic, as music involves culture-specific cues
and most research is based upon Western tonal music. The FES model developed by Thompson
& Balkwill (2010) helped us to understand the universality and cultural specificity of music by
examining its phylogeny and ontogeny. Context is important, as the research has
shown that some responses to music are conditioned by it: listening to an emotive song
on a bus may not elicit the sad emotions it would at home. Our research then moved to
discover music and the purely human dimension - the noological dimension.
The chapter on the noological dimension examined the transcendental nature of music,
meaning and music, music and spirit, and the application of music to the Logotherapeutic
technique of dereflection.
We first discussed the transcendental nature of music, in that it is omnipresent and it
takes us out of ourselves. Subcultures are formed from the bonding power of music and this bond can
help us to understand ourselves. Despite the different types of music, there is one thing all have
in common - surrender. We surrender ourselves to the music. The essence of human existence
is self-transcendence, as Frankl opines. By transcending ourselves we become truly human. Music
is a powerful tool for transcending the human being. Music is a mainstay in all cultures and it is
through music that man forgets himself, transcends his own being, and becomes what he is -
wholly human. Music is so powerful that it can detach us from a psychological neurosis such
as depression. It can aid in combatting mental illness and in detaching ourselves from our illness,
as we are not our illness. We suffer from an illness but we are not the illness, and music can
help us to remember that. Frankl states that it is by transcending our being that we can discover meaning.
Meaning can be found in music through Frankl's three categories of values: creative, experiential, and attitudinal. We discovered that creative values can be realised in the creation of music: in expressing creative values we give something to the world and transcend ourselves momentarily. Experiential values can be realised when we are receptive to the world, when we surrender to the beauty of music; we can find meaning in our experiences of music in concert halls, festivals, nightclubs, and other musical events. Meaning can also be found in the orientation of our attitude. Music can help change our attitude toward an unalterable fate, and its inspiring beauty is one way in which we can alter our attitude toward life. Upbeat music can help turn around an unpleasant day, and the transcendent quality of music can alleviate suffering, if only for a short while. Music has the power to affirm one's existence.
Music can affirm our existence through its role in religious and spiritual practice. What we discovered is that religious music addresses our collectivity, our spirituality, and our experience of the world. We also looked at the power of Gospel music and how it can transcend the individual. Religious music is about transcending the individual and drawing closer to God. Music connects us to our spiritual nature.
What we have also discovered in this thesis is that the somatic and noetic dimensions share a connection, and that connection is the idea of surrender. Nietzsche's description of Dionysius is the clearest example of surrender in the soma: here we let go of the control of the ego, give in to the soma, and become a work of art. In noetic surrender, we give ourselves up to something other than ourselves. We again let go of the ego, but to something higher: a transcendent being, which some call God.
The final part of this thesis examined the clinical application of music in Logotherapy through the technique of dereflection. We discussed the technique of dereflection and gave an example of a violinist who suffered greatly from hyper-reflection and how dereflection helped him overcome his artistic block. We also discussed exercises developed by Ortiz (1997), who used music in treating clinical as well as personal problems, and explored how one of his centering exercises could be used as a dereflection exercise to help musicians overcome a block in their artistic creation and listen to their artistic conscience. Music can also be used with non-musicians for clinical problems such as insomnia, depression, stress, and anger, as well as for relaxation. We also provided an excerpt from Ortiz (1997) setting out a four-stage exercise for lifting a depressed mood. The four stages gradually lift the person out of a depressed mood using the songs listed above, which can also be personalised. Lifting a depressed mood is gradual rather than immediate: one must sit with the mood, confront it, and then move on. Music is a way to dereflect one's attention away from excessive reflection (hyper-reflection) and can also lift a mood by drawing on the effects music has on the body, such as entrainment.
I would argue that music has a future in Logotherapy as Music Logotherapy, for we discovered from the research in this thesis that music can entrain the body by way of our circadian rhythm, influence our emotions, help the individual transcend their suffering, and detach a person from a neurosis. I would also argue that certain types of music fit predominantly into a specific dimension because of the dominance of particular elements within the music, as shown below:

Soma (body): Techno
Psyche (mind): Classical
Nous (spirit): Gregorian chant
On the basis of the above research, I would say that techno music fits into the somatic dimension because its steady, thrumming beat manipulates the circadian rhythm of the listener. Classical music fits into the psychical dimension because it can elicit strong emotions. Gregorian chant fits into the spiritual dimension because it is used in worship to elevate the individual in service of a higher being.
Music has the ability to lift us beyond our somatic and psychological disturbances and bring us into the noetic dimension, where we are truly human. Music Logotherapy can utilise the exercises designed by John M. Ortiz in his book The Tao of Music: Sound Psychology. These exercises supplement, not supplant, the therapeutic process. Music can be used as a dereflection technique and to support work on clinical issues such as depression, pain, low self-esteem, stress, anger, and insomnia. It can also be used for personal issues such as grief and loss, procrastination, memory recall, physical exercise, creativity, relationship issues, romantic intimacy, and improving communication. We ought to take advantage of the influence of music on the human person to help them heal, and we would be remiss not to utilise music as a supplement to Logotherapy.
BIBLIOGRAPHY

Ball, P. (2010). The Music Instinct: How Music Works and Why We Can't Do Without It. London: The Bodley Head.
Bird, J. (2008). The Spirituality of Music. Canada: Northstone.
Blesser, B. (n.d.). The Seductive (Yet Destructive) Appeal of Loud Music. Retrieved
September 21st 2015, from
http://www.blesser.net/downloads/eContact%20Loud%20Music.pdf
Blood, A. J., Zatorre, R. J. (2001). Intensely pleasurable responses to music correlate
with activity in brain regions implicated in reward and emotion. Proceedings
of the National Academy of Sciences, 98, 11818-11823.
Blumenau, R. (2015 June/July). Music in Philosophy. Philosophy Now,108, 22-23.
Botton, A. de, & Armstrong, J. (2013). Art as Therapy. London: Phaidon.
Broh, B. A. (2002) Linking extracurricular programming to academic achievement:
who benefits and why? Sociology of Education, 75, 69-95.
Browne, I. (2009). Music and Madness. Ireland: Atrium.
Cage, J. (2009). Silence: Lectures and Writings. London: Marion Boyars.
Coleman, J. M., Rebollo-Pratt, R., Stoddard, R. A., Gretsmann, D. R., and Abel, H. H.
(1997). The Effects of the Male and Female Singing and Speaking Voices on
Selected Physiological and Behavioural Measures of Premature Infants in the
Intensive Care Unit. International Journal of Arts & Medicine 5, 2, 4-11.
Costello, S. J. (2015). The Spirit of Logotherapy. Religions 2016, 7, 3.
Darnley-Smith, R. & Patey, H. M. (2003). Music Therapy. London: Sage
Publications.
Deliège, I., & Davidson, J. W. (Eds.). (2011). Music and the Mind: Essays in Honour of John Sloboda. Oxford: Oxford University Press.
Evans, J. (June 9th 2013). Brian Eno on surrender in art and religion. The History of
Emotions Blog. Retrieved 31st March 2016 from
www.emotionsblog.history.qmul.ac.uk/?p=2677
Fitzpatrick, F. (March 5th 2013). Why Music, Part 9: Music and Spirituality.
Huffington Post. Retrieved June 22nd 2015, from
http://www.huffingtonpost.com/frank-fitzpatrick/music-
spirituality_b_3203309.html
Frankl, V. E. (1973). Psychotherapy and Existentialism: Selected Papers on
Logotherapy. Great Britain: Pelican.
Frankl, V. E. (1978). The Unheard Cry for Meaning. New York: Touchstone.
Frankl, V. E. (1988). The Will to Meaning: Foundations and Applications of
Logotherapy. USA: Meridian.
Frankl, V. E. (2004a). Man's Search for Meaning. London: Rider Books.
Frankl, V. E. (2004b). The Doctor and the Soul: From Psychotherapy to Logotherapy.
London: Souvenir Press.
Frankl, V. E. (2004c). On the Theory and Therapy of Mental Disorders. New York:
Brunner-Routledge.
Frankl, V. E. (2010). The Feeling of Meaninglessness: A Challenge to Psychotherapy
and Philosophy. Wisconsin: Marquette University Press.
Frankl, V. E. (2011). Man's Search for Ultimate Meaning. London: Rider Books.
Gabrielsson, A., & Juslin, P. N. (1996). Emotional expression in music performance: Between the performer's intention and the listener's experience. Psychology of Music, 24, 68-91.
Gross, R. (2009). Psychology: The Science of Mind and Behaviour. UK: Hodder
Arnold.
Hiley, D. (2013). Gregorian Chant. UK: Cambridge University Press.
Ilie, G., & Thompson, W. F. (2011). Experiential and cognitive changes following seven minutes exposure to music and speech. Music Perception, 28, 247-264.
Jabr, F. (March 20th 2013). Let's Get Physical: The Psychology of Workout
Music. Scientific American. Retrieved March 11th 2015, from
http://www.scientificamerican.com/article/psychology-workout-music/
Jun, P. (March 7th 2011). Music, Rhythm and the Brain. Brain World Magazine.
Retrieved April 24th 2015, from http://brainworldmagazine.com/music-
rhythm-and-the-brain-2/
Jung, C. G. (1991). Psychological Types. Revised Trans. R. F. C. Hull. London:
Routledge.
Juslin, P. N. (2011). Music and Emotion: Seven questions, seven answers. In Deliège, I., & Davidson, J. W. (Eds.), Music and the Mind: Essays in Honour of John Sloboda (pp. 113-135). New York: Oxford University Press.
Juslin, P. N., Liljeström, S., Västfjäll, D., & Lundqvist, L. (2010). How does music evoke emotions? Exploring the underlying mechanisms. In Juslin, P. N., & Sloboda, J. A. (Eds.), Handbook of Music and Emotion (pp. 605-642). UK: Oxford University Press.
Kaufmann, W. (1974). Nietzsche: Philosopher, Psychologist, Antichrist. Princeton:
Princeton University Press.
Khalfa, S., Bella, S. D., Roy, M., Peretz, I., & Lupien, S. J. (2003). Effects of relaxing music on salivary cortisol level after psychological stress. Annals of the New York Academy of Sciences, 999, 374-376.
Langer, S. K. (1941). Philosophy in a New Key: A Study in the Symbolism of
Reason, Rite, and Art. New York: Mentor Books.
Levitin, D. (2006). This is Your Brain on Music. London: Atlantic Books.
Mannes, E. (2011). The Power of Music: Pioneering Discoveries in the New Science of Song. New York: Walker and Company.
McGilchrist, I. (2009). The Master and His Emissary. Great Britain: Yale.
Miluk-Kolasa, B., Obminski, Z., Stupnicki, R., & Golec, L. (1994). Effects of Music Treatment on Salivary Cortisol in Patients Exposed to Pre-Surgical Stress. Experimental and Clinical Endocrinology, 102, 118-120.
Morley, P. (2010, January 17). On gospel, Abba, and the death of the record: An audience with Brian Eno. The Guardian. Retrieved from http://www.theguardian.com/music/2010/jan/17/brian-eno-interview-paul-morley
Nietzsche, F. (1995). The Birth of Tragedy. Trans. Clifton P. Fadiman. New York: Dover.
North, A., & Hargreaves, D. (2008). The Social and Applied Psychology of Music. Oxford: Oxford University Press.
Ortiz, J. M. (1997). The Tao of Music: Sound Psychology. USA: Weiser.
Paglia, C. (1991). Sexual Personae: Art and Decadence from Nefertiti to Emily Dickinson. New York: Vintage.
Panksepp, J. (1995). The Emotional Sources of "Chills" Induced by Music. Music Perception, 13, 171-207.
Petitto, L. A. (1988). Language in the Prelinguistic Child. In F.S. Kessell (ed.) The
Development of Language and Language Researchers: Essays in Honour of
Roger Brown. Hillsdale, NJ: Erlbaum.
Resnicow, J. E., Salovey, P., & Repp, B. H. (2004). Is recognition of emotion in music performance an aspect of emotional intelligence? Music Perception, 22(1), 145-158.
Rice, T. (2014). Ethnomusicology: A Very Short Introduction. UK: Oxford
University Press.
Schneck, D. J. & Berger, D. (2006). The Music Effect: Music Physiology and Clinical
Applications. London: Jessica Kingsley Publishers.
Scruton, R. (1997). The Aesthetics of Music. UK: Oxford University Press.
Storr, A. (1997). Music and the Mind. Great Britain: Collins.
Swaminathan, N. (September 13th 2007). Fact or Fiction?: Babies Exposed to
Classical Music End Up Smarter. Scientific American. Retrieved 20th April,
2015, from http://www.scientificamerican.com/article/fact-or-fiction-babies-
ex/
The RSA (October 21st 2011). RSA Animate: The Divided Brain. Retrieved April 24th 2015, from https://www.youtube.com/watch?v=dFs9WO2B8uI
Theorell, T. (2014). Psychological Health Effects of Musical Experiences: Theories, Studies and Reflections in Music Health Science. London: Springer.
Thompson, W. F. (2009) Music, Thought, and Feeling: Understanding the
Psychology of Music. Second Edition. Oxford: Oxford University Press.
Thompson, W. F., & Balkwill, L.-L. (2010). Cross-cultural similarities and differences. In Juslin, P., & Sloboda, J. (Eds.), Handbook of Music and Emotion: Theory, Research, Applications (pp. 755-788). Oxford: Oxford University Press.
Todd, N. (2001). Evidence for a Behavioural Significance of Saccular Acoustic Sensitivity in Humans. Journal of the Acoustical Society of America, 110(1), 380-390.
Todd, N., & Cody, F. (2000). Vestibular Responses to Loud Dance Music: A physiological basis for the Rock and Roll threshold? Journal of the Acoustical Society of America, 107(1), 496-500.
Torres, I. (March 27th 2015). The Genetics of Musical Talent: An Interview with Irma Järvelä. Science in the Clouds. Retrieved March 30th, 2015, from http://scienceintheclouds.blogspot.ie/2015/03/the-genetics-of-musical-talent.html
Trehub, S. (2000). Human Processing Predisposition and Musical Universals. In N. L. Wallin, B. Merker, & S. Brown (Eds.), The Origins of Music. Cambridge, MA: MIT Press.
Trevarthen, C. & Malloch, S. (2000). The Dance of Well-being: Defining The
Musical Therapeutic Effect, Nordic Journal of Music Therapy, 9 (2): 3-17.
University of Helsinki. (2009, May 27). Genetic Basis Of Musical Aptitude:
Neurobiology Of Musicality Related To Intrinsic Attachment
Behavior. ScienceDaily. Retrieved April 21, 2015 from
www.sciencedaily.com/releases/2009/05/090526093925.htm
Wright, K. (December 12th 2008). Musical Ability seems to be 50 Percent Genetic.
Discover Magazine. Retrieved March 28th, 2015, from
http://discovermagazine.com/2009/jan/052