Article

Musical mode and visual-spatial cross-modal associations in infants and adults

Musicae Scientiae
1–19
© The Author(s) 2017
Reprints and permissions: sagepub.co.uk/journalsPermissions.nav
DOI: 10.1177/1029864917705001
journals.sagepub.com/home/msx

Leonardo Bonetti
University of Bologna, Italy

Marco Costa
University of Bologna, Italy

Abstract
The classification of major and minor musical stimuli along five dichotomous scales (happy–sad, pleasant–unpleasant, up–down, light–dark, and warm–cold colors) was investigated in two studies involving 51 children aged 4–6 years and 168 adults. Musical stimuli were six chords and six harmonized melodies differing in mode (major, minor). Furthermore, fluid intelligence was assessed in both infants and adults. The associations between major mode and happiness and between minor mode and sadness increased from a proportion of 58% at the age of 4, to 61% at the age of 5, 72% at the age of 6, and 92% in adults. The major–up, minor–down associations were 62% in both the 5- and 6-year-olds, and increased to 84% in adults, while the major–light, minor–dark associations increased from 59% at the age of 5 to 72% in 6-year-olds and 92% in adults. Warm–cold colors were systematically associated with, respectively, major and minor stimuli in adults but not in children. Fluid intelligence was strongly related to the child's ability to associate happy and sad faces to major and minor musical stimuli.

Keywords
colors, cross-modal associations, intelligence, musical development, musical mode, spatial perception

The integration of information from different sensory modalities, with a particular emphasis on auditory–visual correspondences, has received extensive attention from the scientific community in recent decades (Calvert, Spence, & Stein, 2004; Spence, 2011; Spence & Deroy, 2012). This study aimed to track the development of cross-modal associations between major–minor musical stimuli and visual-spatial and emotional dimensions in children, comparing the results with an adult sample. Specifically, we assessed the associations between musical stimuli differing in mode and two colorimetric dimensions (lightness–darkness, warm–cold colors), one spatial dimension (up–down), one emotional dimension (happiness–sadness), and one aesthetic dimension (pleasantness–unpleasantness) in children ranging from 4 to 6 years old and in adults. Furthermore, we explored for the first time the relation between fluid intelligence and cognitive maturation, assessing the ability to associate happiness and sadness with, respectively, major and minor mode in children.

Corresponding author:
Leonardo Bonetti, Department of Psychology, University of Bologna, Viale Berti Pichat, 5, Bologna I-40127, Italy.
Email: leonardo.bonetti2@unibo.it
Previous research on cross-modal associations between auditory and visual-spatial stimuli has mainly centered on pitch, while there is a paucity of evidence on mode (Spence, 2011). Marks, Hammeal, and Bornstein (1987), for example, obtained matches between sounds and lights that varied along such dimensions as visual brightness, pitch, and loudness. Adults matched the brighter of two lights with the higher-pitched and the louder of two sounds. Four-year-old children mirrored the adults, as assessed by their cross-modal matches, showing congruence between both loudness and brightness (75% of 4-year-olds) and pitch and brightness (90% of 4-year-olds). Congruence between loudness and brightness, and between pitch and brightness, characterizes perception in young children and in adults.
Several experiments using the speeded classification task (Marks, 2004) have also revealed a strong association between pitch and visual-spatial position (Ben-Artzi & Marks, 1995; Bernstein & Edelstein, 1971; Melara & O'Brien, 1987). Mudd (1963) and Rusconi, Kwan, Giordano, Umiltà, and Butterworth (2006) have shown that the cognitive system maps pitch height onto an internal spatial representation. Higher-pitched sounds are spatially located up on a vertical line, and on the right on a horizontal line, mirroring the cognitive representation of numbers. Lightness, the visual perceptual dimension varying from black to white, has been associated with pitch in Marks (1987), Martino and Marks (1999), and Melara (1989). High-pitched sounds were also associated with angular shapes, while low-pitched sounds were linked to rounded shapes (Marks, 1987).
The influence of musical stimuli differing in mode on the processing of visually presented words was studied by Steinbeis and Koelsch (2011), using an affective priming paradigm. They showed that both musically trained and untrained participants evaluated emotional words more quickly when they were congruent with the affective connotation of a preceding chord in major or minor mode, as compared to words incongruent with the preceding chords. This affective priming was accompanied by the N400, an event-related potential (ERP) component typically linked with semantic processing, which was specifically modulated by the (mis)match between the prime and the target. Costa (2012) similarly applied the affective priming paradigm to test the effects of mode in picture- and word-evaluation tasks. Affective target words were classified faster if preceded by a congruently valenced chord. Conversely, major and minor chords were not as effective as primes in the case of affective pictures.
Bakker and Martin (2014) presented happy or sad facial stimuli simultaneously with major or minor chords. Facilitation of processing in congruent pairs was measured via event-related potentials. When faces and chords were presented containing congruent emotional information (happy face–major chord or sad face–minor chord), processing was facilitated, as indexed by decreased N2 ERP amplitudes. This suggested that musical chords do possess emotional connotations that can be processed as early as 200 ms in naive listeners, and that can significantly influence non-acoustic cognitive processes.
A pitch–hue association has been reported in children by Simpson, Quinn, and Ausubel (1956), although the effect could be ascribed to lightness. Yellow and green (the lightest colors) were associated with higher pitches, red and orange (mid-lightness colors) with midlevel pitches, and blue and violet (the darkest colors) with lower pitches. This confound between lightness and hue emerged also in Bresin (2005), who found that music in the major mode was associated with lighter colors than music in the minor mode. The same limitations were present in Sebba (1991), who found that students used warmer, more saturated, lighter, and more highly contrasting colors when creating images while listening to a major Mozart selection than when listening to a selection of Albinoni's pieces in minor mode. Additional limitations of that study were that only two musical excerpts were presented, and students chose the preferred musical selection rather than being randomly assigned to one of them.
Palmer, Schloss, Xu, and Prado-León (2013), recruiting a sample of adults belonging to two cultures, found that faster music in the major mode produced color choices that were more saturated, lighter, and yellower, whereas slower, minor music produced desaturated, darker, and bluer choices. The emotions associated with the musical excerpts and with the colors were highly correlated, supporting the notion that music-to-color associations are mediated by common emotional connections. The association of the minor mode with blue hues was also confirmed by Murari, Rodà, Canazza, De Poli, and Da Pos (2015).
All else being equal (e.g., intensity, tempo, rhythm, and timbre), music composed of intervals of the major scale tends to be perceived as relatively happy and bright, whereas music composed using the minor scale tends to be perceived as more subdued, sad, dark, or wistful (Bowling, Gill, Choi, Prinz, & Purves, 2010; Cooke, 1959; Costa, Ricci Bitti, & Bonfiglioli, 2000; Crowder, 1985; Hevner, 1935; Kastner & Crowder, 1990; Lahdelma & Eerola, 2016; Maher & Berlyne, 1982; Parncutt, 2012, 2014).
The ability to differentiate between major and minor mode seems to develop before the age of six (Nieminen, Istók, Brattico, Tervaniemi, & Huotilainen, 2011). Five-year-old children are able to perceive a change from major to minor mode in music, and after a short period of training, they can correctly categorize excerpts according to musical mode (Costa-Giomi, 1996).
Kastner and Crowder (1990) showed that children as young as 3 years old are able to detect differences between sad and happy music when asked to match musical tunes with cartoon-like drawings of faces designed to express emotions of happiness, contentment, sadness, and anger. In this study, tempo was kept constant, and musical tunes differed only in mode and harmonization. However, inferences about 3-year-old children rested on a small sample of seven children. Gregory, Worrall, and Sarge (1996) found that children aged 7–8 years heard unaccompanied major tunes as happy and minor tunes as sad, whereas children aged 3–4 years did not. Dalla Bella, Peretz, Rousseau, and Gosselin (2001) prepared excerpts of real music by composers such as Verdi and Albinoni that were played with a piano timbre and manipulated with respect to mode and tempo, and asked listeners to judge whether each passage was sad or happy: they found that 6–8-year-olds and adults considered both tempo and mode, but 5-year-old children responded only to tempo and ignored mode.
In Nieminen, Istók, Brattico, and Tervaniemi (2012), two groups of children, 6–7-year-olds and 8–9-year-olds, evaluated three short piano pieces: a piece in major mode, a piece in minor mode, and a free tonal piece. Except for 6–7-year-olds without any musical education, children gave the highest happiness ratings to the major piece. Only 8–9-year-olds found the minor piece sadder than the major piece, and the major piece more beautiful than the minor one.
In other studies, using orchestral recordings of classical compositions, 5-year-olds were able to make a rather systematic distinction between positive and negative emotions (Cunningham & Sterling, 1988; Terwogt & Van Grinsven, 1991). Moreover, children as young as 4 years were able to associate discrete emotions to music through photographs of facial expressions (Nawrot, 2003). Consensus about the emotional labeling of music excerpts appears to increase with age (Cunningham & Sterling, 1988; Terwogt & Van Grinsven, 1991).
Children are capable of associating major music with happy emotions earlier than minor music with sad emotions (Cunningham & Sterling, 1988; Dolgin & Adelson, 1990; Gerardi & Gerken, 1995). Gerardi and Gerken (1995) presume that this might be due to greater exposure to the major mode, because most children's songs in Western culture are in major mode (Cohen, Thorpe, & Trehub, 1987; Kastner & Crowder, 1990).
The association of happiness and sadness with major and minor mode emerges later than the association of fast and slow tempo with positive and negative emotions. For example, Mote (2011) found that 4-year-olds and 5-year-olds rated fast songs as significantly happier than slow songs, whereas 3-year-olds failed to rate fast songs differently from slow ones at an above-chance level.
Although recognition of a discrete emotion in music may require cultural learning and the ability to assign a verbal or visual label/symbol to the experienced emotion, the direct implicit perception of an emotion, and particularly its induction by music, may be observable in very young children (Nieminen et al., 2011). This is particularly true for the perception of consonance/dissonance. Several studies utilizing the head-turning procedure have found that infants as young as 3 months, like adults, prefer consonant to dissonant intervals (Trainor & Heinmiller, 1998; Trainor, Tsang, & Cheung, 2002; Zentner & Kagan, 1998). Neonatal brain processing of consonant music, as investigated with functional magnetic resonance imaging (fMRI), has been shown to differ from that of dissonant music (Perani et al., 2010).
Infants are sensitive to musical properties such as melodic contour, intervals, rhythmic patterns, and some aspects of harmony (McPherson, 2012; Trehub, 2003, 2006). This evidence might suggest that human processing of melody, interval, and rhythm has biological roots connected to the natural development of the auditory and cognitive systems and their wiring patterns. It has been shown that no specific and prolonged exposure to music is needed to evoke children's automatic neural responses when melodic, intervallic, or rhythmic regularities are violated (Trehub, 2006). Moreover, these basic musical skills are apparently culture-free (Trehub, 2001).
Since the discrimination of musical mode occurs at a much later developmental stage than the discrimination of other musical features, such as consonance–dissonance, tempo, and rhythm (Costa-Giomi, 1996; Dalla Bella et al., 2001; Nieminen et al., 2011), it is possible that musical mode discrimination, and the ability to associate musical mode with happiness–sadness and visual-spatial dimensions, is influenced by cognitive maturation and fluid intelligence. Cross-modal associations require the child to match stimuli in different sensory domains, a task that implies higher-order cognitive abilities. Moreover, fluid intelligence plays a significant role in individual preferences for the minor mode (Bonetti & Costa, 2016).

Aims of the study

This study had three aims. The first was to assess cross-modal associations between musical mode and various dimensions. Specifically, we considered two colorimetric dimensions (lightness–darkness, warm–cold colors), one spatial dimension (up–down), happiness–sadness, and pleasantness–unpleasantness. These associations were studied in different groups of children and in a large sample of adults, aiming to track differences in cross-modal evaluations across groups varying in age.
A second aim was to better define the threshold age for the association of major–minor modes with expressions of happiness and sadness. Previous literature on this subject is inconsistent: one study showed this ability from the age of 3 (Kastner & Crowder, 1990), whereas others found that it is attained at a later stage (e.g., Gagnon & Peretz, 2003; Nieminen et al., 2012). In our study, we tracked this ability in three groups of 4-, 5-, and 6-year-old children.
Finally, we aimed to investigate how fluid intelligence affects the ability to match major and minor musical stimuli to happy and sad faces, respectively.

Method
Participants
Children. Three groups were considered. Two groups were from kindergarten, and one was from the first year of primary school. Both the kindergarten and the primary school were based in Bologna (Italy). The first kindergarten group was composed of 12 children (7 females and 5 males), with a mean age of 4.08 ± 0.29 years, while the second was composed of 20 children (10 females and 10 males) with a mean age of 5.05 ± 0.22 years. The primary school group was composed of 19 children (13 females and 6 males) with a mean age of 6.16 ± 0.37 years. Informed consent was obtained from the parents, and the study was approved by the ethical committee of the University of Bologna.
Due to the attentional limits of 4-year-old children, the procedure for this group included only the evaluations on the happy–sad scale for both chords and melodies, and the two intelligence tests. The 5- and 6-year-old groups performed the complete procedure with the five scales and the two intelligence tests. None of the children had any musical training.
A post-hoc power analysis for testing the achieved power was performed separately for the chi-square analyses and the ANOVAs using G*Power (Faul, Erdfelder, Lang, & Buchner, 2007). Post-hoc power analysis for chi-square tests, considering a small to medium effect size of w = .20, α = .05, and six repeated measures for each condition, resulted in an achieved power (1 − β) of .39 for the 4-year-old sample, of .59 for the 5-year-old sample, and of .57 for the 6-year-old sample. Post-hoc power analysis for the ANOVA designs, considering a medium effect size of ηp² = .30, α = .05, a total sample size of N = 51 participants divided into three groups, and six measurements for each condition, resulted in an achieved power of 1 − β = .68.
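To make the reported power values concrete, the following is a minimal sketch of the kind of post-hoc computation G*Power performs for a chi-square test, based on the noncentral chi-square distribution. Our assumption (not stated explicitly by the authors) is that the total N entering the noncentrality parameter counts all observations, i.e., participants times the six repeated measures; under that reading the sketch closely reproduces the reported values of .39, .59, and .57.

    # Post-hoc power for a chi-square test with Cohen's effect size w:
    # power = P(X > crit), where X is noncentral chi-square with
    # noncentrality lambda = N * w^2.
    from scipy.stats import chi2, ncx2

    def chi2_posthoc_power(w, n_total, df=1, alpha=0.05):
        """Achieved power (1 - beta) of a chi-square test."""
        crit = chi2.ppf(1 - alpha, df)   # critical value under H0
        lam = n_total * w ** 2           # noncentrality parameter
        return 1 - ncx2.cdf(crit, df, lam)

    # 4-, 5-, and 6-year-old groups: 12, 20, and 19 children, each contributing
    # six repeated measures per condition (our assumption about how N was counted).
    for n_children in (12, 20, 19):
        print(n_children, round(chi2_posthoc_power(w=0.20, n_total=n_children * 6), 2))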

Adults. Participants were 168 university students, 118 females (mean age 20.75 ± 1.05 years) and 50 males (mean age 20.90 ± 5.11 years). Participation was on a voluntary basis. Anonymity was guaranteed during both data collection and data analysis. Fifty-five (32.54%) participants had formal training in playing musical instruments or singing, with a range of 1–13 years and a weighted mean of 4.24 years. Forty-one (24.26%) participants played a musical instrument without having formal training, with a range of 1–18 years of instrumental practice and a weighted mean of 6.95 years.
A post-hoc power analysis for testing the achieved power was performed separately for the chi-square analyses and the ANOVAs using G*Power (Faul et al., 2007). Post-hoc power analysis for chi-square tests, considering a medium effect size of w = .30, α = .05, six repeated measures for each condition, and one degree of freedom, resulted in an achieved power of .97. Post-hoc power analysis for the ANOVA designs, considering a large effect size of ηp² = .60, α = .05, and six repeated measures for each condition, resulted in an achieved power of 1.0.

Procedure
Children. Each child was examined separately. The procedure was divided into two parts. In the first part, the child had to evaluate 72 musical stimuli (36 pairs) differing in musical mode only. The second part included two tests of fluid intelligence. All musical stimuli were presented with a piano timbre. The child sat exactly midway between the left and right speakers, which were positioned 60 cm apart.

Figure 1. Iconic displays of the four visual-spatial dichotomous scales.

The study was divided into six blocks. In the first block, the child was requested to match 6 major and 6 minor chords (Figure 2) to happy and sad schematic faces (Figure 1). Chords consisted of the fundamental, third, fifth, and octave, and they were played first melodically, in an ascending pattern, and then harmonically. They were presented in six different tonalities: C major–minor, D major–minor, F major–minor, G major–minor, A major–minor, and B flat major–minor (Figure 2). The order of presentation of the 12 chords was randomized by the E-Prime software.
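As an aside for readers who wish to approximate the chord stimuli, the following sketch (ours, not the authors' E-Prime material) derives the equal-tempered frequencies of the four chord tones (fundamental, third, fifth, and octave) for the six tonalities, with the third lowered by one semitone in the minor version. The exact voicings and piano timbre of Figure 2 are not reproduced, and the choice of the fourth octave for the roots is our assumption.

    # Sketch of the chord specification: fundamental, third, fifth, and octave,
    # with the third lowered by a semitone in the minor version.
    # Equal temperament, A4 = 440 Hz; roots placed in the 4th octave.
    NOTE_TO_MIDI = {"C": 60, "D": 62, "F": 65, "G": 67, "A": 69, "Bb": 70}

    def chord_frequencies(root, mode):
        """Frequencies (Hz) of root, third, fifth, and octave above `root`."""
        third = 4 if mode == "major" else 3   # major third = 4 semitones, minor = 3
        midi_notes = [NOTE_TO_MIDI[root] + step for step in (0, third, 7, 12)]
        return [440.0 * 2 ** ((m - 69) / 12) for m in midi_notes]

    for root in NOTE_TO_MIDI:                 # the six tonalities used in the study
        print(root, [round(f, 1) for f in chord_frequencies(root, "major")])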
In blocks 2–6, we presented major–minor melodic stimuli (Figure 3). The melodic musical stimuli were six pairs of harmonized major and minor simple melodic pieces composed by Leonardo Bonetti (Figure 3). The tonalities of the melodic stimuli were: C major–minor, D major–minor, G major–minor, A major–minor, B flat major–minor, and B major–minor. The melodic stimuli were repeated for each block, covering the five dichotomous scales considered in this study: happy–sad, light–dark, up–down, warm–cold color, and pleasant–unpleasant, for a total of 60 stimuli. Lightness–darkness was rendered with two gray colors varying in lightness. Up–down was expressed by upward and downward arrows. Warm and cold colors were matched for lightness and saturation and varied only in hue (Figure 1). The block order and the order of the 12 melodic stimuli within each block were randomized between participants by the E-Prime 2.0 software.

Figure 2. Major and minor chords, in six tonalities, used in both studies. Accidentals of the major or minor version are reported within parentheses.
At the beginning of each block, except the pleasant–unpleasant one, the experimenter gave the child two sheets showing two images illustrating the two opposites of the dichotomous scale (Figure 1). The experimenter verified that every child was able to match the two opposite verbal labels associated with each scale with the two corresponding images. The child was instructed to listen carefully to the musical stimulus and to point to one of the two response sheets according to his/her judgment. Each child was given the opportunity to replay the musical stimuli.
A calibration of the loudness balance between the left and right channels was performed using a Class 1 Delta Ohm HD 2010 UC/A phonometer. Musical stimuli were presented at an average SPL of 75 dB(A). The experimenter entered all the responses directly into the E-Prime application.
Non-verbal and fluid intelligence assessment included two tests: the Block Design subtest of the WISC-III scale, and Raven's Colored Progressive Matrices (CPM). In the Block Design subtest, the child was given four or nine blocks. Each block had two red faces, two white faces, and two half-red, half-white faces in which the two colors were divided by the diagonal of the square. The child had to arrange the blocks to match the design shown on cards. Each item was scored for accuracy and for speed. This subtest measures spatial problem-solving and fluid intelligence (Kamphaus, 2005). The test was time limited to 5 min. The experimenter first explained the three types of faces in each block. Models 1 and 2 were first executed by the experimenter and then by the child; the following models were executed from cards. Execution time was recorded for each trial. For models 1–4 the time limit was 45 s, for models 6–8 it was 75 s, and for models 9–11 it was 120 s. The scoring was computed following the instructions of the WISC-III manual (Wechsler, 1991).
Raven's Colored Progressive Matrices (CPM) followed the Block Design subtest. The test is made up of a series of diagrams or designs with a missing part. The child is expected to select the correct part from eight options to complete the design. This test aims specifically at assessing general and fluid intelligence (the g factor) (Raven, 2000). Only the first set, made up of 12 matrices, was used. CPM administration was not time limited, and the score consisted of the number of trials in which the child correctly identified the missing part of the matrix. The whole procedure lasted 30–35 min for each child.

Adults. Musical stimuli were the same as those used in the first study. The first 12 trials were the six major and the six minor chords shown in Figure 2. The order of chord presentation was randomized. Participants were provided with a response sheet on which they had to mark "sad" or "happy" for each musical stimulus. Chords were evaluated only for happiness–sadness. The remaining 60 trials were divided into five blocks, each made up of 12 melodic stimuli (6 major and 6 minor), as shown in Figure 3. Each block was related to one of the scales considered in the study (happy–sad face; arrow pointing up–down; warm–cold color; lightness–darkness; pleasantness–unpleasantness). The order of the 12 major–minor stimuli within each block was randomized.
Raven's Advanced Progressive Matrices (APM) (Raven, Raven, & Court, 1998) were administered after the evaluations of the musical stimuli. Since the adult sample was composed of university students, we adopted the APM version of Raven's test, which is specifically intended for populations with a mean IQ in the upper part of the normal distribution. The experimenter first explained the test and then presented Set I for instructional and training purposes. After the training, Set II was distributed to all participants. Set II is composed of 36 trials in increasing order of difficulty. The test was time limited to 30 min, as suggested in the test manual for group administration (Raven et al., 1998). The score was computed as the number of matrices in which the missing part was correctly identified.

Data analysis
In both studies, data were analyzed in two ways. The first analysis was performed on response frequency distributions for the 10 polarities of the five scales (happy–sad; up–down; light–dark; warm–cold color; pleasant–unpleasant), distinguishing between major and minor stimuli. The data were compared by chi-square tests. A second analysis computed the proportion of congruent and incongruent responses along the five scales for each participant. The following associations were considered congruent: major–happy, minor–sad, major–up, minor–down, major–light, minor–dark, major–warm color, minor–cold color, major–pleasant, minor–unpleasant. Congruent and incongruent response proportions were analyzed with mixed ANOVA designs, with congruency as within-subject factor and age as between-subject factor. The association with happy–sad faces included the 4-, 5-, and 6-year-old groups, while for the other scales the analysis included the 5- and 6-year-old groups. The Tukey HSD test was used for post-hoc tests when applicable. The Greenhouse–Geisser correction was applied for repeated-measures factors. The effect size was reported as phi (φ) in chi-square analyses and as partial eta-squared (ηp²) in ANOVAs.
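For concreteness, this is a small sketch of the first type of analysis: a chi-square test on a 2 × 2 major/minor response table with the phi effect size (φ = √(χ²/N)). The counts are invented for illustration, and both the choice to disable the Yates correction and the Bonferroni split over the five scales are our assumptions, not details reported by the authors.

    # Chi-square test on major/minor response frequencies for one scale,
    # with phi effect size. Counts are hypothetical, not the published data.
    import math
    from scipy.stats import chi2_contingency

    # rows: major / minor stimuli; columns: "happy" / "sad" responses
    table = [[81, 33],
             [37, 77]]
    chi2_stat, p, dof, _ = chi2_contingency(table, correction=False)
    n = sum(sum(row) for row in table)
    phi = math.sqrt(chi2_stat / n)   # phi = sqrt(chi2 / N) for a 2x2 table

    alpha = 0.05 / 5                 # assumed Bonferroni correction over 5 scales
    print(f"chi2({dof}) = {chi2_stat:.2f}, p = {p:.5f}, phi = {phi:.2f}, "
          f"significant at corrected alpha: {p < alpha}")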

Results
Children
Response proportions for the five dichotomous scales. The response proportions, in percentages, for the three groups of children on the happy–sad, up–down, light–dark, and warm–cold color scales, distinguishing between major and minor stimuli, are reported in Table 1. Each major–minor comparison was tested with a chi-square test. The alpha level was Bonferroni corrected for multiple comparisons. In the 4-year-olds the difference was not significant for either happiness or sadness. In the 5-year-olds, major stimuli were associated with happiness (χ² = 16.98, p < .001, φ = .21), the upward arrow (χ² = 8.96, p = .002, φ = .20), and lightness (χ² = 6.56, p = .01, φ = .17). In the 6-year-olds, major stimuli were significantly associated with happiness (χ² = 21.85, p < .001, φ = .22) and lightness (χ² = 14.4, p < .001, φ = .31). Minor stimuli were significantly associated with sadness (χ² = 23.86, p < .001, φ = .24) and darkness (χ² = 8.47, p = .003, φ = .18).
Figure 3. Melodic stimuli used in the two studies. Accidentals between parentheses were applied in the minor mode version.

Table 1. Percentages for the major–minor comparison on the happy–sad, up–down, light–dark, and warm–cold color scales in children.

Age    Mode    Happy   Sad     Up     Down   Light   Dark   Warm color   Cold color
4      Major   57      41
       Minor   43      59
5      Major   71      44      69     47     67      50     52           64
       Minor   29***   56      31**   53     33**    50     48           36
6      Major   71      27      60     35     80      32     55           46
       Minor   29***   73***   40     65     20***   68**   45           54
Total  Major   68      36      65     42     73      41     53           55
       Minor   32***   64***   35**   58     27***   59     47           45

**p < .01. ***p < .001.

Considering all the age groups, major stimuli were significantly associated with happiness (χ² = 36.20, p < .001, φ = .18), the upward arrow (χ² = 10.45, p = .001, φ = .15), and lightness (χ² = 19.46, p < .001, φ = .23). Minor stimuli, on the contrary, were significantly associated with sadness (χ² = 20.45, p < .001, φ = .14). Percentages for pleasant–unpleasant choices are reported in Table 2. Chi-square tests were computed for the comparisons between major and minor stimuli in the two age groups. In 6-year-old children, major stimuli were classified as more pleasant than minor ones (χ² = 9.38, p = .002, φ = .23), and minor stimuli were evaluated as more unpleasant than major ones (χ² = 6.58, p = .01, φ = .16).

Congruent and incongruent responses

Happy–sad faces. Major–happy, minor–sad responses increased from 58% (± 12%) at the age of 4, to 61% (± 22%) at the age of 5, and 72% (± 17%) at the age of 6 (Figure 4a). Considering all groups, the proportion of congruent responses (major–happy, minor–sad) was 65% (± 19%). The congruency main effect was significant: F(1, 47) = 26.59, p < .001, ηp² = .36. The age × congruency interaction was not significant (p = .06).

Up–down. The proportion of major–up, minor–down responses was 62% (± 26%) in the 5-year-olds and 62% (± 21%) in the 6-year-olds (Figure 4b). Considering both groups, the proportion of congruent responses (major–up, minor–down) was 62% (± 23%). The congruency main effect was significant: F(1, 35) = 9, p = .005, ηp² = .21. The age × congruency interaction was not significant (p = .97).

Light–dark. The proportion of major–light, minor–dark responses was 59% (± 23%) in the 5-year-olds and 72% (± 18%) in the 6-year-olds (Figure 4c). Considering both groups, the proportion of congruent responses was 66% (± 22%). The congruency main effect was significant: F(1, 34) = 19.65, p < .001, ηp² = .37. The age × congruency interaction was not significant (p = .07).

Warm–cold color. The proportion of major–warm color, minor–cold color responses was 41% (± 24%) in the 5-year-olds and 54% (± 28%) in the 6-year-olds (Figure 4d). Considering both groups, the proportion of congruent responses was 48% (± 26%). Neither the congruency main effect nor the age × congruency interaction was significant (respectively p = .65 and p = .14).

Pleasant–unpleasant. The proportion of major–pleasant, minor–unpleasant responses was 61% (± 22%) in 5-year-olds and 68% (± 23%) in 6-year-olds (Figure 4e). Considering both groups, the proportion of congruent responses was 65% (± 22%). The congruency main effect was significant: F(1, 32) = 13.94, p = .001, ηp² = .30. The age × congruency interaction was not significant (p = .32).

Table 2. Percentages for pleasantness–unpleasantness evaluation of major and minor stimuli in the 5-year-old and 6-year-old groups.

         Pleasant (%)     Unpleasant (%)
Age      5      6         5      6
Major    66     72        51     34
Minor    34     28**      49     66**

**p < .01.

Chords–melodies
Happy faces were associated with major chords in a proportion of 67%, and with major melodies in a proportion of 68%. Sad faces were associated with minor chords in a proportion of 62%, and with minor melodies in a proportion of 65%. The comparison of chord–melody proportions with a chi-square test was not significant, showing that responses overlapped completely between the two categories of stimuli.

Block Design test

A linear regression was performed between the Block Design test score and the proportion of congruent responses on the happy–sad scale, including all participants in the three age groups. The model was significant: F(1, 42) = 18.78, p < .001, adjusted R² = .29, β = .56.
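The following sketch shows how such a regression can be reproduced with ordinary least squares; the score and proportion arrays are placeholders, not the collected data, and the standardized beta is computed explicitly since OLS returns the raw slope.

    # Regression of congruent-response proportion on Block Design score.
    # The two arrays are hypothetical stand-ins for the children's data.
    import numpy as np
    import statsmodels.api as sm

    block_design = np.array([18, 25, 31, 22, 40, 35, 28, 44, 19, 37])
    p_congruent = np.array([.50, .58, .67, .58, .83, .75, .58, .92, .50, .75])

    fit = sm.OLS(p_congruent, sm.add_constant(block_design)).fit()
    # standardized beta = raw slope scaled by the SD ratio of predictor and outcome
    beta_std = fit.params[1] * block_design.std(ddof=1) / p_congruent.std(ddof=1)
    print(f"F(1, {int(fit.df_resid)}) = {fit.fvalue:.2f}, p = {fit.f_pvalue:.4f}, "
          f"adj. R^2 = {fit.rsquared_adj:.2f}, beta = {beta_std:.2f}")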

Raven's Colored Progressive Matrices

A linear regression was performed between the Raven's CPM score and the proportion of congruent responses on the happy–sad scale, including all participants in the three age groups. The model was significant: F(1, 44) = 5.38, p = .02, adjusted R² = .09, β = .33.

Adults
Response proportions for the five dichotomous scales. The percentage distributions for the five dichotomous scales, distinguishing between major and minor stimuli, are reported in Table 3. All chi-square tests were significant. Major stimuli were perceived as happy, up, light, warm, and pleasant. Minor stimuli were perceived as sad, down, dark, cold, and unpleasant.

Table 3. Percentages and chi-square statistics in the evaluation of major and minor stimuli along the five dichotomous scales in adults.

        Happy    Sad      Up       Down     Light    Dark     Warm     Cold     Pleasant   Unpleasant
Major   90       4        82       12       89       4        77       22       58         43
Minor   10       96       18       88       11       96       23       78       42         57
χ²      697.96   810.75   228.05   271.53   342.85   398.68   149.05   155.56   12.32      10.64
p       < .001   < .001   < .001   < .001   < .001   < .001   < .001   < .001   < .001     = .001
φ       .43      .52      .34      .41      .42      .51      .28      .29      .08        .07

Congruent and incongruent responses

Happy–sad face. The mean proportion for major–happy face, minor–sad face associations was 92% (± 11%), and 8% (± 10%) for major–sad face, minor–happy face associations. The congruency main effect was significant: F(1, 171) = 2886.27, p < .001, ηp² = .94 (Figure 4a).

Figure 4. Proportion of congruent responses (y-axis) as a function of age (x-axis). Error bars show standard errors.

Light–dark. The mean proportion was 92% (± 15%) for major–light, minor–dark, and 8% (± 15%) for major–dark, minor–light. The congruency main effect was significant: F(1, 171) = 1261.81, p < .001, ηp² = .88 (Figure 4b).

Up–down. The mean proportion for the associations major–up, minor–down was 84% (± 20%), and 16% (± 19%) for major–down, minor–up. The congruency main effect was significant: F(1, 171) = 523.54, p < .001, ηp² = .74 (Figure 4c).

Warm–cold color. The mean proportion was 77% (± 27%) for major–warm color, minor–cold color, and 23% (± 27%) for major–cold color, minor–warm color. The congruency main effect was significant: F(1, 171) = 171.05, p < .001, ηp² = .50 (Figure 4d).

Pleasant–unpleasant. The mean proportion was 57% (± 31%) for major–pleasant, minor–unpleasant, and 43% (± 31%) for major–unpleasant, minor–pleasant. The congruency main effect was significant: F(1, 171) = 9.44, p = .002, ηp² = .05 (Figure 4e).

Chords–melodies
The proportion for the major–happy face association was 87% for chord stimuli and 92% for melodies. The difference, comparing absolute frequencies with a chi-square test, was not significant. The minor–sad face association was 96% for chords and 96% for melodies; this difference, tested with a chi-square test, was also not significant. The attribution of happiness–sadness was therefore not affected by the complexity of the musical stimuli.

Raven's Advanced Progressive Matrices

A linear regression with the APM score as independent variable and the proportion of congruent responses for minor musical stimuli on the happy–sad scale as dependent variable was significant: F(1, 101) = 7.59, p = .007, adjusted R² = .07, β = .27. All other associations with the APM score were not significant.

Discussion
Musical concepts have often been described in visual-spatial terms (e.g., high–low pitch and register, light or dark timbres, sound brightness). The word timbre, for example, is translated in German as Klangfarbe, literally "the color of sound"; and the major–minor mode distinction semantically calls to mind the visual-spatial dichotomies high–low and big–small. Previous research has mainly investigated how pitch is mapped onto visual and spatial attributes (e.g., Ben-Artzi & Marks, 1995; Marks, 1987, 2004; Marks, Hammeal, & Bornstein, 1987; Rusconi et al., 2006), whereas cross-modal associations regarding mode have been studied considering only the affective value of faces (Bakker & Martin, 2014; Nieminen et al., 2012; Steinbeis & Koelsch, 2011), the affective value of words and pictures (Costa, 2012), and the emotional expression of colors (Palmer et al., 2013). In our study, we explored cross-modal associations between musical mode and visual-spatial dimensions such as up–down, light–dark, and warm–cold colors in children aged 5 and 6 years, and in adults. Moreover, the association between musical mode and happiness and sadness was investigated in a broader sample of children ranging in age from 4 to 6 years old, and their results were compared with an adult sample.

We recorded significant cross-modal associations starting from the age of 5 years. The major mode was perceived as spatially higher than the minor mode. In the visual domain, the major mode was also perceived as visually lighter. At 6 years the minor mode was perceived as spatially down and darker, while the major mode was perceived as up and lighter. The proportion of congruent responses regarding the spatial dichotomy up–down (major–up, minor–down) was constant in both the 5-year-old and the 6-year-old groups (62%). In adults, the major–up, minor–down associations reached a proportion of 84%.
Verticality is one of the most recurrent and powerful dimensions in cross-modal associations, since it embodies both affective valence and arousal. There is a strong tendency to categorize pitch as high or low, and to map high-pitched sounds onto elevated, small, and bright objects (e.g., Chiou & Rich, 2012; Lidji, Kolinsky, Lochy, & Morais, 2007; Melara & O'Brien, 1987; Rusconi et al., 2006; Walker & Smith, 1984). Louder sounds are mapped higher than soft sounds (Eitan, Schupak, & Marks, 2008). Other studies have clearly shown the important role of verticality in the processing of emotional stimuli (Crawford, Margolies, Drake, & Murphy, 2006; Meier & Robinson, 2004; Weger, Meier, Robinson, & Inhoff, 2007), and in the perception of particular qualities such as sacredness (Costa & Bonetti, 2016; Meier, Hauser, Robinson, Friesen, & Schjeldahl, 2007). From an embodied cognition perspective, the spatial properties of pitch can also significantly bias motor behavior. Salgado-Montejo et al. (2016), for example, demonstrated that tones and chords can influence hand movements. In their study, higher- (lower-) pitched sounds gave rise to a significant bias towards upper (lower) locations in space.
The light–dark association with, respectively, the major and minor mode was even stronger, peaking at a proportion of 72% congruent responses (major–light, minor–dark) in the 6-year-olds. In adults, congruent responses reached a proportion of 92%, confirming that lightness–darkness was, in this study, the cross-modal dimension that best fitted the major–minor distinction. While in adults the distinction between warm–cold colors was significantly mapped onto the major–minor dichotomy, in children aged 5 and 6, warm–cold colors were not significantly associated with major and minor musical stimuli. In adults, warm colors were associated with the major mode, while cold colors were associated with the minor mode, with a proportion of 77%.
A possible explanation of the associations between major–minor modes and, respectively, the light–dark and up–down visual scales might be the sharing of the same semantic polarities. In this perspective, light–up–major mode would share a positive valence, while dark–down–minor mode would be connoted by a negative valence. According to the previous literature, it could be argued that whenever stimuli share the same emotional-semantic connotation, their processing is facilitated, because the neural representations associated with visual and auditory stimuli can interact in a shared semantic system (Chen & Spence, 2010; Doehrmann & Naumer, 2008; Watt & Quinn, 2007; Wilkins, Hodges, Laurienti, Steen, & Burdette, 2014; Woods, Spence, Butcher, & Deroy, 2013). Furthermore, this explanation would be consistent with the classification proposed by Spence (2011), who divided cross-modal correspondences into three categories: statistical, structural, and semantic. We could assume that the associations investigated in our study belong to the semantic category. By contrast, statistical correspondences, which occur for pairs of stimulus dimensions that happen to be correlated in nature, are probably not related to our results, since it is difficult to hypothesize that minor music is usually emitted by darker or lower sources than major music. Similarly, structural correspondences, which occur because of neural connections that are present at birth (e.g., Mondloch & Maurer, 2004; Walker et al., 2010), seem not to be related to our results, since the children's results described in this paper clearly show that a mapping of musical mode onto visual-spatial features emerges only at the age of 5. Moreover, semantically mediated cross-modal correspondences start to emerge later than statistical or structural correspondences, after the development of language (e.g., Kadosh, Henik, & Walsh, 2009; Marks et al., 1987; Smith & Sera, 1992).
In both the 5-year-old and the 6-year-old samples, major stimuli were evaluated as more pleasant than minor ones. The proportion of major–pleasant, minor–unpleasant responses was 61% at 5 years and 68% at 6 years. This effect mirrors previous results found in Cunningham and Sterling (1988), Gerardi and Gerken (1995), and Nieminen et al. (2012). As proposed by Gerardi and Gerken (1995), this may be due to the greater occurrence of the major mode in most children's songs (Cohen et al., 1987; Kastner & Crowder, 1990). In adults, the higher pleasantness of the major mode was slightly attenuated, and the proportion of major–pleasant, minor–unpleasant responses was 57%.
The association of major–minor stimuli with expressions of happiness and sadness was studied starting with 4-year-old children. The proportion of major–happy, minor–sad associations increased linearly with age, starting from 58% at the age of 4, and rising to 61% at the age of 5 and 72% at the age of 6. The analysis of the response distribution showed that these associations become significant starting from the age of 5. In adults, the major–happy face, minor–sad face associations were nearly unanimous, with a proportion of 92%. These data confirm the previous results of Costa-Giomi (1996) and Nieminen and colleagues (2011, 2012), who found that 5-year-old children can perceive a change from major to minor mode in music. The greater complexity of melodies and their harmonization did not result, in either children or adults, in a facilitated association with expressions of happiness and sadness.
The ability to associate major and minor stimuli with happiness and sadness was strongly related to the fluid intelligence level, particularly when assessed with the Block Design test. It is interesting to note that this test, which evaluates spatial problem-solving, is particularly predictive of the ability to discriminate mode, a musical property that we have shown to be strongly associated with visual-spatial dimensions. We found a rather high correlation of .64 between the proportion of congruent responses on the happy–sad scale and the Block Design test score. This opens an interesting perspective on the possible use of a musical mode discrimination task in the assessment of fluid and/or spatial intelligence in children.
Children are able to associate major music with happy emotions earlier than minor music with sad emotions (Cunningham & Sterling, 1988; Dolgin & Adelson, 1990; Gerardi & Gerken, 1995). This was also confirmed in our study. Five-year-old children were able to associate a happy schematic face with major musical stimuli, but sad schematic faces were not associated with minor musical stimuli. Minor music was significantly associated with sad faces only within the 6-year-old sample. This suggests that the recognition of negatively valenced emotions in minor music requires a higher degree of cognitive and intellectual maturation than the recognition of positively valenced emotions in major mode music.
A development of this study would be to test whether the associations between musical stimuli differing in mode and visual-spatial dimensions also hold at an implicit level, adopting a priming or a speeded classification paradigm (Marks, 2004). In this perspective, major chords would facilitate, in terms of lower reaction times and error rates, the classification of light visual stimuli and stimuli that are placed up, whereas minor chords would facilitate the classification of dark visual stimuli and stimuli that are placed down. Another possible development would be to test whether the facilitating effect of musical stimuli differing in mode on ongoing visual-spatial cognitive processing is detectable in event-related potentials. This perspective would be in line with the study by Bakker and Martin (2014), who showed that congruent–incongruent associations between major–minor chords and happy–sad faces were reflected in the modulation of the short-latency N2 ERP component. These studies would clarify at what stage and latency of perceptual-cognitive processing musical mode can interact and interfere with the elaboration of visual-spatial information. Furthermore, an ERP approach might clarify whether the visual-spatial mapping of musical mode is detectable also in children younger than 4–6 years old.
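As a sketch of what the analysis of such a speeded classification follow-up could look like, the snippet below compares per-participant mean reaction times for congruent pairings (e.g., major chord with a light or upward target) against incongruent ones with a paired t-test. All reaction times are simulated stand-ins; a real study would log them from the classification task, and the paired Cohen's d shown here is just one reasonable effect-size choice.

    # Hypothetical analysis for the proposed speeded-classification paradigm:
    # paired comparison of congruent vs. incongruent reaction times (ms).
    import numpy as np
    from scipy.stats import ttest_rel

    rng = np.random.default_rng(0)
    n_participants = 30
    congruent_rt = rng.normal(480, 40, n_participants)           # simulated means
    incongruent_rt = congruent_rt + rng.normal(35, 20, n_participants)

    t, p = ttest_rel(incongruent_rt, congruent_rt)
    diff = incongruent_rt - congruent_rt
    d = diff.mean() / diff.std(ddof=1)                           # Cohen's d (paired)
    print(f"t({n_participants - 1}) = {t:.2f}, p = {p:.4f}, d = {d:.2f}")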

Acknowledgements
We thank Teresa Pintori, head of the Gandino-Guidi school in Bologna, for letting us conduct the study with children in her classes.

Funding
This research received no specific grant from any funding agency in the public, commercial, or not-for-profit sectors.

References
Bakker, D. R., & Martin, F. H. (2014). Musical chords and emotion: Major and minor triads are processed for emotion. Cognitive, Affective, & Behavioral Neuroscience, 15(1), 15–31.
Ben-Artzi, E., & Marks, L. E. (1995). Visual–auditory interaction in speeded classification: Role of stimulus difference. Perception & Psychophysics, 57(8), 1151–1162.
Bernstein, I. H., & Edelstein, B. A. (1971). Effects of some variations in auditory input upon visual choice reaction time. Journal of Experimental Psychology, 87, 241–247.
Bonetti, L., & Costa, M. (2016). Intelligence and musical mode preference. Empirical Studies of the Arts, 34(2), 160–176.
Bowling, D. L., Gill, K., Choi, J. D., Prinz, J., & Purves, D. (2010). Major and minor music compared to excited and subdued speech. The Journal of the Acoustical Society of America, 127(1), 491–503.
Bresin, R. (2005). What is the color of that music performance? In Proceedings of the International Computer Music Conference (ICMC 2005) (pp. 367–370). San Francisco, CA: International Computer Music Association.
Calvert, G., Spence, C., & Stein, B. E. (Eds.). (2004). The handbook of multisensory processes. Cambridge, MA: The MIT Press.
Chen, Y. C., & Spence, C. (2010). When hearing the bark helps to identify the dog: Semantically-congruent sounds modulate the identification of masked pictures. Cognition, 114(3), 389–404.
Chiou, R., & Rich, A. N. (2012). Cross-modality correspondence between pitch and spatial location modulates attentional orienting. Perception, 41(3), 339–353.
Cohen, A. J., Thorpe, L. A., & Trehub, S. E. (1987). Infants' perception of musical relations in short transposed tone sequences. Canadian Journal of Psychology/Revue Canadienne de Psychologie, 41(1), 33–47.
Cooke, D. (1959). The language of music. Oxford, UK: Oxford University Press.
Costa-Giomi, E. (1996). Mode discrimination abilities of pre-school children. Psychology of Music, 24(2), 184–198.
Costa, M. (2012). Effects of mode, consonance, and register in visual and word-evaluation affective priming experiments. Psychology of Music, 41(6), 713–728.
Costa, M., & Bonetti, L. (2016). Geometrical factors in the perception of sacredness. Perception, 45(11), 1240–1266.
Costa, M., Ricci Bitti, P. E., & Bonfiglioli, L. (2000). Psychological connotations of harmonic musical intervals. Psychology of Music, 28, 4–22.
Crawford, E. L., Margolies, S. M., Drake, J. T., & Murphy, M. E. (2006). Affect biases memory of location: Evidence for the spatial representation of affect. Cognition & Emotion, 20(8), 1153–1169.
Crowder, R. G. (1985). Perception of the major/minor distinction: III. Hedonic, musical, and affective discriminations. Bulletin of the Psychonomic Society, 23, 314–316.
Cunningham, J. G., & Sterling, R. S. (1988). Developmental change in the understanding of affective meaning in music. Motivation and Emotion, 12(4), 399–413.
Dalla Bella, S., Peretz, I., Rousseau, L., & Gosselin, N. (2001). A developmental study of the affective value of tempo and mode in music. Cognition, 80(3), B1–B10.
Doehrmann, O., & Naumer, M. J. (2008). Semantics and the multisensory brain: How meaning modulates processes of audio-visual integration. Brain Research, 1242, 136–150.
Dolgin, K. G., & Adelson, E. H. (1990). Age changes in the ability to interpret affect in sung and instrumentally-presented melodies. Psychology of Music, 18(1), 87–98.
Eitan, Z., Schupak, A., & Marks, L. E. (2008). Louder is higher: Cross-modal interaction of loudness change and vertical motion in speeded classification. In Proceedings of the 10th International Conference on Music Perception and Cognition (ICMPC10). Sapporo, Japan.
Faul, F., Erdfelder, E., Lang, A.-G., & Buchner, A. (2007). G*Power 3: A flexible statistical power analysis program for the social, behavioral, and biomedical sciences. Behavior Research Methods, 39(2), 175–191.
Gagnon, L., & Peretz, I. (2003). Mode and tempo relative contributions to "happy–sad" judgements in equitone melodies. Cognition & Emotion, 17(1), 25–40.
Gerardi, G. M., & Gerken, L. (1995). The development of affective responses to modality and melodic contour. Music Perception, 12, 279–290.
Gregory, A. H., Worrall, L., & Sarge, A. (1996). The development of emotional responses to music in young children. Motivation and Emotion, 20(4), 341–348.
Hevner, K. (1935). The affective character of the major and minor modes in music. American Journal of Psychology, 47, 103–118.
Kadosh, R. C., Henik, A., & Walsh, V. (2009). Synaesthesia: Learned or lost? Developmental Science, 12(3), 484–491.
Kamphaus, R. W. (2005). Clinical assessment of child and adolescent intelligence. New York, NY: Springer.
Kastner, M. P., & Crowder, R. G. (1990). Perception of the major/minor distinction: IV. Emotional connotations in young children. Music Perception, 8(2), 189–202.
Lahdelma, I., & Eerola, T. (2016). Single chords convey distinct emotional qualities to both naive and expert listeners. Psychology of Music, 44, 37–54.
Lidji, P., Kolinsky, R., Lochy, A., & Morais, J. (2007). Spatial associations for musical stimuli: A piano in the head? Journal of Experimental Psychology: Human Perception and Performance, 33(5), 1189–1207.
Maher, T. F., & Berlyne, D. E. (1982). Verbal and exploratory responses to melodic musical intervals. Psychology of Music, 10, 11–27.
Marks, L. E. (1987). On cross-modal similarity: Auditory–visual interactions in speeded discrimination. Journal of Experimental Psychology: Human Perception and Performance, 13(3), 384–394.
Marks, L. E. (2004). Cross-modal interactions in speeded classification. In G. Calvert, C. Spence, & B. Stein (Eds.), The handbook of multisensory processes (pp. 85–105). Cambridge, MA: The MIT Press.
Marks, L. E., Hammeal, R. J., & Bornstein, M. H. (1987). Perceiving similarity and comprehending metaphor. Monographs of the Society for Research in Child Development, 52(1), 1–102.
Martino, G., & Marks, L. E. (1999). Perceptual and linguistic interactions in speeded classification: Tests of the semantic coding hypothesis. Perception, 28(7), 903–923.
McPherson, G. E. (2012). The child as musician: A handbook of musical development. Oxford, UK: Oxford University Press.
Meier, B. P., Hauser, D. J., Robinson, M. D., Friesen, C. K., & Schjeldahl, K. (2007). What's "up" with God? Vertical space as a representation of the divine. Journal of Personality and Social Psychology, 93(5), 699–710.
Meier, B. P., & Robinson, M. D. (2004). Why the sunny side is up. Psychological Science, 15(4), 243–247.
Melara, R. D. (1989). Dimensional interaction between color and pitch. Journal of Experimental Psychology: Human Perception and Performance, 15(1), 69–79.
Melara, R. D., & O'Brien, T. P. (1987). Interaction between synesthetically corresponding dimensions. Journal of Experimental Psychology: General, 116(4), 323–336.
Mondloch, C. J., & Maurer, D. (2004). Do small white balls squeak? Pitch–object correspondences in young children. Cognitive, Affective, & Behavioral Neuroscience, 4(2), 133–136.
Mote, J. (2011). The effects of tempo and familiarity on children's affective interpretation of music. Emotion, 11(3), 618–622.
Mudd, S. A. (1963). Spatial stereotypes of four dimensions of pure tone. Journal of Experimental Psychology, 66(4), 347–352.
Murari, M., Rodà, A., Canazza, S., De Poli, G., & Da Pos, O. (2015). Is Vivaldi smooth and takete? Non-verbal sensory scales for describing music qualities. Journal of New Music Research, 44(4), 359–372.
Nawrot, E. S. (2003). The perception of emotional expression in music: Evidence from infants, children and adults. Psychology of Music, 31, 75–93.
Nieminen, S., Istók, E., Brattico, E., & Tervaniemi, M. (2012). The development of the aesthetic experience of music: Preference, emotions, and beauty. Musicae Scientiae, 16(3), 372–391.
Nieminen, S., Istók, E., Brattico, E., Tervaniemi, M., & Huotilainen, M. (2011). The development of aesthetic responses to music and their underlying neural and psychological mechanisms. Cortex, 47(9), 1138–1146.
Palmer, S. E., Schloss, K. B., Xu, Z., & Prado-León, L. R. (2013). Music–color associations are mediated by emotion. Proceedings of the National Academy of Sciences of the United States of America, 110(22), 8836–8841.
Parncutt, R. (2012). Major–minor tonality, Schenkerian prolongation, and emotion: A commentary on Huron and Davis (2012). Empirical Musicology Review, 7(3–4), 118–137.
Parncutt, R. (2014). The emotional connotations of major versus minor tonality: One or more origins? Musicae Scientiae, 18(3), 324–353.
Perani, D., Saccuman, M. C., Scifo, P., Spada, D., Andreolli, G., Rovelli, R., … Koelsch, S. (2010). Functional specializations for music processing in the human newborn brain. Proceedings of the National Academy of Sciences of the United States of America, 107(10), 4758–4763.
Raven, J. R. (2000). The Raven's Progressive Matrices: Change and stability over culture and time. Cognitive Psychology, 41, 1–48.
Raven, J. R., Raven, J. C., & Court, J. (1998). Manual for Raven's Progressive Matrices and Vocabulary Scales. San Antonio, TX: Harcourt.
Rusconi, E., Kwan, B., Giordano, B. L., Umiltà, C., & Butterworth, B. (2006). Spatial representation of pitch height: The SMARC effect. Cognition, 99(2), 113–129.
Salgado-Montejo, A., Marmolejo-Ramos, F., Alvarado, J. A., Arboleda, J. C., Suarez, D. R., & Spence, C. (2016). Drawing sounds: Representing tones and chords spatially. Experimental Brain Research, 234(12), 3509–3522.
Sebba, R. (1991). Structural correspondence between music and color. Color Research & Application, 16(2), 81–88.
Simpson, R. H., Quinn, M., & Ausubel, D. P. (1956). Synesthesia in children: Association of colors with pure tone frequencies. Journal of Genetic Psychology, 89(1), 95–103.
Smith, L. B., & Sera, M. D. (1992). A developmental analysis of the polar structure of dimensions. Cognitive Psychology, 24(1), 99–142.
Spence, C. (2011). Crossmodal correspondences: A tutorial review. Attention, Perception, & Psychophysics, 73(4), 971–995.
Spence, C., & Deroy, O. (2012). Crossmodal correspondences: Innate or learned? i-Perception, 3(5), 316–318.
Steinbeis, N., & Koelsch, S. (2011). Affective priming effects of musical sounds on the processing of word meaning. Journal of Cognitive Neuroscience, 23(3), 604–621.
Terwogt, M., & Van Grinsven, F. (1991). Musical expression of moodstates. Psychology of Music, 19, 99–109.
Trainor, L. J., & Heinmiller, B. M. (1998). The development of evaluative responses to music. Infant Behavior and Development, 21(1), 77–88.
Trainor, L. J., Tsang, C. D., & Cheung, V. H. W. (2002). Preference for sensory consonance in 2- and 4-month-old infants. Music Perception, 20(2), 187–194.
Trehub, S. E. (2001). Musical predispositions in infancy. Annals of the New York Academy of Sciences, 930, 1–16.
Trehub, S. E. (2003). The developmental origins of musicality. Nature Neuroscience, 6(7), 669–673.
Trehub, S. E. (2006). Infants as musical connoisseurs. In G. E. McPherson (Ed.), The child as musician: A handbook of musical development (pp. 33–49). Oxford, UK: Oxford University Press.
Walker, P., Bremner, J. G., Mason, U., Spring, J., Mattock, K., Slater, A., & Johnson, S. P. (2010). Preverbal infants' sensitivity to synaesthetic cross-modality correspondences. Psychological Science, 21(1), 21–25.
Walker, P., & Smith, S. (1984). Stroop interference based on the synaesthetic qualities of auditory pitch. Perception, 13(1), 75–81.
Watt, R., & Quinn, S. (2007). Some robust higher-level percepts for music. Perception, 36(12), 1834–1848.
Wechsler, D. (1991). Wechsler Intelligence Scale for Children – Third Edition (WISC-III). San Antonio, TX: The Psychological Corporation.
Weger, U., Meier, B., Robinson, M., & Inhoff, A. (2007). Things are sounding up: Affective influences on auditory tone perception. Psychonomic Bulletin & Review, 14(3), 517–521.
Wilkins, R. W., Hodges, D. A., Laurienti, P. J., Steen, M., & Burdette, J. H. (2014). Network science and the effects of music preference on functional brain connectivity: From Beethoven to Eminem. Scientific Reports, 4, 6130.
Woods, A. T., Spence, C., Butcher, N., & Deroy, O. (2013). Fast lemons and sour boulders: Testing crossmodal correspondences using an internet-based testing methodology. i-Perception, 4(6), 365–379.
Zentner, M. R., & Kagan, J. (1998). Infants' perception of consonance and dissonance in music. Infant Behavior and Development, 21(3), 483–492.
