Article info
Article history:
Accepted 6 October 2014
Keywords:
Gliomas
Facial emotion
Vocal emotion
Multisensory integration
Hodotopic model
Abstract
The relevance of emotional perception in interpersonal relationships and social cognition has been well documented. Although brain diseases might impair emotional processing, studies concerning emotion recognition in patients with brain tumours are relatively rare. The aim of this study was to explore emotion recognition in patients with gliomas in three conditions (visual, auditory and crossmodal) and to analyse how tumour-related variables (notably, tumour localisation) and patient-related variables influence emotion recognition. Twenty-six patients with gliomas and 26 matched healthy controls were instructed to identify 5 basic emotions and a neutral expression, which were displayed through visual, auditory and crossmodal stimuli. Relative to the controls, recognition was weakly impaired in the patient group under both the visual and auditory conditions, but the performances were comparable in the crossmodal condition. Additional analyses using the race model suggest differences in multisensory emotional integration abilities across the groups, which were potentially correlated with the executive disorders observed in the patients. These observations support the view of compensatory mechanisms in the case of gliomas that might preserve the quality of life and help maintain the normal social and professional lives often observed in these patients.
© 2014 Elsevier Inc. All rights reserved.
1. Introduction
Gliomas are invasive brain tumours that occur more frequently in young people and are most commonly discovered after sudden epileptic seizures (Capelle et al., 2013; Pallud et al., 2013). Astrocytomas, oligodendrogliomas, and mixed gliomas often involve the frontal and temporal lobes (Correa et al., 2007), in both cortical and subcortical structures (Duffau, Gatignol, Mandonnet, Capelle, & Taillandier, 2008). The World Health Organization (WHO) grading system separates low-grade (II) gliomas (LGG) from high-grade (III and IV) gliomas (HGG) (Figarella-Branger et al., 2008). LGG show continuous, constant growth (about 4 mm/year) within local white matter and systematically evolve toward HGG (Bonnetblanc, Desmurget, & Duffau, 2006), with a median of around 7–8 years to anaplastic transformation and a median survival of around 10 years without treatment (Duffau, 2005). Due to their slow development and infiltrating characteristics, gliomas efficiently activate
Table 1
Clinical data.

| Case | Age | Sex | KPS score | Side | Location | Type | Grade | Volume (3D/SEGM) | Anti-epileptic drug |
|---|---|---|---|---|---|---|---|---|---|
| 1 | 52 | F | 80 | L | Temporal | Oligodendroglioma | III | 29.6 | Yes |
| 2 | 42 | F | 100 | R | Insular | Mixed glioma | II | 33.8 | Yes |
| 3 | 41 | F | 70 | L | Frontal | Oligodendroglioma | II | 16.2 | Yes |
| 4 | 40 | M | 100 | L | Frontal | Mixed glioma | II | 3.9 | No |
| 5 | 44 | M | 100 | L | Frontal | Mixed glioma | II | 24.3 | Yes |
| 6 | 37 | F | 100 | L | Fronto-temporo-insular | Mixed glioma | II | 43.3 | Yes |
| 7 | 40 | M | 85 | L | Insular | Mixed glioma | II | 64.9 | Yes |
| 8 | 42 | F | 100 | R | Parietal | Astrocytoma | II | 47.7 | Yes |
| 9 | 29 | M | 100 | L | Frontal | Oligodendroglioma | II | 15.6 | No |
| 10 | 36 | M | 100 | L | Cingular | Mixed glioma | II | 58.5 | Yes |
| 11 | 39 | F | 80 | L | Fronto-temporo-insular | Astrocytoma | II | 135.5 | Yes |
| 12 | 50 | M | 80 | L | Temporal | Glioblastoma | IV | 46.1 | Yes |
| 13 | 33 | F | 75 | BI | Frontal | Mixed glioma | III | 211.4 | No |
| 14 | 61 | M | 90 | R | Frontal | Oligodendroglioma | II | 26.5 | Yes |
| 15 | 37 | M | 100 | L | Temporal | Oligodendroglioma | II | 17.2 | Yes |
| 16 | 51 | F | 100 | L | Frontal | Mixed glioma | III | 30 | Yes |
| 17 | 49 | M | 90 | L | Temporal | Glioblastoma | IV | 11.4 | Yes |
| 18 | 44 | F | 100 | L | Cingular | Mixed glioma | III | 26.1 | Yes |
| 19 | 36 | M | 90 | L | Temporo-insular | Mixed glioma | II | 119.8 | Yes |
| 20 | 23 | M | 90 | L | Temporo-insular | Mixed glioma | II | 78.6 | Yes |
| 21 | 27 | M | 100 | R | Insular | Mixed glioma | II | 23.8 | Yes |
| 22 | 44 | M | 80 | L | Fronto-insular | Mixed glioma | III | 175.6 | Yes |
| 23 | 36 | M | 100 | L | Frontal | Mixed glioma | IV | 35.2 | Yes |
| 24 | 22 | M | 100 | R | Temporal | Mixed glioma | II | 8.4 | Yes |
| 25 | 52 | F | 80 | R | Temporo-insular | Oligodendroglioma | II | 53.3 | Yes |
| 26 | 40 | F | 90 | L | Frontal | Oligodendroglioma | II | 35.2 | No |

Table 2
Demographic characteristics of study participants.

| Mean (SD) | Patients (n = 26) | Controls (n = 26) |
|---|---|---|
| Age | 40.27 (9.18) | 39.62 (10.88) |
| Sex (M/F) | 15/11 | 15/11 |
| Education, years | 13.81 (2.73) | 16 (1.2) |

Neuropsychological scores of the patients, mean (SD):

| FAB (/18) | 16.52 (1.6) |
|---|---|
| DO70 (/70) | 67.81 (2.4) |
| TMT Part A, time (s) | 32.27 (12.3) |
| TMT Part B, time (s) | 85.96 (34.4) |
| TMT Part B score (/13) | 13.04 (2.1) |
| Forward span/backward span | 5.21 (1.1) / 3.88 (1.2) |
| Stroop inhibition, time (s) | 100.54 (32.4) |
| Stroop flexibility, time (s) | 114.42 (28.3) |
| Stroop flexibility, corrected errors | 1.17 (1.9) |

2.2.2. Procedure
All stimuli were presented on a 17 in. monitor (resolution set to 1200 × 800 pixels). Each participant performed 180 trials, divided into 3 blocks (visual, auditory, and crossmodal). Two sets of 60 items in pseudo-random order were constructed for each block. The visual and auditory conditions were presented in random order, and the crossmodal condition was always presented last. Each trial was initiated with the presentation of a fixation cross for 300 ms, followed by the stimulus presentation. An inter-trial interval of 500 ms announced the next trial. In the three emotion recognition tasks, the participants were instructed to carefully consider all six alternatives before responding. The verbal labels for the six emotion expressions were printed under each picture on the computer screen throughout the test, and the subjects were asked to select the word that best described the emotion presented in each slide. The responses were recorded from mouse clicks. There was no time limit for analysing each stimulus or for responding. The patients were not given feedback on their performance. Ten additional practice trials were conducted prior to each block. E-Prime software (Psychology Software Tools, Pittsburgh, PA, USA) was used to control the visual and auditory stimulus delivery and to record the response times and accuracy for each subject.
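As a rough illustration, the ordering constraints described above (unimodal blocks in random order, crossmodal always last, pseudo-random 60-item lists) can be sketched in Python. The emotion labels beyond happiness, fear, anger and neutral (sadness, disgust) and the no-triple-repeat shuffling constraint are illustrative assumptions, not details taken from the protocol.

```python
import random

# 5 basic emotions + neutral; "sadness" and "disgust" are assumed labels,
# since only happiness, fear and anger are named explicitly in the text.
EMOTIONS = ["happiness", "fear", "anger", "sadness", "disgust", "neutral"]

def pseudo_random_set(n_items=60, max_run=2, rng=None):
    """One 60-item list (10 per emotion), reshuffled until no emotion
    appears more than `max_run` times in a row (an assumed constraint)."""
    rng = rng or random.Random()
    items = [e for e in EMOTIONS for _ in range(n_items // len(EMOTIONS))]
    while True:
        rng.shuffle(items)
        if all(len(set(items[i:i + max_run + 1])) > 1
               for i in range(len(items) - max_run)):
            return items

def build_session(rng=None):
    """Visual and auditory blocks in random order, crossmodal always last."""
    rng = rng or random.Random()
    unimodal = ["visual", "auditory"]
    rng.shuffle(unimodal)
    return {block: pseudo_random_set(rng=rng)
            for block in unimodal + ["crossmodal"]}
```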
(91.18 ± 5.60% vs 93.78 ± 2.70%). We observed a significant interaction between Group and Modality (F(2, 100) = 3.76, p = .027, ε = .94, ηp² = .07). Planned comparisons revealed that the patient group performed more poorly than the CTRL group in the visual modality (87.82% vs 91.73%, F(1, 50) = 4.69, p = .04, ηp² = .09) and the auditory modality (87.95% vs 91.47%, F(1, 50) = 4.76, p = .03, ηp² = .09). By contrast, the scores did not significantly differ between the two groups (F < 1) in the crossmodal condition (Fig. 1).
A significant effect of Emotion was observed (F(5, 250) = 42.31, p < .001, ε = .77, ηp² = .45). The participants recognized happiness (97.82%) and neutral (98.8%) more accurately than the other emotions, particularly anger, which was recognized least accurately (82.98%). There was a significant interaction between Emotion and Modality (F(10, 500) = 37.27, p < .001, ε = .51, ηp² = .43), but not between Group and Emotion (F(10, 250) = 1.02, p = .41, ε = .77, ηp² = .02).
Importantly, a significant effect of Modality was observed (F(2, 100) = 90.27, p < .001, ε = .94, ηp² = .64). Planned comparisons revealed that the participants recognized the emotions more accurately in the crossmodal condition than in the visual and auditory modalities (F(1, 50) = 194.67, p < .001, ηp² = .80). No significant differences were observed between the visual and auditory modalities (F < 1).
For the response times, the results were slightly different. We first observed a main effect of Group (F(1, 50) = 15.65, p < .001, ηp² = .24). Indeed, the LGG group was consistently slower than the CTRL group (2994 ± 972 ms vs 2213 ± 424 ms). The interaction between Group and Modality failed to reach significance (F(2, 100) = .92, ε = .74, p = .40). A significant effect of Emotion was observed (F(5, 250) = 46.35, p < .001, ε = .68, ηp² = .48). Participants responded more quickly to happiness and neutral, and fear was the slowest to be recognized (see Table 2). There was a significant interaction between Emotion and Modality (F(10, 500) = 13.42, p < .001, ε = .52, ηp² = .21) and a significant interaction between Emotion and Group (F(5, 250) = 5.15, p < .001, ε = .68, ηp² = .09). A Bonferroni post hoc analysis revealed that happiness and neutrality were always recognized faster (all p < .001), and the differences between these two emotions and the other (i.e., negative) emotions were larger in the patient group than in the CTRL group.
Finally, we observed a significant effect of Modality (F(2, 100) = 59.83, p < .001, ε = .74, ηp² = .54). Planned comparisons revealed faster responses in the crossmodal condition than in the unimodal (visual and auditory) conditions (F(1, 50) = 105.25, p < .001, ηp² = .68). The auditory modality was faster than the visual modality (F(1, 50) = 17.43, p < .001, ηp² = .26).
3.2. Multisensory integration
To determine the nature of the crossmodal advantage over the unimodal conditions in each group, we used the race model inequality. If the response times obtained in the crossmodal condition are better than predicted by the race model, this provides evidence that the information from the visual and auditory modalities interacted to produce the RT facilitation, suggesting the likely neural integration of the two unisensory inputs. As expected, we observed a violation of the race model prediction for the CTRL group over the 5th, 25th and 35th percentiles of the RT distribution (p < .05). By contrast, in the patient group, the race model was not violated at any percentile of the distribution (Fig. 2).
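For readers unfamiliar with this analysis, the core of the race model test can be sketched as follows: at selected percentiles of the crossmodal RT distribution, the observed crossmodal cumulative probability is compared with the bound given by the sum of the two unimodal cumulative probabilities (Miller's inequality); a positive difference marks a violation, i.e. evidence for integration. This is a minimal sketch, not the exact implementation used in the study.

```python
import numpy as np

def race_model_check(rt_visual, rt_audio, rt_cross,
                     percentiles=(5, 15, 25, 35, 45)):
    """Miller's race model inequality: P(RT_cross <= t) must not exceed
    P(RT_visual <= t) + P(RT_audio <= t). Returns, for each percentile of
    the crossmodal RT distribution, the observed CDF minus the race model
    bound; positive values indicate a violation (evidence for integration)."""
    rt_visual, rt_audio, rt_cross = map(np.asarray,
                                        (rt_visual, rt_audio, rt_cross))
    diffs = {}
    for p in percentiles:
        t = np.percentile(rt_cross, p)                # time point tested
        observed = np.mean(rt_cross <= t)             # crossmodal CDF at t
        bound = np.mean(rt_visual <= t) + np.mean(rt_audio <= t)
        diffs[p] = float(observed - min(bound, 1.0))  # bound capped at 1
    return diffs
```

In practice, such differences are tested for significance across participants at each percentile, as done here for the CTRL and patient groups.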
On the premise that neuropsychological functioning might contribute to emotion recognition difficulties following gliomas, notably in crossmodal integration, we investigated whether a neuropsychological variable correlated with the emotion recognition score (p > .19). Additional analysis showed that the emotion recognition score (ranging from 72.78% to 97.22%) correlated with the KPS score (p = .01) but not with age (p > .05).

Fig. 1. Mean score (%) and response times (in milliseconds) for each modality (visual, auditory, crossmodal) and for each emotion, for the two groups (GLIOMAS, CTRL).
When tumor-related variables were investigated, we found that tumor volume did not influence the emotion recognition score (p > .05). Mann–Whitney U tests revealed no effect of tumor malignancy (LGG vs HGG) on the emotion recognition score in the visual, auditory and crossmodal modalities (all p > .24). An effect of laterality was found in the auditory modality (p = .05), with patients with left gliomas performing better than those with right gliomas. No effect of laterality was found in the visual and crossmodal modalities (all p > .06). Kruskal–Wallis tests revealed no effect of histology (all p > .25) or localization (divided into four groups) (all p > .10). Concerning the tumor localization, slight differences were found between the four subgroups (see Table 4). A Mann–Whitney U test revealed a significant difference between the frontal and the temporo-insular groups in the visual modality (p = .04), with the frontal group responding worse than the temporo-insular group (84.09% vs 91.39%). No significant difference was found in the auditory modality or the crossmodal condition (p > .14).
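The pairwise group comparisons reported here rely on the Mann–Whitney U statistic, which is computed from rank sums over the pooled samples. A minimal sketch (the data passed in are illustrative, not the study's scores):

```python
def rankdata(values):
    """Average ranks (1-based), with ties sharing their mean rank,
    as in standard rank-based tests."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mean of the 1-based ranks i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def mann_whitney_u(group_a, group_b):
    """U statistic for two independent samples (the smaller U is returned,
    to be compared against critical values or a normal approximation)."""
    combined = list(group_a) + list(group_b)
    ranks = rankdata(combined)
    r_a = sum(ranks[:len(group_a)])                       # rank sum, group A
    u_a = r_a - len(group_a) * (len(group_a) + 1) / 2
    u_b = len(group_a) * len(group_b) - u_a
    return min(u_a, u_b)
```

In practice a statistics library would also supply the p-value; the sketch only shows where the statistic comes from.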
Fig. 2. The cumulative probability distributions for discrimination response times to visual (black dots), auditory (black squares), and multisensory stimuli (white dots). The summed probability for the visual and auditory responses is depicted by the race model curve (red stars). (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.)

Table 4
Emotion recognition by modality (visual, auditory, crossmodal) across the tumour localisation subgroups: Frontal (n = 11; L = 9, R = 1, B = 1), Temporo-insular (n = 12; L = 8, R = 4), Fronto-temporo-insular (n = 2; L = 2, R = 0) and Parietal (n = 1; L = 0, R = 1).

4. Discussion
that is sensitive to facial emotional valence; this suggests an association between the cortical markers of facial processing and performance on tasks of social cognition. The present study did not include a measure of real-life social behavior, but the results indicated that the recognition of emotional faces and voices serves as an important mediator for investigating the activities of daily living and continued work in patients with gliomas. These data also highlight the need to use social cognition scales, which mediate the relationship between emotion recognition and daily-life functioning. Importantly, although the present study provides information about the characteristics of emotion recognition and emotional crossmodal integration in patients with gliomas, future studies should investigate more precisely the neural basis of emotional crossmodal integration and focus attention on the relation between deficits in particular emotions and tumor localization, using intraoperative direct electrical stimulation during awake brain surgery in the glioma population.
In conclusion, the results of the present study suggest that patients harboring a glioma present a mild impairment in emotion recognition despite a normal clinical examination and a normal score for the patients' functional status (e.g., KPS). These results support the notion of effective compensatory mechanisms in patients with gliomas, for emotional processes as well as in other cognitive domains. They also confirm the efficacy of congruent multisensory presentation in normal and pathological subjects when identifying emotions. Nevertheless, unlike in the control group, the crossmodality gain obtained in the LGG group might reflect the redundant presentation of the visual and auditory modalities rather than a real multisensory integrative process. Indeed, in addition to the general difficulties in the recognition of emotions, the patient group exhibited a reduced integration of the separate sensory representations of the emotional expressions, which was associated with executive weakness. However, the crossmodality gain might improve emotional abilities in daily life and help maintain the quality of social interactions. Continued effort is required to include more systematic emotion recognition measures and to introduce response times into the neuropsychological assessment of patients with cerebral disease, notably LGG.
Acknowledgments
We are grateful to the patients and healthy volunteers who participated in this study. The authors wish to thank Nicolas Bodeau for his help with the statistical analyses. The authors have no financial relationships or conflicts of interest to disclose.
References
Adolphs, R., Baron-Cohen, S., & Tranel, D. (2002). Impaired recognition of social emotions following amygdala damage. Journal of Cognitive Neuroscience, 14(8), 1264–1274.
Archibald, Y. M., Lunn, D., Ruttan, L. A., Macdonald, D. R., Del Maestro, R. F., Barr, H. W. K., et al. (1994). Cognitive functioning in long-term survivors of high-grade glioma. Journal of Neurosurgery, 80, 247–253.
Baird, A., Dewar, B. K., Critchley, H., Dolan, R., Shallice, T., & Cipolotti, L. (2006). Social and emotional functions in three patients with medial frontal lobe damage including the anterior cingulate cortex. Cognitive Neuropsychiatry, 11(4), 369–388.
Belin, P., Fecteau, S., & Bédard, C. (2004). Thinking the voice: Neural correlates of voice perception. Trends in Cognitive Sciences, 8(3), 129–135.
Belin, P., Fillion-Bilodeau, S., & Gosselin, F. (2008). The Montreal affective voices: A validated set of nonverbal affect bursts for research on auditory affective processing. Behavior Research Methods, 40(2), 531–539.
Berthoz, S., Blair, R. J. R., Le Clech, G., & Martinot, J. L. (2002). Emotions: From neuropsychology to functional imaging. International Journal of Psychology, 37(4), 193–203.
Bonnetblanc, F., Desmurget, M., & Duffau, H. (2006). Gliomes de bas grade et plasticité cérébrale: Implications fondamentales et cliniques. Médecine Sciences, 22(4), 389–394.
Bonora, A. L., Benuzzi, F., Monti, G., Mirandola, L., Pugnaghi, M., Nichelli, P., et al. (2011). Recognition of emotions from faces and voices in medial temporal lobe epilepsy. Epilepsy & Behavior, 20, 648–654.
Bosma, I., Douw, L., Bartolomei, F., Heimans, J. J., van Dijk, B. W., Postma, T. J., et al. (2008). Synchronized brain activity and neurocognitive function in patients with low-grade glioma: A magnetoencephalography study. Neuro-Oncology, 10, 734–744.
Calvert, G. A., Hansen, P. C., Iversen, S. D., & Brammer, M. J. (2001). Detection of audio-visual integration sites in humans by application of electrophysiological criteria to the BOLD effect. NeuroImage, 14, 427–438.
Capelle, L., Fontaine, D., Mandonnet, E., Taillandier, L., Golmard, J. L., Bauchet, L., et al. (2013). Spontaneous and therapeutic prognostic factors in adult hemispheric World Health Organization Grade II gliomas: A series of 1097 cases. Journal of Neurosurgery, 118, 1157–1168.
Catani, M., Dell'Acqua, F., Vergani, F., Malik, F., Hodge, H., Roy, P., et al. (2012). Short frontal lobe connections of the human brain. Cortex, 48, 273–291.
Catani, M., & Ffytche, D. H. (2005). The rises and falls of disconnection syndromes. Brain, 128, 2224–2239.
Chaby, L., & Narme, P. (2009). La reconnaissance des visages et de leurs expressions faciales au cours du vieillissement normal et dans les pathologies neurodégénératives. Psychologie & NeuroPsychiatrie du vieillissement, 7(1), 31–42.
Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Hillsdale, NJ: Lawrence Erlbaum Associates.
Collignon, O., Girard, S., Gosselin, F., Roy, S., Saint-Amour, D., Lassonde, M., et al. (2008). Audio-visual integration of emotion expression. Brain Research, 1242, 126–135.
Collignon, O., Girard, S., Gosselin, F., Saint-Amour, D., Lepore, F., & Lassonde, M. (2010). Women process multisensory emotion expressions more efficiently than men. Neuropsychologia, 48, 220–225.
Correa, D. D. (2010). Neurocognitive function in brain tumors. Current Neurology and Neuroscience Reports, 10, 232–239.
Correa, D. D., De Angelis, L. M., Shi, W., Thaler, H. T., Lin, M., & Abrey, L. E. (2007). Cognitive functions in low-grade gliomas: Disease and treatment effects. Journal of Neuro-Oncology, 81, 175–184.
Correa, D. D., Shi, W., Thaler, H. T., Cheung, A. M., DeAngelis, L. M., & Abrey, L. E. (2008). Longitudinal cognitive follow-up in low grade gliomas. Journal of Neuro-Oncology, 86, 321–327.
Crespi, C., Cerami, C., Dodich, A., Canessa, N., Arpone, M., Iannaccone, S., et al. (2014). Microstructural white matter correlates of emotion recognition impairment in amyotrophic lateral sclerosis. Cortex, 53, 1–8.
De Angelis, L. M. (2001). Brain tumors. The New England Journal of Medicine, 344(2), 114–123.
De Benedictis, A., & Duffau, H. (2011). Brain hodotopy: From esoteric concept to practical surgical applications. Neurosurgery, 68, 1709–1723.
De Gelder, B., & Vroomen, J. (2000). The perception of emotions by ear and by eye. Cognition and Emotion, 14(3), 289–311.
Desmurget, M., Bonnetblanc, F., & Duffau, H. (2007). Contrasting acute and slow-growing lesions: A new door to brain plasticity. Brain, 130, 898–914.
Di Bitonto, L., Longato, N., Jung, B., Fleury, M., Marcel, C., Collongues, N., et al. (2011). Moindre réactivité émotionnelle aux stimuli négatifs dans la sclérose en plaques, résultats préliminaires. Revue Neurologique, 167, 820–826.
Dolan, R. J., Morris, J. S., & de Gelder, B. (2001). Crossmodal binding of fear in voice and face. PNAS, 98(17), 10006–10010.
Douw, L., Klein, M., Fagel, S. S. A., van den Heuvel, J., Taphoorn, M. J. B., Aaronson, N. K., et al. (2009). Cognitive and radiological effects of radiotherapy in patients with low-grade glioma: Long-term follow-up. Lancet Neurology, 8, 810–818.
Du Boullay, V., Plaza, M., Capelle, L., & Chaby, L. (2013). Identification des émotions chez des patients atteints de gliomes de bas grade versus accidents vasculaires cérébraux. Revue Neurologique, 169, 249–257.
Duffau, H. (2005). Lessons from brain mapping in surgery for low-grade glioma: Insights into associations between tumour and brain plasticity. Lancet Neurology, 4(8), 476–486.
Duffau, H. (2014). The huge plastic potential of adult brain and the role of connectomics: New insights provided by serial mappings in glioma surgery. Cortex, 58, 325–337.
Duffau, H., Capelle, L., Denvil, D., et al. (2003). Usefulness of intraoperative electrical subcortical mapping in surgery of low grade gliomas located within eloquent regions: Functional results in a consecutive series of 103 patients. Journal of Neurosurgery, 98(4), 764–778.
Duffau, H., Gatignol, P., Mandonnet, E., Capelle, L., & Taillandier, L. (2008). Contribution of intraoperative subcortical stimulation mapping of language pathways: A consecutive series of 115 patients operated on for a WHO grade II glioma in the left dominant hemisphere. Journal of Neurosurgery, 109, 461–471.
Duffau, H., Taillandier, L., Gatignol, P., & Capelle, L. (2006). The insular lobe and brain plasticity: Lessons from tumor surgery. Clinical Neurology and Neurosurgery, 108, 543–548.
Fecteau, S., Belin, P., Joanette, Y., & Armony, J. L. (2007). Amygdala responses to nonlinguistic emotional vocalizations. NeuroImage, 36(2), 480–487.
Figarella-Branger, D., Colin, C., Coulibaly, B., Quilichini, B., Maues De Paula, A., Fernandez, C., et al. (2008). Classification histologique et moléculaire des gliomes. Revue Neurologique, 505–515.
Gallese, V., Keysers, C., & Rizzolatti, G. (2004). A unifying view of the basis of social cognition. Trends in Cognitive Sciences, 8, 396–403.