
Science in Context 25(3), 401–424 (2012). Copyright © Cambridge University Press

doi:10.1017/S0269889712000129

The Social Brain and the Myth of Empathy

Allan Young

McGill University E-mail: allan.young@mcgill.ca

Argument

Neuroscience research has created multiple versions of the human brain. The “social brain” is one version and it is the subject of this paper. Most image-based research in the field of social neuroscience is task-driven: the brain is asked to respond to a cognitive (perceptual) stimulus. The tasks are derived from theories, operational models, and back-stories now circulating in social neuroscience. The social brain comes with a distinctive back-story, an evolutionary history organized around three interconnected themes: mind-reading, empathy, and the emergence of self-consciousness. This paper focuses on how empathy has been incorporated into the social brain and redefined via parallel research streams, employing a shared imaging technology. The concluding section describes how these developments can be understood as signaling the emergence of a new version of human nature and the unconscious. My argument is not that empathy in the social brain is a myth, but rather that it is served by a myth consonant with the canons of science.

Introduction

Empathy is today a popular subject among social neuroscientists, science journalists, and the consumers of this literature – the curious public, social and behavioral scientists, ethicists, forensic psychiatrists, moral philosophers, other humanities scholars, cognitive psychologists, and investigators working on psychopathy, autism spectrum disorder, and other developmental pathologies. In this paper, I describe the social neuroscience of empathy and its object of inquiry, the social brain, from an ethnographic perspective. The social brain is indissociable from the anatomical brain but likewise something different from other scientific visions of the brain. It is called a social brain because of what it does (it permits minds and brains to read intentions and share feelings inside other minds and brains), how it emerged (it is a product of hominid social evolution), and how these presumptions have shaped research and discourse. For three centuries, human nature and inter-subjectivity were similarly associated with the faculty of reasoning. The golden age of social anthropology, arguably the last of the Enlightenment sciences, was pre-occupied with questions about the nature and universality of reason. Were mentalities and modes of reasoning around the world essentially the same or different? Do human societies occupy positions along

a single developmental continuum, whose bedrock is the “psychic unity of mankind”?

Are magic and science cognate efforts to understand and manipulate the material world? These well-known debates, which included Frazer, Malinowski, Freud, and Evans-Pritchard, focused on mentalities and cultures, and, except for Freud, rarely on biology and evolution (Tambiah 1990, 2). In the anglophone world at least, human nature is increasingly understood with reference to empathy, brain science, and evolutionary biology, rather than reason, and without recourse any longer to social anthropology.

These developments include a widespread public misunderstanding. Neuroscience knowledge about empathy and the social brain derives from two sources: research on mirror neurons and research based on the same technology but in combination with a neo-Darwinian back-story. The two streams are complementary and the boundaries are fuzzy. The misunderstanding is that the capacities and significance of mirror neurons are grossly exaggerated in popular accounts, while the back-story and associated brain research, the focus of much interest among neuroscientists, are poorly understood or ignored. In part one of this paper, I discuss research on mirror neurons, their popular appeal, and some reasons for the declining interest in them among neuroscientists. Part two is about the back-story and related research. The two parts run chronologically in parallel; my emphasis throughout is on empathy.

Part One: Mirror Neurons

Mirror neurons were initially discovered in premotor and inferior parietal regions of rhesus monkey brains (Di Pellegrino et al. 1992). Evidence of human mirror neurons was discovered soon afterward in homologous regions. Most neurons in these brain regions perform a single function: they communicate sensory information to the brain or transmit motor commands from the brain. Mirror neurons combine these functions: an individual observes goal directed behavior being performed by a second individual; the motor activation pattern in the observer’s brain mirrors (matches) the pattern in the performer’s brain; the process is not conscious and the motor behavior is not performed. Mirror neurons are multimodal: they respond to visual stimuli, auditory stimuli (e.g. the sound of a peanut shell being cracked open), imagined events (e.g. an athletic performance), and texts containing action verbs (Fadiga et al. 1996; Fogassi et al. 2002 and 2005; Kohler et al. 2002; Rizzolatti and Arbib 1998; Rizzolatti and Craighero 2004; and Tettamanti et al. 2005).

Mirror neuron research on monkey brains is based on an invasive procedure: microelectrodes are inserted into the brain at selected points. For ethical reasons, this technique was not permissible for human studies. Consequently, human research was based on non-invasive techniques, fMRI and PET. These technologies can image the activation of populations of neurons but not single neurons. The problem with this technology is that the existence of mirror neurons can be inferred but not demonstrated.

The original conception was that every mirror neuron is “strictly congruent” with an identifiable goal-oriented action. The current conception divides mirror neurons into four sub-populations: strictly congruent neurons; broadly congruent neurons (the correspondence is not precise); canonical neurons (these fire when someone observes himself performing a relevant action or observes an object to which this action might be directed, but not when he observes someone else performing the same action); and anti-mirror neurons (which inhibit the performance of mirrored action patterns) (Keysers et al. 2010). Evidence of human mirror neurons is necessarily indirect and contested. Critics have argued that the “mirror effect” that is observed via neuroimaging is the product of three sub-populations (sensory-motor, sensory only, motor only), no one of which corresponds to the original idea of mirror neurons (Dinstein 2008; Dinstein et al. 2007; Dinstein et al. 2008; Jacob 2008; Jacob and Jeannerod 2005; Singer 2006; Turella et al. 2009). The very existence of human sensory-motor neurons has been questioned. But a recent and unprecedented single-neuron investigation (Mukamel et al. 2010), in which microelectrodes were inserted into the brains of surgical patients diagnosed with intractable epilepsy, is said to “finally provide direct electrophysiological evidence that humans have mirror neurons” (Keysers and Gazzola 2010,

R353).

The mirror phenomenon is a brain-to-brain product, unmediated by mental states, and different from situations where a match between brains (observer and performer) occurs in parallel and is routed through consciousness. Here is an example: Participants in this experiment were inserted into an fMRI scanner, where they viewed photos of hands and feet in painful situations. They were next asked to assess the level of pain being experienced by the anonymous individuals. Participants’ assessments and brain images were compared. High assessments correlated with high activations in brain regions known (from previous research) to play a significant role in pain processing (Jackson et al. 2005, 771). In other words, the distinctiveness of the mirror phenomenon depends on more than demonstrating neural matching: it is necessary to likewise demonstrate the absence of mental processing. To understand the efforts of researchers to connect mirror neurons to empathy, it is important to carefully distinguish between mirror states and parallel states. There are social neuroscientists who insist on this distinction (e.g. Singer et al. 2004), but others who do not. For convenience, I will call the latter the “maximalists.” According to the maximalists, since the frontal and parietal regions that are commonly associated with mirror neurons are connected to multiple brain regions, it can be presumed that mirror neurons extend to these regions as well (Keysers and Gazzola 2010, R353). The relevant research has focused on the role of mirror neurons in the social emotions, notably disgust. In the next paragraphs, I summarize the work published on this subject by maximalists (Vittorio Gallese, Marco Iacoboni, Giacomo Rizzolatti, Christian Keysers and colleagues) during the period of most intense interest, between 2002 and 2007.

Mirror neurons and disgust

“[T]he fundamental mechanism at the basis of the experiential understanding of others’ actions is the activation of the mirror neuron system. A similar mechanism, but involving the activation of viscero-motor centers, underlies the experiential understanding of the emotions of others.” Evidence obtained from multiple sources (electrical stimulation of monkey brains, task-driven fMRI research, clinical

observations) indicates that these neural populations intersect in the insula. The insula “contains neural populations active both when the participants directly experienced disgust and when they understood it through the facial expression of others.” It is a “center of viscero-motor integration,” and a relay between action representations (mirror neurons) and emotion (Gallese et al. 2004, 396, 398–400). This “large-scale network,” composed of the mirror neuron system, the insula, and limbic structures including the amygdala, would permit an observer “to empathize with others through the representation and ‘inner imitation’ of the actions (facial expressions, body postures) of others” (Iacoboni and Dapretto 2006, 942; also Carr et al. 2003). (This “inner imitation” or “action representation” is synonymous with the “activation pattern” that I have already mentioned.) The process is demonstrated in “an fMRI study in which participants inhaled odorants producing a strong feeling of disgust. The same participants observed video clips showing the emotional facial expression of disgust. Observing such faces and

feeling of disgust activated the same sites in the anterior insula. [Just] as observing hand actions activates the observer’s motor representation of that action, observing an emotion activates the neural representation of that emotion.” Activation was more intense when participants imitated the facial emotion than when they observed it. An analogous effect is observed in monkey and mirror neuron studies of goal directed behavior (Wicker et al. 2003, 655; also van der Gaag et al. 2007). In an interview in the New York Times, Christian Keysers explained that this neural network underpins additional empathic social emotions – his list includes guilt, shame, pride, embarrassment, humiliation, rejection, and lust – and also empathic responses to pain (Blakeslee 2006). These researchers (Keysers, Kass, and Gazzola 2010) welcomed the publication of Mukamel et al.’s single-neuron research, since it confirmed the existence of human mirror neurons and, more specifically, located them in regions beyond the frontal (premotor) and parietal regions of the brain, i.e. “supplementary motor area, and hippocampus and environs.” While these findings are consonant with the maximalist vision of mirror neurons, there is only indirect evidence connecting mirror neurons to the insula and, through this interface, connecting mirror neurons into a “large-scale network” and social emotions. The reasoning is illustrated in an fMRI study (Carr et al. 2003) in which participants are asked to both imitate (a motor activity) and observe (a visuo-sensory experience) emotional displays. These are the findings:

1. Imitating and observing the same emotion are shown to activate the same premotor areas. This finding is necessary, but not sufficient, for showing the presence of sensory-motor neurons and, more specifically, mirror neurons.

2. Imitating an emotion is shown to activate additional brain areas, insula and amygdala, that are “relevant” to (unconscious) action representations. This finding is necessary, but not sufficient, for showing “the modulation of the action representation circuit onto limbic activity.”

3. In other words: sensory stimulus → activation of mirror neurons → action representation → insula interface → limbic activation (includes the amygdala).

The lynchpin in this operation is the presence of “action representations” (Fadiga and Craighero 2004; Shmuelof and Zohary 2007). I will continue with this point in the following section.

Mirror neurons and empathy

Although there is no standard definition of empathy among social neuroscientists, there is a general understanding that “empathy” refers to a state that is shared by an observer and another individual, present or imagined. Empathic states include, singly or in combination, cognitive empathy, emotional empathy, and somatic empathy (mainly pain). Human mirror neurons are routinely depicted as being intrinsically empathic (e.g. Iacoboni and Dapretto 2006; Iacoboni 2009; Rizzolatti and Craighero 2004; Nummenmaa et al. 2008). Motor theories of cognition date back to the nineteenth-century (James 1890, on ideomotor theory); mirror neurons provide the latest chapter in this history (Jeannerod 1994; Hickcok 2009). Until recently, interest in empathy has concentrated in the fields of developmental psychology, social psychology, counseling psychology, and in clinical investigations of certain psychiatric disorders, notably childhood autism and personality disorders. Biological research on empathy focused on limbic system and neuroendocrine responses to social signals such as expressed emotion (see Brothers 1989 for a review). The discovery of the human mirror neuron system has expanded the neurological frontiers of empathy by identifying a biological mechanism that appears to illuminate, if not answer, one version of an enduring philosophical question, “the problem of other minds.” How do minds infer intentions embedded in other minds (Custers and Aarts 2010, 49)? The simplest form of mirror neuron activation (and action representation) is referred to as “resonance” and it is unconscious. The human mirror neuron system is capable of “de-coupling” an action representation so that it can be projected (attributed) back to its source, the performer, thereby differentiating between self and other. Recent research suggests that sometimes de-coupling is spontaneously reversed during memory reconsolidation (retrieval). This occurs in instances where an observer remembers

someone else’s performance (via an action representation) as having been his own experience – a phenomenon that is conventionally called “source amnesia” and “false memory” (Lindner et al. 2010). The mirror neurons and action representation are sufficient for inferring the performer’s intention. I put it this way – failing to indicate who or what is doing the inferring – to make a point. Many theorists presume that mirror neurons possess this capacity: that is, they can interpret action representations and detect intentions without cognitive processing by other parts of the brain. It is possible that certain non-human species are capable of doing this, but there is much disagreement on this point. It is clear that only humans are capable of the final stage, in which de- coupled representations can be objectified, i.e. detached from their source (performers and original performances) and stored for future use – as templates and prototypes for imitation, emulation, analogical reasoning, and sophisticated forms of mind- reading. While language obviously facilitates objectification, researchers are divided as to whether it was essential for the emergence of this capacity, since pre-linguistic babies are demonstrably capable of performing these functions.

The final process, the basis of cognitive empathy, operates with “different levels of action representation, from the motor intention that drives a given chain of motor acts

to the propositional attitudes (beliefs, desires and so on) that explain the observed behaviour in terms of its plausible psychological reasons.” The end-product is an over-arching “mentalizing network” that connects the mirror neuron system – whose action representations provide “a subpersonally instantiated common space” between observer and performer and a scaffold for “bootstrapping mutual intelligibility” (Gallese and Goldman 1998; Gallese 2001 and 2003) – to high-level operations (reasoning in cortical areas without demonstrable mirror properties) (Corrado 2010, 264; also Kilner and Frith 2008). Evidence connecting mirror neurons to the putative network and other brain regions is, once again, “indirect,” e.g. bolstered by efforts to discover mirror neuron abnormalities in people diagnosed with autism spectrum disorders, a diagnosis associated with poor mind-reading abilities and other deficits in cognitive empathy (Hadjikhani et al. 2006; Oberman et al. 2005; Rizzolatti and Fabbri-Destro 2010).

Controversy

There are obvious similarities between human mirror neurons, as I have described them, and the mental modules delineated by Jerry Fodor (1983) and later elaborated by evolutionary psychologists (Barkow et al. 1992). The modules and neurons respond spontaneously to domain-specific sensory inputs that require no cognitive processing. Their operations are depicted as automatic (obligatory), rapid (unimpeded by consciousness), and encapsulated (invisible to reflective consciousness). Each is associated with a dedicated neural architecture and a characteristic pattern of

decomposition (pathology). And they are both products of evolution and adaptation. Commonality is exemplified by the so-called “intentionality detector” (a function required for mind reading) that evolutionary psychologists conceive as a mental module and neuroscientists attribute to human mirror neurons. The intentionality detector also highlights an important difference, affecting the way in which the two perspectives are connecting the mind to the brain: mental modules are connected to the brain metaphorically (e.g. via “inference engines” and “mental organs”); mirror neurons are biological mechanisms that uniquely bridge the gap between mind and brain (via action representations). While the discovery of a visible bridge connecting mind to brain and the possibility, perhaps inevitability, of reducing mind to brain, has stimulated intense popular interest in mirror neurons, these claims have likewise drawn the attention of skeptical researchers. Skeptics have three concerns. Do mirror neurons exist? (Mukamel et al. 2010 has resolved this concern for the time being). If they do, do they possess this list of module-like features? Is there convincing evidence that mirror neurons can independently infer intentions? To answer these questions, one should begin with the lynchpin of mirror neuron operations, the notion of “action representations.” The cognitive science view of the central nervous system is that it is a dynamic apparatus comprising parts and processes that respond adaptively to a dynamic external world, a process facilitated by a sensory-motor loop. Adaptation is constant and requires the nervous system to construct and continually update its representations of the external world (Boden 2006, 1178–1179). Mirror neurons are presumed, by advocates, to be pre-adapted components, and their action representations are static, unaffected by experience or memory. During interactions with the external environment – observing behavior – mirror neurons form “logically related” chains of representations that, in turn, make it possible to represent performers’ intentions (Iacoboni et al. 2005). The process takes place outside of consciousness, without the interference of mentalizing structures. This “logic” is unlike propositional logic, and Gallese identifies it with the brain’s “forward model architecture.” This model is a “theoretical” system that describes how motor output and bodily movement is regulated in vertebrate species. Thus:

When I am going to stretch my arm to grasp a handle in front of me, the resulting postural perturbation that would follow, causing my body to bend [forward], is canceled by a forward signal sent to the posterior muscles of my leg, which stabilize my standing posture. The muscles contract before my arm is set in motion. The contraction anticipates, predicts the outcome of the programmed action of the arm, [the] perturbation, [thus] preventing it (Gallese 2001, 38).

Neither overt knowledge nor conscious inference is involved.

The job of the system represented in the forward model is to compare its established predictions/representations with internal sensory feedback from the external environment (“inverse model representations”). When a mismatch between predictions and real-time feedback exceeds a predetermined limit, the forward model readjusts the motor command. Following sufficient cycles, a revised prediction/representation is established. (A schematic version of this comparator loop is sketched at the end of this section.) The forward model logic – thus logical relations among mirror neuron representations – has been appropriated from the field of control engineering, where the model is employed to manage dynamic constellations of internal forces, resistances (intrinsic to all mechanical systems), and feedback from the external world (Schwartz 1999; Kosslyn 2005; Boden 2006, 1184). De Vignemont and Haggard (2008) systematically decompose the non-propositional logic attributed to mirror neurons.

There are prominent neuroscientists who reject this account, on the grounds that mirror neurons cannot independently infer intentions because they do not possess the modular properties that make this feat conceivable. Thus according to Heyes and colleagues, humans are not born with mirror neurons; while human mirror neurons do emerge, they are not pre-adapted to anything in particular. Evolution provides us with “motor neurons that become mirror neurons, but it has not selectively established links between visual and motor neurons coding the same action” (Heyes 2010, 790). These neurons form during each individual’s development via a familiar Hebbian process, the idea that “neurons that fire together wire together” (Heyes 2010, 789; Hickok 2009). Similarly, it is mistaken to suppose that these neurons are capable of representing a movement or object without a context. Hume correctly argued that meaningful perceptions presuppose an act of interpretation and contextualization, and this act relies on memory and associations acquired through experience (Brass et al. 2007; Csibra and Gergely 2007).

Consider an experiment conducted by Gergely et al. (2002). Preverbal babies observed a goal-directed action: a seated adult activated a light-box by leaning forward and touching the box with her forehead. At one session (context), her hands were encumbered and unavailable; at a second session, her hands were free. Two-thirds of the babies imitated the unusual head-down behavior after watching the hands-free demonstration; just one-fifth imitated this behavior after observing the hands-occupied demonstration. In the first context, babies “inferred that the head action offered some advantage,” since the demonstrator could have turned on the light in the familiar way, using her hands, if she had wished. In the second context, most babies chose to emulate motor actions in their repertoire (using their hands), after “presumably concluding that the head action was not the most rational.”
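To make the comparator logic concrete, here is a minimal sketch of a forward-model loop of the kind described above, assuming a scalar motor command and a fixed external perturbation. The names (run_forward_model, plant, tolerance) are illustrative and are not taken from Gallese or from the control-engineering literature cited above, and, for simplicity, only the internal prediction is revised rather than the motor command itself.

```python
# A minimal sketch of a forward-model comparator loop (hypothetical names;
# not code from any cited study).

def run_forward_model(commands, plant, tolerance=0.1, learning_rate=0.5):
    """commands: sequence of motor commands; plant: function returning the
    sensory feedback each command actually produces."""
    perturbation_estimate = 0.0                        # the model's internal representation
    for command in commands:
        predicted = command + perturbation_estimate    # forward prediction
        actual = plant(command)                        # real-time sensory feedback
        error = actual - predicted                     # mismatch signal
        if abs(error) > tolerance:                     # mismatch exceeds the predetermined limit?
            perturbation_estimate += learning_rate * error  # revise the representation
    return perturbation_estimate

# Example: the environment adds a constant perturbation of 0.8 that the model
# gradually learns to anticipate (and could therefore cancel in advance).
estimate = run_forward_model(commands=[1.0] * 20, plant=lambda c: c + 0.8)
print(round(estimate, 2))  # approaches 0.8, within the tolerance
```

The point of the sketch is only that prediction, comparison, and revision can proceed without any propositional reasoning, which is the sense in which the forward-model “logic” differs from inference.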

Conclusion to Part One

There is a consensus among social and cognitive neuroscientists that human mirror neurons exist in one form or another. Opinion is divided on whether mirror neurons are pre-adapted to recognize (match) and represent specific goal directed actions and identify performers’ intentions. One opinion is that mirror neurons perform both functions. An alternative view among neuroscientists is that mirror neurons

exist but are not innate and perform neither function. A third view questions the existence of mirror neurons, but doubts seem untenable in the light of recent single-neuron research. Mirror neurons are now widely identified with embodied empathy (biologized intersubjectivity). This is one way for fMRI images to represent embodied empathy. Embodied empathy is also represented in research that shows matching brain activations in observer and performer, but without the involvement of mirror neurons. The distinction between the two demonstrations – between mirrored and parallel kinds of activations – is very often blurred in journalistic accounts, leaving the mistaken impression that mirror neurons are a default mechanism rather than just one possibility.

Part Two: A neo-Darwinian back-story

A narrative’s “back-story” is a fictive or notional history that precedes the events described in the narrative. An effective back-story makes fictional characters “recognizable,” and creates expectations in the reader that the author can manipulate as the narrative develops. A prologue is an overt back-story; the “introduction” section in science publications performs a similar function. The invocation can be didactic, as when a fictional character overtly reflects on his past, or subtle, as when back-stories unfold in a series of clues and strings of bibliographic citations. The effectiveness of these clues for creating a sub-text depends partly on the reader’s background knowledge.

This neo-Darwinian back-story is a plausible genealogy of the social brain. By the end of the Paleolithic era, the course of social and biological evolution had produced the “psychologically modern” human and the cognitively demanding kinds of reciprocity that are both source and product of the social brain. This story has been (and continues to be) erected piece-meal: the work of multiple authors, each tackling a discrete puzzle, with no collective goal in mind. The story is compelling because it is consonant with the available empirical evidence and because the constituent puzzles and solutions are logically connected. The story is also implicit, in that it is part of the collective consciousness of social brain researchers, but there is (as yet) no occasion when it is retold in its entirety. Individual investigators are not necessarily familiar with, or subscribe to, all of the episodes. While individual episodes can be (and are) operationalized through empirical research, the entirety – extending perhaps seven million years – can only be inferred, and a researcher can justifiably claim that her work is confined to the topic that currently attracts her attention, e.g. the intersection of affective empathy and perceived fairness. One might compare this back-story with the idea of myth described in Paul Veyne’s book, Did the Greeks Believe in their Myths? An Essay on the Constitutive Imagination:

[N]othing shines in the night of the world. The materiality of things has no natural luminescence. But the accidents of human history, as erratic and unplanned as the successive hands in a poker game, lead men to shine an endlessly changing lighting on their affairs. Only then is the materiality of things reflected in the light. This lighting begins to make a certain world exist. It is a spontaneous creation, the product of an imagination. When a lighted clearing appears in this way, it is generally taken for the very truth, since there is nothing else to see. (Veyne 1988, 125)

Before proceeding to the back-story, I want to clarify what “neo-Darwinian” means in the context of the social brain. Beginning in the 1950s, John Maynard Smith, an evolutionary biologist, pioneered an empiricist approach to natural selection that combined the methods and perspectives of population genetics, game theory, and cost-benefit analysis. The approach was the basis for a dialectical understanding of human biological evolution and, as we shall see, a back-story for the neuroscience revolution (Maynard Smith 1979). In an article, “Reconciling Marx and Darwin,” published shortly before his death in 2004, Maynard Smith recalls Marx’s thesis on human consciousness – thoughts, inferences, perceptions, and desires – the view that consciousness is determined by the material conditions of social life. Marx might reasonably claim to be a materialist, Maynard Smith writes, but he was not a reductionist, someone interested in investigating the biological evolution of social life and the brain that made human consciousness possible in the first place. Marx’s vision of human nature as malleable – a view shared by many cultural anthropologists during the 1950s – was unscientific and proved to be “tragically flawed” and a “manifest failure” when put into action by communist regimes.

The neo-Darwinian back-story for the social brain begins with two puzzles. It is propelled forward, from the emergence of the earliest hominids (perhaps five million years ago) to the advent and dispersion of psychologically modern humans, by a dialectical logic. I will consider each puzzle in turn.

The first puzzle concerns altruism. Population geneticists define altruism as behavior that transfers some or all of the altruist’s reproductive potential (fitness) to a beneficiary. In the most extreme case, an altruist dies so that a beneficiary might live and reproduce. Altruistic behavior is often observed in animal populations, and presumed to be genetically determined. It is a puzzle because it gives the non-altruist recipients (lacking altruism genes) a reproductive advantage. They will eventually outbreed altruists, and altruists and altruism genes will eventually disappear from the population. This does not happen, however, and this is the puzzle. W.D. Hamilton solved the puzzle in the 1960s with the theory of kin selection (inclusive fitness). The theory says that altruistic behavior, including self-sacrifice, has favorable cost-benefits if the beneficiaries are closely related to the altruists. When this happens, altruism preserves a homogeneous gene pool and a continuing supply of altruism genes for future generations.
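The cost-benefit condition Hamilton identified is conventionally written as Hamilton’s rule. The article does not spell the formula out, so the following is only the standard textbook statement, added to put the claim about relatedness and cost-benefits in symbols.

```latex
% Hamilton's rule: kin-directed altruism can be favored by selection when the
% benefit to the beneficiary, weighted by relatedness, exceeds the altruist's cost.
\[
  r\,b > c
\]
% r = coefficient of relatedness between altruist and beneficiary
% b = reproductive benefit conferred on the beneficiary
% c = reproductive (fitness) cost paid by the altruist
```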

Hamilton’s solution leads to a new puzzle, once biologists discover populations in which altruistic behavior is common but the altruists and beneficiaries are often unrelated or too distantly related for Hamilton’s solution to apply. Robert Trivers, an anthropologist, solved this puzzle in 1971 by demonstrating that the arrangement works (benefits are greater than costs) if altruism is reciprocal. The recipient repays the altruist. Reciprocal behavior is common among non-human primates, but limited to individuals living in close proximity. It is typically restricted to grooming and, less often, food-sharing, and the interval between gift-giving and repayment is brief, a few seconds in the case of capuchin monkeys and just minutes with chimpanzees. More complex forms of reciprocity would involve dispersed partners, and long intervals between gifts and repayment. In time, multiple local groups would be connected, and extensive social networks and coalitions would develop. But these developments depend on the ability of the individuals to keep track of their exchanges and the reputations of potential exchange partners. Thus memory-based reciprocity represented a great leap forward, but was cognitively demanding, and presupposed various developments in the nervous system, including the capacity to inhibit the archaic impulse for immediate gratification.

When altruism and reciprocity are limited to a genetically homogeneous population, cheating is impossible. Of course there will be non-reciprocators and their reproductive fitness will benefit from the costs absorbed by the altruists. But the costs and benefits circulate within a closed system. The term “cheater” is reserved for genetically heterogeneous populations, in which costs and benefits can accumulate within different lineages – circumstances that emerge together with memory-based reciprocity. Once again, there is a solution (reciprocity) that creates a problem (cheaters) and a puzzle. Cheaters are inevitable because all organisms are driven to maximize their reproductive interests and the interests of close kin – a neo-Darwinian premise. The cost-benefits of cheating always favor cheaters. All things being equal, cheaters (non-reciprocators) will outbreed the reciprocators; the nascent social networks will collapse; human social evolution will progress no farther (Nowak and Sigmund 2005; Rosas 2008). Of course the collapse did not occur. This is a puzzle. There is a neo-Darwinian solution, and it leads to a further contradiction. Puzzle → solution → contradiction → puzzle, etc.: this is the dialectic that glues the back-story together.

In this case, the solution is in two parts. The social fabric is preserved by the evolution of positive sentiments that include friendship, gratitude, and sympathy. Fast forward to the present time: evidence from game-playing experiments, mathematical modeling and simulation, and ethnographies of small-scale, pre-industrial communities indicates that positive sentiments are insufficient to prevent the collapse. Something stronger is needed and the evidence points to a gene-driven impulse to punish cheaters. The solution is efficient except in one respect: it cannot pay for itself. The fitness costs of being an enforcer exceed the benefits. Punishment consumes resources (e.g. energy) and can be terminally expensive if cheaters retaliate violently. For this reason, the enforcer’s behavior is labeled “altruistic punishment.” There is a further problem or contradiction, in that punishment creates a new class of cheaters, namely friends and neighbors who are good reciprocators but unwilling to be enforcers. These are “second order cheaters”; they get the cost-free benefits of punishment and eventually they should outbreed gene-driven enforcers (Boyd et al. 2003).

Human nature and empathic cruelty

Punishment can prevent entropy, but why would a rational individual – someone innately self-interested and able to estimate cost-benefits – become an enforcer? Benefits are often hypothetical (scheduled to arrive in the distant future) and indirect (dissuading potential cheaters), and costs are unpredictable (perhaps bringing retaliation by the cheater or his kin). Even when an enforcer gets his fair share, he cannot know whether this would happen without his intervention. Thus material rewards can provide only a weak motive for altruistic punishment.

A solution is described in “The neural basis of altruistic punishment,” published in Science (de Quervain et al. 2004). The experiment is a bit of ontological theater, based on the “trust game.” The game comprises an interaction between two anonymous participants, A and B. Individual A plays the game seven times, facing seven different Bs. Each participant is given 10 “money units,” convertible into Swiss francs. The rule is that A can transfer all or none of his units to B. If he gives 10 units to B, they are quadrupled, so that B now has 50 units (40 plus his original 10). B can now send 25 units to A (“fair” behavior), or he can keep the entire 50 for himself (“unfair”). A is now given an additional 20 units. If A now wants to punish B, he can do this in three ways: 1. he can buy “penalty points” (each point costs A one unit, and subtracts two units from B); 2. he can reduce B’s sum cost-free (A does not have to pay for the penalty points); or 3. he can reduce B’s sum symbolically (B is informed of A’s decision but loses no units). Each game is limited to just one formula (1, 2, or 3), designated by the researchers. There is an additional variable: in some games, a “random device” determines whether B’s choice will be fair or unfair. A knows the rules obtaining during each game: that is, whether his option is 1, 2, or 3, and whether B’s choice was decided by the random device. Once A learns B’s choice, he has one minute to make a decision – whether to punish, how much to punish. During this one-minute interval, his brain is scanned via PET. A’s brain images reflect his deliberations and his anticipation (imagining) of B’s response. Thus the experiment reproduces moments in the evolutionary narrative – the punishment of free-loaders by enforcers is simulated when B is unfair and A penalizes him; altruistic punishment is simulated by option 1 (A buys B’s penalty points).

The PET images of enforcers’ brains show activation of the caudate nucleus of the dorsal striatum, a region associated with dopamine excretion and part of the brain’s “reward center.” The images also indicate that the intensity of the activation correlates positively with the severity of the punishment – the number of penalty points that A assigns to B. Thus the images show that when A is paying out units to punish B, A’s brain is paying itself back with pleasure. In which case, (altruistic) punishment is its own reward. The Concise Oxford Dictionary defines “cruelty” as “having pleasure in another’s suffering.” If so, the back-story permits the conclusion that cruelty entered human nature along with our felicitous pro-social disposition to altruistic punishment. (This is an ethnographic observation, not a moral judgment.)
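The arithmetic of the game is easy to lose track of, so here is a minimal sketch of the payoffs described above, restricted to option 1 (costly punishment). The function trust_game and its structure are hypothetical illustrations, not code or terminology from de Quervain et al., and the cost-free and symbolic options and the random device are left out.

```python
# A minimal sketch (hypothetical code, not from de Quervain et al. 2004) of the
# trust-game payoffs described above, restricted to the costly-punishment option.

def trust_game(transfer, fair, penalty_points=0):
    """transfer: units A sends to B (0 or 10); fair: whether B returns 25 units;
    penalty_points: points A buys (each costs A 1 unit and removes 2 from B)."""
    a = 10 - transfer              # A keeps whatever he does not send
    b = 10 + 4 * transfer          # B's own 10 plus the quadrupled transfer
    if fair and transfer:          # "fair" behavior: B sends 25 of his 50 back to A
        b -= 25
        a += 25
    a += 20                        # A's additional 20 units before deciding to punish
    a -= penalty_points            # costly punishment: A pays 1 unit per point ...
    b -= 2 * penalty_points        # ... and each point subtracts 2 units from B
    return a, b

print(trust_game(transfer=10, fair=True))                      # (45, 25): no punishment needed
print(trust_game(transfer=10, fair=False, penalty_points=10))  # (10, 30): A pays to hurt B
```

The second call corresponds to the situation that interests the researchers: punishing an unfair B lowers A’s own payoff, which is why the back-story requires a motive other than material reward.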

The enforcers in the experiment actively imagined the disappointment that would be experienced by the defectors; they vicariously participated in the defectors’ state of mind. This is not simple cruelty but, more precisely, empathic cruelty.

Research by Takahashi et al. (2009) on Schadenfreude is even more explicit. According to Takahashi, Schadenfreude (a pleasurable response to news that a misfortune has befallen a person who is envied or resented) and envy (a painful feeling of inferiority and resentment that results from awareness of someone else’s superior quality, achievement, or possessions) are two sides of one coin. Students in the experiment read descriptions of three fictive students – A, B, and C – and were instructed to see B and C from A’s perspective. Student A had average abilities, achievements, possessions, social endowments and prospects. Student B was superior and successful in each respect and also in life domains important to A. Student C was superior and successful but in domains not important to A. Participants silently read additional texts pertaining to A, B, and C while their brains were scanned (fMRI). In phase one, the texts described the successes of B and C, and participants reported how envious the descriptions made them feel. In phase two, texts described misfortunes that spoiled events and prospects for A, B, and C. Participants reported the intensity of their pleasure (Schadenfreude) regarding each of the events. Brain images and self-reports were compared.

Empathic cruelty has three elements: the target’s present or anticipated distress, the perpetrator’s pleasure (reward), and empathy (shared pain, distress, etc.). In Takahashi’s experiment, brain images demonstrated the pleasure part directly, via the activation of the participant’s reward center (dorsal and ventral striatum and medial orbitofrontal cortex). Evidence of empathy is more complicated. Prior research shows that cognitive conflicts and social pain are processed in the dorsal anterior cingulate cortex (dACC), part of the brain’s “pain matrix.” Brain images of the participants in Takahashi’s experiment showed activation of the dACC after reading the texts. This activation had two sources: the participant’s envy (phase one) and his internal representations (imagination) of the target’s distress (phase two). Thus the brain images are evidence of shared affect identified with empathy. To the extent that envy is an expression of frustrated entitlement, the envied person (target) in Takahashi’s experiment resembles the unfair person in de Quervain’s experiment. No data relating to empathic affect was collected by de Quervain et al. Participants were simply asked to describe their own feelings while deciding to punish unfair participants (for additional research relating to empathic cruelty, see Fehr and Camerer 2007; Fliessbach et al. 2007; Knoch et al. 2006; Lanzetta and Englis 1989; Shamay-Tsoory et al. 2009; Singer et al. 2006).

Empathy and evolution

The neo-Darwinian back-story of the social brain starts with two puzzles: one puzzle concerns altruism, the second concerns the brain. The size of the human brain is

an evolutionary puzzle. Our ancestors split from the great apes six million years ago. During this period, the ancestral human brain quadrupled in volume. The metabolic cost of the human brain is enormous: it constitutes 2 per cent of total body weight and consumes 15 per cent of cardiac output and 20 per cent of body oxygen, and these demands are ceaseless and inflexible. We can assume that the bigger brain paid for itself, yielding favorable cost-benefits. However efforts to model these developments indicate that increasing metabolic cost would eventually exceed benefits. Why did the brain continue to grow (in size and power) when further growth was no longer adaptive? This is the puzzle, and its solution is a story about how brains adapted to other brains. The process is a cognitive arms race (Byrne and Whiten 1988; Barton and Dunbar 1997; Dunbar 2003). It begins with the emergence of a “mind-reading” capacity: the ability to detect the intentions and predict the behavior of other individuals. Mind-reading facilitates more complex social relations, but it also facilitates “cheaters,” individuals who use mind-reading to manipulate and deceive other individuals. All things being equal, cheating is cost-effective: it gives cheaters a reproductive advantage. As cheaters increase as a proportion of a population, social life grows more unpredictable, thus undermining the stability of social relations. Entropy looms. This might have been the evolutionary fate of our hominid ancestors, but it was not. Entropy was avoided through a further improvement to the neural hardware: the emergence of a “cheater detector.” This could be no more than a temporary solution however. The next generation of opportunists used their improved brains to subvert the function (or adaptation) of the cheater-detector. Entropy was avoided by the evolution of an improved version of the cheater-detector that would, of course, facilitate the emergence of a cohort of improved cheaters. And so on over millions of years, following the dialectical logic of the neo-Darwinian back-story. This part of the story begins with mind-reading. Opinion in neuroscience is divided on the biological basis of this ability. There are rival explanatory accounts of mind-reading: a version based on mirror neurons and a version based on cognitive mechanisms, including analogical reasoning, as mentioned by Hume. In both versions, the evolution of empathy is understood to be a pro-social development: “the ‘glue’ of the social world, drawing us to help others and stopping us from hurting others” (Baron-Cohen and Wheelwright 2004, 163; see also Lawson et al. 2004; Baron-Cohen et al. 2005; Wheelright et al. 2006; Williams et al. 2001; Iacoboni and Dapretto 2006). According to Simon Baron-Cohen, an authority on autism, human evolution produced two (polar) kinds of brains: a female brain with highly developed empathic capacities and a male brain adapted to manipulating objects and creating systems. Empathy originated as a pro-social adaptation allowing females to detect the wants of pre-verbal children and the moods of the potentially dangerous males with whom they lived (see also Hrdy 2009). Autistic individuals are characteristically poor empathizers, and autism’s epidemiology is biased towards males: 10 to 1, in high functioning autistic disorder.

In both versions, there is the further assumption that empathy is intrinsically a morally positive disposition. According to Baron-Cohen, we respond to suffering in three ways: the response mirrors the sufferer’s distress (we experience it); the response is culturally appropriate (e.g. pity) but does not mirror the suffering; or the observer takes pleasure in the sufferer’s condition. Baron-Cohen equates “empathy” with the first two responses, and explicitly excludes the third. De Quervain’s research vindicates a further possibility – empathic cruelty – in which the observer’s brain mirrors the sufferer’s distress and also takes pleasure in the sufferer’s condition. This is an unexpected twist in the evolutionary back-story: evidence of an evolved empathic disposition that is simultaneously pro-social and cruel.

Human nature in the age of neuroscience grows morally complex: a theme that recurs in the story of the cognitive arms race, where an uncomplicated kind of cheating, the refusal to reciprocate, evolves, via mind-reading, into deception, the effort to represent the current situation as something different from the reality. Deception puts great demands on the brain, which is now required to do two things simultaneously. An individual must construct a lie and also withhold (inhibit) the truth. Experimental and clinical evidence suggests that telling the truth is the brain’s default response, a legacy of a pro-social evolutionary adaptation. “Responding with a lie demands something ‘extra’, and will engage executive prefrontal systems [responsible for planning, decision-making, and monitoring] more than does telling the truth” (Spence 2004, 8; see also Spence et al. 2004). Thus a deceiver is in constant danger of signaling the truth and betraying himself, for instance, by involuntary and detectable hesitation preceding a lie. To succeed, deception requires a capacity for self-deception, the ability to conceal one’s true intentions and facts from oneself (Trivers 1971). At this point, two unprecedented objects are created – there is the other (created via projection) and the self (refracted in a doppelganger, the fraudulent self).

Empathic time travel

The evolutionary origins of the self are brought to life in research on “mental time travel” (Suddendorf et al. 2009). Mental time travel is the capacity to project one’s self into situations in the past, future, and subjunctive (an alternative scenario to the actual past or present). Travel to the past evolved first and provided a prototype (an episodic memory in which the thinker is spectator or protagonist) for constructing mental representations of possible futures and also alternative presents and pasts. 1

1 See Ingvar 1985 for initial appearance of “memory of the future”; see Busby and Suddendorf 2005, and Schacter et al. 2007, on the role of the “prospective brain” in facilitating strategic planning and behavioral flexibility in new situations; and see Addis et al. 2007, Okuda et al. 2003, and Szpunar et al. 2007, on shared and non-shared neural substrates of memories of the past and future.

The emergence of language – notably pronouns and verb forms – and empathy were
prerequisite for the evolution of mental time travel (Corballis 2009). The idea that we simply “project” ourselves wholesale into the past or future oversimplifies time travel. It is a first-person experience that requires the splitting of the self: one must be both here-and-now and there-and-then at the same time. The bond between the split-selves is empathic, but not necessarily positive. (In this sense, time travel parallels Trivers’ account of self-deception.) Neuroscience makes it possible to see time travel: pain provides an efficient modality. Jean Decety and his collaborators conducted fMRI experiments in which they asked participants to imagine themselves and others in painful situations (Decety and Grezes` 2006). In other words, participants traveled to the subjunctive (an alternative present-time). One expects that, in some participants, the targeted situations stimulated spontaneous time travel to other places as well – notably travel to intensely empathic memories involving loved ones in pain. The imagined situations activated – one might say “mirrored” – brain regions reliably associated with experiencing the emotional content of pain in present-time. There was no phenomenological confusion between the mental act and the experience however. Neuroimages in the self vs. other scenarios were similar but, as one might expect, there were discernible differences – which is what one would expect, given that “a minimal distinction between self and other is essential for social interaction in general and for empathy in particular.” In the neo-Darwinian back-story, human evolutionary history begins with a great leap forward. Up to this point, social exchange is based on altruism and simple kinds of reciprocity between genetically similar individuals. Afterwards, relations take the form of networks of exchange (reciprocity) among individuals genetically unrelated or only distantly related. Many problems were encountered along the dialectical road to modern times. The “future” was one of these problems. Where reciprocity entails long delays between giving and repayment, exchange partners must share some awareness of “time,” as a continuum that connects past-time to future-time and is the sine qua non for “debt.” It is assumed that the concept of time would emerge from incessant travels between memories of the past and the future. The benefits of having or knowing “time” are obvious. Time travel promotes behavioral flexibility in novel situations and it is the basis for long-term strategic planning targeted to pre-selected goals. With the emergence of language, transient memories of individuals could be transformed into reproducible narratives that could be accumulated and circulated within groups, creating a powerful collective memory. But human nature can be uncooperative, since there is a demonstrable tendency for people to treat present-time and future-time unequally when they calculate costs and benefits: they discount future benefits while inflating costs incurred in the present. Thus self-interest is intrinsically impulsive and opportunistic, going for immediate gain. If unrestrained, it limits reciprocity and would have curtailed the dialectical developments described in the back-story. We have already seen this work in the genesis of punishment: the cost-benefits of being an enforcer and the benefits (rewards) provided

by empathic cruelty and time travel to the future. A suitable “countermotivation device” was needed and there was one available:

Memory for emotions ... does not align with our current goals. This is striking in the common phenomenon of rumination, the unwanted but persistent activation of thoughts concerning an unpleasant past ... Time travel ... provides emotions that bypass current goals, as well as time discounting and provides us with immediate counter-rewards against opportunistic motivation. (Boyer 2010, 222; see above, the evolutionary origins of empathic cruelty)

Clinical psychiatry was acquainted with mental time travel avant la lettre, as early as the 1880s. Today, posttraumatic stress disorder (PTSD) is the most widely known time travel syndrome (Young 2004). PTSD comprises an etiological event, a distressful and intrusive memory of this event, and a behavioral syndrome that represents an adaptation to the memory. Traumatic memories are a pathological expression of the “phenomenon of rumination” mentioned by Boyer (Berntsen and Hall 2004). In the language of psychiatry, traumatic memories are “re-experiences” and their exemplar is the “flashback.” Schreckneurose or “fright neurosis” is especially interesting in this regard. The disorder, the German variation of shellshock during World War One, was characterized by the victim’s terrifying dreams of a traumatizing experience. In the view of influential German doctors, the syndrome could be caused by re-experiences (memories, nightmares) of the future in combination with the past. The theory is that the soldiers were fixated on visions of their deaths. The man is overcome by an empathic tenderness for himself – the subject in the nightmares and intrusive images. The remembered event is a composite of two events: a real past event and an imagined future event. The past is re-enacted in the future but with a significant change. The past event was harmless; the future event is fatal. He experiences two events as a single, etiological event, in the past. His abnormality is a weakness of will and an excess of self-empathy. His symptoms, which often include psychogenic paralysis, are an unconscious effort to hide from the future. His true defect is moral, not medical. A real man (a soldier) lives in the present moment. The doctor’s job is to terminate the patient’s pathogenic time travel with the most effective means, including electrical torture (Lerner 2003; Young 1995). 2

2 See also Young 2002, on “self-traumatized perpetrators,” a clinical phenomenon that intersects pathological time travel and empathic cruelty.

Post-Paleolithic developments

Human nature, as portrayed in the back-story, was fully formed during the upper Paleolithic Period. The story tells us that we are innately empathic, but that
“empathy” may include unexpected and undesirable attitudes: dissimulation, self-deceit, spitefulness, Schadenfreude, and cruelty. Western normative institutions – religion, secular ethics, clinical psychology – regard these attitudes as anti-social and self-destructive. The dialectical history in this chapter views these attitudes from a different perspective, as the causes and consequences of human social evolution and self-awareness. Leaving aside the professional cynics, no one is claiming that these attitudes form the core or essence of human nature. Mind-reading, perspective-taking, hormonal responsiveness, and mental time travel were likewise responsible for “psychological altruism,” the propensity to adjust one’s desires and intentions to the perceived needs or wishes of others. (In contrast, the starting point for the neo-Darwinian back-story is “biological altruism,” which is defined by fitness costs and unconcerned with altruists’ perceptions and intentions.) Human nature, as described in the back-story, is morally complex, even contradictory, certainly inclined to “read and share the concern of others” (Hrdy 2009), but likewise prepared for cruel pleasures.

Conclusion

A recent article in the New Yorker magazine celebrates a “revolution in consciousness” attributed to neuroscience (Brooks 2011a). The author, David Brooks, is a columnist for the New York Times and author of a recent book on brain science, human nature, and public policy (Brooks 2011b). Revolutions have winners and losers, and Brooks’ loser is the idea of consciousness that we inherited from the Enlightenment. His winner is a new vision of the unconscious. According to Brooks, the revolution elevates “emotion over pure reason, social connections over individual choice, moral intuition over abstract logic, [and] perceptiveness over I.Q.” The “biases, longings, [and] predispositions about which our culture has least to say” now command our respect and not, as in the past, contempt. Where shall we find our ontological bearings in this new world? Neuroscience is the source of the problem (the primacy of the unconscious) and also the source of the solution: “Brain science helps fill the hole left by the atrophy of theology and philosophy” (Brooks 2011a, 26).

Multiple versions of the “unconscious” have circulated for two centuries. They share the idea that some determinants and contents of mental life are located beyond consciousness, and that certain kinds of puzzling behavior, mainly connected with intentionality, are attributable to these hidden elements. The notable puzzles include behavior that occurs in the absence of conscious intentions, and occasions when people erroneously believe that their intentions (reasons) are also the causes of their behavior. Freud’s Psychopathology of Everyday Life is of course about actions without intentions: slips of the tongue, forgetting proper names, mistakes made while reading and writing, and erroneously performed actions. Likewise his writing on Oedipal desire is about confusing reasons with causes. The Freudian unconscious is obsolescent, as interest has moved from forces hidden in the mind to forces sequestered in the brain.


The new unconscious originates in cognitive science, in experiments published in the 1980s. The emblematic experiment (Libet et al. 1983) challenged the idea that our decisions to perform our actions necessarily precede the brain's preparation to make these actions happen. Libet's experiment showed that "cerebral initiation of a spontaneous, freely voluntary act can begin unconsciously" (ibid., 399–400). The old and new unconscious are obviously unlike: Freud's self-confirming style of reasoning, based on his clinical observations, is now replaced by brain imaging, experimentation, and falsifiable hypotheses (Hassin, Uleman, and Bargh 2005; Bargh and Morsella 2008). The old and new versions are similar in one notable respect: they are organized around theories and myths about evolutionary origins. Myth is not antithetical to science, and the myth of empathy – the twists and turns that lead dialectically from altruism to empathic cruelty – is not antithetical to the emergence of a science of empathy. Freud's evolutionary myth was the work of just one man, writing and revising his accounts over the course of three decades, in Totem and Taboo (1913), Overview of the Transference Neuroses (1915), Group Psychology (1921), and Moses and Monotheism (1939). The neo-Darwinian myth, the back-story to the social brain, began sixty years ago. It is a collective effort, combining population biology, evolutionary science, mathematical modeling, anthropology, primatology, experimental economics, and brain science, and it is still unfinished.

Acknowledgments

This paper was supported by a research grant from the Social Sciences and Humanities Research Council of Canada.

References

Addis, Donna Rose, Alana Wong, and Daniel L. Schacter. 2007. "Remembering the past and imagining the future: common and distinct neural substrates during event construction." Neuropsychologia 45(7):1363–1377.
Bargh, John A. and Ezequiel Morsella. 2008. "The unconscious mind." Perspectives on Psychological Science 3:73–79.
Barkow, Jerome H., Leda Cosmides, and John Tooby, eds. 1992. The Adapted Mind: Evolutionary Psychology and the Generation of Culture. New York: Oxford University Press.
Baron-Cohen, Simon, Rebecca C. Knickmeyer, and Matthew K. Belmonte. 2005. "Sex differences in the brain: implications for explaining autism." Science 310:819–823.
Baron-Cohen, Simon and Sally Wheelwright. 2004. "The Empathy Quotient: an investigation of adults with Asperger syndrome or high functioning autism, and normal sex differences." Journal of Autism and Developmental Disorders 34:163–175.
Barton, Robert A. and Robin I. M. Dunbar. 1997. "Evolution of the social brain." In Machiavellian Intelligence II, edited by Andrew W. Whiten and Richard W. Byrne, 240–263. Cambridge: Cambridge University Press.
Berntsen, Dorthe and Nicolene M. Hall. 2004. "The episodic nature of involuntary autobiographical memories." Memory and Cognition 32:789–803.


Blakeslee, Sandra. 2006. "Cells that read minds." New York Times, 10 January.
Boden, Margaret A. 2006. Mind as Machine: A History of Cognitive Science, vol. 2. Oxford: Clarendon Press.
Boyd, Robert, Herbert Gintis, Samuel Bowles, and Peter J. Richerson. 2003. "The evolution of altruistic punishment." Proceedings of the National Academy of Sciences 100:3531–3535.
Boyer, Pascal. 2010. "Evolutionary economics of mental time travel?" Trends in Cognitive Sciences 12:219–224.
Brass, Marcel, Ruth M. Schmitt, Stephanie Spengler, and György Gergely. 2007. "Investigating action understanding: inferential processes versus action simulation." Current Biology 17:2117–2121.
Brooks, David. 2011a. "Social animal: How the new sciences of human nature can help make sense of a life." New Yorker, 17 January.
Brooks, David. 2011b. The Social Animal: The Hidden Sources of Love, Character, and Achievement. New York: Random House.
Brothers, Leslie. 1989. "A biological perspective on empathy." American Journal of Psychiatry 146:10–19.
Busby, Janie and Thomas Suddendorf. 2005. "Recalling yesterday and predicting tomorrow." Cognitive Development 20:362–372.
Byrne, Richard and Andrew Whiten, eds. 1988. Machiavellian Intelligence. Oxford: Oxford University Press.
Carr, Laurie, Marco Iacoboni, Maria-Charlotte Dubeau, John C. Mazziotta, and Gian Luigi Lenzi. 2003. "Neural mechanisms of empathy in humans: a relay from neural systems for imitation to limbic areas." Proceedings of the National Academy of Sciences 100:5497–5502.
Corballis, Michael C. 2009. "Mental time travel and the shaping of language." Experimental Brain Research 192:553–560.
Custers, Ruud and Henk Aarts. 2010. "The unconscious will: how the pursuit of goals operates outside of conscious awareness." Science 320:47–50.
Decety, Jean and Julie Grèzes. 2006. "The power of simulation: imagining one's own and other's behavior." Brain Research 1079:4–14.
de Quervain, Dominique, Urs Fischbacher, Valerie Treyer, Melanie Schellhammer, Ulrich Schnyder, Alfred Buck, and Ernst Fehr. 2004. "The neural basis of altruistic punishment." Science 305:1254–1258.
de Vignemont, Frederique and Tania Singer. 2006. "The empathic brain: how, when and why?" Trends in Cognitive Sciences 10:435–441.
de Vignemont, Frederique and Patrick Haggard. 2008. "Action observation and execution: What is shared?" Social Neuroscience 3:421–433.
di Pellegrino, Giuseppe, Luciano Fadiga, Leonardo Fogassi, Vittorio Gallese, and Giacomo Rizzolatti. 1992. "Understanding motor events: a neurophysiological study." Experimental Brain Research 91:176–180.
Dinstein, Ilan. 2008. "Human cortex: reflections of mirror neurons." Current Biology 18:R956–R959.
Dinstein, Ilan, Uri Hasson, Nava Rubin, and David J. Heeger. 2007. "Brain areas selective for both observed and executed movements." Journal of Neurophysiology 98:1415–1427.
Dinstein, Ilan, Cibu Thomas, Marlene Behrmann, and David J. Heeger. 2008. "A Mirror up to Nature." Current Biology 18:R13–R18.
Dunbar, Robin I. M. 2003. "The social brain: mind, language, and society in evolutionary perspective." Annual Review of Anthropology 32:163–181.
Fadiga, Luciano and Laila Craighero. 2004. "Electrophysiology of action representation." Journal of Clinical Neurophysiology 21:57–169.
Fadiga, Luciano, Giovanni Pavesi, and Giacomo Rizzolatti. 1996. "Motor facilitation during observation: a magnetic stimulation study." Journal of Neurophysiology 73:2608–2611.
Fehr, Ernst and Colin F. Camerer. 2007. "Social neuroeconomics: the neural circuitry of social preferences." Trends in Cognitive Sciences 11:419–427.
Fliessbach, Klaus, Bernd Weber, P. Trautner, T. Dohmen, U. Sunde, C. E. Elger, and A. Falk. 2007. "Social comparison affects reward-related brain activity in the human ventral striatum." Science 318:1305–1308.


Fodor, Jerry. 1983. Modularity of Mind: An Essay on Faculty Psychology. Cambridge MA: MIT Press.
Fodor, Jerry. 2000. The Mind Doesn't Work that Way: The Scope and Limits of Computational Psychology. Cambridge MA: MIT Press.
Fogassi, Leonardo, Vittorio Gallese, and Giacomo Rizzolatti. 2002. "Hearing sounds, understanding actions: action representation in mirror neurons." Science 297:846–848.
Fogassi, Leonardo, Pier F. Ferrari, Benno Gesierich, Stefano Rozzi, Fabian Chersi, and Giacomo Rizzolatti. 2005. "Parietal lobe: from action organization to intention understanding." Science 308:662–667.
Friston, Karl. 2002. "Beyond phrenology: What can neuroimaging tell us about distributed circuitry?" Annual Review of Neuroscience 25:221–250.
Gallese, Vittorio. 2001. "The 'shared manifold' hypothesis: from mirror neurons to empathy." Journal of Consciousness Studies 8:33–50.
Gallese, Vittorio. 2003a. "The roots of empathy: the shared manifold hypothesis and the neural basis of intersubjectivity." Psychopathology 36:171–180.
Gallese, Vittorio. 2003b. "A neuroscientific grasp of concepts: from control to representation." Philosophical Transactions of the Royal Society of London B 358:1231–1240.
Gallese, Vittorio and Alvin Goldman. 1998. "Mirror neurons and the simulation theory of mind reading." Trends in Cognitive Sciences 2:493–501.
Gallese, Vittorio, Christian Keysers, and Giacomo Rizzolatti. 2004. "A unifying view of the basis of social cognition." Trends in Cognitive Sciences 8:396–403.
Gallese, Vittorio, Morris Eagle, and Paolo Migone. 2007. "Intentional attunement: mirror neurons and the neural underpinnings of interpersonal relations." Journal of the American Psychoanalytic Association 55:151–176.
Gergely, Csibra, and György Gergely. 2007. "'Obsessed with goals': functions and mechanisms of teleological interpretation of actions in humans." Acta Psychologica 124:60–78.
Gergely, György, Harold Bekkering, and Ildikó Király. 2002. "Rational imitation in preverbal infants." Nature 415:755.
Hadjikhani, Nouchine, Robert M. Joseph, Josh Snyder, and Helen Tager-Flusberg. 2006. "Anatomical differences in the mirror neuron system and social cognition network in autism." Cerebral Cortex 16:1276–1282.
Hamilton, William D. 1964. "The genetical evolution of social behaviour. I." Journal of Theoretical Biology 7:1–16.
Hassin, Ran R., James S. Uleman, and John A. Bargh, eds. 2005. The New Unconscious. New York: Oxford University Press.
Heyes, Cecilia. 2010. "Mesmerizing mirror neurons." NeuroImage 51:789–791.
Hickok, Gregory. 2009. "Eight problems for the mirror neuron theory of action understanding in monkeys and humans." Journal of Cognitive Neuroscience 21:1229–1243.
Hrdy, Sarah B. 2009. Mothers and Others: The Evolutionary Origins of Mutual Understanding. Cambridge MA: Harvard University Press.
Iacoboni, Marco. 2009. "Imitation, empathy, and mirror neurons." Annual Review of Psychology 60:653–670.
Iacoboni, Marco. 2008. Mirroring People: The New Science of How We Connect with Others. New York: Farrar, Straus and Giroux.
Iacoboni, Marco and Mirella Dapretto. 2006. "The mirror neuron system and the consequences of its dysfunction." Nature Reviews Neuroscience 7:942–951.
Iacoboni, Marco, Istvan Molnar-Szakacs, Vittorio Gallese, Giovanni Buccino, John C. Mazziotta, and Giacomo Rizzolatti. 2005. "Grasping the intentions of others with one's own mirror neuron system." PLoS (Public Library of Science) Biology 3:e79.
Jacob, Pierre. 2008. "What do mirror neurons contribute to human social cognition?" Mind and Language.


Jacob, Pierre and Marc Jeannerod. 2005. "The motor theory of social cognition: a critique." Trends in Cognitive Sciences 9:21–25.
Jackson, Philip L., Andrew N. Meltzoff, and Jean Decety. 2005. "How do we perceive the pain of others? A window into the neural processes involved in empathy." NeuroImage 24:771–779.
James, William. 1890. The Principles of Psychology. New York: Macmillan.
Jeannerod, Marc. 1994. "The representing brain: neural correlates of motor intention and imagery." Behavioral and Brain Sciences 17:187–245.
Jones, Edward G. 2008. "Cortical maps and modern phrenology." Brain 131:2227–2233.
Keysers, Christian and Valeria Gazzola. 2010. "Mirror neurons reported in humans." Current Biology 20:R353–R354.
Keysers, Christian, Jon H. Kaas, and Valeria Gazzola. 2010. "Somatosensation in social perception." Nature Reviews Neuroscience 11:417–428.
Kilner, James M. and Chris D. Frith. 2008. "Action observation: inferring intentions without mirror neurons." Current Biology 18:R32–R33.
Knoch, Daria, Alvaro Pascual-Leone, Kaspar Meyer, Valerie Treyer, and Ernst Fehr. 2006. "Diminishing reciprocal fairness by disrupting the right prefrontal cortex." Science 314:829–832.
Kohler, Evelyne, Christian Keysers, M. Alessandra Umiltà, Leonardo Fogassi, Vittorio Gallese, and Giacomo Rizzolatti. 2002. "Hearing sounds, understanding actions: action representation in mirror neurons." Science 297:846–848.
Kosik, Kenneth S. 2003. "Beyond phrenology, at last." Nature Reviews Neuroscience 4:234–239.
Kosslyn, Stephen M. 2005. "Mental images and the brain." Cognitive Neuropsychology 22:333–347.
Lanzetta, John T. and Basil G. Englis. 1989. "Expectations of cooperation and competition and their effects on observers' vicarious emotional responses." Interpersonal Relations and Group Processes 56:543–554.
Lawson, John, Simon Baron-Cohen, and Sally Wheelwright. 2004. "Empathising and systemising in adults with and without Asperger Syndrome." Journal of Autism and Developmental Disorders 34:301–310.
Lerner, Paul. 2003. Hysterical Men: War, Psychiatry, and the Politics of Trauma in Germany, 1890–1930. Ithaca: Cornell University Press.
Libet, Benjamin. 1986. "Unconscious cerebral initiative and the role of the unconscious will in voluntary action." Behavioral and Brain Sciences 8:529–566.
Libet, Benjamin, Curtis A. Gleason, Elwood W. Wright, and Dennis K. Pearl. 1983. "Time of conscious intention to act in relation to onset of cerebral activity (readiness potential): The unconscious initiation of a freely voluntary act." Brain 102:623–642.
Lindner, Isabel, Gerald Echterhoff, Patrick S. R. Davidson, and Matthias Brand. 2010. "Observation inflation: your actions become mine." Psychological Science 21:1291–1299.
Maynard Smith, John. 1979. "Game theory and the evolution of behaviour." Proceedings of the Royal Society of London. Series B, Biological Sciences 205:475–488.
Maynard Smith, John. 2001. "Reconciling Marx and Darwin." Evolution 55:1496–1498.
Mukamel, Roy, Arne D. Eckstrom, Jonah Kaplan, Marco Iacoboni, and Itzhak Fried. 2010. "Single-neuron responses in humans during execution and observation of actions." Current Biology 20:1–7.
Nowak, Martin A. and Karl Sigmund. 2005. "Evolution of indirect reciprocity." Nature 437:1291–1298.
Nummenmaa, Lauri, Jussi Hirvonen, Riitta Parkkola, and Jari K. Hietanen. 2008. "Is emotional contagion special? An fMRI study on neural systems for affective and cognitive empathy." NeuroImage 43:571–580.
Oberman, Lindsay M., Edward M. Hubbard, Joseph P. McCleery, Eric L. Altschuler, Vilayanur S. Ramachandran, and Jaime A. Pineda. 2005. "EEG evidence for mirror neuron dysfunction in autism spectrum disorders." Cognitive Brain Research 24:190–198.
Okuda, Jiro, Toshikatsu Fujii, Hiroya Ohtake, Takashi Tsukiura, Katsuyo Tanji, Kyoko Suzuki, Ryuta Kawashima, Hiroshi Fukuda, Masatoshi Itoh, and Atsushi Yamadori. 2003. "Thinking of the future and past: the roles of the frontal pole and the medial temporal lobes." NeuroImage 19:1369–1380.


Rizzolatti, Giacomo and Michael A. Arbib. 1998. "Language within our grasp." Trends in Neurosciences 21:188–194.
Rizzolatti, Giacomo and Corrado Sinigaglia. 2010. "The functional role of the parieto-frontal mirror circuit: interpretations and misinterpretations." Nature Reviews Neuroscience 11:264–274.
Rizzolatti, Giacomo and Laila Craighero. 2004. "The mirror-neuron system." Annual Review of Neuroscience 27:169–192.
Rizzolatti, Giacomo and Maddalena Fabbri-Destro. 2010. "Mirror neurons: from discovery to autism." Experimental Brain Research 200:223–237.
Rosas, Alejandro. 2008. "The return of reciprocity: a psychological approach to the evolution of cooperation." Biology and Philosophy 23:555–566.
Ryle, Gilbert. 1949. The Concept of Mind. London: Hutchinson.
Schacter, Daniel, Donna Rose Addis, and Randy L. Buckner. 2007. "Remembering the past to imagine the future: the prospective brain." Nature Reviews Neuroscience 8:657–661.
Shamay-Tsoory, Simone G., Meytal Fischer, Jonathan Dvash, Hagai Harari, Nufar Perach-Bloom, and Yechiel Levkovitz. 2009. "Intranasal administration of oxytocin increases envy and schadenfreude (gloating)." Biological Psychiatry 66:864–870.
Schwartz, Daniel L. 1999. "Physical imagery: kinematic versus dynamic models." Cognitive Psychology 38:433–464.
Shmuelof, Lior and Ehud Zohary. 2007. "Watching others' actions: mirror representations in the parietal cortex." Neuroscientist 13:667–672.
Singer, Tania. 2006. "The neuronal basis and ontogeny of empathy and mind reading: review of literature and implications for future research." Neuroscience and Biobehavioral Reviews 30:855–863.
Singer, Tania, Ben Seymour, John P. O'Doherty, Holger Kaube, Raymond J. Dolan, and Chris D. Frith. 2004. "Empathy for pain involves the affective but not sensory components of pain." Science 303:1157–1162.
Singer, Tania, Ben Seymour, John P. O'Doherty, Klaas E. Stephan, Raymond J. Dolan, and Chris D. Frith. 2006. "Empathic neural responses are modulated by the perceived fairness of others." Nature 439:466–469.
Smith, D. F. 2010. "Cognitive brain mapping for better or worse." Perspectives in Biology and Medicine 53:321–329.
Spence, Sean A. 2004. "The deceptive brain." Journal of the Royal Society of Medicine 97:6–9.
Spence, Sean A., Mike D. Hunter, and Gemma Harpin. 2002. "Neuroscience and the will." Current Opinion in Psychiatry 15:519–526.
Strawson, Galen. 2008. Real Materialism and Other Essays. Oxford: Oxford University Press.
Suddendorf, Thomas, Donna Rose Addis, and Michael C. Corballis. 2009. "Mental time travel and the shaping of the human mind." Philosophical Transactions of the Royal Society B 364:1317–1324.
Takahashi, Hidehiko, Motochiro Kato, Masato Matsuura, Dean Mobbs, Tetsuya Suhara, and Yoshiro Okubo. 2009. "When your gain is my pain and your pain is my gain: neural correlates of envy and schadenfreude." Science 323:937–939. Supporting Online Material at www.sciencemag.org/cgi/content/full/323/5916/937/DC1 (accessed 17 April 2012).
Tambiah, Stanley J. 1990. Magic, Science, Religion, and the Scope of Rationality. Cambridge: Cambridge University Press.
Tettamanti, Marco, Giovanni Buccino, Maria C. Saccuman, Vittorio Gallese, Massimo Danna, Paola Scifo, Ferruccio Fazio, Giacomo Rizzolatti, Stefano E. Cappa, and Daniela Perani. 2005. "Listening to action-related sentences activates fronto-parietal motor circuits." Journal of Cognitive Neuroscience 17:273–281.
Trivers, Robert L. 1971. "The evolution of reciprocal altruism." Quarterly Review of Biology 46:35–57.
Turella, Luca, Andrea C. Pierno, Federico Tubaldi, and Umberto Castiello. 2009. "Mirror neurons in humans: consisting or confounding evidence?" Brain and Language 108:10–21.


van der Gaag, Christiaan, Ruud B. Minderaa, and Christian Keysers. 2007. "Facial expressions: What the mirror neuron system can and cannot tell us." Social Neuroscience 2:179–222.
Veyne, Paul. 1988. Did the Greeks Believe in Their Myths? An Essay on the Constitutive Imagination. Chicago: University of Chicago Press.
Wegner, Daniel M. 2002. The Illusion of Conscious Will. Cambridge MA: MIT Press.
Wheelwright, S., S. Baron-Cohen, N. Goldenfeld, J. Delaney, D. Fine, R. Smith, and A. Wakabayashi. 2006. "Predicting Autism Spectrum Quotient (AQ) from the Systemizing Quotient-Revised (SQ-R) and Empathy Quotient (EQ)." Brain Research 1079:47–56.
Wicker, Bruno, Christian Keysers, Jane Plailly, Jean-Pierre Royet, Vittorio Gallese, and Giacomo Rizzolatti. 2003. "Both of us disgusted in My insula: the common neural basis of seeing and feeling disgust." Neuron 40:655–664.
Williams, Justin H. G., Andrew Whiten, Thomas Suddendorf, and David I. Perrett. 2001. "Imitation, mirror neurons and autism." Neuroscience and Biobehavioral Reviews 25:287–295.
Young, Allan. 1995. The Harmony of Illusions: Inventing Posttraumatic Stress Disorder. Princeton: Princeton University Press.
Young, Allan. 2002. "The self-traumatized perpetrator as a 'transient mental illness'." Evolution Psychiatrique 67:26–50.
Young, Allan. 2004. "When traumatic memory was a problem: On the antecedents of PTSD." In Posttraumatic Stress Disorder: Issues and Controversies, edited by G. Rosen, 127–146. Chichester UK: John Wiley.
Young, Allan. 2006. "Remembering the evolutionary Freud." Science in Context 19:175–189.
