
JM Engelhardt

The Processes Underwriting Social Cognition Are Not Simulations
In the recent book Simulating Minds, Alvin Goldman argues that in our
everyday practices of predicting, interpreting and explaining others’ behaviors,
agents employ both simulation processes and theories of mind. As far as the
mindreading literature is concerned, then, Goldman proposes a hybrid of
Simulation Theory (ST) and Theory-Theory (TT). More particularly, he says the
simulation processes are an agent’s “default” mode of mindreading (e.g. 170, 176,
178) whereas a tacit theory is employed only in unspecified unusual
circumstances. While it’s plausible that there are some instances in which
mindreaders use simulations, I argue here that simulation cannot be the default
method. At the conclusion, I’ll sketch a different hybrid of the two and note an area
of ST that requires clarification if we’re to decide whether we do any simulational
mindreading at all.[1]
We’re trying to figure out how one agent, the attributor, comes to form
beliefs about what another, the target, believes or desires or will believe, desire, or
decide. We assume that people often do this accurately and that there exist
propositional attitudes and contents (whatever these may be) belonging to some
agents for other agents to “read”. TT says we mindread by exploiting a store of
psychological laws that arrange behaviors into equivalence classes that are
considered effects of equivalence classes of beliefs and desires. If TT is true, given
behavior phi, I token a psychological law that categorizes phi into the class of
behaviors psi; then, given a law that says behaviors of type psi are caused, ceteris
paribus, by mental states of type alpha, I infer that the target of my mindreading
is/was tokening state(s) of the type alpha. Whatever mechanism we use to do our
usual inferring and theoretical reasoning will be the same one we use when
mindreading, according to TT. ST, on the other hand, says an agent A reads minds
not with her general inference mechanism, but with whatever processor she would
use were she doing what she wants to read from her target. For example, Wilma
predicts what Fred will choose between orange juice and milk at lunch by
pretending to have Fred’s beliefs and desires about milk, juice, and his situation at
the time, and running these pretend states through her own practical reasoning
processor. Wilma then predicts that Fred will make whatever decision her processor
spits out. Employing one’s processor in this way with pretend-states as input is
sometimes called taking one's processor "off-line". (Goldman 2006: 19-20) The
major differences between the theories, then, are the information and cognitive
mechanisms they impute to successful mindreaders. TT requires that Wilma have
general information—typically in the form of psychological laws—that applies to
Fred's particular case if she's going to read him accurately. ST, on the other
hand, says Wilma needs only information applicable to Fred at the time she wants
to read his mind, so long as she can run that particular information through the
right processor on her end.[2] On the mechanism front, TT says Wilma can read
minds with only the inference mechanism and a store of psychological
generalizations (wherever they are) intact. ST says Wilma needs a practical
reasoning processor and a store of information about Fred's beliefs or desires to
predict his decisions, an inference mechanism and a store of information for
predicting his inferences, and so on for whatever mental process she wants to
read.
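To fix ideas, here is a deliberately crude Python sketch of the contrast. Everything
in it (the behavior classes, the toy "laws", and the stand-in decision routine) is an
invented placeholder of mine, not anything either theory, or Goldman, is committed to.

    # Toy contrast between the two mindreading schemas. All names and
    # categories below are invented for illustration only.

    # Theory-Theory: classify the observed behavior into an equivalence
    # class (phi -> psi), then apply a stored law linking that class to
    # mental-state types (psi -> alpha).
    BEHAVIOR_CLASSES = {
        "pours and drinks a glass of milk": "getting-a-drink",
        "fills and drains a glass of juice": "getting-a-drink",
    }
    PSYCH_LAWS = {
        "getting-a-drink": ["desire: a drink", "belief: a drink is available"],
    }

    def tt_read(observed_behavior):
        """Infer the target's mental states via laws over behavior classes."""
        behavior_class = BEHAVIOR_CLASSES[observed_behavior]  # phi -> psi
        return PSYCH_LAWS[behavior_class]                     # psi -> alpha

    # Simulation Theory: feed pretend states into the attributor's own
    # practical-reasoning routine, run "off-line", and read off the output.
    def my_practical_reasoning(beliefs, desires):
        """Stands in for the attributor's own decision-making processor."""
        for desire in desires:
            if any(desire in belief for belief in beliefs):
                return "decide to get " + desire
        return "do nothing"

    def st_predict(pretend_beliefs, pretend_desires):
        """Predict the target's decision from the off-line run's output."""
        return my_practical_reasoning(pretend_beliefs, pretend_desires)

    print(tt_read("pours and drinks a glass of milk"))
    print(st_predict(["there is milk in the fridge"], ["milk"]))

The point of the sketch is only where the work gets done: tt_read consults stored
generalizations, while st_predict consults nothing but the attributor's own processor
plus case-specific pretend inputs.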

Default Mindreader
Something can be the “default” process by which we read minds in at least
two ways. It could be the one we first employ in mindreading or it could be the one
we typically advert to. Simulation can’t play either role.
It can't be the mindreading technique we employ first, either phylogenetically
or ontogenetically, and in both cases for the same simple reason. Running a simulation
requires that there be input for the simulator to process, no matter which
processor it is. Inputs to these processes are mental states, and so having input for
reading Fred’s mind means having already imputed some mental states to him.
Wilma can’t expect to predict his decision by simulating his decision processor
unless she has an “idea” (not necessarily a conscious thought, of course) of what
mental states he’s going to put into it. If simulational mindreading is going to
operate in advance of the application of psychological generalizations, then, ST
must tell us a story about the provenance of the input to the original simulation. In
the absence of robust theories of empathy or extrasensory perception, this means
"retrodictive mindreading", or the attribution of mental states taken as causes of
a behavioral or verbal act the attributor witnesses or otherwise knows of.
Goldman admits that ST has no good story about retrodictive mindreading.
Its best bet, he says, is “the generate-and-test strategy”. (Goldman 2006: 183-4)
Seeing Fred drink a glass of milk, Wilma attempts to discern why he’d do such a
thing by throwing some mental states into her practical reasoning processor. If the
output gives her the behavior she sees Fred evincing, then she’s successfully read
his mind and she can attribute to him the mental states she put into her
simulation. If not, she tries another combination of mental states (and perhaps
another processor), and so on until she gets it right. It's implausible that
humans read minds this way, for a number of obvious reasons, but implausibility
aside, it doesn't give us a story sufficient for retrodictive mindreading. That's
because discerning whether the output behavior is relevantly similar to a target’s
requires something not yet provided by ST: equivalence classes of behaviors.
Suppose Wilma gets very lucky and puts the pretend desire for milk, the
pretend belief that there's milk in the glass before her, and the pretend belief that
drinking the contents of the glass before her will sate her desire for milk into her
practical reasoning processor while leaving all her own actual beliefs out of it. With
those mental states, her output behavior will be something like moving from where
she is standing at the time of mindreading toward the refrigerator, opening the
door, pouring a glass of milk, and then drinking the contents of that glass. To most
of us, it seems that this must be a lot like what Fred did. Indeed, most of us
probably think this behavior is so similar to Fred’s that it’s likely Wilma got the
mental states right. But how is Wilma to know this? If she doesn’t employ
generalizations over classes of behaviors linking them to their mental causes, she
has no reason to think of these behaviors as relevantly similar. Without
equivalence classes of behaviors, Wilma’s behavior would likely seem to her very
different from Fred's. Whereas he tilted his head back at a 45-degree angle to drink
from the carton, she needs to tilt hers back 90 degrees (since he drank so much); if she
doesn’t have a generalization according to which the two acts are effects of the
same mental states, she has no reason to think they’re relevantly similar
behaviors. If she doesn’t think they’re relevantly similar behaviors, she’ll go on
generating and testing, skipping over what most of us think is the appropriate
attribution. Given considerations of this sort, if there is any collection of mental
states that can pass the test portion of Wilma's generate-and-test method in the
absence of folk-psychological generalizations, it would likely include beliefs and
desires that we tend to think irrelevant to behaviors like drinking milk. It would
include, for example, beliefs and desires about what angle to tilt one’s neck when
drinking milk, about the duration of milk-drinking involved in sating one’s thirst,
and so on to the end of Wilma’s capacities for discerning differences between her
behavior and Fred’s.
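The difficulty can be made vivid with a minimal generate-and-test sketch (the
candidate states, the stand-in processor, and above all the similarity test below are
my own placeholders, not Goldman's proposal). The step ST owes us an account of is
the matches check: absent equivalence classes of behaviors, there is nothing
principled for it to consult.

    import itertools

    # Toy generate-and-test retrodiction; all candidate states are invented.
    CANDIDATE_BELIEFS = ["there is milk in the glass", "there is juice in the glass"]
    CANDIDATE_DESIRES = ["milk", "juice"]

    def simulate(belief, desire):
        """Stand-in for running pretend states through one's own processor."""
        if desire in belief:
            return "drinks the glass's contents (wanting " + desire + ")"
        return "does nothing"

    def matches(simulated, observed):
        # The problematic step: judging that the simulated and observed
        # behaviors are "relevantly similar" presupposes some classification
        # of behaviors. Comparing first words, as here, is a crude stand-in
        # for exactly the equivalence-classing ST has not yet earned.
        return simulated.split()[0] == observed.split()[0]

    def retrodict(observed_behavior):
        """Generate candidate state-pairs; test each against the behavior."""
        for belief, desire in itertools.product(CANDIDATE_BELIEFS, CANDIDATE_DESIRES):
            if matches(simulate(belief, desire), observed_behavior):
                return belief, desire  # attribute these states to the target
        return None  # keep generating... but guided by what?

    print(retrodict("drinks a glass of milk"))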
Unless ST can tell a story for retrodictive mindreading or for generating
equivalence classes of behaviors, simulation cannot be the mindreading
mechanism first used by humans historically or first used by each of us
developmentally.
Nor can simulation be the mindreading mechanism we use most frequently.
Nichols et al. (1995) point out that when Wilma misreads Fred's mind, ST can
blame only two aspects of simulation. Either the processor she’s running her
pretend states through differs from Fred’s or the pretend states she put into the
processor were insufficiently accurate. In any given case of failed mindreading,
then, if neither of these explanations is plausible, simulation wasn't used. (TT,
by contrast, can say the mindreader lacked the necessary generalizations, folk-
psychological or otherwise.) Now consider a particular case of failed mindreading;
in this case, it turns on the “endowment effect”. Loewenstein and Adler (1995)
show that subjects fail to predict how much owning an object will change their own
valuations of that object. First they had all subjects examine a mug, and they
asked half of the subjects to imagine owning it. They also asked those who were
imagining owning it to predict how much they might sell it for. On average, they
predicted they'd sell it for $3.73. Finally, all subjects were actually given mugs and
told they could exchange them for cash. The average price actually demanded in exchange
for a mug was $5.40 for those who had imagined owning one and $6.46 for those who
hadn’t. Subjects without a mug, the authors conclude, underestimate how much
they’ll value it once they have it. Subjects fail to predict the endowment effect
even in their own case. If the subjects were running simulations in predicting their
own behaviors, how can we account for the failures?
We can’t, of course, say that the processes differ because the same people
are running them offline one minute and online the next; if processors vary
this much within individuals, cognitive science is doomed and we don't need to worry
about mindreading mechanisms. Is it plausible that the mental states run through
the processor were inaccurate? It seems unlikely since, again, the predictor and
target are the same person in each case, but maybe the difference is due to the
change of mental states going from only imagining that one owns a mug to
actually owning a type-identical mug.
In fact, this is one response Goldman gives. (Goldman 2006: 175) Perhaps
it’s more difficult than we typically think to imagine how owning something will
affect our valuations, and when predictors imagined it, they did a poor job. Maybe
this is so, but it only invites another objection to ST.
(1) Seldom do we have input for a simulator as accurate as the subjects
did in the Loewenstein experiment, much less more information
than that.
(2) Yet, it seems we accurately predict, explain, and interpret the
mental states of others and ourselves quite frequently.
(3) If Wilma is simulating Fred, and Wilma’s information about Fred is
less accurate and/or narrower in scope than that which the subjects
in Loewenstein’s experiment had about themselves, then Wilma’s
simulation will be inaccurate.
If it’s true that we seldom have information of the accuracy and breadth apparently
required for accurate simulation and it’s true that we often succeed in
mindreading, then in those cases where we have less information about the
target's mental states and yet get them right, we're not using a
simulational mindreading process.
The advocate of ST might, of course, reject one or more of the steps in the
argument. Or she can, as Goldman proposes, relax all three.[3] This looks
unpromising. (1) If third-person mindreaders often have more information about
their targets than the subjects did about themselves in the experiment, then it
seems we’re more ignorant about ourselves than we are about others. But this
harms ST more than it helps. After Wilma runs her simulation, she reads her
pretend output so that she can attribute it to Fred. That is, ST requires Wilma to
read her own mind in order to read Fred’s. It can’t also say that she knows Fred’s
mind better than her own. (2) It's well known that toddlers succeed at false-belief
tasks, and Goldman himself endorses results according to which children as young
as 15 months regularly succeed at mindreading. (Goldman 2006: 77-8) Responding
to these results and to our intuitions is an uphill battle. (3) Since the ST theorist
must explain the prediction failure with the endowment effect, the amount and sort
of information Wilma needs about Fred in order to read his mind by simulation will,
in any case, have to be more and/or better than subjects had about themselves
(just a few minutes in the future) in Loewenstein’s case. Unless Loewenstein’s
subjects were especially self-ignorant, this is a very high standard.
There’s another response to this line of argument that says the mindreading
done in the endowment effect isn’t representative of mindreading in general, and
so we can’t infer from it anything about how or when we use simulation in general.
(Goldman 2006: 174) Once we’ve given a principled reason for thinking
Loewenstein’s case is exceptional, we can say simulation doesn’t typically require
information more precise and far-reaching than the subjects had in Loewenstein’s
case, or an advocate of an ST-TT hybrid like Goldman may then say we employ folk-
psychological generalizations in Loewenstein's case, as Nichols et al. argue, but we
still simulate most of the time. There may be an explanation of why this apparently
quotidian prediction task is in fact extraordinary, and this explanation may not be
ad hoc, but at present there’s no indication that this explanation is forthcoming.
Furthermore, this explanation needs to be given not just for the endowment effect,
but probably for all surprising psychological phenomena that are systematically
mispredicted.
Without a response to the challenge posed by predictions of surprising
psychological phenomena, there’s no reason to suppose simulation is our usual
mechanism for mindreading.

Supplemental Mindreading
Still, there is reason to think that we sometimes do simulate others’ mental
processes. Goldman reviews cases of egocentric biases in mindreading. (Goldman
2006: 164-70) In all of these cases, ranging over the reading of others’ beliefs and
knowledge, values, and feelings, mindreaders attribute to their targets mental
states that are, one way or another, shown to be more appropriately attributed to
the mindreaders themselves. For example, Van Boven and Loewenstein (2003)
asked subjects to predict 1) the feelings of hikers lost in the woods without food or
water and 2) how they, the subjects, would feel in the same situation. One group
made their predictions before vigorous exercise, and the other group made theirs after
exercise. Those who predicted after exercise were more likely than the other group
to predict that 1) the hikers would feel more thirsty than hungry, and that 2) they
would feel more thirsty than hungry were they in the hikers’ situation. If we
assume that the post-exercise predictors were themselves more thirsty than
hungry, it seems they imputed their own feelings to both the hikers and their
hypothetical selves.
Goldman thinks these egocentric biases reflect “quarantine failure” in the
simulation routine. Some of the mindreader’s own mental states slip into the
simulating processor for one reason or another with the result that the output
attributed to the target is colored by the mindreader’s beliefs, values, or whatever.
This is supported by substantial evidence that inhibiting one's own mental
states is a difficult task, failure at which is perhaps one reason many infants fail
false-belief tasks. (Goldman 2006: 72-6) In the above case, people who were thirsty regularly
let their thirst slip into their simulating process. But what is it about thirst or about
hiking that makes quarantine failure common here? This approach fails to explain
why quarantine failure occurs where it does. If egocentric bias were exclusively a
matter of quarantine failure, we’d expect to see correlations between failures and
demands on executive control appropriate to each subject’s cognitive capacities,
and we’d see little quarantine failure in cases like the hiker story, which makes
little demand in the way of either executive control or memory. A more plausible
explanation of egocentric bias in these cases, and a more plausible approach to TT-
ST hybrids, is to say we resort to simulation when our information about the target
underdetermines what we want to predict, explain, or interpret. In the present
case, there’s not much reason to say the hikers will be either hungry or thirsty,
given that the mindreader hasn’t any reason to attribute particular beliefs or
desires to them. We haven’t been given any individuating features about the
hikers, and so we can’t apply our folk-psychological laws. If we ask how gluttons
would feel when lost in the woods, I suspect mindreaders would attribute hunger
more often, no matter whether they’d exercised first. Without clues as to the laws
subsuming the target’s mental states, mindreaders fall back on simulation.
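Schematically, the hybrid I'm proposing looks something like the following sketch
(the toy law and both subroutines are placeholders of my own, not a model of the
actual mechanism): apply folk-psychological generalizations when the available
information lets them determine an answer, and fall back on simulation, with its
egocentric risk, when it doesn't.

    def hybrid_read(target_info, folk_laws, my_own_states):
        """Read by theory when some law applies; otherwise simulate."""
        applicable = [law for law in folk_laws if law["condition"](target_info)]
        if applicable:
            return applicable[0]["attribution"]  # theory-driven attribution
        # Underdetermined case: simulate. With nothing known about the
        # target, the only available inputs are the attributor's own
        # states, so the output inherits her egocentric bias.
        return simulate_with(my_own_states)

    def simulate_with(states):
        """Stand-in simulation: the attributor's strongest state wins."""
        return max(states, key=states.get)

    # Hypothetical usage: a post-exercise (thirsty) mindreader with no
    # individuating information about the hikers attributes thirst; told
    # the target is a glutton, the law fires and she attributes hunger.
    glutton_law = {"condition": lambda info: info.get("glutton", False),
                   "attribution": "hunger"}
    print(hybrid_read({}, [glutton_law], {"thirst": 0.9, "hunger": 0.4}))
    print(hybrid_read({"glutton": True}, [glutton_law],
                      {"thirst": 0.9, "hunger": 0.4}))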
I submit, then, that besides cases of neurological disorders and failures of
inhibition correlated with processing or memory demands, we will see an
egocentric bias when what’s to be predicted, explained, or interpreted is
underdetermined by the information available to the mindreader plus her folk-
psychological theory. This is of a piece with Goldman’s urging that possible folk-
psychological laws must be constrained along some dimensions to explain regular
tendencies in mental state attributions, like our bias for "Spelke-objects" in our
representations of both the world and others’ mental states. (Goldman 2006: 178-
9) In the absence of a folk-psychological law (or perhaps other determining
factors), mindreaders export their own biases and run their own mental states on
their own processors. This, of course, demands clarification and supplementation…
but not here.

Notes

[1] I'll be talking about reading propositional attitudes only. Goldman thinks
there is also a "lower-level" mindreading of affective states. In fact, I think
Goldman's simulation theory is quite plausible there, and much empirical evidence
supports him.

[2] We'll see, in fact, that ST probably needs psychological generalizations even
to get to the point where a simulation can be run.

[3] In conversation, 10/17/06.
