
Psychological Bulletin
1999, Vol. 125, No. 6, 737-759

Copyright 1999 by the American Psychological Association, Inc.
0033-2909/99/$3.00

How We Know – and Sometimes Misjudge – What Others Know: Imputing One's Own Knowledge to Others
Raymond S. Nickerson
Tufts University

Raymond S. Nickerson, Department of Psychology, Tufts University. Correspondence concerning this article should be addressed to Raymond S. Nickerson, 5 Gleason Road, Bedford, Massachusetts 01730. Electronic mail may be sent to rnickerson@infonet.tufts.edu.
To communicate effectively, people must have a reasonably accurate idea about what specific other people know. An obvious starting point for building a model of what another knows is what one oneself knows, or thinks one knows. This article reviews evidence that people impute their own knowledge to others and that, although this serves them well in general, they often do so uncritically, erroneously assuming that other people have knowledge that they do not in fact have. Overimputation of one's own knowledge can contribute to communication difficulties. Corrective approaches are considered. A conceptualization of where own-knowledge imputation fits in the process of developing models of other people's knowledge is proposed.

To communicate effectively with other people, one must have a reasonably accurate idea of what they do and do not know that is pertinent to the communication. Treating people as though they have knowledge that they do not have can result in miscommunication and perhaps embarrassment. On the other hand, a fundamental rule of conversation, at least according to a Gricean view, is that one generally does not convey to others information that one can assume they already have (Grice, 1975). A speaker who overestimates what his or her listeners know may talk over their heads; one who underestimates their knowledge may, in the interest of being clear, be perceived as talking down to them. Both types of misjudgment work against effective and efficient communication.

The purpose of this article is to consider the general question of how people form models of what other people know and, in particular, the role that the imputation of one's own knowledge to others plays in that process. Knowledge in this context is given a sufficiently broad connotation to include beliefs, opinions, suppositions, attitudes, and related states of mind. Much work on social cognition has dealt with questions regarding how people come to know how others feel and how they will behave in specified situations; less attention has been given to the question of how people form models of what others know. That people form such models is taken as given; how they do so, and how effectively they do so, are questions of some theoretical and practical significance.

How people develop a theory of mind has been the focus of a considerable amount of research, especially since Premack and Woodruff (1978) first used the term in an article that addressed the question of whether a chimpanzee has one. The question sparked a rash of studies aimed not only at attempting to answer it but also at investigating how children acquire a theory of mind during the normal course of development. (See Astington, Harris, & Olson, 1988, for a collection of early papers on the topic.) Gauvain (1998) pointed out that a key assumption underlying much of this research is that an understanding of mind is a universal quest. Major theoretical treatments of the subject have been reviewed by Astington (1993), Astington and Gopnik (1991), and Lillard (1998). The Lillard review dealt not with the question of how people acquire a conception of what mind is and how it works, but with that of how an individual develops a conceptual model of what a specific mind contains (i.e., what another individual knows). The view presented here is not in conflict with any of the accounts reviewed by Lillard of how a theory of mind develops, but it has much in common with what she referred to as the simulation view, according to which, in order to understand what people are thinking or feeling in a particular situation, one imagines oneself in that situation and discovers what one would think or how one would feel (Gordon, 1995a, 1995b).

I assume that the basic way of attempting to understand what another knows, or how another feels in a particular situation, is to consider what one oneself knows or would feel in that situation. The points that I argue are that one uses one's own knowledge as the primary basis for developing a model of what specific others know and that this works quite well for many purposes but often results in imputing to others knowledge that they do not have.

Empathic Accuracy and the Importance of Perspective Taking


The importance of "the ability to accurately infer the specific content of another person's thoughts and feelings" – Ickes's (1993, p. 588) definition of empathic accuracy – has been noted by several investigators. According to Ickes (1997),
Empathically accurate perceivers are those who are consistently good at "reading" other people's thoughts and feelings. All else being equal, they are likely to be the most tactful advisors, the most diplomatic officials, the most effective negotiators, the most electable politicians, the most productive salespersons, the most successful teachers, and the most insightful therapists. (p. 2)


Research has emphasized the role of empathic accuracy in long-term, close, personal relationships (Ickes & Simpson, 1997; Thomas & Fletcher, 1997; Knudson, Sommers, & Golding, 1980; Sillars & Scott, 1983) and in psychotherapeutic contexts (Ickes, Marangoni, & Garcia, 1997; Marangoni, Garcia, Ickes, & Teng, 1995). There may be situations in which a person is motivated to have an inaccurate perception of another's state of mind (Hodges & Wegner, 1997; Ickes & Simpson, 1997; Sillars, 1985; Simpson, Ickes, & Blackstone, 1995). Cases in point are situations in which accurate perception of another's thoughts or feelings could threaten the stability of a valued relationship, as, for example, the accurate perception of a partner's attraction to a potential rival. There is some evidence to suggest that a high degree of empathic accuracy can work against the survival of a relationship in conflict situations (Sillars, Pike, Jones, & Murphy, 1984). It is not hard to imagine other circumstances in which social interactions would not be improved by the accurate perception of another's true thoughts or feelings. Such situations notwithstanding, it seems safe to assume that, as a general rule, relatively accurate models of what specific others know, believe, or feel are preferred over inaccurate ones.

Eisenberg, Murphy, and Shepard (1997) pointed out that empathy, as the term has been used in studies of empathic accuracy, has both emotional and cognitive aspects. Affective or emotive states are of some interest here, but the primary focus is knowledge in the conventional sense. The more cognitive aspects of empathy are often discussed under the topic of perspective taking. Psychologists have stressed the importance of perspective taking (i.e., role taking, point-of-view appreciation) in communication, negotiation, and social interaction in general (Baldwin, 1906; Brown, 1965; Kohlberg, 1969; Krauss & Glucksberg, 1969; Mead, 1934; Raiffa, 1982; Rommetveit, 1974; Rubin & Brown, 1975). Whether because of inability or unwillingness, failure to take others' perspectives can be the basis for misunderstandings and disputes. Among children, a correlation has been found between difficulty in taking another's point of view and difficulty in establishing and maintaining friendships (Selman, 1981); more generally, many misunderstandings are believed to be rooted in people's failure to recognize the degree to which their construals of a situation may differ from those of others (Griffin, Dunning, & Ross, 1990; Griffin & Ross, 1991). Conversely, skill at perspective taking is seen as a major determinant of successful interactions (Bazerman & Neale, 1982; Neale & Bazerman, 1983; Noller & Venardos, 1986). Understanding the nature of such skill and how it is acquired is a continuing objective of research (Karniol, 1986, 1990, 1995).

The Development of Perspective-Taking Skills

The ability to distinguish between one's own knowledge or beliefs and what others might know or believe about specific aspects of reality requires a certain level of cognitive development. Precisely when and how the ability is acquired is a matter of recent and ongoing research (Eisenberg et al., 1997; Flavell, Botkin, Fry, Wright, & Jarvis, 1968; Harris, Johnson, Hutton, Andrews, & Cooke, 1989; Karniol, 1995; Selman, 1971, 1980; Taylor, 1996). Children may have some awareness of the emotional and mental states of others before they are three (they may, for example, be able to recognize intention at some level; Meltzoff, 1995), but it seems unlikely that they can understand, in more than a superficial way, what others are thinking until they understand that others have minds and can have beliefs, knowledge, and the like. Eisenberg et al. (1997) pointed out that children typically can distinguish between real and mental entities by the age of 3 and that they rapidly develop a theory of mind between the ages of 3 and 5.

As a theory of mind begins to take shape, it may be natural for children to assume that the mental experience of others is exactly like their own (Chandler, 1988; Mossler, Marvin, & Greenberg, 1976). If a candy box is opened to reveal that it contains pencils, and three-year-olds are asked what someone else will think the box contains, they are likely to say pencils (Perner, Leekam, & Wimmer, 1987); similarly, if asked what they thought was in the box before it was opened, they are also likely to say pencils (Gopnik & Astington, 1988). Most children, it appears, acquire the ability to think about the mental states of others during their 4th year (Astington, 1993). By the age of 4 or 5, they begin to give evidence that they realize that another person's beliefs or knowledge about a particular situation may differ from their own (Gopnik, 1993; Mossler et al., 1976; Perner, 1991). Children at this age are able to use their awareness of their own and others' mental states as a basis for explaining and predicting behavior, but they are not yet very good at perspective taking, and their skills at tailoring speech to the needs of listeners continue to improve over several more years (Eisenberg et al., 1997; Selman, 1980). In addition, in descriptions of others, the emphasis that is given to relatively abstract or covert characteristics (e.g., traits, abilities, values, beliefs), rather than more concrete or overt aspects (e.g., age, sex, appearance, possessions), increases during the early school years (Livesley & Bromley, 1973; Scarlett, Press, & Crockett, 1971).

The inability to ascribe mental states to oneself and/or to others has been suggested as a basic characteristic of autism (Baron-Cohen, 1989, 1995; Leslie & Frith, 1988). Failure of a normal theory of mind to emerge during the first few years of life has been associated broadly with mental retardation, though not to the same degree as with autism and not necessarily for the same reason (Yirmiya, Erel, Shaked, & Solomonica-Levi, 1998). Piaget (1923/1926, 1924/1928) characterized the normal development of thought as passing from an initial state akin to autism, through what he termed egocentric thought, to fully socialized thought (Flavell, 1992; Vygotsky, 1962). A similar view was promoted by Werner (1948). A defining feature of egocentrism, as Piaget conceived it, is an inability to take another's perspective, which is tantamount to assuming that another's perspective is precisely one's own:

Egocentrism does not refer to the fact that children tend to make more errors of social judgment or more extreme errors than do adults; it refers only to their tendency to make a particular kind of error: attributing to others their own knowledge, viewpoint, feelings, and so on. (Shantz, 1983, p. 509)

The difficulty that young children have in taking another's spatial perspective was demonstrated in a well-known study by Piaget and Inhelder (1956). (Much of the early developmental work on perspective taking focused on the ability to imagine how an object or scene would appear to a person whose viewing angle differed from one's own; more recently, this interest has broadened to include development of the ability to imagine the thoughts and feelings of others more generally.)


The idea that egocentrism, as conceived by Piaget, characterizes a phase through which all children pass in the normal course of development has not lacked criticism (Glucksberg, Krauss, & Higgins, 1975; M. Shatz, 1983). Investigators have shown that children as young as 3 or 4 adjust their speech to the age of a listener, using shorter and simpler sentences when speaking to a younger child than when speaking to a peer or an adult (Sachs & Devin, 1976; M. Shatz & Gelman, 1973). There appears to be little doubt, however, that the ability to take another's perspective increases gradually over a period of several years (Brandt, 1978; Kurdek, 1977). Realizing that another can have a perspective that differs from one's own does not necessarily entail being able to adopt that perspective; it has been suggested that the former ability may develop before the latter (Selman & Byrne, 1974; Shantz, 1983).

The time course over which children acquire a concept of mind, discover that others have thoughts and feelings that may or may not correspond to their own, and learn to anticipate how others are likely to think or act in specific situations has been the focus of much research. A review of this extensive literature is not attempted here; relevant reviews include Shantz (1975, 1983), M. Shatz (1983), Bennett (1993), and Flavell and Miller (1998).

What is most relevant in the developmental literature to the theme of this article is the wide recognition of the importance of having reasonably accurate models of the feelings and states of mind of specific other persons, the evidence that the ability to acquire or produce such models begins early in life and improves during childhood and adolescence, and the recognition that it takes time for children to learn to distinguish clearly their own thoughts and feelings from those of others. None of this rules out the possibility that even adults frequently make a less than sharp distinction between what they know or believe and what they assume that others do. Flavell (1977) speculated that all people may be at risk for egocentric thinking throughout their lives:
We experience our own point of view more or less directly, whereas we must always attain the other person's in a more indirect manner. Furthermore, we are usually unable to turn our own viewpoint off completely when trying to infer the other's, and it usually continues to ring in our ears while we try to decode the other's. It may take considerable skill and effort to represent another's point of view accurately through this kind of noise, and the possibility of egocentric distortion is ever present. For example, the fact that you thoroughly understand calculus constitutes an obstacle to your continuously keeping in mind my ignorance of it while trying to explain it to me; you may momentarily realize how hard it is for me, but that realization may quietly slip away once you get immersed in your explanation. (p. 124)

Many of the studies reviewed in this article lend credence to this view.

Effects of Knowledge of Others' Knowledge

There is much evidence that one's behavior with respect to others is influenced in various ways by what one knows (i.e., believes, assumes) about what specific others know. People appear to represent information differently, for example, if they have to communicate it to others than if they expect only to have to remember it, a finding that is reflected in the distinction between inner and external speech (Werner & Kaplan, 1963). Representations intended for personal future use have proved to be less easily interpreted by others than those constructed with the intent to communicate (Innes, 1976; Krauss, Vivekananthan, & Weinheimer, 1968). According to the audience design hypothesis, speakers design messages to be appropriate to what they assume to be the knowledge of the recipients (Clark, 1992; Clark & Murphy, 1982; Fussell & Krauss, 1992). Fussell and Krauss (1989a, 1989b) found that the verbal descriptions people produced of nonsense figures differed depending on who (i.e., themselves, friends, strangers) they expected would later have to match the descriptions to the figures. They found too that the descriptions produced for themselves were less useful to others than to themselves, and that those produced for friends were slightly more useful to friends than were those produced for strangers. The investigators interpreted these results as evidence that people attempt to adapt messages to the background knowledge and perspectives of the intended recipients, and that these efforts affect the intelligibility of the messages.

Members of groups that function in a coordinated way benefit from having some conception of what knowledge all the members have in common as well as some understanding of what they or other specific members might know that the group as a whole does not. The idea that people in close relationships tend to assume responsibility for different parts of the knowledge that they, in the aggregate, need has been developed by Wegner and others (Atkinson & Huston, 1984; Wegner, 1986; Wegner, Erber, & Raymond, 1991), who refer to the corporate knowledge store of such a group as a transactive memory:

A complete transactive memory in a group occurs when each member keeps current on who knows what, passes information on a topic to the group's expert on the topic, and develops a relative sense of who is expert on what among all group members. (Wegner, 1995, p. 326)

The development of a transactive memory during training has been proposed as the reason why people who are trained to perform a moderately complex task (e.g., assemble a transistor radio) perform better as a group if trained as a group than if trained individually (Liang, Moreland, & Argote, 1995). If a transactive memory is to be effective, not only must its users know what parts of it they are responsible for, they must know what other specific members are responsible for as well; they must have a reasonably accurate model of what other specific members know that they themselves do not. Much of the work on transactive memory has involved couples, but the idea of a division of labor for knowledge acquisition and retention is applicable to groups of any size.

In any group, there is likely to be some knowledge that is common to all members and some that is held by only one or a subset of the group's members. Shared knowledge has been recognized as a defining characteristic of groups; group culture is sometimes defined in terms of shared thoughts (Levine & Moreland, 1991). However, how groups access and make use of knowledge that is held by only one or a subset of its members is also a question of both theoretical and practical interest, especially in view of the fact that lack of a shared understanding of how skills and knowledge are distributed in a group can have a negative effect on the functioning of the group (Hackman, 1987).

The results of several studies suggest that groups (at least newly formed groups) tend to focus, in discussions, on information that they hold in common, somewhat to the neglect of information that is held by only one or a few members, and that this neglect may sometimes adversely affect the performance of the group as a whole (Gigone & Hastie, 1993; Kim, 1997; Stasser, Taylor, & Hanna, 1989; Stasser & Titus, 1985, 1987). This result has been found several times with a hidden-profile task, introduced by Stasser and colleagues to study communication in decision-making groups. One of several decision alternatives is superior to the others, but its superiority is revealed only by the aggregation of information from different members of the group; information that is common to all group members supports selection of a suboptimal alternative. The discovery of methods to increase the likelihood that decision-making groups will tap the knowledge held by only one member or a subset of group members has been an objective of some research (Hollingshead, 1996).
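The structure of such a hidden profile is easy to see in a toy computation. The sketch below is mine, not material from the cited studies; the members, items, and weights are invented so that the shared evidence favors one alternative while the pooled unshared evidence favors another.

```python
# Toy hidden-profile setup (invented data, in the spirit of Stasser & Titus, 1985).
# Each item is a "pro" for one of two alternatives, all weighted equally.

shared_items = [("B", 1), ("B", 1), ("B", 1)]   # pros for B known to every member
unique_items = {                                 # pros for A, each known to one member only
    "member1": [("A", 1), ("A", 1)],
    "member2": [("A", 1), ("A", 1)],
    "member3": [("A", 1), ("A", 1)],
}

def score(items):
    """Tally the evidence for each alternative in a pool of items."""
    totals = {"A": 0, "B": 0}
    for alternative, weight in items:
        totals[alternative] += weight
    return totals

# Any single member, and any discussion confined to common knowledge, favors B:
print(score(shared_items + unique_items["member1"]))   # {'A': 2, 'B': 3}

# Pooling the unshared items reveals that A is actually superior:
all_unique = [item for items in unique_items.values() for item in items]
print(score(shared_items + all_unique))                # {'A': 6, 'B': 3}
```

A group whose discussion dwells on the three shared items will choose B; only by surfacing each member's two unique items does A's superiority become visible, which is exactly the aggregation failure the studies above report.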

How We Construct Models of What Others Know


Others, in this context, can refer to large, heterogeneous groups (e.g., potential readers of a newspaper article), small groups with a common defining characteristic (e.g., members of a community historical society), or a single individual (e.g., an acquaintance to whom one is writing a letter, a stranger with whom one is engaged in a face-to-face conversation). These connotations represent different sorts of challenges vis-à-vis the problem of calibrating one's message to what one believes its recipient knows. At the one extreme, one needs a model of what most people know; at the other, one needs a model of what a specific individual knows. When one knows nothing about the individual, the group model may be the best one can do, but the more differentiating information one has about the individual, the better one should be able to fine-tune communication with that individual.

According to the view presented here, one constructs a model of a specific other's knowledge by shaping a default model of what an unknown random other person knows, taking into account information one has, or acquires, about the other individual (or group) that makes him, her, or it different from the default model. More specifically, model construction is viewed as a process in which one (a) starts with a model of one's own knowledge, applies to this any reasons one has for believing one's knowledge to be unusual, and constructs from this basis a default model of a random other; (b) develops the default model into an initial model of a specific other in accordance with any differentiating knowledge one may have of the individual, including what may be inferred from knowledge of his or her categorical affiliations; and (c) modifies one's working model on an ongoing basis in accordance with new information obtained. The proposal is represented in Figure 1.

Figure 1. Illustrating the bases of one's working model of specific others' knowledge.

This can be seen as a case of the general reasoning heuristic of anchoring and adjustment (Tversky & Kahneman, 1974), according to which people make judgments by starting with an anchor as a point of departure and then making adjustments to it. The anchor may be provided by someone else, by oneself, or by some aspect of the context in which the judgment must be made. The primary finding of numerous experiments investigating the use of this heuristic is that when people are given an anchor, they typically adjust their judgments in the right direction, but by an insufficient amount (Carlson, 1990; Chapman & Bornstein, 1996; Slovic & Lichtenstein, 1971). It is as though they give more credence to the anchor than it deserves. In the proposed model, the anchor for one's judgments about other people's knowledge is one's own knowledge, and, in keeping with the anchoring-and-adjustment heuristic, it is assumed that adjustments are made in the right direction, but often by insufficient amounts.

The model predicts that one's estimates of what unknown others know are likely to err in the direction of what one knows, or thinks one knows. In the absence of a basis for assuming otherwise, one is likely to overestimate the probability that another knows something one knows oneself and to underestimate the probability that another has a specific bit of knowledge that one does not have. In other words, the model predicts that, other things being equal, one is likely to overestimate the extent to which a random other person's knowledge corresponds to one's own. Experimental results, which are reviewed later, are generally supportive of this expectation.

I do not mean to suggest that one goes through the process represented in Figure 1 every time one has to decide whether a specific other person has a particular bit of knowledge. I assume that we develop models of specific others by learning about them, and especially as a consequence of interacting with them, over time. One's model of a close acquaintance (e.g., one's spouse) presumably includes a model of (much of) what that individual knows and does not know, and how his or her knowledge differs from one's own. I assume that the model one has of the knowledge of a close acquaintance is constantly subject to change as a consequence of feedback regarding its accuracy, that, at any given time, it is the result of the cumulative effect of such feedback over the past, and that when the model does not include a representation of whether the acquaintance knows a particular fact, or a basis for an inference on the question, one is likely to make the judgment on the basis of whether one knows it oneself.
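Stated computationally, the prediction can be illustrated with a minimal sketch of anchoring with insufficient adjustment. The adjustment weight and base rates below are invented for illustration; they are not parameters estimated in any of the cited studies.

```python
# Minimal sketch of own-knowledge anchoring with insufficient adjustment.
# own: 1.0 if the judge knows the item, 0.0 if not (the anchor).
# believed_base_rate: the judge's belief about how widely the item is known.
# w: fraction of the warranted adjustment actually made; w < 1.0 captures
#    the "right direction, insufficient amount" finding.

def estimate_other_knows(own: float, believed_base_rate: float, w: float = 0.4) -> float:
    """Judged probability that a random other person knows the item."""
    return own + w * (believed_base_rate - own)

# Knowing an obscure item (only 10% of people know it) yields overestimation:
print(estimate_other_knows(own=1.0, believed_base_rate=0.1))  # 0.64, far above 0.1

# Lacking a common item (90% of people know it) yields underestimation:
print(estimate_other_knows(own=0.0, believed_base_rate=0.9))  # 0.36, far below 0.9
```

Note that even a judge whose belief about the base rate is perfectly accurate produces biased estimates so long as the adjustment weight is less than 1; the bias is always toward the judge's own knowledge, which is the signature of overimputation.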


Constructing the Default Model


Default or prototypical models of people (especially of how people commonly react in specific situations) are used in artificial intelligence software as points of departure for anticipating how individuals will react in those situations. That people tend to behave in stereotypical ways in specific situations is the idea behind the development of scripts, frames, and similar constructs that have proved to be useful in modeling and predicting human behavior (Minsky, 1975; Schank & Abelson, 1977). The idea that, in developing a model of what another person knows, one may begin with a default model and then update it on the basis of individuating information has been proposed by Wegner et al. (1991). The default model, in this case, is assumed to be based on characteristics, such as sex and age, that permit social classification from which stereotypical inferences can be drawn. The argument made here is that the best basis for a default model of what a random other person knows is one's model of one's own knowledge, adjusted to take into account any ways in which one sees one's own knowledge as special or unlikely to be representative of the knowledge of people in general.

Everyone has some conception of what he or she knows. The most direct evidence one has of one's own knowledge is the information one can retrieve from memory for use in appropriate contexts. However, a substantial literature on metacognition (metaknowledge, metamemory), and especially on the feeling-of-knowing experience, attests to the fact that people know more, and know that they know more, than they are always able to retrieve on demand. Numerous studies have shown that when people feel they have knowledge in memory that they cannot retrieve, the strength of this feeling is a reasonably good indication of the probability that they will be able to recall it eventually or to recognize what they cannot produce (Blake, 1973; Flavell & Wellman, 1977; Gruneberg & Monks, 1974; Hart, 1967; Leonesio & Nelson, 1990; Read & Bruce, 1982; Smith & Clark, 1993). The feeling of knowing and the closely related tip-of-the-tongue experience (Brown & McNeill, 1966) have been documented with a variety of experimental tasks; most relevant to the topic of this article is the work that has been done with general-knowledge questions.

Studies in which participants have been asked to indicate their degree of confidence that they would be able to recognize answers to questions for which they are not able to produce answers on request provide compelling evidence that people are able to say, with considerable accuracy, what they know but cannot retrieve (Hart, 1965; Metcalfe, 1986; Nelson, Gerler, & Narens, 1984; Nelson & Narens, 1990). The finding of a positive relationship between the strength of the feeling of knowing and the time people spend searching for an answer (Gruneberg, Monks, & Sykes, 1977; Lachman & Lachman, 1980; Reder, 1987, 1988; Ryan, Petty, & Wenzlaff, 1982) suggests that people use their knowledge of their own knowledge in directing attempts at knowledge retrieval. There are other indications that people use their knowledge of their own knowledge as a basis for strategic decisions about whether and how to search memory for desired information (Reder & Ritter, 1992).

One's default model of what a random other person knows, according to my conjecture, is one's model of what one knows oneself, adjusted to take account of ways in which one considers one's knowledge, or that of the specific other, to be unusual. Bases for considering one's own knowledge to be unusual include whatever differentiating knowledge one has of oneself: education, vocation, political affiliation, religion, special interests, and so on. One knows some things by virtue of being a sentient human being (e.g., the difference between pleasure and pain), others because of living in New England (e.g., the difference between New England ("real") clam chowder and the Manhattan variety), and still others as a consequence of being a father, a plumber, and a Boston Red Sox fan.

Transforming the Default Model to a Model of Specific Others


The default model is assumed to serve as the basis for the derivation of more person-specific models. Such derivations make use of various types of clues, which are discussed in a later section of this article. These include shared immediate context, knowledge of shared past experiences, and models of knowledge shared by members of categories (e.g., social, professional, avocational) to which a specific other is known or believed to belong.

Psychologists and anthropologists have long been interested in the question of how knowledge is distributed within and across culturally, socially, and occupationally defined groups of people and the artifacts (e.g., books, tools, rules, and procedures) in which much of that knowledge is codified and by which it is passed from generation to generation (Cole & Engestrom, 1993; Schutz, 1970). This question is central to work on current topics of interest such as collective induction (Laughlin & Hollingshead, 1995; Laughlin, VanderStoep, & Hollingshead, 1991), collaborative or socially shared cognition (Levine, Resnick, & Higgins, 1993; Resnick, Levine, & Teasley, 1991), distributed or situated cognition (Salomon, 1993), the intergenerational transfer of cognitive skills (Sticht, Beeler, & McDonald, 1992), and social cognition more generally (Fiske & Taylor, 1991).

How best to think about how knowledge is distributed (what it means for a group to know something, how to characterize what a group knows, and how best to relate what a group knows to what its individual members know) is not clear (Nickerson, 1993). These questions are not pursued here beyond noting that every person has a crude model of how knowledge is distributed that, in various ways, guides his or her everyday actions. People assume, for example, that what individuals know can be inferred, to a practically useful degree, from the professional or occupational groups to which they belong. Thus, one consults a physician to deal with recurring chest pains but turns to an automobile mechanic when one's car does not steer properly. How people develop models of the distribution of knowledge, how precise and accurate these models are, and how susceptible the models are to adjustment on the basis of feedback are all questions for research.

It seems clear that the imputation of knowledge to others may be done with varying degrees of awareness. One may impute some knowledge (e.g., the knowledge that Wednesday follows Tuesday) automatically without being conscious of doing so; in other cases, one may impute knowledge as a consequence of a thought process of which one is very much aware (e.g., "She undoubtedly knows who wrote Middlemarch, because she is very interested in English literature"). I assume that the imputations that one is aware of making are more likely to be involved in the development of differentiated models than in one's default model of what others, in general, know.
Continual Updating of the Differentiated Model

The third phase in the development of one's model of the knowledge of a specific other involves refining and continual updating on the basis of incoming information. Such refining and updating may involve the direct acquisition of new information (e.g., through conversation) and the making of inferences from clues regarding a person's education or general level of knowledge and his or her special interests or category memberships. Several researchers have stressed the importance of continually refining one's model of another's knowledge, as well as the dependence of successful communication on the effectiveness with which the refining is done (Horton & Keysar, 1996; Isaacs & Clark, 1987; Krauss & Fussell, 1990, 1991a, 1991b). Some have also cautioned that although a person's thoughts and feelings are likely to be easier to infer the more time one has to get to know the person, individuals may differ greatly with respect to the ease with which what they are thinking or feeling can be ascertained (Hancock & Ickes, 1996; Ickes et al., 1997).

Krauss and Fussell (1990, 1991a, 1991b) argued that people's assumptions about others' knowledge are necessarily tentative and probabilistic and are best thought of as hypotheses that need to be evaluated and modified dynamically over time. Communicative messages are formed with a model in mind of the recipient's knowledge, which is determined both by prior beliefs and by feedback received during interaction, through which the beliefs are modified. In this view, the role of prior beliefs is especially important when feedback is not possible; in a conversation, erroneous beliefs about another's knowledge have a chance of being recognized as such and corrected as a consequence of the feedback that is received, but when no feedback is possible, incorrect beliefs are more likely to persist.

Investigators have noted the critical role that the first few verbal exchanges in a conversation can play in providing each participant with information about the knowledge another has with respect to the topic of conversation (Schegloff, Jefferson, & Sacks, 1977). This process is especially important when the participants in a conversation differ greatly in their level of topical expertise (Isaacs & Clark, 1987). Isaacs and Clark suggested that participants are likely to volunteer information if they believe it will make their talk more efficient.

I have already mentioned the audience design hypothesis, according to which speakers design their messages to be appropriate to the assumed knowledge of their recipients. Horton and Keysar (1996) contended that even more important is the role that monitoring of feedback plays in helping speakers revise their utterances to make their intentions clear. They argued that the monitoring-and-adjustment approach to message production is more efficient than that of initially designing utterances to take common ground into account, because it requires the expenditure of fewer resources.

Because the recipient of a message who is in the same immediate context as the producer of that message is likely to have the same situational information, the producer may, in effect, utilize common ground even without making any deliberate attempt to do so. When this is the case, monitoring may reveal the utterance to be understandable to the listener, in which case no adjustment is necessary. For present purposes, it is important to note that Horton and Keysar's (1996) proposal did not question the role of common-ground knowledge in communication; it questioned only the extent to which producers of utterances take it into account in planning utterances, as opposed to monitoring their interpretation and making adjustments to correct for misunderstandings that arise when the producer erroneously assumes information to be available to the recipient.
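The monitoring-and-adjustment strategy can be caricatured as a simple repair loop. The sketch below is an interpretation of the contrast, not Horton and Keysar's (1996) formal proposal; the vocabulary, glosses, and repair rule are invented.

```python
# Interpretive caricature of monitoring and adjustment: emit a cheap,
# speaker-centered message first; repair only when feedback signals trouble.

glosses = {"carburetor": "fuel-air mixer"}   # expansions available for repair
listener_vocab = {"the", "is", "leaking", "fuel-air", "mixer"}

def understood(message):
    """Stand-in for listener feedback: every word must be familiar."""
    return all(word in listener_vocab for word in message.split())

def monitor_and_adjust(message, max_rounds=3):
    """Send the speaker-centered message; adjust only on negative feedback."""
    for _ in range(max_rounds):
        if understood(message):
            return message
        # Repair: replace the terms the listener signaled trouble with.
        message = " ".join(glosses.get(word, word) for word in message.split())
    return message

print(monitor_and_adjust("the carburetor is leaking"))
# -> "the fuel-air mixer is leaking" (one repair round was needed)
```

The efficiency argument is visible in the control flow: when common ground happens to suffice, the loop exits on the first pass and no listener model is ever consulted, whereas an audience-design strategy would pay the cost of tailoring every message up front.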

Own Knowledge as the Starting Point


According to the conceptualization sketched above, what one knows, or thinks one knows, constitutes one's primary basis for constructing a default model of what a random other person knows. One's initial model of another's knowledge, according to this view, is one's model of what one oneself knows, qualified, although often not sufficiently, by ways in which one believes one's own knowledge to be unusual. Much of the remainder of this article focuses on work that is relevant to this conjecture, that is, work relating to the imputation, and often overimputation, of one's own knowledge to others. First, however, it is useful to consider some of the sources of information that can be used to transform a default model of a random other person's knowledge into a model of the knowledge of a specific person.

Clues to What Specific Others Know


To transform a model of what a random other person knows, based on what one knows oneself, into a model of what a specific other person knows, evidence is needed of how the other person's knowledge is likely to correspond to, or to differ from, one's own. Clues regarding either correspondences or differences presumably come from a variety of sources. Social psychologists have investigated how people infer personality traits (or dispositional characteristics) and emotional states from behavioral or categorical information (Krauss & Fussell, 1991b). The ability of people to anticipate how specific others will react to particular events or situations, and the basis of expectations in this regard, have been the focus of some research (Karniol, 1986). Ways in which members of a group can develop models of what other members of the group know, so as to permit the effective sharing of cognitive resources, have been noted by investigators of transactive memory (Wegner, 1986; Wegner et al., 1991).
Observation and Disclosure

What one knows is revealed in various ways by one's actions in specific situations. If, for example, a person witnesses a neighbor dismantling the carburetor of an antique Ford, he or she is likely to conclude that the neighbor knows something about automobile engines, particularly about old Ford engines. When one has the opportunity to observe the behavior of specific others in many situations over a long period of time, one should learn a lot about what they know. Members of a group may form a shared conceptualization of the prototypical member of their group (Niedenthal, Cantor, & Kihlstrom, 1985); they may also develop customs that flag relative status or knowledge level within the group (Levine & Moreland, 1991).

Information that can be used to calibrate a working model of what another knows can be obtained with explicit queries, such as "Where are you from?"; "What do you do for work?"; and "Do you know anything about ...?" People may also convey information spontaneously about what they do or do not know by what they say or do; thus, the responses people evoke from others are often adapted to what people reveal (Isaacs & Clark, 1987; Schegloff, 1972). In the formation of close relationships, what is learned from observation is likely to be augmented by voluntary disclosure between the parties involved (Archer, 1980).

Observational clues to what one knows can be indirect and can take subtle forms. Jameson, Nelson, Leonesio, and Narens (1993) found, for example, that people who observed others trying to answer general-knowledge questions were able to predict (better than those who did not observe them) which of the answers that they were unable to produce they would recognize in a subsequent multiple-choice test.


Shared Immediate Context

Among the most obvious bases for common knowledge is a shared physical context, which is sometimes referred to as copresence or common ground (Clark & Haviland, 1977; Clark & Marshall, 1981). Copresence is a property of face-to-face conversations. If there is only one automobile in view, what is meant by reference to "the car" will be clear. Similarly, if there are several automobiles in view but only one that is blue, "the blue car" will suffice to identify the object of interest. A speaker's assumption that the object to which "the car" refers will be clear to a listener rests on the prior assumption that the listener's perception of the situation in which speaker and listener both find themselves corresponds reasonably closely to the speaker's, at least in certain relevant respects. Reference to a person or place by name moves the conversation forward only if both parties know the person or place named. If a conversation is to be successful, not only must participants share common ground, but each must know, at least approximately, what the common ground is and must be aware that the other participant also knows what it is, of the fact that they share it, and so on (Clark & Carlson, 1981).

Shared Past Experiences

Having sat for an hour, twice a week for several months, in Professor X's class, one is likely to have learned something about Professor X. It seems reasonable to assume that what one has learned is not vastly different from what others who sat through the same lectures learned. So, in discussion, classmates may assume some commonality of knowledge about Professor X, if not about the subject taught. People who grew up in the same household, lived in the same town, or served in the same military organization can safely assume many shared memories and much common knowledge pertaining to home, town, or the military. The awareness of common experiences can provide a basis for the assumption of specific areas of common knowledge for people who know they have visited the same vacation spot, read the same book, shopped at the same store, or driven the same model car.

Not surprisingly, people are better at anticipating the immediate thoughts and feelings of friends than those of strangers (Colvin, Vogt, & Ickes, 1997; Stinson & Ickes, 1992). The greater degree of empathic accuracy among friends than among strangers is a consequence, these investigators suggest, of the fact that friends share more knowledge about each other than do strangers. People in close relationships (e.g., married couples, members of a working team) are likely to have acquired, over time, direct knowledge of what is known by the other member or members of the group. Moreover, as developers of the idea of transactive memory have argued, in some cases planned knowledge allocation (Wegner, 1995) may be negotiated by people in close relationships, because the knowledge that someone else to whom one has easy access knows something relieves one, in many cases, of the need to know it (Wegner et al., 1991; Wegner, Giuliano, & Hertel, 1985).

Who or What One Is: Category Membership

Several levels of specificity of estimates of what others know may be distinguished. At the most general level are estimates of what nearly all people know. At the most specific level are estimates of what given individuals know. Intermediate levels involve estimating what people in specific categories know. Gender, for example, may provide a hint as to the likelihood that one will have certain types of knowledge (M. Ross & Holmberg, 1988). Categories can be more or less broad (e.g., Americans, Texans, Austinites), and category membership can be based on kinship (other family members), place of residence (neighbors), age (preschoolers), interests (baseball fans), vocation (carpenters), and many other criteria. Everyone belongs to numerous conceptual categories and may be described, with varying degrees of completeness and accuracy, in terms of those categories. For example, she is a retired high school history teacher, a grandmother, a bird watcher, and a member of the town's conservation committee; or he is a cabinetmaker, a parent of teenage children, a fisherman, and an active member of a Presbyterian church. Often, clues to the category membership of strangers are provided by circumstances, symbols, or behaviors such as attire (e.g., uniforms, badges, pins), foreign or regional accents in speech, and context (e.g., an adult at a PTA meeting is probably a parent or a teacher).

This article focuses on the problem of estimating what a specific other person knows. Category membership is important because, in developing a model of what a specific other knows, one can use as a basis for inferences knowledge of categories of various degrees of inclusiveness to which the other is known to belong. There are some things that people expect all human beings of normal intelligence to know (e.g., birds fly, fish swim). Nebraskans and Californians should have some common knowledge by virtue of the fact that they reside in the United States, but each group probably also has some knowledge that is more likely to be found among residents of its particular state. If one knows that Tom is an electrician, one is likely to assume that he knows how to wire a house; one is less likely to make this assumption with respect to Dick, if one knows that he is an accountant.


In general, one assumes that people have the knowledge that people in their professions or occupations are typically required to have, although one who is not in a particular profession is unlikely to know, in detail, what knowledge is required of someone who is. Our willingness to rely on specific professionals to have the knowledge that members of their profession are generally assumed to have involves the same principle of division of labor as does the idea of transactive memory applied to people in close relationships (Wegner, 1986), but at a somewhat more societal level. The role of categorical membership as a clue to what one knows is in keeping with the distinction between prototypical and idiosyncratic knowledge structures made by Karniol (1995) to represent generalized knowledge about people in the first case and knowledge of how individuals (oneself or specific others) deviate from the norms in the second. The emphasis here, however, is on clues to what others know, or think they know, as distinct from other types of knowledge about them.
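One way to picture this kind of category-based imputation is as a lookup that overrides a default estimate whenever category information is available. The sketch below is illustrative only; the categories, items, and probabilities are invented rather than drawn from any cited study, and taking the maximum over categories is just one simple combination rule.

```python
# Invented illustration of category-based knowledge imputation.

DEFAULT_P = 0.3  # fallback guess that a random other person knows a technical item

# One's rough model of what members of occupational categories typically know.
category_knowledge = {
    "electrician": {"wire a house": 0.95, "balance a ledger": 0.2},
    "accountant":  {"wire a house": 0.1,  "balance a ledger": 0.95},
}

def p_knows(person_categories, item):
    """Estimate that a person knows `item`, given his or her known categories."""
    estimates = [category_knowledge[c][item]
                 for c in person_categories
                 if c in category_knowledge and item in category_knowledge[c]]
    # Use category information when available; otherwise fall back on the default.
    return max(estimates) if estimates else DEFAULT_P

print(p_knows({"electrician"}, "wire a house"))  # 0.95 (Tom)
print(p_knows({"accountant"}, "wire a house"))   # 0.1  (Dick)
print(p_knows({"stranger"}, "wire a house"))     # 0.3  (no category clue; default)
```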

Implicit Models of Knowledge Structures

Consider the question "If Mary knows how to find the square root of a number (without the aid of a calculator), how probable is it that she knows how to do long division?" Presumably most people would feel more confident in giving a high probability in answer to this question than in doing so in answer to the converse: "If Mary knows how to do long division, how probable is it that she knows how to find the square root of a number?" One basis for inferences of the form "If one knows X, one must know Y" is the condition that knowledge of Y is a prerequisite for knowledge of X. The square-root and long-division example illustrates the point because the process for finding square roots, at least as it was once taught, requires the use of division; so it is relatively safe to assume that anyone who knows how to find a square root must know how to divide. When the prerequisite relationship between X and Y holds, anything that reveals that a person knows X permits one to make the inference that the person knows Y as well.

However, people can make inferences of the form "If one knows X, one probably knows Y" even when knowledge of Y is not a prerequisite for knowledge of X. In such cases, the bases of the inference are less causal and direct; they may rest on implicit models of knowledge structure (models of how knowledge hangs together, at least as it is usually acquired in our culture). The idea applies across conventional subject areas as well as within them, but in a looser way. One might find it hard to decide what to make of the question "If Mary knows how to find the square root of a number, how probable is it that she knows how to tie a clove hitch?" On the other hand, the question "If Mary knows how to find the square root of a number, how probable is it that she knows how to read?" does not seem quite so bizarre. The abilities to find square roots and to tie clove hitches seem not to be very closely related, at least in any causal way, but then neither do the abilities to find square roots and to read. The ability to find square roots does, however, suggest attainment of a certain level of cognitive development and academic achievement, and considering only that portion of the population that has attained that level of development and achievement, it seems reasonable to assume that the percentage that can read is larger than the percentage that knows how to tie a clove hitch.

Just as we can make inferences of the form "If one knows X, one probably knows Y," we can also infer from the discovery that one does not have some particular bit of knowledge some other things that the person probably does not know. Suppose, for example, that an avid basketball fan finds himself sitting beside Harry, a stranger, at a basketball game. If, during the game, Harry asks what the three-second rule is, the fan is likely to infer from the fact that this question was asked that Harry's knowledge of the game is slight. If the fan discovers that Harry does not know who Michael Jordan is, he will undoubtedly conclude that Harry's knowledge of National Basketball Association (NBA) basketball (perhaps even of professional sports more generally) is not very extensive.
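The asymmetry in the square-root example, and the inference from ignorance in the basketball example, both follow from simple set inclusion. In probabilistic notation (a reformulation supplied here, not notation from the article):

```latex
% If knowing Y is a prerequisite for knowing X, then the set of people who
% know X is contained in the set of people who know Y, so:
P(\text{knows } Y \mid \text{knows } X) = 1,
\qquad
P(\text{knows } X \mid \text{knows } Y)
  = \frac{P(\text{knows } X)}{P(\text{knows } Y)} \le 1 .
```

With invented figures (say 90% of adults can do long division and 15% can extract square roots by hand), the first probability is 1, whereas the second is 0.15/0.90, or about .17. By the contrapositive, someone who cannot divide certainly cannot extract square roots; the fan's inference about Harry is the looser, merely probabilistic version of the same reasoning.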

Levels of Knowledge

The idea that people can understand a subject or domain at different levels is frequently encountered in the mathematical and scientific literature. Hardy (1940/1989) contended that what distinguishes the best mathematics is its "seriousness":

The "seriousness" of a mathematical theorem lies, not in its practical consequences, which are usually negligible, but in the significance of the mathematical ideas which it connects. We may say, roughly, that a mathematical idea is "significant" if it can be connected, in a natural and illuminating way, with a large complex of other mathematical ideas. (p. 89)

Hardy (1940/1989) acknowledged that it is difficult to be precise about what constitutes mathematical significance but suggested that essential aspects include generality and depth. A general mathematical idea is one that figures in many mathematical constructs and is used in the proofs of different kinds of theorems. Depth relates to an idea that Hardy considered very difficult to explain, but one that he believed would be recognized by mathematicians. Mathematical ideas, in his view, are arranged in strata representing different depths; the idea of an irrational number is deeper, for example, than that of an integer.

According to this view, people who understand concepts at a given level are also likely to understand related concepts at a less deep level, but it would not be expected that people who understand concepts at a given level would necessarily understand all related concepts at a deeper level. Thus, a highly knowledgeable mathematician should be able to calibrate another's knowledge of mathematics, or of a particular area of mathematics, by determining at what conceptual level the other could participate in meaningful discourse. This approach to inferring what another knows is likely to work better when used by the more knowledgeable person, because its use requires a sufficiently deep knowledge of the subject to assess the level at which another can function. The less knowledgeable person can discover that his or her own knowledge is not as deep as that of the other party to the conversation appears to be, but he or she may not be able to tell whether the other party is really more knowledgeable or just good at pretending to be so.

Research on problem solving, and especially on ways in which the performance of experts on problems in their domains of expertise differs from that of novices in those domains, has also revealed the importance of levels of knowledge (Larkin, McDermott, Simon, & Simon, 1980). Experts typically think about problems at a deeper conceptual level than do novices: When asked to sort problems in terms of similarities, novices are likely to sort on the basis of surface characteristics (e.g., objects referred to explicitly in the problem statement, such as inclined planes, pulleys, or springs; the physical configuration described), whereas experts are more likely to group the problems on the basis of the fundamental physical or mathematical principles to which they relate (e.g., the law of conservation of energy, the law of conservation of momentum; Chi, Feltovich, & Glaser, 1981; Chi, Glaser, & Rees, 1982). Categories that are used by experts tend to involve higher levels of abstraction than those used by novices. Chi et al. (1981) found that when novices were asked to specify the features of problems that led them to adopt specific approaches for attempting to solve them, they mentioned literal objects and terms contained in problem statements; experts, however, were more likely to mention states and conditions of the physical situations described by the problem statements and derived features not explicitly mentioned. Such differences can provide a basis for inferring the depth of another person's knowledge of a problem domain.

Assumed Commonality of One's Own Knowledge


Although knowledge of who people are (i.e., of the categories to which they belong) and information gained from what they say or how they act in conversation provide useful clues to what they know, clues from these sources are not always available. Sometimes people communicate with others about whom they know very little (as when giving directions to a stranger), sometimes people address themselves to an audience that includes people with many different backgrounds (as when writing something for the general public), and sometimes, even when addressing a single individual, there is no opportunity for immediate feedback that reveals whether the message is being understood (as when writing a letter). In such situations, one has little alternative but to rely on assumed common knowledge. Davidson (1982) put the case this way:
If we know that in speaking certain words a man meant to assert that the price of plutonium is rising, then generally we must know a great deal more about his intentions, his beliefs, and the meaning of his words. If we imagine ourselves starting out from scratch to construct a theory that would unify and explain what we observe (a theory of the man's thoughts and emotions and language) we should be overwhelmed by the difficulty. There are too many unknowns for the number of equations. We necessarily cope with this problem by a strategy that is simple to state, though vastly complex in application: the strategy is to assume that the person to be understood is much like ourselves. That is perforce the opening strategy, from which we deviate as the evidence piles up ... [U]nless we can interpret others as sharing a vast amount of what makes up our common sense we will not be able to identify any of their beliefs and desires and intentions, any of their propositional attitudes. (p. 302)

Imputing One's Own Knowledge to Others

I want to emphasize that I am arguing that imputing one's knowledge to a specific other is a default measure; it is what one does in the absence of knowledge, or of a basis for inferring, that the other's knowledge is different from one's own. It is the starting point for developing a model of what a random other person, about whom one knows little or nothing, knows. In the case of a familiar other, about whom one may know quite a lot, it is the basis for filling in the gaps. For example, I know that my archeologist friend, Jane, knows a great deal about archeology that I do not know (just because I know she is an archeologist), but when discussing with her for the first time the current situation in the Balkans, my best model of what she knows when we begin the conversation is what I know myself. In particular, the idea that one imputes one's own knowledge to specific others when one lacks individuated models of what they know, or to fill in gaps in such models as one has, is not contrary to the idea that one acquires individuated models of the knowledge of specific others that are accurate to varying degrees and more useful than an entirely default model. If one has no direct knowledge of what another, whom one is addressing, does or does not know, and little or no knowledge that would provide the basis for making inferences in this regard, the only thing left to do is to use one's own knowledge as a default assumption as to what the other knows.

The conjecture that people construct models of what other individuals know by starting with what they themselves know as a point of departure is consistent with the assumption that people tend to see themselves as normative in a variety of ways. The tendency to view oneself as representative of other people in specific respects is well documented; people who engage in a particular behavior, for example, tend to estimate that behavior to be more prevalent than do people who do not engage in that behavior (Marks & Miller, 1987; Mullen et al., 1985; L. Ross, Greene, & House, 1977). Similarly, people tend to project their own attitudes when attempting to assess the attitudes of specified groups (Hoch, 1987). Children may judge how others feel by imagining how they themselves would feel in the same situation (Chandler & Greenspan, 1972).

Lillard (1998) noted that few anthropologists have looked for differing concepts of the mind among different cultural groups and suggested that this may be because people tend to assume that others share their ideas about the mind and world; she noted too the likelihood that, in ethnographic reports, a folk psychology will be made to appear more like one's own than it really is. Lillard reviewed evidence that beliefs about the mind and how it works vary considerably from culture to culture, and she pointed out the difficulty of inferring feelings and other internal states from external clues across cultures.

Salmon (1974) pointed out that people tend to equate rationality with agreement with themselves, that is, to see their own thinking and behavior as normatively rational:

Those who are politically far right are likely to regard those of the far left as irrational and vice versa, while the moderate is likely to doubt seriously the rationality of all extremists (except, perhaps, those who carry moderation to an extreme). (p. 70)

Hansen and Donoghue (1977) presented evidence that people sometimes take their own behavior as the norm even in the light of sample-based information to the contrary. There is evidence also that when people make comparisons between themselves and others, they are more likely to consider whether the others are similar to them than vice versa; that is, people spontaneously take themselves as the reference point, or prototypical person (Dunning & Cohen, 1992; Dunning, Perie, & Story, 1991). When people are forced to make comparisons in both directions, they are likely to judge the similarity to be greater when they compare others with themselves than when they compare

746

NICKERSON actuarial basis) and to the difficulty they themselves had with the questions. Karniol (1986, 1990, 1995) proposed a model of how people attempt to take other people's perspectives, or to "put themselves in others' shoes," that assumes a hierarchy of transformation rules that directs a search process by indicating directions in which others' thoughts or feelings could be channeled. Karniol argued that people take for granted that other people's psychological reactions to events and situations are rule governed. As she used the term, transformation rules are bits of knowledge that link situations or contexts and psychological reactions to them; such rules are assumed to be accessed whenever one has a need to anticipate what another person is likely to experience. Karniol (1986) noted that of course people can apply to others' thought processes only those transformation rules of which they are aware: "Because they have no direct knowledge of targets' transformation rules, observers assume that the transformation rules they themselves know account for the link between targets' perceptions and their psychological reactions" (p. 933). She proposed 10 such rules, which she considered capable of accounting for much perspective-taking behavior. These rules are assumed to be hierarchically organized in the sense that they are considered in a fixed order when one is trying to anticipate another's reaction to a situation. The order is assumed to differ for different people but to be fixed for any given person. Notably, for present purposes, a primary basis of the intuitive appeal of a rule is recognition of its applicability to one's own experience. In effect, this means that one's best guess as to how another person will react in a specific context is one's awareness, or belief, of how one would react in that context. Karniol (1986) suggested that individuals may differ in the extent to which they consider their own reactions to situations to be representative of those of others; however, even people who see themselves as unusual or unique may use their own reactions as a basis for anticipating the reactions of others, if only by way of contrast. Several other studies have demonstrated that people tend to impute to others knowledge that they themselves have. Some of these are mentioned later in connection with a discussion of the problem of overimputation of one's own knowledge to others.

themselves with others (Catrambone, Beike, & Niedenthal, 1996; Srull & Gaelick, 1983). Several studies have shown that estimates of what other people know are influenced by what the estimators believe they themselves know. In one such study, college students attempted to answer general-knowledge questions and to estimate, for each question, the percentage of other college students that would be able to answer that question correctly. Students gave higher estimates for questions to which they thought they knew the answers (as indicated by confidence ratings), even when their own answers were wrong, than for those to which they knew they did not know the answers (Nickerson, Baddeley, & Freeman, 1987). Fussell and Krauss (1991) had student residents of New York City rate their familiarity with each of 22 local landmarks and estimate the proportions of other city residents that would be able to identify them. The landmarks varied in familiarity over a considerable range: Some were familiar to nearly all participants; others were familiar to few or none of them. Estimates of familiarity correlated highly with actual familiarity. Participants generally overestimated the familiarity of the less familiar landmarks and underestimated the familiarity of the more familiar ones. This could be a regression phenomenon to some degree, inasmuch as there is not much opportunity to overestimate the familiarity of something that is in fact very familiar or to underestimate the familiarity of something that is very unfamiliar. However, overestimation was extreme in the cases of estimates of how familiar the least familiar landmarks would be, and participants who were highly confident of being able to identify specific landmarks judged those landmarks to be more familiar to others than did participants who were not very confident of their own ability to identify them. In a follow-up study of similar design, Fussell and Krauss (1992, Experiment 1; Krauss & Fussell, 1991b) had students rate the recognizability, to themselves and to others, of the pictures of public figures and to give the names of those they could identify. For any given figure, the participants who could identify the person by name rated his or her recognizability to be higher than did the participants who could not identify that person by name. In still another experiment, Fussell and Krauss (1992, Experiment 3; Krauss & Fussell, 1991b) had college students estimate the proportion of peers that would be able to identify each of several everyday objects. For any given object, people who could identify it estimated the proportion of other people who would be able to identify it to be higher than did people who could not identify the item. When participants were unable to name an object, they gave a feeling-of-knowing rating for that object; for most participants, these ratings were highly correlated with their estimates of the percentage of others that would know the name. I have already mentioned a study by Jameson et al., (1993) in which participants who observed others trying to answer generalknowledge questions predicted, better than did those who did not observe, which of the answers that they were unable to produce they would be able to recognize in a subsequent multiple-choice test. Even those participants who did not observe performance on the original task were able to predict, at a better than chance level, recognition performance on the unanswered items. Jameson et al. 
attributed the predictions in the latter case to a combination of the predictors' sensitivity to the relative difficulty of the items (on an
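Karniol's account has a natural procedural reading, sketched below. This is only an illustrative rendering, not her model: the particular rules and the first-match search are invented for concreteness; what the sketch preserves is the fixed, person-specific ordering of rules and the grounding of each rule in the perceiver's own experience:

# Illustrative rendering of a fixed-order search through transformation
# rules, in the spirit of Karniol's account. The rules themselves are
# invented examples.
PERSONAL_RULE_ORDER = [
    # (condition on the situation, anticipated reaction)
    (lambda s: "loss" in s, "sadness"),
    (lambda s: "threat" in s, "fear"),
    (lambda s: "success" in s, "pride"),
]

def anticipate_reaction(situation_features):
    """Return the reaction predicted by the first applicable rule.

    The order of PERSONAL_RULE_ORDER is fixed for a given perceiver, as
    Karniol's account assumes, and each rule is one the perceiver knows
    from his or her own experience.
    """
    for applies, reaction in PERSONAL_RULE_ORDER:
        if applies(situation_features):
            return reaction
    return "no prediction"  # no known rule covers the situation

print(anticipate_reaction({"threat", "novelty"}))  # fear

The limitation the sketch makes visible is the one Karniol herself noted: the perceiver can search only rules he or she possesses, so predictions about others are bounded by the perceiver's own experience.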

The Utility of One's Own Knowledge as an Indicant of What Others Know


As experimental studies have shown, the tendency to impute one's own knowledge to others can lead to misestimation of what others know. However, many heuristics that work very well in a wide range of naturally occurring situations can be shown to work poorly in contrived laboratory contexts; thus, showing that a heuristic can yield miscalculations in laboratory situations does not demonstrate that it is bound to be ineffective in more natural settings. Several investigators have made this point with respect to many of the judgmental biases that have been identified in laboratory research (Cosmides, 1989; Friedrich, 1993; Funder, 1987; Hogarth, 1981). What can be said about the general effects on everyday cognition of the tendency to impute one's own knowledge to others? Investigators have argued that one's knowledge of how one would behave or react in specific situations can be a useful basis, possibly the best basis one has, for anticipating how other people will behave or react in those situations (Dawes, 1989; Hoch, 1987; Kelley, 1999). If this were not the case, how would people be able to understand other people's reactions, to be happy with them when they have cause to celebrate, or to sympathize when they are in pain? This idea is captured in the principle of humanity, according to which, when trying to understand what someone has said, especially something ambiguous, one should impute to the speaker beliefs and desires similar to one's own (Gordon, 1986; Grandy, 1973). It should not be surprising, of course, that such imputations tend to be more accurate the more similar to oneself are those with respect to whom they are made (Cronbach, 1955) and, as Shantz (1983) has pointed out, accurate social judgments are not compelling evidence of nonegocentric functioning, because egocentrically based judgments of others can often be correct.

It can be argued that people must base their knowledge of others' feelings and knowledge on what they themselves feel and know. According to this view, people cannot impute to others feelings that they themselves have not experienced at least to some degree. People assume that when one says one is in pain, that person is experiencing what they themselves experience when they say they are in pain. At a behavioral level, one can express empathy for another, simply in response to overt cues, without actually imagining what the other is experiencing; however, if empathy is taken to mean participation in another's feelings or ideas, it necessarily requires some understanding of what the other's feelings or ideas are likely to be.

Dawes (1989) showed that one does well, statistically speaking, to take one's own opinion as representative of that of the group to which one belongs. The argument is as follows: If each individual in a group takes his or her own opinion on any particular issue as indicative of the opinions of others, people who hold the majority opinion will tend to be more often right than wrong, whereas those who hold a minority opinion will be more often wrong than right; however, because the majority is the majority, people will, on average, be more often right than wrong. The argument shows that one's own opinion is a better basis for assuming what others believe than none at all. It is not intended to show that this basis is better than direct information one might have regarding what other individuals are likely to believe.

Jacoby and Kelley (1987; Kelley & Jacoby, 1996) have shown that when people are asked to judge how difficult others will find it to solve anagrams that they themselves have just solved, judged difficulty for others correlates highly with solution time for themselves. These investigators also got judgments of difficulty for others from participants who were given the solutions and, thus, did not have the experience of solving the anagrams themselves. The experience-based judgments proved to be better predictors of actual difficulty for others (as reflected in solution times) than judgments produced by people who did not solve the anagrams and so had to rely on some more analytic assessment of difficulty.
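Dawes's argument can be made concrete with a small calculation. The following sketch is illustrative only; the 70/30 split of opinion is an assumed figure, not one taken from Dawes (1989):

# Suppose 70% of a group holds opinion A and 30% holds opinion B, and
# each member predicts that a randomly chosen other member shares his
# or her own opinion.
p = 0.70

# A-holders are right with probability p; B-holders with probability 1 - p.
accuracy = p * p + (1 - p) * (1 - p)
print(f"Expected accuracy of self-projection: {accuracy:.2f}")  # 0.58

# Majority members are right more often than wrong (.70), minority
# members wrong more often than right (.30), but the group as a whole
# does better than a no-information guess (.50), which is all that
# Dawes's argument claims.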

Complications With Knowledge Imputation

So far, I have discussed evidence that people use their own knowledge (beliefs, feelings) as a basis for assumptions about what others know (believe, feel), imputing their own knowledge to others, and that, in many cases, this helps people understand others better. In the absence of compelling, identifiable reasons for some other assumption, the best default assumption one can make regarding another's knowledge on a particular subject is arguably one's own knowledge of that subject. Especially in the case of general, everyday factual knowledge or commonsense beliefs, assuming that what one knows is representative of what random others know may be defended as a cost-effective use of one's cognitive resources. If commonsense beliefs about reality (one's own and others') tend to be true, then assuming that other people's beliefs about reality are much the same as one's own is an efficient and effective way of judging others' beliefs (Leslie, 1994; P. Mitchell, 1994). Even if commonsense beliefs often are not true, they are, by definition, widely held; thus, imputing to others one's commonsense beliefs about reality may still make practical sense.

The assumption that one's own knowledge is generally representative of what other people know serves people well, especially in a statistical sense. It also can be the basis, however, for misunderstandings and failures of communication in specific instances. The problem is that people make the assumption with respect to specific knowledge and specific others not only when it is justified but also often when it is not.

Overestimation of the Commonality of One's Own Knowledge


Many of the studies that have demonstrated that people tend to base assumptions about what others know on what they themselves know, or think they know, have not revealed how accurate such assumptions tend to be, but the results from some studies suggest that people often overimpute their own knowledge to others; that is, they find it easy to impute to other people knowledge that they themselves have but others do not. Keysar, Ginzel, and Bazerman (1995) described the tendency to uncritically impute one's own knowledge to others:
Knowledge of the state of the world seems to have an overwhelming effect on people when they attempt to take the perspective of another. They behave as if what they know to be true is also accessible to others who are known to be completely uninformed. (p. 284)

Steedman and Johnson-Laird (1980) surmised that, in the absence of evidence to the contrary, speakers in conversations assume that hearers know everything they themselves know about the world and about the conversations. Piaget (1962) had this to say on the same subject:
Every beginning instructor discovers sooner or later that his first lectures were incomprehensible because he was talking to himself, so to say, mindful only of his own point of view. He realizes only gradually and with difficulty that it is not easy to place oneself in the shoes of students who do not know what he knows about the subject matter of his course. (p. 5)

Anyone who has asked for directions to some desired destination in an unfamiliar city will find it easy to believe that natives of that city are prone to overestimate how easy a stranger is likely to find it to follow the directions they give, despite the fact that people tend to give more detailed responses to requests for directions if the requesters identify themselves as being from out of town than if they do not (Kingsbury, 1968). One plausible explanation for this is that people tend to impute to others at least some of what they know about an area with which they are familiar, and, therefore, they overestimate the ease with which their directions can be understood, remembered, and followed.

A similar observation applies to procedural directions. When, for example, one describes the rules of a game with which one is very familiar to an acquaintance who is about to try to play it for the first time, one may believe one has described them relatively completely when, in fact, one has omitted numerous important details. When one knows something well, and has known it for a long time, it is difficult to put oneself in the position of a person who has none of that knowledge.

In the study by Nickerson et al. (1987), participants were more likely to overestimate the commonality of knowledge if they themselves had it than if they did not, the commonality of knowledge being indicated by normative data collected by Nelson and Narens (1980). Another illustration of the overimputation of one's own knowledge to others comes from an experiment by Goranson (described in Kelley, 1999), who had instructors attempt to answer quiz questions as they expected the average students in their classes to answer them. Instructors provided, on average, about twice as many correct answers as did their students. One must presume that the instructors realized their own knowledge of the answers was greater than that of their students and that they reflected this realization in their attempts to predict students' quiz performance; however, it is apparent that any adjustments they made were too small to accommodate the actual difference in their knowledge bases.

I have mentioned a study in which Fussell and Krauss (1989b) found that people tailored verbal descriptions of nonsense figures to their expectations regarding who would later have to use the descriptions. These investigators noted that the participants who took part in their experiments were only moderately successful in taking others' perspectives. In particular, many of the messages that were intended for others used idiosyncratic perspectives that were poorly understood by their recipients. "Some of these probably resulted from speakers' miscalculation of the common ground that existed between themselves and their addressees, that is, from a belief that others would view the figure from the same perspective as they did" (Krauss & Fussell, 1991b, p. 8).

Keysar (1994) presented evidence of the imputation of one's own knowledge to a listener by nonparticipating observers of a speaker-listener communication. In this case, the observers were readers who, when attempting to comprehend a communication described in text, attributed to the listener the same understanding of the speaker's utterance as their own, even when the utterance was clearly ambiguous and the readers knew that the listener did not have the disambiguating information that they, the readers, had. A similar finding was reported by P. Mitchell, Robinson, Isaacs, and Nye (1996). These investigators found that observers judged it more likely that listeners would believe a message that contradicted the listeners' prior belief about a situation if they (the observers) knew the message to be true than if they knew the message to be false, on the basis of information that they were aware the listeners could not have. This experiment was counterbalanced in such a way as to preclude an artifact resulting from the true message being more plausible than the false one.
The investigators took the results as supportive of the notion that people's judgments of what others believe are contaminated by what they themselves believe.

In Fussell and Krauss's (1992; Krauss & Fussell, 1991b) experiments with New York City landmarks, participants who identified an item correctly were more likely than those who did not to overestimate the proportion of other people that would be able to identify it. Krauss and Fussell (1991b) summarized their findings this way:
We have found that people's estimates display considerable sensitivity to the way knowledge is distributed socially. However, these judgments also display a systematic bias: people tend to overestimate the prevalence of things they know and to underestimate the prevalence of things they don't know. (p. 19)

Krauss and Fussell suggested that the tendency to overestimate the likelihood that one's own perspective will be shared by others may relate to the availability heuristic, as described by Tversky and Kahneman (1973): The ready availability of one's own perspective, according to this view, may make it difficult for one to think of alternatives or even to be keenly aware of the possibility of their existence. The tendency of people to assume that the knowledge of others is similar to their own reveals itself in negotiation situations in two ways:
First, when others are more informed than they are themselves, people do not fully take into account others' privileged access to information; they sometimes behave as if the others do not have such extra information. Second, even when people know that others do not have access to their own privileged information, they may behave as if those others had access to this information. (Keysar et al., 1995, p. 283)

Most of the experiments reviewed here provide evidence of the second type of miscalculation; Bazerman and his colleagues have shown, however, that the first type also occurs (Ball, Bazerman, & Carroll, 1991; Bazerman & Carroll, 1987; Samuelson & Bazerman, 1985).

In some instances, the tendency to overimpute one's knowledge to others appears to stem from a failure to recognize the privacy of one's own experiences, or a failure to make a sufficiently sharp distinction between internally and externally produced aspects of those experiences. This point was illustrated in an experiment by Newton (1990). Some participants tapped the rhythms of well-known songs; others attempted to identify the songs on the basis of the tapped rhythms. Tappers estimated the likelihood that listeners would be able to identify the songs to be about .5; the actual probability of correct identification was about .025. An explanation of this very large difference is that the tappers, who imagined a musical rendition of a song when they tapped its rhythm, failed to recognize the extent to which their subjective experience differed from that of the listeners and to make allowance for this difference when making their estimates (Griffin & Ross, 1991).

The problem of distinguishing between externally and internally produced experiences has been studied by several researchers. Research has focused on two processes: that of distinguishing between a present perception and a present act of imagination (reality testing) and that of distinguishing between memories of externally caused events and memories of imagined events (reality monitoring; Johnson & Raye, 1981). Both concepts are relevant. Although a major reason for interest in reality testing has been the need for a better understanding of the impaired ability that people with certain types of mental illness have to tell the difference between real and imagined events (McGuigan, 1966; Mintz & Alpert, 1972), findings like those of Newton (1990) suggest that people need not be ill to overestimate the extent to which their private experiences are public. Also, to the extent that people remember imagined experiences as externally produced, they may assume that others are likely to remember them as well. In either case, the confusion could promote the unjustified imputation of one's own knowledge to others.

Reality monitoring, as distinct from reality testing, also relates to the question of how people estimate what others know. Evidence suggests that in evaluating the veridicality of others' memories, people use the same kinds of indicants that they use when evaluating the veridicality of their own (Johnson & Suengas, 1989; Schooler, Gerhard, & Loftus, 1986). These include the greater sensory and contextual content of memories of actual events and their tendency to fade less rapidly over time (Johnson, 1988; Johnson, Foley, Suengas, & Raye, 1988; Suengas & Johnson, 1988). People's application of the same criteria to others' memories as they use for their own also illustrates the use of one's own experience as the basis for inferring that of others.

Jacoby, Bjork, and Kelley (1994) pointed out that communication problems often stem from the failure of speakers to make adequate allowance for the differences between their subjective experiences and those of their listeners. Hayes, Flower, Schriver, Stratman, and Carey (1987) noted that the uncritical assumption that one's intended audience shares one's own perspectives is a common problem among inexperienced writers. Griffin et al. (1990) spoke of the inability of people to recognize the need for, or their unwillingness to make, "adequate inferential allowance for the fact that their construals of relevant social situations (i.e., inferences about, constructions of, or images of these situations) are neither isomorphic with reality nor universally shared by other actors" (p. 1138).

There is an obvious asymmetry associated with the tendency to impute one's own knowledge to others. One can impute what one knows to others much more precisely than one can impute to them what one does not know, simply because, for the most part, one does not know what it is that one does not know. I can know that I do not know a specific fact; I know, for example, that I do not know the atomic weight of manganese, the lifetime batting average of Yogi Berra, and many other facts. But I know that these facts are facts; I know that the answers to these questions exist, and I probably know how to find some of them. But there are countless facts that I do not even realize exist, answers to questions that I am not able to ask. This severely limits my ability to impute to others knowledge that I do not have. I may be sure that a crystallographer knows a lot that I do not know about crystals, but I cannot know in any detail what it is that the crystallographer knows that I do not. In short, one's knowledge of the limitations of one's knowledge is itself necessarily limited by one's knowledge: The less one knows, the less one can be aware of how much one does not know.
My point in this article is not that, in using one's own knowledge as a basis for constructing a model of another's knowledge, one is bound to overestimate what another person knows, but rather that one is likely to overestimate the probability that another has specific knowledge that one has. It may also result in an underestimation of the probability that the other has specific knowledge that one lacks. Indeed, it seems highly likely that this will be the case.

The False Consensus Effect


Evidence of a tendency to see oneself as more representative of others (in various ways) than one really is has been obtained in several studies of what has been called the false consensus effect (Goethals, Allison, & Frost, 1979; Marks & Miller, 1987; Mullen et al., 1985; L. Ross et al., 1977). From the perspective of this article, the false consensus effect and other manifestations of people's tendency to see their own knowledge, beliefs, attitudes, and actions as more representative of those of others than they really are, are examples of a useful heuristic being applied in a less than optimal way.

People are likely, for example, to overestimate the amount of general consensus on opinions that they themselves hold and to underestimate the amount on opinions that differ from their own (Crano, 1983; Kassin, 1979; L. Ross et al., 1977). This point is illustrated by the finding that U.S. voters are likely to overestimate the popularity of their favored candidate in a presidential election (Granberg & Brent, 1983) or to overestimate the extent to which the positions of favored candidates correspond to their own (Brent & Granberg, 1982; Granberg & Brent, 1974; Page & Jones, 1979). Sniderman, Brody, and Tetlock (1991) suggested that "many voters derive their sense of what candidates think should be done partly from their own beliefs about what should be done" (p. 168). Mullen (1983) reported overestimation of agreement with one's own opinions in a situation in which there were tangible incentives for estimating accurately (the TV game show "Play the Percentages"). From this he concluded that the bias is more likely to be the result of perceptual distortion than of motivational factors.

The tendency to overestimate the commonality of one's opinions appears to be stronger for opinions that matter than for those that do not. Crano (1983) had college students evaluate a plan for a tuition surcharge and estimate the number of other students who would share their evaluation. Three different plans were used: one affecting only 1st- and 2nd-year students, one affecting only 3rd- and 4th-year students, and one affecting all students. Estimates of the number of students who would agree with one's evaluation were higher when students evaluated plans that would affect themselves than when they evaluated plans that would not.

There is some evidence that people may also sometimes misjudge what others perceive, as a consequence of assuming that others will perceive what they themselves do in a situation. For example, pedestrians tend to overestimate the ability of drivers to see them at night (Allen, Hazlett, Tacker, & Graham, 1969; Shinar, 1984). Often, a pedestrian can see better than a driver in a nighttime road environment because the former's eyes are adapted to a lower level of ambient illumination; by failing to take this difference into account, the pedestrian overestimates what a driver can see (Leibowitz, 1996).

Although the emphasis in this article is on the overimputation of one's own knowledge to others, it is also possible to err in the other direction, as has been pointed out by Dawes (1989). Dawes did not deny the possibility of a false consensus effect whereby people sometimes take themselves to be more representative of others than they are, but he pointed out the possibility also of a contrary failure of consensus effect, which is a failure to appreciate the diagnosticity of one's own beliefs and behavior with respect to those of others. He argued that failure to assume consensus, the uncritical assumption of dissimilarity, can have undesirable consequences (Dawes, Singer, & Lemmons, 1972), and he noted that there are data indicating that people could have improved their performance in some experimental situations by relying more on their own positions to infer those of others than they actually did (Hoch, 1987). Underimputation of one's own knowledge to others is a possibility, then, that should not be overlooked; however, judging from the literature in the aggregate, overimputation seems to be the more common problem.
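The diagnosticity Dawes had in mind can be given a simple Bayesian illustration. In the sketch below, the prior and the majority share are assumed numbers chosen purely for illustration; the point is only that one's own opinion, treated as a sample of one, supports a better-than-chance inference about the majority view:

# Treat one's own opinion as a single random draw from the group.
# Prior: A or B is equally likely to be the majority view, and the
# majority side is assumed (for illustration) to hold a 70% share.
p_major = 0.70      # share held by whichever side is the majority
prior_a = 0.50      # prior probability that A is the majority view

# I observe that I myself hold opinion A.
likelihood_a = p_major        # P(I hold A | A is the majority)
likelihood_b = 1 - p_major    # P(I hold A | B is the majority)

posterior_a = (likelihood_a * prior_a) / (
    likelihood_a * prior_a + likelihood_b * (1 - prior_a)
)
print(f"P(A is the majority | I hold A) = {posterior_a:.2f}")  # 0.70
# Ignoring one's own opinion leaves the estimate at the .50 prior; the
# failure of consensus is discarding this diagnostic information, just
# as the false consensus effect is overweighting it.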

The Illusion of Simplicity


Jacoby and Kelley (1987; Kelley & Jacoby, 1996) found that people's judgments of anagram difficulty may be influenced by recent exposure to solution words. Participants who had recently read the solution words judged the anagrams to be objectively easier than did participants who had not read them. This is an example of what has been referred to as the illusion of simplicity, whereby one mistakenly judges something to be simple only because it is familiar. Kelley and Jacoby (1996) also found that when participants were given anagram solutions, and therefore did not have the experience of finding them themselves, their judgments of difficulty were less accurate as predictors of the rank ordering of actual difficulty, as indicated by others' performance. They concluded that presentation of solution words blocked participants' direct experience of item difficulty and forced them to use a different, and more analytic, basis for making the judgments, and that basis was less effective than direct experience.

Familiarity with subject matter can mislead one into judging text to be clear and easily comprehended (Glenberg & Epstein, 1985, 1987; Glenberg, Sanocki, Epstein, & Morris, 1987; Glenberg, Wilkinson, & Epstein, 1982). Familiarity, in this case, is mistaken for comprehensibility. Kelley (1999) had people judge the reading level that was most appropriate for each of several sentences. The participants had read, and explained, some of the sentences on the day preceding that on which they made their reading-level judgments. Sentences that had previously been read and explained were judged to be appropriate for a lower grade than were those that had not been read before. Kelley interpreted the results to mean that people judged grade level on the basis of their own ease of comprehending the sentences, and that sentences being read for the second time were more easily comprehended than were those that had not been seen before, a confusion of ease of comprehension due to personal familiarity with objective simplicity.

In a second experiment in the same study, Kelley (1999) had participants read some sentences twice on the day before they judged the difficulty of those sentences and that of others not previously seen. The participants judged the previously seen sentences to be appropriate for a lower grade level than the grade for which they considered the sentences they had not seen before to be appropriate. The effect of prior reading was mitigated by having some participants paraphrase each sentence before rating it. Kelley attributed the debiasing effect of paraphrasing to the relatively deep analysis and integration with world knowledge that paraphrasing requires; presumably this makes the difficulty of a sentence more apparent and therefore would override, at least partially, the effects of simply having read some of the sentences the preceding day.

The Curse of Knowledge


Camerer, Loewenstein, and Weber (1989) have shown how a tendency of well-informed agents to impute their knowledge to less informed agents can work to the disadvantage of the better informed agents in market situations. These investigators demonstrated this curse of knowledge, a term they credit to Hogarth, in situations calling for people who were relatively well informed with respect to some economic variables to predict what other, less informed people would forecast (the forecasts had to do with corporate earnings). To maximize performance, the well-informed forecasters should have discounted completely the knowledge they had that the less informed forecasters did not have in predicting the forecasts of the latter, but they discounted it only partially.

Keysar et al. (1995) conducted an experiment on a simulated purchase of a firm that was being offered for sale. Observers made predictions about the behavior of the buyer. Experimental variables were the seller's asking price, the true value of the firm, and the seller's agent's belief about the value of the firm. Keysar et al. found the typical curse-of-knowledge effect in that observers predicted the buyer would be more willing to purchase when they (but not the buyer) knew the real worth of the firm to be close to the asking price than when they knew it to be considerably lower. A similar effect was obtained on the basis of observers' knowledge that the seller's agent believed the true value to be close to the asking price, even when they (the observers) knew that belief to be wrong. Keysar et al. concluded that the curse of knowledge applies not only to knowledge of states of affairs but to knowledge of states of mind; that is, privileged knowledge of beliefs can have the same effect as privileged knowledge of facts, causing people to act, in both cases, as though others had access to that knowledge.

Evidence of imputation of one's own expertise to others comes from a study by Hinds (1999), who found that experts in performing a task were more likely than those with only an intermediate level of expertise to underestimate the time novices would take to complete the task. The same finding was obtained whether the participants' expertise came from on-the-job experience or was developed for experimental purposes in a laboratory setting. In Hinds's study, experts proved to be resistant to debiasing techniques that were intended to reduce the tendency to underestimate how difficult novices would find a task to be.

It is easy to see how the tendency to overimpute one's own knowledge to others can be problematic in specific contexts. Camerer (1992) noted, for example, that teaching can be adversely affected if teachers underestimate the difference between their own knowledge and that of their students, or if the designers of high-technology products underestimate how much difficulty other people will have in learning to use those products. Human-factors psychologists know that the worst judges of how easy people will find it to use devices and procedures are those who designed them, because they cannot put themselves in the position of one who is experiencing the device or procedure for the first time.

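The Camerer et al. finding can be summarized as incomplete discounting. The sketch below is a hypothetical illustration of that summary, not the authors' model; the weighting scheme and all numbers are assumptions made for concreteness:

def predicted_forecast(informed_value, uninformed_value, w):
    """Prediction of an uninformed agent's forecast by an informed one.

    w = 0 means the informed agent discounts private knowledge
    completely (the normatively correct policy in the Camerer et al.
    task); w > 0 is a curse-of-knowledge effect, with private knowledge
    leaking into the prediction. The weighting scheme itself is an
    illustrative assumption.
    """
    return w * informed_value + (1 - w) * uninformed_value

# Hypothetical numbers: insiders know earnings will be 10.0; the
# publicly available information supports a forecast of 6.0.
for w in (0.0, 0.3, 0.6):
    print(f"w = {w:.1f} -> predicted forecast: "
          f"{predicted_forecast(10.0, 6.0, w):.1f}")
# Camerer et al.'s well-informed forecasters behaved as if w were well
# above zero: they discounted their private knowledge only partially.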
Limitations of Ability to Assess One's Own Knowledge


Fischhoff (1975) and his colleagues (Fischhoff & Beyth-Marom, 1975; Slovic & Fischhoff, 1977) discovered a systematic failure of people to assess their own knowledge accurately that has become known as the hindsight bias and that has stimulated much follow-up research. Fischhoff (1975) had people judge the likelihoods of specified alternative outcomes of various historical events on the basis of written descriptions of those events. Some participants were informed of the actual outcomes before making their likelihood judgments; others were not. Those in the former category assigned higher "before-the-fact" likelihoods to the actual outcomes than did those in the latter category; the informed participants also judged information that pointed in the direction of the actual outcomes to be more relevant than information that pointed to other possibilities. Numerous investigators have subsequently reported findings that support the general notion that, in hindsight, people are prone to overestimate the degree to which they anticipated a future event before it occurred and the probability that they would have given the correct answer to a question had they been asked (Arkes, Faust, Guilmette, & Hart, 1988; Arkes, Wortmann, Saville, & Harkness, 1981; Campbell & Tesser, 1983; Conway, 1990; Fisher & Budescu, 1994; Hoch & Loewenstein, 1989; Leary, 1982; T. R. Mitchell & Kalb, 1981; Snyder & Uranowitz, 1978; Synodinos, 1986; Wood, 1978).

It seems unlikely that the hindsight bias has a simple single-cause explanation. Both motivational and cognitive factors may be involved. Claims that one knew something all along may reflect, in part, a desire to appear more knowledgeable than one actually is. It may also be, however, that people sometimes find it impossible to distinguish between what they know about a subject, having recently received some information about it, and what they knew about the subject before receiving that information. Jacoby et al. (1994) argued, for example, that it is impossible to tell how easy one would have found it to solve an anagram problem if given the solution before having a chance to try to generate it. They also pointed to evidence, obtained by Nelson and Dunlosky (1991; Dunlosky & Nelson, 1992), that when people who are learning lists of paired associates are shown a cue-target pair, they cannot say with much accuracy whether they would have been able to produce the target if given only the cue; having both the cue and the target before them precludes basing a judgment on the experience of trying to think of the target in response to the cue.

The hindsight bias can be seen as a special case of the tendency to overimpute one's own knowledge to others. In this case, the "other" is oneself at an earlier time. One assumes that one already had knowledge that one, in fact, only recently acquired; it appears that after having acquired a bit of knowledge, one may, in many instances, find it hard to imagine not having had it before. Whatever makes it difficult for one to imagine not having known something may also make it difficult for one to imagine someone else not having that knowledge. More generally, anything that contributes to overestimating what one knows might be expected to inflate one's estimate of what others know as well. Arguably the single most frequent finding in studies of how accurately people judge their own knowledge (calibration studies) is that people are more likely to overestimate than to underestimate what they know.
The literature on this topic has been reviewed many times (e.g., Keren, 1991; Lichtenstein, Fischhoff, & Phillips, 1982; O'Connor, 1989; Wallsten & Budescu, 1983). Of particular relevance in the present context is the fact that when people are unable to retrieve a specific item of information from memory (e.g., the answer to a general-knowledge question), their degree of certainty that they know the answer (feeling of knowing) can be increased, independently of whether they are eventually able to produce it, by such manipulations as increasing their exposure to retrieval cues (Koriat & Lieblich, 1977; Metcalfe, 1986; Nelson et al., 1984; Reder, 1987; Reder & Ritter, 1992; Schwartz & Metcalfe, 1992). Koriat and Lieblich (1977), for example, found that the feeling of knowing could be strengthened simply by repeating questions without repeating the answers. Similarly, Reder (1987) increased the feeling of knowing the answers to general-knowledge questions, without increasing the probability of recall or recognition of the answers, by preexposing participants to words used to compose the questions. With a paired-associate recall task, Schwartz and Metcalfe (1992) increased participants' feeling of knowing target words by priming the cue words. To the extent that people use what they know, or think they know, as a primary basis for inferring what others know, such effects are expected to generalize to people's estimates of what others know.

The General Problem of Inadequate Consideration of Alternatives


Several investigators of human judgment, especially in social contexts, have demonstrated that, once having identified a plausible answer to a question or imagined a scenario for the future, people often fail to consider possible alternative answers or scenarios (Griffin et al., 1990; Hoch, 1985; Shaklee & Fischhoff, 1982). Such a failure to consider alternatives can lead to overconfidence in one's inferences or predictions and to misconstruals of situations and misattributions of actions.

Inadequate consideration of alternatives applies to the problem of judging what other people know. Imputing one's own knowledge to others is often useful, especially as a default point of departure for developing an individuated model of what a specific other person knows, but it is sometimes done uncritically, with the result that people are assumed to have knowledge that they lack, and this can impede effective communication. If one generally tends to assume that a random other person knows a fact that one knows oneself and, having made that assumption in a particular case, gives insufficient consideration to reasons why it might be false in that case, one is likely, as a general rule, to overimpute one's own knowledge to others. Overimputation of one's own knowledge to others may be seen, then, in part at least, as one manifestation of a general tendency to give less than adequate attention to alternatives to assumptions we find it natural to make.

To recap, recognizing that people, and especially people from the same culture, have much knowledge in common, it is reasonable to use one's own knowledge as the basis for a default model of what a random other person knows. But it is not reasonable for an individual to assume that he or she knows precisely what everyone else knows, so although one's own knowledge may be an effective point of departure for constructing a model of what a specific other person knows, the model must be refined to take into account ways in which either or both of the knowledge bases may be special. A conclusion that comes out of several investigations of how what people believe about what other people know depends on what they themselves know is that people tend to overimpute to others what they know themselves; that is, they tend not to correct sufficiently for the idiosyncrasies in their own knowledge bases, but to overestimate the probability that if they know something, a random other person will know it as well.

Coping With the Complications


What can be done to improve the accuracy of people's estimates of what other people know? In particular, what might be done to countermand the tendency to overimpute one's own knowledge to others? What follows is largely conjectural, because little research on the question has been done. Work on the related question of whether the perspective-taking ability of young children can be improved through training has shown some promise (Burns & Brainerd, 1979; Iannotti, 1978). Efforts to enhance children's social problem-solving skills have included elements intended to increase their awareness that other people's feelings and preferences may differ from their own (Spivack & Shure, 1974) and to increase sensitivity to cues to emotional states (Greenberg, Kusche, Cook, & Quamma, 1995). Little attention has been given, however, to the possibility of improving the accuracy with which people estimate what others know. I believe the following conjectures are consistent with the research that is relevant, however, and I offer them as possibilities for experimental exploration.

Correcting for Hindsight Bias


The hindsight bias has proved to be difficult to eliminate. Simply warning people of its occurrence and urging them to avoid it has not proved to be very effective (Fischhoff, 1977, 1980; Wood, 1978). This finding lends credence to an assimilation explanation of the bias, according to which new knowledge is assimilated with the old, and what one believed before the new knowledge was received is no longer retrievable. Hasher, Attig, and Alba (1981) have shown, however, that knowledge states that existed prior to receipt of new information do remain accessible after the new information has been received. Noting that their participants found it difficult to recover their original state of mind after receiving new information that was subsequently disclaimed as inaccurate, Hasher et al. concluded that one must exert unusual effort to retrieve preupdate information. Nevertheless, their demonstration that such an effort can be successful suggests that development of effective methods for correcting the hindsight bias may be possible.

Reflecting on One's Own Knowledge and Knowledge Generally


Overestimation of what other people know may stem, in some instances, from lack of attention to the complexity of one's own knowledge. When one knows something, and especially if one has known it for a long time, one is likely to take that knowledge for granted and to view it as simpler and more straightforward than it will appear to someone who encounters it for the first time. The idea that reflecting on what one knows, so as to bring its complexity into focus, can help gets some support from Kelley's (1999) finding that the effect of prior reading on the judged difficulty of sentences can be mitigated by having people paraphrase each sentence before rating it. Having to paraphrase a sentence forces one to process its meaning more deeply than one otherwise might and thus, perhaps, makes one more aware of the supporting knowledge that is needed to understand it.

Becoming More Sensitive to Uncertainty in a General Sense


Several investigators have found that the tendency to be overconfident of answers to general-knowledge questions or other types of judgments can be countermanded, within limits, if one is required to evaluate or justify one's answers or views, or to generate explicit reasons why they could be wrong (Arkes, Christensen, Lai, & Blumer, 1987; Fischhoff, 1977; Fischhoff & MacGregor, 1982; Hoch, 1984, 1985; Koriat, Lichtenstein, & Fischhoff, 1980; May, 1986; Sniezek, Paese, & Switzer, 1990; Tetlock & Kim, 1987). This finding, coupled with the considerable body of evidence that people typically do not spontaneously try hard to think of alternatives to beliefs or points of view that they currently hold (Nickerson, 1998), suggests the reasonableness of a continuing search for practical techniques for heightening people's awareness of the uncertainty of their own knowledge. Any assumption that countermanding the tendency to be overconfident of one's own knowledge is easy, however, is tempered by several attempts to do so that have met with very limited success, if any (Ferrell & McGoey, 1980; Fischer, 1982; Seaver, von Winterfeldt, & Edwards, 1978). Simply informing people of the tendency and asking them to avoid it appears not to work (Lichtenstein & Fischhoff, 1980).

Becoming More Sensitive to the Privacy of Subjective Experience


Jacoby et al. (1994) argued that "people are surprisingly insensitive to the ways their construal of a particular situation is idiosyncratic" (p. 59). The solution to this problem seems to be the cultivation of a keener awareness of the privacy of one's own subjective experience and of the fact that it may be less than perfectly indicative of the experience of others. There is a need for caution here, however, against the possibility of pushing the pendulum too far in the opposite direction. Presumably one's own subjective experiences are reasonably representative of those of others in the same situations and do provide a good basis for inferring how difficult others will find it to deal with specific challenges. It is often "the failure to realize that one's own response is sufficiently like others' to make it a cue to theirs," Dawes (1989) argued, "that constitutes egoism" (p. 3). The objective should be to get people to be aware that, although their subjective experience is a good point of departure for inferring that of others, it is important also to be more sensitive than we appear typically to be to ways in which it might differ.

Summary
I propose that to construct a model of what a specific other person knows, one, in effect, begins with a model of what one oneself knows and, by adjusting that model to take account of ways in which one considers one's own knowledge to be unusual, produces a default model of what a random other person knows. One then adjusts this default model of a random other person's knowledge to take account of what one knows or can infer about how the specific other's knowledge is likely to be unusual. Finally, one continues to adjust the working model of the specific other's knowledge to reflect what one learns regarding the need for further adjustment on a continuing basis. The process is essentially that of an anchoring-and-adjustment heuristic (Carlson, 1990; Tversky & Kahneman, 1974) in which one's model of one's own knowledge is the anchor. I do not claim that the work just reviewed substantiates the conjecture in a conclusive way, but I believe it establishes its plausibility and lends credence to it.
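Stated procedurally, the conjecture might be rendered as in the sketch below. This is a schematic restatement only; the function, its names, and the representation of knowledge as a set of propositions are all illustrative assumptions rather than claims about mechanism:

def model_of_anothers_knowledge(own_knowledge, own_idiosyncrasies,
                                specifics_has, specifics_lacks,
                                feedback):
    """Schematic restatement of the proposed anchoring-and-adjustment
    process; all names are invented, and knowledge is represented,
    purely for illustration, as a set of propositions."""
    # Anchor: one's model of one's own knowledge.
    model = set(own_knowledge)

    # Adjustment 1: remove what one recognizes as unusual about one's
    # own knowledge, yielding a default model of a random other person.
    model -= set(own_idiosyncrasies)

    # Adjustment 2: fold in what one knows or can infer about the
    # specific other (e.g., that Jane the archeologist knows archeology).
    model |= set(specifics_has)
    model -= set(specifics_lacks)

    # Adjustment 3: keep revising as the interaction supplies evidence.
    for proposition, other_knows_it in feedback:
        if other_knows_it:
            model.add(proposition)
        else:
            model.discard(proposition)
    return model

# Hypothetical use: a specialist talking with a new acquaintance.
print(model_of_anothers_knowledge(
    own_knowledge={"local geography", "crystallography"},
    own_idiosyncrasies={"crystallography"},   # one's specialty, not common
    specifics_has={"archeology"},             # inferred from her profession
    specifics_lacks=set(),
    feedback=[("local geography", False)],    # she turns out to be new in town
))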

This process is quite useful. For many purposes, the assumption that one is representative of people in general with respect to how one behaves, what one knows, and what one believes provides one with a good basis for understanding other people. As the results considered here show, however, if not adequately qualified, it can also lead to misjudgments of various sorts. A common problem appears to be a tendency people have to overimpute, that is, to be somewhat too uncritical in assuming that others know what they themselves know in particular instances. In part, this may stem from a tendency to take specific knowledge for granted and to forget what it is like to be without it.

People make allowances for others whom they perceive to have disabilities or special needs of various sorts, but even here there is a question as to whether these allowances are commensurate with the difficulties involved. People with normal hearing may understand, for example, that a child who is born deaf will find it more difficult to learn to speak intelligibly than will a child who hears normally. However, one may wonder whether the average hearing person has anything close to an accurate appreciation of precisely how difficult the task is for the deaf child. A highly intelligent person may realize that a person with significantly lower intelligence is likely to find it more difficult to learn, but may easily underestimate how much more difficult it is and attribute slow progress to lack of effort. The problem is illustrated, perhaps, by a comment made by Poincaré (reported in Henle, 1962) regarding his own difficulty in understanding why anyone should find mathematics abstruse:

How does it happen that there are people who do not understand mathematics? ... There is nothing mysterious in the fact that everyone is not capable of discovery. ... But what does seem most surprising, when we consider it, is that anyone should be unable to understand a mathematical argument at the very moment it is stated to him. (p. 35)

Easy for Poincaré to say.


References
Allen, M. J., Hazlett, R. D., Tacker, H. L., & Graham, B. V. (1969). Actual pedestrian visibility and the pedestrian's estimate of his own visibility. Paper presented at the 13th Annual Conference of the American Association for Automotive Medicine, Minneapolis, MN.
Archer, R. L. (1980). Self-disclosure. In D. M. Wegner & R. R. Vallacher (Eds.), The self in social psychology (pp. 183-205). New York: Oxford University Press.
Arkes, H. R., Christensen, C., Lai, C., & Blumer, C. (1987). Two methods of reducing overconfidence. Organizational Behavior and Human Decision Processes, 39, 133-144.
Arkes, H. R., Faust, D., Guilmette, T. J., & Hart, K. (1988). Elimination of the hindsight bias. Journal of Applied Psychology, 73, 305-307.
Arkes, H. R., Wortmann, R. L., Saville, P. D., & Harkness, A. R. (1981). Hindsight bias among physicians weighing the likelihood of diagnosis. Journal of Applied Psychology, 66, 252-254.

Astington, J. W. (1993). The child's discovery of mind. Cambridge, MA: Harvard University Press.
Astington, J. W., & Gopnick, A. (1991). Theoretical explanations of children's understanding of the mind. British Journal of Developmental Psychology, 9, 7-33.
Astington, J. W., Harris, P. L., & Olson, D. (Eds.). (1988). Developing theories of mind. Cambridge, England: Cambridge University Press.
Atkinson, J. W., & Huston, T. (1984). Sex role orientation and division of labor early in marriage. Journal of Personality and Social Psychology, 46, 330-345.
Baldwin, J. M. (1906). Social and ethical interpretations of mental development. New York: Macmillan.
Ball, S. B., Bazerman, M. H., & Carroll, J. S. (1991). An evaluation of learning in the bilateral winner's curse. Organizational Behavior and Human Decision Processes, 48, 1-22.
Baron-Cohen, S. (1989). The autistic child's theory of mind: A case of specific developmental delay. Journal of Child Psychology and Psychiatry, 30, 285-297.
Baron-Cohen, S. (1995). Mindblindness. Cambridge, MA: MIT Press.
Bazerman, M. H., & Carroll, J. S. (1987). Negotiator cognition. In L. L. Cummings & B. M. Staw (Eds.), Research in organizational behavior (pp. 247-288). Greenwich, CT: JAI Press.
Bazerman, M. H., & Neale, M. A. (1982). Improving negotiation effectiveness under final offer arbitration: The role of selection and training. Journal of Applied Psychology, 67, 543-548.
Bennett, M. (1993). Introduction. In M. Bennett (Ed.), The development of social cognition: The child as psychologist. New York: Guilford Press.
Blake, M. (1973). Prediction of recognition when recall fails: Exploring the feeling-of-knowing phenomenon. Journal of Verbal Learning and Verbal Behavior, 12, 311-319.
Brandt, M. M. (1978). Relations between cognitive role-taking performance and age. Developmental Psychology, 11, 206-213.
Brent, E., & Granberg, D. (1982). Subjective agreement with the presidential candidates of 1976 and 1980. Journal of Personality and Social Psychology, 42, 393-403.
Brown, R. (1965). Social psychology. New York: Free Press.
Brown, R., & McNeill, D. (1966). The "tip of the tongue" phenomenon. Journal of Verbal Learning and Verbal Behavior, 5, 325-337.
Burns, S. M., & Brainerd, C. J. (1979). Effects of constructive and dramatic play on perspective taking in very young children. Developmental Psychology, 15, 512-521.
Camerer, C. F. (1992). The rationality of prices and volume in experimental markets. Organizational Behavior and Human Decision Processes, 51, 237-272.
Camerer, C. F., Loewenstein, G., & Weber, M. (1989). The curse of knowledge in economic settings: An experimental analysis. Journal of Political Economy, 97, 1232-1254.
Campbell, J. D., & Tesser, A. (1983). Motivational interpretations of hindsight bias: An individual difference analysis. Journal of Personality, 51, 605-620.
Carlson, B. W. (1990). Anchoring and adjustment in judgments under risk. Journal of Experimental Psychology: Learning, Memory, and Cognition, 16, 655-676.
Catrambone, R., Beike, D., & Niedenthal, P. (1996). Is the self-concept a habitual referent in judgments of similarity? Psychological Science, 7, 158-163.
Chandler, M. (1988). Doubt and developing theories of mind. In J. W. Astington, P. L. Harris, & D. R. Olson (Eds.), Developing theories of mind (pp. 387-413). New York: Cambridge University Press.
Chandler, M. J., & Greenspan, H. D. (1972). Ersatz egocentrism: A reply to H. Borke. Developmental Psychology, 7, 145-156.
Chapman, G. B., & Bornstein, B. H. (1996). The more you ask for, the more you get: Anchoring in personal injury verdicts. Applied Cognitive Psychology, 10, 519-540.

754

NICKERSON syndrome in subjective probability forecasting. Organizational Behavior and Human Performance, 13, 1-16. Fischhoff, B. (1975). Hindsight *= foresight: The effect of outcome knowledge on judgment under uncertainty. Journal of Experimental Psychology: Human Perception and Performance, 1, 288-299. Fischhoff, B. (1977). Perceived informativeness of facts. Journal of Experimental Psychology: Human Perception and Performance, 3, 349358. Fischhoff, B. (1980). For those condemned to study the past: Reflective on historical judgment. In R. A. Schwede & D. W. Fiske (Eds.), New directions for methodology of behavioral science: Fallible judgments in behavioral research (pp. 79-93). San Francisco: Jossey-Bass. Fischhoff, B., & Beyth-Marom, R. (1975). I knew it would happen: Remembered probabilities of once-future things. Organizational Behavior and Human Performance, 13, 1-16. Fischhoff, B., & MacGregor, D. (1982). Subjective confidence in forecasts. Journal of Forecasting, I, 155-172. Fisher, I., & Budescu, D. V. (1994). Desirability and hindsight biases in predicting results of a multi-party election. In J. P. Caverni, M. BarHillel, H. Barron, & H. Jungermann (Eds.), Contributions to decision making (pp. 193-211). Amsterdam: North Holland. Fiske, S. T., & Taylor, S. E. (1991). Social cognition (2nd ed.). New York: Random House. Flavell, J. H. (1977). Cognitive development. Englewood Cliffs, NJ: Prentice Hall. Flavell, J. H. (1992). Perspectives on perspective taking. In H. Beilin & P. Pufall (Eds.), Piaget's theory: Prospects and possibilities (pp. 101-139). Hillsdale, NJ: Erlbaum. Flavell, J. H., Botkin, P. T., Fry, C. L., Jr., Wright, J. W., & Jarvis, P. E. (1968). The development of role-taking and communication skills in children (pp. 3-33). New York: Wiley. Flavell, J. H., & Miller, P. H. (1998). Social cognition. In W. Damon (Series Ed.) and D. Kuhn & S. Siegler (Vol. Eds.), Handbook of child psychology (Vol. 2, pp. 851-898). New York: Wiley. Flavell, J. H., & Wellman, H. M. (1977). Metamemory. In R. V. Kail, Jr., & J. W. Hagen (Eds.), Perspectives on the development of memory and cognition. Hillsdale, NJ: Erlbaum. Friedrich, J. (1993). Primary error detection and minimization (PEDMIN) strategies in social cognition: A reintrerpretation of confirmation bias phenomena. Psychological Review, 100, 298-319. Funder, D. C. (1987). Errors and mistakes: Evaluating the accuracy of social judgment. Psychological Bulletin, 101, 79-90. Fussell, S. R., & Krauss, R. M. (1989a). The effects of intended audience on message production and comprehension: Reference in a common ground framework. Journal of Experimental Social Psychology, 25, 203-219. Fussell, S. R., & Krauss, R. M. (1989b). Understanding friends and strangers: The effects of audience design on message comprehension. European Journal of Social Psychology, 19, 509526. Fussell, S. R., & Krauss, R. M. (1991). Accuracy and bias in estimates of others' knowledge. European Journal of Social Psychology, 21, 445 454. Fussell, S. R., & Krauss, R. M. (1992). Coordination of knowledge in communication: Effects of speakers' assumptions about what others know. Journal of Personality and Social Psychology, 62, 378-391. Gauvain, M. (1998). Culture, development, and theory of mind: Comment on Lillard (1998). Psychological Bulletin, 123, 37-42. Gigone, D., & Hastie, R. (1993). The common knowledge effect: Information sharing and group judgment. Journal of Personality and Social Psychology, 65, 959-974. Glenberg, A. M., & Epstein, W. (1985). 
Calibration of comprehension. Journal of Experimental Psychology: Learning, Memory, and Cognition, 11, 702-718.

Chi, M. T. H., Feltovich, P. J., & Glaser, R. (1981). Categorization and representation of physics problems by experts and novices. Cognitive Science, 5, 121-152. Chi, M. T. H., Glaser, R., & Rees, E. (1982). Expertise in problem solving. In R. Stemberg (Ed.), Advances in the psychology of human intelligence (Vol. 1, pp. 7-75). Hillsdale, NJ: Erlbaum. Clark, H. H. (1992). Arenas of language use. Chicago: University of Chicago Press. Clark, H. H., & Carlson, T. B. (1981). Context for comprehension. In J. Long & A. Baddeley (Eds.), Attention and performance IX (pp. 313330). Hillsdale, NJ: Erlbaum. Clark, H. H., & Haviland, S. E. (1977). Comprehension and the given-new contract. In R. O. Freedle (Ed.), Discourse production and comprehension (pp. 1-40). Norwood, NJ: Ablex. Clark, H. H., & Marshall, C. R. (1981). Definite reference and mutual knowledge. In A. H. Joshi, B. Webber, & I. A. Sag (Eds.), Elements of discourse understanding (pp. 10-63). Cambridge, England: Cambridge University Press. Clark, H. H., & Murphy, G. L. (1982). Audience design in meaning and reference. In J.-F. L. Ny & W. Kintsch (Eds.), Language and comprehension (pp. 287-299). Amsterdam: North Holland. Cole, M., & Engestrom, Y. (1993). A cultural-historical approach to distributed cognition. In G. Salomon (Ed.), Distributed cognitions: Psychological and educational considerations (pp. 1-46). New York: Cambridge University Press. Colvin, C. R., Vogt, D., & Ickes, W. (1997). Why do friends understand each other better than strangers do? In W. Ickes (Ed.), Empathic accuracy (pp. 169-193). New York: Guilford Press. Conway, M. (1990). On bias in autobiographical recall: Retrospective adjustments following discontinued expectation. Journal of Social Psychology, 130, 183-189. Cosmides, L. (1989). The logic of social exchange: Has natural selection shaped how humans reason? Studies with the Wason selection task. Cognition, 31, 187-276. Crano, W. D. (1983). Assumed consensus of attitudes: The effect of vested interest. Personality and Social Psychology Bulletin, 9, 597-608. Cronbach, L. J. (1955). Processes affecting scores on "understanding others" and "assumed similarity." Psychological Bulletin, 52, 177-193. Davidson, D. (1982). Paradoxes of irrationality. In R. Wollheim & J. Hopkins (Eds.), Philosophical essays on Freud (pp. 289-305). Cambridge, England: Cambridge University Press. Dawes, R. M. (1989). Statistical criteria for establishing a truly false consensus effect. Journal of Experimental Social Psychology, 25, 1-17. Dawes, R. M., Singer, D., & Lemmons, F. (1972). An experimental analysis of the contrast effect and its implications for intergroup communication and the indirect assessment of attitude. Journal of Personality and Social Psychology, 21, 281-294. Dunlosky, J., & Nelson, T. O. (1992). Importance of the kind of cue for judgments of learning (JOL) and the delayed-JOL effect. Memory and Cognition, 20, 374-380. Dunning, D., & Cohen, G. L. (1992). Egocentric definitions of traits and abilities in social judgment. Journal of Personality and Social Psychology, 63, 341-355. Dunning, D., Perie, M., & Story, A. L. (1991). Self-serving prototypes of social categories. Journal of Personality and Social Psychology, 61, 957-968. Eisenberg, N., Murphy, B. C., & Shepard, S. (1997). The development of empathic accuracy. In W. Ickes (Ed.), Empathic accuracy (pp. 73-116). New York: Guilford Press. Ferrell, W. R., & McGoey, P. J. (1980). A model of calibration for subjective probabilities. 
Organizational Behavior and Human Performance, 26, 32-53. Fischer, G. W. (1982). Scoring-rule feedback and the overconfidence

IMPUTING KNOWLEDGE Glenberg, A. M., & Epstein, W. (1987), Inexpert calibration of comprehension. Memory and Cognition, 151, 84-93. Glenberg, A. M., Sanocki, T., Epstein, W., & Morris, C. (1987). Enhancing calibration of comprehension. Journal of Experimental Psychology: General, 116, 119-136. Glenberg, A. M., Wilkinson, A. C., & Epstein, W. (1982). The illusion of knowing: Failure in the self-assessment of comprehension. Memory and Cognition, W, 597-602. Glucksberg, S., Krauss, R. M., & Higgins, E. I. (1975). The development of referential communication skills. In F. D. Horowitz, E. M. Hetherington, S. Scarr-Salapek, & G. M. Siegel (Eds.), Review of child development research (Vol. 4, pp. 305-345). Chicago: University of Chicago Press. Goethals, G. F., Allison, S. J., & Frost, M. (1979). Perception of the magnitude and diversity of social support. Journal of Experimental Social Psychology, 15, 570-581. Gopnik, A. (1993). How we know our minds: The illusion of first-person knowledge of intentionality. Behavioral and Brain Sciences, 16, 1-14. Gopnik, A., & Astington, J. W. (1988). Children's understanding of representational change and its relation to the understanding of false belief and the appearance-reality distinction. Child Development, 59, 26-37. Gordon, R. (1986). Folk psychology as simulation. Mind and Language, 1, 158-171. Gordon, R. (1995a). Folk psychology as simulation. In M. Davies & T. Stone (Eds.), Folk psychology (Vol. 3, pp. 60-73). Oxford, England: BlackweH. Gordon, R. (1995b). The simulation theory: Objections and misconceptions. In M. Davies & T. Stone (Eds.), Folk psychology (Vol. 3, pp. 100-122). Oxford, England: BlackweH. Granberg, D., & Brent, E. (1974). Dove-hawk placements in the 1968 election: Application of social judgment and balance theories. Journal of Personality and Social Psychology, 40, 833-842. Granberg, D., & Brent, E. (1983). When prophecy bends: The preferenceexpectation link in U.S. presidential elections, 1952-1980. Journal of Personality and Social Psychology, 45, 477-491. Grandy, R. (1973). Reference, meaning, and belief. Journal of Philosophy, 70, 439-452. Greenberg, M. T., Kusche, C. A., Cook, E. T., & Quamma, J. P. (1995). Promoting emotional competence in school-aged children: The effects of the PATHS curriculum. Development and Psychopathology, 7, 117-136. Grice, H. (1975). Logic and conversation. In P. Cole & J. L. Morgan (Eds.), Syntax and semantics: Speech acts. New York: Academic Press. Griffin, D. W., Dunning, D., & Ross, L. (1990). The role of construal processes in overconfident predictions about the self and others. Journal of Personality and Social Psychology, 59, 1128-1139. Griffin, D. W., & Ross, L. (1991). Subjective construal, social inference, and human misunderstanding. In M. Zanna (Ed.), Advances in experimental social psychology (Vol. 24, pp. 319-359). New York: Academic Press. Gruneberg, M. M., & Monks, J. (1974). "Feeling of knowing" and cued recall. Acta Psychologica, 38, 257-265. Gruneberg, M. M., Monks, J., & Sykes, R. N. (1977). Some methodological problems with feeling of knowing studies. Acta Psychologica, 41, 365-371. Hackman, J. R. (1987). The design of work teams. In J. Lorsch (Ed.), Handbook of organizational behavior (pp. 315-342). Englewood Cliffs, NJ: Prentice Hall. Hancock, M., & Ickes, W. (1996). Empathic accuracy: When does the perceiver-target relationship make a difference? Journal of Social and Personality Relationships, 13, 179-199. Hansen, R. D., & Donoghue, J. M. (1977). 
The power of consensus: Information derived from one's own and others' behavior. Journal of Personality and Social Psychology, 35, 294-302.

755

Hardy, G. H. (1989). A mathematician's apology. Cambridge, England: Cambridge University Press. (Original work published 1940) Harris, P. L., Johnson, C. N., Hutton, D., Andrews, G., & Cooke, T. (1989). Young children's theory of mind and emotion. Cognition and Emotion, 3, 379-400. Hart, J. T. (1965). Memory and the feeling of knowing experience. Journal of Educational Psychology, 56, 208-216. .Hart, J. T. (1967). Memory and the memory-monitoring process. Journal of Verbal Learning and Verbal Behavior, 6, 685-691. Hasher, L., Attig, M. S., & Alba, J. W. (1981). I knew it all along: Or, did I? Journal of Verbal Learning and Verbal Behavior, 20, 86-96. Hayes, J. R., Flower, L., Schriver, K. A., Stratman, J. F., & Carey, L. (1987). Cognitive processes in revision. In S. Rosenberg (Ed.), Advances in applied psycholinguistics: Vol. 2. Reading, writing, and language learning (pp. 176-240). New York: Cambridge University Press. Henle, M. (1962). The birth and death of ideas. In H. Gruber, G. Terrell, & M. Wertheimer (Eds.), Contemporary approaches to creative thinking (pp. 31-62). New York: Atherton Press. Hinds, P. J. (1999). The curse of expertise: The effects of expertise and debiasing methods on predictions of novice performance. Journal of Experimental Psychology: Applied, 5, 205-221. Hoch, S. J. (1984). Availability and inference in predictive judgment. Journal of Experimental Psychology: Learning, Memory, and Cognition, 10, 649-662. Hoch, S. J. (1985). Counterfactual reasoning and accuracy in predicting personal events. Journal of Experimental Psychology: Learning, Memory, and Cognition, 11, 719-731. Hoch, S. J. (1987). Perceived consensus and predictive accuracy: The pros and cons of projection. Journal of Personality and Social Psychology, 53, 221-234. Hoch, S. J., & Lowenstein, G. F. (1989). Outcome feedback: Hindsight and information. Journal of Experimental Psychology: Learning, Memory, and Cognition, 15, 605-619. Hodges, S. D., & Wegner, D. M. (1997). Automatic and controlled empathy. In W. Ickes (Ed.), Empathic accuracy (pp. 311-339). New York: Guilford Press. Hogarth, R. M. (1981). Beyond discrete biases: Functional and dysfunctional aspects of judgmental heuristics. Psychological Bulletin, 90, 197 217. Hollingshead, A. B. (1996). The rank-order effect in group decision making. Organizational Behavior and Human Decision Processes, 68, 181193. Horton, W. S., & Keysar, B. (1996). When do speakers take into account common ground? Cognition, 59, 91-117. lannotti, R. J. (1978). Effect of role-taking experiences on role-taking, empathy, altruism, and aggression. Developmental Psychology, 14, 119-124. Ickes, W. (1993). Empathic accuracy. Journal of Personality, 61, 587-610. Ickes, W. (1997). Introduction. In W. Ickes (Ed.), Empathic accuracy (pp. 1-16). New York: Guilford Press. Ickes, W., Marangoni, C., & Garcia, S. (1997). Studying empathic accuracy in a clinically relevant context. In W. Ickes (Ed.), Empathic accuracy (pp. 282-310). New York: Guilford Press. Ickes, W., & Simpson, J. (1997). Managing empathic accuracy in close relationships. In W. Ickes (Ed.), Empathic accuracy (pp. 218-250). New York: Guilford Press. Innes, J. M. (1976). The structure and communicative effectiveness of "inner" and "external" speech. British Journal of Social and Clinical Psychology, 15, 97-99. Issacs, E. A., & Clark, H. H. (1987). References in conversation between experts and novices. Journal of Experimental Psychology: General, 116, 26-37. Jacoby, L. L., Bjork, R. A., & Kelley, C. M. (1994). 
Illusions of comprehension, competence, and remembering. In D. Druckman & R. A. Bjork

756

NICKERSON Koriat, A., & Lieblich, I. (1977). A study of memory pointers. Acta Psychologica, 41, 151-164. Krauss, R. M., & Fussell, S. R. (1990). Mutual knowledge and communicative effectiveness. In J. Galegher, R. E. Kraut, & C. Egido (Eds.), Intellectual teamwork: Social and technical bases of collaborative work (pp. 111-145). Hillsdale, NJ: Erlbaum. Krauss, R. M., & Fussell, S. R. (1991a). Constructing shared communicative environments. In L. Resnick, J. Levine, & S. Behrend (Eds.), Perspectives on socially shared cognition (pp. 172200). Washington, DC: American Psychological Association. Krauss, R. M., & Fussell, S. R. (1991b). Perspective-taking in communication: Representations of others' knowledge in reference. Social Cognition, 9, 2-24. Krauss, R. M., & Glucksberg, S. (1969). The development of communication: Competence as a function of age. Child Development, 40, 255 266. Krauss, R. M., Vivekananthan, P. S., & Weinheimer, S. (1968). "Inner speech" and "external speech": Characteristics and communication effectiveness of socially and nonsocially encoded messages. Journal of Personality and Social Psychology, 9, 295-300. Kurdek, L. A. (1977). Structural components and intellectual correlates of cognitive perspective taking in first- through fourth-grade children. Child Development, 48, 1503-1511. Lachman, J. L., & Lachman, R. (1980). Age and the actualization of world knowledge. In L. W. Poon, J. L. Fozard, L. S. Cermak, D. Arenberg, & L. W. Thompson (Eds.), New directions in memory and aging (pp. 313-343). Hillsdale, NJ: Erlbaum. Larkin, J., McDermott, J., Simon, D. P., & Simon, H. A. (1980). Expert and novice performance in solving physics problems. Science, 208, 13351342. Laughlin, P. R., & Hollingshead, A. B. (1995). A theory of collective induction. Organizational Behavior and Human Decision Processes, 61, 94-107. Laughlin, P. R., VanderStoep, S. W., & Hollingshead, A. B. (1991). Collective versus individual induction: Recognition of truth, rejection of error, and collective information processing. Journal of Personality and Social Psychology, 61, 50-67. Leary, M. R. (1982). Hindsight distortion and the 1980 presidential election. Personality and Social Psychology Bulletin, 8, 257-263. Leibowitz, H. W. (1996). The symbiosis between basic and applied research. American Psychologist, 51, 366-370. Leonesio, R. J., & Nelson, T. O. (1990). Do different metamemory judgments tap the same underlying aspects of memory? Journal of Experimental Psychology: Learning, Memory, and Cognition, 16, 464-470. Leslie, A. M. (1994). Pretending and believing: Issues in the theory of ToMM. Cognition, 50, 211-238. Leslie, A. M., & Frith, U. (1988). Autistic children's understanding of seeing, knowing and believing. British Journal of Developmental Psychology, 6, 315-324. Levine, J. M., & Moreland, R. L. (1991). Culture and socialization in work groups. In L. B. Resnick, J. M. Levine, & S. D. Teasley (Eds.), Perspectives on socially shared cognition (pp. 257-279). Washington, DC: American Psychological Association. Levine, J. M., Resnick, L. B., & Higgins, E. T. (1993). Social foundations of cognition. Annual Review of Psychology, 44, 585-612. Liang, D. W., Moreland, R., & Argote, L. (1995). Group versus individual training and group performance: The mediating role of transactive memory. Personality and Social Psychology Bulletin, 21, 384-393. Lichtenstein, S., & Fischhoff, B. (1980). Training for calibration. Organizational Behavior and Human Performance, 20, 159-183. 
Lichtenstein, S., Fischhoff, B., & Phillips, L. D. (1982). Calibration of probabilities: The state of the art to 1980. In D. Kahneman, P. Slovic, & A. Tversky (Eds.), Judgment under uncertainty: Heuristics and biases (pp. 306-334). Cambridge, England: Cambridge University Press.

(Eds.), Learning, remembering, believing: Enhancing human performance (pp. 57-80). Washington, DC: National Academy Press. Jacoby, L. L., & Kelley, C. M. (1987). Unconscious influences of memory for a prior event. Personality and Social Psychology Bulletin, 13, 314 336. Jameson, A., Nelson, T. O., Leonesio, R. J., & Narens, L. (1993). The feeling of another person's knowing. Journal of Memory and Language, 32, 320-333. Johnson, M. K. (1988). Reality monitoring: An experimental phenomenological approach. Journal of Experimental Psychology: General, 117, 390-394. Johnson, M. K., Foley, M. A., Suengas, A. G., & Raye, C. L. (1988). Phenomenal characteristics of memories for perceived and imagined autobiographical events. Journal of Experimental Psychology: General, 117, 371-376. Johnson, M. K., & Raye, C. L. (1981). Reality monitoring. Psychological Review, 88, 67-85. Johnson, M. K., & Suengas, A. G. (1989). Reality monitoring judgments of other people's memories. Bulletin of the Psychonomic Society, 27, 107-110. Karniol, R. (1986). What will they think of next? Transformation rules used to predict other people's thoughts and feelings. Journal of Personality and Social Psychology, 51, 932-944. Karniol, R. (1990). Reading people's minds: A transformation rule model for predicting others' thoughts and feelings. In L. Berkowitz (Ed.), Advances in experimental social psychology (Vol. 23, pp. 211-247). New York: Academic Press. Karniol, R. (1995). Developmental and individual differences in predicting others' thoughts and feelings: Applying the transformation rule model. In N. Eisenberg (Ed.), Review of personality and social psychology: Vol. 15. Social development (pp. 27-48). Thousand Oaks, CA: Sage. Kassin, S. M. (1979). Consensus information, prediction, and causal attribution: A review of the literature and issues. Journal of Personality and Social Psychology, 37, 1966-1981. Kelley, C. M. (1999). Subjective experience as a basis of "objective" judgments: Effects of past experience on judgments of. difficulty. In D. Gopher & A. Koriat (Eds.), Attention and performance (Vol. 17, pp. 515-536). Cambridge, MA: MIT Press. Kelley, C. M., & Jacoby, L. L. (1996). Adult egocentrism: Subjective experience versus analytic bases for judgment. Journal of Memory and Language, 35, 157-175. Keren, G. B. (1991). Calibration and probability judgments: Conceptual and methodological issues. Acta Psychologica, 77, 217-273. Keysar, B. (1994). The illusory transparency of intention: Linguistic perspective taking in text. Cognitive psychology, 26, 165208. Keysar, B., Ginzel, L. E., & Bazerman, M. H. (1995). States of affairs and states of mind: The effects of knowledge of beliefs. Organizational Behavior and Human Decision Processes, 64, 283-293. Kim, P. H. (1997). When what you know can hurt you: A study of experiential effects on group discussion and performance. Organizational Behavior and Human Performance, 69, 165-177. Kingsbury, D. (1968). Manipulating the amount of information obtained from a person giving directions. Unpublished honors thesis, Harvard University, Cambridge, MA. Knudson, R. M., Sommers, A. A., & Golding, S. L. (1980). Interpersonal perception and mode of resolution in marital conflict. Journal of Personality and Social Psychology, 38, 751-763. Kohlberg, L. (1969). Stage and sequence: The cognitive-developmental approach to socialization. In D. A. Goslin (Ed.), Handbook of socialization theory and research (pp. 347-480). Chicago: Rand McNally. Koriat, A., Lichtenstein, S., & Fischhoff, B. 
(1980). Reasons for confidence. Journal of Experimental Psychology: Human Learning and Memory, 6, 107-118.

IMPUTING KNOWLEDGE Lillard, A. (1998). Ethnopsychoiogies: Cultural variations in theories of mind. Psychological Bulletin, 123, 3-32. Livesley, W. J., & Bromley, D. B. (1973). Person perception in childhood and adolescence. London: Wiley. Marangoni, C., Garcia, S., Ickes, W., & Teng, G. (1995), Empathic accuracy in a clinically relevant setting. Journal of Personality and Social Psychology, 68, 854-869. Marks, G., & Miller, N. (1987). Ten years of research on the falseconsensus effect: An empirical and theoretical review. Psychological Review, 102, 72-90. May, R. S. (1986). Inferences, subjective probability and frequency of correct answers: A cognitive approach to the overconfidence phenomenona. In B. Brehmer, H. Jungermann, P. Lourens, & G. Sevon (Eds.), New directions in research on decision making. Amsterdam: Elsevier. McGuigan, F. J. (1966). Covert oral behavior and auditory hallucinations. Psychophysiology, 3, 73-80. Mead, G. H. (1934). Mind, self, and society. Chicago: University of Chicago Press. Meltzoff, A. (1995). Understanding the intentions of others: Re-enactment of intended acts by 18-month-old children. Developmental Psychology, 31, 838-850. Metcalfe, J. (1986). Feeling of knowing in memory and problem solving. Journal of Experimental Psychology: Learing, Memory, and Cognition, 12, 288-294. Minsky, M. L. (1975). A framework for representing knowledge. In P. H. Winston (Ed.), The psychology of computer vision. New York: McGrawHill. Mintz, S., & Alpert, M. (1972). Imagery vividness, reality testing, and schizophrenic hallucinations. Journal of Abnormal Psychology, 79, 310-316. Mitchell, P. (1994). Realism and early conception of mind: A synthesis of phylogenetic and ontogenetic issues. In C. Lewis & P. Mitchell (Eds.), Children's early understanding of mind: Origins and development (pp. 19-45). Hillsdale, NJ: Erlbaum. Mitchell, P., Robinson, E. J., Isaacs, J. E., & Nye, R. M. (1996). Contamination in reasoning about false belief: An instance of realist bias in adults but not children. Cognition, 59, 1-21. Mitchell, T. R., & Kalb, L. S. (1981). Effects of outcome knowledge and outcome valence on supervisors' evaluations. Journal of Applied Psychology, 66, 604-612. Mossier, D. G., Marvin, R. S., & Greenberg, M. T. (1976). Conceptual perspective taking in 2- to 6-year old children. Developmental Psychology, 12, 85-86. Mullen, B. (1983). Egocentric bias in estimates of consensus. Journal of Social Psychology, 121, 31-38. Mullen, B., Atkins, J. L., Champion, D. S., Edwards, C., Hardy, D., Story, J. E., & Venderklok, M. (1985). The false consensus effect: A metaanalysis of 115 hypothesis tests. Journal of Experimental Social Psychology, 21, 263-283. Neale, M. A., & Bazerman, M. H. (1983). The effect of perspective-taking ability under alternative forms of arbitration on the negotiation process. Industrial and Labor Relations Review, 36, 378-388. Nelson, T. O., & Dunlosky, J. (1991). When people's judgments of learning JOLs are extremely accurate at predicting subsequent recall: The "delayed-JOL effect." Psychological Science, 2, 267-270. Nelson, T. O., Gerler, D., & Narens, L. (1984). Accuracy of feeling-ofknowing judgments for predicting perceptual identification and relearning. Journal of Experimental Psychology: General, 113, 282-300. Nelson, T. O., & Narens, L. (1980). Norms of 300 general-information questions: Accuracy of recall, latency of recall, and feeling-of-knowing ratings. Journal of Verbal Learning and Verbal Behavior, 19, 338368. Nelson, T. O., & Narens, L. (1990). 
Metamemory: A theoretical framework and new findings. In G. H. Bower (Ed.), The psychology of learning and motivation (Vol. 26, pp. 125-173). New York: Academic Press.

757

Newton, L. (1990). Overconfidence in the communication of intent: Heard and unheard melodies. Unpublished doctoral dissertation, Stanford University, Stanford, CA. Nickerson, R. S. (1993). On the distribution of cognition. In G. Salomon (Ed.), Distributed cognitions: Psychological and educational considerations (pp. 229-261). New York: Cambridge University Press. Nickerson, R. S. (1998). Confirmation bias: A ubiquitous phenomenon in many guises. Review of General Psychology, 2, 175220. Nickerson, R. S., Baddeley, A., & Freeman, B. (1987). Are people's estimates of what other people know influenced by what they themselves know? Acta Psychologica, 64, 245-259. Niedenthal, P. M., Cantor, N., & Kihlstrom, J. F. (1985). Prototypematching: A strategy for social decision-making. Journal of Personality and Social Psychology, 48, 575-584. Noller, P., & Venardos, C. (1986). Communication awareness in married couples. Journal of Social and Personality Relationships, 3, 31-42. O'Connor, M. (1989). Models of human behavior and confidence in judgment: A review. International Journal of Forecasting, 5, 159-169. Page, B. I., & Jones, C. C. (1979). Reciprocal effects of policy preferences, party loyalties, and the vote. American Political Science Review, 73, 1071-1089. Perner, J. (1991). Understanding the representational mind. Cambridge, MA: MIT Press. Perner, J., Leekam, S., & Wimmer, H. (1987). Three-year-olds' difficulty understanding false belief: Cognitive limitation, lack of knowledge, or pragmatic misunderstanding. British Journal of Developmental Psychology, 5, 125-137. Piaget, J. (1926). The language and thought of the child (M. Warden, Trans.). New York: Harcourt, Brace. (Original work published 1923) Piaget, J. (1928). Judgment and reasoning in the child (M. Warden, Trans.). New York: Harcourt, Brace. (Original work published 1924) Piaget, J. (1962). Comments: Addendum to Vygotsky, L. S. (1962). In J. Hanfmann & G. Valcar (Ed, and Trans.), Thought and language. Cambridge, MA: MIT Press. Piaget, J., & Inhelder, B. (1956). The child's conception of space. London: Routledge & Kegan Paul. Premack, D., & Woodruff, G. (1978). Does the chimpanzee have a theory of mind? Behavioral and Brain Sciences, 1, 515-526. Raiffa, H. (1982). The art and science of negotiation. Cambridge, MA: Belknap Press of Harvard University Press. Read, J. D., & Bruce, D. (1982). Longitudinal tracking of difficult memory retrievals. Cognitive Psychology, 14, 280-300. Reder, L. M. (1987). Selection strategies in question answering. Cognitive Psychology, 19, 90-138. Reder, L. M. (1988). Strategic control of retrieval strategies. In G. Bower (Ed.), The psychology of learning and motivation (Vol. 22, pp. 227259). San Diego, CA: Academic Press. Reder, L. M., & Ritter, F. E. (1992). What determines initial feeling of knowing? Familiarity with question terms, not with the answer. Journal of Experimental Psychology: Learning, Memory, and Cognition, 18, 435-451. Resnick, L. B., Levine, J. M., & Teasley, S. D. (Eds.). (1991). Perspectives on socially shared cognition. Washington, DC: American Psychological Association. Rommetveit, R. (1974). On message structure: A framework for the study of language and communication. New York: Wiley. Ross, L., Greene, D., & House, P. (1977). The "false consensus" effect: An egocentric bias in social perception and attribution processes. Journal of Experimental Social Psychology, 13, 279-301. Ross, M., & Holmberg, D. (1988). 
Recounting the past: Gender differences in the recall of events in the history of a close relationship. In J. M. Olson & M. P. Zanna (Eds.), Self-inferences processes: The Ontario Symposium (Vol. 6, pp. 135-152). Hillsdale, NJ: Erlbaum.

758

NICKERSON Shinar, D. (1984). Actual versus estimated night-time pedestrian visibility. Ergonomics, 27, 863-871. Sillars, A. L. (1985). Interpersonal perception in relationships. In W. Ickes (Ed.), Compatible and incompatible relationships (pp. 277-305). New York: Springer-Verlag. Sillars, A. L., Pike, G. R., Jones, T. S., & Murphy, M. A. (1984). Communication and understanding in marriage. Human Communication Research, 10, 317-350. Sillars, A. L., & Scott, M. D. (1983). Interpersonal perception between intimates: An integrative review. Human Communication Research, 10, 153-176. Simpson, J. A., Ickes, W., & Blackstone, T. (1995). When the head protects the heart: Empathic accuracy in dating relationships. Journal of Personality and Social Psychology, 69, 629-641. Slovic, P., & Fischhoff, B. (1977). On the psychology of experimental surprises. Journal of Experimental Psychology: Human Perception and Performance, 3, 544551. Slovic, P., & Lichtenstein, S. (1971). Comparison of Bayesian and regression approaches to the study of information processing in judgment. Organizational Behavior and Human Performance, 6, 649-744. Smith, V. L., & Clark, H. H. (1993). On the course of answering questions. Journal of Memory and Language, 32, 25-38. Sniderman, P. M., Brody, R. A., & Tetlock, P. E. (1991). Reasoning and choice: Explorations in political psychology. New York: Cambridge University Press. Sniezek, R. A., Paese, P. W., & Switzer, F. S. (1990). The effect of choosing on confidence in choice. Organizational Behavior and Human Decision Processes, 46, 264-282. Snyder, M., & Uranowitz, S. W. (1978). Reconstructing the past: Some cognitive consequences of person perception. Journal of Personality and Social Psychology, 36, 941-950. Spivack, G., & Shure, M. B. (1974). Social adjustment of young children: A cognitive approach to solving real-life problems. San Francisco: Jossey-Bass. Srull, T. K., & Gaelick, L. (1983). General principles and individual differences in the self as a habitual reference point: An examination of self-other judgments of similarity. Social Cognition, 2, 108-121. Stasser, G., Taylor, L. A., & Hanna, C. (1989). Information sampling in structured and unstructured discussions of three- and six-person groups. Journal of Personality and Social Psychology, 57, 67-78. Stasser, G., & Titus, W. (1985). Pooling of unique information in group decision making: Biased information sampling during discussion. Journal of Personality and Social Psychology, 48, 1467-1478. Stasser, G., & Titus, W. (1987). Effects of information load and percentage of common information on the dissemination of unique information during group discussion. Journal of Personality and Social Psychology, 53, 81-93. Steedman, M. J., & Johnson-Laird, P. N. (1980). The production of sentences, utterances and speech acts: Have computers anything to say? In B. Butterworth (Ed.), Language production: Vol. L Speech and talk. London: Academic Press. Sticht, T. G., Beeler, M. J., & McDonald, B. A. (Eds.). (1992). The intergenerational transfer of cognitive skills (vols. 1 and 2). Norwood, NJ: Ablex. Stinson, L., & Ickes, W. (1992). Empathic accuracy in the interactions of male friends versus male strangers. Journal of Personality and Social Psychology, 62, 787-797. Suengas, A. G., & Johnson, M. K. (1988). Qualitative effects of rehearsal on memories for perceived and imagined complex events. Journal of Experimental Psychology: General, 117, 377-389. Synodinos, N. E. (1986). 
Hindsight distortion: "I knew-it-all along and I was sure about it." Journal of Applied Social Psychology, 16, 107-117. Taylor, M. (1996). Social cognitive development from a theory of mind perspective. In E. C. Carterette, M. P. Friedman (Series Eds.), R.

Rubin, J. Z., & Brown, B. R. (1975). The social psychology of bargaining and negotiation. New York: Academic Press. Ryan, M. P., Petty, C. R., & Wenzlaff, R. M. (1982). Motivated remembering efforts during tip-of-the tongue states. Acta Psychologica, 51, 137-147. Sachs, J., & Devin, J. (1976). Young children's use of age-appropriate speech styles in social interaction and role-playing. Journal of Child Language, 3, 81-98. Salmon, W. C. (1974). Rejoinder to Barker and Kyburg. In R. Swinburne (Ed.), The justification of induction (pp. 66-73). London: Oxford University Press. Salomon, G. (Ed.). (1993). Distributed cognitions: Psychological and educational considerations. New York: Cambridge University Press. Samuelson, W. F., & Bazerman, M. H. (1985). The winner's curse in bilateral negotiation. In V. Smith (Ed.), Research in experimental economics (Vol. 3, pp. 105-137). Greenwich, CT: JAI Press. Scarlett, H. H., Press, A. N., & Crockett, W. H. (1971). Children's descriptions of peers: A Wernerian developmental analysis. Child Development, 42, 439-453. Schank, R. C., & Abelson, R. P. (1977). Scripts, plans, goals, and understanding: An inquiry into human knowledge structures. Hillsdale, NJ: Erlbaum. Schegloff, E. A. (1972). Notes on a conversational practice: Formulating place. In D. Sudnow (Ed.), Studies in social interaction (pp. 75-119). New York: Free Press. Schegloff, E. A., Jefferson, G., & Sacks, H. (1977). The preference for self-correction in the organization of repair in conversation. Language, 53, 361-382. Schooler, J. W., Gerhard, D., & Loftus, E. F. (1986). Qualities of the unreal. Journal of Experimental Psychology: Learning, Memory, and Cognition, 12, 171-181. Schutz, A. (1970). On phenomenology and social relations. Chicago: University of Chicago Press. Schwartz, B. L., & Metcalfe, J. (1992). Cue familiarity but not target retrievability enhances feeling-of-knowing judgments. Journal of Experimental Psychology: Learning, Memory, and Cognition, 18, 1074-1083. Seaver, D. A., von Winterfeldt, D. V., & Edwards, W. (1978). Eliciting subjective probability distributions on continuous variables. Organizational Behavior and Human Performance, 21, 379-391. Selman, R. L. (1971). Taking another's perspective: Role taking development in early childhood. Child Development, 42, 1721-1734. Selman, R. L. (1980). The growth of interpersonal understanding: Developmental and clinical analyses. New York: Academic Press. Selman, R. L. (1981). The child as a friendship philosopher. In S. R. Asher & J. M. Gottman (Eds.), The development of children's friendships (pp. 242-272). Cambridge, England: Cambridge University Press. Selman, R. L., & Byrne, D. F. (1974). A structural-developmental analysis of levels of role-taking in middle childhood. Child Development, 45, 803-806. Shaklee, H., & Fischhoff, B. (1982). Strategies of information search in causal analysis. Memory and Cognition, 10, 520-530. Shantz, C. U. (1975). The development of social cognition. In E. M. Hetherington (Ed.), Review of child development research (Vol. 5, pp. 257-323). Chicago: University of Chicago Press. Shantz, C. U. (1983). Social cognition. In P. Mussen (Ed.), Handbook of child psychology (Vol. 3, pp. 495-555). New York: Wiley. Shatz, C. V., & Gelman, R. (1973). The development of communication skills: Modifications in the speech of young children as a function of listener. Monographs of the Society for Research in Child Development, 38, (5, Serial No. 152). Shatz, M. (1983). Communication. In P. Mussen (Series Ed.), J. H. 
Flavell, & E. M. Markman (Vol. Eds.), Handbook of child psychology: Vol. 3, Cognitive Development, (pp. 841-889). New York: Wiley.

IMPUTING KNOWLEDGE Gelman, & T. Au (Vol. Eds.), Handbook of perception and cognition: Vol. 13. Perceptual and cognitive development (pp. 283-329). San Diego, CA: Academic Press. Tetlock, P. E., & Kim, J. I. (1987). Accountability and judgment processes in a personality prediction task. Journal of Personality and Social Psychology, 52, 700-709. Thomas, G., & Fletcher, G. J. O. (1997). Empathic accuracy in close relationships. In W. Ickes (Ed.), Empathic accuracy (pp. 194-217). New York: Guilford Press. Tversky, A., & Kahneman, D. (1973). Availability: A heuristic for judging frequency and probability. Cognitive Psychology, 5, 207-232. Tversky, A., & Kahneman, D. (1974, September 27). Judgment under uncertainty: Heuristics and biases. Science, 185, 1124-1131. Vygotsky, L. S. (1962). Thought and language. Cambridge, MA: MIT Press. Wallsten, T. S., & Budescu, D. V. (1983). Encoding subjective probabilities: A psychological and psychometric review. Management Science, 29, 152-173. Wegner, D. M. (1986). Transactive memory: A contemporary analysis of the group mind. In B. Mullen & G. R. Goethals (Eds.), Theories of group behavior (pp. 185-208). New York: Springer-Verlag.

759

Wegner, D. M. (1995). A computer network model of human transactive memory. Social Cognition, 13, 319-339. Wegner, D. M., Erber, R., & Raymond, P. (1991). Transactive memory in close relationships. Journal of Personality and Social Psychology, 61, 923-929. Wegner, D. M., Giuliano, T., & Hertel, P. (1985). Cognitive interdependence in close relationships. In W. J. Ickes (Ed.), Compatible and incompatible relationships (pp. 253-276). New York: Springer-Verlag. Werner, H. (1948). Comparative psychology of mental development. Madison, CT: International Universities Press. Werner, H., & Kaplan, B. (1963). Symbol formation. New York: Wiley. Wood, G. (1978). The knew-it-all-along effect. Journal of Experimental Psychology: Human Perception and Performance, 4, 345-353. Yirmiya, N., Erel, O., Snaked, M., & Solomonica-Levi, D. (1998). Metaanalysis comparing theory of mind abilities of individuals with autism, individuals with mental retardation, and normally developing individuals. Psychological Bulletin, 124, 283-307.

Received August 7, 1998
Revision received May 17, 1999
Accepted May 18, 1999
