List of biases in judgment and decision making

Many biases in judgment and decision making have been demonstrated by research in psychology and behavioral economics. These are systematic deviations from a standard of rationality or good judgment.

Although the reality of these biases is confirmed by replicable research, there are often controversies about how to classify these biases or how to explain them. [1] Some are effects of information-processing rules, called heuristics, that the brain uses to produce decisions or judgments. These are called cognitive biases. [2][3] Biases in judgment or decision-making can also result from motivation, such as when beliefs are distorted by wishful thinking. Some biases have a variety of cognitive ("cold") or motivational ("hot") explanations. Both effects can be present at the same time. [4][5]

There are also controversies about whether some of these biases count as truly irrational or whether they result in useful attitudes or behavior. An example is that, when getting to know others, people tend to ask leading questions which seem biased towards confirming their assumptions about the person. This kind of confirmation bias has been argued to be an example of social skill: a way to establish a connection with the other person. [6]

The research on these biases overwhelmingly involves humans. However, some of the findings have appeared in animals as well. For example, hyperbolic discounting has also been observed in rats, pigeons and monkeys. [7]

Decision-making, belief and behavioral biases

Many of these biases affect belief formation, business and economic decisions, and human behavior in general. They arise as replicable results under specific conditions: when confronted with a particular situation, people deviate from what is normatively expected in a characteristic way:

Ambiguity effect – the tendency to avoid options for which missing information makes the probability seem "unknown." [8]

Anchoring or focalism – the tendency to rely too heavily, or "anchor," on a past reference or on one trait or piece of information when making decisions.

Attentional bias – the tendency to pay attention to emotionally dominant stimuli in one's environment and to neglect relevant data when making judgments of a correlation or association.

Availability heuristic – the tendency to overestimate the likelihood of events with greater "availability" in memory, which can be influenced by how recent the memories are, or how unusual or emotionally charged they may be.

Availability cascade – a self-reinforcing process in which a collective belief gains more and more plausibility through its increasing repetition in public discourse (or "repeat something long enough and it will become true").

Backfire effect – when people react to disconfirming evidence by strengthening their beliefs. [9]

Bandwagon effect – the tendency to do (or believe) things because many other people do (or believe) the same. Related to groupthink and herd behavior.

Base rate fallacy or base rate neglect – the tendency to base judgments on specifics, ignoring general statistical information. [10]
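The arithmetic behind base rate neglect can be made concrete with Bayes' theorem. The sketch below uses made-up screening numbers (the prevalence, sensitivity, and false-positive rate are assumptions for illustration, not figures from the text):

```python
# Illustrative only: hypothetical disease-screening numbers chosen to show
# how a judgment that ignores the base rate goes wrong.
prevalence = 0.001          # base rate: 1 in 1,000 people has the condition
sensitivity = 0.99          # P(positive | condition)
false_positive_rate = 0.05  # P(positive | no condition)

# Bayes' theorem: P(condition | positive)
p_positive = sensitivity * prevalence + false_positive_rate * (1 - prevalence)
posterior = sensitivity * prevalence / p_positive

print(f"P(condition | positive test) = {posterior:.3f}")  # ~0.019, not 0.99
```

With a 1-in-1,000 base rate, fewer than 2% of positive results are true positives, even though the test catches 99% of real cases; judging from the test's accuracy alone is exactly the fallacy described above.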

Belief bias – an effect where someone's evaluation of the logical strength of an argument is biased by the believability of the conclusion. [11]

Bias blind spot – the tendency to see oneself as less biased than other people, or to be able to identify more cognitive biases in others than in oneself. [12]

Choice-supportive bias – the tendency to remember one's choices as better than they actually were. [13]

Clustering illusion – the tendency to over-expect small runs, streaks or clusters in large samples of random data.

Confirmation bias – the tendency to search for or interpret information or memories in a way that confirms one's preconceptions. [14]

Congruence bias – the tendency to test hypotheses exclusively through direct testing, instead of testing possible alternative hypotheses.

Conjunction fallacy – the tendency to assume that specific conditions are more probable than general ones. [15]

Conservatism or regressive bias – the tendency to underestimate high values and high likelihoods/probabilities/frequencies and overestimate low ones; based on the observed evidence, estimates are not extreme enough. [16][17][18]

Conservatism (Bayesian) – the tendency to revise belief insufficiently when presented with new evidence (estimates of conditional probabilities are conservative). [16][19][20]

Contrast effect – the enhancement or diminishing of a weight or other measurement when compared with a recently observed contrasting object. [21]

Curse of knowledge – when knowledge of a topic diminishes one's ability to think about it from a less-informed perspective.

Decoy effect – preferences change when there is a third option that is asymmetrically dominated.

Denomination effect – the tendency to spend more money when it is denominated in small amounts (e.g. coins) rather than large amounts (e.g. bills). [22]

Distinction bias – the tendency to view two options as more dissimilar when evaluating them simultaneously than when evaluating them separately. [23]

Duration neglect – the neglect of the duration of an episode in determining its value.

Empathy gap – the tendency to underestimate the influence or strength of feelings, in either oneself or others.

Endowment effect – the fact that people often demand much more to give up an object than they would be willing to pay to acquire it. [24]

Essentialism – categorizing people and things according to their essential nature, in spite of variations. [25]

Exaggerated expectation – based on the estimates, real-world evidence turns out to be less extreme than our expectations (conditionally inverse of the conservatism bias). [16][26]

Experimenter's or expectation bias – the tendency for experimenters to believe, certify, and publish data that agree with their expectations for the outcome of an experiment, and to disbelieve, discard, or downgrade the corresponding weightings for data that appear to conflict with those expectations. [27]

False-consensus effect – the tendency of a person to overestimate how much other people agree with him or her.

Functional fixedness – limits a person to using an object only in the way it is traditionally used.

Focusing effect – the tendency to place too much importance on one aspect of an event; causes error in accurately predicting the utility of a future outcome. [28]

Forer effect or Barnum effect – the observation that individuals will give high accuracy ratings to descriptions of their personality that supposedly are tailored specifically for them, but are in fact vague and general enough to apply to a wide range of people. This effect can provide a partial explanation for the widespread acceptance of some beliefs and practices, such as astrology, fortune telling, graphology, and some types of personality tests.

Framing effect – drawing different conclusions from the same information, depending on how or by whom that information is presented.

Frequency illusion – the illusion in which a word, a name or other thing that has recently come to one's attention suddenly appears "everywhere" with improbable frequency (see also recency illusion). [29]

Gambler's fallacy – the tendency to think that future probabilities are altered by past events, when in reality they are unchanged. Results from an erroneous conceptualization of the law of large numbers. For example, "I've flipped heads with this coin five times consecutively, so the chance of tails coming out on the sixth flip is much greater than heads."
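The fallacy is easy to check by simulation. The sketch below (parameters such as the number of flips and streak length are assumed for illustration) counts what actually happens after a run of five heads with a fair coin:

```python
import random

# Illustrative simulation: after five heads in a row, the next flip of a
# fair coin is still heads about half the time.
random.seed(0)
flips = [random.random() < 0.5 for _ in range(200_000)]  # True = heads

next_after_streak = [
    flips[i + 5]
    for i in range(len(flips) - 5)
    if all(flips[i:i + 5])  # the five preceding flips were all heads
]

share_heads = sum(next_after_streak) / len(next_after_streak)
print(f"P(heads | five heads in a row) ≈ {share_heads:.3f}")  # close to 0.5
```

The streak changes nothing: the conditional frequency stays near 0.5, which is what "future probabilities are unchanged by past events" means in practice.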

Hard-easy effect – based on a specific level of task difficulty, the confidence in judgments is too conservative and not extreme enough. [16][30][31][32]

Hindsight bias – sometimes called the "I-knew-it-all-along" effect, the tendency to see past events as being predictable [33] at the time those events happened. Colloquially referred to as "hindsight is 20/20."

Hostile media effect – the tendency to see a media report as being biased, owing to one's own strong partisan views.

Hyperbolic discounting – the tendency for people to have a stronger preference for more immediate payoffs relative to later payoffs, where the tendency increases the closer to the present both payoffs are. [34]
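Hyperbolic discounting is commonly modeled as V = A / (1 + kD), where A is the payoff, D the delay, and k a fitted discount parameter. The sketch below (the value of k is an assumption for illustration) shows the preference reversal this curve produces as both payoffs draw nearer:

```python
# Sketch of the hyperbolic discount function V = A / (1 + k*D), with an
# assumed discount parameter k; illustrates the preference reversal that
# distinguishes hyperbolic from exponential discounting.
def hyperbolic_value(amount: float, delay_days: float, k: float = 0.05) -> float:
    """Present value of `amount` received after `delay_days`."""
    return amount / (1 + k * delay_days)

# Far in the future, the larger-later reward wins...
assert hyperbolic_value(110, delay_days=105) > hyperbolic_value(100, delay_days=100)
# ...but as both payoffs draw near, the smaller-sooner reward takes over.
assert hyperbolic_value(100, delay_days=0) > hyperbolic_value(110, delay_days=5)
```

The same five-day gap between the two rewards flips the preference depending on how close to the present it sits, which is the "tendency increases the closer to the present both payoffs are" in the definition above.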

Illusion of control – the tendency to overestimate one's degree of influence over other external events. [35]

Illusion of validity – when consistent but predictively weak data leads to confident predictions.

Illusory correlation – inaccurately perceiving a relationship between two unrelated events. [36][37]

Impact bias – the tendency to overestimate the length or the intensity of the impact of future feeling states. [38]

Information bias – the tendency to seek information even when it cannot affect action. [39]

Insensitivity to sample size – the tendency to under-expect variation in small samples.
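This is the bias behind Tversky and Kahneman's hospital problem. A quick simulation (the hospital sizes and the 60% threshold are assumed for illustration) shows why small samples produce extreme proportions far more often than large ones:

```python
import random

# Illustrative simulation in the spirit of the hospital problem: days with
# 60%+ boys are far more common at a small hospital than at a large one,
# because small samples vary more around the 50% base rate.
random.seed(1)

def share_of_extreme_days(births_per_day: int, days: int = 10_000) -> float:
    """Fraction of simulated days on which at least 60% of births are boys."""
    extreme = 0
    for _ in range(days):
        boys = sum(random.random() < 0.5 for _ in range(births_per_day))
        if boys / births_per_day >= 0.6:
            extreme += 1
    return extreme / days

small = share_of_extreme_days(15)   # small hospital
large = share_of_extreme_days(150)  # large hospital
print(f"small hospital: {small:.2%}, large hospital: {large:.2%}")
```

People who judge both hospitals equally likely to record such days are neglecting exactly this dependence of variability on sample size.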

Irrational escalation – the phenomenon where people justify increased investment in a decision, based on the cumulative prior investment, despite new evidence suggesting that the decision was probably wrong.

Just-world hypothesis – the tendency for people to want to believe that the world is fundamentally just, causing them to rationalize an otherwise inexplicable injustice as deserved by the victim(s).

Less-is-better effect – a preference reversal where a dominated smaller set is preferred to a larger set.

Loss aversion – "the disutility of giving up an object is greater than the utility associated with acquiring it" [40] (see also sunk cost effects and endowment effect).

Ludic fallacy – the misuse of games to model real-life situations.

Mere exposure effect – the tendency to express undue liking for things merely because of familiarity with them. [41]

Money illusion – the tendency to concentrate on the nominal (face value) of money rather than its value in terms of purchasing power. [42]

Moral credential effect – the tendency of a track record of non-prejudice to increase subsequent prejudice.

Negativity bias – the tendency to pay more attention and give more weight to negative than positive experiences or other kinds of information.

Neglect of probability – the tendency to completely disregard probability when making a decision under uncertainty. [43]

Normalcy bias – the refusal to plan for, or react to, a disaster which has never happened before.

Observer-expectancy effect – when a researcher expects a given result and therefore unconsciously manipulates an experiment or misinterprets data in order to find it (see also subject-expectancy effect).

Omission bias – the tendency to judge harmful actions as worse, or less moral, than equally harmful omissions (inactions). [44]

Optimism bias – the tendency to be over-optimistic, overestimating favorable and pleasing outcomes (see also wishful thinking, valence effect, positive outcome bias). [45][46]

Ostrich effect – ignoring an obvious (negative) situation.

Outcome bias – the tendency to judge a decision by its eventual outcome instead of based on the quality of the decision at the time it was made.

Overconfidence effect – excessive confidence in one's own answers to questions. For example, for certain types of questions, answers that people rate as "99% certain" turn out to be wrong 40% of the time. [16][47][48][49]

Pareidolia – a vague and random stimulus (often an image or sound) is perceived as significant, e.g., seeing images of animals or faces in clouds, the man in the moon, and hearing non-existent hidden messages on records played in reverse.

Pessimism bias – the tendency for some people, especially those suffering from depression, to overestimate the likelihood of negative things happening to them.

Planning fallacy – the tendency to underestimate task-completion times. [38]

Post-purchase rationalization – the tendency to persuade oneself through rational argument that a purchase was a good value.

Pro-innovation bias – the tendency to reflect a personal bias towards an invention/innovation, while often failing to identify limitations and weaknesses or address the possibility of failure.

Pseudocertainty effect – the tendency to make risk-averse choices if the expected outcome is positive, but make risk-seeking choices to avoid negative outcomes. [50]

Reactance – the urge to do the opposite of what someone wants you to do out of a need to resist a perceived attempt to constrain your freedom of choice (see also reverse psychology).

Reactive devaluation – devaluing proposals that are no longer hypothetical or purportedly originated with an adversary.

Recency bias – a cognitive bias that results from disproportionate salience attributed to recent stimuli or observations; the tendency to weigh recent events more than earlier events (see also peak-end rule, recency effect).

Recency illusion – the illusion that a phenomenon, typically a word or language usage, that one has just begun to notice is a recent innovation (see also frequency illusion).

Restraint bias – the tendency to overestimate one's ability to show restraint in the face of temptation.

Rhyme-as-reason effect – rhyming statements are perceived as more truthful. A famous example is the defense's use of the phrase "If the gloves don't fit, then you must acquit" in the O.J. Simpson trial.

Risk compensation / Peltzman effect – the tendency to take greater risks when perceived safety increases.

Selective perception – the tendency for expectations to affect perception.

Semmelweis reflex – the tendency to reject new evidence that contradicts a paradigm. [51]

Selection bias – the distortion of a statistical analysis, resulting from the method of collecting samples. If the selection bias is not taken into account, then certain conclusions drawn may be wrong.

Social comparison bias – the tendency, when making hiring decisions, to favour potential candidates who don't compete with one's own particular strengths. [52]

Social desirability bias – the tendency to over-report socially desirable characteristics or behaviours and under-report socially undesirable characteristics or behaviours. [53]

Status quo bias – the tendency to like things to stay relatively the same (see also loss aversion, endowment effect, and system justification). [54][55]

Stereotyping – expecting a member of a group to have certain characteristics without having actual information about that individual.

Subadditivity effect – the tendency to estimate that the likelihood of an event is less than the sum of its (more than two) mutually exclusive components. [56]

Subjective validation – perception that something is true if a subject's belief demands it to be true. Also assigns perceived connections between coincidences.

Survivorship bias – concentrating on the people or things that "survived" some process and inadvertently overlooking those that didn't because of their lack of visibility.

Texas sharpshooter fallacy – pieces of information that have no relationship to one another are called out for their similarities, and that similarity is used for claiming the existence of a pattern.

Time-saving bias – underestimations of the time that could be saved (or lost) when increasing (or decreasing) from a relatively low speed, and overestimations of the time that could be saved (or lost) when increasing (or decreasing) from a relatively high speed.

Unit bias – the tendency to want to finish a given unit of a task or an item, with strong effects on the consumption of food in particular. [57]

Well travelled road effect – underestimation of the duration taken to traverse oft-traveled routes and overestimation of the duration taken to traverse less familiar routes.

Zero-risk bias – preference for reducing a small risk to zero over a greater reduction in a larger risk.

Zero-sum heuristic – intuitively judging a situation to be zero-sum (i.e., that gains and losses are correlated). Derives from the zero-sum game in game theory, where wins and losses sum to zero. [58][59] The frequency with which this bias occurs may be related to the social dominance orientation personality factor.

Social biases

Most of these biases are labeled as attributional biases.

Actor-observer bias – the tendency for explanations of other individuals' behaviors to overemphasize the influence of their personality and underemphasize the influence of their situation (see also fundamental attribution error), and for explanations of one's own behaviors to do the opposite (that is, to overemphasize the influence of our situation and underemphasize the influence of our own personality).

Defensive attribution hypothesis – defensive attributions are made when individuals witness or learn of a mishap happening to another person. In these situations, attributions of responsibility to the victim or harm-doer for the mishap will depend upon the severity of the outcomes of the mishap and the level of personal and situational similarity between the individual and victim. More responsibility will be attributed to the harm-doer as the outcome becomes more severe, and as personal or situational similarity decreases.

Dunning–Kruger effect – an effect in which incompetent people fail to realise they are incompetent because they lack the skill to distinguish between competence and incompetence. [60]

Egocentric bias – occurs when people claim more responsibility for the results of a joint action than an outside observer would credit them with.

Extrinsic incentives bias – an exception to the fundamental attribution error, where people attribute (situational) extrinsic motivations to others while attributing (dispositional) intrinsic motivations to themselves.

False consensus effect – the tendency for people to overestimate the degree to which others agree with them. [61]

Forer effect (aka Barnum effect) – the tendency to give high accuracy ratings to descriptions of one's personality that supposedly are tailored specifically for oneself, but are in fact vague and general enough to apply to a wide range of people. For example, horoscopes.

Fundamental attribution error – the tendency for people to over-emphasize personality-based explanations for behaviors observed in others while under-emphasizing the role and power of situational influences on the same behavior (see also actor-observer bias, group attribution error, positivity effect, and negativity effect). [62]

Halo effect – the tendency for a person's positive or negative traits to "spill over" from one area of their personality to another in others' perceptions of them (see also physical attractiveness stereotype). [63]

Illusion of asymmetric insight – people perceive their knowledge of their peers to surpass their peers' knowledge of them. [64]

Illusion of external agency – when people view self-generated preferences as instead being caused by insightful, effective and benevolent agents.

Illusion of transparency – people overestimate others' ability to know them, and they also overestimate their ability to know others.

Illusory superiority – overestimating one's desirable qualities, and underestimating undesirable qualities, relative to other people (also known as "Lake Wobegon effect," "better-than-average effect," or "superiority bias"). [65]

Ingroup bias – the tendency for people to give preferential treatment to others they perceive to be members of their own groups.

Just-world phenomenon – the tendency for people to believe that the world is just and therefore people "get what they deserve."

Moral luck – the tendency for people to ascribe greater or lesser moral standing based on the outcome of an event rather than the intention.

Naive cynicism – expecting more egocentric bias in others than in oneself.

Outgroup homogeneity bias – individuals see members of their own group as being relatively more varied than members of other groups. [66]

Projection bias – the tendency to unconsciously assume that others (or one's future selves) share one's current emotional states, thoughts and values. [67]

Self-serving bias – the tendency to claim more responsibility for successes than failures. It may also manifest itself as a tendency for people to evaluate ambiguous information in a way beneficial to their interests (see also group-serving bias). [68]

System justification – the tendency to defend and bolster the status quo. Existing social, economic, and political arrangements tend to be preferred, and alternatives disparaged, sometimes even at the expense of individual and collective self-interest (see also status quo bias).

Trait ascription bias – the tendency for people to view themselves as relatively variable in terms of personality, behavior, and mood while viewing others as much more predictable.

Ultimate attribution error – similar to the fundamental attribution error, in this error a person is likely to make an internal attribution to an entire group instead of the individuals within the group.

Worse-than-average effect – a tendency to believe ourselves to be worse than others at tasks which are difficult. [69]

Memory errors and biases

In psychology and cognitive science, a memory bias is a cognitive bias that either enhances or impairs the recall of a memory (either the chances that the memory will be recalled at all, or the amount of time it takes for it to be recalled, or both), or that alters the content of a reported memory. There are many types of memory bias, including:

Bizarreness effect: bizarre, or uncommon material, is better remembered than common material

Choice-supportive bias: remembering chosen options as having been better than rejected options [70]

Change bias: after an investment of effort in producing change, remembering one's past performance as more difficult than it actually was [71]

Childhood amnesia: the retention of few memories from before the age of four

Conservatism or regressive bias: tendency to remember high values and high likelihoods/probabilities/frequencies lower than they actually were and low ones higher than they actually were; based on the evidence, memories are not extreme enough. [72][73]

Consistency bias: incorrectly remembering one's past attitudes and behaviour as resembling present attitudes and behaviour. [74]

Context effect: that cognition and memory are dependent on context, such that out-of-context memories are more difficult to retrieve than in-context memories (e.g., recall time and accuracy for a work-related memory will be lower at home, and vice versa)

Cross-race effect: the tendency for people of one race to have difficulty identifying members of a race other than their own

Cryptomnesia: a form of misattribution where a memory is mistaken for imagination, because there is no subjective experience of it being a memory. [71]

Egocentric bias: recalling the past in a self-serving manner, e.g., remembering one's exam grades as being better than they were, or remembering a caught fish as bigger than it really was

Fading affect bias: a bias in which the emotion associated with unpleasant memories fades more quickly than the emotion associated with positive events. [75]

False memory: a form of misattribution where imagination is mistaken for a memory.

Generation effect (Self-generation effect): that self-generated information is remembered best. For instance, people are better able to recall memories of statements that they have generated than similar statements generated by others.

Google effect: the tendency to forget information that can be easily found online.

Hindsight bias: the inclination to see past events as being predictable; also called the "I-knew-it-all-along" effect.

Humor effect: that humorous items are more easily remembered than non-humorous ones, which might be explained by the distinctiveness of humor, the increased cognitive processing time to understand the humor, or the emotional arousal caused by the humor.

Illusion-of-truth effect: that people are more likely to identify as true statements those they have previously heard (even if they cannot consciously remember having heard them), regardless of the actual validity of the statement. In other words, a person is more likely to believe a familiar statement than an unfamiliar one.

Illusory correlation: inaccurately remembering a relationship between two events. [16][76]

Lag effect: see spacing effect

Leveling and Sharpening: memory distortions introduced by the loss of details in a recollection over time, often concurrent with sharpening or selective recollection of certain details that take on exaggerated significance in relation to the details or aspects of the experience lost through leveling. Both biases may be reinforced over time, and by repeated recollection or re-telling of a memory. [77]

Levels-of-processing effect: that different methods of encoding information into memory have different levels of effectiveness [78]

List-length effect: a smaller percentage of items are remembered in a longer list, but as the length of the list increases, the absolute number of items remembered increases as well. [79]

Misinformation effect: that misinformation affects people's reports of their own memory.

Misattribution: when information is retained in memory but the source of the memory is forgotten. One of Schacter's (1999) Seven Sins of Memory, Misattribution was divided into Source Confusion, Cryptomnesia and False Recall/False Recognition. [71]

Modality effect: that memory recall is higher for the last items of a list when the list items were received via speech than when they were received via writing.

Mood-congruent memory bias: the improved recall of information congruent with one's current mood.

Next-in-line effect: that a person in a group has diminished recall for the words of others who spoke immediately before or after this person.

Osborn effect: that being intoxicated with a mind-altering substance makes it harder to retrieve motor patterns from the basal ganglia. [80]

Part-list cueing effect: that being shown some items from a list makes it harder to retrieve the other items [81]

Peak-end rule: that people seem to perceive not the sum of an experience but the average of how it was at its peak (e.g. pleasant or unpleasant) and how it ended.

Persistence: the unwanted recurrence of memories of a traumatic event.

Picture superiority effect: that concepts are much more likely to be remembered experientially if they are presented in picture form than if they are presented in word form. [82]

Placement bias: tendency of people to remember themselves as better than others at tasks at which they rate themselves above average (also illusory superiority or better-than-average effect) [83] and to remember themselves as worse than others at tasks at which they rate themselves below average (also worse-than-average effect). [16][69]

Positivity effect: that older adults favor positive over negative information in their memories.

Primacy effect, recency effect & serial position effect: that items near the end of a list are the easiest to recall, followed by the items at the beginning of a list; items in the middle are the least likely to be remembered. [84]

Processing difficulty effect

Reminiscence bump: the recalling of more personal events from adolescence and early adulthood than personal events from other lifetime periods [85]

Rosy retrospection: the remembering of the past as having been better than it really was.

Self-relevance effect: that memories relating to the self are better recalled than similar information relating to others.

Self-serving bias: perceiving oneself responsible for desirable outcomes but not responsible for undesirable ones.

Source Confusion: misattributing the source of a memory, e.g. misremembering that one saw an event personally when actually it was seen on television.

Spacing effect: that information is better recalled if exposure to it is repeated over a longer span of time.

Stereotypical bias: memory distorted towards stereotypes (e.g. racial or gender), e.g. "black-sounding" names being misremembered as names of criminals. [71]

Suffix effect: the weakening of the recency effect in the case that an item is appended to the list that the subject is not required to recall [86]

Suggestibility: a form of misattribution where ideas suggested by a questioner are mistaken for memory.

Subadditivity effect: the tendency to estimate that the likelihood of a remembered event is less than the sum of its (more than two) mutually exclusive components. [16][87]

Telescoping effect: the tendency to displace recent events backward in time and remote events forward in time, so that recent events appear more remote, and remote events, more recent.

Testing effect: that frequent testing of material that has been committed to memory improves memory recall.

Tip of the tongue phenomenon: when a subject is able to recall parts of an item, or related information, but is frustratingly unable to recall the whole item. This is thought an instance of "blocking" where multiple similar memories are being recalled and interfere with each other. [71]

Verbatim effect: that the "gist" of what someone has said is better remembered than the verbatim wording [88]

Von Restorff effect: that an item that sticks out is more likely to be remembered than other items [89]

Zeigarnik effect: that uncompleted or interrupted tasks are remembered better than completed ones.

Common theoretical causes of some cognitive biases

Bounded rationality limits on optimization and rationality



Adaptive bias basing decisions on limited information and biasing them based on the costs of being wrong.

Attribute substitution – making a complex, difficult judgment by unconsciously substituting it with an easier judgment. [90]



Salience



Cognitive dissonance, and related:



Heuristics, including:

Availability heuristic – estimating what is more likely by what is more available in memory, which is biased toward vivid, unusual, or emotionally charged examples. [36]

Representativeness heuristic – judging probabilities on the basis of resemblance. [36]

Affect heuristic – basing a decision on an emotional reaction rather than a calculation of risks and benefits. [91]

Some theories of emotion such as:




Misinterpretations or misuse of statistics; innumeracy.


A 2012 Psychological Bulletin article suggested that at least eight seemingly unrelated biases can be produced by the same information-theoretic generative mechanism, which assumes noisy information processing during storage and retrieval of information in human memory. [16]



Methods for dealing with cognitive biases

Reference class forecasting was developed by Daniel Kahneman, Amos Tversky, and Bent Flyvbjerg to eliminate or reduce the impact of cognitive biases on decision making. [92]


[1] Dougherty, M. R. P., Gettys, C. F., & Ogden, E. E. (1999). MINERVA-DM: A memory processes model for judgments of likelihood. Psychological Review, 106(1), 180–209.
[2] Kahneman, D.; Tversky, A. (1972), "Subjective probability: A judgment of representativeness", Cognitive Psychology 3: 430–454.


[3] Baron, J. (2007). Thinking and deciding (4th ed.). New York, NY: Cambridge University Press.
[4] MacCoun, Robert J. (1998), "Biases in the interpretation and use of research results", Annual Review of Psychology 49: 259–287, doi:10.1146/annurev.psych.49.1.259, PMID 15012470.
[5] Nickerson, Raymond S. (1998), "Confirmation Bias: A Ubiquitous Phenomenon in Many Guises", Review of General Psychology (Educational Publishing Foundation) 2 (2): 175–220, doi:10.1037/1089-2680.2.2.175, ISSN 1089-2680.

[6] Dardenne, Benoit; Leyens, Jacques-Philippe (1995), "Confirmation Bias as a Social Skill", Personality and Social Psychology Bulletin (Society for Personality and Social Psychology) 21 (11): 1229–1239, doi:10.1177/01461672952111011, ISSN 1552-7433.
[7] Alexander, William H.; Brown, Joshua W. (1 June 2010). "Hyperbolically Discounted Temporal Difference Learning". Neural Computation 22 (6): 1511–1527. doi:10.1162/neco.2010.08-09-1080.
[8] Baron 1994, p. 372.
[9] Sanna, Lawrence J.; Schwarz, Norbert; Stocker, Shevaun L. (2002). "When debiasing backfires: Accessible content and accessibility experiences in debiasing hindsight." Journal of Experimental Psychology: Learning, Memory, and Cognition 28 (3): 497–502. doi:10.1037//0278-7393.28.3.497. ISSN 0278-7393.


[10] Baron 1994, pp. 224–228.
[11] Klauer, K. C.; Musch, J.; Naumer, B. (2000), "On belief bias in syllogistic reasoning", Psychological Review 107 (4): 852–884, doi:10.1037/0033-295X.107.4.852, PMID 11089409.

[12] Pronin, Emily; Kugler, Matthew B. (July 2007), "Valuing thoughts, ignoring behavior: The introspection illusion as a source of the bias blind spot", Journal of Experimental Social Psychology (Elsevier) 43 (4): 565–578, doi:10.1016/j.jesp.2006.05.011, ISSN 0022-1031.
[13] Mather, M.; Shafir, E.; Johnson, M. K. (2000), "Misremembrance of options past: Source monitoring and choice", Psychological Science 11: 132–138, doi:10.1111/1467-9280.00228.
[14] Oswald, Margit E.; Grosjean, Stefan (2004), "Confirmation Bias", in Pohl, Rüdiger F., Cognitive Illusions: A Handbook on Fallacies and Biases in Thinking, Judgement and Memory, Hove, UK: Psychology Press, pp. 79–96, ISBN 978-1-84169-351-4, OCLC 55124398.
[15] Fisk, John E. (2004), "Conjunction fallacy", in Pohl, Rüdiger F., Cognitive Illusions: A Handbook on Fallacies and Biases in Thinking, Judgement and Memory, Hove, UK: Psychology Press, pp. 23–42, ISBN 978-1-84169-351-4, OCLC 55124398.
[16] Hilbert, Martin (2012). "Toward a synthesis of cognitive biases: How noisy information processing can bias human decision making". Psychological Bulletin 138 (2): 211–237.

[17] Attneave, F. (1953). Psychological probability as a function of experienced frequency. Journal of Experimental Psychology, 46(2), 81–86.
[18] Fischhoff, B., Slovic, P., & Lichtenstein, S. (1977). Knowing with certainty: The appropriateness of extreme confidence. Journal of Experimental Psychology: Human Perception and Performance, 3(4), 552–564. doi:10.1037/0096-1523.3.4.552.
[19] DuCharme, W. M. (1970). Response bias explanation of conservative human inference. Journal of Experimental Psychology, 85(1), 66–74.
[20] Edwards, W. (1968). Conservatism in human information processing. In B. Kleinmuntz (Ed.), Formal representation of human judgment (pp. 17–52). New York: Wiley.
[21] Plous 1993, pp. 38–41.
[22] "Why We Spend Coins Faster Than Bills" by Chana Joffe-Walt. All Things Considered, 12 May 2009.
[23] Hsee, Christopher K.; Zhang, Jiao (2004), "Distinction bias: Misprediction and mischoice due to joint evaluation", Journal of Personality and Social Psychology 86 (5): 680–695, doi:10.1037/0022-3514.86.5.680, PMID 15161394.
[24] (Kahneman, Knetsch & Thaler 1991, p. 193) Richard Thaler coined the term "endowment effect."
[25]
[26] Wagenaar, W. A., & Keren, G. B. (1985). Calibration of probability assessments by professional blackjack dealers, statistical experts, and lay people. Organizational Behavior and Human Decision Processes, 36(3), 406–416.
[27] Jeng, M. (2006). "A selected history of expectation bias in physics". American Journal of Physics 74 (7): 578–583. doi:10.1119/1.2186333.

[28] Kahneman, Daniel; Krueger, Alan B.; Schkade, David; Schwarz, Norbert; Stone, Arthur A. (2006-06-30), "Would you be happier if you were richer? A focusing illusion", Science 312 (5782): 1908–1910, doi:10.1126/science.1129688, PMID 16809528.



[29] Zwicky, Arnold (2005-08-07). "Just Between Dr. Language and I". Language Log.
[30] Lichtenstein, S., & Fischhoff, B. (1977). Do those who know more also know more about how much they know? Organizational Behavior and Human Performance, 20(2), 159–183. doi:10.1016/0030-5073(77)90001-0.
[31] Merkle, E. C. (2009). The disutility of the hard-easy effect in choice confidence. Psychonomic Bulletin & Review, 16(1), 204–213.


[32] Juslin, P., Winman, A., & Olsson, H. (2000). Naive empiricism and dogmatism in confidence research: a critical examination of the hard-easy effect. Psychological Review, 107(2), 384–396.
[33] Pohl, Rüdiger F. (2004), "Hindsight Bias", in Pohl, Rüdiger F., Cognitive Illusions: A Handbook on Fallacies and Biases in Thinking, Judgement and Memory, Hove, UK: Psychology Press, pp. 363–378, ISBN 978-1-84169-351-4, OCLC 55124398.
[34] Hardman 2009, p. 110.
[35] Thompson, Suzanne C. (1999), "Illusions of Control: How We Overestimate Our Personal Influence", Current Directions in Psychological Science (Association for Psychological Science) 8 (6): 187–190, ISSN 0963-7214, JSTOR 20182602.
[36] Tversky, Amos; Kahneman, Daniel (September 27, 1974), "Judgment under Uncertainty: Heuristics and Biases", Science (American Association for the Advancement of Science) 185 (4157): 1124–1131, doi:10.1126/science.185.4157.1124, PMID 17835457.
[37] Fiedler, K. (1991). The tricky nature of skewed frequency tables: An information loss account of distinctiveness-based illusory correlations. Journal of Personality and Social Psychology, 60(1), 24–36.
[38] Sanna, Lawrence J.; Schwarz, Norbert (2004), "Integrating Temporal Biases: The Interplay of Focal Thoughts and Accessibility Experiences", Psychological Science (American Psychological Society) 15 (7): 474–481, doi:10.1111/j.0956-7976.2004.00704.x, PMID 15200632.
[39] Baron 1994, pp. 258–259.
[40] (Kahneman, Knetsch & Thaler 1991, p. 193) Daniel Kahneman, together with Amos Tversky, coined the term "loss aversion."
[41] Bornstein, Robert F.; Crave-Lemley, Catherine (2004), "Mere exposure effect", in Pohl, Rüdiger F., Cognitive Illusions: A Handbook on Fallacies and Biases in Thinking, Judgement and Memory, Hove, UK: Psychology Press, pp. 215–234, ISBN 978-1-84169-351-4, OCLC 55124398.
[42] Shafir, Eldar; Diamond, Peter; Tversky, Amos (2000), "Money Illusion", Choices, values, and frames, Cambridge University Press, pp. 335–355, ISBN 978-0-521-62749-8.
[43] Baron 1994, p. 353.
[44] Baron 1994, p. 386.
[45] Baron 1994, p. 44.
[46] Hardman 2009, p. 104.
[47] Adams, P. A., & Adams, J. K. (1960). Confidence in the recognition and reproduction of words difficult to spell. The American Journal of Psychology, 73(4), 544–552.
[48] Hoffrage, Ulrich (2004), "Overconfidence", in Pohl, Rüdiger F., Cognitive Illusions: A Handbook on Fallacies and Biases in Thinking, Judgement and Memory, Psychology Press, ISBN 978-1-84169-351-4.
[49] Sutherland 2007, pp. 172–178.
[50] Hardman 2009, p. 137.
[51] Edwards, W. (1968). Conservatism in human information processing. In B. Kleinmutz (Ed.), Formal Representation of Human Judgment (pp. 17–52). New York: John Wiley and Sons.
[52] Garcia, Stephen M.; Song, Hyunjin; Tesser, Abraham (November 2010), "Tainted recommendations: The social comparison bias", Organizational Behavior and Human Decision Processes 113 (2): 97–101, doi:10.1016/j.obhdp.2010.06.002, ISSN 0749-5978. Lay summary: BPS Research Digest (2010-10-30).
[53] Dalton, D. & Ortegren, M. (2011). "Gender differences in ethics research: The importance of controlling for the social desirability response bias". Journal of Business Ethics 103 (1): 73–93. doi:10.1007/s10551-011-0843-8.
[54] Kahneman, Knetsch & Thaler 1991, p. 193.
[55] Baron 1994, p. 382.
[56] Tversky, A., & Koehler, D. J. (1994). Support theory: A nonextensional representation of subjective probability. Psychological Review, 101(4), 547–567.
[57] "Penn Psychologists Believe 'Unit Bias' Determines The Acceptable Amount To Eat". ScienceDaily (Nov. 21, 2005).
[58] Meegan, Daniel V. (2010). "Zero-Sum Bias: Perceived Competition Despite Unlimited Resources". Frontiers in Psychology 1. doi:10.3389/fpsyg.2010.00191. ISSN 1664-1078.
[59] Chernev, Alexander (2007). "Jack of All Trades or Master of One? Product Differentiation and Compensatory Reasoning in Consumer Choice". Journal of Consumer Research 33 (4): 430–444. doi:10.1086/510217. ISSN 0093-5301.
[60] Morris, Errol (2010-06-20). "The Anosognosic's Dilemma: Something's Wrong but You'll Never Know What It Is (Part 1)". Opinionator: Exclusive Online Commentary From The Times. New York Times. Retrieved 2011-03-07.

[61] Marks, Gary; Miller, Norman (1987), "Ten years of research on the false-consensus effect: An empirical and theoretical review", Psychological Bulletin (American Psychological Association) 102 (1): 72–90, doi:10.1037/0033-2909.102.1.72.



[62] Sutherland 2007, pp. 138–139.
[63] Baron 1994, p. 275.
[64] Pronin, E.; Kruger, J.; Savitsky, K.; Ross, L. (2001), "You don't know me, but I know you: the illusion of asymmetric insight", Journal of Personality and Social Psychology 81 (4): 639–656, doi:10.1037/0022-3514.81.4.639, PMID 11642351.
[65] Hoorens, Vera (1993), "Self-enhancement and Superiority Biases in Social Comparison", European Review of Social Psychology (Psychology Press) 4 (1): 113–139, doi:10.1080/14792779343000040.
[66] Plous 2006, p. 206.
[67] Hsee, Christopher K.; Hastie, Reid (2006), "Decision and experience: why don't we choose what makes us happy?", Trends in Cognitive Sciences 10 (1): 31–37, doi:10.1016/j.tics.2005.11.007, PMID 16318925.
[68] Plous 2006, p. 185.
[69] Kruger, J. (1999). Lake Wobegon be gone! The "below-average effect" and the egocentric nature of comparative ability judgments. Journal of Personality and Social Psychology, 77(2).
[70] Mather, Shafir & Johnson, 2000.
[71] Schacter, Daniel L. (1999). "The Seven Sins of Memory: Insights From Psychology and Cognitive Neuroscience". American Psychologist 54 (3): 182–203. doi:10.1037/0003-066X.54.3.182. PMID 10199218.
[72] Attneave, F. (1953). Psychological probability as a function of experienced frequency. Journal of Experimental Psychology, 46(2), 81–86.
[73] Fischhoff, B., Slovic, P., & Lichtenstein, S. (1977). Knowing with certainty: The appropriateness of extreme confidence. Journal of Experimental Psychology: Human Perception and Performance, 3(4), 552–564. doi:10.1037/0096-1523.3.4.552.
[74] Cacioppo, John (2002). Foundations in social neuroscience. Cambridge, Mass: MIT Press. pp. 130–132. ISBN 026253195X.
[75] Walker, W. Richard; Skowronski, John J.; Thompson, Charles P. (2003). "Life Is Pleasant – and Memory Helps to Keep It That Way!". Review of General Psychology (Educational Publishing Foundation) 7 (2): 203–210. Retrieved 2009-08-27.
[76] Fiedler, K. (1991). The tricky nature of skewed frequency tables: An information loss account of distinctiveness-based illusory correlations. Journal of Personality and Social Psychology, 60(1), 24–36.
[77] Koriat, A.; Goldsmith, M.; Pansky, A. (2000). "Toward a Psychology of Memory Accuracy". Annual Review of Psychology 51 (1): 481–537. doi:10.1146/annurev.psych.51.1.481. PMID 10751979.
[78] Craik & Lockhart, 1972.
[79] Kinnell, Angela; Dennis, S. (2011). "The list length effect in recognition memory: an analysis of potential confounds". Memory & Cognition (Adelaide, Australia: School of Psychology, University of Adelaide) 39 (2): 348–363.
[80] e.g., Shushaka, 1958.
[81] e.g., Slamecka, 1968.
[82] Nelson, D. L.; Reed, U. S.; Walling, J. R. (1976). "Pictorial superiority effect". Journal of Experimental Psychology: Human Learning & Memory 2: 523–528.
[83] Kruger, J., & Dunning, D. (1999). Unskilled and unaware of it: How difficulties in recognizing one's own incompetence lead to inflated self-assessments. Journal of Personality and Social Psychology, 77(6), 1121–1134. doi:10.1037/0022-3514.77.6.1121.
[84] Martin, G. Neil; Carlson, Neil R.; Buskist, William (2007). Psychology (3rd ed.). Pearson Education. pp. 309–310. ISBN 978-0-273-71086-8.
[85] Rubin, Wetzler & Nebes, 1986; Rubin, Rahhal & Poon, 1998.
[86] Morton, Crowder & Prussin, 1971.
[87] Tversky, A., & Koehler, D. J. (1994). Support theory: A nonextensional representation of subjective probability. Psychological Review, 101(4), 547–567.
[88] Poppenk, Walia, Joanisse, Danckert, & Köhler, 2006.
[89] von Restorff, 1933.
[90] Kahneman, Daniel; Frederick, Shane (2002), "Representativeness Revisited: Attribute Substitution in Intuitive Judgment", in Gilovich, Thomas; Griffin, Dale; Kahneman, Daniel, Heuristics and Biases: The Psychology of Intuitive Judgment, Cambridge: Cambridge University Press, pp. 49–81, ISBN 978-0-521-79679-8, OCLC 47364085.
[91] Slovic, Paul; Finucane, Melissa; Peters, Ellen; MacGregor, Donald G. (2002), "The Affect Heuristic", in Gilovich, Thomas; Griffin, Dale; Kahneman, Daniel, Heuristics and Biases: The Psychology of Intuitive Judgment, Cambridge University Press, pp. 397–420, ISBN 0-521-79679-2.
[92] Flyvbjerg, B. (2008), "Curbing Optimism Bias and Strategic Misrepresentation in Planning: Reference Class Forecasting in Practice", European Planning Studies 16 (1), January, pp. 3–21.




Baron, Jonathan (1994), Thinking and deciding (2nd ed.), Cambridge University Press, ISBN 0-521-43732-6

Baron, Jonathan (2000), Thinking and deciding (3rd ed.), New York: Cambridge University Press, ISBN 0-521-65030-5

Bishop, Michael A.; Trout, J. D. (2004), Epistemology and the Psychology of Human Judgment, New York: Oxford University Press, ISBN 0-19-516229-3

Gilovich, Thomas (1993), How We Know What Isn't So: The Fallibility of Human Reason in Everyday Life, New York: The Free Press, ISBN 0-02-911706-2

Gilovich, Thomas; Dale Griffin, Daniel Kahneman (2002), Heuristics and biases: The psychology of intuitive judgment, Cambridge, UK: Cambridge University Press, ISBN 0-521-79679-2

Greenwald, A. (1980), "The Totalitarian Ego: Fabrication and Revision of Personal History", American Psychologist (American Psychological Association) 35 (7), ISSN 0003-066X

Hardman, David (2009), Judgment and decision making: psychological perspectives, Wiley-Blackwell, ISBN 978-1-4051-2398-3

Kahneman, Daniel; Paul Slovic, Amos Tversky (1982), Judgment under Uncertainty: Heuristics and Biases, Cambridge, UK: Cambridge University Press, ISBN 0-521-28414-7

Kahneman, Daniel; Knetsch, Jack L.; Thaler, Richard H. (1991), "Anomalies: The Endowment Effect, Loss Aversion, and Status Quo Bias", The Journal of Economic Perspectives (American Economic Association) 5 (1):


Plous, Scott (1993), The Psychology of Judgment and Decision Making, New York: McGraw-Hill, ISBN 0-07-050477-6

Schacter, Daniel L. (1999), "The Seven Sins of Memory: Insights From Psychology and Cognitive Neuroscience", American Psychologist (American Psychological Association) 54 (3): 182203, doi:10.1037/0003-066X.54.3.182, ISSN 0003-066X, PMID 10199218

Sutherland, Stuart (2007), Irrationality, Pinter & Martin, ISBN 978-1-905177-07-3

Tetlock, Philip E. (2005), Expert Political Judgment: how good is it? how can we know?, Princeton: Princeton University Press, ISBN 978-0-691-12302-8

Virine, L.; Trumper, M. (2007), Project Decisions: The Art and Science, Vienna, VA: Management Concepts, ISBN 978-1-56726-217-9

Ambiguity effect


The ambiguity effect is a cognitive bias where decision making is affected by a lack of information, or "ambiguity". The effect implies that people tend to select options for which the probability of a favorable outcome is known, over an option for which the probability of a favorable outcome is unknown. The effect was first described by Daniel Ellsberg in 1961.

As an example, consider a bucket containing 30 balls. The balls are colored red, black and white. Ten of the balls are red, and the remaining 20 are some combination of black and white, with all combinations of black and white being equally likely. In option X, drawing a red ball wins a person $100, and in option Y, drawing a black ball wins them $100. The probability of picking a winning ball is the same for both options X and Y. In option X, the probability of selecting a winning ball is 1 in 3 (10 red balls out of 30 total balls). In option Y, despite the fact that the number of black balls is uncertain, the probability of selecting a winning ball is also 1 in 3. This is because the number of black balls is equally distributed among all possibilities between 0 and 20, so the probability of there being (10 - n) black balls is the same as there being (10 + n) black balls. The difference between the two options is that in option X, the probability of a favorable outcome is known, but in option Y, the probability of a favorable outcome is unknown ("ambiguous").
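The equivalence of the two options can be verified with a short calculation. The sketch below is illustrative only (the variable names are mine, not from the original example); it averages the chance of drawing a black ball over a uniform prior on the number of black balls:

```python
from fractions import Fraction

TOTAL, RED = 30, 10          # 30 balls, 10 of them red
MAX_BLACK = TOTAL - RED      # the black count ranges from 0 to 20

# Option X: the probability of drawing a red ball is known exactly.
p_red = Fraction(RED, TOTAL)

# Option Y: the number of black balls is unknown but uniformly
# distributed over 0..20, so average P(black) over that prior.
p_black = sum(Fraction(b, TOTAL) for b in range(MAX_BLACK + 1)) / (MAX_BLACK + 1)

print(p_red, p_black)  # 1/3 1/3 -- the two options are objectively equivalent
```

Exact rational arithmetic (`fractions.Fraction`) is used here so the two probabilities compare as exactly equal rather than as approximately equal floats.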

In spite of the equal probability of a favorable outcome, people have a greater tendency to select a ball under option X, where the probability of selecting a winning ball is perceived to be more certain. The uncertainty as to the number of black balls means that option Y tends to be viewed less favorably. Despite the fact that there could possibly be twice as many black balls as red balls, people tend not to want to take the opposing risk that there may be fewer than 10 black balls. The "ambiguity" behind option Y means that people tend to favor option X, even when the probability is equivalent.

One possible explanation of the effect is that people have a rule of thumb (heuristic) to avoid options where information is missing (Frisch & Baron, 1988; Ritov & Baron, 1990). This will often lead them to seek out the missing information. In many cases, though, the information cannot be obtained. The effect is often the result of calling some particular missing piece of information to the person's attention.

However, not all people act this way. In Wilkinson's Modes of Leadership, what he describes as Mode Four individuals do not require such disambiguation and actively look for ambiguity especially in business and other such situations where an advantage might be found. This response appears to be linked to an individual's understanding of complexity and the search for emergent properties.



Baron, J. (2000). Thinking and deciding (3rd ed.). New York: Cambridge University Press.

Ellsberg, D. (1961). Risk, ambiguity, and the Savage axioms. Quarterly Journal of Economics, 75, 643–699.

Frisch, D., & Baron, J. (1988). Ambiguity and rationality. Journal of Behavioral Decision Making, 1, 149–157.

Ritov, I., & Baron, J. (1990). Reluctance to vaccinate: omission bias and ambiguity. Journal of Behavioral Decision Making, 3, 263–277.

Wilkinson, D. J. (2006). The Ambiguity Advantage: what great leaders are great at. London: Palgrave Macmillan.




Anchoring

Anchoring or focalism is a cognitive bias that describes the common human tendency to rely too heavily on the first piece of information offered (the "anchor") when making decisions. During decision making, anchoring occurs when individuals use an initial piece of information to make subsequent judgments. Once an anchor is set, other judgments are made by adjusting away from that anchor, and there is a bias toward interpreting other information around the anchor. For example, the initial price offered for a used car sets the standard for the rest of the negotiation, so that prices lower than the initial price seem more reasonable even if they are still higher than what the car is really worth.

Focusing effect

The focusing effect (or focusing illusion) is a cognitive bias that occurs when people place too much importance on one aspect of an event, causing an error in accurately predicting the utility of a future outcome.

People focus on notable differences, excluding those that are less conspicuous, when making predictions about happiness or convenience. For example, when people were asked how much happier they believed Californians are compared to Midwesterners, both Californians and Midwesterners said Californians must be considerably happier, when, in fact, there was no difference between the actual happiness ratings of the two groups. The bias lies in the fact that most people asked focused on and overweighted the sunny weather and ostensibly easy-going lifestyle of California, while devaluing and underrating other aspects of life and determinants of happiness, such as low crime rates and safety from natural disasters like earthquakes (both of which large parts of California lack). [1]

A rise in income has only a small and transient effect on happiness and well-being, but people consistently overestimate this effect. Kahneman et al. proposed that this is a result of a focusing illusion, with people focusing on conventional measures of achievement rather than on everyday routine. [2]

Anchoring and adjustment heuristic

Anchoring and adjustment is a psychological heuristic that influences the way people intuitively assess probabilities. According to this heuristic, people start with an implicitly suggested reference point (the "anchor") and make adjustments to it to reach their estimate. A person begins with a first approximation (anchor) and then makes incremental adjustments based on additional information. These adjustments are usually insufficient, giving the initial anchor a great deal of influence over future assessments.
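Insufficient adjustment can be illustrated with a toy formalization (my own sketch for intuition, not a model from the cited studies): the final estimate moves only a fraction of the distance from the anchor toward the value the person would otherwise report.

```python
def anchored_estimate(anchor: float, unanchored_value: float,
                      adjustment: float = 0.5) -> float:
    """Move only a fraction of the way from the anchor toward the judge's
    unanchored value; adjustment < 1 models insufficient adjustment,
    leaving the final estimate biased toward the anchor."""
    return anchor + adjustment * (unanchored_value - anchor)

# Hypothetical numbers for illustration: two judges would both report 40
# without an anchor, but they start from different anchors.
print(anchored_estimate(10, 40))   # 25.0 -- pulled down toward the low anchor
print(anchored_estimate(65, 40))   # 52.5 -- pulled up toward the high anchor
```

With the same underlying judgment, the lower anchor produces the lower final estimate, mirroring the qualitative pattern described above.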

The anchoring and adjustment heuristic was first theorized by Amos Tversky and Daniel Kahneman. In one of their first studies, participants were asked to quickly estimate the product of the numbers one through eight, presented either in ascending order (1 × 2 × 3 × 4 × 5 × 6 × 7 × 8) or in descending order (8 × 7 × 6 × 5 × 4 × 3 × 2 × 1). The anchor was the number shown first in the sequence, either 1 or 8. When 1 was the anchor, the average estimate was 512; when 8 was the anchor, the average estimate was 2,250. The correct answer is 40,320, indicating that both groups made insufficient adjustments away from the initial anchor. In another study by Tversky and Kahneman, participants observed a roulette wheel that was predetermined to stop on either 10 or 65. Participants were then asked to guess the percentage of African nations that were members of the United Nations. Participants whose wheel stopped on 10 guessed lower values (25% on average) than participants whose wheel stopped on 65 (45% on average). [3] The pattern has held in other experiments for a wide variety of different subjects of estimation.
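The arithmetic in the multiplication study is easy to verify:

```python
import math

# Multiply 1 x 2 x ... x 8, i.e. compute 8!
product = 1
for n in range(1, 9):
    product *= n

print(product)                  # 40320
assert product == math.factorial(8)
```

Both groups' average estimates (512 and 2,250) fall far short of the true product of 40,320, which is what marks the adjustment as insufficient.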


Daniel Kahneman, one of the first researchers to study anchoring.






As a second example, in a study by Dan Ariely, audience members were first asked to write down the last two digits of their social security number and to consider whether they would pay that number of dollars for items whose value they did not know, such as wine, chocolate and computer equipment. They were then asked to bid for these items, with the result that the audience members with higher two-digit numbers submitted bids that were between 60 percent and 120 percent higher than those with lower numbers, which had become their anchor. [4]

Difficulty of avoiding anchoring

Various studies have shown that anchoring is very difficult to avoid. For example, in one study students were given anchors that were obviously wrong. They were asked whether Mahatma Gandhi died before or after age 9, or before or after age 140. Clearly neither of these anchors can be correct, but the two groups still guessed significantly differently (average age of 50 vs. average age of 67). [5]

Other studies have tried to eliminate anchoring much more directly. In a study exploring the causes and properties of anchoring, participants were exposed to an anchor and asked to guess how many physicians were listed in the local phone book. In addition, they were explicitly informed that anchoring would "contaminate" their responses, and that they should do their best to correct for that. A control group received no anchor and no explanation. Regardless of how they were informed and whether they were informed correctly, all of the experimental groups reported higher estimates than the control group. Thus, despite being expressly aware of the anchoring effect, participants were still unable to avoid it. [6] A later study found that even when offered monetary incentives, people are unable to effectively adjust from an anchor. [7]


Several theories have been put forth to explain what causes anchoring. Although some explanations are more popular than others, there is no consensus as to which is best. [8] In a study on possible causes of anchoring, two authors described anchoring as easy to demonstrate but hard to explain. [5] At least one group of researchers has argued that multiple causes are at play, and that what is called "anchoring" is actually several different effects. [9]


Anchoring-and-adjusting

In their original study, Tversky and Kahneman put forth a view later termed anchoring-and-adjusting. According to this theory, once an anchor is set, people adjust away from it to get to their final answer; however, they adjust insufficiently, resulting in a final guess that is closer to the anchor than it would be otherwise. [10] Other researchers have also found evidence supporting the anchoring-and-adjusting explanation. [11]

However, later researchers criticized this model, saying that it only works when the initial anchor is outside the range of acceptable answers. To use an earlier example, since Mahatma Gandhi obviously did not die at age 9, then people will adjust from there. If a reasonable number were given, though (e.g. age 60), then adjustment would not explain the anchoring effect. [12] Another study found that the anchoring effect holds even when the anchor is subliminal. According to Tversky and Kahneman's theory, this is impossible, since anchoring is only the result of conscious adjustment. [13] Because of arguments like these, anchoring-and-adjusting has fallen out of favor.

Selective accessibility

In the same study that criticized anchoring-and-adjusting, the authors proposed an alternate explanation regarding selective accessibility, which is derived from a theory called "confirmatory hypothesis testing". In short, selective accessibility proposes that when given an anchor, a judge (i.e. a person making some judgment) will evaluate the hypothesis that the anchor is a suitable answer. Assuming it is not, the judge moves on to another guess, but not before accessing all the relevant attributes of the anchor itself. Then, when evaluating the new answer, the judge looks for ways in which it is similar to the anchor, resulting in the anchoring effect. [12] Various studies have found empirical support for this hypothesis. [14] This explanation assumes that the judge considers the anchor to be a plausible value, so that it is not immediately rejected, which would preclude considering its relevant attributes.

Attitude change

More recently, a third explanation of anchoring has been proposed concerning attitude change. According to this theory, providing an anchor changes someone's attitudes to be more favorable to the particular attributes of that anchor, biasing future answers to have similar characteristics as the anchor. Leading proponents of this theory consider it to be an alternate explanation in line with prior research on anchoring-and-adjusting and selective accessibility. [15][16]

Factors that influence anchoring


Mood

A wide range of research has linked sad or depressed moods with more extensive and accurate evaluation of problems. [17] As a result, earlier studies hypothesized that people with more depressed moods would tend to use anchoring less than those with happier moods. However, more recent studies have shown the opposite effect: sad people are more likely to use anchoring than people with a happy or neutral mood. [18]


Experience

Early research found that experts (those with high knowledge, experience, or expertise in some field) were more resistant to the anchoring effect. [19] Since then, however, numerous studies have demonstrated that while experience can sometimes reduce the effect, even experts are susceptible to anchoring. In a study concerning the effects of anchoring on judicial decisions, researchers found that even experienced legal professionals were affected by anchoring. This remained true even when the anchors provided were arbitrary and unrelated to the case in question. [20]


Personality

Research has correlated susceptibility to anchoring with most of the Big Five personality traits. People high in agreeableness and conscientiousness are more likely to be affected by anchoring, while those high in extroversion are less likely to be affected. [21] Another study found that those high in openness to new experiences were more susceptible to the anchoring effect. [22]

Cognitive ability

The impact of cognitive ability on anchoring is contested. A recent study on willingness to pay for consumer goods found that anchoring decreased in those with greater cognitive ability, though it did not disappear. [23] Another study, however, found that cognitive ability had no significant effect on how likely people were to use anchoring. [24]

Anchoring in negotiations

In negotiations, anchoring refers to the concept of setting a boundary that outlines the basic constraints for a negotiation; subsequently, the anchoring effect is the phenomenon in which we set our estimation for the true value of the item at hand. [25] In addition to the initial research conducted by Tversky and Kahneman, multiple other studies have shown that anchoring can greatly influence the estimated value of an object. [26] For instance, although negotiators can generally appraise an offer based on multiple characteristics, studies have shown that they tend to focus on only one aspect. In this way, a deliberate starting point can strongly affect the range of possible counteroffers. [10] The process of offer and counteroffer results in a mutually beneficial arrangement. However, multiple studies have shown that initial offers have a stronger influence on the outcome of negotiations than subsequent counteroffers. [27]



An example of the power of anchoring was demonstrated during the Strategic Negotiation Process Workshops, in which a group of participants is divided into two sections: buyers and sellers. Each side receives identical information about the other party before going into a one-on-one negotiation. Following this exercise, both sides debrief about their experiences. The results show that where participants anchor the negotiation has a significant effect on their success. [28]

Anchoring affects everyone, even people who are highly knowledgeable in a field. Northcraft and Neale conducted a study to measure the difference in the estimated value of a house between students and real-estate agents. In this experiment, both groups were shown a house and then given different listing prices. After making their offer, each group was then asked to discuss what factors influenced their decisions. In the follow-up interviews, the real-estate agents denied being influenced by the initial price, but the results showed that both groups were equally influenced by that anchor. [29]

Anchoring can have more subtle effects on negotiations as well. Janiszewski and Uy investigated the effect of the precision of an anchor. Participants read an initial price for a beach house, then gave the price they thought it was worth. They received either a general, seemingly nonspecific anchor (e.g. $800,000) or a more precise and specific anchor (e.g. $799,800). Participants with a general anchor adjusted their estimate more than those given a precise anchor ($751,867 vs $784,671). The authors propose that this effect comes from a difference in scale; in other words, the anchor affects not only the starting value but also the starting scale. When given a general anchor of $20, people will adjust in large increments ($19, $21, etc.), but when given a more specific anchor like $19.85, people will adjust on a finer scale ($19.75, $19.95, etc.). [30] Thus, a more specific initial price will tend to result in a final price closer to the initial one.


References

[1] Schkade, D. A., & Kahneman, D. (1998). "Does living in California make people happy? A focusing illusion in judgments of life satisfaction". Psychological Science, 9, 340-346.
[2] Kahneman, D., Krueger, A. B., Schkade, D., Schwarz, N., & Stone, A. A. (2006). "Would you be happier if you were richer? A focusing illusion". Science, 312(5782), 1908-1910. doi:10.1126/science.1129688.
[3] Tversky, A., & Kahneman, D. (1974). "Judgment under uncertainty: Heuristics and biases". Science, 185, 1124-1130.
[4] Teach, E. (1 June 2004). "Avoiding Decision Traps". CFO. Retrieved 29 May 2007.
[5] Strack, F., & Mussweiler, T. (1997). "Explaining the enigmatic anchoring effect: Mechanisms of selective accessibility". Journal of Personality and Social Psychology, 73(3), 437-446.
[6] Wilson, T. D., Houston, C. E., Etling, K. M., & Brekke, N. (1996). "A new look at anchoring effects: Basic anchoring and its antecedents". Journal of Experimental Psychology, 125(4), 387-402.
[7] Simmons, J., LeBoeuf, R., & Nelson, L. (2010). "The effect of accuracy motivation on anchoring and adjustment: Do people adjust from provided anchors?". Journal of Personality and Social Psychology, 99(6), 917-932.
[8] Furnham, A., & Boo, H. C. (2011). "A literature review of the anchoring effect". Journal of Socio-Economics, 40(1), 35-42.
[9] Epley, N., & Gilovich, T. (2005). "When effortful thinking influences judgmental anchoring: Differential effects of forewarning and incentives on self-generated and externally provided anchors". Journal of Behavioral Decision Making, 18, 199-212.
[10] Tversky, A., & Kahneman, D. (1992). "Advances in prospect theory: Cumulative representation of uncertainty". Journal of Risk and Uncertainty, 5, 297-323.
[11] Epley, N., & Gilovich, T. (2001). "Putting adjustment back into the anchoring and adjustment heuristic: Differential processing of self-generated and experimenter-provided anchors". Psychological Science, 12, 391-396.
[12] Mussweiler, T., & Strack, F. (1999). "Hypothesis-consistent testing and semantic priming in the anchoring paradigm: A selective accessibility model". Journal of Experimental Social Psychology, 35, 136-164.
[13] Mussweiler, T., & Englich, B. (2005). "Subliminal anchoring: Judgmental consequences and underlying mechanisms". Organizational Behavior and Human Decision Processes, 98, 133-143.
[14] Chapman, G. B., & Johnson, E. J. (1999). "Anchoring, activation, and the construction of values". Organizational Behavior and Human Decision Processes, 79, 139.
[15] Wegener, D. T., Petty, R. E., Detweiler-Bedell, B., & Jarvis, W. B. G. (2001). "Implications of attitude change theories for numerical anchoring: Anchor plausibility and the limits of anchor effectiveness". Journal of Experimental Social Psychology, 37, 62-69.
[16] Blankenship, K. L., Wegener, D. T., Petty, R. E., Detweiler-Bedell, B., & Macy, C. L. (2008). "Elaboration and consequences of anchored estimates: An attitudinal perspective on numerical anchoring". Journal of Experimental Social Psychology, 44, 1465-1476.
[17] Bodenhausen, G. V., Gabriel, S., & Lineberger, M. (2000). "Sadness and susceptibility to judgmental bias: The case of anchoring". Psychological Science, 11, 320-323.
[18] Englich, B., & Soder, K. (2009). "Moody experts: How mood and expertise influence judgmental anchoring". Judgment and Decision Making, 4, 41-50.
[19] Wilson, T. D., Houston, C. E., Etling, K. M., & Brekke, N. (1996). "A new look at anchoring effects: Basic anchoring and its antecedents". Journal of Experimental Psychology, 125, 387-402.
[20] Englich, B., Mussweiler, T., & Strack, F. (2006). "Playing dice with criminal sentences: The influence of irrelevant anchors on experts' judicial decision making". Personality and Social Psychology Bulletin, 32, 188-200.
[21] Eroglu, C., & Croxton, K. L. (2010). "Biases in judgmental adjustments of statistical forecasts: The role of individual differences". International Journal of Forecasting, 26, 116-133.
[22] McElroy, T., & Dowd, K. (2007). "Susceptibility to anchoring effects: How openness-to-experience influences responses to anchoring cues". Judgment and Decision Making, 2, 48-53.
[23] Bergman, O., Ellingsen, T., Johannesson, M., & Svensson, C. (2010). "Anchoring and cognitive ability". Economics Letters, 107, 66-68.
[24] Oechssler, J., Roider, S., & Schmitz, P. W. (2009). "Cognitive abilities and behavioural biases". Journal of Economic Behavior and Organization, 72, 147-152.
[25] Tversky, A., & Kahneman, D. (1974). "Judgment under uncertainty: Heuristics and biases". Science, 185, 1124-1130.
[26] Orr, D., & Guthrie, C. (2005). "Anchoring, information, expertise, and negotiation: New insights from meta-analysis". Ohio State Journal on Dispute Resolution, 21, 597.
[27] Kristensen, H., & Gärling, T. (1997). "The effects of anchor points and reference points on negotiation processes and outcomes". Göteborg Psychological Reports, 2, 8:27.
[28] Dietmeyer, B. (2004). Strategic Negotiation: A Breakthrough Four-Step Process for Effective Business Negotiation. New York: Kaplan Publishing.
[29] Northcraft, G. B., & Neale, M. A. (1987). "Experts, amateurs, and real estate: An anchoring-and-adjustment perspective on property pricing decisions". Organizational Behavior and Human Decision Processes, 39, 228-241.
[30] Janiszewski, C., & Uy, D. (2008). "Precision of the anchor influences the amount of adjustment". Psychological Science, 19(2), 121-127.

Attentional bias

Several types of cognitive bias occur due to an attentional bias. One example occurs when a person does not examine all possible outcomes when making a judgment about a correlation or association. They may focus on one or two possibilities, while ignoring the rest.

The most commonly studied type of decision for attentional bias is one in which there are two conditions (A and B), each of which can be present (P) or not present (N). This leaves four possible combinations: both present (AP/BP), both not present (AN/BN), only A present (AP/BN), and only B present (AN/BP). This can be shown in table form:


                 A Present    A Not Present
B Present        AP/BP        AN/BP
B Not Present    AP/BN        AN/BN
In everyday life, people are often subject to this type of attentional bias when asking themselves, "Does God answer prayers?" [1] Many would say "Yes" and justify it with "many times I've asked God for something, and He's given it to me." These people accept and overemphasize the data from the present/present (top-left) cell, whereas an unbiased person would also consider data from the other cells: "Has God ever given me something that I didn't ask for?" or "Have I asked God for something and not received it?" Examples like this support Smedslund's general conclusion that subjects tend to ignore part of the table.
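The logic of the full table can be made concrete with a short sketch. The counts below are hypothetical, chosen only for illustration: an unbiased judgment of whether A and B are associated compares P(B present | A present) with P(B present | A absent), which requires all four cells, whereas the biased judgment attends only to the present/present count.

```python
# Hypothetical counts for the four cells of the 2x2 table (illustration only).
counts = {
    ("A_present", "B_present"): 40,  # e.g., prayed and received
    ("A_present", "B_absent"):  20,  # prayed, did not receive
    ("A_absent",  "B_present"): 40,  # did not pray, received anyway
    ("A_absent",  "B_absent"):  20,  # did not pray, did not receive
}

def delta_p(c):
    """Contingency measure: P(B | A) - P(B | not A); zero means no association."""
    p_b_given_a = c[("A_present", "B_present")] / (
        c[("A_present", "B_present")] + c[("A_present", "B_absent")])
    p_b_given_not_a = c[("A_absent", "B_present")] / (
        c[("A_absent", "B_present")] + c[("A_absent", "B_absent")])
    return p_b_given_a - p_b_given_not_a

# The top-left cell alone (40 "answered prayers") looks impressive,
# but B is exactly as likely when A is absent: no association.
print(delta_p(counts))  # 0.0
```

Here a biased reasoner, looking only at the 40 in the top-left cell, would report a strong association where the full table shows none.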

Attentional biases can also influence what information people are likely to focus upon. For instance, patients with anxiety disorders [2] and chronic pain [3] show increased attention to information representing their concerns (i.e., angry and painful facial expressions, respectively) in studies using the dot-probe paradigm. It is important to note that two different forms of attentional bias may be measured. A 'within-subjects' bias occurs when an individual displays greater bias towards one type of information (e.g., painful faces) than towards another (e.g., neutral faces). A 'between-subjects' bias, alternatively, occurs when one group of participants displays greater bias than another group (e.g., chronic pain patients showing greater bias towards painful expressions than healthy control participants). These two types of bias arise from different mechanisms, and both are not always present in the same sample of participants. Another commonly used paradigm for measuring attentional biases is the Stroop paradigm.

Attentional Bias and Smoking

Recent research has found a strong correlation between smoking cues and attentional bias. These studies illustrate the importance of attentional bias in addiction and craving, and they shape how addiction is studied scientifically. The behavioral aspects of craving are extensively covered; recent research suggests that the perceptual and neurological aspects of attentional bias also play a significant role.

Smoking Cues

The Stroop paradigm is used in attentional bias research to distinguish types of smoking cues and their effect on smokers. Research using the Stroop paradigm tested the effect of smoking-related words (such as cigarette, puff, and smoke) against negative affect words (sick, pain, guilty), positive affect words (safe, glad, hopeful), and neutral words (tool, shovel, hammer). Results showed slower reaction times for both the smoking-related and the negative affect word lists. A slower reaction time indicates lingering attention, or attentional bias, toward those words. This is significant because the task calls for the participant to focus on the color of the word rather than its meaning, possibly implicating an underlying negative feeling toward their smoking behavior. [4] Smokers also show attentional bias to subliminal images, and are therefore more likely to be influenced by environmental cues such as seeing other people smoking, advertisements for cigarettes, or triggers such as coffee or alcohol. [5] This suggests that dependence on nicotine is reinforced by attentional bias to smoking cues. Smokers may hold underlying negative feelings toward smoking: when asked to think of the negative consequences of smoking, they showed less craving than those who were encouraged to smoke. [6] This illustrates the influence of environmental smoking cues and could contribute to a smoker's inability to quit.

Similar Stroop paradigm studies suggest that attentional bias does not depend on smoking itself, but rather on the person who is the smoker. A recent study required one group of smokers to refrain from smoking the night before and another to refrain for less than an hour beforehand. Abstinence from smoking produced slower reaction times, while a smoke break between study sessions speeded reaction times again. The researchers interpret this as showing that nicotine dependence intensifies attentional bias toward smoking cues, but that the bias does not depend directly on the act of smoking itself. [7] The longer reaction times suggest that smokers craving a cigarette linger on smoking-related words. [8] Smokers and smokers attempting to quit displayed the same slower reaction times for smoking-related words, [9] supporting research that implies attentional bias is a behavioral mechanism rather than a dependence mechanism.

Neurological Basis

Attentional bias, often seen in eye-tracking studies, is thought to be an underlying component of addiction: smokers linger on smoking cues compared with neutral cues. Researchers found higher activation in the insular cortex, the orbitofrontal cortex, and the amygdala when smokers were presented with smoking cues. The orbitofrontal cortex is associated with drug-seeking behavior, while the insular cortex and amygdala are involved in an individual's autonomic and emotional state. [10][11]

Neural activity is also known to decrease at the beginning of smoking, focusing the smoker's attention on the upcoming cigarette. Therefore, when smoking cues are nearby, it is harder for a smoker to concentrate on other tasks. This is seen in the activation of the dorsal anterior cingulate cortex, known for focusing attention on relevant stimuli. [12][13]


References

[1] Nisbett, R. E., & Ross, L. (1980). Human inference: Strategies and shortcomings of social judgment. Englewood Cliffs, N.J.: Prentice-Hall.
[2] Bar-Haim, Y., Lamy, D., Pergamin, L., Bakermans-Kranenburg, M. J., & van IJzendoorn, M. H. (2007). "Threat-related attentional bias in anxious and non-anxious individuals: A meta-analytic study". Psychological Bulletin.
[3] Schoth, D. E., & Liossi, C. (2010). "Attentional bias towards pictorial representations of pain in individuals with chronic headache". The Clinical Journal of Pain, 26(3), 244-250.
[4] Drobes, D. J., Elibero, A., & Evans, D. E. (2006). "Attentional bias for smoking and affective stimuli: A Stroop task study". Psychology of Addictive Behaviors, 20(4), 490-495. doi:10.1037/0893-164X.20.4.490.
[5] Yan, X., Jiang, Y., Wang, J., Deng, Y., He, S., & Weng, X. (2009). "Preconscious attentional bias in cigarette smokers: A probe into awareness modulation on attentional bias". Addiction Biology, 14(4), 478-488. doi:10.1111/j.1369-1600.2009.00172.x.
[6] Szasz, P. L., Szentagotai, A., & Hofmann, S. G. (2012). "Effects of emotion regulation strategies on smoking craving, attentional bias, and task persistence". Behaviour Research and Therapy, 50, 333-340.
[7] Canamar, C. P., & London, E. (2012). "Acute cigarette smoking reduces latencies on a Smoking Stroop test". Addictive Behaviors, 37(5), 627-631. doi:10.1016/j.addbeh.2012.01.017.
[8] Field, M., Munafò, M. R., & Franken, I. A. (2009). "A meta-analytic investigation of the relationship between attentional bias and subjective craving in substance abuse". Psychological Bulletin, 135(4), 589-607. doi:10.1037/a0015843.
[9] Cane, J. E., Sharma, D., & Albery, I. P. (2009). "The addiction Stroop task: Examining the fast and slow effects of smoking and marijuana-related cues". Journal of Psychopharmacology, 23(5), 510-519. doi:10.1177/0269881108091253.
[10] Janes, A. C., Pizzagalli, D. A., Richardt, S., Frederick, B. D. B., Holmes, A. J., Sousa, J., ... Kaufman, M. J. (2012). "Neural substrates of attentional bias for smoking-related cues: An fMRI study". Neuropsychopharmacology, 35, 2339-2345.
[11] Kang, O.-S., Chang, D.-S., Jahng, G.-H., Kim, S.-Y., Kim, H., Kim, J.-W., Chung, S.-Y., Yang, S.-I., et al. (2012). "Individual differences in smoking-related cue reactivity in smokers: An eye-tracking and fMRI study". Progress in Neuro-Psychopharmacology and Biological Psychiatry, 38(2), 285-293. doi:10.1016/j.pnpbp.2012.04.013.
[12] Luijten, M., Veltman, D., van den Brink, W., Hester, R., Field, M., Smits, M., & Franken, I. (2011). "Neurobiological substrate of smoking-related attentional bias". NeuroImage, 54(3), 2374-2381. doi:10.1016/j.neuroimage.2010.09.064.
[13] Stippekohl, B., Walter, B., Winkler, M. H., Mucha, R. F., Pauli, P., Vaitl, D., & Stark, R. (2012). "An early attentional bias to BEGIN stimuli of the smoking ritual is accompanied with mesocorticolimbic deactivations in smokers". Psychopharmacology, 222, 593-607.

Further reading

• Baron, Jonathan (2000). Thinking and Deciding (3rd edition). Cambridge University Press.

• Smith, N. K., Chartrand, T. L., Larsen, J. T., Cacioppo, J. T., Katafiasz, H. A., & Moran, K. E. (2006). "Being bad isn't always good: Affective context moderates the attention bias towards negative information". Journal of Personality and Social Psychology, 90(2), 210-220. doi:10.1037/0022-3514.90.2.210. PMID 16536647.

Availability heuristic



The availability heuristic is a mental shortcut that occurs when people judge the probability of events by the ease with which examples come to mind. It operates on the notion that "if you can think of it, it must be important." The availability of consequences associated with an action is positively related to perceptions of the magnitude of those consequences; in other words, the easier it is to recall the consequences of something, the greater we perceive those consequences to be. Sometimes this heuristic is beneficial, but the frequency with which events come to mind is usually not an accurate reflection of their actual probability in real life. [1] For example, if someone asked you whether your college had more students from Colorado or more from California, under the availability heuristic you would probably answer based on the relative availability of examples of Colorado and California students. If you can recall more California students among the people you know, you will be more likely to conclude that more students at your college are from California than from Colorado. [2]

Overview and History

When faced with the difficult task of judging probability or frequency, people use a limited number of strategies, called heuristics, to simplify these judgments. One of these strategies, the availability heuristic, is the tendency to judge the frequency of an event by how easy it is to recall similar instances. [1] Amos Tversky and Daniel Kahneman first studied this phenomenon in 1973 and labeled it the availability heuristic. It is an unconscious process that operates on the notion that "if you can think of it, it must be important." [1] In other words, how easily an example can be called to mind is related to perceptions of how often the event occurs. Thus, people tend to base their beliefs about a relatively distant concept on a readily accessible attribute. [3]

In an experiment to test this heuristic, Tversky and Kahneman presented participants with four lists of names: two lists with the names of 19 famous women and 20 less famous men, and two lists with the names of 19 famous men and 20 less famous women. The first group was asked to recall as many names as possible, and the second group was asked to estimate which class of names was more frequent: famous or less famous. The famous names were more easily recalled than the less famous names, and although the less famous names were more frequent, the majority of participants incorrectly judged that the famous names occurred more often. While the availability heuristic is an effective strategy in many situations, when judging probability it can lead to systematic errors. [1]


In a study by Schwarz et al., participants were asked to describe either 6 or 12 examples of assertive, or unassertive behavior. Participants were later asked to rate their own assertiveness. The results indicated that participants rated themselves as more assertive after describing 6, rather than 12, examples for the assertive behavior condition, and conversely rated themselves as less assertive after describing 6, rather than 12, examples for the unassertive behavior condition. The study reflected that the recalled content was qualified by the ease with which the content could be brought to mind (it was easier to recall 6 examples than 12). [4]

In another study, subjects were asked, "If a random word is taken from an English text, is it more likely that the word starts with a K, or that K is the third letter?" Most English speakers can immediately think of many words that begin with the letter "K" (kangaroo, kitchen, kale), but it takes more concentrated effort to think of words where "K" is the third letter (acknowledge). Results indicated that participants overestimated the number of words that began with the letter "K" and underestimated the number of words that had "K" as the third letter. Researchers concluded that people answer questions like these by comparing the availability of the two categories and assessing how easily they can recall these instances. In other words, it is easier to think of words that begin with "K" than words with "K" as the third letter, so people judge words beginning with "K" to be more common. In reality, however, a typical text contains twice as many words that have "K" as the third letter as words that begin with "K", and there are three times as many words with "K" in the third position as in the first. [1]
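The base rates behind this example are easy to check mechanically. The sketch below counts distinct words with "k" as the first versus the third letter; the sample sentence is made up for illustration, and any large corpus could be substituted for a meaningful ratio.

```python
def k_position_counts(text):
    """Count distinct words of length >= 3 with 'k' as first vs. third letter."""
    words = {w.lower() for w in text.split() if len(w) >= 3 and w.isalpha()}
    first = sum(1 for w in words if w[0] == "k")
    third = sum(1 for w in words if w[2] == "k")
    return first, third

# Tiny stand-in text; even here, third-position "k" outnumbers first-position "k".
sample = "the monkey will ask and acknowledge that kings like khaki jokes"
print(k_position_counts(sample))  # (2, 4)
```

Recalling words is hard precisely because memory is indexed by initial sounds; a mechanical count like this has no such index, so it reveals the true base rates directly.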

Chapman (1967) described a bias in the judgment of the frequency with which two events co-occur: the co-occurrence of paired stimuli led participants to overestimate the frequency of the pairings. [5] To test this idea, participants were given information about several hypothetical mental patients. The data for each patient consisted of a clinical diagnosis and a drawing made by the patient. Later, participants estimated the frequency with which each diagnosis had been accompanied by various features of the drawing. The subjects vastly overestimated the frequency of co-occurrences (such as suspiciousness and peculiar eyes), an effect labeled the illusory correlation. Tversky and Kahneman suggested that availability provides a natural account for the illusory-correlation effect: the strength of the association between two events serves as the basis for judging how frequently they co-occur, so when the association is strong, people are more likely to conclude that the events have frequently been paired. [1]

Research in 1992 used mood manipulation to influence the availability heuristic by placing participants into a sad mood condition or a happy mood condition. People in the sad mood condition recalled better than those in the happy mood condition, revealing that the power of the availability heuristic changes in certain conditions. [6]


• A person claims to a group of friends that those who drive red cars receive more speeding tickets. The group agrees with the statement because a member of the group drives a red car and frequently receives speeding tickets. The reality could be that he just drives fast and would receive a speeding ticket regardless of the color of car that he drove. Even if statistics show fewer speeding tickets were given to red cars than to other colors of cars, he is an available example which makes the statement seem more plausible. [7]

• Where an anecdote ("I know a Brazilian man who…") is used to "prove" an entire proposition or to support a bias, the availability heuristic is in play. In these instances the ease of imagining an example, or the vividness and emotional impact of that example, becomes more credible than the actual statistical probability. Because an example is easily brought to mind or mentally "available," the single example is treated as representative of the whole rather than as just one point in a range of data. [1] A specific example would be a person who argues that cigarette smoking is not unhealthy because his grandfather smoked three packs of cigarettes a day and lived to be 100. The grandfather's health could simply be an unusual case that does not speak to the health of smokers in general. [8]

• A person sees several news stories of cats leaping out of tall trees and surviving, so he believes that cats must be robust to long falls. However, these kinds of news reports are far more common than reports of a cat falling out of a tree and dying, which may in fact be the more common event. [1]

• A recent newspaper subscriber might compare the number of newspapers delivered versus those that were not delivered in order to estimate the delivery failure rate. The calculation then depends on the number of incidents recalled, and it is hard to recall all specific instances over an extensive period of time. [9]

• After seeing many news stories of home foreclosures, people may judge that the likelihood of this event is greater. This may be because it is easier to think of examples of the event. [1]





• After seeing news stories about child abductions, people may judge that the likelihood of this event is greater. Media coverage can fuel this bias through widespread and extensive coverage of unusual events, such as homicides or airline accidents, and less coverage of more routine, less sensational events, such as common diseases or car accidents. For example, when asked to rate the probability of various causes of death, people tend to rate "newsworthy" events as more likely because they can more readily recall an example from memory. In the USA, people rate the chance of death by homicide higher than the chance of death by stomach cancer, even though death by stomach cancer is five times more common than death by homicide. Moreover, unusual and vivid events like homicides, shark attacks, or lightning strikes are reported in the mass media far more often than common, unsensational causes of death like common diseases. [10]

For example, many people think that the likelihood of dying from a shark attack is greater than that of dying from being hit by falling airplane parts, even though more people actually die from falling airplane parts. Shark attack deaths are widely reported in the media, whereas deaths from falling airplane parts rarely are. [11]

In a 2010 study exploring how vivid television portrayals are used when forming social reality judgments, people watching vivid violent media gave higher estimates of the prevalence of crime and police immorality in the real world than those not exposed to vivid television. These results suggest that television violence has a direct causal impact on participants' social reality beliefs: repeated exposure to vivid violence increases people's estimates of the prevalence of crime and violence in the real world. [12] Counter to these findings, researchers in a similar study argued that such effects may instead reflect the impact of new information. They tested this by showing films depicting dramatic risk events and measuring participants' risk assessments afterward. Contrary to the previous research, exposure to the dramatic films had no effect on risk perception. [13]


Researchers in the Department of Psychology at Nancy University, France, examined the impact of the availability heuristic on perceptions of health-related events: lifetime risk of breast cancer, subjective life expectancy, and subjective age of onset of menopause. [14] For each event, three conditions were set up: control, anchoring heuristic, and availability heuristic. The findings revealed that both availability and anchoring were used to estimate personal health-related events. Hypochondriac tendencies, optimism, depressive mood, subjective health, internal locus of control, and recall of information had a significant impact on judgments of riskiness, and availability also affected perceived health risks. [14]

In another study, risk assessments of contracting breast cancer were based on experiences with an abnormal breast symptom and experiences with affected family members and friends. Researchers analyzed interviews with women talking about their own breast cancer risk and found that the availability, simulation, representativeness, affect, and perceived-control heuristics, along with search strategies, were most frequently used for making risk assessments. [15]

Researchers examined the role of cognitive heuristics in the AIDS risk-assessment process. 331 physicians reported worry about on-the-job HIV exposure, and experience with patients who have HIV. They tested to see if participants used the availability heuristic by analyzing their response to questions about talking and reading about AIDS. Availability of AIDS information did not relate strongly to perceived risk. Availability was not significantly related to worry after variance associated with simulation and experience with AIDS was removed. [16]

Participants in a 1992 study read case descriptions of hypothetical patients who varied in their sex and sexual preference. These hypothetical patients showed symptoms of two different diseases. Participants were instructed to indicate which disease they thought each patient had and then rated patient responsibility and interactional desirability. Consistent with the availability heuristic, either the more common (influenza) or the more publicized (AIDS) disease was chosen. [17]

Business and Economy

A previous study sought to analyze the role of the availability heuristic in financial markets. Researchers defined and tested two aspects of the availability heuristic: [18]

1. Outcome availability: the availability of positive and negative investment outcomes

2. Risk availability: the availability of financial risk [18]

Researchers tested the availability effect on investors' reactions to analyst recommendation revisions and found that positive stock price reactions to recommendation upgrades are stronger when accompanied by positive stock market index returns. On the other hand, negative stock price reactions to recommendation downgrades are stronger when accompanied by negative stock market index returns. On days of substantial stock market moves, abnormal stock price reactions to upgrades are weaker, and abnormal stock price reactions to downgrades are stronger. These availability effects are still significant even after controlling for event-specific and company-specific factors. [18]

Similarly, research has pointed out that under the availability heuristic, humans are not reliable probability assessors: they overweight current or easily recalled information instead of processing all relevant information. Since information about the current state of the economy is readily available, researchers used the properties of business cycles to predict the availability bias in analysts' growth forecasts. The availability heuristic was shown to play a role in these forecasts and, through them, to influence investments. [19]

Additionally, a study by Hayibor and Wasieleski found that the availability of others who believe that a particular act is morally acceptable is positively related to perceptions of the morality of that act. This suggests that the availability heuristic also affects ethical decision making and ethical behavior in organizations. [20]


A study by Craig R. Fox provides an example of how the availability heuristic can work in the classroom. Fox tested whether difficulty of recall influences judgment, specifically in course evaluations among college students. Two groups completed a course evaluation form. The first group was asked to write two recommended improvements for the course (a relatively easy task) and then two positive comments about the class. The second group was asked to write ten suggestions for improvement (a relatively difficult task) and then two positive comments about the course. At the end of the evaluation, both groups rated the course on a scale from one to seven. Students asked to write ten suggestions (the difficult task) rated the course less harshly, because it was more difficult for them to recall negative information. Students asked for only two complaints had less difficulty in terms of availability of information, resulting in a harsher rating of the course. [21]

Criminal Justice

The media usually focus on violent or extreme cases, which are therefore more readily available in the public's mind. This may come into play when the judicial system must evaluate and determine the proper punishment for a crime. In a previous study, respondents rated how much they agreed with hypothetical laws and policies, such as "Would you support a law that required all offenders convicted of unarmed muggings to serve a minimum prison term of two years?" Participants then read cases and rated each case on several questions about punishment. As hypothesized, respondents recalled more easily from long-term memory those stories that contained severe harm, which seemed to push their sentencing choices toward harsher punishments. This effect could be eliminated by adding highly concrete or contextually distinct details to the crime stories about less severe injuries. [22]



A similar study asked jurors and college students to choose sentences in four severe criminal cases in which prison was a possible but not inevitable sentencing outcome. Respondents answering questions about court performance on a public opinion survey formulated a picture of what the courts do and then evaluated the appropriateness of that behavior. Respondents drew on public information about crime and sentencing. This type of information is incomplete, because the news media present a highly selective and non-representative selection of crime, focusing on the violent and extreme rather than the ordinary. This makes most people think that judges are too lenient. However, when asked to choose the punishments themselves, the students gave sentences equal to or less severe than those given by judges. In other words, the availability heuristic made people believe that judges and jurors were too lenient in the courtroom, yet the participants gave similar sentences when placed in the position of the judge, suggesting that the information they had recalled was not representative. [23]

Researchers in 1989 predicted that mock jurors would rate a witness as more deceptive if the witness testified truthfully before lying than if the witness was caught lying before telling the truth. If the availability heuristic played a role in this, the lie told second would remain in jurors' minds, and they would be more likely to remember the witness lying than telling the truth. To test the hypothesis, 312 university students played the roles of mock jurors and watched a videotape of a witness presenting testimony during a trial. The results confirmed the hypothesis: mock jurors were most influenced by the most recent act. [24]


Some researchers have suggested that perceived causes or reasons for an event, rather than imagery of the event itself, influence probability estimates. [25] Evidence for this notion stems from a study in which participants either imagined the winner of the 1984 U.S. presidential debate between Ronald Reagan and Walter Mondale, or came up with reasons why Reagan or Mondale would win it. The results showed that imagining Reagan or Mondale winning the debate had no effect on predictions of who would win. However, considering reasons why Reagan or Mondale would win did significantly affect predictions. [25]

Other psychologists argue that the classic studies on the availability heuristic are vague and do not explain the underlying processes. [26] For example, regarding the famous Tversky and Kahneman study, Wanke et al. argue that the differential ease of recall may alter subjects' frequency estimates in two different ways. In the first, as the availability heuristic suggests, subjects use the subjective experience of ease or difficulty of recall as a basis for judgment; on this account, they would report a higher frequency when the recall task is experienced as easy rather than difficult. In a contrasting scenario, subjects recall as many words of each type as possible within the time given and base their judgment on the recalled sample of words. If words beginning with a certain letter are easier to recall, they will be over-represented in the recalled sample, again producing a prediction of higher frequency. In the second scenario the estimate is based on recalled content rather than on the subjective experience of ease of recall. [26]

Some researchers have shown concern about confounding variables in the original Tversky and Kahneman study. [4] Researchers question if the participants recalling celebrity names were basing frequency estimates on the amount of content recalled or on the ease of recall. Some researchers suggest that the design of the earlier experiment was flawed and did not actually determine how the availability heuristic works. [4]

Recent research has provided some evidence that the availability heuristic is only one of many strategies involved in frequency judgment. [27] Future research should attempt to incorporate all these factors.




References

[1] Tversky, A.; Kahneman, D. (1973). "Availability: A heuristic for judging frequency and probability". Cognitive Psychology 5 (1): 207–233.


[2] Matlin, Margaret (2009). Cognition. Hoboken, NJ: John Wiley & Sons, Inc. p. 413. ISBN 978-0-470-08764-0.
[3] Kahneman, D.; Tversky, A. (January 1982). "The psychology of preferences". Scientific American 246: 160–173.
[4] Schwarz, N.; Strack, F.; Bless, H.; Klumpp, G.; Rittenauer-Schatka, H.; Simons, A. (1991). "Ease of retrieval as information: Another look at the availability heuristic". Journal of Personality and Social Psychology 61 (2): 195–202.
[5] Chapman, L.J. (1967). "Illusory correlation in observational report". Journal of Verbal Learning 6: 151–155.
[6] Colin, M.; Campbell, L. (1992). "Memory accessibility and probability of judgements: An experimental evaluation of the availability heuristic". Journal of Personality and Social Psychology 63 (6): 890–902.
[7] Manis, Melvin; Shelder, J.; Jonides, J.; Nelson, N.E. (1993). "Availability Heuristic in Judgments of Set Size and Frequency of Occurrence". Journal of Personality and Social Psychology 65 (3): 448–457.
[8] Esgate, A.; Groome, D. (2004). An Introduction to Applied Cognitive Psychology. Psychology Press. ISBN 1841693170.
[9] Folkes, Valerie S. (June 1988). "The Availability Heuristic and Perceived Risk". Journal of Consumer Research 15 (1).
[10] Briñol, P.; Petty, R.E.; Tormala, Z.L. (2006). "The malleable meaning of subjective ease". Psychological Science 17: 200–206.


[11] Read, J.D. (1995). "The availability heuristic in person identification: The sometimes misleading consequences of enhanced contextual information". Applied Cognitive Psychology 9: 91–121.
[12] Riddle, Karen (2010). "Always on My Mind: Exploring How Frequent, Recent, and Vivid Television Portrayals Are Used in the Formation of Social Reality Judgments". Media Psychology 13: 155–179. doi:10.1080/15213261003800140.
[13] Sjoberg, Lennart; Engelberg, E. (2010). "Risk Perception and Movies: A Study of Availability as a Factor in Risk Perception". Risk Analysis 30 (1): 95–106. doi:10.1111/j.1539-6924.2009.01335.x.
[14] Gana, K.; Lourel, M.; Trouillet, R.; Fort, I.; Mezred, D.; Blaison, C.; Boujemadi, V.; K'Delant, P.; Ledrich, J. (2010). "Judgment of riskiness: Impact of personality, naive theories and heuristic thinking among female students". Psychology and Health 25 (2): 131–147.


[15] Katapodi, M.C.; Facione, N.C.; Humphreys, J.C.; Dodd, M.J. (2005). "Perceived breast cancer risk: Heuristic reasoning and search for a dominance structure". Social Science & Medicine 60 (2): 421–432.
[16] Heath, Linda; Acklin, M.; Wiley, K. (1991). "Cognitive heuristics and AIDS risk assessment among physicians". Journal of Applied Social Psychology 21 (22): 1859–1867.
[17] Triplet, R.G. (1992). "Discriminatory biases in the perception of illness: The application of availability and representativeness heuristics to the AIDS crisis". Basic and Applied Social Psychology 13 (3): 303–322.
[18] Klinger, D.; Kudryavtsev, A. (2010). "The availability heuristic and investors' reactions to company-specific events". The Journal of Behavioral Finance 11: 50–65. doi:10.1080/15427561003591116.
[19] Lee, B.; O'Brien, J.; Sivaramakrishnan, K. (2008). "An Analysis of Financial Analysts' Optimism in Long-term Growth Forecasts". The Journal of Behavioral Finance 9: 171–184. doi:10.1080/15427560802341889.
[20] Hayibor, S.; Wasieleski, D.M. (2009). "Effects of the use of availability". Journal of Business Ethics 84: 151–165.


[21] Fox, Craig R. (July 2006). "The availability heuristic in the classroom: How soliciting more criticism can boost your course ratings". Judgment and Decision Making 1 (1): 86–90.
[22] Stalans, L.J. (1993). "Citizens' crime stereotypes, biased recall, and punishment preferences in abstract cases". Law and Human Behavior 17


[23] Diamond, S.S.; Stalans, L.J. (1989). "The myth of judicial leniency in sentencing". Behavioral Sciences & the Law 7: 73–89.
[24] DeTurck, M.A.; Texter, L.A.; Harszlak, J.J. (1989). "Effects of information processing objectives on judgments of deception following perjury". Communication Research 16 (3): 434–452.
[25] Levi, A.; Pryor, J.B. (1987). "Use of the availability heuristic in probability estimates of future events: The effects of imagining outcomes versus imagining reasons". Organizational Behavior & Human Performance 40 (2).
[26] Wanke, M.; Schwarz, N.; Bless, H. (1995). "The availability heuristic revisited: Experienced ease of retrieval in mundane frequency estimates". Acta Psychologica 89: 83–90.
[27] Hulme, C.; Roodenrys, S.; Brown, G.; Mercer, R. (1995). "The role of long-term memory mechanisms in memory span". British Journal of Psychology 86 (4): 527–536. doi:10.1111/j.2044-8295.1995.tb02570.x.



External links

• How Belief Works - an article on the origins of the availability bias.

Availability cascade

An availability cascade is a self-reinforcing cycle that explains the development of certain kinds of collective beliefs. A novel idea or insight, usually one that seems to explain a complex process in a simple or straightforward manner, gains rapid currency in the popular discourse by its very simplicity and by its apparent insightfulness. Its rising popularity triggers a chain reaction within the social network: individuals adopt the new insight because other people within the network have adopted it, and on its face it seems plausible. The reason for this increased use and popularity of the new idea involves both the availability of the previously obscure term or idea, and the need of individuals using the term or idea to appear to be current with the stated beliefs and ideas of others, regardless of whether they in fact fully believe in the idea that they are expressing. Their need for social acceptance, and the apparent sophistication of the new insight, overwhelm their critical thinking.
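The self-reinforcing adoption dynamic described above can be made concrete with a toy simulation. The sketch below uses a Granovetter-style threshold model (an illustrative substitute, not Kuran and Sunstein's own formalism): each individual adopts the idea once the fraction of prior adopters reaches that individual's personal threshold, so a single low-threshold enthusiast can trigger a full cascade that a slightly more skeptical population never starts.

```python
def cascade_size(thresholds):
    """Run a threshold-adoption cascade to a fixed point.

    Each agent adopts the idea once the current fraction of adopters
    in the population meets or exceeds that agent's personal threshold.
    Returns the final number of adopters.
    """
    n = len(thresholds)
    adopted = set()
    while True:
        fraction = len(adopted) / n
        newly = {i for i, t in enumerate(thresholds)
                 if i not in adopted and t <= fraction}
        if not newly:
            return len(adopted)
        adopted |= newly

n = 100
# Thresholds 0.00, 0.01, ..., 0.99: one zero-threshold innovator starts
# a chain reaction in which each new adopter tips the next agent.
uniform = [i / n for i in range(n)]
print(cascade_size(uniform))    # full cascade: 100

# Raise every threshold by 0.2: no one is willing to adopt first,
# so the cascade never begins.
skeptical = [0.2 + i / n for i in range(n)]
print(cascade_size(skeptical))  # no cascade: 0
```

In the first population the growth is driven entirely by prior adoption, mirroring the cycle in which individuals adopt the new insight because other people within the network have adopted it.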

The idea of the availability cascade was first developed by Timur Kuran and Cass Sunstein, building upon the concept of information cascades and on the availability bias as identified by Daniel Kahneman and Amos Tversky.

The concept has been highly influential in finance theory and regulatory research. [1]


[1] Availability Cascades and Risk Regulation

Confirmation bias

Confirmation bias (also called confirmatory bias or myside bias) is a tendency of people to favor information that confirms their beliefs or hypotheses. [1][2] People display this bias when they gather or remember information selectively, or when they interpret it in a biased way. The effect is stronger for emotionally charged issues and for deeply entrenched beliefs. For example, in reading about current political issues, people usually prefer sources that affirm their existing attitudes. They also tend to interpret ambiguous evidence as supporting their existing position. Biased search, interpretation and memory have been invoked to explain attitude polarization (when a disagreement becomes more extreme even though the different parties are exposed to the same evidence), belief perseverance (when beliefs persist after the evidence for them is shown to be false), the irrational primacy effect (a greater reliance on information encountered early in a series) and illusory correlation (when people falsely perceive an association between two events or situations).

A series of experiments in the 1960s suggested that people are biased toward confirming their existing beliefs. Later work re-interpreted these results as a tendency to test ideas in a one-sided way, focusing on one possibility and ignoring alternatives. In certain situations, this tendency can bias people's conclusions. Explanations for the observed biases include wishful thinking and the limited human capacity to process information. Another explanation is that people show confirmation bias because they are weighing up the costs of being wrong, rather than investigating in a neutral, scientific way.

Confirmation biases contribute to overconfidence in personal beliefs and can maintain or strengthen beliefs in the face of contrary evidence. Poor decisions due to these biases have been found in military, political, and organizational contexts.

Confirmation bias



Confirmation biases are effects in information processing, distinct from the behavioral confirmation effect, also called "self-fulfilling prophecy", in which people's expectations affect their behaviour to make the expectations come true. [3] Some psychologists use "confirmation bias" to refer to any way in which people avoid rejecting a belief, whether in searching for evidence, interpreting it, or recalling it from memory. Others restrict the term to selective collection of evidence. [4][5]

Biased search for information

Experiments have repeatedly found that people tend to test hypotheses in a one-sided way, by searching for evidence consistent with the hypothesis they hold at a given time. [7][8] Rather than searching through all the relevant evidence, they ask questions that are phrased so that an affirmative answer supports their hypothesis. [9] They look for the consequences that they would expect if their hypothesis were true, rather than what would happen if it were false. [9] For example, someone who is trying to identify a number using yes/no questions and suspects that the number is 3 might ask, "Is it an odd number?" People prefer this sort of question, called a "positive test", even when a negative test such as "Is it an even number?" would yield exactly the same information. [10] However, this does not mean that people seek tests that are guaranteed to give a positive answer. In studies where subjects could select either such pseudo-tests or genuinely diagnostic ones, they favored the genuinely diagnostic. [11][12]
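The claim that the positive and negative tests yield exactly the same information can be checked with a short calculation. The sketch below (an illustrative example, not taken from the original studies) measures the expected remaining uncertainty, in bits, after asking each yes/no question about a number drawn uniformly from 1 to 10:

```python
import math

def expected_entropy(hypotheses, predicate):
    """Expected entropy (in bits) over the hypotheses that remain
    after asking a yes/no question defined by `predicate`."""
    yes = [h for h in hypotheses if predicate(h)]
    no = [h for h in hypotheses if not predicate(h)]
    total = len(hypotheses)
    remaining = 0.0
    for group in (yes, no):
        if group:
            p = len(group) / total          # probability of this answer
            remaining += p * math.log2(len(group))  # entropy left afterwards
    return remaining

numbers = range(1, 11)  # the unknown number is one of 1..10
positive = expected_entropy(numbers, lambda x: x % 2 == 1)  # "Is it odd?"
negative = expected_entropy(numbers, lambda x: x % 2 == 0)  # "Is it even?"
print(positive == negative)  # True: both questions induce the same split
```

Both questions partition the ten candidates into the same two five-element groups, so the expected uncertainty after either answer is identical; the preference for the "positive" phrasing cannot be justified on informational grounds.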


Confirmation bias has been described as an internal "yes man", echoing back a person's beliefs like Charles Dickens' character Uriah Heep. [6]

The preference for positive tests is not itself a bias, since positive tests can be highly informative. [13] However, in conjunction with other effects, this strategy can confirm existing beliefs or assumptions, independently of whether they are true. [14] In real-world situations, evidence is often complex and mixed. For example, various contradictory ideas about someone could each be supported by

concentrating on one aspect of his or her behavior. [8] Thus any search for evidence in favor of a hypothesis is likely

to succeed. [14] One illustration of this is the way the phrasing of a question can significantly change the answer. [8]

For example, people who are asked, "Are you happy with your social life?" report greater satisfaction than those asked, "Are you unhappy with your social life?" [15]

Even a small change in the wording of a question can affect how people search through available information, and

hence the conclusions they reach. This was shown using a fictional child custody case. [16] Subjects read that Parent

A was moderately suitable to be the guardian in multiple ways. Parent B had a mix of salient positive and negative

qualities: a close relationship with the child but a job that would take him or her away for long periods. When asked, "Which parent should have custody of the child?" the subjects looked for positive attributes and a majority chose Parent B. However, when the question was, "Which parent should be denied custody of the child?" they looked for negative attributes, but again a majority answered Parent B, implying that Parent A should have custody. [16]

Similar studies have demonstrated how people engage in biased search for information, but also that this phenomenon may be limited by a preference for genuine diagnostic tests, where they are available. In an initial

experiment, subjects had to rate another person on the introversion-extroversion personality dimension on the basis

of an interview. They chose the interview questions from a given list. When the interviewee was introduced as an

introvert, the subjects chose questions that presumed introversion, such as, "What do you find unpleasant about noisy

Confirmation bias


parties?" When the interviewee was described as extroverted, almost all the questions presumed extroversion, such as, "What would you do to liven up a dull party?" These loaded questions gave the interviewees little or no opportunity to falsify the hypothesis about them. [17] However, a later version of the experiment gave the subjects less presumptive questions to choose from, such as, "Do you shy away from social interactions?" [18] Subjects preferred to ask these more diagnostic questions, showing only a weak bias towards positive tests. This pattern, of a main preference for diagnostic tests and a weaker preference for positive tests, has been replicated in other studies. [18]

Another experiment gave subjects a particularly complex rule-discovery task involving moving objects simulated by a computer. [19] Objects on the computer screen followed specific laws, which the subjects had to figure out. They could "fire" objects across the screen to test their hypotheses. Despite making many attempts over a ten-hour session, none of the subjects worked out the rules of the system. They typically sought to confirm rather than falsify their hypotheses, and were reluctant to consider alternatives. Even after seeing evidence that objectively refuted their working hypotheses, they frequently continued doing the same tests. Some of the subjects were instructed in proper hypothesis-testing, but these instructions had almost no effect. [19]

Biased interpretation

"Smart people believe weird things because they are skilled at defending beliefs they arrived at for non-smart reasons."

Michael Shermer

Confirmation biases are not limited to the collection of evidence. Even if two individuals have the same information, the way they interpret it can be biased.


A team at Stanford University ran an experiment with subjects who felt strongly about capital punishment, with half in favor and half against. [21][22] Each of these subjects read descriptions of two studies: a comparison of U.S. states with and without the death penalty, and a comparison of murder rates in a state before and after the introduction of the death penalty. After reading a quick description of each study, the subjects were a