
Irrationality

A Handbook

PDF generated using the open source mwlib toolkit. See http://code.pediapress.com/ for more information. PDF generated at: Sun, 16 Oct 2011 13:58:57 UTC

Contents

Articles

Introduction
  Irrationality
  Fallacy
  Heuristic

Heuristics and Fallacies
  Affect heuristic
  Anchoring
  Availability heuristic
  Contagion heuristic
  Effort heuristic
  Escalation of commitment
  Familiarity heuristic
  Fluency heuristic
  Gambler's fallacy
  Gaze heuristic
  Naive diversification
  Peak-end rule
  Recognition heuristic
  Representativeness heuristic
  Scarcity heuristic
  Similarity heuristic
  Simulation heuristic
  Social proof
  Take-the-best heuristic

Related Ideas
  Aestheticism
  Attribute substitution
  Bounded rationality
  Cognitive bias
  List of cognitive biases
  Dysrationalia
  Rational emotive behavior therapy
  Self-serving bias

References
  Article Sources and Contributors
  Image Sources, Licenses and Contributors

Article Licenses
  License
Introduction
Irrationality
Irrationality is cognition, thinking, talking, or acting without the inclusion of rationality. It is more specifically described as an action or opinion arrived at through inadequate reasoning, emotional distress, or cognitive deficiency. The term is used, usually pejoratively, to describe thinking and actions that are, or appear to be, less useful or more illogical than other, more rational alternatives. Women have historically been stereotyped as more irrational than men.[1][2]

Irrational behaviors of individuals include taking offense or becoming angry about a situation that has not yet occurred, expressing emotions exaggeratedly (such as crying hysterically), maintaining unrealistic expectations, engaging in irresponsible conduct such as problem intoxication, disorganization, or extravagance, and falling victim to confidence tricks. People with a mental illness such as schizophrenia may exhibit irrational paranoia.

These more contemporary "normative" conceptions of what constitutes a manifestation of irrationality are difficult to demonstrate empirically, because it is not clear by whose standards we are to judge the behavior rational or irrational.

Explanation of occurrence
The study of irrational behavior is of interest in fields such as psychology, cognitive science, economics, game theory, and evolutionary psychology, as well as of practical interest to practitioners of advertising and propaganda. Theories of irrational behavior include:

- People's actual interests differ from what they believe to be their interests.
- Mechanisms that have evolved to give optimal behavior in normal conditions lead to irrational behavior in abnormal conditions.
- Situations outside one's ordinary circumstances, where one may experience intense fear or may regress to a fight-or-flight mentality.
- People fail to realize the irrationality of their actions and believe they are acting perfectly rationally, possibly due to flaws in their reasoning.
- Apparently irrational decisions that are actually optimal, but made unconsciously on the basis of "hidden" interests that are not known to the conscious mind.
- An inability to comprehend the social consequences of one's own actions, possibly due in part to a lack of empathy. Some people find themselves in this condition by living "double" lives, putting on one "mask" for one group of people and another for a different group. Many will become confused as to which they really are or which they wish to become.

Factors which affect rational behavior include:

- stress, which in turn may be emotional or physical
- the introduction of a new or unique situation
- intoxication
- peers who convey irrational thoughts as necessary idiosyncrasy for social acceptance


Intentional irrationality
Irrationality is not always viewed as a negative. The Dada and Surrealist art movements embraced irrationality as a means to "reject reason and logic". André Breton, for example, argued for a rejection of pure logic and reason, which he saw as responsible for many contemporary social problems.[3] In science fiction literature, the progress of pure rationality is sometimes portrayed as leading civilization toward a scientific future dependent on technology; irrationality in this case is a positive factor that helps to balance excessive reason. In psychology, excessive rationality without creativity may be viewed as a form of self-control and protection. Certain problems, such as death and loss, may have no rational solution while they are being experienced. We may seek logical explanations for such events when in fact the proper emotional response is grief. Irrationality is thus a means of freeing the mind toward purely imaginative solutions, breaking out of historic patterns of dependence into new patterns that allow one to move on.

Irrationalist
Irrationalist is a wide term. It may be applied to mean "one without rationality", on account of their beliefs or ideas. Or, more precisely, it may mean someone who rejects some aspect of rationalism, variously defined. For example, religious faith may be seen as, in part, a rejection of complete rationalism about the world; this would be contested by some religious thinkers, in that the rational is a debatable term. On the other hand, it might be considered irrationalist to buy a lottery ticket, on the basis that the expected value is negative.

In contemporary philosophy, "irrationalism", inspired by Hindu and Buddhist philosophies, is emerging as a new and growing school of thought in which the importance of our intuitive capability is stressed. One such young philosopher, Robin Vermoesen (Belgium, 3 November 1978), states in his Dutch-language book Rationaliteit is vals (translated: "Rationality is false") that every consciousness is at the same time an individual being and total reality; as a consequence, our rationalistic capabilities are not enough if we truly want to understand reality in its truest form. For us to truly know reality we need to become it; we need to become our Self. He further states that there are four "knowledge relationships" a being can experience (in growing order of intimacy: knowing, sensing, realizing and being), and that all true knowledge of the Self, and thus of reality, must eventually grow into the most intimate of the four: being. Since rationalistic knowledge can bring us only knowing, it is, although a necessary tool, not enough to uncover reality. It needs to be balanced by other capabilities and framed in a patchwork that gives positive meaning to personal commitment.

Irrational thought was seen in Europe as part of the reaction against Continental rationalism. For example, Hamann is sometimes classified as an irrationalist.

In literature
Much subject matter in literature can be seen as an expression of human longing for the irrational. In Romanticism, irrationality was valued over the sterile, calculating and emotionless philosophy brought about by the Age of Enlightenment and the Industrial Revolution.[4] The Dadaists and Surrealists later used irrationality as a basis for their art; the disregard of reason and preference for dream states in Surrealism was an exaltation of the irrational and a rejection of logic. Mythology nearly always incorporates elements of fantasy and the supernatural; however, myths are largely accepted by the societies that create them, and only come to be seen as irrational through the spyglass of time and by other cultures. But though mythology serves as a way to rationalize the universe in symbolic and often anthropomorphic ways, a pre-rational and irrational way of thinking can be seen as tacitly valued in mythology's supremacy of the imagination, in cultures where rationality as a philosophical method has not been developed.

On the other hand, the irrational is often depicted from a rational point of view in all types of literature, provoking amusement, contempt, disgust, hatred, awe, and many other reactions.

In psychotherapy
The term irrational is often used in psychotherapy, and the concept of irrationality is especially known in rational emotive behavior therapy, originated and developed by American psychologist Albert Ellis. In this approach, the term is used in a slightly different way than in general: here irrationality is defined as the tendency and leaning that humans have to act, emote and think in ways that are inflexible, unrealistic, absolutist and, most importantly, self- and social-defeating and destructive.[5]

Notes
[1] Mead, Margaret. Male and Female: The Classic Study of the Sexes (1949). Quill (HarperCollins), 1998 edition. ISBN 0-688-14676-7.
[2] Fletcher, Joyce K. "Castrating the Female Advantage: Feminist Standpoint Research and Management Science." Journal of Management Inquiry 3, no. 1 (March 1994): 74-82.
[3] http://www.screensite.org/courses/Jbutler/T340/SurManifesto/ManifestoOfSurrealism.htm
[4] http://www.historyguide.org/intellect/lecture16a.html
[5] Ellis, Albert (2001). Overcoming Destructive Beliefs, Feelings, and Behaviors: New Directions for Rational Emotive Behavior Therapy. Prometheus Books.

References
Stuart Sutherland, Irrationality: Why We Don't Think Straight, 1992; reissued 2007 by Pinter & Martin. ISBN 978-1-905177-07-3.
Robin Vermoesen, Rationaliteit is Vals, 2007, www.unibook.com.

External links
Craig R. M. McKenzie, "Rational models as theories, not standards, of behavior" (http://psy.ucsd.edu/~mckenzie/mckenzie-tics.pdf), Trends in Cognitive Sciences, Vol. 7, No. 9, September 2003.
REBT-CBT NET, Internet Guide to Rational Emotive Behavior Therapy (http://rebt-cbt.net).

Fallacy
In informal logic and rhetoric, a fallacy is usually an error in reasoning that results in a misconception or presumption. By accident or design, fallacies may exploit emotional triggers in the listener or interlocutor (e.g. appeal to emotion), or take advantage of social relationships between people (e.g. argument from authority). Fallacious arguments are often structured using rhetorical patterns that obscure the logical argument. Fallacies can generally be classified as informal or formal.

Material Fallacies
The taxonomy of material fallacies is based on that of Aristotle's Organon (Sophistici elenchi). This taxonomy is as follows:

Fallacy of accident or sweeping generalization


Fallacy of accident or sweeping generalization: a generalization that disregards exceptions.
Example:
  Argument: Cutting people is a crime. Surgeons cut people. Therefore, surgeons are criminals.
  Problem: Cutting people is only sometimes a crime.
Example:
  Argument: It is illegal for a stranger to enter someone's home uninvited. Firefighters enter people's homes uninvited; therefore, firefighters are breaking the law.
  Problem: The exception does not break nor define the rule. Also called a dicto simpliciter ad dictum secundum quid (where an accountable exception is ignored).

Converse fallacy of accident or hasty generalization


Converse fallacy of accident or hasty generalization: argues from a special case to a general rule.
Example:
  Argument: Every person I've met speaks English, so it must be true that all people speak English.
  Problem: Those who have been met are a subset of the entire set.
Also called reverse accident, destroying the exception, or a dicto secundum quid ad dictum simpliciter.

Irrelevant conclusion
Irrelevant conclusion: diverts attention away from a fact in dispute rather than addressing it directly.
Example:
  Argument: Billy believes that war is justifiable, therefore it must be justifiable.
  Problem: Billy can be wrong. (In particular, this is an appeal to authority.)
Special cases include appeals to:
- purely personal considerations (argumentum ad hominem)
- popular sentiment (argumentum ad populum: appeal to the majority, appeal to loyalty)
- fear (argumentum ad baculum)
- conventional propriety (argumentum ad verecundiam: appeal to authority)
- pity, to get one's conclusion accepted (argumentum ad misericordiam)

- proving the proposition under dispute without any certain proof (argumentum ad ignorantiam)
- assuming that a perceived defect in the origin of a claim discredits the claim itself (genetic fallacy)
Also called ignoratio elenchi or a "red herring".


Affirming the consequent


Affirming the consequent: draws a conclusion from premises that do not support that conclusion.
Example:
  Argument: If people have the flu, they cough. Torres is coughing. Therefore, Torres has the flu.
  Problem: Other things, such as asthma, can cause someone to cough.
Example:
  Argument: If it rains, the ground gets wet. The ground is wet; therefore, it rained.
  Problem: There are other ways by which the ground could get wet (e.g. someone spilled water).

Denying the antecedent


Denying the antecedent: draws a conclusion from premises that do not support that conclusion.
Example:
  Argument: If it is raining outside, it must be cloudy. It is not raining outside. Therefore, it is not cloudy.
  Problem: There does not have to be rain in order for there to be clouds.
Both this form and affirming the consequent can be shown invalid by exhaustive truth-table checking, as in the sketch below.
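
The following is a minimal Python sketch (illustrative, not from the original text) that enumerates every truth assignment to verify that affirming the consequent and denying the antecedent are invalid argument forms, while modus ponens is valid:

    from itertools import product

    def implies(p, q):
        # Material conditional: "if p then q" is false only when p is true and q is false.
        return (not p) or q

    def valid(premises, conclusion):
        # An argument form is valid iff the conclusion holds in every
        # truth assignment that makes all the premises true.
        for p, q in product([True, False], repeat=2):
            if all(prem(p, q) for prem in premises) and not conclusion(p, q):
                return False  # counterexample found
        return True

    # Affirming the consequent: (p -> q), q, therefore p
    print(valid([lambda p, q: implies(p, q), lambda p, q: q], lambda p, q: p))          # False

    # Denying the antecedent: (p -> q), not p, therefore not q
    print(valid([lambda p, q: implies(p, q), lambda p, q: not p], lambda p, q: not q))  # False

    # Modus ponens, for comparison: (p -> q), p, therefore q
    print(valid([lambda p, q: implies(p, q), lambda p, q: p], lambda p, q: q))          # True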

Begging the question


Begging the question: demonstrates a conclusion by means of premises that assume that conclusion.
Example:
  Argument: Billy always tells the truth; I know this because he told me so.
  Problem: Billy may be lying.
Also called petitio principii, circulus in probando, arguing in a circle, or assuming the answer. Begging the question does not preclude the possibility that the statement in question is correct, but the argument is insufficient proof in and of itself.

Fallacy of false cause


Fallacy of false cause or non sequitur: incorrectly assumes one thing is the cause of another. Non sequitur is Latin for "it does not follow."
Example:
  Argument: I hear the rain falling outside my window; therefore, the sun is not shining.
  Problem: The conclusion is false because the sun can shine while it is raining.
Special cases:
post hoc ergo propter hoc: believing that temporal succession implies a causal relation.
  Argument: After Billy was vaccinated he developed autism; therefore, the vaccine caused his autism.
  Problem: This does not provide any evidence that the vaccine was the cause. The characteristics of autism may generally become noticeable at just the age when children typically receive vaccinations.
cum hoc ergo propter hoc: believing that correlation implies a causal relation.
  Argument: More cows die in India in the summer months. More ice cream is consumed in the summer months. Therefore, the consumption of ice cream in the summer months is killing Indian cows.
  Problem: No premise suggests the ice cream consumption is causing the deaths. The deaths and consumption could be unrelated, or something else could be causing both, such as summer heat. Also called causation versus correlation. A small simulation of such a shared cause follows.
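
The cum hoc pattern is easy to reproduce numerically: two quantities driven by a shared cause correlate strongly even though neither causes the other. Below is a minimal Python sketch; the variable names and coefficients are invented for illustration and are not from the original text:

    import random

    random.seed(0)

    # Shared cause: summer heat, on an arbitrary 0-1 scale.
    heat = [random.random() for _ in range(1000)]

    # Both quantities depend on heat plus independent noise;
    # neither depends on the other.
    ice_cream_sales = [10 * h + random.gauss(0, 1) for h in heat]
    cow_deaths = [5 * h + random.gauss(0, 1) for h in heat]

    def corr(xs, ys):
        # Pearson correlation coefficient, computed from scratch.
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        vx = sum((x - mx) ** 2 for x in xs)
        vy = sum((y - my) ** 2 for y in ys)
        return cov / (vx * vy) ** 0.5

    # Prints a strong positive correlation despite zero direct causal link.
    print(corr(ice_cream_sales, cow_deaths))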


Fallacy of many questions


Fallacy of many questions or loaded question: groups more than one question in the form of a single question.
Example:
  Argument: Have you stopped beating your wife?
  Problem: A yes or no answer will still be an admission of guilt to beating your wife at some point. (See also mu.)
Also called plurium interrogationum, among other terms.

Straw man
Straw man: an informal fallacy based on misrepresentation of an opponent's position.
Example:
  Person A: Sunny days are good.
  Person B: If all days were sunny, we'd never have rain, and without rain, we'd have famine and death. Therefore, you are wrong.
  Problem: B has misrepresented A's claim by falsely suggesting that A claimed that only sunny days are good, and then refuted the misrepresented version of the claim rather than A's original assertion.

Verbal Fallacies
Verbal fallacies are those in which a conclusion is obtained by improper or ambiguous use of words. They are generally classified as follows.

Equivocation
Equivocation consists in employing the same word in two or more senses, e.g. in a syllogism where the middle term is used in one sense in the major premise and in another sense in the minor premise, so that there are in fact four terms rather than three.
Example:
  Argument: All heavy things have a great mass; Jim has a "heavy heart"; therefore, Jim's heart has a great mass.
  Problem: "Heavy" describes more than just weight. (Jim is sad.)

Connotation fallacies
Connotation fallacies occur when a dysphemistic word is substituted for the speaker's actual wording and used to discredit the argument. It is a form of attribution fallacy.

Argument by innuendo
Argument by innuendo involves implicitly suggesting a conclusion without stating it outright. For example, consider a job reference that says a former employee "was never caught taking money from the cash box." The overly specific nature of the innuendo implies that the employee was a thief, even though the reference does not make (or justify) a direct negative statement.[1]


Amphiboly
Amphiboly is the result of ambiguity of grammatical structure. Example: The position of the adverb "only" in a sentence starting with "He only said that" results in a sentence in which it is uncertain as to which of the other three words the speaker is intending to modify with the adverb.

Fallacy of composition
Fallacy of composition ("from each to all"): arguing from some property of constituent parts to the conclusion that the composite item has that property. This can be acceptable (i.e., not a fallacy) with certain arguments, such as spatial ones (e.g. "all the parts of the car are in the garage, therefore the car is in the garage").
Example:
  Argument: All the musicians in a band (constituent parts) are highly skilled; therefore, the band itself (composite item) is highly skilled.
  Problem: The band members may be skilled musicians but lack the ability to function properly as a group.

Division
Division, the converse of the preceding: arguing from a property of the whole to each constituent part.
Example:
  Argument: "The university (the whole) is 700 years old; therefore, all the staff (each part) are 700 years old."
  Problem: Each and every person currently on staff is younger than 700 years. The university continues to exist even when, one by one, each person on the original staff leaves and is replaced by a younger one. See the Ship of Theseus paradox.
Example:
  Argument: "This cereal is part of a nutritious breakfast; therefore, the cereal is nutritious."
  Problem: Simply because the breakfast taken as a whole is nutritious does not necessarily mean that each part of that breakfast is nutritious.

Proof by verbosity
Proof by verbosity, sometimes colloquially referred to as argumentum verbosium: a rhetorical technique that tries to persuade by overwhelming those considering an argument with such a volume of material that the argument superficially appears plausible and well-researched, and that checking its supporting facts is so laborious that the argument may be allowed to slide by unchallenged.

Accent
Accent, which occurs only in speaking, consists of emphasizing the wrong word in a sentence. For example, "He is a fairly good pianist" may, depending on which word is stressed, imply praise of a beginner's progress or an insult to an expert pianist:
  "*He* is a fairly good pianist" emphasizes that "he", as opposed to anyone else, is a fairly good pianist.
  "He *is* a fairly good pianist" asserts that he is a good pianist, as opposed to a poor one.
  "He is a *fairly* good pianist" asserts that his ability as a pianist is fair, perhaps in need of improvement.
  "He is a fairly good *pianist*" isolates his ability as being good only at the piano, and possibly excludes the idea that he is good at anything else.
Another example: a man replies "I killed my wife?" to a police officer asking if he killed his wife; in court, the police officer states that the man's reply to the question was "I killed my wife."


Figure of Speech
Figure of Speech, the confusion between the metaphorical and ordinary uses of a word or phrase. Example: The sailor was at home on the sea. Problem: The expression 'to be at home' does not literally mean that one's domicile is in that location.

Fallacy of misplaced concreteness


Fallacy of misplaced concreteness: identified by Whitehead in his discussion of metaphysics, this refers to the reification of concepts which exist only in discussion.

Example 1. Timmy argues:
1. Billy is a good tennis player.
2. Therefore, Billy is "good", that is to say a morally good person.
Here the problem is that the word "good" has different meanings, which is to say that it is ambiguous. In the premise, Timmy says that Billy is good at some particular activity, in this case tennis. In the conclusion, Timmy states that Billy is a morally good person. These are clearly two different senses of the word "good". The premise might be true while the conclusion is false: Billy might be the best tennis player in the world but a rotten person morally. However, it is not legitimate to infer that he is a bad person on the ground that Timmy's argument is fallacious; nothing concerning Billy's moral qualities can be inferred from the premise. Appropriately, since it plays on an ambiguity, this sort of fallacy is called the fallacy of equivocation, that is, equating two incompatible terms or claims.

Example 2. One posits the argument:
1. Nothing is better than eternal happiness.
2. Eating a hamburger is better than nothing.
3. Therefore, eating a hamburger is better than eternal happiness.
This argument has the appearance of an inference that applies transitivity of the two-placed relation "is better than", which in this critique we grant is a valid property. The argument is an example of syntactic ambiguity. The first premise does not semantically predicate an attribute of a subject named "Nothing"; it is semantically equivalent to the universal quantification "everything fails to be better than eternal happiness". Instantiating this with "eating a hamburger", it logically follows that eating a hamburger fails to be better than eternal happiness. Note that the premise "Eating a hamburger is better than nothing" does not contribute anything to this argument; it really means something like "Eating a hamburger is better than eating nothing at all". Thus this too is a fallacy of equivocation, as the sketch below makes explicit.
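
A sketch of the quantifier reading in first-order logic, using an invented predicate B(x, y) for "x is better than y" and invented constants h (eternal happiness) and m (eating a hamburger); the notation is illustrative, not from the original text:

    Premise 1, "Nothing is better than eternal happiness", is a universal
    negation, not a claim about an entity named "Nothing":

        \forall x \, \neg B(x, h)

    Instantiating x with m gives

        \neg B(m, h)

    i.e. eating a hamburger fails to be better than eternal happiness,
    the opposite of the fallacious conclusion B(m, h).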


Deductive Fallacy
In philosophy, the term logical fallacy properly refers to a formal fallacy: a flaw in the structure of a deductive argument which renders the argument invalid. However, it is often used more generally in informal discourse to mean an argument which is problematic for any reason, and thus encompasses informal fallacies as well as formal fallacies. The presence of a formal fallacy in a deductive argument does not imply anything about the argument's premises or its conclusion (see fallacy fallacy). Both may actually be true, or even more probable as a result of the argument (e.g., appeal to authority), but the deductive argument is still invalid because the conclusion does not follow from the premises in the manner described. By extension, an argument can contain a formal fallacy even if the argument is not a deductive one; for instance an inductive argument that incorrectly applies principles of probability or causality can be said to commit a formal fallacy.

Formalisms and frameworks used to understand fallacies


A different approach to understanding and classifying fallacies is provided by argumentation theory; see, for instance, van Eemeren and Grootendorst.[2] In this approach, an argument is regarded as an interactive protocol between individuals which attempts to resolve a disagreement. The protocol is regulated by certain rules of interaction, and violations of these rules are fallacies. Many of the fallacies in the list above are best understood as being fallacies in this sense.

Other systems of classification


Of other classifications of fallacies in general, the most famous are those of Francis Bacon and J. S. Mill. Bacon (Novum Organum, Aph. 33, 38 sqq.) divided fallacies into four Idola (Idols, i.e. False Appearances), which summarize the various kinds of mistakes to which the human intellect is prone. With these should be compared the Offendicula of Roger Bacon, contained in the Opus maius, pt. i. J. S. Mill discussed the subject in Book V of his Logic, and Jeremy Bentham's Book of Fallacies (1824) contains valuable remarks. See also R. Whately's Logic, bk. v.; A. de Morgan, Formal Logic (1847); A. Sidgwick, Fallacies (1883); and other textbooks.

References
[1] Damer, T. Edward (2008). Attacking Faulty Reasoning: A Practical Guide to Fallacy-free Arguments (6th ed.). Cengage Learning. p. 130. ISBN 9780495095064.
[2] F. H. van Eemeren and R. Grootendorst, Argumentation, Communication and Fallacies: A Pragma-Dialectical Perspective, Lawrence Erlbaum and Associates, 1992.

Fearnside, W. Ward and William B. Holther, Fallacy: The Counterfeit of Argument, 1959.
Vincent F. Hendricks, Thought 2 Talk: A Crash Course in Reflection and Expression, New York: Automatic Press / VIP, 2005. ISBN 87-991013-7-8.
D. H. Fischer, Historians' Fallacies: Toward a Logic of Historical Thought, Harper Torchbooks, 1970.
Nigel Warburton, Thinking from A to Z, Routledge, 1998.
T. Edward Damer, Attacking Faulty Reasoning, 5th Edition, Wadsworth, 2005. ISBN 0-534-60516-8.
Carl Sagan, The Demon-Haunted World: Science As a Candle in the Dark, Ballantine Books, March 1997. ISBN 0-345-40946-9, 480 pp. 1996 hardback edition: Random House, ISBN 0-394-53512-X, xv+457 pages plus addenda insert (some printings). Ch. 12.


Further reading
C. L. Hamblin, Fallacies, Methuen, London, 1970; reprinted by Vale Press in 1998. ISBN 0916475247.
Douglas N. Walton, Informal Logic: A Handbook for Critical Argumentation. Cambridge University Press, 1989.
Hans V. Hansen; Robert C. Pinto (1995). Fallacies: Classical and Contemporary Readings. Penn State Press. ISBN 9780271014173.
John Woods (2004). The Death of Argument: Fallacies in Agent Based Reasoning. Springer. ISBN 9781402026638.
Frans van Eemeren; Bart Garssen; Bert Meuffels (2009). Fallacies and Judgments of Reasonableness: Empirical Research Concerning the Pragma-Dialectical Discussion. Springer. ISBN 9789048126132.

Historical texts:
Aristotle, On Sophistical Refutations (De Sophisticis Elenchis) (http://etext.library.adelaide.edu.au/a/aristotle/sophistical/), library.adelaide.edu.au.
William of Ockham, Summa of Logic (ca. 1323), Part III.4.
John Buridan, Summulae de dialectica, Book VII.
Francis Bacon, the doctrine of the idols in Novum Organum Scientiarum, Aphorisms Concerning the Interpretation of Nature and the Kingdom of Man, XXIII ff. (http://fly.hiwaay.net/~paul/bacon/organum/aphorisms1.html), fly.hiwaay.net.
Arthur Schopenhauer, The Art of Controversy (http://www.gutenberg.net/1/0/7/3/10731/10731-8.txt); Die Kunst, Recht zu behalten (bilingual) (http://coolhaus.de/art-of-controversy/), also known as "Schopenhauer's 38 stratagems", gutenberg.net.
John Stuart Mill, A System of Logic, Ratiocinative and Inductive (http://www.la.utexas.edu/research/poltheory/mill/sol/), Book 5, Chapter 7, "Fallacies of Confusion" (http://www.la.utexas.edu/research/poltheory/mill/sol/sol.b05.c07.html), la.utexas.edu.

External links
SEP entry on informal logic, with a section on fallacy theory (http://plato.stanford.edu/entries/logic-informal/#Fal)
Appeal to Authority (http://www.appealtoauthority.info), on the appeal-to-authority logical fallacy
FallacyFiles.org (http://www.fallacyfiles.org), containing a categorization of fallacies (http://www.fallacyfiles.org/taxonomy.html) with examples
42 informal logical fallacies explained by Dr. Michael C. Labossiere, including examples (http://www.nizkor.org/features/fallacies/), nizkor.org
Humbug! The skeptic's field guide to spotting fallacies in thinking (http://www.scribd.com/doc/8009498/HUMBUG-eBook-by-Jef-Clark-and-Theo-Clark), a textbook on fallacies, scribd.com
List of fallacies with clear examples (http://www.infidels.org/library/modern/mathew/logic.html), infidels.org
Interactive Syllogistic Machine (http://www.theotherscience.com/syllogism-machine), a web-based syllogistic machine for exploring fallacies, figures, and modes of syllogisms
Logical Fallacies and the Art of Debate (http://www.csun.edu/~dgw61315/fallacies.html), csun.edu
LogicalFallacies.Info (http://www.logicalfallacies.info/)
An Informal Fallacy Primer (http://www.acontrario.org/node/350), acontrario.org
Stephen Downes' Guide to the Logical Fallacies (http://onegoodmove.org/fallacy/welcome.htm), onegoodmove.org
"Love is a Fallacy" (http://www1.asknlearn.com/ri_Ilearning/English/631/elang-ilearning/page3a.htm), a short story by Max Shulman; WebCitation archive (http://www.webcitation.org/5xp6R6dwJ)


Heuristic
Heuristic (from the Greek εὑρίσκω, "find" or "discover") refers to experience-based techniques for problem solving, learning, and discovery. Heuristic methods are used to speed up the process of finding a satisfactory solution where an exhaustive search is impractical. Examples of this method include using a "rule of thumb", an educated guess, an intuitive judgment, or common sense. In more precise terms, heuristics are strategies using readily accessible, though loosely applicable, information to control problem solving in human beings and machines.[1]

Example
The most fundamental heuristic is trial and error, which can be used in everything from matching nuts and bolts to finding the values of variables in algebra problems (see the sketch below). Here are a few other commonly used heuristics, from George Pólya's 1945 book, How to Solve It:[2]
- If you are having difficulty understanding a problem, try drawing a picture.
- If you can't find a solution, try assuming that you have a solution and seeing what you can derive from that ("working backward").
- If the problem is abstract, try examining a concrete example.
- Try solving a more general problem first (the "inventor's paradox": the more ambitious plan may have more chances of success).
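
As a concrete illustration of trial and error, the following minimal Python sketch finds an integer root of an equation by pure generate-and-test; the equation and search bounds are invented for illustration:

    def trial_and_error_root(f, candidates):
        # Pure generate-and-test: try each candidate in turn and
        # return the first one that satisfies f(x) == 0.
        for x in candidates:
            if f(x) == 0:
                return x
        return None  # no solution among the candidates tried

    # Find an integer solution of x^2 - 5x + 6 = 0 by brute trial.
    root = trial_and_error_root(lambda x: x * x - 5 * x + 6, range(-100, 101))
    print(root)  # 2, the first root encountered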

Psychology
In psychology, heuristics are simple, efficient rules, hard-coded by evolutionary processes or learned, which have been proposed to explain how people make decisions, come to judgments, and solve problems, typically when facing complex problems or incomplete information. These rules work well under most circumstances, but in certain cases lead to systematic errors or cognitive biases.

Although much of the work of discovering heuristics in human decision-makers was done by Amos Tversky and Daniel Kahneman,[3] the concept was originally introduced by Nobel laureate Herbert Simon. Gerd Gigerenzer focuses on how heuristics can be used to make judgments that are in principle accurate, rather than producing cognitive biases: heuristics that are "fast and frugal".[4]

In 2002, Daniel Kahneman and Shane Frederick proposed that cognitive heuristics work by a process called attribute substitution, which happens without conscious awareness.[5] According to this theory, when somebody makes a judgment (of a target attribute) which is computationally complex, a more easily calculated heuristic attribute is substituted. In effect, a cognitively difficult problem is dealt with by answering a rather simpler problem, without awareness of this happening.[5] This theory explains cases where judgments fail to show regression toward the mean.[6] Heuristics can be considered to reduce the complexity of clinical judgments in healthcare.[7]


Theorized psychological heuristics


Well known:
- Anchoring and adjustment
- Availability heuristic
- Representativeness heuristic
- Naive diversification
- Escalation of commitment

Less well known


- Affect heuristic
- Contagion heuristic
- Effort heuristic
- Familiarity heuristic
- Fluency heuristic
- Gaze heuristic
- Peak-end rule
- Recognition heuristic
- Scarcity heuristic
- Similarity heuristic
- Simulation heuristic
- Social proof
- Take-the-best heuristic

Philosophy
In philosophy, especially in Continental European philosophy, the adjective "heuristic" (or the designation "heuristic device") is used when an entity X exists to enable understanding of, or knowledge concerning, some other entity Y. A good example is a model which, as it is never identical with what it models, is a heuristic device to enable understanding of what it models. Stories, metaphors, etc., can also be termed heuristic in that sense. A classic example is the notion of utopia as described in Plato's best-known work, The Republic. This means that the "ideal city" as depicted in The Republic is not given as something to be pursued, or to present an orientation-point for development; rather, it shows how things would have to be connected, and how one thing would lead to another (often with highly problematic results), if one would opt for certain principles and carry them through rigorously.

"Heuristic" is also commonly used as a noun to describe a rule of thumb, procedure, or method.[8] Philosophers of science have emphasized the importance of heuristics in creative thought and in constructing scientific theories.[9] (See The Logic of Scientific Discovery, and philosophers such as Imre Lakatos,[10] Lindley Darden, William C. Wimsatt, and others.)

Law
In legal theory, especially in the theory of law and economics, heuristics are used in the law when case-by-case analysis would be impractical, insofar as "practicality" is defined by the interests of a governing body.[11] For instance, in all states in the United States the legal drinking age is 21, because it is argued that people need to be mature enough to make decisions involving the risks of alcohol consumption. However, assuming people mature at different rates, the specific age of 21 would be too late for some and too early for others. In this case, the somewhat arbitrary deadline is used because it is impossible or impractical to tell whether an individual is sufficiently mature for society to trust them with that kind of responsibility. Some proposed changes, however, have included the completion of an alcohol education course rather than the attainment of 21 years of age as the criterion for legal alcohol possession. This would put youth alcohol policy more on a case-by-case basis and less on a heuristic one, since the completion of such a course would presumably be voluntary and not uniform across the population.

The same reasoning applies to patent law. Patents are justified on the grounds that inventors need to be protected in order to have incentive to invent. It is therefore argued that, in society's best interest, inventors should be issued with a temporary government-granted monopoly on their product, so that they can recoup their investment costs and make economic profit for a limited period. In the United States the length of this temporary monopoly is 20 years from the date the application for patent was filed, though the monopoly does not actually begin until the application has matured into a patent. However, like the drinking-age problem above, the specific length of time would need to be different for every product in order to be efficient; a 20-year term is used because it is difficult to tell what the number should be for any individual patent. More recently, some, including University of North Dakota law professor Eric E. Johnson, have argued that patents in different kinds of industries, such as software patents, should be protected for different lengths of time.[12]


Computer science
In computer science, a heuristic is a technique designed to solve a problem that ignores whether the solution can be proven to be correct, but which usually produces a good solution or solves a simpler problem that contains or intersects with the solution of the more complex problem. Most real-time, and even some on-demand, anti-virus scanners use heuristic signatures to look for specific attributes and characteristics for detecting viruses and other forms of malware. Heuristics are intended to gain computational performance or conceptual simplicity, potentially at the cost of accuracy or precision.

In their Turing Award acceptance speech, Herbert Simon and Allen Newell discuss the Heuristic Search Hypothesis: a physical symbol system will repeatedly generate and modify known symbol structures until the created structure matches the solution structure. That is, each successive iteration depends upon the step before it; the heuristic search thus learns which avenues to pursue and which to disregard by measuring how close the current iteration is to the solution. Therefore, some possibilities will never be generated, as they are measured to be less likely to complete the solution. A heuristic method can accomplish its task by using search trees. However, instead of generating all possible solution branches, a heuristic selects branches more likely to produce outcomes than other branches; it is selective at each decision point, picking branches that are more likely to produce solutions (see the sketch below).[13]

In human-computer interaction, heuristic evaluation is a usability-testing technique devised by expert usability consultants. In heuristic evaluation, the user interface is reviewed by experts and its compliance with usability heuristics (broadly stated characteristics of a good user interface, based on prior experience) is assessed, and any violating aspects are recorded.
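
To make the branch-selection idea concrete, here is a minimal Python sketch of greedy best-first search, in which a heuristic score decides which frontier node to expand next; the toy graph and heuristic values are invented for illustration:

    import heapq

    def greedy_best_first(graph, h, start, goal):
        # Always expand the frontier node with the lowest heuristic value h[n].
        # Unpromising branches (high h) may never be generated at all,
        # which is exactly the trade-off heuristic search makes.
        frontier = [(h[start], start, [start])]
        visited = set()
        while frontier:
            _, node, path = heapq.heappop(frontier)
            if node == goal:
                return path
            if node in visited:
                continue
            visited.add(node)
            for nxt in graph.get(node, []):
                if nxt not in visited:
                    heapq.heappush(frontier, (h[nxt], nxt, path + [nxt]))
        return None  # goal unreachable

    # Toy graph and heuristic estimates of distance to 'G'.
    graph = {'S': ['A', 'B'], 'A': ['G'], 'B': ['C'], 'C': ['G']}
    h = {'S': 3, 'A': 1, 'B': 2, 'C': 1, 'G': 0}
    print(greedy_best_first(graph, h, 'S', 'G'))  # ['S', 'A', 'G']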

Software interface design


In software development, the use of a heuristic approach can facilitate a well-designed user interface, enabling users to navigate complex systems intuitively and without difficulty. The interface may guide the user when necessary, using tooltips, help buttons, invitations to chat with support, etc., providing help when needed. However, in practice, the designer of the user interface may not find it easy to strike the optimum balance of assistance for the user.

Software developers and targeted end-users alike disregard heuristics at their own peril. End users often need to increase their understanding of the basic framework that a project entails (so that their expectations are realistic), and developers often need to push to learn more about their target audience (so that their learning styles can be judged). Business rules crucial to the organization are often so obvious to the end-user that they are not conveyed to the developer, who may lack domain knowledge in the particular field of endeavor the application is meant to serve.

A proper Software Requirements Specification (SRS) models the heuristics of how a user will process the information being rendered on-screen. An SRS is ideally shared with the end-user well before the actual Software Design Specification (SDS) is written and the application is developed, so users' feedback about their experience can be used to adapt the design of the application. This saves much time in the Software Development Life Cycle (SDLC). Unless heuristics are adequately considered, the project will likely suffer many implementation problems and setbacks.


Engineering
In engineering, a heuristic is an experience-based method that can be used as an aid to solve process design problems, varying from size of equipment to operating conditions. By using heuristics, time can be reduced when solving problems. There are several methods which are available to engineers. These include Failure mode and effects analysis and Fault tree analysis. The former relies on a group of qualified engineers to evaluate problems, rank them in order of importance and then recommend solutions. The methods of forensic engineering are an important source of information for investigating problems, especially by elimination of unlikely causes and using the weakest link principle. Because heuristics are fallible, it is important to understand their limitations. They are intended to be used as aids in order to make quick estimates and preliminary process designs.

Pitfalls of heuristics
Heuristic algorithms are often employed because they may be seen to "work" without having been mathematically proven to meet a given set of requirements. One common pitfall in implementing a heuristic method to meet a requirement comes when the engineer or designer fails to realize that the current data set does not necessarily represent future system states. While the existing data can be pored over and an algorithm can be devised to successfully handle the current data, it is imperative to ensure that the heuristic method employed is capable of handling future data sets. This means that the engineer or designer must fully understand the rules that generate the data and develop the algorithm to meet those requirements and not just address the current data sets. Statistical analysis should be conducted when employing heuristics to estimate the probability of incorrect outcomes.

If one seeks to use a heuristic as a means of solving a search or knapsack problem, then one must be careful to make sure that the heuristic function one chooses is an admissible heuristic. Given a heuristic function $h(v_i, v_g)$, meant to approximate the true optimal distance $d^*(v_i, v_g)$ to the goal node $v_g$ in a directed graph $G$ containing $n$ total nodes or vertexes labeled $v_0, v_1, \ldots, v_n$, "admissible" means that

    $h(v_i, v_g) \le d^*(v_i, v_g)$ for all $(v_i, v_g)$, where $i \in \{0, 1, \ldots, n\}$.

If a heuristic is not admissible, it might never find the goal, either by ending up in a dead end of graph $G$ or by skipping back and forth between two nodes $v_i$ and $v_j$, where $i, j \neq g$.
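
As an illustration of checking admissibility, the following Python sketch compares a Manhattan-distance heuristic against exact breadth-first-search distances on a toy grid; the grid, names, and goal are invented for illustration:

    from collections import deque

    def true_distances(graph, goal):
        # Exact shortest-path distances to the goal (all edges cost 1).
        dist = {goal: 0}
        queue = deque([goal])
        while queue:
            node = queue.popleft()
            for nxt in graph[node]:
                if nxt not in dist:
                    dist[nxt] = dist[node] + 1
                    queue.append(nxt)
        return dist

    def is_admissible(h, graph, goal):
        # h is admissible iff it never overestimates the true distance.
        dist = true_distances(graph, goal)
        return all(h[n] <= d for n, d in dist.items())

    # Toy 4-connected grid, represented as an undirected graph of (x, y) cells.
    cells = [(x, y) for x in range(3) for y in range(3)]
    graph = {c: [(c[0] + dx, c[1] + dy)
                 for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                 if (c[0] + dx, c[1] + dy) in cells] for c in cells}

    goal = (2, 2)
    manhattan = {c: abs(c[0] - goal[0]) + abs(c[1] - goal[1]) for c in cells}
    print(is_admissible(manhattan, graph, goal))  # True: never overestimates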

References
[1] Pearl, Judea (1983). Heuristics: Intelligent Search Strategies for Computer Problem Solving. New York: Addison-Wesley, p. vii. ISBN 978-0201055948.
[2] Polya, George (1945). How To Solve It: A New Aspect of Mathematical Method. Princeton, NJ: Princeton University Press. ISBN 0-691-02356-5, ISBN 0-691-08097-6.
[3] Daniel Kahneman, Amos Tversky and Paul Slovic, eds. (1982). Judgment under Uncertainty: Heuristics & Biases. Cambridge, UK: Cambridge University Press. ISBN 0-521-28414-7.
[4] Gerd Gigerenzer, Peter M. Todd, and the ABC Research Group (1999). Simple Heuristics That Make Us Smart. Oxford, UK: Oxford University Press. ISBN 0-19-514381-7.
[5] Kahneman, Daniel; Shane Frederick (2002). "Representativeness Revisited: Attribute Substitution in Intuitive Judgment". In Thomas Gilovich, Dale Griffin, Daniel Kahneman. Heuristics and Biases: The Psychology of Intuitive Judgment. Cambridge: Cambridge University Press. pp. 49-81. ISBN 9780521796798. OCLC 47364085.
[6] Kahneman, Daniel (December 2003). "Maps of Bounded Rationality: Psychology for Behavioral Economics". American Economic Review (American Economic Association) 93 (5): 1449-1475. doi:10.1257/000282803322655392. ISSN 0002-8282.
[7] Cioffi, Jane (1997). "Heuristics, servants to intuition, in clinical decision making". Journal of Advanced Nursing 26: 203-208. doi:10.1046/j.1365-2648.1997.1997026203.x.
[8] K. M. Jaszczolt (2006). "Defaults in Semantics and Pragmatics" (http://plato.stanford.edu/entries/defaults-semantics-pragmatics/), The Stanford Encyclopedia of Philosophy, ISSN 1095-5054.

[9] Roman Frigg and Stephan Hartmann (2006). "Models in Science" (http://plato.stanford.edu/entries/models-science/), The Stanford Encyclopedia of Philosophy, ISSN 1095-5054.
[10] Olga Kiss (2006). "Heuristic, Methodology or Logic of Discovery? Lakatos on Patterns of Thinking" (http://www.mitpressjournals.org/doi/pdf/10.1162/posc.2006.14.3.302), Perspectives on Science, vol. 14, no. 3, pp. 302-317, ISSN 1063-6145.
[11] Gerd Gigerenzer and Christoph Engel, eds. (2007). Heuristics and the Law. Cambridge: The MIT Press. ISBN 978-0-262-07275-5.
[12] Eric E. Johnson (2006). "Calibrating Patent Lifetimes" (http://www.eejlaw.com/writings/Johnson_Calibrating_Patent_Lifetimes.pdf), Santa Clara Computer & High Technology Law Journal, vol. 22, pp. 269-314.
[13] Newell, A. & Simon, H. A. (1976). "Computer science as empirical inquiry: symbols and search". Communications of the ACM 19, 113-126.


Further reading
How To Solve It: Modern Heuristics, Zbigniew Michalewicz and David B. Fogel, Springer Verlag, 2000. ISBN 3-540-66061-5.
Russell, Stuart J.; Norvig, Peter (2003). Artificial Intelligence: A Modern Approach (http://aima.cs.berkeley.edu/) (2nd ed.), Upper Saddle River, New Jersey: Prentice Hall. ISBN 0-13-790395-2.
The Problem of Thinking Too Much (http://www-stat.stanford.edu/~cgates/PERSI/papers/thinking.pdf), 2002-12-11, Persi Diaconis.


Heuristics and Fallacies


Affect heuristic
The affect heuristic is a heuristic in which current affect influences decisions. Simply put, it is a "rule of thumb" instead of a deliberative decision. It is one of the ways in which human beings show bias in making a decision, which may cause them to take action that is contrary to logic or self-interest.

Concept
"Affect", in this context, is simply a feelingfear, pleasure, surprise, etc. It is shorter in duration than a mood, occurring rapidly and involuntarily in response to a stimulus. Reading the words "lung cancer" usually generates an affect of dread, while reading the words "mother's love" usually generates an affect of affection and comfort. For the purposes of the psychological heuristic, affect is often judged on a simple diametric scale of "good" or "bad". The theory of affect heuristic is that a human being's affect can influence their decision-making. The affect heuristic got recent attention when it was used to explain the unexpected negative correlation between benefit and risk perception. Melissa Finucane and others theorised in 2000[1] that a good feeling towards a situation (i.e., positive affect) would lead to a lower risk perception and a higher benefit perception, even when this is logically not warranted for that situation. This implies that a strong emotional response to a word or other stimulus might alter a person's judgment. He or she might make different decisions based on the same set of facts and might thus make an illogical decision. For example, in a blind taste test, a man might like Mirelli Beer better than Saddle Sweat Beer; however, if he has a strong gender identification, an advertisement touting Saddle Sweat as "a real man's brew" might cause him to prefer Saddle Sweat. Positive affect related to gender pride biases his decision sufficiently to overcome his cognitive judgment. Another common situation involving affect heuristic is where a strong, emotional first impression can inform a decision, even if subsequent facts weigh cognitively against the decisions. Someone seeing a house from the street might decide to buy it immediately upon seeing it, based on the strength of the emotional response to its eye appeal. This can be true even if subsequent inspection shows that it is inferior to another house that is even more charming from the street, but which the potential buyer first encountered by entering through its back door into a rather shabby kitchen. The affect heuristic is of influence in nearly every decision-making arena.

Experimental findings
Winkielman, Zajonc, and Schwarz[2] flashed one of three images in the view of test subjects: a smiling face, a frowning face, or a neutral geometric shape. The subject was then shown a Chinese character and asked how he or she liked it. The test subjects preferred the characters they saw after the smiling face, even though the smiling face was shown only for 1/250 of a second, and the subject did not recall seeing it. The same experiment demonstrated the persistence of initial affect. The testers showed the subjects the same characters, but preceded by a different face. The subjects significantly tended to prefer the characters based on the first association, even where the second exposure was preceded by a different affective stimulus. That is, if a subject liked a character following exposure to a smiling face, he would continue to like the character even when it was preceded by a frowning face. (The experimental outcome was statistically significant and adjusted for variables such as non-affective preference for certain characters.)

However, in spite of its intuitive appeal and a large number of indirect empirical findings supporting the affect heuristic (such as the experiment above), conclusive evidence proving the theoretical ideas posed by the affect heuristic has not yet been forthcoming.


Footnotes
[1] Finucane, M.L., Alhakami, A., Slovic, P., & Johnson, S.M. (2000). The affect heuristic in judgments of risks and benefits. Journal of Behavioral Decision Making, 13(1), 1-17.
[2] Winkielman, P., Zajonc, R.B., & Schwarz, N. (1997). Subliminal affective priming resists attributional interventions. Cognition and Emotion, 11(4), 433-465.

Further reading
Slovic, Paul; Melissa Finucane, Ellen Peters, Donald G. MacGregor (2002). "The Affect Heuristic". In Thomas Gilovich, Dale Griffin, Daniel Kahneman. Heuristics and Biases: The Psychology of Intuitive Judgment. Cambridge University Press. pp. 397-420. ISBN 9780521796798.
Shefrin, Hersh (2002). Behavioral Corporate Finance: Decisions that Create Value. McGraw-Hill. pp. 2, 10, 164, 40-42, 60-61, 69. ISBN 9780072848656.

Anchoring
Anchoring or focalism is a cognitive bias that describes the common human tendency to rely too heavily, or "anchor," on one trait or piece of information when making decisions.

Background
During normal decision-making, anchoring occurs when individuals overly rely on a specific piece of information to govern their thought-process. Once the anchor is set, there is a bias toward adjusting or interpreting other information to reflect the "anchored" information. Through this cognitive bias, the first information learned about a subject (or, more generally, information learned at an early age) can affect future decision-making and information analysis. For example, as a person looks to buy a used car, he or she may focus excessively on the odometer reading and model year of the car, and use those criteria as a basis for evaluating the value of the car, rather than considering how well the engine or the transmission is maintained.

Focusing effect
The focusing effect (or focusing illusion) is a cognitive bias that occurs when people place too much importance on one aspect of an event, causing an error in accurately predicting the utility of a future outcome. People focus on notable differences, excluding those that are less conspicuous, when making predictions about happiness or convenience. For example, when people were asked how much happier they believe Californians are compared to Midwesterners, Californians and Midwesterners both said Californians must be considerably happier, when, in fact, there was no difference between the actual happiness ratings of Californians and Midwesterners. The bias lies in the fact that most people asked focused on and overweighted the sunny weather and ostensibly easy-going lifestyle of California, while devaluing and underrating other aspects of life and determinants of happiness, such as low crime rates and safety from natural disasters like earthquakes (both of which large parts of California lack).[1]

A rise in income has only a small and transient effect on happiness and well-being, but people consistently overestimate this effect. Kahneman et al. proposed that this is a result of a focusing illusion, with people focusing on conventional measures of achievement rather than on everyday routine.[2]


Anchoring and adjustment heuristic


Anchoring and adjustment is a psychological heuristic that influences the way people intuitively assess probabilities. According to this heuristic, people start with an implicitly suggested reference point (the "anchor") and make adjustments to it to reach their estimate: a person begins with a first approximation (the anchor) and then makes incremental adjustments based on additional information. A toy model of this process is sketched below.

The anchoring and adjustment heuristic was first theorized by Amos Tversky and Daniel Kahneman. In one of their first studies, the two showed that when asked to guess the percentage of African nations which are members of the United Nations, people who were first asked "Was it more or less than 10%?" guessed lower values (25% on average) than those who had been asked if it was more or less than 65% (45% on average).[3] The pattern has held in other experiments for a wide variety of different subjects of estimation. In a second example, an audience was first asked to write down the last two digits of their social security number and to consider whether they would pay this number of dollars for items whose value they did not know, such as wine, chocolate and computer equipment. They were then asked to bid for these items, with the result that the audience members with higher two-digit numbers submitted bids that were between 60 percent and 120 percent higher than those with the lower social security numbers, which had become their anchor.[4]
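
The following is a toy Python simulation of anchoring as insufficient adjustment; the adjustment factor, noise level, and numbers are invented for illustration and are not fitted to the experiments above:

    import random

    random.seed(1)

    def anchored_estimate(anchor, truth, adjustment=0.5):
        # Insufficient-adjustment model: start at the anchor and move
        # only part of the way toward one's own noisy sense of the truth.
        own_belief = truth + random.gauss(0, 10)
        return anchor + adjustment * (own_belief - anchor)

    truth = 45  # the true percentage being estimated

    low = [anchored_estimate(10, truth) for _ in range(1000)]
    high = [anchored_estimate(65, truth) for _ in range(1000)]

    # Mean estimates are pulled toward whichever anchor was offered first.
    print(sum(low) / len(low))    # roughly midway between 10 and 45
    print(sum(high) / len(high))  # roughly midway between 65 and 45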

References
[1] Schkade, D.A., & Kahneman, D. (1998). Does living in California make people happy? A focusing illusion in judgments of life satisfaction. Psychological Science, 9, 340-346.
[2] Kahneman, Daniel; Alan B. Krueger, David Schkade, Norbert Schwarz, Arthur A. Stone (2006-06-30). "Would you be happier if you were richer? A focusing illusion" (http://www.morgenkommichspaeterrein.de/ressources/download/125krueger.pdf). Science 312 (5782): 1908-10. doi:10.1126/science.1129688. PMID 16809528.
[3] Tversky, A. & Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases. (http://www.hss.caltech.edu/~camerer/Ec101/JudgementUncertainty.pdf) Science, 185, 1124-1130.
[4] Edward Teach, "Avoiding Decision Traps" (http://www.cfo.com/article.cfm/3014027), CFO (1 June 2004). Retrieved 29 May 2007.

Del Missier, F., Ferrante, D., & Costantini, E. (2007). Focusing effects in predecisional information acquisition. Acta Psychologica, 125, 155-174.

External links
Anchor pricing (http://www.treehugger.com/files/2010/04/nissan-leaf-electric-car-new-price-anchor-benchmark-more-affordable.php)


Availability heuristic
The availability heuristic is a phenomenon (which can result in a cognitive bias) in which people predict the frequency of an event, or a proportion within a population, based on how easily an example can be brought to mind. This phenomenon was first reported by psychologists Amos Tversky and Daniel Kahneman, who also identified the representativeness heuristic. To see how availability differs from related terms vividness and salience, see availability, salience and vividness.

Overview
Essentially the availability heuristic operates on the notion that "if you can think of it, it must be important."[1] Media coverage can help fuel a person's example bias with widespread and extensive coverage of unusual events, such as homicide or airline accidents, and less coverage of more routine, less sensational events, such as common diseases or car accidents. For example, when asked to rate the probability of a variety of causes of death, people tend to rate more "newsworthy" events as more likely because they can more readily recall an example from memory. For example, in the USA, people rate the chance of death by homicide higher than the chance of death by stomach cancer, even though death by stomach cancer is five times higher than death by homicide. Moreover, unusual and vivid events like homicides, shark attacks, or lightning are more often reported in mass media than common and unsensational causes of death like common diseases.[2] Another instance of biased ratings is the relative overestimation of plane crash deaths, compared to car-accident deaths.

Examples
A person argues that cigarette smoking is not unhealthy because his grandfather smoked three packs of cigarettes a day and lived to be 100. The grandfather's health could simply be an unusual case that does not speak to the health of smokers in general.[1]

A politician says that walnut farmers need a special farm subsidy. He points to a farmer standing nearby and explains how that farmer will benefit. Others who watch and discuss later agree that the subsidy is needed based on the benefit to that farmer. The farmer, however, might be the only person who will benefit from the subsidy; walnut farmers in general may not necessarily need it.

A person claims to a group of friends that drivers of red cars get more speeding tickets. The group agrees with the statement because a member of the group, "Jim," drives a red car and frequently gets speeding tickets. The reality could be that Jim just drives fast and would get a speeding ticket regardless of the color of car that he drove. Even if statistics show fewer speeding tickets were given to red cars than to other colors of cars, Jim is an available example which makes the statement seem more plausible.

Someone is asked to estimate the proportion of words that begin with the letter "R" or "K" versus those words that have the letter "R" or "K" in the third position. Most English-speaking people could immediately think of many words that begin with "R" (roar, rusty, ribald) or "K" (kangaroo, kitchen, kale), but it would take a more concentrated effort to think of any words where "R" or "K" is the third letter (street, care, borrow, acknowledge); the immediate answer would probably be that words beginning with "R" or "K" are more common. The reality is that words with "R" or "K" in the third position are more common. In fact, there are three times as many words with the letter "K" in the third position as have it in the first position.[3] This claim can be checked against a word list, as in the sketch after these examples.

Where an anecdote ("I know a Brazilian man who...") is used to "prove" an entire proposition or to support a bias, the availability heuristic is in play. In these instances the ease of imagining an example, or the vividness and emotional impact of that example, becomes more credible than actual statistical probability. Because an example is easily brought to mind or mentally "available," the single example is considered as representative of the whole rather than as just a single example in a range of data.

A person sees several news stories of cats leaping out of tall trees and surviving, so he believes that cats must be robust to long falls. However, these kinds of news reports are far more common than reports where a cat falls out of the tree and dies, which may in fact be a more common event.
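The letter-position example above can be checked directly. The following minimal sketch counts, in a word list, how many words have "r" or "k" first versus third; the word-list path is an assumption (it is the usual Unix location and may not exist on every system).

    # A minimal sketch checking the "R or K: first vs. third position" claim.
    # /usr/share/dict/words is an assumed path for a plain-text word list.
    def count_positions(words, letters=("r", "k")):
        first = sum(1 for w in words if len(w) >= 3 and w[0] in letters)
        third = sum(1 for w in words if len(w) >= 3 and w[2] in letters)
        return first, third

    with open("/usr/share/dict/words") as f:
        words = [line.strip().lower() for line in f]

    first, third = count_positions(words)
    print(f"first position: {first}, third position: {third}")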


Imagining outcomes
One important corollary finding to this heuristic is that people asked to imagine an outcome tend to immediately view it as more likely than people who were not asked to imagine the specific outcome. If group A were asked to imagine a specific outcome and then asked if it were a likely outcome, and group B were asked whether the same specific outcome were likely without being asked to imagine it first, the members of group A tend to view the outcome as more likely than the members of group B, demonstrating that imagined availability can serve as a basis for judgment (Carroll, 1978). In one experiment that occurred before the 1976 US Presidential election, participants were asked simply to imagine Gerald Ford winning the upcoming election. Those who were asked to do this subsequently viewed Ford as being significantly more likely to win the upcoming election. A similar result was obtained from participants who had been asked to imagine Jimmy Carter winning.[4] Analogous results were found with vivid versus pallid descriptions of outcomes in other experiments.

References
[1] Esgate, A. & Groome, D. (2004). An Introduction to Applied Cognitive Psychology. New York: Psychology Press. [2] Lichtenstein, S., Slovic, P., Fischhoff, B., Layman, M. and Combs, B. (1978). Judged frequency of lethal events. Journal of Experimental Psychology: Human Learning and Memory, 4, 551-578. [3] Tversky, A. & Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases. Science, 185, 1124-1131. [4] Carroll, J.S. (1978). The effect of imagining an event on expectations for the event: An interpretation in terms of the availability heuristic. Journal of Experimental Social Psychology, 14, 88-96.

Combs, B. & Slovic, P. (1979). Newspaper coverage of causes of death. Journalism Quarterly 56, 837-843. Reber, R. (2004). Availability. In R. Pohl (Ed.), Cognitive illusions (pp. 147-163). Hove, UK: Psychology Press. Tversky, A., & Kahneman, D. (1973). Availability: a heuristic for judging frequency and probability. Cognitive Psychology 5, 207-232.

External links
(http://www.posbase.uib.no/posbase/Presentasjoner/P_Tversky &Kahneman (1973).ppt) - A Powerpoint presentation on the classical experiments about the availability heuristic by Tversky and Kahneman. (http://posbase.uib.no/posbase/Presentasjoner/P_Lichtenstein et al.(1978).ppt) - A Powerpoint presentation on the frequency estimate study by Lichtenstein and colleagues. Test Yourself: Decision Making and the Availability Heuristic (http://www.learner.org/ discoveringpsychology/11/e11expand.html)



Contagion heuristic
The contagion heuristic is a psychological heuristic leading people to avoid contact with people or objects viewed as "contaminated" by previous contact with someone or something viewed as bad, or, less often, to seek contact with objects that have been in contact with people or things considered good. For example, we tend to view food that has touched the ground as contaminated by the ground, and therefore unfit to eat, or we view a person who has touched a diseased person as likely to carry the disease (regardless of the actual contagiousness of the disease). The contagion heuristic includes "magical thinking", such as viewing a sweater worn by Adolf Hitler as bearing his negative essence and capable of transmitting it to another wearer. The perception of essence-transfer extends to rituals to purify items viewed as spiritually contaminated, such as having Mother Teresa wear Hitler's sweater to counteract his essence.[1]

References
[1] Heuristics and Biases: The Psychology of Intuitive Judgement by Daniel Kahneman, p. 212.

Further reading
Nemeroff, C., & Rozin, P. (2000). "The makings of the magical mind: The nature of function of sympathetic magic." In K. S. Rosengren, C. N. Johnson, & P. L. Harris (Eds.), Imagining the impossible: Magical, scientific, and religious thinking in children (pp.1-34). New York: Cambridge University Press.

Effort heuristic
In psychology, an effort heuristic is a rule of thumb in which the value of an object is assigned based on the amount of perceived effort that went into producing it. An example is the comparison of $100 earned with $100 found: if someone finds $100, they might spend it on a whim, but if that $100 is part of their paycheck, they are not going to waste it. The effort heuristic can also be seen in the amount of effort a person is willing to put into an action depending on the goal: if the goal is of little importance, the amount of effort the person is willing to invest is lower. The effort heuristic can also affect the perceived quality rating and financial value of objects. Kruger et al.[1] found that people who were told that a poem required 18 hours to write rated it as higher quality and gave it a higher appraised value than did people who were told that it took only 4 hours to write. They found a similar effect in the valuation of paintings. In a third study, the researchers asked students to rate the quality of medieval armor that was shown in pictures and accompanied by a description that included manufacturing time. For the pieces of armor that were shown in clear pictures, there was only a small difference in ratings between those pieces that had long versus short manufacturing times, but when the pictures were blurry, the students gave substantially higher quality ratings to pieces of armor with long manufacturing times, and lower ratings to the same pieces when the description listed only a short manufacturing time. The blurry-picture manipulation suggests that people are prone to rely on perceived effort to value objects when other criteria are not readily available.



References
[1] Justin Kruger, Derrick Wirtz, Leaf Van Boven, and T. William Altermatt (2004). "The effort heuristic". Journal of Experimental Social Psychology 40 (1): 91-98. doi:10.1016/S0022-1031(03)00065-9.

Escalation of commitment
Escalation of commitment was first described by Barry M. Staw in his 1976 paper, "Knee deep in the big muddy: A study of escalating commitment to a chosen course of action".[1] More recently the term sunk cost fallacy has been used to describe the phenomenon where people justify increased investment in a decision, based on the cumulative prior investment, despite new evidence suggesting that the cost, starting today, of continuing the decision outweighs the expected benefit. Such investment may include money, time, or, in the case of military strategy, human lives. The phenomenon and the sentiment underlying it are reflected in such proverbial images as "throwing good money after bad" and "in for a dime, in for a dollar" (or "in for a penny, in for a pound").

The term is also used to describe poor decision-making in business, government, information systems in general, software project management in particular, politics, and gambling. It has been used to describe the United States' commitment to military conflicts, including Vietnam in the 1960s-1970s and Iraq in the 2000s, where dollars spent and lives lost were used to justify continued involvement.[2]

Alternatively, irrational escalation (sometimes referred to as irrational escalation of commitment or commitment bias) is a term frequently used in psychology, philosophy, economics, and game theory to refer to a situation in which people make irrational decisions based upon rational decisions in the past or to justify actions already taken. Examples are frequently seen when parties engage in a bidding war; the bidders can end up paying much more than the object is worth to justify the initial expenses associated with bidding (such as research), as well as out of a competitive instinct.

Examples
The dollar auction is a thought exercise demonstrating the concept.

After a heated and aggressive bidding war, Robert Campeau ended up buying Bloomingdale's for an estimated 600 million dollars more than it was worth. The Wall Street Journal noted that "we're not dealing in price anymore but egos". Campeau was forced to declare bankruptcy soon afterwards.[3]

Often, when two competing brands are attempting to increase market share, they end up spending money without either increasing market share in a significant manner. This can be seen as a commercial application of the Red Queen hypothesis.

Another example was the competition in the 1970s between the McDonnell Douglas DC-10 and Lockheed L-1011 jetliners, two planes that were similar in size and architecture and therefore competed in the same niche of the market. In the end both products cannibalised each other's sales and ultimately drove their manufacturers out of the commercial airliner business.



References
[1] Barry M. Staw: "Knee-deep in the Big Muddy: A Study of Escalating Commitment to a Chosen Course of Action". Organizational Behavior and Human Performance 16(1):27-44. [2] Barry Schwartz, The Sunk-Cost Fallacy, Bush Falls Victim to a Bad New Argument for the Iraq War (http://www.slate.com/id/2125910/), Slate.com, Sept. 9, 2005, retrieved 6-11-08. [3] Max H. Bazerman: Negotiating Rationally, January 1, 1994 (ISBN 0-02-901986-9).

Familiarity heuristic
In psychology, the familiarity heuristic is a rule of thumb in which current behavior is judged to be correct based on how similar it is to past behavior and its outcomes. Individuals assume that the circumstances underlying the past behavior still hold true for the present situation and that the past behavior thus can be correctly applied to the new situation. The familiarity heuristic was developed following the discovery of the availability heuristic by Tversky and Kahneman. It can be applied to various situations that individuals experience in real life when these situations appear similar to previous situations, especially if the individuals are experiencing a high cognitive load. This heuristic is useful in most situations and can be applied to many fields of knowledge, including medicine, psychology, sports, marketing, outdoor activities, and consumer choices.

Definition and history


The familiarity heuristic stems from the availability heuristic which was studied by Tversky and Kahneman. The availability heuristic suggests that the likelihood of events is estimated based on how many examples of such events come to mind. Thus the familiarity heuristic shows how "bias of availability is related to the ease of recall."[1] Tversky and Kahneman created an experiment in order to test this heuristic. They devised four lists of 39 names. Each list contained 19 female names and 20 male names. Half of the lists had famous female names, and the other half had famous male names. They showed the lists to two test groups. The first group was shown a list and asked to recall as many names as possible. The second group was shown a list and asked to determine if there were more female or more male names. The subjects who heard the list with famous female names said there were more female names than there were male names. Similarly, the subjects who heard the list with famous male names recalled more male names than female names. Thus the familiarity heuristic is defined as "judging events as more frequent or important because they are more familiar in memory."[1] The familiarity heuristic is based on using schemas or past actions as a scaffold for behavior in a new (yet familiar) situation. This is useful because it saves time for the subject who is trying to figure out the appropriate behavior for a situation they have experienced before. Individuals automatically assume that their previous behavior will yield the same results when a similar situation arises. This technique is typically useful. However, certain behaviors can be inappropriate when the situation is different from the time before.

Important research
Recent studies have used functional magnetic resonance imaging (fMRI) to demonstrate that people use different areas of the brain when reasoning about familiar and unfamiliar situations.[2] This holds true over different kinds of reasoning problems. Familiar situations are processed in a system involving the frontal and temporal lobes, whereas unfamiliar situations are processed in the frontal and parietal lobes. These two similar but dissociated processes provide a biological explanation for the differences between heuristic reasoning and formal logic. Monin (2004) showed that judgments of the familiarity of human faces are influenced by attractiveness. In this study Monin showed his subjects pictures of faces. The subjects were asked to rate how familiar the face was or was not using visual cues. The visual cues were choosing a picture of a butterfly (attractive) when the subject thought the face was familiar, and choosing a picture of a rat (unattractive) when the subject did not find the face familiar.

The result of this study was that subjects rated faces as more familiar when they were attractive, regardless of prior exposure to the picture (or person) itself. This has been referred to as the warm glow effect, which states that positive stimuli seem more familiar because of the positive emotions they evoke in us.[3]


Examples
Avalanche victims
To see whether or not familiarity would hinder subjects, McCammon (2004) looked at subjects who had been caught in an avalanche (211 subjects) and those who had not (56 subjects). In most cases familiarity aided the subjects when navigating the terrain. Subjects who were familiar with the terrain took more risks. The risks typically helped the subjects, but there were a few situations where the risks hindered them.[4]

Hindsight bias
The hindsight bias states that people perceive certain events to be more predictable after the fact than they seemed before they had occurred. People believe that a disaster could have been avoided when they are actually misattributing familiar knowledge to a time before it was available.

Applications
The familiarity heuristic increases the likelihood that customers will repeatedly buy products of the same brand. This concept is known as brand familiarity in consumer behavior. Due to the familiarity heuristic, the customers have the rule of thumb that their past behavior of buying this specific brand's product was most likely correct and should be repeated. A study examining the choice of various models of microwave ovens based on the subjects' familiarity with them showed that high familiarity with the features of microwave ovens allowed for a faster and more confident choice.[5] This effect can also have important implications for medical decision making. Lay people tend to make health decisions that are based on familiarity and availability as opposed to factual knowledge about diseases. This means that they are more likely to take actions and pursue treatment options that have worked in the past, whether they are effective in the current situation or not. This also extends to treatments the patient has not used before but is familiar with. For example, a lay person may request a name-brand medication because they have heard of it before, even though a generic drug may be essentially the same but less expensive. Medical professionals are much more likely to use scientific facts to prescribe treatments.[6]

Current criticisms
There is some criticism of the concept of the familiarity heuristic. It mainly focuses on the point that past behavior does influence present behavior, but that this influence rests on a different cognitive model than the familiarity heuristic. One study examining multiple possible mechanisms of how previous behavior influences present behavior found little support for the familiarity heuristic.[7] The study showed that the influence of past behavior on a present one decreased when subjects were distracted. However, in order for a heuristic to be valid, its effect should be more prevalent when individuals are distracted and their cognitive capacity is highly strained. This result indicates that it is unlikely that a familiarity heuristic was applied during the experiment. Another limitation of the familiarity heuristic, according to a study by Ouellette and Wood, is that it might not always be applicable.[8] This study showed that the familiarity heuristic might only occur in situations where the target behavior is habitual and occurs in a stable context within the situation. Thus, the familiarity heuristic could be limited to habits and behaviors in routine situations.



References
[1] Ashcraft, M.H. (2006). Cognition. Upper Saddle River, New Jersey: Pearson Education Inc. [2] Goel, Vinod, Milan Makale & Jordan Grafman (2004). "The Hippocampal System Mediates Logical Reasoning about Familiar Spatial Environments." Journal of Cognitive Neuroscience, volume 16 issue 4, pp. 654-664. [3] Corneille, O., Monin, B., Pleyers, G. (2004). "Is positivity a cue or a response option? Warm glow vs evaluative matching in the familiarity for attractive and not-so-attractive faces." Journal of Experimental Social Psychology, 41, pp. 431-437. [4] McCammon, Ian (2004). "Heuristic Traps in Recreational Avalanche Accidents: Evidence and Implications" (http://avtraining-admin.org/pubs/McCammonHTraps.pdf). [5] Park, W., Lessig, P. (1981). "Familiarity and its impact on consumer decision biases and heuristics." Journal of Consumer Research, 8, pp. 223-230. [6] Cytryn, K.N. (2001). Lay reasoning and decision-making related to health and illness. Found in Dissertation Abstracts International: The Sciences and Engineering, p. 1200. [7] Albarracin, D., Wyer, R. (2000). "The cognitive impact of past behavior: influences on beliefs, attitudes, and future behavioral decisions." Journal of Personality and Social Psychology, 79, p. 522. [8] Ouellette, J., Wood, W. (1998). "Habit and intention in everyday life: the multiple processes by which past behavior predicts future behavior." Psychological Bulletin, 124, pp. 54-74.

Fluency heuristic
A fluency heuristic in psychology is a mental heuristic in which, if one of two objects is processed more fluently, faster, or more smoothly, the mind infers that this object has the higher value with respect to the question being considered (Jacoby & Brooks, 1984). See processing fluency.

Gambler's fallacy
The gambler's fallacy, also known as the Monte Carlo fallacy (because its most famous example happened in a Monte Carlo casino in 1913)[1] or the fallacy of the maturity of chances, is the belief that if deviations from expected behaviour are observed in repeated independent trials of some random process, future deviations in the opposite direction are then more likely. For example, if a fair coin is tossed repeatedly and tails comes up a larger number of times than is expected, a gambler may incorrectly believe that this means that heads is more likely in future tosses.[2] Or if a slot machine does not pay a jackpot over a long period, a gambler may incorrectly believe that it is due for a win. Such an expectation could be mistakenly referred to as being "due", and it probably arises from everyday experiences with nonrandom events (such as when a scheduled train is late, where it can be expected that it has a greater chance of arriving the later it gets). This is an informal fallacy. It is also known colloquially as the law of averages.

What is true instead are the law of large numbers (in the long term, averages of independent trials will tend to approach the expected value, even though individual trials are independent) and regression toward the mean (following a rare extreme event, say a run of 10 heads, the next event is likely to be less extreme, i.e. the next run of heads is likely to be shorter than 10, simply because extreme events are rare).

The gambler's fallacy implicitly involves an assertion of negative correlation between trials of the random process and therefore involves a denial of the exchangeability of outcomes of the random process. In other words, one implicitly assigns a higher chance of occurrence to an event even though from the point of view of "nature" or the "experiment", all such events are equally probable (or distributed in a known way).

The reversal is also a fallacy, in which a gambler may instead decide that tails are more likely out of some mystical preconception that fate has thus far allowed for consistent results of tails; the false conclusion being: why change if odds favor tails? Again, the fallacy is the belief that the "universe" somehow carries a memory of past results which tend to favor or disfavor future outcomes.

The conclusion of this reversed gambler's fallacy may be correct, however, if the empirical evidence suggests that an initial assumption about the probability distribution is false. If a coin is tossed ten times and lands "heads" ten times, the gambler's fallacy would suggest an even-money bet on "tails", while the reverse gambler's fallacy (not to be confused with the inverse gambler's fallacy) would suggest an even-money bet on "heads". In this case, the smart bet is "heads" because the empirical evidence (ten "heads" in a row) suggests that the coin is likely to be biased toward "heads", contradicting the (general) assumption that the coin is fair.


An example: coin-tossing
The gambler's fallacy can be illustrated by considering the repeated toss of a fair coin. With a fair coin, the outcomes in different tosses are statistically independent and the probability of getting heads on a single toss is exactly 1/2 (one in two). It follows that the probability of getting two heads in two tosses is 1/4 (one in four) and the probability of getting three heads in three tosses is 1/8 (one in eight). In general, if we let Ai be the event that toss i of a fair coin comes up heads, then we have:

Pr(A1 ∩ A2 ∩ ... ∩ An) = Pr(A1) × Pr(A2) × ... × Pr(An) = 1/2^n.

Simulation of coin tosses: Each frame, a coin is flipped which is red on one side and blue on the other. The result of each flip is added as a colored dot in the corresponding column. As the pie chart shows, the proportion of red versus blue approaches 50-50 (the Law of Large Numbers). But the difference between red and blue does not systematically decrease to zero.

Now suppose that we have just tossed four heads in a row, so that if the next coin toss were also to come up heads, it would complete a run of five successive heads. Since the probability of a run of five successive heads is only 1/32 (one in thirty-two), a believer in the gambler's fallacy might believe that this next flip is less likely to be heads than to be tails. However, this is not correct, and is a manifestation of the gambler's fallacy; the event of 5 heads in a row and the event of "first 4 heads, then a tails" are equally likely, each having probability 1/32. Given that the first four tosses turn up heads, the probability that the next toss is a head is in fact

Pr(A5 | A1 ∩ A2 ∩ A3 ∩ A4) = Pr(A5) = 1/2.

While the probability of a run of five heads is only 1/32 = 0.03125, it is only that before the coin is first tossed. After the first four tosses the results are no longer unknown, so their probabilities are 1. Reasoning that it is more likely that the next toss will be a tail than a head due to the past tosses, that a run of luck in the past somehow influences the odds in the future, is the fallacy.



Explaining why the probability is 1/2 for a fair coin


We can see from the above that, if one flips a fair coin 21 times, then the probability of 21 heads is 1 in 2,097,152. However, the probability of flipping a head after having already flipped 20 heads in a row is simply 1/2. This is an application of Bayes' theorem.

This can also be seen without knowing that 20 heads have occurred for certain (without applying Bayes' theorem). Consider the following two probabilities, assuming a fair coin:

probability of 20 heads, then 1 tail = 0.5^20 × 0.5 = 0.5^21
probability of 20 heads, then 1 head = 0.5^20 × 0.5 = 0.5^21

The probability of getting 20 heads then 1 tail, and the probability of getting 20 heads then another head, are both 1 in 2,097,152. Therefore, it is equally likely to flip 21 heads as it is to flip 20 heads and then 1 tail when flipping a fair coin 21 times. Furthermore, these two probabilities are equally as likely as any other 21-flip combination that can be obtained (there are 2,097,152 in total); all 21-flip combinations will have probabilities equal to 0.5^21, or 1 in 2,097,152. From these observations, there is no reason to assume at any point that a change of luck is warranted based on prior trials (flips), because every outcome observed will always have been as likely as the other outcomes that were not observed for that particular trial, given a fair coin. Therefore, just as Bayes' theorem shows, the result of each trial comes down to the base probability of the fair coin: 1/2.
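The same point can be checked empirically. The following sketch (illustrative, not from the article) simulates a long sequence of fair coin flips and measures how often a run of four heads is followed by a fifth head; the answer hovers around 1/2.

    import random

    # Monte Carlo check: among the tosses that follow a run of four heads,
    # roughly half are heads. The seed and sample size are arbitrary choices.
    random.seed(1)
    flips = [random.choice("HT") for _ in range(1_000_000)]
    next_after_four_heads = [
        flips[i + 4]
        for i in range(len(flips) - 4)
        if flips[i:i + 4] == ["H", "H", "H", "H"]
    ]
    print(next_after_four_heads.count("H") / len(next_after_four_heads))  # ~0.5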

Other examples
There is another way to emphasize the fallacy. As already mentioned, the fallacy is built on the notion that previous failures indicate an increased probability of success on subsequent attempts. This is, in fact, the inverse of what actually happens, even on a fair chance of a successful event, given a set number of iterations. Assume a fair 16-sided die, where a win is defined as rolling a 1, and assume a player is given 16 rolls to obtain at least one win; his probability of winning is 1 - p(rolling no ones). The low winning odds are just to make the change in probability more noticeable. The probability of having at least one win in the 16 rolls is:

1 - (15/16)^16 ≈ 64.4%

However, assume now that the first roll was a loss (a 93.75%, or 15/16, chance of that). The player now has only 15 rolls left and, according to the fallacy, should have a higher chance of winning since one loss has occurred. His chances of having at least one win are now:

1 - (15/16)^15 ≈ 62.0%

Simply by losing one toss the player's probability of winning dropped by about 2 percentage points. By the time this reaches 5 losses (11 rolls left), his probability of winning on one of the remaining rolls will have dropped to about 50%. The player's odds for at least one win in those 16 rolls have not increased given a series of losses; his odds have decreased because he has fewer iterations left to win. In other words, the previous losses in no way contribute to the odds of the remaining attempts, but there are fewer remaining attempts to gain a win, which results in a lower probability of obtaining it. The player becomes more likely to lose in a set number of iterations as he fails to win, and eventually his probability of winning will again equal the probability of winning a single toss, when only one toss is left: 6.25% in this instance. (These values are computed in the sketch below.)

Some lottery players will choose the same numbers every time, or intentionally change their numbers, but both are equally likely to win any individual lottery draw. Copying the numbers that won the previous lottery draw gives an equal probability, although a rational gambler might attempt to predict other players' choices and then deliberately avoid these numbers. Low numbers (below 31 and especially below 12) are popular because people play birthdays as their so-called lucky numbers; hence a win in which these numbers are over-represented is more likely to result in a shared payout.
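The numbers in the die example above follow from one line of arithmetic, as this sketch (illustrative only) computes for several values of rolls remaining:

    # Chance of at least one win (rolling a 1 on a fair 16-sided die)
    # in the given number of remaining rolls.
    def p_at_least_one_win(rolls_left, p_win=1 / 16):
        return 1 - (1 - p_win) ** rolls_left

    for rolls_left in (16, 15, 11, 1):
        print(rolls_left, round(p_at_least_one_win(rolls_left), 4))
    # prints ~0.6439, ~0.6202, ~0.5083, 0.0625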

A joke told among mathematicians demonstrates the nature of the fallacy. When flying on an aircraft, a man decides to always bring a bomb with him. "The chances of an aircraft having a bomb on it are very small," he reasons, "and certainly the chances of having two are almost none!" A similar example is in the book The World According to Garp, when the hero Garp decides to buy a house a moment after a small plane crashes into it, reasoning that the chances of another plane hitting the house have just dropped to zero.

A real-world example is that parents who have had several children of the same sex often believe that they are more likely to finally have a child of the opposite sex, much as Henry VIII of England hoped when trying so desperately for a son. While the Trivers-Willard hypothesis describes a slight change in a woman's likelihood of bearing males versus females over the course of her life, the chance of either sex remains very close to 50% for each birth, regardless of what the parents may hope for their next child.

The most famous example happened in a Monte Carlo casino in the summer of 1913, when the ball fell in black 26 times in a row, an extremely uncommon occurrence (but no more or less common than any of the other 67,108,863 possible sequences of 26 balls, neglecting the 0 or 00 spots on the wheel), and gamblers lost millions of francs betting against black after the black streak happened. Gamblers reasoned incorrectly that the streak was causing an "imbalance" in the randomness of the wheel, and that it had to be followed by a long streak of red.[1]


Non-examples of the fallacy


There are many scenarios where the gambler's fallacy might superficially seem to apply, but actually does not.

When the probabilities of different events are not independent, the probability of future events can change based on the outcome of past events (see statistical permutation). Formally, the system is said to have memory. An example of this is cards drawn without replacement. If an ace is drawn from a deck and not reinserted, the next draw is less likely to be an ace and more likely to be of another rank: the odds of drawing another ace, assuming that it was the first card drawn and that there are no jokers, have decreased from 4/52 (7.69%) to 3/51 (5.88%), while the odds for each other rank have increased from 4/52 (7.69%) to 4/51 (7.84%). This type of effect is what allows card counting schemes to work, for example in the game of blackjack; the sketch at the end of this section works through these numbers.

Meanwhile, the reversed gambler's fallacy may appear to apply in the story of Joseph Jagger, who hired clerks to record the results of roulette wheels in Monte Carlo. He discovered that one wheel favored nine numbers and won large sums of money until the casino started rebalancing the roulette wheels daily. In this situation, the observation of the wheel's behavior provided information about the physical properties of the wheel rather than its "probability" in some abstract sense, a concept which is the basis of both the gambler's fallacy and its reversal. Even a biased wheel's past results will not affect future results, but the results can provide information about what sort of results the wheel tends to produce. However, if it is known for certain that the wheel is completely fair, then past results provide no information about future ones.

The outcome of future events can be affected if external factors are allowed to change the probability of the events (e.g., changes in the rules of a game affecting a sports team's performance levels). Additionally, an inexperienced player's success may decrease after opposing teams discover his or her weaknesses and exploit them. The player must then attempt to compensate and randomize his strategy (see game theory).

Many riddles trick the reader into believing that they are an example of the gambler's fallacy, such as the Monty Hall problem.
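A short sketch (illustrative only) of the card-drawing numbers above, using exact fractions:

    from fractions import Fraction

    # Drawing without replacement has memory: after one ace is removed,
    # the odds of another ace fall while those of every other rank rise.
    p_ace_first = Fraction(4, 52)          # ~7.69%
    p_ace_second = Fraction(3, 51)         # ~5.88%
    p_other_rank_second = Fraction(4, 51)  # ~7.84%
    for p in (p_ace_first, p_ace_second, p_other_rank_second):
        print(p, f"{float(p):.2%}")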



Non-example: unknown probability of event


When the probability of repeated events is not known, outcomes may not be equally probable. In the case of coin tossing, as a run of heads gets longer and longer, the likelihood that the coin is biased towards heads increases. If one flips a coin 21 times in a row and obtains 21 heads, one might rationally conclude a high probability of bias towards heads, and hence conclude that future flips of this coin are also highly likely to be heads. In fact, Bayesian inference can be used to show that when the long-run proportion of different outcomes is unknown but exchangeable (meaning that the random process from which the outcomes are generated may be biased but is equally likely to be biased in any direction), previous observations demonstrate the likely direction of the bias, such that the outcome which has occurred the most in the observed data is the most likely to occur again.[3]
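As a concrete sketch of that Bayesian reasoning, under an assumed uniform prior on the coin's bias (the prior is an assumption made here, not specified in the article), Laplace's rule of succession gives the probability that the next flip is heads after h heads in n flips as (h + 1) / (n + 2):

    from fractions import Fraction

    # Laplace's rule of succession: with a uniform prior on the bias,
    # P(next flip is heads | h heads in n flips) = (h + 1) / (n + 2).
    def next_heads_probability(heads, flips):
        return Fraction(heads + 1, flips + 2)

    print(next_heads_probability(21, 21))  # 22/23, about 0.956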

Psychology behind the fallacy


Amos Tversky and Daniel Kahneman proposed that the gambler's fallacy is a cognitive bias produced by a psychological heuristic called the representativeness heuristic.[4] [5] According to this view, "after observing a long run of red on the roulette wheel, for example, most people erroneously believe that black will result in a more representative sequence than the occurrence of an additional red",[6] so people expect that a short run of random outcomes should share properties of a longer run, specifically in that deviations from average should balance out. When people are asked to make up a random-looking sequence of coin tosses, they tend to make sequences where the proportion of heads to tails stays closer to 0.5 in any short segment than would be predicted by chance;[7] Kahneman and Tversky interpret this to mean that people believe short sequences of random events should be representative of longer ones.[8] The representativeness heuristic is also cited behind the related phenomenon of the clustering illusion, according to which people see streaks of random events as being non-random when such streaks are actually much more likely to occur in small samples than people expect.[9]

References
[1] Lehrer, Jonah (2009). How We Decide. New York: Houghton Mifflin Harcourt. p. 66. ISBN 978-0-618-62011-1. [2] Colman, Andrew (2001). "Gambler's Fallacy - Encyclopedia.com" (http://www.encyclopedia.com/doc/1O87-gamblersfallacy.html). A Dictionary of Psychology. Oxford University Press. Retrieved 2007-11-26. [3] O'Neill, B. and Puza, B.D. (2004). Dice have no memories but I do: A defence of the reverse gambler's belief (http://cbe.anu.edu.au/research/papers/pdf/STAT0004WP.pdf). Reprinted in abridged form as O'Neill, B. and Puza, B.D. (2005). In defence of the reverse gambler's belief. The Mathematical Scientist 30(1), pp. 13-16. [4] Tversky, Amos; Daniel Kahneman (1974). "Judgment under uncertainty: Heuristics and biases". Science 185 (4157): 1124-1131. doi:10.1126/science.185.4157.1124. PMID 17835457. [5] Tversky, Amos; Daniel Kahneman (1971). "Belief in the law of small numbers". Psychological Bulletin 76 (2): 105-110. doi:10.1037/h0031322. [6] Tversky & Kahneman, 1974. [7] Tune, G.S. (1964). "Response preferences: A review of some relevant literature". Psychological Bulletin 61 (4): 286-302. doi:10.1037/h0048618. PMID 14140335. [8] Tversky & Kahneman, 1971. [9] Gilovich, Thomas (1991). How we know what isn't so. New York: The Free Press. pp. 16-19. ISBN 0-02-911706-2.



Gaze heuristic
The gaze heuristic is a heuristic employed by people when trying to catch a ball. Experimental studies have shown that people do not act as though they were solving a system of differential equations describing the forces acting on the ball while it is in the air and then running to the place at which the ball is predicted to hit the ground. Instead they fixate the ball with their eyes and move so as to keep the angle of gaze either constant or within a certain range. Moving in this fashion ensures that the catcher ends up where the ball comes down.[1] [2]
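A toy one-dimensional simulation can make the claim concrete. In the sketch below (all numbers are illustrative assumptions, not from the cited studies), a catcher who simply moves so that the gaze elevation angle to a falling ball stays constant converges on the landing point without ever computing a trajectory.

    import math

    # Toy illustration of the gaze heuristic: hold the gaze angle constant
    # and the catcher converges on the ball, with no trajectory prediction.
    dt, g, eye = 0.005, 9.81, 1.8             # time step, gravity, eye height
    bx, by, vx, vy = 0.0, 25.0, 10.0, 0.0     # ball high in the air, moving right
    cx = 60.0                                 # catcher's starting position
    angle = math.atan2(by - eye, cx - bx)     # gaze angle to hold constant

    while by > eye:
        bx, vy = bx + vx * dt, vy - g * dt
        by += vy * dt
        # step to the position that keeps the gaze angle unchanged
        cx = bx + (by - eye) / math.tan(angle)

    print(f"ball reaches eye height at x = {bx:.1f}, catcher at x = {cx:.1f}")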

References
[1] "ScienceDirect - Psychology of Sport and Exercise : Fast and frugal heuristics in sports" (http:/ / www. sciencedirect. com/ science?_ob=ArticleURL& _udi=B6W6K-4KJ5T7G-1& _user=10& _rdoc=1& _fmt=& _orig=search& _sort=d& view=c& _acct=C000050221& _version=1& _urlVersion=0& _userid=10& md5=5697baffe205a95161b4034b9db04064). www.sciencedirect.com. . Retrieved 2008-01-16. [2] "Gut Feelings" (The Intelligence of the Unconscious) By Gerd Gigerenzer. Viking, 2007.

Naive diversification
Naïve diversification is a choice heuristic (also known as the "diversification heuristic"[1]). It was first demonstrated by Itamar Simonson in marketing, in the context of consumption decisions by individuals.[2] It was subsequently shown in the context of economic and financial decisions. Simonson showed that when people have to make a simultaneous choice (e.g. choose now which of six snacks to consume in the next three weeks), they tend to seek more variety (e.g., pick more kinds of snacks) than when they make sequential choices (e.g., choose once a week which of six snacks to consume that week for three weeks). That is, when asked to make several choices at once, people tend to diversify more than when making the same type of decision sequentially.

Subsequent research replicated the effect in a field experiment: on Halloween night, young trick-or-treaters were required to make either a simultaneous or a sequential choice between the candies they received. The results showed a strong diversification bias when choices had to be made simultaneously, but not when they were made sequentially.[3] Shlomo Benartzi and Richard Thaler commented on Read and Loewenstein's research: "This result is striking since in either case the candies are dumped into a bag and consumed later. It is the portfolio in the bag that matters, not the portfolio selected at each house."[4]

Following on the naive diversification shown by children, Benartzi and Thaler turned to study whether the effect manifests itself among investors making decisions in the context of defined contribution saving plans. They found that "some investors follow the '1/n strategy': they divide their contributions evenly across the funds offered in the plan. Consistent with this naïve notion of diversification, we find that the proportion invested in stocks depends strongly on the proportion of stock funds in the plan." This finding is particularly troubling in the context of laypersons making financial decisions, because they may be diversifying in a way that is sub-optimal.
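The 1/n strategy is mechanical enough to state in a few lines. In this sketch (the fund menus are hypothetical), the saver's stock exposure is set entirely by the menu's composition, not by any preference:

    # The "1/n strategy": split contributions evenly across the offered funds,
    # so stock exposure simply tracks the share of stock funds on the menu.
    def stock_share(funds):
        return sum(1 for kind in funds if kind == "stock") / len(funds)

    plan_a = ["stock", "bond"]                    # saver ends up 50% in stocks
    plan_b = ["stock", "stock", "stock", "bond"]  # same saver, now 75% in stocks
    print(stock_share(plan_a), stock_share(plan_b))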



References
[1] Read, Daniel, and George Loewenstein. 1995. "Diversification Bias: Explaining the Discrepancy in Variety Seeking between Combined and Separated Choices." Journal of Experimental Psychology: Applied 1:34-49. [2] Simonson, Itamar. 1990. "The Effect of Purchase Quantity and Timing on Variety-Seeking Behavior." Journal of Marketing Research 27:150-162. [3] Read, Daniel, and George Loewenstein. 1995. "Diversification Bias: Explaining the Discrepancy in Variety Seeking between Combined and Separated Choices." Journal of Experimental Psychology: Applied 1:34-49. [4] Benartzi, Shlomo, and Richard H. Thaler. 2001. "Naïve Diversification Strategies in Defined Contribution Saving Plans." American Economic Review 91:79-98.

Peak-end rule
According to the peak-end rule, we judge our past experiences almost entirely on how they were at their peak (pleasant or unpleasant) and how they ended. Other information is not lost, but it is not used; this includes net pleasantness or unpleasantness and how long the experience lasted. In one experiment, one group of people was subjected to loud, painful noises. A second group was exposed to the same loud, painful noises as the first group, after which somewhat less painful noises were appended. The second group rated the experience of listening to the noises as much less unpleasant than the first group, despite having been subjected to more total discomfort: they experienced the same initial duration, and then an extended duration of reduced unpleasantness.

This heuristic was first suggested by Daniel Kahneman and others. He argues that because people seem to perceive not the sum of an experience but its average, it may be an instance of the representativeness heuristic.
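One common formalization of the rule scores a remembered experience as the average of its worst moment and its final moment, ignoring duration. The ratings in the sketch below are invented for illustration:

    # Peak-end score: the average of the most intense moment and the last one.
    def peak_end(discomfort_ratings):
        return (max(discomfort_ratings) + discomfort_ratings[-1]) / 2

    short_painful = [6, 8, 8]       # ends at the peak
    extended = [6, 8, 8, 4, 3]      # same peak, milder ending, more total pain
    print(peak_end(short_painful))  # 8.0
    print(peak_end(extended))       # 5.5 -> remembered as less unpleasant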

References
Kahneman, D. (1999). Objective Happiness. In Kahneman, D., Diener, E. and Schwarz, N. (eds.), Well-Being: The Foundations of Hedonic Psychology. New York: Russell Sage. pp. 3-25.



Recognition heuristic
The recognition heuristic has been used as a model in the psychology of judgment and decision making and as a heuristic in artificial intelligence. It states:[1] [2]

If one of two objects is recognized and the other is not, then infer that the recognized object has the higher value with respect to the criterion.

Daniel Goldstein and Gerd Gigerenzer quizzed students in Germany and the United States on the populations of both German and American cities. Each group scored slightly higher on the foreign cities despite only recognizing a fraction of them. The experimenters theorized that the students could attain such high accuracy on foreign cities only if they relied on the heuristic and particular conditions (concerning cue validity, for example) were met. They posited the heuristic as a domain-specific strategy for inference. In later research, Daniel M. Oppenheimer presented participants with pairs of cities made from actual cities and fictional cities. Although the recognition heuristic predicts that participants would judge the actual (recognizable) cities to be larger, participants judged the fictional (unrecognizable) cities to be larger, showing that more than recognition can play a role in such inferences.[3] Research by Newell & Fernandez[4] and Richter & Späth tests the non-compensatory prediction of the recognition heuristic and states that "recognition information is not used in an all-or-none fashion but is integrated with other types of knowledge in judgment and decision making."[5]
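Stated as a procedure, the rule only discriminates when exactly one of the two objects is recognized. The function and city set below are hypothetical illustrations:

    # The recognition rule: if exactly one of two objects is recognized,
    # infer that the recognized one has the higher criterion value.
    def recognition_heuristic(a, b, recognized):
        if (a in recognized) != (b in recognized):
            return a if a in recognized else b
        return None  # both or neither recognized: the heuristic is silent

    recognized = {"Munich", "Berlin"}
    print(recognition_heuristic("Munich", "Heidenheim", recognized))  # Munich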

Notes
[1] Goldstein, D. G., & Gigerenzer, G. (2002). Models of ecological rationality: The recognition heuristic. Psychological Review, 109, 75-90. Full text (PDF) (http://www.dangoldstein.com/papers/RecognitionPsychReview.pdf). [2] Goldstein, D. G., & Gigerenzer, G. (1999). The recognition heuristic: How ignorance makes us smart. In G. Gigerenzer, & P. M. Todd (Eds.), Simple heuristics that make us smart. Oxford: Oxford University Press. [3] Oppenheimer, D.M. (2003). Not so Fast! (and not so Frugal!): Rethinking the Recognition Heuristic. Cognition, 90, B1-B9. [4] Newell, B. R. & Fernandez, D. (2006). On the binary quality of recognition and the inconsequentiality of further knowledge: Two critical tests of the recognition heuristic. Journal of Behavioral Decision Making, 19(4): 333-346. [5] Richter, T., & Späth, P. (2006). Recognition is used as one cue among others in judgment and decision making. Journal of Experimental Psychology: Learning, Memory & Cognition, 32, 150-162.



Representativeness heuristic
The representativeness heuristic is a psychological term describing a phenomenon wherein people judge the probability or frequency of a hypothesis by considering how much the hypothesis resembles available data, as opposed to using a Bayesian calculation. While often very useful in everyday life, it can also result in neglect of relevant base rates and other cognitive biases. The representativeness heuristic was first proposed by Amos Tversky and Daniel Kahneman.[1] In causal reasoning, the representativeness heuristic leads to a bias toward the belief that causes and effects will resemble one another (examples include both the belief that "emotionally relevant events ought to have emotionally relevant causes", and magical associative thinking).[2]

Examples
Tom W.
In a study done in 1973, Kahneman and Tversky gave their subjects the following information: "Tom W. is of high intelligence, although lacking in true creativity. He has a need for order and clarity, and for neat and tidy systems in which every detail finds its appropriate place. His writing is rather dull and mechanical, occasionally enlivened by somewhat corny puns and by flashes of imagination of the sci-fi type. He has a strong drive for competence. He seems to feel little sympathy for other people and does not enjoy interacting with others. Self-centered, he nonetheless has a deep moral sense."

The subjects were then divided into three groups, who were given different decision tasks:

One group of subjects was asked how similar Tom W. was to a student in one of nine types of college graduate majors (business administration, computer science, engineering, humanities/education, law, library science, medicine, physical/life sciences, or social science/social work). Most subjects associated Tom W. with an engineering student, and thought he was least like a student of social science/social work.

A second group of subjects was asked instead to estimate the probability that Tom W. was a grad student in each of the nine majors. The probabilities were in line with the judgments from the previous group.

A third group of subjects was asked to estimate the proportion of first-year grad students there were in each of the nine majors.

The second group's probability estimates tracked how representative they thought Tom W. was of each of the majors, rather than the base rate probability of being that kind of student in the first place (estimated by the third group). Had the subjects anchored their answers on the base rates, their estimated probability that Tom W. was an engineer would have been much lower, as there were few engineering grad students at the time.



The Taxicab problem


In another study done by Tversky and Kahneman, subjects were given the following problem: "A cab was involved in a hit and run accident at night. Two cab companies, the Green and the Blue, operate in the city. 85% of the cabs in the city are Green and 15% are Blue. A witness identified the cab as Blue. The court tested the reliability of the witness under the same circumstances that existed on the night of the accident and concluded that the witness correctly identified each one of the two colors 80% of the time and failed 20% of the time. What is the probability that the cab involved in the accident was Blue rather than Green knowing that this witness identified it as Blue?"

Most subjects gave probabilities over 50%, and some gave answers over 80%. The correct answer, found using Bayes' theorem, is lower than these estimates:

There is a 12% chance (15% times 80%) of the witness correctly identifying a blue cab.

There is a 17% chance (85% times 20%) of the witness incorrectly identifying a green cab as blue.

There is therefore a 29% chance (12% plus 17%) the witness will identify the cab as blue.

This results in a 41% chance (12% divided by 29%) that the cab identified as blue is actually blue.
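The same arithmetic, written out as a sketch with exact fractions:

    from fractions import Fraction

    # The taxicab problem worked through Bayes' theorem.
    p_blue, p_green = Fraction(15, 100), Fraction(85, 100)
    p_id_blue_given_blue = Fraction(80, 100)   # witness accuracy
    p_id_blue_given_green = Fraction(20, 100)  # witness error rate

    p_id_blue = p_blue * p_id_blue_given_blue + p_green * p_id_blue_given_green
    p_blue_given_id_blue = p_blue * p_id_blue_given_blue / p_id_blue
    print(p_blue_given_id_blue, float(p_blue_given_id_blue))  # 12/29, ~0.41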

Representativeness is also cited as underlying the similar effects seen in the gambler's fallacy, the regression fallacy and the conjunction fallacy.

Representativeness, Extensionality, and Bayes' Theorem


The representativeness heuristic violates one of the fundamental properties of probability: extensionality. For example, participants were provided with a description of Linda that resembles a feminist. Then participants were asked to evaluate the probability of her being a feminist, the probability of her being a bank teller, or the probability of being both a bank teller and feminist. Probability theory dictates that the probability of being both a bank teller and feminist (the conjunction of two sets) must be less than or equal to the probability of being either a feminist or a bank teller.

However, participants judged the conjunction (bank teller and feminist) as being more probable than being a bank teller alone.[3]

Use of the representativeness heuristic will likely lead to violations of Bayes' theorem, which states:

P(H|D) = P(D|H) × P(H) / P(D)


However, judgments by representativeness look only at the resemblance between the hypothesis and the data, and thus the inverse probabilities are equated:

P(H|D) = P(D|H)

As can be seen, the base rate P(H) is ignored in this equation, leading to the base rate fallacy. This was explicitly tested by Dawes, Mirels, Gold and Donahue (1993),[4] who had people judge both the base rate of people who had a particular personality trait and the probability that a person who had a given personality trait had another one. For example, participants were asked how many people out of 100 answered true to the question "I am a conscientious person" and also, given that a person answered true to this question, how many would answer true to a different personality question. They found that participants equated the inverse probabilities (e.g., treating P(trait A | trait B) as equal to P(trait B | trait A)) even when it was obvious that they were not the same (the two questions were answered immediately after each other).
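A tiny numerical sketch (the counts are invented) shows why the two inverse probabilities need not match:

    # Hypothetical counts out of 100 people: 80 say "true" to trait B
    # (conscientious), 10 to a rarer trait A, and 8 say "true" to both.
    both, trait_b, trait_a = 8, 80, 10
    p_a_given_b = both / trait_b  # P(A|B) = 0.10
    p_b_given_a = both / trait_a  # P(B|A) = 0.80
    print(p_a_given_b, p_b_given_a)  # equating these ignores the base rates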

Disjunction Fallacy
In addition to extensionality violation, base-rate neglect, and the conjunction fallacy, the use of Representativeness Heuristic may lead to a Disjunction Fallacy. From probability theory the disjunction of two events is at least as likely as either of the events individually. For example, the probability of being either a physics or biology major is at least as likely as being a physics major, if not more likely. However, when a personality description (data) seems to be very representative of a physics major (e.g., pocket protector) over a biology major, people judge that it is more likely for this person to be a physics major than a natural sciences major (which is a superset of physics). Further evidence that the Representativeness Heuristic may be causal to the Disjunction Fallacy comes from Bar-Hillel and Neter (1986).[5] They found that people judge a person who is highly representative of being a statistics major (e.g., highly intelligent, does math competitions) as being more likely to be a statistics major than a social sciences major (superset of statistics), but they do not think that he is more likely to be a Hebrew language major than a humanities major (superset of Hebrew language). Thus, only when the person seems highly representative of a category is that category judged as more probable than its superordinate category. These incorrect appraisals remained even in the face of losing real money in bets on probabilities.

References
[1] Tversky, A., Kahneman, D. (1974). Judgment under Uncertainty: Heuristics and Biases. Science, New Series, Vol. 185, No. 4157, pp. 1124-1131. [2] Nisbett, R., Ross, L. (1980). Human Inference: Strategies and Shortcomings of Social Judgment. Prentice Hall, Englewood Cliffs NJ, pp. 115-118. [3] Tversky, A., Kahneman, D. (1983). Extensional versus intuitive reasoning: The conjunction fallacy in probability judgments. Psychological Review, 90, 293-315. [4] Dawes, Mirels, Gold, Donahue (1993). Equating inverse probabilities in implicit personality judgments. Psychological Science, 4(6), 396-400. [5] Bar-Hillel, M., Neter, E. (1986). How alike is it? versus how likely is it?: A disjunction fallacy in probability judgments. Journal of Personality and Social Psychology, 65, 1119-1131.

Baron, J. (2000). Thinking and Deciding (3d ed.). Cambridge University Press. Plous, S. (1993). The Psychology of Judgement and Decision Making New York: McGraw-Hill Kahneman, D., & Tversky, A. (1973). On the Psychology of Prediction. Psychological Review, 80, 237-251. Tversky, A., & Kahneman, D. (1982). Evidential Impact of Base Rates. In D. Kahneman, P. Slovic, & A. Tversky (Eds.), Judgment under Uncertainty: Heuristics and Biases. Cambridge: Cambridge University Press.



External links
(http://posbase.uib.no/posbase/Presentasjoner/K_Representativeness.ppt) - A Powerpoint presentation on the representativeness heuristic (with further links to presentations of classical experiments). (http://posbase.uib.no/posbase/Presentasjoner/P_Tversky &Kahneman (1983).ppt) - A Powerpoint presentation on the conjunction fallacy.

Scarcity heuristic
In human psychology, the scarcity heuristic is a mental heuristic in which the mind values something based on how easily it might be lost, especially to competitors. For example, take a group of boys playing marbles. Each player has at least one of every color of marble except blue; only one boy has a blue marble. By the scarcity heuristic, that boy and his playmates will value the blue marble more because there is only one, regardless of whether the blue marble is "better" (more aesthetically attractive, or better in the marbles game, for instance).

Similarity heuristic
The similarity heuristic is a lesser-known psychological heuristic pertaining to how people make judgments based on similarity. More specifically, the similarity heuristic is used to account for how people make judgments based on the similarity between current situations and other situations or prototypes of those situations. At its most basic level, the similarity heuristic is an adaptive strategy. The goal of the similarity heuristic is maximizing productivity through favorable experience while not repeating unfavorable experiences. Decisions based on how favorable or unfavorable the present seems are based on how similar the past was to the current situation. For example, a person may use the similarity heuristic when deciding on a book purchase. If a novel has a plot similar to that of novels read and enjoyed or the author has a writing style similar to that of favored authors, the purchasing decision will be positively influenced. A book with similar characteristics to previously pleasurable books is likely to also be enjoyed, causing the person to decide to obtain it.

Background
The similarity heuristic directly emphasizes learning from past experience. For example, the similarity heuristic has been observed indirectly in experiments such as phonological similarity tests. These tests observe how well a person can distinguish similar sounds from dissimilar ones based on a comparison to previously heard sounds. While not involving the decision-making process characteristic of heuristics in general, these studies show a reliance on past experience and comparison to the current experience. In addition, the similarity heuristic has become a valuable tool in the fields of economics and consumer research.

Real-world examples
The similarity heuristic is very easy to observe in the world of business, both from a marketing standpoint and from the position of the consumer. People tend to let past experience shape their world view; thus, if something presents itself as similar to a good experience had in the past, it is likely that the individual will partake in the current experience. The reverse holds true for situations that have proven unfavorable. A very basic example of this concept is a person deciding to get a meal at a particular restaurant because it reminds them of a similar establishment.



Marketing
Companies often use the similarity heuristic as a marketing strategy. For example, companies will often advertise their services as something similar to a successful competitor's, but better. Such a concept is evident in the motion picture industry: trailers for upcoming films will promote the latest movie as being made by a particular director, citing said director's past film credentials. In effect, a similarity heuristic is created in an audience's mind; creating a similarity between the coming attraction and past successes will likely make people decide to see the upcoming film.

Automotive parts companies and their distributors and dealers leverage similarity heuristics when they interchange the terms "OEM" (original equipment manufacturer) and "OE" (original equipment). For example, the OE design specifications may ask for a certain durability factor, corrosion resistance, and material composition. The OEM realizes it can produce the same part less expensively, and with possibly greater profit, if it does not adhere to all or most of the OE design specifications. By marketing their product as "OEM" against a well-known brand or product (e.g., Mercedes-Benz), they predict that enough customers will purchase the OEM product instead of the OE product. The converse happens when the OE factory (e.g., Mercedes-Benz) promotes its brand of a commodity product (e.g., anti-freeze/coolant, spark plugs, etc.) as superior or of better quality than the generic commodity product.

In addition, the use of a reverse similarity heuristic can be a highly valuable marketing tool. For example, when Nintendo wished to launch its Nintendo Entertainment System (NES) in the United States, it did so in the middle of a video game depression; Atari had managed to make video games one of the least popular American pastimes. Initial showings of the NES were met poorly; clearly, a similarity heuristic was in place, and people had created biases against anything relating to interactive television gaming. Nintendo's goal, then, became the differentiation of its system from past examples. Employing a dissimilarity heuristic, Nintendo managed to create enough of a gap from the former video game industry to market a successful product.

Problem Solving
Some professionals, such as software developers, regularly use the similarity heuristic, for example when performing debugging tasks. A software bug exhibits a set of symptoms indicating the existence of a problem, and, in general, similar symptoms are caused by similar types of programming errors. By comparing these symptoms with those of previously corrected software flaws, a developer is able to determine the most probable cause and take an effective course of action. Over time, a developer's accumulated experience makes this use of the similarity heuristic highly effective, quickly selecting the debugging approach most likely to reveal the problem's source; a minimal sketch of this idea follows. Problem solving in general benefits from the similarity heuristic: when a new problem resembles previous problems, the heuristic selects an approach that previously yielded favorable results, and even if the current problem is novel, any similarity to previous issues helps in choosing a proper course of action.
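As an illustration only (not drawn from the literature above), the following Python sketch shows how such a similarity-based lookup of past bug reports might work; the bug descriptions, their causes, and the use of difflib's string similarity as a stand-in for expert judgment are all invented for the example.

import difflib

# Hypothetical memory of previously diagnosed bugs: symptom -> known cause.
PAST_BUGS = {
    "intermittent timeout when the connection pool is exhausted": "connection pool too small",
    "crash with null pointer after an empty server response": "missing response validation",
    "report page slows down as the orders table grows": "missing database index",
}

def similarity(a, b):
    """Crude textual similarity in [0, 1], standing in for a developer's judgment."""
    return difflib.SequenceMatcher(None, a.lower(), b.lower()).ratio()

def most_probable_cause(new_symptom):
    """Return the cause of the most similar previously seen symptom."""
    best = max(PAST_BUGS, key=lambda known: similarity(new_symptom, known))
    return PAST_BUGS[best], similarity(new_symptom, best)

cause, score = most_probable_cause("requests time out when many clients connect")
print(cause, round(score, 2))  # the past fix most similar to the new symptom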


External links
Colet, Ed (1999). Interpreting Data Mining Results: The Influence of Heuristics (http://www.taborcommunications.com/dsstar/99/0202/100543.html). Retrieved February 21, 2006.
Russell, Stuart (1988). Analogy by Similarity (http://www.cs.berkeley.edu/~russell/papers/helman88-similarity.pdf). Retrieved February 21, 2006.
Read, Daniel; Grushka-Cockayne, Yael (2007). The Similarity Heuristic (http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1030517).
Walker, Donald (2007). Similarity Determination and Case Retrieval in an Intelligent Decision Support System for Diabetes Management (http://etd.ohiolink.edu/send-pdf.cgi/Walker%20Donald.pdf?acc_num=ohiou1194562654).

Simulation heuristic
The simulation heuristic is a psychological heuristic, or simplified mental strategy, according to which people determine the likelihood of an event based on how easy it is to picture the event mentally. Partly as a result, people feel greater regret over missed outcomes that are easy to imagine, such as "near misses", than over outcomes that were never close. The simulation heuristic was first theorized by Daniel Kahneman and Amos Tversky as a specialized adaptation of the availability heuristic to explain counterfactual thinking and regret; however, it should not be thought of as the same thing as the availability heuristic. Specifically, the simulation heuristic is defined as the tendency of perceivers to substitute 'normal' antecedent events for exceptional ones when psychologically 'undoing' a specific outcome. Kahneman and Tversky also believed that people use this heuristic to understand and predict others' behavior in certain circumstances and to answer questions involving counterfactual propositions: people, they argued, mentally undo events that have occurred and then run mental simulations of the events with the corresponding input values of the altered model.

For example, one study presented participants with a scenario describing two men who were delayed by half an hour in a traffic jam on the way to the airport. Both men missed the flights on which they were booked, one by half an hour and the second by only five minutes (because his flight had been delayed for 25 minutes). A greater number of participants thought that the second man would be more upset than the first. Kahneman and Tversky argued that this difference could not be attributed to disappointment, because both men had expected to miss their flights. They believed instead that the participants used the simulation heuristic: it was easier to imagine minor alterations that would have enabled the second man to arrive in time for his flight than to devise the same alterations for the first man.

History
This heuristic was introduced by the Israeli psychologists Daniel Kahneman (born 1934) and Amos Tversky (1937–96), first in a lecture in 1979 and then in a book chapter published in 1982.

Simulation Different from Availability


Subjective probability judgments of an event made via the simulation heuristic do not follow the availability heuristic: such judgments are not driven by the recall of relevant examples from memory but are instead based on the ease with which self-generated, fictitious examples can be mentally simulated or imagined.


Application
The theory underlying the simulation heuristic assumes that one's judgments are biased toward information that is easily imagined or simulated mentally. This is why we see biases involving the overestimation of how causally plausible an event could be, or the enhanced regret experienced when it is easy to mentally undo an unfortunate event, such as an accident. Significant research on the simulation heuristic's application to counterfactual reasoning has been performed by Dale T. Miller and Bryan Taylor.

For example, they found that if an affectively negative experience, such as a fatal car accident, was brought about by an extraordinary event (say, someone who usually takes the train to work instead drove), the simulation heuristic will cause an emotional reaction of regret. This reaction occurs because the exceptional event is easy to mentally undo and replace with a more common one that would not have caused the accident.

Kahneman and Tversky also did a study in which two individuals were given lottery tickets and then were given the opportunity to sell those tickets back, either two weeks before the drawing or an hour before the drawing. When this question was put to participants, their responses showed that they believed the man who had sold his ticket an hour before the drawing would experience the greater anticipatory regret when that ticket won. Kahneman and Tversky explained these findings through norm theory, stating that people's anticipatory regret, along with reluctance to sell the ticket, should increase with the ease of imagining themselves still owning the winning ticket.[1] Therefore, the man who recently sold his ticket will experience more regret, because the counterfactual world in which he is the winner is perceived as closer for him than for the man who sold his ticket two weeks ago. This example shows the bias in this type of thinking, because both men had the same probability of winning if they had not sold their tickets, and the difference in when they sold did not increase or decrease those chances.

Similar results were found with plane crash survivors: these individuals experienced a greater amount of anticipatory regret when they had engaged in the highly mutable action of switching flights at the last minute. It was reasoned that anticipating counterfactual thoughts about a negative event tends to make the event more vivid, and thus more subjectively likely.[2]

Implications in Real-World Situations


This heuristic has been shown to be a salient feature of clinical anxiety and its disorders, which are marked by elevated subjective probability judgments that future negative events will happen to the individual. A study by David Raune and Andrew MacLeod tried to tie the cognitive mechanisms underlying this type of judgment to the simulation heuristic.

Their findings showed that anxious patients' simulation heuristic scores were correlated with subjective probability: the more reasons anxious patients could think of why negative events would happen, relative to the number of reasons why they would not, the higher their subjective probability judgment that the events would happen to them. Further, anxious patients displayed increased access to the simulation compared with control patients. The researchers also found support for the hypothesis that the easier it was for anxious patients to form the visual image, the greater the subjective probability that the event would happen to them. Through this work they proposed that the main clinical implication is that, in order to lower elevated subjective probability in clinical anxiety, patients should be encouraged to think of more reasons why negative events will not occur than why they will.


How it is Affected by other Heuristics


Philip Broemer conducted a study to test the hypothesis that the subjective ease with which one can imagine a symptom moderates the impact of differently framed messages on attitudes toward performing health behaviors. Drawing on the simulation heuristic, he argued that the vividness of information is reflected in the subjective ease with which people can imagine having symptoms of an illness.

His results showed that the impact of message framing on attitudes was moderated by ease of imagination, and clearly supported the congruency hypothesis for different kinds of health behavior. Negatively framed messages led to more positive attitudes when the recipients of these messages could easily imagine the relevant symptoms; ease of imagination thus facilitates persuasion when messages emphasize potential health risks. Positive framing, however, led to more positive attitudes when imagining the symptoms was rather difficult. Therefore, a message with a reassuring theme is more congruent with a recipient's state of mind when he or she cannot easily imagine the symptoms, whereas a message with an aversive theme is more congruent with a recipient's state of mind when he or she can easily imagine having the symptoms.

Footnotes
[1] Gilovich p. 372
[2] Gilovich p. 374

References
Bouts, Patrick; Spears, Russell; Van Der Pligt, Joop (1992). "Counterfactual processing and the correspondence between events and outcomes: Normality versus value". European Journal of Social Psychology 22 (4): 387–96. doi:10.1002/ejsp.2420220407.
Colman, Andrew M. (2001). A Dictionary of Psychology (Oxford Paperback Reference Series) (http://books.google.com/books?id=2tenQgAACAAJ). Oxford University Press. ISBN 9780198662112.
Fiedler, Klaus (1996). "Simulation Heuristic" (http://books.google.com/books?id=m1O9PHDwfxAC). The Blackwell Encyclopedia of Social Psychology. Wiley-Blackwell. ISBN 9780631202899.
Raune, David; MacLeod, Andrew; Holmes, Emily A. (2005). "The simulation heuristic and visual imagery in pessimism for future negative events in anxiety". Clinical Psychology & Psychotherapy 12 (4): 313–25. doi:10.1002/cpp.455.
Gilovich, Thomas; Griffin, Dale W.; Kahneman, Daniel (2002). Heuristics and Biases: The Psychology of Intuitive Judgement (http://books.google.com/books?id=FfTVDY-zrCoC). Cambridge University Press. pp. 374–75. ISBN 9780521796798.
Broemer, Philip (2004). "Ease of imagination moderates reactions to differently framed health messages". European Journal of Social Psychology 34 (2): 103. doi:10.1002/ejsp.185.


Further reading
Goldman, Alvin I (2006). Simulating Minds: The Philosophy, Psychology, and Neuroscience of Mindreading (http://books.google.com/books?id=vsKpQ1An4hcC). Oxford University Press US. ISBN 9780195138924.
Hewstone, M; Manstead, A. S. R (1996). The Blackwell Encyclopedia of Social Psychology (http://books.google.com/books?id=m1O9PHDwfxAC). Wiley-Blackwell. ISBN 9780631202899.
Sanna, Lawrence J. (2006). Judgments Over Time: The Interplay of Thoughts, Feelings, and Behaviors (http://books.google.com/books?id=Q9gju6gMFUcC). Oxford University Press US. ISBN 9780195177664.

Social proof
Social proof, also known as informational social influence, is a psychological phenomenon whereby people assume the actions of others reflect correct behavior for a given situation. The effect is prominent in ambiguous social situations where people are unable to determine the appropriate mode of behavior, and is driven by the assumption that the surrounding people possess more knowledge about the situation. The effects of social influence can be seen in the tendency of large groups to conform to choices which may be either correct or mistaken, a phenomenon sometimes referred to as herd behavior. Although social proof reflects a rational motive to take into account the information possessed by others, formal analysis shows that it can cause people to converge too quickly upon a single choice, so that decisions of even large groups of individuals may be grounded in very little information (see information cascades, and the simulation sketch below).

Social proof is a type of conformity. When a person is in a situation where they are unsure of the correct way to behave, they will often look to others for cues concerning the correct behavior. When "we conform because we believe that others' interpretation of an ambiguous situation is more accurate than ours and will help us choose an appropriate course of action,"[1] it is informational social influence. This is contrasted with normative social influence, wherein a person conforms to be liked or accepted by others. Social proof often leads not just to public compliance (conforming to the behavior of others publicly without necessarily believing it is correct) but to private acceptance (conforming out of a genuine belief that others are correct).[2] Social proof is more powerful when being accurate is more important and when others are perceived as especially knowledgeable.
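To make the information-cascade point concrete, here is a small Python simulation in the spirit of the cascade literature; it is a sketch under stated assumptions, not a reproduction of any particular study, and the threshold rule and parameters are illustrative. Each agent receives a noisy private signal about which of two options is better, observes all earlier choices, and ignores its own signal once the observed majority leads by two or more; a couple of unlucky early signals can then lock the whole group into the wrong choice.

import random

def run_cascade(n_agents=100, p_correct=0.6, seed=None):
    """Sequential binary choice where each agent observes its predecessors."""
    rng = random.Random(seed)
    choices = []
    for _ in range(n_agents):
        # Private signal: points to the true best option "A" with prob p_correct.
        signal = "A" if rng.random() < p_correct else "B"
        lead = choices.count("A") - choices.count("B")
        if lead >= 2:
            choices.append("A")     # cascade: follow the crowd, ignore own signal
        elif lead <= -2:
            choices.append("B")     # a cascade on the wrong option is also possible
        else:
            choices.append(signal)  # no clear majority yet: use own information
    return choices

# Count how often the group mostly settles on the wrong option "B".
wrong = sum(run_cascade(seed=s).count("B") > 50 for s in range(1000))
print(f"{wrong} of 1000 simulated groups converged on the wrong choice")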

Mechanisms
Multiple source effect
The multiple source effect occurs when people give more credence to ideas that are stated by multiple sources. The effect can be clearly seen when social proof occurs: for instance, one study found that people who hear five positive reviews of a book read by five different synthesized voices perceive the book more favorably than if they hear the same five reviews read by a single synthesized voice.[3]

Uncertainty about the correct conclusion


Uncertainty is a major factor that encourages the use of social proof. One study found that when evaluating a product, consumers were more likely to incorporate the opinions of others through the use of social proof when their own experiences with the product were ambiguous, leaving uncertainty as to the correct conclusion that they should make.[4]


Similarity to the surrounding group


Similarity also motivates the use of social proof; when a person perceives themselves as similar to the people around them, they are more likely to adopt and perceive as correct the observed behavior of these people. This has been noted in areas such as the use of laugh tracks, where participants will laugh longer and harder when they perceive the people laughing to be similar to themselves.[5]

Research
Early research
The most famous study of social proof is Muzafer Sherif's 1935 experiment.[6] In this experiment subjects were placed in a dark room and asked to look at a dot of light about 15 feet away. They were then asked how much, in inches, the dot of light was moving. In reality it was not moving at all, but due to the autokinetic effect it appeared to move. How much the light appears to move varies from person to person but is generally consistent over time for each individual. A few days later a second part of the experiment was conducted. Each subject was paired with two other subjects and asked to give their estimate of how much the light was moving out loud. Even though the subjects had previously given different estimates, the groups would come to a common estimate. To rule out the possibility that the subjects were simply giving the group answer to avoid looking foolish while still believing their original estimate was correct, Sherif had the subjects judge the lights again by themselves after doing so in the group. They maintained the group's judgment. Because the movement of the light is ambiguous, the participants were relying on each other to define reality.

Another study looked at informational social influence in eyewitness identification. Subjects were shown a slide of the "perpetrator". They were then shown a slide of a line-up of four men, one of whom was the perpetrator they had seen, and were asked to pick him out. The task was made difficult to the point of ambiguity by presenting the slides very quickly. The task was done in a group that consisted of one actual subject and three confederates (people acting as subjects but actually working for the experimenter). The confederates answered first and all three gave the same wrong answer. In a high-importance condition of the experiment subjects were told that they were participating in a real test of eyewitness identification ability that would be used by police departments and courts, and their scores would establish the norm for performance. In a low-importance condition subjects were told that the slide task was still being developed and that the experimenters had no idea what the norm for performance was; they were just looking for useful hints to improve the task. It was found that when subjects thought the task was of high importance they were more likely to conform, giving the confederates' wrong answer 51% of the time as opposed to 35% of the time in the low-importance condition.[7]

Cultural effects on social proof


The strength of social proof also varies across different cultures. For instance, studies have shown that subjects in collectivist cultures conform to others' social proof more often than those in individualist cultures.[8]

Copycat suicides
Further information: Copycat suicide
Social proof has been proposed as an explanation for copycat suicide, where suicide rates increase following media publication about suicides.[9] One study using agent-based modeling showed that copycat suicides are more likely when there are similarities between the person involved in the publicized suicide and the potential copycats.[10]


Examples
In social interactions
The social value of unfamiliar people is ambiguous and requires a lot of effort to assess accurately. Given limited time and motivation, people will often evaluate others based on how the surrounding people behave towards them. For example, if a man is seen in the company of attractive women, or is associated with them, then his social value and attractiveness will be perceived to be greater. The implied cognition in this case would be: "All those girls seem to really like him; there must be something about him that's high value." If he is seen to be rejected by many women, his social value will be judged negatively. The implied cognition is then: "I just saw him being rejected by many women; there is probably a good reason why they don't like him."

The concept of social proof, together with the fundamental attribution error, can be exploited by persuading (or paying) attractive women to display (or at least fake) public interest in a man. Other people will attribute the women's behavior to the man's character and are unlikely to consider that the women are interested in him for other reasons (external gain). Some men use photos of themselves surrounded by attractive women to enhance their perceived social value. The effectiveness of such tactics, without support from other consistent behaviors associated with high social value, is questionable.

Some nightclub and bar owners effectively employ social proof to increase the popularity of their venues. This is usually done by deliberately reducing the rate at which people are allowed to enter, thus artificially lengthening the line. Uninformed customers might perceive the long line as a signal of the place's desirability and may wait in the line merely because "if all these people are waiting, the place must be good," while in fact the venue is mediocre and nowhere near its full capacity.

In employment
Similarly, a person who has been unemployed for a long time may have a hard time finding a new job, even if they are highly skilled and qualified. Potential employers wrongly attribute the person's lack of employment to the person rather than to the situation. This causes the potential employers to search more intensively for flaws or other negative characteristics that are "congruent" with or explain the person's failure, and to discount the applicant's virtues.

Conversely, a person who is in high demand, for example a CEO, may continue to get many attractive job offers and can as a result extract a considerable wage premium, even if his or her objective performance has been poor. When people appear successful, potential employers and others who evaluate them tend to search more intensively for virtues or positive characteristics that are "congruent" with or explain the person's success, and to ignore or underestimate the person's faults.

People who experience positive social proof may also benefit from a halo effect: other attributes are deemed to be more positive than they actually are. Additionally, the person's attributes may be viewed with a positive framing bias. For example, a person might be viewed as arrogant if they have negative social proof, and as bold if they have positive social proof. For these reasons, social proof is important in determining a potential employer's consideration set. Social proof naturally also applies to products and is used extensively in marketing and sales. Situations that violate social proof can cause cognitive dissonance, and can give people a sense of loss of control or of failure of the "just world hypothesis".


In entertainment
Theaters sometimes use specially planted audience members who are instructed to give ovations at pre-arranged times. Usually, these people clap first, and the rest of the audience follows. Such ovations may be perceived by non-expert audience members as signals of the performance's quality.

Despite the common annoyance at canned laughter in television shows, television studios have discovered that they can increase the perceived "funniness" of a show merely by playing canned laughter at key "funny" moments. They have found that even though viewers find canned laughter highly annoying, they perceive shows that use it as funnier than shows that do not.[9]

Modifiers
Possession of special knowledge
If one perceives that s/he is better advised about a situation than the surrounding group, then s/he is less likely to follow the group's behavior.

Identification with authority


If one perceives themselves as a relevant authority figure in the situation, they are less likely to follow the surrounding group's behavior. This is a combination of "Identification of the surrounding group with self" and "Possession of special knowledge". People in authority positions tend to place themselves in different categories than other people and usually they have special training or knowledge that allows them to conclude that they are better informed than the surrounding group.

"Smart money"
One might perceive particular groups of others, identified by their behavior or other characteristics, to be more reliable guides to the situation than the average person. One might think truck drivers to be more frequent, and therefore more experienced, diners on the road than others, and therefore give more weight to the number of trucks than to the number of cars parked outside when judging the quality of a restaurant. One might identify the movement of betting odds or securities prices at certain times as revealing the preferences of "smart money": those more likely to be in the know.

References
[1] Aronson, E., Wilson, T.D., & Akert, A.M. (2005). Social Psychology (5th ed.). Upper Saddle River, NJ: Prentice Hall.
[2] Kelman, H. C. (1 March 1958). "Compliance, identification, and internalization: three processes of attitude change". Journal of Conflict Resolution 2 (1): 51–60. doi:10.1177/002200275800200106.
[3] Lee, Kwan Min (1 April 2004). "The Multiple Source Effect and Synthesized Speech". Human Communication Research 30 (2): 182–207. doi:10.1111/j.1468-2958.2004.tb00730.x.
[4] Wooten, D.; Reed II, A. (1 January 1998). "Informational Influence and the Ambiguity of Product Experience: Order Effects on the Weighting of Evidence". Journal of Consumer Psychology 7 (1): 79–99. doi:10.1207/s15327663jcp0701_04.
[5] Platow, Michael J.; Haslam, S. Alexander; Both, Amanda; Chew, Ivanne; Cuddon, Michelle; Goharpey, Nahal; Maurer, Jacqui; Rosini, Simone; Tsekouras, Anna; Grace, Diana M. (1 September 2005). ""It's not funny if they're laughing": Self-categorization, social influence, and responses to canned laughter". Journal of Experimental Social Psychology 41 (5): 542–550. doi:10.1016/j.jesp.2004.09.005.
[6] Sherif, M. (1935). "A study of some social factors in perception". Archives of Psychology, 27 (187).
[7] Baron, Robert S.; Vandello, Joseph A.; Brunsman, Bethany (1 January 1996). "The forgotten variable in conformity research: Impact of task importance on social influence". Journal of Personality and Social Psychology 71 (5): 915–927. doi:10.1037/0022-3514.71.5.915.
[8] Bond, Rod; Smith, Peter B. (1996). "Culture and Conformity: A Meta-analysis of Studies Using Asch's (1952, 1956) Line Judgment Task". Psychological Bulletin 119: 111–37. doi:10.1037/0033-2909.119.1.111.
[9] Cialdini, Robert (1993). Influence (3rd ed.). New York: HarperCollins.
[10] Mesoudi, Alex; Jones, James Holland (2009). "The Cultural Dynamics of Copycat Suicide". PLoS ONE 4 (9): e7252. doi:10.1371/journal.pone.0007252.


Take-the-best heuristic
According to the take-the-best heuristic,[1] when making a judgment based on multiple criteria, the criteria are tried one at a time in order of their cue validity, and a decision is made based on the first criterion that discriminates between the alternatives. Gerd Gigerenzer and Daniel Goldstein discovered that the heuristic did surprisingly well at making accurate inferences in real-world environments, such as inferring which of two cities is larger. The heuristic has since been modified and applied to domains including medicine, artificial intelligence, and political forecasting.[2] [3] A minimal sketch of the procedure follows.
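As an illustration only, here is a small Python sketch of the take-the-best procedure described above; the cue names, their validity ordering, and the city data are invented for the example rather than taken from the original study.

def take_the_best(a, b, cues):
    """Compare two alternatives on cues ordered from most to least valid.

    Each cue maps an alternative to a binary value (1 = cue present).
    The first cue that discriminates decides; later cues are never consulted.
    """
    for cue_values in cues:
        va, vb = cue_values.get(a, 0), cue_values.get(b, 0)
        if va != vb:
            return a if va > vb else b
    return None  # no cue discriminates: the heuristic must guess

# Hypothetical cues for "which city is larger?", ordered by assumed validity.
cues = [
    {"Hamburg": 1, "Cologne": 1},  # has an international airport
    {"Hamburg": 1, "Cologne": 0},  # is a state capital
    {"Hamburg": 1, "Cologne": 1},  # has a major university
]

print(take_the_best("Hamburg", "Cologne", cues))  # -> Hamburg (second cue decides)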

References
[1] Gigerenzer, G. & Goldstein, D. G. (1996). "Reasoning the fast and frugal way: Models of bounded rationality". Psychological Review, 103, 650–669.
[2] Graefe, A. & Armstrong, J. S. (2010). "Predicting Elections from the Most Important Issue: A Test of the Take-the-best Heuristic". Journal of Behavioral Decision Making. doi:10.1002/bdm.710.
[3] Czerlinski, J., Goldstein, D. G., & Gigerenzer, G. (1999). "How good are simple heuristics?" In Gigerenzer, G., Todd, P. M. & the ABC Group, Simple Heuristics That Make Us Smart. New York: Oxford University Press.


Related Ideas
Aestheticism
Aestheticism (or the Aesthetic Movement) was a 19th-century European art movement that emphasized aesthetic values over socio-political themes in literature, fine art, the decorative arts, and interior design.[1] [2] Generally, it represents the same tendencies that symbolism or decadence represented in France, or decadentismo represented in Italy, and may be considered the British version of the same style. It was part of the anti-Victorian reaction of the 19th century and had post-Romantic origins, and as such anticipates modernism. It was a feature of the late 19th century, lasting from about 1868 to about 1900.

Aesthetic literature
The British decadent writers were much influenced by the Oxford professor Walter Pater and his essays published during 1867–68, in which he stated that life had to be lived intensely, with an ideal of beauty. His text Studies in the History of the Renaissance (1873) was very well regarded by art-oriented young men of the late 19th century.

The Peacock Room, designed by James Abbott McNeill Whistler, one of the most famous examples of Aesthetic-style interior design.

Writers of the Decadent movement used the slogan "Art for Art's Sake" (L'art pour l'art), the origin of which is debated. Some claim that it was invented by the philosopher Victor Cousin, although Angela Leighton, in On Form: Poetry, Aestheticism and the Legacy of a Word (2007), notes that the phrase was used by Benjamin Constant as early as 1804.[3] It is generally accepted to have been promoted by Théophile Gautier in France, who interpreted the phrase to suggest that there was not any real association between art and morality.

The artists and writers of the Aesthetic style tended to profess that the Arts should provide refined sensuous pleasure, rather than convey moral or sentimental messages. As a consequence, they did not accept John Ruskin and Matthew Arnold's utilitarian conception of art as something moral or useful. Instead, they believed that Art did not have any didactic purpose; it need only be beautiful. The Aesthetes developed a cult of beauty, which they considered the basic factor of art. Life should copy Art, they asserted. They considered nature crude and lacking in design when compared to art. The main characteristics of the style were: suggestion rather than statement, sensuality, great use of symbols, and synaesthetic effects, that is, correspondence between words, colours and music. Music was used to establish mood.

One of many Punch cartoons about æsthetes.

Predecessors of the Aesthetes included John Keats and Percy Bysshe Shelley, and some of the Pre-Raphaelites. In Britain the best representatives were Oscar Wilde and Algernon Charles Swinburne, both influenced by the French Symbolists, and James McNeill Whistler and Dante Gabriel Rossetti. The style and these poets were satirised by Gilbert and Sullivan's comic opera Patience and other works, such as F. C. Burnand's drama The Colonel, and in comic magazines such as Punch. Compton Mackenzie's novel Sinister Street makes use of the type as a phase through which the protagonist passes as he is influenced by older, decadent individuals. The novels of Evelyn Waugh, himself a young participant in aesthete society at Oxford, describe the aesthetes mostly satirically, but also from the standpoint of a former participant. Some names associated with this assemblage are Robert Byron, Evelyn Waugh, Harold Acton, Nancy Mitford, A.E. Housman and Anthony Powell.

Aesthetic visual arts


Artists associated with the Aesthetic style include James McNeill Whistler, Dante Gabriel Rossetti, and Aubrey Vincent Beardsley. Although the work of Edward Burne-Jones was exhibited at the Grosvenor Gallery which promoted the movement, it also contains narrative and conveys moral or sentimental messages hence it falls outside the given definition.

Aesthetic Movement decorative arts


The primary element of decorative art is utility. The convenient but trite maxim 'Art for Art's Sake', which identifies art or beauty as the primary element in other branches of the Aesthetic Movement, especially Fine Art, cannot apply in this context: decorative art must first have utility, but may also be beautiful.[4] Decorative art is thus dissociated from Fine Art.[5] Important elements of the Aesthetic Movement have been identified as Reform and Eastern Art.[6] The Government Schools of Design were founded from 1837 onwards in order to improve the design of British goods. Following the Great Exhibition of 1851, efforts were intensified and Oriental objects were purchased for the schools' teaching collections.

Owen Jones, architect and Orientalist, was requested to set out key principles of design, and these became not only the basis of the schools' teaching but also the propositions which preface The Grammar of Ornament (1856), which is still regarded as the finest systematic study or practical sourcebook of historic world ornament. Jones identified the need for a new and modern style which would meet the requirements of the modern world, rather than the continual re-cycling of historic styles, but saw no reason to reject the lessons of the past. Christopher Dresser, a student and later Professor at the school, worked with Owen Jones on The Grammar of Ornament, as well as on the 1863 decoration of the Oriental Courts (Chinese, Japanese, and Indian) at the South Kensington Museum, and advanced the search for a new style with his two publications The Art of Decorative Design (1862) and Principles of Design (1873).

Production of Aesthetic style furniture was limited to approximately the late 19th century. It is characterized by several common themes: ebonized wood with gilt highlights; Far Eastern influence; prominent use of nature, especially flowers, birds, ginkgo leaves, and peacock feathers; and blue and white on porcelain and other fine china.

Ebonized furniture means that the wood is painted or stained to a black ebony finish. The furniture is sometimes completely ebony-colored; more often, however, gilding is added to the carved surfaces of the feathers or stylized flowers that adorn it. As Aesthetic Movement decor resembled the corresponding writing style in its concern with sensuality and nature, nature themes often appear on the furniture: a typical feature is the gilded carved flower, or the stylized peacock feather. Colored paintings of birds or flowers are often seen, and non-ebonized Aesthetic Movement furniture may have realistic-looking, three-dimensional renditions of birds or flowers carved into the wood.

Contrasting with the ebonized-gilt furniture is the use of blue and white for porcelain and china. Similar themes of peacock feathers and nature would be used in blue and white tones on dinnerware and other crockery, and the blue and white design was also popular on square porcelain tiles.

It is reported that Oscar Wilde used aesthetic decorations during his youth. This aspect of the movement was also satirised by Punch magazine and in Patience. In 1882, Oscar Wilde visited Canada, where he toured the town of Woodstock, Ontario and gave a lecture on May 29 entitled "The House Beautiful".[7] This lecture featured the early Aesthetic art movement, also known as the "Ornamental Aesthetic" style, in which local flora and fauna were celebrated as beautiful, and textured, layered ceilings were popular. A striking example of this can be seen at Annandale National Historic Site, located in Tillsonburg, Ontario, Canada. The house was built in 1880 and decorated by Mary Ann Tillson, who happened to attend Oscar Wilde's lecture in Woodstock and was influenced by it. Since this style was only prevalent from about 1880 until about 1890, not many examples of it remain today.


Irrationalism and aestheticism


The philosophies of irrationalism and aestheticism formed as a cultural reaction against positivism during the early 20th century. These perspectives opposed or deemphasized the importance of the rationality of human beings; instead, they concentrated on the experience of one's own existence. Part of the philosophies involved claims that science was inferior to intuition. Art was considered especially prestigious, as it was held to represent the noumenon.

A Private View at the Royal Academy, 1881 by William Powell Frith, a satire of the Aesthetic style of dress. Oscar Wilde is depicted at the right, surrounded by admirers.

The style was not accepted greatly by the public, as the social system generally limited access to the art to the elite. Some of the proponents of this style were Fyodor Dostoevsky, Henri Bergson, Lev Shestov and Georges Sorel. Symbolism and existentialism derived from these philosophies.

References
[1] Fargis, Paul (1998). The New York Public Library Desk Reference (3rd ed.). Macmillan General Reference. p. 261. ISBN 0-02-862169-7.
[2] Denney, Colleen. At the Temple of Art: the Grosvenor Gallery, 1877–1890 (http://books.google.com/books?id=RvTZrnmy5RoC&pg=PA38), p. 38. Fairleigh Dickinson University Press, 2000. ISBN 0-8386-3850-3.
[3] Angela Leighton (2007), p. 32.
[4] Christopher Dresser. The Art of Decorative Design, 1862.
[5] The Illustrated London News LXXXI, Saturday, August 12, 1882, p. 175.
[6] Christopher Morley. "Reform and Eastern Art". Decorative Arts Society Journal, 2010.
[7] O'Brien (1982), p. 114.

Sources
Gaunt, William. The Aesthetic Adventure. New York: Harcourt, 1945.
Halen, Widar. Christopher Dresser, a Pioneer of Modern Design. Phaidon: 1990. ISBN 0-7148-2952-8.
Lambourne, Lionel. The Aesthetic Movement. Phaidon Press: 1996. ISBN 0-7148-3000-3.
O'Brien, Kevin. Oscar Wilde in Canada, an Apostle for the Arts. Personal Library, Publishers: 1982.
Snodin, Michael and John Styles. Design & The Decorative Arts, Britain 1500–1900. V&A Publications: 2001. ISBN 1-85177-338-X.
Morley, Christopher. "Reform and Eastern Art". Decorative Arts Society Journal, 2010.
Victoria and Albert Museum. "A Higher Ambition: Owen Jones (1809–74)" (http://www.vam.ac.uk/collection/paintings/features/owen-jones/index/).


External links
Aesthetes & Decadents on Victorian Web (http://www.victorianweb.org/victorian/decadence/decadentov.html)
Annandale National Historic Site (http://www.tillsonburg.ca/site/1256/default.aspx)
Books, Research & Information (http://www.achome.co.uk/architecture/aesthetic.htm)
"Aestheticism Style Guide" (http://www.vam.ac.uk/vastatic/microsites/british_galleries/bg_styles/Style08a/homepage.html). British Galleries, Victoria and Albert Museum. Retrieved 2007-07-17.

Attribute substitution
Attribute substitution is a psychological process thought to underlie a number of cognitive biases and perceptual illusions. It occurs when an individual has to make a judgment (of a target attribute) that is computationally complex, and instead substitutes a more easily calculated heuristic attribute.[1] This substitution is thought of as taking place in the automatic intuitive judgment system, rather than the more self-aware reflective system. Hence, when someone tries to answer a difficult question, they may actually answer a related but different question, without realizing that a substitution has taken place. This explains why individuals can be unaware of their own biases, and why biases persist even when the subject is made aware of them. It also explains why human judgments often fail to show regression toward the mean.[2] The theory of attribute substitution unifies a number of separate explanations of reasoning errors in terms of cognitive heuristics.[1] In turn, the theory is subsumed by an effort-reduction framework proposed by Anuj K. Shah and Daniel M. Oppenheimer, which states that people use a variety of techniques to reduce the effort of making decisions.[3]

Background
In a 1974 paper, psychologists Amos Tversky and Daniel Kahneman argued that a broad family of biases (systematic errors in judgment and decision making) were explainable in terms of a few heuristics (information-processing shortcuts), including availability and representativeness. In a 2002 revision of the theory, Kahneman and Shane Frederick proposed attribute substitution as a process underlying these and other effects.[2] In 1975, psychologist Stanley Smith Stevens proposed that the strength of a stimulus (e.g., the brightness of a light, or the severity of a crime) is encoded neurally in a way that is independent of modality. Kahneman and Frederick built on this idea, arguing that the target attribute and heuristic attribute could be very different in nature.[2]


Conditions
[P]eople are not accustomed to thinking hard, and are often content to trust a plausible judgment that comes to mind. Daniel Kahneman, American Economic Review 93 (5) December 2003, p. 1450

Kahneman and Frederick propose three conditions for attribute substitution:[2]

1. The target attribute is relatively inaccessible. Substitution is not expected to take place in answering factual questions that can be retrieved directly from memory ("What is your birthday?") or that concern current experience ("Do you feel thirsty now?").

2. An associated attribute is highly accessible. This might be because it is evaluated automatically in normal perception, or because it has been primed. For example, someone who has been thinking about their love life and is then asked how happy they are might substitute how happy they are with their love life, rather than assessing other areas of their life.

3. The substitution is not detected and corrected by the reflective system. For example, when asked "A bat and a ball together cost $1.10. The bat costs $1 more than the ball. How much does the ball cost?" many subjects incorrectly answer $0.10.[4] An explanation in terms of attribute substitution is that, rather than working out the sum, subjects parse $1.10 into a large amount and a small amount, which is easy to do. Whether they feel that is the right answer depends on whether they check the calculation with their reflective system. (A worked solution follows the list.)
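For reference, the correct answer follows from a one-line equation. With \(b\) the price of the ball in dollars:

\[ b + (b + 1.00) = 1.10 \quad\Rightarrow\quad 2b = 0.10 \quad\Rightarrow\quad b = 0.05. \]

The intuitive answer of $0.10 fails the reflective check: the bat would then cost $1.10 and the pair would total $1.20.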

Examples
In optical illusions
Attribute substitution would also explain the persistence of some illusions. For example, when subjects judge the size of two figures in a perspective picture, their apparent sizes can be distorted by the 3D context, making a convincing optical illusion. The theory states that the three-dimensional size of the figure (which is accessible because it is automatically computed by the visual system) is substituted for its two-dimensional size on the page. Experienced painters and photographers are less susceptible to this illusion, because the two-dimensional size is more accessible to their perception.[4]

Valuing insurance

This illusion works because 3D (perspective) size is substituted for 2D size

Kahneman gives an example where some Americans were offered insurance against their own death in a terrorist attack while on a trip to Europe, while another group were offered insurance that would cover death of any kind on the trip. Even though "death of any kind" includes "death in a terrorist attack," the former group were willing to pay more than the latter. Kahneman suggests that the attribute of fear is being substituted for a calculation of the total risks of travel.[5] Fear of terrorism for these subjects was stronger than a general fear of dying on a foreign trip.


Stereotypes
Stereotypes can be a source of heuristic attributes.[2] In a face-to-face conversation with a stranger, judging their intelligence is more computationally complex than judging the colour of their skin. So if the subject has a stereotype about the relative intelligence of whites, blacks and Asians, that racial attribute might substitute for the more intangible attribute of intelligence. The pre-conscious, intuitive nature of attribute substitution explains how subjects can be influenced by the stereotype while thinking that they have made an honest, unbiased evaluation of the other person's intelligence.

In judgments of morality and fairness


Legal scholar Cass Sunstein has argued that attribute substitution is pervasive when people reason about moral, political or legal matters.[6] Given a difficult, novel problem in these areas, people search for a more familiar, related problem (a "prototypical case") and apply its solution as the solution to the harder problem. According to Sunstein, the opinions of trusted political or religious authorities can serve as heuristic attributes when people are asked their own opinions on a matter. Another source of heuristic attributes is emotion: people's moral opinions on sensitive subjects like sexuality and human cloning may be driven by reactions such as disgust, rather than by reasoned principles.[7] Sunstein has been challenged as not providing enough evidence that attribute substitution, rather than other processes, is at work in these cases.[3]

The Beautiful-is-Familiar effect


Psychologist Benoît Monin reports a series of experiments in which subjects, looking at photographs of faces, have to judge whether they have seen those faces before. It is repeatedly found that attractive faces are more likely to be mistakenly labeled as familiar.[8] Monin interprets this result in terms of attribute substitution. The heuristic attribute in this case is a "warm glow": a positive feeling towards someone that might either be due to their being familiar or being attractive. This interpretation has been criticised, because not all the variance in the familiarity data is accounted for by the attractiveness of the photograph.[3]

Evidence
The most direct evidence, according to Kahneman,[4] is a 1973 experiment which used a psychological profile of Tom W., a fictional graduate student.[9] One group of subjects had to rate Tom's similarity to a typical student in each of nine academic areas (Law, Engineering, Library Science, etc.). Another group had to rate how likely it is that Tom specialised in each area. If these ratings of likelihood are governed by probability, then they should resemble the base rates, i.e. the proportion of students in each of the nine areas (which had been separately estimated by a third group). A probabilistic judgment would say that Tom is more likely to be a Humanities student than a Library Science student, because many more students study Humanities, and the additional information in the profile is vague and unreliable (a sketch of this normative reasoning follows). Instead, the ratings of likelihood matched the ratings of similarity almost perfectly, both in this study and in a similar one where subjects judged the likelihood of a fictional woman taking different careers. This suggests that rather than estimating probability using base rates, subjects had substituted the more accessible attribute of similarity.
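To make the normative benchmark explicit: a probability judgment should follow Bayes' rule,

\[ P(\text{field} \mid \text{profile}) \;\propto\; P(\text{profile} \mid \text{field})\, P(\text{field}), \]

where \(P(\text{field})\) is the base rate. Because the profile is vague and unreliable, the likelihood term \(P(\text{profile} \mid \text{field})\) differs little across fields, so the posterior should stay close to the base rates; the subjects' likelihood ratings instead tracked similarity.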


Further reading
Kahneman, Daniel; Shane Frederick (2004). "Attribute Substitution in Intuitive Judgment". In Mie Augier, James G. March, Models of a Man: Essays in Memory of Herbert A. Simon. MIT Press. pp. 411–432. ISBN 9780262012089. OCLC 52257877.
Kahneman, Daniel; Shane Frederick (2005). "A Model of Heuristic Judgment". In Keith James Holyoak, Robert G. Morrison, The Cambridge Handbook of Thinking and Reasoning. Cambridge University Press. pp. 267–294. ISBN 9780521824170. OCLC 56011371.
Kahneman, Daniel (December 8, 2002). "Maps of Bounded Rationality: A Perspective on Intuitive Judgement and Choice (Nobel Prize Lecture)" [10]. NobelPrize.org. The Nobel Foundation. Retrieved 2009-06-13.
Kahneman, Daniel (July 22, 2007). "Short Course in Thinking about Thinking" [11]. Edge.org. Edge Foundation. Retrieved 2009-06-13.
Sinnott-Armstrong, Walter; Liane Young; Fiery Cushman (in press). "Moral Intuitions as Heuristics" [12]. In J. Doris, G. Harman, S. Nichols, J. Prinz, W. Sinnott-Armstrong, S. Stich, The Oxford Handbook of Moral Psychology. Oxford University Press.
Dai, Xianchi; Wertenbroch, Klaus (2008). "Advances in Judgmental and Inferential Heuristics" [13]. Advances in Consumer Research 35: 233–236.

References
[1] Newell, Benjamin R.; David A. Lagnado; David R. Shanks (2007). Straight Choices: The Psychology of Decision Making. Routledge. pp. 71–74. ISBN 9781841695884.
[2] Kahneman, Daniel; Shane Frederick (2002). "Representativeness Revisited: Attribute Substitution in Intuitive Judgment". In Thomas Gilovich, Dale Griffin, Daniel Kahneman, Heuristics and Biases: The Psychology of Intuitive Judgment. Cambridge: Cambridge University Press. pp. 49–81. ISBN 9780521796798. OCLC 47364085.
[3] Shah, Anuj K.; Daniel M. Oppenheimer (March 2008). "Heuristics Made Easy: An Effort-Reduction Framework". Psychological Bulletin (American Psychological Association) 134 (2): 207–222. doi:10.1037/0033-2909.134.2.207. ISSN 1939-1455. PMID 18298269.
[4] Kahneman, Daniel (December 2003). "Maps of Bounded Rationality: Psychology for Behavioral Economics". American Economic Review (American Economic Association) 93 (5): 1449–1475. doi:10.1257/000282803322655392. ISSN 0002-8282.
[5] Kahneman, Daniel (2007). "Short Course in Thinking About Thinking" (http://www.edge.org/3rd_culture/kahneman07/kahneman07_index.html). Edge.org. Edge Foundation. Retrieved 2009-06-03.
[6] Sunstein, Cass R. (2005). "Moral heuristics". Behavioral and Brain Sciences (Cambridge University Press) 28 (4): 531–542. doi:10.1017/S0140525X05000099. ISSN 0140-525X. PMID 16209802.
[7] Sunstein, Cass R. (2009). "Some Effects of Moral Indignation on Law" (http://lawreview.vermontlaw.edu/articles/3/12 Sunstein Book 3, Vol. 33.pdf). Vermont Law Review (Vermont Law School) 33 (3): 405–434. Retrieved 2009-09-15.
[8] Monin, Benoît; Daniel M. Oppenheimer (2005). "Correlated Averages vs. Averaged Correlations: Demonstrating the Warm Glow Heuristic Beyond Aggregation" (http://web.princeton.edu/sites/opplab/papers/Opp Demonstrating the Warm Glow Heuristic.pdf). Social Cognition 23 (3): 257–278. doi:10.1521/soco.2005.23.3.257. ISSN 0278-016X.
[9] Kahneman, Daniel; Amos Tversky (July 1973). "On the Psychology of Prediction". Psychological Review (American Psychological Association) 80 (4): 237–51. doi:10.1037/h0034747. ISSN 0033-295X.
[10] http://nobelprize.org/nobel_prizes/economics/laureates/2002/kahneman-lecture.html
[11] http://www.edge.org/3rd_culture/kahneman07/kahneman07_index.html
[12] http://www.mit.edu/~lyoung/Site/Publications_files/Sinnott-Armstrong,%20Young,%20Cushman.doc
[13] http://home.uchicago.edu/~xdai/ACR07.pdf


Bounded rationality
Bounded rationality is the idea that in decision making, rationality of individuals is limited by the information they have, the cognitive limitations of their minds, and the finite amount of time they have to make decisions. It was proposed by Herbert Simon as an alternative basis for the mathematical modeling of decision making, as used in economics and related disciplines; it complements rationality as optimization, which views decision making as a fully rational process of finding an optimal choice given the information available.[1] Another way to look at bounded rationality is that, because decision-makers lack the ability and resources to arrive at the optimal solution, they instead apply their rationality only after having greatly simplified the choices available. Thus the decision-maker is a satisficer, one seeking a satisfactory solution rather than the optimal one.[2] Simon used the analogy of a pair of scissors, where one blade is the "cognitive limitations" of actual humans and the other the "structures of the environment"; minds with limited cognitive resources can thus be successful by exploiting pre-existing structure and regularity in the environment.[1] Some models of human behavior in the social sciences assume that humans can be reasonably approximated or described as "rational" entities (see for example rational choice theory). Many economics models assume that people are on average rational, and can in large enough quantities be approximated to act according to their preferences. The concept of bounded rationality revises this assumption to account for the fact that perfectly rational decisions are often not feasible in practice due to the finite computational resources available for making them.

Models of bounded rationality


The term is thought to have been coined by Herbert Simon. In Models of Man, Simon points out that most people are only partly rational, and are emotional or irrational in the remaining part of their actions. In another work, he states that "boundedly rational agents experience limits in formulating and solving complex problems and in processing (receiving, storing, retrieving, transmitting) information" (Williamson, p. 553, citing Simon). Simon describes a number of dimensions along which "classical" models of rationality can be made somewhat more realistic, while remaining within the vein of fairly rigorous formalization. These include:

limiting what sorts of utility functions there might be;
recognizing the costs of gathering and processing information;
allowing for a "vector" or "multi-valued" utility function.

Simon suggests that economic agents employ heuristics to make decisions rather than a strict, rigid rule of optimization (see the satisficing sketch below). They do this because of the complexity of the situation and their inability to process and compute the expected utility of every alternative action; deliberation costs might be high, and other concurrent economic activities also require decisions.
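A minimal Python sketch of the contrast, with the options, utility function, and aspiration level all invented for illustration: the satisficer stops at the first "good enough" option, while the fully rational optimizer pays the cost of evaluating everything.

def satisfice(options, utility, aspiration):
    """Simon's satisficer: return the first option meeting the aspiration level."""
    for option in options:
        if utility(option) >= aspiration:
            return option    # good enough; stop searching here
    return None              # nothing met the aspiration; lower it and retry

def optimize(options, utility):
    """Fully rational benchmark: evaluate every option, return the best."""
    return max(options, key=utility)

# Choosing a price: cheaper is better, and anything under $10 is acceptable.
prices = [12.0, 9.5, 8.0, 11.0, 7.5]
print(satisfice(prices, lambda p: -p, aspiration=-10.0))  # -> 9.5 (first acceptable)
print(optimize(prices, lambda p: -p))                     # -> 7.5 (global optimum)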


Daniel Kahneman proposes bounded rationality as a model to overcome some of the limitations of the rational-agent models in the economic literature.

As decision makers have to make decisions about how and when to decide, Ariel Rubinstein proposed to model bounded rationality by explicitly specifying decision-making procedures.[3] This puts the study of decision procedures on the research agenda.

Gerd Gigerenzer argues that most decision theorists who have discussed bounded rationality have not really followed Simon's ideas about it. Rather, they have either considered how people's decisions might be made sub-optimal by the limitations of human rationality, or have constructed elaborate optimising models of how people might cope with their inability to optimize. Gigerenzer instead proposes to examine simple alternatives to a full rationality analysis as a mechanism for decision making, and he and his colleagues have shown that such simple heuristics frequently lead to better decisions than the theoretically optimal procedure.


From a computational point of view, decision procedures can be encoded in algorithms and heuristics. Edward Tsang argues that the effective rationality of an agent is determined by its computational intelligence. Everything else being equal, an agent that has better algorithms and heuristics could make "more rational" (more optimal) decisions than one that has poorer heuristics and algorithms.

Notes
[1] Gigerenzer, Gerd; Selten, Reinhard (2002). Bounded Rationality: The Adaptive Toolbox (http://books.google.com/?id=dVMq5UoYS3YC). MIT Press. ISBN 0262571641.
[2] "Bounded rationality: Definition from Answers.com" (http://www.answers.com/topic/bounded-rationality). Answers Corporation. Retrieved 2009-04-12.
[3] http://arielrubinstein.tau.ac.il/book-br.html

References
Elster, Jon (1983). Sour Grapes: Studies in the Subversion of Rationality. Cambridge, UK: Cambridge University Press. ISBN 052125230X.
Gigerenzer, Gerd; Selten, Reinhard (2002). Bounded Rationality. Cambridge: MIT Press. ISBN 0-262-57164-1.
Hayek, F.A. (1948). Individualism and Economic Order.
Kahneman, Daniel (2003). "Maps of bounded rationality: psychology for behavioral economics". The American Economic Review 93 (5): 1449–75. doi:10.1257/000282803322655392.
March, James G. (1994). A Primer on Decision Making: How Decisions Happen. New York: The Free Press. ISBN 0029200350.
Rubinstein, Ariel (1998). Modeling Bounded Rationality. MIT Press. ISBN 0585053472.
Simon, Herbert (1957). "A Behavioral Model of Rational Choice", in Models of Man, Social and Rational: Mathematical Essays on Rational Human Behavior in a Social Setting. New York: Wiley.
Simon, Herbert (1990). "A mechanism for social selection and successful altruism". Science 250 (4988): 1665–8. doi:10.1126/science.2270480. PMID 2270480.
Simon, Herbert (1991). "Bounded Rationality and Organizational Learning". Organization Science 2 (1): 125–134. doi:10.1287/orsc.2.1.125.
Tisdell, Clem (1996). Bounded Rationality and Economic Evolution: A Contribution to Decision Making, Economics, and Management. Cheltenham, UK: Brookfield. ISBN 1858983525.

Tsang, E.P.K. (2008). "Computational intelligence determines effective rationality". International Journal on Automation and Control 5 (1): 63–6. doi:10.1007/s11633-008-0063-6.
Williamson, Oliver E. (1981). "The economics of organization: the transaction cost approach". American Journal of Sociology 87 (3): 548–577 (http://www.polisci.ucsd.edu/gcox/06 Ollie.pdf).


External links
Mapping Bounded Rationality by Daniel Kahneman (http://choo.fis.utoronto.ca/FIS/courses/lis2149/kahneman.NobelPrize.pdf)

Cognitive bias
A cognitive bias is a pattern of deviation in judgment that occurs in particular situations. Implicit in the concept of a "pattern of deviation" is a standard of comparison; this may be the judgment of people outside those particular situations, or may be a set of independently verifiable facts. Cognitive biases are instances of evolved mental behavior. Some are presumably adaptive, for example, because they lead to more effective actions in given contexts or enable faster decisions when faster decisions are of greater value. Others presumably result from a lack of appropriate mental mechanisms, or from the misapplication of a mechanism that is adaptive under different circumstances. Cognitive bias is a general term that is used to describe many observer effects in the human mind, some of which can lead to perceptual distortion, inaccurate judgment, or illogical interpretation.[1] It is a phenomenon studied in cognitive science and social psychology.

Overview
Bias arises from various processes that are sometimes difficult to distinguish. These include information-processing shortcuts (heuristics), motivational factors and social influence.

The notion of cognitive biases was introduced by Amos Tversky and Daniel Kahneman in 1972[2] and grew out of their experience of people's innumeracy, or inability to reason intuitively with the greater orders of magnitude. They and their colleagues demonstrated several replicable ways in which human judgments and decisions differ from rational choice theory. They explained these differences in terms of heuristics: rules which are simple for the brain to compute but which introduce systematic errors.[2] An example is the availability heuristic, in which the ease with which something comes to mind is taken to indicate how often (or how recently) it has been encountered.


These experiments grew into the heuristics and biases research program, which spread beyond academic psychology into other disciplines including medicine and political science.[3] It was a major factor in the emergence of behavioral economics, earning Kahneman a Nobel Prize in 2002.[4] Tversky and Kahneman developed prospect theory as a more realistic alternative to rational choice theory. Critics of Kahneman and Tversky, such as Gerd Gigerenzer, argue that heuristics should not lead us to conceive of human thinking as riddled with irrational cognitive biases, but rather to conceive of rationality as an adaptive tool that is not identical to the rules of formal logic or the probability calculus.[5]



Types of cognitive biases


Biases can be distinguished on a number of dimensions. For example, there are biases specific to groups (such as the risky shift) as well as biases at the individual level.

Some biases affect decision-making, where the desirability of options has to be considered (e.g., the sunk cost fallacy). Others, such as illusory correlation, affect judgment of how likely something is, or of whether one thing is the cause of another. A distinctive class of biases affect memory,[6] such as consistency bias (remembering one's past attitudes and behavior as more similar to one's present attitudes).

Some biases reflect a subject's motivation,[7] for example, the desire for a positive self-image leading to egocentric bias[8] and the avoidance of unpleasant cognitive dissonance. Other biases are due to the particular way the brain perceives, forms memories and makes judgments. This distinction is sometimes described as "hot cognition" versus "cold cognition", as motivated reasoning can involve a state of arousal. Among the "cold" biases, some are due to ignoring relevant information (e.g., neglect of probability), whereas some involve a decision or judgment being affected by irrelevant information (for example the framing effect, where the same problem receives different responses depending on how it is described) or giving excessive weight to an unimportant but salient feature of the problem (e.g., anchoring).

The fact that some biases reflect motivation, and in particular the motivation to have positive attitudes to oneself,[8] accounts for the fact that many biases are self-serving or self-directed (e.g., illusion of asymmetric insight, self-serving bias, projection bias). There are also biases in how subjects evaluate in-groups or out-groups: evaluating in-groups as more diverse and "better" in many respects, even when those groups are arbitrarily defined (ingroup bias, outgroup homogeneity bias).

Some cognitive biases belong to the subgroup of attentional biases, which refer to the paying of increased attention to certain stimuli. It has been shown, for example, that people addicted to alcohol and other drugs pay more attention to drug-related stimuli. Common psychological tests to measure those biases are the Stroop task[9] [10] and the dot probe task.

The following is a list of the more commonly studied cognitive biases:
Framing – using a too-narrow approach and description of the situation or issue.
Hindsight bias – sometimes called the "I-knew-it-all-along" effect, the inclination to see past events as having been predictable.
Fundamental attribution error – the tendency for people to over-emphasize personality-based explanations for behaviors observed in others while under-emphasizing the role and power of situational influences on the same behavior.
Confirmation bias – the tendency to search for or interpret information in a way that confirms one's preconceptions; this is related to the concept of cognitive dissonance.
Self-serving bias – the tendency to claim more responsibility for successes than failures. It may also manifest itself as a tendency for people to evaluate ambiguous information in a way beneficial to their interests.
Belief bias – when one's evaluation of the logical strength of an argument is biased by one's belief in the truth or falsity of the conclusion.

Practical significance
Many social institutions rely on individuals to make rational judgments. A fair jury trial, for example, requires that the jury ignore irrelevant features of the case, weigh the relevant features appropriately, consider different possibilities open-mindedly and resist fallacies such as appeal to emotion. The various biases demonstrated in these psychological experiments suggest that people will frequently fail to do all these things.[11] However, they fail in systematic, directional ways that are predictable.[12]

Cognitive biases are also related to the persistence of superstition and to large social issues such as prejudice, and they work as a hindrance to public acceptance of non-intuitive scientific knowledge.[13]


Criticism
In the heuristics and biases literature it can appear almost impossible to make an accurate and unbiased decision, because the "rational" decision is usually sandwiched between two contradictory biases.[14] For example, overestimating one's abilities can be attributed to the Dunning–Kruger effect and underestimating them to the false consensus effect. If a person estimates how far he can throw a flying disc and then throws it, the estimate must match the outcome exactly, or else he can be said to have demonstrated biased judgment in one direction or the other.

References
[1] Kahneman, D.; Tversky, A. (1972). "Subjective probability: A judgment of representativeness". Cognitive Psychology 3 (3): 430–454. doi:10.1016/0010-0285(72)90016-3.
[2] Kahneman, Daniel; Shane Frederick (2002). "Representativeness Revisited: Attribute Substitution in Intuitive Judgment". In Thomas Gilovich, Dale Griffin, Daniel Kahneman. Heuristics and Biases: The Psychology of Intuitive Judgment. Cambridge: Cambridge University Press. pp. 51–52. ISBN 9780521796798.
[3] Gilovich, Thomas; Dale Griffin (2002). "Heuristics and Biases: Then and Now". In Thomas Gilovich, Dale Griffin, Daniel Kahneman. Heuristics and Biases: The Psychology of Intuitive Judgment. Cambridge: Cambridge University Press. pp. 1–4. ISBN 9780521796798.
[4] Nobelprize.org (http://nobelprize.org/nobel_prizes/economics/laureates/2002/kahneman-lecture.html)
[5] Gigerenzer, G. (2006). "Bounded and Rational". In Stainton, R. J. Contemporary Debates in Cognitive Science. Blackwell. p. 129. ISBN 1405113049.
[6] Schacter, D.L. (1999). "The Seven Sins of Memory: Insights From Psychology and Cognitive Neuroscience". American Psychologist 54 (3): 182–203. doi:10.1037/0003-066X.54.3.182. PMID 10199218.
[7] Kunda, Z. (1990). "The Case for Motivated Reasoning". Psychological Bulletin 108 (3): 480–498. doi:10.1037/0033-2909.108.3.480. PMID 2270237.
[8] Hoorens, V. (1993). "Self-enhancement and Superiority Biases in Social Comparison". In Stroebe, W. and Hewstone, Miles. European Review of Social Psychology 4. Wiley.
[9] Jensen AR, Rohwer WD (1966). "The Stroop color-word test: a review". Acta Psychologica 25 (1): 36–93. doi:10.1016/0001-6918(66)90004-7. PMID 5328883.
[10] MacLeod CM (March 1991). "Half a century of research on the Stroop effect: an integrative review" (http://content.apa.org/journals/bul/109/2/163). Psychological Bulletin 109 (2): 163–203. doi:10.1037/0033-2909.109.2.163. PMID 2034749.
[11] Sutherland, Stuart (2007). Irrationality: The Enemy Within. Second Edition (First Edition 1994). Pinter & Martin. ISBN 978-1-905177-07-3.
[12] Ariely, Dan (2008). Predictably Irrational: The Hidden Forces That Shape Our Decisions. HarperCollins. p. 304. ISBN 9780061353239.
[13] Günter Radden, H. Cuyckens (2003). Motivation in Language: Studies in Honor of Günter Radden (http://books.google.com/books?id=qzhJ3KpLpQUC&pg=PA275&dq=essentialism+definition&lr=&cd=3#v=onepage&q=essentialism definition&f=false). John Benjamins. p. 275. ISBN 9781588114266.
[14] Funder, David C.; Joachim I. Krueger (June 2004). "Towards a balanced social psychology: Causes, consequences, and cures for the problem-seeking approach to social behavior and cognition" (http://web.mac.com/kstanovich/Site/Research_on_Reasoning_files/SocialBBS04.pdf). Behavioral and Brain Sciences 27: 313–376. PMID 15736870.

Further reading
Eiser, J.R. and Joop van der Pligt (1988). Attitudes and Decisions. London: Routledge. ISBN 978-0-415-01112-9.
Fine, Cordelia (2006). A Mind of its Own: How Your Brain Distorts and Deceives. Cambridge, UK: Icon Books. ISBN 1-84046-678-2.
Gilovich, Thomas (1993). How We Know What Isn't So: The Fallibility of Human Reason in Everyday Life. New York: The Free Press. ISBN 0-02-911706-2.
Haselton, M.G., Nettle, D. & Andrews, P.W. (2005). "The evolution of cognitive bias". In D.M. Buss (Ed.), Handbook of Evolutionary Psychology (pp. 724–746). Hoboken: Wiley. Full text (http://www.sscnet.ucla.edu/comm/haselton/webdocs/handbookevpsych.pdf)
Heuer, Richards J. Jr. (1999). Psychology of Intelligence Analysis. Central Intelligence Agency. http://www.au.af.mil/au/awc/awcgate/psych-intel/art5.html

Kahneman D., Slovic P., and Tversky, A. (Eds.) (1982). Judgment Under Uncertainty: Heuristics and Biases. New York: Cambridge University Press. ISBN 978-0-521-28414-1.
Kida, Thomas (2006). Don't Believe Everything You Think: The 6 Basic Mistakes We Make in Thinking. New York: Prometheus. ISBN 978-1-59102-408-8.
Nisbett, R., and Ross, L. (1980). Human Inference: Strategies and Shortcomings of Human Judgement. Englewood Cliffs, NJ: Prentice-Hall. ISBN 978-0-13-445130-5.
Piatelli-Palmarini, Massimo (1994). Inevitable Illusions: How Mistakes of Reason Rule Our Minds. New York: John Wiley & Sons. ISBN 0-471-15962-X.
Stanovich, Keith (2009). What Intelligence Tests Miss: The Psychology of Rational Thought. New Haven (CT): Yale University Press. ISBN 978-0-300-12385-2. Lay summary (http://web.mac.com/kstanovich/iWeb/Site/YUP_Reviews_files/TICS_review.pdf) (21 November 2010).
Sutherland, Stuart (2007). Irrationality: The Enemy Within. Second Edition (First Edition 1994). Pinter & Martin. ISBN 978-1-905177-07-3.
Tavris, Carol and Elliot Aronson (2007). Mistakes Were Made (But Not by Me): Why We Justify Foolish Beliefs, Bad Decisions and Hurtful Acts. Orlando, Florida: Harcourt Books. ISBN 978-0-15-101098-1.


External links
The Roots of Consciousness: To Err Is Human (http://www.williamjames.com/Science/ERR.htm)
Cognitive bias in the financial arena (http://www.cxoadvisory.com/gurus/Fisher/article/)
A Visual Study Guide to Cognitive Biases (http://www.scribd.com/doc/30548590/Cognitive-Biases-A-Visual-Study-Guide)

List of cognitive biases


A cognitive bias is a pattern of poor judgment, often triggered by a particular situation. Identifying "poor judgment," or more precisely, a "deviation in judgment," requires a standard for comparison, i.e. "good judgment." In scientific investigations of cognitive bias, the source of "good judgment" is that of people outside the situation hypothesized to cause the poor judgment, or, if possible, a set of independently verifiable facts. The existence of most of the particular cognitive biases listed below has been verified empirically in psychology experiments.

Cognitive biases, like many behaviors, are influenced by evolution and natural selection pressure. Some are presumably adaptive and beneficial, for example, because they lead to more effective actions in given contexts or enable faster decisions, when faster decisions are of greater value for reproductive success and survival. Others presumably result from a lack of appropriate mental mechanisms, i.e. a general fault in human brain structure, or from the misapplication of a mechanism that is adaptive (beneficial) under different circumstances.

Cognitive bias is a general term that is used to describe many distortions in the human mind that are difficult to eliminate and that lead to perceptual distortion, inaccurate judgment, or illogical interpretation.[1]



Decision-making and behavioral biases


Many of these biases are studied for how they affect belief formation, business decisions, and scientific research.

Anchoring – the common human tendency to rely too heavily, or "anchor," on one trait or piece of information when making decisions.
Attentional bias – implicit cognitive bias defined as the tendency of emotionally dominant stimuli in one's environment to preferentially draw and hold attention.
Backfire effect – evidence disconfirming our beliefs only strengthens them.
Bandwagon effect – the tendency to do (or believe) things because many other people do (or believe) the same. Related to groupthink and herd behavior.
Bias blind spot – the tendency to see oneself as less biased than other people.[2]
Choice-supportive bias – the tendency to remember one's choices as better than they actually were.[3]
Confirmation bias – the tendency to search for or interpret information in a way that confirms one's preconceptions.[4]
Congruence bias – the tendency to test hypotheses exclusively through direct testing, in contrast to tests of possible alternative hypotheses.
Contrast effect – the enhancement or diminishing of a weight or other measurement when compared with a recently observed contrasting object.[5]
Denomination effect – the tendency to spend more money when it is denominated in small amounts (e.g. coins) rather than large amounts (e.g. bills).[6]
Distinction bias – the tendency to view two options as more dissimilar when evaluating them simultaneously than when evaluating them separately.[7]
Endowment effect – "the fact that people often demand much more to give up an object than they would be willing to pay to acquire it."[8]
Experimenter's or expectation bias – the tendency for experimenters to believe, certify, and publish data that agree with their expectations for the outcome of an experiment, and to disbelieve, discard, or downgrade the corresponding weightings for data that appear to conflict with those expectations.[9]
Focusing effect – the tendency to place too much importance on one aspect of an event; causes error in accurately predicting the utility of a future outcome.[10]
Framing effect – drawing different conclusions from the same information, depending on how that information is presented.
Hostile media effect – the tendency to see a media report as being biased due to one's own strong partisan views.
Hyperbolic discounting – the tendency for people to have a stronger preference for more immediate payoffs relative to later payoffs, where the tendency increases the closer to the present both payoffs are.[11]
Illusion of control – the tendency to overestimate one's degree of influence over other external events.[12]
Impact bias – the tendency to overestimate the length or the intensity of the impact of future feeling states.[13]
Information bias – the tendency to seek information even when it cannot affect action.[14]
Irrational escalation – the phenomenon where people justify increased investment in a decision, based on the cumulative prior investment, despite new evidence suggesting that the decision was probably wrong.
Loss aversion – "the disutility of giving up an object is greater than the utility associated with acquiring it"[15] (see also sunk cost effects and endowment effect).
Mere exposure effect – the tendency to express undue liking for things merely because of familiarity with them.[16]
Money illusion – the tendency to concentrate on the nominal (face) value of money rather than its value in terms of purchasing power.[17]
Moral credential effect – the tendency of a track record of non-prejudice to increase subsequent prejudice.
Negativity bias – the tendency to pay more attention and give more weight to negative than positive experiences or other kinds of information.
Neglect of probability – the tendency to completely disregard probability when making a decision under uncertainty.[18]
Normalcy bias – the refusal to plan for, or react to, a disaster which has never happened before.
Omission bias – the tendency to judge harmful actions as worse, or less moral, than equally harmful omissions (inactions).[19]
Outcome bias – the tendency to judge a decision by its eventual outcome instead of based on the quality of the decision at the time it was made.
Planning fallacy – the tendency to underestimate task-completion times.[13]
Post-purchase rationalization – the tendency to persuade oneself through rational argument that a purchase was a good value.
Pseudocertainty effect – the tendency to make risk-averse choices if the expected outcome is positive, but make risk-seeking choices to avoid negative outcomes.[20]
Reactance – the urge to do the opposite of what someone wants you to do out of a need to resist a perceived attempt to constrain your freedom of choice.
Restraint bias – the tendency to overestimate one's ability to show restraint in the face of temptation.
Selective perception – the tendency for expectations to affect perception.
Semmelweis reflex – the tendency to reject new evidence that contradicts an established paradigm.[21]
Social comparison bias – the tendency, when making hiring decisions, to favour potential candidates who don't compete with one's own particular strengths.[22]
Status quo bias – the tendency to like things to stay relatively the same (see also loss aversion, endowment effect, and system justification).[23] [24]
Unit bias – the tendency to want to finish a given unit of a task or an item; has strong effects on the consumption of food in particular.[25]
Wishful thinking – the formation of beliefs and the making of decisions according to what is pleasing to imagine instead of by appeal to evidence or rationality.[26]
Zero-risk bias – preference for reducing a small risk to zero over a greater reduction in a larger risk.


Biases in probability and belief


Many of these biases are often studied for how they affect business and economic decisions and how they affect experimental research.

Ambiguity effect – the tendency to avoid options for which missing information makes the probability seem "unknown."[27]
Anchoring effect – the tendency to rely too heavily, or "anchor," on a past reference or on one trait or piece of information when making decisions (also called "insufficient adjustment").
Attentional bias – the tendency to neglect relevant data when making judgments of a correlation or association.
Availability heuristic – estimating what is more likely by what is more available in memory, which is biased toward vivid, unusual, or emotionally charged examples.
Availability cascade – a self-reinforcing process in which a collective belief gains more and more plausibility through its increasing repetition in public discourse (or "repeat something long enough and it will become true").
Base rate neglect or base rate fallacy – the tendency to base judgments on specifics, ignoring general statistical information.[28]
Belief bias – an effect where someone's evaluation of the logical strength of an argument is biased by the believability of the conclusion.[29]
Clustering illusion – the tendency to see patterns where actually none exist.
Conjunction fallacy – the tendency to assume that specific conditions are more probable than general ones.[30]
Forward bias – the tendency to create models based on past data which are validated only against that past data.
Gambler's fallacy – the tendency to think that future probabilities are altered by past events, when in reality they are unchanged. Results from an erroneous conceptualization of the law of large numbers. For example, "I've flipped heads with this coin five times consecutively, so the chance of tails coming out on the sixth flip is much greater than heads." (A short simulation of this appears after this list.)
Hindsight bias – sometimes called the "I-knew-it-all-along" effect, the tendency to see past events as being predictable[31] at the time those events happened (sometimes phrased as "hindsight is 20/20").
Illusory correlation – inaccurately perceiving a relationship between two events, either because of prejudice or selective processing of information.[32]
Just-world hypothesis – the tendency for people to want to believe that the world is fundamentally just, causing them to rationalize an otherwise inexplicable injustice as deserved by the victim(s).
Observer-expectancy effect – when a researcher expects a given result and therefore unconsciously manipulates an experiment or misinterprets data in order to find it (see also subject-expectancy effect).
Optimism bias – the tendency to be over-optimistic about the outcome of planned actions.[33]
Ostrich effect – ignoring an obvious (negative) situation.
Overconfidence effect – excessive confidence in one's own answers to questions. For example, for certain types of questions, answers that people rate as "99% certain" turn out to be wrong 40% of the time.[34] [35]
Positive outcome bias – the tendency of one to overestimate the probability of a favorable outcome coming to pass in a given situation (see also wishful thinking, optimism bias, and valence effect).
Pareidolia – a vague and random stimulus (often an image or sound) is perceived as significant, e.g., seeing images of animals or faces in clouds, the man in the moon, and hearing hidden messages on records played in reverse.
Pessimism bias – the tendency for some people, especially those suffering from depression, to overestimate the likelihood of negative things happening to them.
Primacy effect – the tendency to weigh initial events more than subsequent events.[36]
Recency effect – the tendency to weigh recent events more than earlier events (see also peak-end rule).
Disregard of regression toward the mean – the tendency to expect extreme performance to continue.
Stereotyping – expecting a member of a group to have certain characteristics without having actual information about that individual.
Subadditivity effect – the tendency to judge the probability of the whole to be less than the probabilities of the parts.
Subjective validation – perception that something is true if a subject's belief demands it to be true; also assigns perceived connections between coincidences.
Well travelled road effect – underestimation of the duration taken to traverse oft-traveled routes and overestimation of the duration taken to traverse less familiar routes.
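The gambler's fallacy item above lends itself to a direct check. The following Python sketch is purely illustrative (the number of flips is an arbitrary choice): it simulates a fair coin and estimates the probability of heads on the flip immediately after a run of five heads.

    import random

    def flips(n):
        """Simulate n fair coin flips as a list of 'H'/'T'."""
        return [random.choice("HT") for _ in range(n)]

    # Find every run of five consecutive heads and record what follows it.
    seq = flips(1_000_000)
    after_streak = [
        seq[i + 5]
        for i in range(len(seq) - 5)
        if seq[i:i + 5] == ["H"] * 5
    ]

    heads_rate = after_streak.count("H") / len(after_streak)
    print(f"P(heads | five heads in a row) ~ {heads_rate:.3f}")  # ~0.500

Despite the streak, the next flip lands heads about half the time: the past flips do not alter the probability, which is exactly what the fallacy denies.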


Social biases
Most of these biases are labeled as attributional biases.

Actor–observer bias – the tendency for explanations of other individuals' behaviors to overemphasize the influence of their personality and underemphasize the influence of their situation (see also fundamental attribution error), and for explanations of one's own behaviors to do the opposite (that is, to overemphasize the influence of our situation and underemphasize the influence of our own personality).
Dunning–Kruger effect – a twofold bias: on one hand the lack of metacognitive ability deludes people, who overrate their capabilities; on the other hand, skilled people underrate their abilities, as they assume that others have a similar understanding.[37]
Egocentric bias – occurs when people claim more responsibility for themselves for the results of a joint action than an outside observer would.
Forer effect (aka Barnum effect) – the tendency to give high accuracy ratings to descriptions of their personality that supposedly are tailored specifically for them, but are in fact vague and general enough to apply to a wide range of people; for example, horoscopes.
False consensus effect – the tendency for people to overestimate the degree to which others agree with them.[38]
Fundamental attribution error – the tendency for people to over-emphasize personality-based explanations for behaviors observed in others while under-emphasizing the role and power of situational influences on the same behavior (see also actor–observer bias, group attribution error, positivity effect, and negativity effect).[39]
Halo effect – the tendency for a person's positive or negative traits to "spill over" from one area of their personality to another in others' perceptions of them (see also physical attractiveness stereotype).[40]
Illusion of asymmetric insight – people perceive their knowledge of their peers to surpass their peers' knowledge of them.[41]
Illusion of transparency – people overestimate others' ability to know them, and they also overestimate their ability to know others.
Illusory superiority – overestimating one's desirable qualities, and underestimating undesirable qualities, relative to other people (also known as "Lake Wobegon effect," "better-than-average effect," or "superiority bias").[42]
Ingroup bias – the tendency for people to give preferential treatment to others they perceive to be members of their own groups.
Just-world phenomenon – the tendency for people to believe that the world is just and therefore people "get what they deserve."
Moral luck – the tendency for people to ascribe greater or lesser moral standing based on the outcome of an event rather than the intention.
Outgroup homogeneity bias – individuals see members of their own group as being relatively more varied than members of other groups.[43]
Projection bias – the tendency to unconsciously assume that others (or one's future selves) share one's current emotional states, thoughts and values.[44]
Self-serving bias – the tendency to claim more responsibility for successes than failures. It may also manifest itself as a tendency for people to evaluate ambiguous information in a way beneficial to their interests (see also group-serving bias).[45]
System justification – the tendency to defend and bolster the status quo. Existing social, economic, and political arrangements tend to be preferred, and alternatives disparaged, sometimes even at the expense of individual and collective self-interest (see also status quo bias).
Trait ascription bias – the tendency for people to view themselves as relatively variable in terms of personality, behavior and mood while viewing others as much more predictable.
Ultimate attribution error – similar to the fundamental attribution error, in this error a person is likely to make an internal attribution to an entire group instead of the individuals within the group.

Memory errors and biases


Further information: Memory bias

Cryptomnesia – a form of misattribution where a memory is mistaken for imagination.
Egocentric bias – recalling the past in a self-serving manner, e.g. remembering one's exam grades as being better than they were, or remembering a caught fish as being bigger than it was.
False memory – confusion of imagination with memory, or the confusion of true memories with false memories.
Hindsight bias – filtering memory of past events through present knowledge, so that those events look more predictable than they actually were; also known as the "I-knew-it-all-along effect."[31]
Positivity effect – older adults remember relatively more positive than negative things, compared with younger adults.[46]
Reminiscence bump – the effect that people tend to recall more personal events from adolescence and early adulthood than from other lifetime periods.
Rosy retrospection – the tendency to rate past events more positively than they had actually rated them when the event occurred.
Self-serving bias – perceiving oneself responsible for desirable outcomes but not responsible for undesirable ones.
Suggestibility – a form of misattribution where ideas suggested by a questioner are mistaken for memory.
Telescoping effect – the effect that recent events appear to have occurred more remotely and remote events appear to have occurred more recently.
Von Restorff effect – the tendency for an item that "stands out like a sore thumb" to be more likely to be remembered than other items.


Common theoretical causes of some cognitive biases


Bounded rationality – limits on optimization and rationality
Attribute substitution – making a complex, difficult judgment by unconsciously substituting an easier judgment[47]
Attribution theory, especially:
  Salience
Cognitive dissonance, and related:
  Impression management
  Self-perception theory
Heuristics, including:
  Availability heuristic – estimating what is more likely by what is more available in memory, which is biased toward vivid, unusual, or emotionally charged examples[32]
  Representativeness heuristic – judging probabilities on the basis of resemblance[32]
  Affect heuristic – basing a decision on an emotional reaction rather than a calculation of risks and benefits[48]
Introspection illusion
Adaptive bias
Misinterpretation or misuse of statistics

Methods for dealing with cognitive biases


Reference class forecasting was developed by Daniel Kahneman, Amos Tversky, and Bent Flyvbjerg to eliminate or reduce the impact of cognitive biases on decision making.[49] Rather than forecasting a planned action from its own "inside view," the method places the action in a statistical distribution of outcomes from a class of similar completed actions and reads the forecast off that distribution.
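A minimal Python sketch of the idea (the overrun ratios and the chosen percentile are invented for illustration; this shows the general technique, not Flyvbjerg's published calibrated uplifts): instead of trusting a project's own estimate, look up the distribution of cost overruns in a reference class of similar past projects and take a chosen percentile as the forecast.

    # Hypothetical overrun ratios (actual cost / estimated cost) from a
    # reference class of similar past projects -- invented numbers.
    reference_class = [0.9, 1.0, 1.1, 1.15, 1.3, 1.4, 1.6, 1.8, 2.1, 2.4]

    def percentile(data, q):
        """q-th percentile by nearest rank (no external libraries)."""
        ordered = sorted(data)
        rank = max(1, round(q / 100 * len(ordered)))
        return ordered[rank - 1]

    inside_view_estimate = 100.0   # the project's own (typically optimistic) estimate
    q = 80                         # accept a 20% chance of exceeding the budget

    uplift = percentile(reference_class, q)
    print(f"Inside view:  {inside_view_estimate:.0f}")
    print(f"Outside view: {inside_view_estimate * uplift:.0f} "
          f"(P{q} overrun ratio = {uplift})")

The "outside view" budget of 180 versus the inside-view 100 is what debiasing by reference class amounts to here: the correction comes from the historical distribution, not from the planner's judgment.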

Notes
[1] Kahneman, D.; Tversky, A. (1972), "Subjective probability: A judgment of representativeness", Cognitive Psychology 3: 430–454, doi:10.1016/0010-0285(72)90016-3.
[2] Pronin, Emily; Matthew B. Kugler (July 2007), "Valuing thoughts, ignoring behavior: The introspection illusion as a source of the bias blind spot", Journal of Experimental Social Psychology (Elsevier) 43 (4): 565–578, doi:10.1016/j.jesp.2006.05.011, ISSN 0022-1031.
[3] Mather, M.; Shafir, E.; Johnson, M.K. (2000), "Misrememberance of options past: Source monitoring and choice" (http://www.usc.edu/projects/matherlab/pdfs/Matheretal2000.pdf), Psychological Science 11: 132–138, doi:10.1111/1467-9280.00228.
[4] Oswald, Margit E.; Grosjean, Stefan (2004), "Confirmation Bias", in Pohl, Rüdiger F., Cognitive Illusions: A Handbook on Fallacies and Biases in Thinking, Judgement and Memory, Hove, UK: Psychology Press, pp. 79–96, ISBN 9781841693514, OCLC 55124398.
[5] Plous 1993, pp. 38–41.
[6] "Why We Spend Coins Faster Than Bills" (http://www.npr.org/templates/story/story.php?storyId=104063298) by Chana Joffe-Walt. All Things Considered, 12 May 2009.
[7] Hsee, Christopher K.; Zhang, Jiao (2004), "Distinction bias: Misprediction and mischoice due to joint evaluation", Journal of Personality and Social Psychology 86 (5): 680–695, doi:10.1037/0022-3514.86.5.680, PMID 15161394.
[8] (Kahneman, Knetsch & Thaler 1991, p. 193) Richard Thaler coined the term "endowment effect."
[9] M. Jeng, "A selected history of expectation bias in physics", American Journal of Physics 74: 578–583 (2006).



[10] Kahneman, Daniel; Alan B. Krueger, David Schkade, Norbert Schwarz, Arthur A. Stone (2006-06-30), "Would you be happier if you were richer? A focusing illusion" (http://www.morgenkommichspaeterrein.de/ressources/download/125krueger.pdf), Science 312 (5782): 1908–10, doi:10.1126/science.1129688, PMID 16809528.
[11] Hardman 2009, p. 110.
[12] Thompson, Suzanne C. (1999), "Illusions of Control: How We Overestimate Our Personal Influence", Current Directions in Psychological Science (Association for Psychological Science) 8 (6): 187–190, ISSN 0963-7214, JSTOR 20182602.
[13] Sanna, Lawrence J.; Schwarz, Norbert (2004), "Integrating Temporal Biases: The Interplay of Focal Thoughts and Accessibility Experiences", Psychological Science (American Psychological Society) 15 (7): 474–481, doi:10.1111/j.0956-7976.2004.00704.x, PMID 15200632.
[14] Baron 1994, pp. 258–259.
[15] (Kahneman, Knetsch & Thaler 1991, p. 193) Daniel Kahneman, together with Amos Tversky, coined the term "loss aversion."
[16] Bornstein, Robert F.; Crave-Lemley, Catherine (2004), "Mere exposure effect", in Pohl, Rüdiger F., Cognitive Illusions: A Handbook on Fallacies and Biases in Thinking, Judgement and Memory, Hove, UK: Psychology Press, pp. 215–234, ISBN 9781841693514, OCLC 55124398.
[17] Shafir, Eldar; Diamond, Peter; Tversky, Amos (2000), "Money Illusion", Choices, Values, and Frames, Cambridge University Press, pp. 335–355, ISBN 9780521627498.
[18] Baron 1994, p. 353.
[19] Baron 1994, p. 386.
[20] Hardman 2009, p. 137.
[21] Edwards, W. (1968). "Conservatism in human information processing". In: B. Kleinmutz (Ed.), Formal Representation of Human Judgment (pp. 17-52). New York: John Wiley and Sons.
[22] Stephen M. Garcia, Hyunjin Song and Abraham Tesser (November 2010), "Tainted recommendations: The social comparison bias", Organizational Behavior and Human Decision Processes 113 (2): 97–101, doi:10.1016/j.obhdp.2010.06.002, ISSN 0749-5978. Lay summary (http://bps-research-digest.blogspot.com/2010/10/social-comparison-bias-or-why-we.html) (2010-10-30).
[23] Kahneman, Knetsch & Thaler 1991, p. 193.
[24] Baron 1994, p. 382.
[25] "Penn Psychologists Believe 'Unit Bias' Determines The Acceptable Amount To Eat" (http://www.sciencedaily.com/releases/2005/11/051121163748.htm). ScienceDaily (Nov. 21, 2005).
[26] Baron 1994, p. 44.
[27] Baron 1994, p. 372.
[28] Baron 1994, pp. 224–228.
[29] Klauer, K. C.; J. Musch, B. Naumer (2000), "On belief bias in syllogistic reasoning", Psychological Review 107 (4): 852–884, doi:10.1037/0033-295X.107.4.852, PMID 11089409.
[30] Fisk, John E. (2004), "Conjunction fallacy", in Pohl, Rüdiger F., Cognitive Illusions: A Handbook on Fallacies and Biases in Thinking, Judgement and Memory, Hove, UK: Psychology Press, pp. 23–42, ISBN 9781841693514, OCLC 55124398.
[31] Pohl, Rüdiger F. (2004), "Hindsight Bias", in Pohl, Rüdiger F., Cognitive Illusions: A Handbook on Fallacies and Biases in Thinking, Judgement and Memory, Hove, UK: Psychology Press, pp. 363–378, ISBN 9781841693514, OCLC 55124398.
[32] Tversky, Amos; Daniel Kahneman (September 27, 1974), "Judgment under Uncertainty: Heuristics and Biases", Science (American Association for the Advancement of Science) 185 (4157): 1124–1131, doi:10.1126/science.185.4157.1124, PMID 17835457.
[33] Hardman 2009, p. 104.
[34] Hoffrage, Ulrich (2004), "Overconfidence", in Rüdiger Pohl, Cognitive Illusions: A Handbook on Fallacies and Biases in Thinking, Judgement and Memory, Psychology Press, ISBN 978-1-84169-351-4.
[35] Sutherland 2007, pp. 172–178.
[36] Baron 1994, p. 283.
[37] Kruger, Justin; David Dunning (1999). "Unskilled and Unaware of It: How Difficulties in Recognizing One's Own Incompetence Lead to Inflated Self-Assessments" (http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.64.2655&rep=rep1&type=pdf). Journal of Personality and Social Psychology 77 (6): 1121–34. doi:10.1037/0022-3514.77.6.1121. PMID 10626367.
[38] Marks, Gary; Miller, Norman (1987), "Ten years of research on the false-consensus effect: An empirical and theoretical review", Psychological Bulletin (American Psychological Association) 102 (1): 72–90, doi:10.1037/0033-2909.102.1.72.
[39] Sutherland 2007, pp. 138–139.
[40] Baron 1994, p. 275.
[41] Pronin, E.; Kruger, J.; Savitsky, K.; Ross, L. (2001), "You don't know me, but I know you: the illusion of asymmetric insight", Journal of Personality and Social Psychology 81 (4): 639–656, doi:10.1037/0022-3514.81.4.639, PMID 11642351.
[42] Hoorens, Vera (1993), "Self-enhancement and Superiority Biases in Social Comparison", European Review of Social Psychology (Psychology Press) 4 (1): 113–139, doi:10.1080/14792779343000040.
[43] Plous 2006, p. 206.
[44] Hsee, Christopher K.; Reid Hastie (2006), "Decision and experience: why don't we choose what makes us happy?", Trends in Cognitive Sciences 10 (1): 31–37, doi:10.1016/j.tics.2005.11.007, PMID 16318925.
[45] Plous 2006, p. 185.




[46] Mather, M.; Carstensen, L.L. (2005), "Aging and motivated cognition: The positivity effect in attention and memory" (http://www.usc.edu/projects/matherlab/pdfs/MatherCarstensen2005.pdf), Trends in Cognitive Sciences 9: 496–502, doi:10.1016/j.tics.2005.08.005, PMID 16154382.
[47] Kahneman, Daniel; Shane Frederick (2002), "Representativeness Revisited: Attribute Substitution in Intuitive Judgment", in Thomas Gilovich, Dale Griffin, Daniel Kahneman, Heuristics and Biases: The Psychology of Intuitive Judgment, Cambridge: Cambridge University Press, pp. 49–81, ISBN 9780521796798, OCLC 47364085.
[48] Slovic, Paul; Melissa Finucane, Ellen Peters, Donald G. MacGregor (2002), "The Affect Heuristic", in Thomas Gilovich, Dale Griffin, Daniel Kahneman, Heuristics and Biases: The Psychology of Intuitive Judgment, Cambridge University Press, pp. 397–420, ISBN 0521796792.
[49] Flyvbjerg, B. (2008), "Curbing Optimism Bias and Strategic Misrepresentation in Planning: Reference Class Forecasting in Practice", European Planning Studies 16 (1), January, pp. 3-21. (http://www.sbs.ox.ac.uk/centres/bt/Documents/Curbing Optimism Bias and Strategic Misrepresentation.pdf)


References
Baron, Jonathan (1994), Thinking and Deciding (2nd ed.), Cambridge University Press, ISBN 0-521-43732-6
Baron, Jonathan (2000), Thinking and Deciding (3rd ed.), New York: Cambridge University Press, ISBN 0-521-65030-5
Bishop, Michael A.; J.D. Trout (2004), Epistemology and the Psychology of Human Judgment, New York: Oxford University Press, ISBN 0-19-516229-3
Gilovich, Thomas (1993), How We Know What Isn't So: The Fallibility of Human Reason in Everyday Life, New York: The Free Press, ISBN 0-02-911706-2
Gilovich, Thomas; Dale Griffin, Daniel Kahneman (2002), Heuristics and Biases: The Psychology of Intuitive Judgment, Cambridge, UK: Cambridge University Press, ISBN 0-521-79679-2
Greenwald, A. (1980), "The Totalitarian Ego: Fabrication and Revision of Personal History", American Psychologist (American Psychological Association) 35 (7), ISSN 0003-066X
Hardman, David (2009), Judgment and Decision Making: Psychological Perspectives, Wiley-Blackwell, ISBN 9781405123983
Kahneman, Daniel; Paul Slovic, Amos Tversky (1982), Judgment under Uncertainty: Heuristics and Biases, Cambridge, UK: Cambridge University Press, ISBN 0-521-28414-7
Kahneman, Daniel; Knetsch, Jack L.; Thaler, Richard H. (1991), "Anomalies: The Endowment Effect, Loss Aversion, and Status Quo Bias", The Journal of Economic Perspectives (American Economic Association) 5 (1): 193–206
Plous, Scott (1993), The Psychology of Judgment and Decision Making, New York: McGraw-Hill, ISBN 0-07-050477-6
Schacter, Daniel L. (1999), "The Seven Sins of Memory: Insights From Psychology and Cognitive Neuroscience", American Psychologist (American Psychological Association) 54 (3): 182–203, doi:10.1037/0003-066X.54.3.182, ISSN 0003-066X, PMID 10199218
Sutherland, Stuart (2007), Irrationality, Pinter & Martin, ISBN 978-1-905177-07-3
Tetlock, Philip E. (2005), Expert Political Judgment: How Good Is It? How Can We Know?, Princeton: Princeton University Press, ISBN 978-0-691-12302-8
Virine, L.; M. Trumper (2007), Project Decisions: The Art and Science, Vienna, VA: Management Concepts, ISBN 978-1567262179



Dysrationalia
Dysrationalia is defined as the inability to think and behave rationally despite adequate intelligence.[1] The concept of dysrationalia was first proposed by psychologist Keith Stanovich in the early 1990s. Stanovich classifies dysrationalia as a learning disability and characterizes it as a difficulty in belief formation, in assessing belief consistency, or in the determination of action to achieve one's goals.[2]

This concept has not gone unchallenged, however. Special education researcher Kenneth Kavale notes that dysrationalia may be more aptly categorized as a thinking disorder rather than a learning disability, because it does not have a direct impact upon academic performance.[3] Further, psychologist Robert Sternberg argues that the construct of dysrationalia needs to be better conceptualized: it lacks a real theory (explaining why people are dysrational and how they become this way) and operationalization (how dysrationalia could be measured).[4] Sternberg also notes that the concept has the potential for misuse, as one may label another as dysrational simply because he or she does not agree with the other person's view.

Stanovich has replied to both Kavale[5] and Sternberg.[6] He has elaborated on the dysrationalia concept in a later book.[7] Sternberg has edited a book in which the dysrationalia concept is extensively discussed.[8] In a recent volume, Stanovich has provided the detailed conceptualization that Sternberg called for in his earlier critique.[9] In that book, Stanovich shows that variation in rational thinking skills is surprisingly independent of intelligence. One implication of this finding is that dysrationalia should not be rare.

Notes
[1] Stanovich, K.E. (1993). "Dysrationalia: A new specific learning disability". Journal of Learning Disabilities, 26(8), 501–515.
[2] Stanovich, K.E. (1994). "An exchange: Reconceptualizing intelligence: Dysrationalia as an intuition pump". Educational Researcher, 23(4), 11–22.
[3] Kavale, K.A. (1993). "How many learning disabilities are there? A commentary on Stanovich's 'Dysrationalia: A new specific learning disability'". Journal of Learning Disabilities, 26(8), 520–523.
[4] Sternberg, R.J. (1994). "What if the construct of dysrationalia were an example of itself?" Educational Researcher, 23(4), 22–23, 27.
[5] Stanovich, K.E. (1993). "It's practical to be rational". Journal of Learning Disabilities, 26, 524–532.
[6] Stanovich, K.E. (1994). "The evolving concept of rationality: A rejoinder to Sternberg". Educational Researcher, 23(7), p. 33.
[7] Stanovich, K.E. (2004). The Robot's Rebellion: Finding Meaning in the Age of Darwin. Chicago: University of Chicago Press.
[8] Sternberg, R.J. (Ed.). (2002). Why Smart People Can Be So Stupid. New Haven, CT: Yale University Press.
[9] Stanovich, K.E. (2009). What Intelligence Tests Miss: The Psychology of Rational Thought. New Haven, CT: Yale University Press.

External links
Article in Scientific American by Keith Stanovich (http://web.mac.com/kstanovich/iWeb/Site/Research on Reasoning_files/Stanovich_IQ-Tests-Miss_SAM09.pdf)



Rational emotive behavior therapy


Classification: Intervention. MeSH: D011617[1]

Rational emotive behavior therapy (REBT), previously called rational therapy and rational emotive therapy, is a comprehensive, active-directive, philosophically and empirically based psychotherapy which focuses on resolving emotional and behavioral problems and disturbances and enabling people to lead happier and more fulfilling lives. REBT was created and developed by the American psychotherapist and psychologist Albert Ellis who was inspired by many of the teachings of Asian, Greek, Roman and modern philosophers. REBT is one form of cognitive behavior therapy (CBT) and was first expounded by Ellis in the mid-1950s; development continued until his death in 2007.

History
Rational Emotive Behavior Therapy (REBT) is both a psychotherapeutic system of theory and practices and a school of thought established by Albert Ellis. Originally called rational therapy, its appellation was revised to rational emotive therapy in 1959, then to its current appellation in 1992. REBT was one of the first of the cognitive behavior therapies, as it was predicated on articles Ellis first published in 1956,[2] nearly a decade before Aaron Beck first set forth his cognitive therapy.[3]

Precursors of certain fundamental aspects of REBT have been identified in various ancient philosophical traditions, particularly Stoicism.[4] For example, Ellis' first major publication on rational therapy describes the philosophical basis of REBT as the principle that a person is rarely affected emotionally by outside things but rather "by his perceptions, attitudes, or internalized sentences about outside things and events." He adds: "This principle, which I have inducted from many psychotherapeutic sessions with scores of patients during the last several years, was originally discovered and stated by the ancient Stoic philosophers, especially Zeno of Citium (the founder of the school), Chrysippus [his most influential disciple], Panaetius of Rhodes (who introduced Stoicism into Rome), Cicero, Seneca, Epictetus, and Marcus Aurelius. The truths of Stoicism were perhaps best set forth by Epictetus, who in the first century A.D. wrote in the Enchiridion: 'Men are disturbed not by things, but by the views which they take of them.' Shakespeare, many centuries later, rephrased this thought in Hamlet: 'There's nothing good or bad but thinking makes it so.'"[5]

Theoretical assumptions
One of the fundamental premises of REBT is that humans, in most cases, do not merely get upset by unfortunate adversities, but also by how they construct their views of reality through their language, evaluative beliefs, meanings and philosophies about the world, themselves and others.[6] This concept has been attributed as far back as the Greek philosopher Epictetus, who is often cited as utilizing similar ideas in antiquity.[7]

In REBT, clients usually learn and begin to apply this premise by learning the A-B-C model of psychological disturbance and change. The A-B-C model states that it normally is not merely an A, adversity (or activating event), that contributes to disturbed and dysfunctional emotional and behavioral Cs, consequences, but also what people B, believe about the A, adversity. A, adversity can be either an external situation or a thought or other kind of internal event, and it can refer to an event in the past, present, or future.[8] For instance, A might be being passed over for a promotion; B, the belief "I absolutely must not fail, and failing makes me worthless"; and C, the resulting depression and withdrawal.
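The A-B-C(-D) structure is regular enough to write down as a data record. The following Python sketch is purely illustrative; the field names and the sample entry are assumptions, not REBT terminology beyond the letters themselves. It mirrors the kind of thought record a client might keep when logging an adversity, the belief about it, the consequence, and a dispute.

    from dataclasses import dataclass

    @dataclass
    class ABCRecord:
        """One entry in an A-B-C(-D) thought record (illustrative only)."""
        adversity: str      # A: the activating event
        belief: str         # B: the evaluative belief about A
        consequence: str    # C: the emotional/behavioral consequence
        dispute: str = ""   # D: the challenge to an irrational belief

    entry = ABCRecord(
        adversity="Passed over for a promotion",
        belief="I absolutely must succeed; failing makes me worthless",
        consequence="Depression, withdrawal from colleagues",
        dispute="Where is the evidence that one setback makes me worthless?",
    )
    print(entry.dispute)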

The Bs, beliefs, that are most important in the A-B-C model are explicit and implicit philosophical meanings and assumptions about events, personal desires, and preferences. The Bs that are most significant are highly evaluative and consist of interrelated and integrated cognitive, emotional and behavioral aspects and dimensions. According to REBT, if a person's evaluative B about the A (activating event) is rigid, absolutistic and dysfunctional, the C (the emotional and behavioral consequence) is likely to be self-defeating and destructive. Alternatively, if a person's evaluative B is preferential, flexible and constructive, the C is likely to be self-helping and constructive.

Through REBT, by understanding the role of their mediating, evaluative and philosophically based illogical, unrealistic and self-defeating meanings, interpretations and assumptions in upset, people often can learn to identify them, begin to D, dispute, refute, challenge and question them, distinguish them from healthy constructs, and subscribe to more constructive and self-helping constructs.[9]

The REBT framework assumes that humans have both innate rational (meaning self- and social-helping and constructive) and irrational (meaning self- and social-defeating and unhelpful) tendencies and leanings. REBT claims that people to a large degree consciously and unconsciously construct emotional difficulties such as self-blame, self-pity, clinical anger, hurt, guilt, shame, depression and anxiety, and behaviors and behavior tendencies like procrastination, over-compulsiveness, avoidance, addiction and withdrawal, by means of their irrational and self-defeating thinking, emoting and behaving.[10]

REBT is then applied as an educational process in which the therapist often active-directively teaches the client how to identify irrational and self-defeating beliefs and philosophies which in nature are rigid, extreme, unrealistic, illogical and absolutist, and then to forcefully and actively question and dispute them and replace them with more rational and self-helping ones. By using different cognitive, emotive and behavioral methods and activities, the client, together with help from the therapist and in homework exercises, can gain a more rational, self-helping and constructive way of thinking, emoting and behaving.

One of the main objectives in REBT is to show the client that whenever unpleasant and unfortunate activating events occur in people's lives, they have a choice of making themselves feel healthily and self-helpingly sorry, disappointed, frustrated, and annoyed, or making themselves feel unhealthily and self-defeatingly horrified, terrified, panicked, depressed, self-hating, and self-pitying.[11] By attaining and ingraining a more rational and self-constructive philosophy of themselves, others and the world, people often are more likely to behave and emote in more life-serving and adaptive ways.

Albert Ellis[11] posits three major insights of REBT:

Insight 1 – People seeing and accepting the reality that their emotional disturbances at point C only partially stem from the activating events or adversities at point A that precede C.
Although A contributes to C, and although disturbed Cs (such as feelings of panic and depression) are much more likely to follow strong negative As (such as being assaulted or raped) than they are to follow weak As (such as being disliked by a stranger), the main or more direct cores of extreme and dysfunctional emotional disturbances (Cs) are people's irrational beliefs: the absolutistic "musts" and their accompanying inferences and attributions that people strongly believe about their undesirable activating events.

Insight 2 – No matter how, when, and why people acquire self-defeating or irrational beliefs (i.e. beliefs which are the main cause of their dysfunctional emotional-behavioral consequences), if they are disturbed in the present, they tend to keep holding these irrational beliefs and continue upsetting themselves with these thoughts. They do so not because they held them in the past, but because they still actively hold them in the present, though often unconsciously, while continuing to reaffirm their beliefs and act as if they are still valid. In their minds and hearts they still follow the core "musturbatory" philosophies they adopted or invented long ago, or ones they recently accepted or constructed.

Insight 3 – No matter how well they have achieved insight 1 and insight 2, insight alone will rarely enable people to undo their emotional disturbances. They may feel better when they know, or think they know, how they became disturbed, since insights can give the impression of being useful and curative.


But it is unlikely that they will actually get better and stay better unless they accept insights 1 and 2, and then also go on to strongly apply insight 3: there is usually no way to get better and stay better but by continual work and practice in looking for, and finding, one's core irrational beliefs; actively, energetically, and scientifically disputing them; replacing one's absolutist "musts" with flexible preferences; changing one's unhealthy feelings to healthy, self-helping emotions; and firmly acting against one's dysfunctional fears and compulsions. Only by a combined cognitive, emotive, and behavioral, as well as a quite persistent and forceful, attack on one's serious emotional problems is one likely to significantly ameliorate or remove them and keep them removed.

Regarding cognitive-affective-behavioral processes in mental functioning and dysfunctioning, originator Albert Ellis explains:[11]

"REBT assumes that human thinking, emotion, and action are not really separate or disparate processes, but that they all significantly overlap and are rarely experienced in a pure state. Much of what we call emotion is nothing more nor less than a certain kind – a biased, prejudiced, or strongly evaluative kind – of thought. But emotions and behaviors significantly influence and affect thinking, just as thinking influences emotions and behaviors. Evaluating is a fundamental characteristic of human organisms and seems to work in a kind of closed circuit with a feedback mechanism: because perception biases response, and then response tends to bias subsequent perception. Also, prior perceptions appear to bias subsequent perceptions, and prior responses appear to bias subsequent responses. What we call feelings almost always have a pronounced evaluating or appraisal element."

REBT then generally proposes that many of these self-defeating cognitive, emotive and behavioral tendencies are both innately biological and indoctrinated early in and during life, and further grow stronger as a person continually revisits, clings to and acts on them. Ellis alluded to similarities between REBT and General Semantics in explaining the role of irrational beliefs in self-defeating tendencies, citing Alfred Korzybski as a significant modern influence on this thinking.[12]

REBT differs from other clinical approaches, like psychoanalysis, in that it places little emphasis on exploring the past, but instead focuses on changing the current evaluations and philosophical thinking-emoting and behaving in relation to themselves, others and the conditions under which people live.


Psychological dysfunction
One of the main pillars of REBT is that irrational and dysfunctional ways and patterns of thinking, feeling and behaving contribute to much, though hardly all, human disturbance and emotional and behavioral self- and social defeatism. REBT generally teaches that when people turn flexible preferences, desires and wishes into grandiose, absolutistic and fatalistic dictates, this tends to contribute to disturbance and upsetness. Albert Ellis has suggested three core beliefs or philosophies through which humans tend to disturb themselves:[11]
"I absolutely MUST, under practically all conditions and at all times, perform well (or outstandingly well) and win the approval (or complete love) of significant others. If I fail in these importantand sacredrespects, that is awful and I am a bad, incompetent, unworthy person, who will probably always fail and deserves to suffer." Holding this belief when faced with adversity tends to contribute to feelings of anxiety, panic, depression, despair, and worthlessness. "Other people with whom I relate or associate, absolutely MUST, under practically all conditions and at all times, treat me nicely, considerately and fairly. Otherwise, it is terrible and they are rotten, bad, unworthy people who will always treat me badly and do not deserve a good life and should be severely punished for acting so abominably to me." Holding this belief when faced with adversity tends to contribute to feelings of anger, rage, fury, and vindictiveness. "The conditions under which I live absolutely MUST, at practically all times, be favorable, safe, hassle-free, and quickly and easily enjoyable, and if they are not that way it's awful and horrible and I can't bear it. I can't ever enjoy myself at all. My life is impossible and hardly worth living."

Holding this belief when faced with adversity tends to contribute to frustration and discomfort, intolerance, self-pity, anger, depression, and to behaviors such as procrastination, avoidance, and inaction.

REBT commonly posits that at the core of irrational beliefs there often are explicit or implicit rigid demands and commands, and that extreme derivatives like awfulizing, frustration intolerance, people-deprecation and over-generalizations are accompanied by these.[8] According to REBT, the core dysfunctional philosophies in a person's evaluative emotional and behavioral belief system are also very likely to contribute to unrealistic, arbitrary and crooked inferences and distortions in thinking.

REBT therefore first teaches that when people in an insensible and devout way overuse absolutistic, dogmatic and rigid "shoulds", "musts", and "oughts", they tend to disturb and upset themselves. Further, REBT generally posits that disturbed evaluations to a large degree occur through over-generalization, wherein people exaggerate and globalize events or traits, usually unwanted events or traits or behavior, out of context, while almost always ignoring the positive events or traits or behaviors. For example, awfulizing is partly mental magnification of the importance of an unwanted situation to a catastrophe or horror, elevating the rating of something from bad to worse than it should be, to beyond totally bad, worse than bad, to the intolerable and to a "holocaust". The same exaggeration and overgeneralizing occurs with human rating, wherein humans come to be arbitrarily and axiomatically defined by their perceived flaws or misdeeds. Frustration intolerance then occurs when a person perceives something to be too difficult, painful or tedious, and by doing so exaggerates these qualities beyond one's ability to cope with them.

Essential to REBT theory is also the concept of secondary disturbances, which people sometimes construct on top of their primary disturbance. As Ellis emphasizes:[11] "Because of their self-consciousness and their ability to think about their thinking, they can very easily disturb themselves about their disturbances and can also disturb themselves about their ineffective attempts to overcome their emotional disturbances."

Mental wellness
As would be expected, REBT argues that mental wellness and mental health to a large degree result from an adequate amount of self-helping, flexible, logico-empirical ways of thinking, emoting and behaving.[10] When a perceived undesired and stressful activating event occurs, and the individual interprets, evaluates and reacts to the situation rationally and self-helpingly, then the resulting consequence is, according to REBT, likely to be more healthy, constructive and functional. This does not mean that a relatively undisturbed person never experiences negative feelings, but REBT does aim to keep debilitating and unhealthy emotions and subsequent self-defeating behavior to a minimum. To do this, REBT generally promotes a flexible, undogmatic, self-helping and efficient belief system and a constructive life philosophy about adversities and human desires and preferences.

REBT clearly acknowledges that people, in addition to disturbing themselves, are also innately constructivists. Because they largely upset themselves with their beliefs, emotions and behaviors, they can be helped, in a multimodal manner, to dispute and question these and to develop a more workable, more self-helping set of constructs. REBT generally teaches and promotes:

That the concepts and philosophies of life of unconditional self-acceptance, other-acceptance, and life-acceptance are effective philosophies of life in achieving mental wellness and mental health.

That human beings are inherently fallible and imperfect, and that they had better accept their own and other human beings' totality and humanity, while at the same time not liking some of their behaviors and characteristics. That they are better off not measuring their entire self or their "being", and giving up the narrow, grandiose and ultimately destructive notion of giving themselves any global rating or report card. This is partly because all humans are continually evolving and are far too complex to rate accurately; all humans do both self- and social-defeating and self- and social-helping deeds, and have both beneficial and unbeneficial attributes and traits at certain times and in certain conditions. REBT holds that ideas and feelings about self-worth are largely definitional and are not empirically confirmable or falsifiable.

That people had better accept life with its hassles and difficulties, which are not always in accordance with their wants, while trying to change what they can change and living as elegantly as possible with what they cannot change.

REBT Intervention
As explained, REBT is a therapeutic system of both theory and practice; generally, one of the goals of REBT is to help clients see the ways in which they often needlessly upset themselves, teach them how to un-upset themselves, and then empower them to lead happier and more fulfilling lives.[6] The emphasis in therapy is generally on establishing a successful collaborative therapeutic working alliance based on the REBT educational model. Although REBT teaches that the therapist or counsellor had better demonstrate unconditional other-acceptance or unconditional positive regard, the therapist is not necessarily always encouraged to build a warm and caring relationship with the client. The tasks of the therapist or counsellor include understanding the client's concerns from his or her point of reference and working as a facilitator, teacher and encourager.

In traditional REBT, the client and the therapist, in a structured active-directive manner, often work through a set of target problems and establish a set of therapeutic goals. For these target problems, situational dysfunctional emotions, behaviors and beliefs are assessed in regard to the client's values and goals. After working through these problems, the client learns to generalize insights to other relevant situations. In many cases, after going through a client's different target problems, the therapist is interested in examining possible core beliefs and more deeply rooted philosophical evaluations and schemas that might account for a wider array of problematic emotions and behaviors.[8] Although REBT is much of the time used as a brief therapy, longer therapy is promoted for deeper and more complex problems.

In therapy, the first step often is that the client acknowledges the problems, accepts emotional responsibility for them, and has the willingness and determination to change. This normally requires a considerable amount of insight, but as originator Albert Ellis[11] explains: "Humans, unlike just about all the other animals on earth, create fairly sophisticated languages which not only enable them to think about their feeling, their actions, and the results they get from doing and not doing certain things, but they also are able to think about their thinking and even think about thinking about their thinking."

Through the therapeutic process, REBT employs a wide array of forceful and active, that is, multimodal and disputing, methodologies. Central to these methods and techniques is the intent to help the client challenge, dispute and question their destructive and self-defeating cognitions, emotions and behaviors. The methods and techniques incorporate cognitive-philosophic, emotive-evocative-dramatic, and behavioral approaches for disputing the client's irrational and self-defeating constructs, and help the client come up with more rational and self-constructive ones. REBT holds that understanding and insight are not enough; in order for clients to significantly change, they had better pinpoint their irrational and self-defeating constructs and work forcefully and actively at changing them to more functional and self-helping ones. REBT posits that the client must work hard to get better, and in therapy this normally includes a wide array of homework exercises in day-to-day life assigned by the therapist. The assignments may, for example, include desensitization tasks, i.e., having the client confront the very thing he or she is afraid of. By doing so, the client actively acts against the belief that is often contributing significantly to the disturbance.

Another factor contributing to the brevity of REBT is that the therapist seeks to empower the client to help himself or herself through future adversities; REBT promotes temporary solutions only if more fundamental solutions cannot be found. An ideal successful collaboration between the REBT therapist and a client results in changes to the client's philosophical way of evaluating himself or herself, others, and his or her life, which will likely yield effective results. The client then moves toward unconditional self-acceptance, other-acceptance and life-acceptance while striving to live a more self-fulfilling and happier life.
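The assessment-and-disputation sequence described above is often recorded in REBT's ABC format: an activating event (A), the beliefs held about it (B), and the emotional and behavioral consequences (C), extended with disputation (D) and an effective new belief (E). As a purely illustrative aid, the following minimal Python sketch models such a worksheet as a data structure; the class, field names, and example content are invented for illustration and do not represent any software used in REBT practice.

from dataclasses import dataclass, field

@dataclass
class ABCWorksheet:
    # Hypothetical record of one REBT target problem (illustrative only).
    activating_event: str                     # A: the adversity
    beliefs: list[str]                        # B: evaluations of A, especially rigid "musts"
    consequences: str                         # C: resulting emotions and behaviors
    disputations: list[str] = field(default_factory=list)  # D: challenges to B
    effective_belief: str = ""                # E: flexible replacement belief

# Example session record (hypothetical content)
worksheet = ABCWorksheet(
    activating_event="Failed an important exam",
    beliefs=["I absolutely MUST perform well", "Failing makes me a worthless person"],
    consequences="Anxiety, avoidance of further study",
)
# D: dispute each rigid demand on logical, empirical and pragmatic grounds
worksheet.disputations = [
    "Where is the evidence that I must always succeed?",
    "Does one failure logically make my whole self worthless?",
]
# E: a strong preference replaces the absolutistic demand
worksheet.effective_belief = (
    "I strongly prefer to pass, but failing is a setback, not proof that I am worthless."
)

The point of the sketch is only the structure: the disputation step (D) targets the beliefs (B), not the activating event (A), which mirrors REBT's emphasis that people disturb themselves largely through their evaluations rather than through the adversities themselves.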

Limitations and critique


REBT and CBT in general have a substantial and strong research base supporting both their psychotherapeutic efficiency and their theoretical underpinnings. A large number of empirical studies have found REBT to be an effective and efficient treatment for many kinds of psychopathology, conditions and problems,[11] [13] [14] [15] and a vast amount of outcome and experimental studies support the effectiveness of REBT and CBT.[16] [17] More recently, randomized clinical trials have offered a positive view of the efficacy of REBT.[18] REBT is arguably one of the most investigated theories in the field of psychotherapy, and a large amount of clinical experience and a substantial body of modern psychological research have validated and substantiated many of REBT's theoretical assumptions on personality and psychotherapy.[14] [18] [19]

Some critiques of the clinical research on REBT have come both from within and from outside the approach. For instance, originator Albert Ellis has on occasion emphasized the difficulty and complexity of measuring psychotherapeutic effectiveness, since many studies tend to measure only whether clients merely feel better after therapy, rather than whether they get better and stay better.[10] Ellis has also criticized studies for limiting their focus primarily to cognitive restructuring, as opposed to the combination of cognitive, emotive and behavioral aspects of REBT.[14] As REBT has been subject to criticism throughout its existence, especially in its early years, REBT theorists have a long history of publishing responses to and addressing those concerns. It has also been argued by Ellis and by other clinicians that REBT theory has on numerous occasions been misunderstood and misconstrued, both in research and in general.[18]

Some have criticized REBT for being harsh, formulaic and failing to address deep underlying problems.[19] REBT theorists have countered that a careful study of REBT shows it to be philosophically deep, humanistic and individualized, working collaboratively on the basis of the client's point of reference,[6] [19] and that it utilizes an integrated and interrelated methodology of cognitive, emotive-experiential and behavioral interventions.[6] [14] Others have questioned REBT's view of rationality, both radical constructivists, who claim that reason and logic are subjective properties, and those who believe that reason can be objectively determined.[19] REBT theorists have replied that REBT raises objections to clients' irrational choices and conclusions as a working hypothesis, and through collaborative efforts demonstrates the irrationality on practical, functional and socially consensual grounds.[11] [19]

In 1998, when asked what the main criticism of REBT was, Albert Ellis replied that it was the claim that it is too rational and does not deal sufficiently with emotions. He repudiated the claim by saying that REBT, on the contrary, emphasizes that thinking, feeling, and behaving are interrelated and integrated, and that it includes a vast number of both emotional and behavioral methods in addition to cognitive ones.[20] Ellis has himself, in very direct terms, criticized opposing approaches such as psychoanalysis, transpersonal psychology and abreactive psychotherapies, and has on several occasions questioned some of the doctrines in certain religious systems, spiritualism and mysticism.
Many, including REBT practitioners, have warned against dogmatizing and sacralizing REBT as a supposedly perfect psychological cure-all or panacea. Prominent REBT practitioners have promoted the importance of high-quality and programmatic research, including originator Ellis, a self-proclaimed "passionate skeptic". He has on many occasions been open to challenges, acknowledged errors and inefficiencies in his approach, and concurrently revised his theories and practices.[11] [19] More generally, with regard to cognitive-behavioral interventions, others have pointed out that since about 30-40% of people remain nonresponsive to such interventions, REBT could serve as a platform for reinvigorating empirical studies of the effectiveness of cognitive-behavioral models of psychopathology and human functioning.[18]

REBT has been developed, revised and augmented in many ways over the years as understanding, knowledge and science about psychology and psychotherapy have progressed. This applies to its theoretical concepts as well as to its practices and methodology. Inherent in REBT as an approach has been the teaching of scientific thinking, reasonableness and undogmatism, and these ways of thinking have been part of REBT's empiricism and skepticism.

Applications and interfaces


Applications and interfaces of REBT are used with a broad range of clinical problems in traditional psychotherapeutic settings such as individual, group and family therapy. It is used as a general treatment for a vast number of different conditions and psychological problems normally associated with psychotherapy. In addition, REBT is used with non-clinical problems and problems of living, through counselling, consultation and coaching settings, dealing with problems such as relationships, social skills, career changes, stress management, assertiveness training, grief, problems with aging, money, weight control, etc.

REBT also has many interfaces and applications through self-help resources, phone and internet counseling, workshops and seminars, and workplace and educational programmes. These include Rational Emotive Education (REE), where REBT is applied in educational settings; Rational Effectiveness Training, in business and work settings; and SMART Recovery (Self Management And Recovery Training), which supports those in addiction recovery; in addition to a wide variety of specialized treatment strategies and applications.

References
[1] http://www.nlm.nih.gov/cgi/mesh/2011/MB_cgi?field=uid&term=D011617
[2] Ellis, A. (1957). Rational psychotherapy and individual psychology. Journal of Individual Psychology, 13, 38-44.
[3] Beck, A. (1970). Cognitive therapy: Nature and relation to behavior therapy. Behavior Therapy, 1(2), 184-200.
[4] Robertson, D. (2010). The Philosophy of Cognitive-Behavioural Therapy: Stoicism as Rational and Cognitive Psychotherapy (http://books.google.co.uk/books?id=XsOFyJaR5vEC&lpg). London: Karnac. ISBN 978-1855757561.
[5] Ellis, Albert (1962). Reason and Emotion in Psychotherapy. p. 54.
[6] Ellis, Albert (2001). Overcoming Destructive Beliefs, Feelings, and Behaviors: New Directions for Rational Emotive Behavior Therapy. Prometheus Books.
[7] http://www.getselfhelp.co.uk/epictetus.htm
[8] Dryden, W., & Neenan, M. (2003). Essential Rational Emotive Behaviour Therapy. Wiley.
[9] Ellis, Albert (1994). Reason and Emotion In Psychotherapy, Revised and Updated. Secaucus, NJ: Carol Publishing Group.
[10] Ellis, A. (2001). Feeling better, getting better, staying better. Impact Publishers.
[11] Ellis, Albert (2003). Early theories and practices of rational emotive behavior theory and how they have been augmented and revised during the last three decades. Journal of Rational-Emotive & Cognitive-Behavior Therapy, 21(3/4).
[12] "REBT particularly follows Korzybski in this respect..." Albert Ellis in The Albert Ellis Reader: A Guide to Well-Being Using Rational Emotive Behavior Therapy, p. 306. Google Books preview (http://books.google.com/books?id=LorJYkPSQOwC&vq) retrieved August 18, 2010.
[13] Lyons, L. C., & Woods, P. J. (1991). The efficacy of rational-emotive therapy: A quantitative review of the outcome research. Clinical Psychology Review, 11, 357-369.
[14] Feltham, Colin (ed.) (1997). Which Psychotherapy?: Leading Exponents Explain Their Differences. SAGE.
[15] Cooper, M. (2008). Essential Research Findings in Counselling and Psychotherapy. Sage.
[16] Philosophy in Psychotherapy: Albert Ellis interview by Jeffrey Mishlove (http://www.intuition.org/txt/ellis.htm)
[17] Psychotherapy.net: Albert Ellis Behavioral Therapy Interview (http://www.psychotherapy.net/interview/Albert_Ellis)
[18] David, D. et al. (2005). A synopsis of rational-emotive behavior therapy: Fundamental and applied research. Journal of Rational-Emotive and Cognitive-Behavior Therapy, 23.
[19] Ellis, A., Abrams, M., & Abrams, L. (2008). Theories of Personality. Sage Press.
[20] Ask Dr. Ellis, Achieve (1996-2001). Albert Ellis Institute.

Further reading
Albert Ellis, Michael Abrams, PhD, and Lidia Abrams, PhD, Theories of Personality: Critical Perspectives; New York: Sage Press, 2008. ISBN 9781412914222 (This was his final work, published posthumously.)
Albert Ellis & Windy Dryden, The Practice of Rational Emotive Behavior Therapy (2nd ed.); Springer Publishing, 2007. ISBN 9780826122162
Windy Dryden & Michael Neenan, Getting Started with REBT; Routledge, 2005. ISBN 9781583919392
Windy Dryden, Rational Emotive Behaviour Therapy in a Nutshell (Counselling in a Nutshell); Sage Publications, 2005. ISBN 9781412907705
Windy Dryden, Fundamentals of Rational Emotive Behaviour Therapy: A Training Manual; John Wiley & Sons, 2002. ISBN 1-86156-347-7
Windy Dryden, Rational Emotive Behaviour Therapy: Theoretical Developments; Brunner-Routledge, 2003. ISBN 1-58391-272-X
Albert Ellis, Overcoming Destructive Beliefs, Feelings, and Behaviors: New Directions for Rational Emotive Behavior Therapy; Prometheus Books, 2001. ISBN 1-57392-879-8
Albert Ellis, Feeling better, getting better, staying better; Impact Publishers, 2001. ISBN 1-886230-35-8
Windy Dryden et al., A Practitioner's Guide to Rational-Emotive Therapy; Oxford University Press, 1992. ISBN 0-19-507169-7
Albert Ellis et al., A Guide to Rational Living (3rd rev. ed.); Wilshire Book Company, 1997. ISBN 0-87980-042-9
Stevan Lars Nielsen, W. Brad Johnson & Albert Ellis, Counseling and Psychotherapy With Religious Persons: A Rational Emotive Behavior Therapy Approach; Lawrence Erlbaum, 2001. ISBN 0805828788
Windy Dryden, Raymond Di Giuseppe & Michael Neenan, A Primer on Rational-Emotive Behavior Therapy (2nd ed.); Research Press, 2002. ISBN 978-0878224784
Albert Ellis & Catharine MacLaren, Rational Emotive Behavior Therapy: A Therapist's Guide (2nd ed.); Impact Publishers, 2005. ISBN 978-1886230613

External links
MeSH Rational-Emotive+Psychotherapy (http://www.nlm.nih.gov/cgi/mesh/2011/MB_cgi?mode=&term=Rational-Emotive+Psychotherapy)

General
The Albert Ellis Institute (http://www.albertellis.org)
REBT Network (http://www.rebtnetwork.org/)
The Centre for Rational Emotive Behavior Therapy (http://www.rebt-uk.org/)
Association for Rational Emotive Behaviour Therapy (http://www.arebt.org/)
UK Centre for Rational Emotive Behaviour Therapy (http://www.centresofexpertise.com/page_1218476501573.html)
Rational.org New Zealand (http://www.rational.org.nz/)
International Institute for the Advanced Studies of Psychotherapy and Applied Mental Health (http://www.psychotherapy.ro/)
Journal of Rational-Emotive and Cognitive Behaviour Therapy (http://www.springerlink.com/link.asp?id=104937)
Dr Debbie Joffe Ellis, wife of Dr Albert Ellis and REBT lecturer (http://www.debbiejoffeellis.com/)

REBT Applications
24-7 Help (http://24-7help.com/)
Stressgroup.com (http://www.stressgroup.com/home.html)
Centre for Stress Management (http://www.managingstress.com/)
Early learning programme (http://www.haveagospaghettio.com.au/)
REBT Podcasts (http://thejoveinstitute.org/podcast.html) (dead link)
REBT & Hypnotherapy Edinburgh (http://www.exclusivehypnotherapy.co.uk)

Self-serving bias
A self-serving bias occurs when people attribute their successes to internal or personal factors but attribute their failures to situational factors beyond their control. The self-serving bias can be seen in the common human tendency to take credit for success but to deny responsibility for failure.[1] It may also manifest itself as a tendency for people to evaluate ambiguous information in a way that is beneficial to their interests. Self-serving bias may be associated with the better-than-average effect, in which individuals are biased to believe that they typically perform better than the average person in areas important to their self-esteem. This effect, also called "illusory superiority", has been found when people rate their own driving skill, social sensitivity, leadership ability and many other attributes.[2] [3] [4]

Use and purpose


The term "self-serving bias" is most often used to describe a pattern of biased causal inference, in which praise or blame depend on whether success or failure was achieved. For example, a student who gets a good grade on an exam might say, "I got an A because I am intelligent and I studied hard!" whereas a student who does poorly on an exam might say, "The teacher gave me an F because he does not like me!" When someone strategically strives to facilitate external causes for their poor performance (so that they will subsequently have a means to avoid blaming themselves for failure), it may be labeled self-handicapping.[5]

Examples
Another example of self-serving bias can be found in the workplace. Victims of serious occupational accidents tend to attribute their accidents to external factors, whereas their coworkers and management tend to attribute the accidents to the victims' own actions.[6]

Several reasons have been proposed to explain the occurrence of self-serving bias. One class of explanation is motivational: people are motivated to protect their self-esteem, and so create causal explanations that serve to make them feel better. Another class of explanation focuses on strategic impression management: although people may not believe the content of a self-serving utterance, they may nevertheless offer it to others in order to create a favorable impression. Yet another class of explanation focuses on basic mechanisms of memory: people might make success more available in memory for internal reasons rather than for external reasons.[1] [3]

Self-serving bias may result in bargaining impasse if each side interprets the facts of the dispute in its own favor. In this situation one or more of the parties may refuse to continue negotiating, believing the other side is either bluffing or refusing to accept a reasonable settlement and thus deserves to be "punished".
[Image: a worker at height wearing fall protection. Caption: Wearing fall protection when working at height is essential to protecting workers from injury. Self-serving bias can be observed in fatalities among those who work at height and are unwilling to wear protective gear.]

There is a good deal of experimental evidence to support this hypothesis. In one experiment, subjects played the role of either the plaintiff or the defendant in a hypothetical automotive accident tort case with a maximum potential damages payment of $100,000. The plaintiff's prediction of the likely judicial award was on average $14,500 higher than the defendant's, and the plaintiff's average nomination of a "fair" figure was $17,700 higher than the defendant's.[7] When the parties subsequently attempted to negotiate a settlement agreement, the discrepancy between the two sides' assessments of a fair compensation figure strongly correlated with whether or not they reached an agreement within a set period of time. The experiment was conducted with real money, with one real dollar equal to 10,000 experimental dollars, and if the parties did not reach a negotiated agreement the case was decided by a third party and each side had to pay costly court and legal fees. (A simple simulation of this impasse mechanism is sketched below.)

At the same time, there is evidence to suggest that people do not necessarily exhibit self-serving bias with respect to computer technologies. When they fail to achieve a desirable outcome when using a computer, they often blame themselves, not the technology. One proposed reason is that people are so used to bad functionality, counterintuitive features, bugs, and sudden crashes in contemporary software applications that they tend not to complain about them; instead, they believe it is their personal responsibility to predict possible issues and to find solutions to computer problems. This phenomenon has recently been observed in several human-computer interaction investigations.[8]

Group-serving bias is a similar bias on the group level.
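To make the impasse mechanism in the tort experiment concrete, here is a minimal simulation sketch in Python. It is not a reproduction of the Babcock and Loewenstein design: each simulated pair draws self-servingly shifted assessments of a fair award, and the pair settles only when the gap between assessments falls below a bargaining tolerance. The distributions, the tolerance, and the shift sizes are illustrative assumptions, not parameters from the study.

import random

def simulate_pair(bias_shift, tolerance=10_000, award_cap=100_000):
    # One plaintiff/defendant pair negotiating over a tort award.
    # Both observe the same case facts (a latent "true" fair award),
    # but each side's assessment is shifted in its own favor by a
    # random amount up to bias_shift (an illustrative assumption).
    true_fair = random.uniform(20_000, 80_000)
    plaintiff_view = min(award_cap, true_fair + random.uniform(0, bias_shift))
    defendant_view = max(0.0, true_fair - random.uniform(0, bias_shift))
    gap = plaintiff_view - defendant_view
    settled = gap <= tolerance  # settle only if assessments are close enough
    return gap, settled

def impasse_rate(bias_shift, trials=10_000):
    # Fraction of pairs that fail to settle, for a given bias strength.
    random.seed(0)  # deterministic for reproducibility
    failures = sum(1 for _ in range(trials) if not simulate_pair(bias_shift)[1])
    return failures / trials

if __name__ == "__main__":
    for shift in (5_000, 17_700, 40_000):  # 17,700 echoes the observed gap
        print(f"bias shift ${shift:,}: impasse rate {impasse_rate(shift):.0%}")

As the self-serving shift grows, fewer pairs end up with assessments within bargaining range, mirroring the reported correlation between the size of the fairness discrepancy and failure to reach agreement.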

References
[1] Miller, Dale T.; Ross, Michael (1975). "Self-serving biases in the attribution of causality: Fact or fiction?". Psychological Bulletin 82 (2): 213-225. doi:10.1037/h0076486. ISSN 0033-2909.
[2] Kruger, Justin (1999). "Lake Wobegon be gone! The 'below-average effect' and the egocentric nature of comparative ability judgments". Journal of Personality and Social Psychology 77 (2): 221-232. doi:10.1037/0022-3514.77.2.221. ISSN 1939-1315. PMID 10474208.
[3] Roese, N. J., & Olson, J. M. (2007). "Better, stronger, faster: Self-serving judgment, affect regulation, and the optimal vigilance hypothesis". Perspectives on Psychological Science, 2, 124-141.
[4] Suls, J.; Lemos, K.; Stewart, H. L. (2002). "Self-esteem, construal, and comparisons with the self, friends and peers". Journal of Personality and Social Psychology 82 (2): 252-261. doi:10.1037/0022-3514.82.2.252. PMID 11831414.
[5] Gilovich, Thomas (1993). How We Know What Isn't So: The Fallibility of Human Reason in Everyday Life. New York: Simon & Schuster. pp. 146-149. ISBN 0029117062. OCLC 22956975.
[6] Ayim Gyekye, Seth; Salminen, Simo (February 2006). "The self-defensive attribution hypothesis in the work environment: Co-workers' perspectives". Safety Science (Department of Social Psychology, University of Helsinki) 44 (2): 157-168. doi:10.1016/j.ssci.2005.06.006.
[7] Babcock, L., & Loewenstein, G. (1997). "Explaining bargaining impasse: The role of self-serving biases". Journal of Economic Perspectives 11: 109-126.
[8] Serenko, A. (2007). "Are interface agents scapegoats? Attributions of responsibility in human-agent interaction" (http://foba.lakeheadu.ca/serenko/papers/IwC_Published_Scapegoats_Serenko.pdf). Interacting with Computers 19 (2): 293-303. doi:10.1016/j.intcom.2006.07.005.

Further reading
Campbell, W.K., & Sedikides, C. (1999). Self-threat magnifies the self-serving bias: A meta-analytic integration. Review of General Psychology, 3, 23-43.
