
MEMORY MODELS – COG. PROCESSING

Definitions
Memory
– Memory is the encoding, storage and retrieval of information
– In order to understand more about the possible structure and function of memory,
researchers within the cognitive approach have suggested models of memory that can
be tested to determine their validity.
Duration
– STM does not last very long: up to about 30 seconds
– Rehearsal keeps a memory active
– Verbal rehearsal can allow the memory to become long term.
– LTM can last a lifetime
Capacity
– Capacity = how much can be held in a particular place.
– LTM is considered pretty much limitless. Losses happen through decay (memory fading over time) and interference (new information preventing us from remembering older information), not because of a limit on capacity.
Capacity of STM
- George Miller (1956) – Immediate memory span is 7 +/- 2 items, whether it be letters, numbers or words. Chunking (grouping pieces of information into integrated units) is a way to remember words and letters. He found that we can recall five words about as well as five letters, because each word is treated as a single chunk, so chunking things together lets us remember more. This is a very personalised process (a toy sketch of chunking follows below).
- Chunking improves the capacity of memory although it may reduce accuracy.
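A minimal illustration of chunking (the digit string and the grouping rule are invented for this example, not taken from Miller): the same twelve digits can be held either as twelve separate items, which exceeds the 7 +/- 2 span, or as three familiar chunks, which fits comfortably within it.

```python
# Toy illustration of chunking: the information itself does not change,
# but grouping it into meaningful units reduces the number of items
# that short-term memory has to hold.

digits = "149219452001"  # 12 separate digits: more than the 7 +/- 2 span

# A personalised chunking rule: someone who recognises these as years
# can store the same digits as just three chunks.
chunks = [digits[i:i + 4] for i in range(0, len(digits), 4)]

print(len(digits), "separate items:", list(digits))
print(len(chunks), "chunks:", chunks)  # ['1492', '1945', '2001']
```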
Coding
• How we store information
• Information arrives in your sensory memory as a sound or an image or a feeling.
• Three main ways of coding
– Acoustic coding: the sound of a stimulus.
– Visual coding: the physical appearance of a stimulus.
– Semantic coding: the meaning of the stimulus.
• In general, STM seems to use acoustic coding and LTM uses semantic coding.

Multi-Store Model of Memory


Atkinson and Shiffrin (1968) were among the first to suggest a basic structure of memory
with their Multi-Store Model of memory. Although this model seems rather simplistic today,
it sparked much research based on the idea that humans are information processors.
The model is based on a number of assumptions:
– The model argues that memory consists of a number of separate locations in which
information is stored.
– Memory processes are sequential.
– Each memory store operates in a single, uniform way.
Sensory Register
There are separate memory stores in the sensory register:
– Echoic store = auditory information
– Iconic store = visual information
– Haptic store = tactile information
– Gustatory store = taste information
– Olfactory store = smell information

STM
– Information in STM will disappear very quickly if it is not rehearsed.
– It will also disappear if new information enters STM pushing out old information.
– This is because there is a limited capacity.

LTM
– Information from STM needs to be rehearsed to go to LTM.
– The more something is rehearsed the longer lasting and better the memory will be.
– This is referred to as maintenance rehearsal.

Studies
Sperling (1960)
Aim
– To look at the limited duration of the sensory store.
Method
– Participants saw grids of digits and letters for 50 milliseconds (blink of an eye).
– They were either asked to write down all 12 items (whole report) or hear a tone after
the exposure and write down that row (partial-report).
– High tone – top row, Medium tone – middle row, Low tone – bottom row
Results
– When asked to report the whole grid, recall was poorer (on average about 4 of the 12 items, roughly 35%) than when asked to report a single cued row (about 3 of the 4 items in that row, 75%). Because participants could not know in advance which row would be cued, the partial-report result implies that around 9 of the 12 items were briefly available before they faded.
Conclusion
– This shows that information decays rapidly in the sensory store
Glanzer and Cunitz (1966)
Aim
– To examine whether the position of words influences recall (primacy & recency
effects).
Method
– There were two conditions. Participants (240 army enlisted men) were given a list of 20 words consisting of common one-syllable nouns, presented one at a time. In the first condition, immediately after hearing the words they did a free-recall task for two minutes.
– In the second condition, researchers introduced a delay between the end of the list and
the start of recall. During the delay, participants engaged in a filler task: counting
backwards from a given number for 30 seconds. The filler task was meant to prevent
rehearsal.
Results
– In the immediate-recall condition, the results clearly demonstrated the serial position effect in both its aspects: participants were better at remembering words at the start of the list (primacy effect) and at the end of the list (recency effect). This did not depend on the number of repetitions of each word.
– In the delayed (filler-task) condition, participants were still successful at recalling the words from the start of the list (primacy effect preserved), but were no longer able to recall the words from the end of the list (recency effect disappeared).
Conclusion
– Primacy occurs because the first words are best rehearsed and transferred to LTM.
– Recency occurs because the last words are still in STM when people start recalling. When rehearsal is prevented by the filler task, those words are displaced from STM and the recency effect disappears (a toy simulation of this explanation is sketched below).
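A toy simulation of the explanation above (this is not Glanzer and Cunitz's procedure; the probabilities, the decay constant and the buffer size are invented for illustration): early words get extra rehearsal and so reach LTM, late words are still sitting in a limited STM buffer at recall, and a 30-second filler task empties that buffer, removing only the recency effect.

```python
def recall_probabilities(list_length=20, stm_capacity=7, delay_filler=False):
    """Toy MSM sketch: estimated chance of recalling each serial position."""
    probs = []
    for position in range(list_length):
        # Primacy: earlier items get more rehearsal, so a higher chance of
        # having been transferred to LTM (0.7 and 0.05 are made-up values).
        p_ltm = max(0.0, 0.7 - 0.05 * position)
        # Recency: the last few items are still in the limited STM buffer
        # at recall, unless a filler task has displaced them.
        in_stm = position >= list_length - stm_capacity and not delay_filler
        p_stm = 0.8 if in_stm else 0.0
        # An item is recalled if it can be retrieved from either store.
        probs.append(round(1 - (1 - p_ltm) * (1 - p_stm), 2))
    return probs

print("Immediate recall: ", recall_probabilities())
print("After 30 s filler:", recall_probabilities(delay_filler=True))
# The first curve is high at both ends (primacy + recency); the second keeps
# the primacy bump but loses the recency bump, as Glanzer and Cunitz found.
```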

Limitations to the MSM


– This model proposes that the transfer of information from short to long term memory
is through rehearsal.
– However, in daily life we spend very little time actively rehearsing, yet we are constantly storing new information in long-term memory, so the model is not true to everyday experience.
– Craik and Watkins found that the type of rehearsal is more important as maintenance
rehearsal does not transfer information to LTM.
– Elaborative rehearsal is needed for long term storage as it allows you to link the
information with your existing knowledge, or what you think it means.
– It has been argued that LTM is not a unitary store, and there are differences in the
way different types of information are stored.
– At least three types of memory might be stored differently: episodic (memory of
events), procedural (how-to memory, for example, memory of how to tie your laces or
how to ride a bike) and semantic (general knowledge).
– One source of evidence for these claims is from case studies of amnesia where some
memories were lost while others stayed intact.
– The MSM focuses more on structure than process.
– It clearly separates the stores of memory and explains the structure of how memories
are formed and recalled.
– However, it fails to detail the specific process of acquiring and maintaining memories.
– Therefore, the model can be seen as reductionist as it simplifies a complex process
reducing our ability to understand the process of memory.
Working Memory Model (Baddeley and Hitch 1974)
Baddeley and Hitch focused on STM ONLY and believed it was not a unitary store (unlike the single STM store of the MSM), making the model an advance upon the MSM.
– Viewed LTM as a more inactive store that holds previously learned material for use
by the STM when needed.
3 components:
1. Central executive
2. Phonological loop
3. Visuo-spatial sketchpad

– Central Executive
– This is the key component to the working memory model.
– The function is to direct attention to particular tasks. It controls the other systems, known as slave systems, by determining how resources will be allocated.
– The information comes from LTM or from the sensory store.
– It has a very limited capacity and can’t attend to too many things at once, typically
one piece of information at one time.
– It also allows us to switch attention between different inputs of information.

– Phonological Loop (PL)


– The PL deals with auditory information and preserves the order of information. It also
has a limited capacity, the amount of information that can be spoken out loud for 2
seconds.
– It has 2 sub-components:
 Phonological store (inner ear) = holds the sounds/words you hear.
 Articulatory process (inner voice) = words maintained by repetition that are
heard or seen. These words are silently repeated (looped) which is a form of
maintenance rehearsal.

– Visuo-Spatial Sketchpad (VSS)


– Processes visual and spatial information (how things look and where they are
located).
– It is used when you have to plan a spatial task e.g. moving from one room to another.
– It has a limited capacity.
– Logie (1995) suggested that the VSS can be divided into a visual cache, which stores visual material about form and colour, and an inner scribe, which deals with spatial relationships.

Episodic Buffer
– Baddeley (2000) added the episodic buffer as he realised the model needed a general
store to operate properly.
– The episodic buffer is an extra storage system that has a limited capacity. It integrates
information from the central executive, the phonological loop, the visuo-spatial
sketchpad and also from long-term memory.
CE, PL and VSS each have their own processing resources. Therefore, WM can be used to
‘multi-task’. BUT ONLY IF:
(1) tasks use different components AND
(2) the capacity of WM is not exceeded.
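The "only if" rule above can be sketched as a tiny allocation check. This is a simplification with invented capacity figures; the only number taken from the notes is the roughly two seconds of speech the phonological loop can hold.

```python
# Toy sketch of the Working Memory Model's multi-tasking rule: two tasks can
# run together only if they use different slave systems and neither system's
# limited capacity is exceeded.

CAPACITY = {
    "phonological_loop": 2.0,      # seconds of speech that can be held (from the notes)
    "visuospatial_sketchpad": 4,   # invented unit: number of visual/spatial items
}

def can_multitask(task_a, task_b):
    """Each task is (component, load). Returns True if WM can do both at once."""
    comp_a, load_a = task_a
    comp_b, load_b = task_b
    if comp_a == comp_b:
        # Same slave system: the two tasks compete for the same resources.
        return load_a + load_b <= CAPACITY[comp_a]
    # Different systems: each task only has to fit its own system's capacity.
    return load_a <= CAPACITY[comp_a] and load_b <= CAPACITY[comp_b]

# Rehearsing a phone number (speech) while tracking a route (spatial): fine.
print(can_multitask(("phonological_loop", 1.5), ("visuospatial_sketchpad", 3)))  # True
# Rehearsing a phone number while repeating words you hear: both load the loop.
print(can_multitask(("phonological_loop", 1.5), ("phonological_loop", 1.5)))     # False
```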

Studies
Landry and Bartling (2011)
Aim
– The aim was to investigate whether articulatory suppression would influence recall of a written list of phonologically dissimilar letters in a serial recall task.
Method
– The participants were thirty-four undergraduate psychology students, randomly assigned to one of two conditions and tested individually. In the experimental group, participants first
saw a list of letters that they had to recall while saying the numbers '1' and '2' at a rate
of two numbers per second (the articulatory suppression task). The control group saw
the list of letters but did not engage in an articulatory suppression task. There were ten
lists each consisting of a series of 7 letters that did not sound similar. In the control
group, the experimenter showed participants a printed list for five seconds, instructed
them to wait for another five seconds, and then instructed them to write the correct
order of the letters on the answer sheet as accurately as possible. In the experimental
group, participants received instructions to repeatedly say the numbers '1' and '2' at a
rate of two numbers per second from the time of presentation of the list until the time
they filled the answer sheet.
Results
– The results showed that the scores from the experimental group were much lower than
the scores from the control group. The mean percent of accurate recall in the control
group was 76% compared to a mean of 45% in the experimental group. The results
supported the experimental hypothesis as the mean percent of accurate recall in the
control group was higher than the mean percent of accurate recall in the experimental
group.
Conclusion
– The data seems to support the prediction of the Working Memory Model that
disruption of the phonological loop through the use of articulatory suppression results
in less accurate working memory. In line with the model's prediction, articulatory suppression prevents rehearsal in the phonological loop by overloading it. This made it difficult for participants in the experimental condition to memorize the letter strings, whereas participants in the control condition did not experience such overload.

Brain scans have shown that a different area of the brain is active when carrying out verbal
tasks than when carrying out visual tasks. This supports the idea that there are different parts
of memory for visual and verbal tasks.

Limitations of Working Memory Model


– The model is complex and research can therefore only test one component of the WMM at a time. This reduces the validity of the model because it limits our ability to examine how the components work together as a whole system.
– In addition, it only deals with STM and does not factor in sensory memory and LTM. This is an issue because it doesn't show any connections between STM and LTM, nor how items in STM are transferred into LTM.
– The role of the central executive is unclear, although Baddeley and Hitch said it was
the most important part of the model. For example, they suggested that it has its own
limited capacity, but it is impossible to measure this separately from the capacity of
the phonological loop and the visuospatial sketchpad.
– This model does not explain memory distortion or the role of emotion in memory
formation.

Example Exam Questions


SAQ
– Explain one model of memory.
– Explain one study that supports one model of memory.
ERQ
– Evaluate one model of memory.
– Contrast two models of memory.
THE SCHEMA THEORY – COG. PROCESSING

Definitions
– Schemas are mental representations that are derived from prior experience and
knowledge. The bottom-up information derived from the senses is interpreted by the
top-down influence of relevant schemas in order to determine which behaviour is
most appropriate. Schemas help us to predict what to expect based on what has
happened before. They are used to organize our knowledge, to assist recall, to guide
our behaviour and to help us to make sense of current experiences.
– Schema help our minds to simplify the world around us. For example, we all have a
schema for a telephone. If someone hands me their mobile phone and asks me to
quickly call a doctor, I don't look at the phone and go, "I don't know. I have never
used THIS phone before!" Instead, I have a schema for how a phone works that
allows me to use the phone, regardless of the brand. Perhaps this particular phone has
features I have never seen before. If that is true, then I will learn about those features
by having to use the phone and then those features will be assimilated into my
schema of mobile phones.
– Schemas are useful because people must have a way to organise the world. On the other hand, there are two downsides to schemas. One is that we have a limited capacity for storing memories and we use schemas during encoding, so they can affect retrieval.
– The other is that schemas can lead to stereotyping, so it is important to learn how schemas affect thinking.
– Schema theory and research spans all the approaches to behaviour. Schema theory is a
theory of how humans process incoming information, relate it to existing knowledge
and use it. The theory is based on the assumption that humans are active processors
of information. People do not passively respond to information. They interpret and
integrate it to make sense of their experiences, but they are not always aware of it. If
information is missing, the brain fills in the blanks based on existing schemas.
– Culture determines the contents of schemas and they become representations in the
mind that guide behaviour
– Scripts are a special type of schema about events, such as a script for what happens at a birthday party. Scripts are patterns of behaviour that are learned through our interaction with the environment. We have thousands of scripts.
– Scripts are knowledge about situations we have faced over time and inform us about
what is supposed to happen in the future. For example, cultural scripts include the
information everyone knows within a cultural group e.g. how to plan a holiday
– Idiosyncratic scripts include knowledge specific to your personal situation, which has to be explained to others so they can understand it, e.g. what happened to you on your holidays
– People have different scripts based on their cultural experiences and they do not
always match

– Accommodation: The process of accommodation involves altering one's existing schemas, or ideas, as a result of new information or new experiences. New schemas may also be developed during this process. Piaget postulated that learning was a combination of accommodation and assimilation of schemas.
– Assimilation: The process by which we take in new information or experiences and
incorporate them into our existing ideas or schemas. We tend to modify experience or
information somewhat to fit in with our pre-existing beliefs.
– Cognitive restructuring: A therapeutic strategy developed by Aaron Beck in which
the therapist helps the client to recognize how schema are leading to maladaptive
patterns of thinking. Beck believed that if you could help to change the schema, then
the filter by which the world is interpreted would be changed and depression would
be alleviated.
– Schema: mental representations that are used to organize our knowledge, to assist
recall, to guide our behaviour, to predict likely happenings and to help us to make
sense of current experiences. Schemas are cognitive structures that are derived from
prior experience and knowledge. They simplify reality, setting up expectations about
what is probable in relation to particular social and textual contexts.
– Examples: cultural schemas, self schemas and gender schemas
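A minimal sketch of assimilation versus accommodation as operations on a stored schema (the "phone" schema and its features are invented for the example; this is an analogy, not a claim about how schemas are actually implemented in the brain):

```python
# Toy sketch: a schema as a set of stored expectations, with Piaget's two
# ways of handling a new experience.

phone_schema = {"has_screen": True, "makes_calls": True, "buttons": "touchscreen"}

def assimilate(schema, experience):
    """Fit the new experience into the existing schema: where the schema
    already holds an expectation, that expectation wins, so the experience
    is bent to match pre-existing beliefs."""
    return {feature: schema.get(feature, value) for feature, value in experience.items()}

def accommodate(schema, experience):
    """Alter the schema itself because the new experience does not fit."""
    updated = dict(schema)
    updated.update(experience)
    return updated

# An unfamiliar phone with a physical keypad:
new_phone = {"buttons": "physical keypad"}

print(assimilate(phone_schema, new_phone))   # still interpreted as a touchscreen phone
print(accommodate(phone_schema, new_phone))  # schema updated: phones may have keypads
```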

Bottom-up and top-down processing


There are two broad types of information processing
 Bottom-up information processing occurs when the cognitive process is data-driven;
perception is not biased by prior knowledge or expectations. It is a case of ‘pure’
information processing based on the reality as it is.
 Top-down processing occurs when your prior knowledge or expectations (schemas)
act as a lens or a filter for the information that you receive and process.

Schema theory has been used to explain how memory works. Cognitive psychologists divide
memory processes into three main stages:
– Encoding: transforming sensory information into memory.
– Storage: creating a biological trace of the encoded information in memory, which is
either consolidated or lost
– Retrieval: using the stored information in thinking, problem solving and decision
making.

Studies
Bartlett (1932) – Cognitive Schemas
Aim
– The aim of Bartlett's classic study was to investigate how memory of a story is
affected by previous knowledge. He wanted to see if cultural background and
unfamiliarity with a text would lead to distortion of memory when the story was
recalled. Bartlett’s hypothesis was that memory is reconstructive and that people store
and retrieve information according to expectations formed by cultural schemas.
Method
– Bartlett told participants a Native American legend called The War of the Ghosts. The
participants were British; for them the story was filled with unfamiliar names and
concepts, and the style was foreign to them.
– Bartlett allocated the participants to one of two conditions. One group was asked to
use repeated reproduction, where participants heard the story and were told to
reproduce it after a short time and then to do so again repeatedly over a period of
days, weeks, months or years. The second group was told to use serial reproduction,
in which they had to recall the story and repeat it to another person.
Results
– Bartlett found that there was no significant difference between the way that the groups
recalled the story. Over time the story became shorter; Bartlett found that after six or
seven reproductions, it was reduced to 180 words. The story also became more
conventional - that is, it retained only those details that could be assimilated to the
social and cultural background of the participants. For example, instead of "hunting
seals," participants remembered that the men in the story were fishing; the word
"canoe" was changed to the word "boat."
– Bartlett found that there were three patterns of distortion that took place. The story
became more consistent with the participants’ own cultural expectations - that is,
details were unconsciously changed to fit the norms of British culture. The story also
became shorter with each retelling as participants omitted information which was seen
as not important. Finally, participants also tended to change the order of the story in
order to make sense of it using terms more familiar to the culture of the participants.
They also added detail and/or emotions.
Conclusion
– The participants overall remembered the main themes in the story but changed the
unfamiliar elements to match their own cultural expectations so that the story
remained a coherent whole although changed.
Evaluation
– Bartlett's suggestion that schemas are complex unconscious knowledge structures is
one of Bartlett's major contributions to psychology. His research was one of the first to investigate mental processes at a time when psychological science insisted on studying only behaviours that could be directly observed.
– Bartlett wanted to study memory in a naturalistic setting meaning that he would give
participants some tasks that could be encountered in real life - for example,
remembering a story. Bartlett documented his research procedures but he has been
criticized for not being specific enough which has made it difficult to replicate his
findings. For example, he did not standardize the intervals at which participants
reproduced the material they had learned. In addition, no significant independent
variable was manipulated with other factors held constant to observe its systematic
effect on some dependent variable. Psychologists are critical of Bartlett's methods on
the grounds that they were not scientific in a modern sense.
– Many researchers have attempted to replicate the findings of Bartlett's original study, but they have not been successful. This would indicate that the findings have low reliability. This would make sense since Bartlett did not use a standardized procedure. Bergman and Roediger (1999) carried out a replication with a slight twist. The independent variable was the amount of delay before the retelling of the story. They found that when there was a 15-minute delay before the first retelling of the story, there was a higher rate of distortion than if the story was retold immediately. Immediate retelling of the story was often highly accurate and resulted in less distortion over time.
– There was no control group to see if, for example, other cultures would remember the story differently. For example, no Native American group was asked to recall the story.
– The study was quasi-experimental, so no cause-and-effect relationship can be established.
Limitations
– Bartlett wanted to study memory in a naturalistic setting meaning that he would give
participants some tasks that could be encountered in real life - for example,
remembering a story. However, no significant independent variable was manipulated
with other factors held constant to observe its systematic effect on some dependent
variable. Psychologists are critical of Bartlett's methods on the grounds that they
were not scientific in a modern sense. How else does the age of this study criticise
the theory?
– Many researchers have attempted to replicate the findings of Bartlett's original study,
but they have not been successful. This would indicate that the findings have low
reliability. This would make sense since Bartlett did not use a standardized procedure.
Why is this a problem in regards to the theory?

Martin and Halverson (1983) – Gender Schemas


Aim
– Martin and Halverson (1983) wanted to investigate whether gender schemas distort children's memories of information that is inconsistent with those schemas.
Method
– Boys and girls aged between five and six years saw pictures of males and females in activities that were either in line with gender role schemas (e.g. a girl playing with a doll) or inconsistent with gender role schemas (e.g. a girl playing with a gun). A week later, the children were asked to remember what they had seen in the pictures.
Results
– The children had distorted memories of the pictures that were not consistent with gender role schemas. For example, they remembered the picture of a girl playing with a gun as a boy playing with a gun.
Conclusion
– This shows how information may be distorted to fit with existing schemas.
– Martin and Halverson found that children actively construct gender identity based on
their own experiences. The tendency to categorize on the basis of gender leads them
to perceive boys and girls as different.
– According to Martin and Halverson, children have a gender schema for their own sex
(the in-group) and for the opposite sex (the outgroup).
– Gender schemas determine what children pay attention to, whom they interact with,
and what they remember. Gender schemas thus serve as an internal, self-regulating
standard. This could be the reason that gender schemas may become a self-fulfilling
prophecy or a stereotype threat.

Evaluation for Gender Schemas


Strengths:
– It can explain why children’s gender roles do not change after middle childhood. The
established gender schemas tend to be maintained because children pay attention to
and remember information that is consistent with their gender schema (confirmation
bias).
– The theory depicts the child as actively trying to make sense of the world using their
present knowledge and gender schemas serve as an internal, self-regulating standard.
Limitations:
– There is too much of a focus on individual cognitive processes in the development of
gender roles. Social and cultural factors are not taken into consideration in this.
– There is no explanation on exactly how gender schemas are formed and developed.
Why is this a problem?
Brewer and Treyens (1981)
Aim
– Brewer and Treyens wanted to study the role of schema in the encoding and retrieval
of memory. To do so, they carried out an experiment to see how well people could
recall what was in an office.
Method
– The sample was made up of 86 university psychology students. Participants were
seated in a room that was made to look like an office. The room consisted of objects
that were typical of offices: a typewriter, paper and a coffee pot. There were some
items in the room that one would not typically find in an office - for example, a skull
or a toy top.
– Each participant was asked to wait in the professor's office while the researcher
"checked to make sure that the previous participant had completed the experiment."
The participant did not realize that the study had already begun. The participants were
asked to have a seat. All of the chairs except for one had objects on them. In this
way, it was guaranteed that all participants would have the same vantage point in the
office. The researcher left the room and said that he would return shortly.
– After 35 seconds the participants were called into another room and then asked what
they remembered from the office. When they finished the experiment, they were
given a questionnaire. The important question was "Did you think that you would be
asked to remember the objects in the room?" 93% said "no."
– The participants were randomly allocated to one of three conditions.
– The recall condition: Participants were asked to write down a description of as many
objects as they could remember from the office. They were also asked to state the
location, shape, size and colour of the objects. They were asked to "Write your
description as if you were describing the room for someone who had never seen it."
After this, they were given a verbal recognition test in which they were given a
booklet containing a list of objects. They were asked to rate each item for how sure
they were that the object was in the room. "1" meant that they were sure it was not in
the room; "6" meant that they were absolutely sure it was in the room. The
questionnaire consisted of 131 objects: 61 were in the room; 70 were not.
– The drawing condition: In this condition participants were given an outline of the
room and asked to draw in the objects they could remember.
– The verbal recognition condition: In this condition, the participants were read a list
of objects and simply asked whether they were in the room or not.
Results
– The researchers found that when the participants were asked to recall either by writing
a paragraph or by drawing, they were more likely to remember items in the office that
were congruent with their schema of an office - that is, the "expected items" were
more often recalled. The items that were incongruent with their schema of an office -
e.g. the skull, a piece of bark or the screwdriver - were not often recalled. When
asked to select items on the list, they were more likely to identify the incongruent
items; for example, they didn't remember the skull when doing the free recall, but
gave it a 6 on the verbal recognition task. However, they also had a higher rate of
identifying objects which were schema congruent but which were actually not in the
room.
– In both the drawing and the recall conditions, participants also tended to change the nature of the objects to match their schema. For example, the pad of yellow paper that was on a chair was remembered as being on the desk.
Evaluating Schema Theory
Testable: Schema theory is testable. This is seen in the studies by Bartlett and by Brewer &
Treyens. You will see several more examples throughout the course.
Empirical evidence: There is also biological research to support the way in which the brain
categorizes input. For example, Mahon et al. (2009) found that from the visual cortex, information about living and non-living objects is shuttled to different parts of the brain - even in blind participants. These findings suggest that our brains automatically sort information and classify it, in the same manner that schema theory predicts.
Applications: Schema theory has been applied to help us understand how memory works. It
also helps us to understand memory distortion. Schema theory has also been applied in
abnormal psychology (therapy for depression and anxiety), relationships (theories of mate
selection) and in health psychology (health campaigns to change unhealthy behaviours). It is
a robust theory that has many applications across many fields of psychology.
Construct validity: Cohen (1993) argued that the concept of schema is too vague and
hypothetical to be useful. Schema cannot be observed.
Unbiased: Schema theory is applied across cultures. There is no apparent bias in the
research, although most of the early research was done in the West.
Predictive validity: The theory helps to predict behaviour. We can predict, for example,
what types of information will be best recalled when given a list of words. Trends, such as
omitting information that is not of high relevance to the individual, are commonly seen in
individuals recalling a news story. However, we cannot predict exactly what an individual
will recall.

– Large amount of empirical evidence - A significant amount of research has supported the idea that schemas affect cognitive processes such as memory. The theory seems
quite useful for understanding how people categorize information, interpret
information and make inferences. Schema theory has contributed to our understanding
of memory distortions and false memories.
– Some of the limitations of schema theory are that it is not yet entirely clear how
schemas are acquired in the first place or the exact way they influence cognitive
processes. It has also been argued that schema theory cannot account for why
schema-inconsistent information is sometimes recalled. However, in spite of some imperfections, it seems to be a robust theory that has generated a lot of research and still does. (This evaluation overlaps with the evaluation of gender schemas, so be careful not to repeat yourself.)

Example Exam Questions


SAQ
– Explain schema theory with reference to one research study

ERQ
– Evaluate schema theory with reference to research studies
THINKING AND DECISION MAKING – COG. PROCESSING

Definitions
– Thinking is the process of using knowledge and information to make plans, interpret
the world, and make predictions about the world in general. There are several
components of thinking - these include problem solving, creativity, reasoning and
decision making.
– Decision making is defined as the process of identifying and choosing alternatives
based on the values and preferences of the decision-maker. Decision making is
needed during problem-solving to reach the conclusion.
– Problem-solving is thinking that is directed toward solving specific problems by
means of a set of mental strategies. The concepts of problem-solving, decision
making and thinking are very much interconnected.

The Dual Process Model of Thinking and Decision Making


The Dual Process Model of thinking and decision making postulates that there are two basic
modes of thinking - what Stanovich and West (2000) refer to as "System 1" and "System 2".

System 1 is an automatic, intuitive and effortless way of thinking.


- System 1 thinking often employs heuristics - that is, a ‘rule’ used to make decisions
or form judgements.
- Heuristics are mental short-cuts that involve focusing on one aspect of a
complex problem and ignoring others (Lewis, 2008).
- This ‘fast’ mode of thinking allows for efficient processing of the often complex
world around us but may be prone to errors when our assumptions do not match the
reality of a specific situation.
- These errors may have greater consequence in our day to day lives because system 1
thinking is expected to create a greater feeling of certitude – certainty that our initial
response is correct.
- Gilbert and Gill (2000) have argued that we become more likely to use System 1
thinking when our cognitive load is high - that is, when we have lots of different
things to think about at the same time, or we have to process information and make a
decision quickly.

System 2 is a slower, conscious and rational mode of thinking.


- This mode of thinking is assumed to require more effort.
- System 2 starts by thinking carefully about all of the possible ways we could interpret
a situation and gradually eliminates possibilities based on sensory evidence until we
arrive at a solution.
- Rational thinking allows us to analyse the world around us and think carefully about
what is happening, why it is happening, what is most likely to happen next and how
we might influence the situation.
- This mode of thinking is less likely to create feelings of certitude and confidence.
Studies
Wason (1968) – Wason Selection Task
One example of research that supports the dual process model is based on the Wason
selection task. The aim was to determine how abstract versus non-abstract stimuli impact
cognition and thinking. Participants were shown a set of cards (3, 8, red and orange) and
asked the question “Which card(s) must be turned over to test the idea that if a card shows an
even number on one face, then its opposite face is red?” Most people chose the cards showing “8” and “red”, but this is incorrect: the logically correct choice is the “8” card and the “orange” card, because only those two can falsify the rule. The typical error is an example of matching bias, which means that when faced with an abstract problem we tend to be overly influenced by the wording or context of the question and simply pick the items named in the rule. The Wason selection task provides important evidence for the dual
process model. Most people make the decision of which cards to choose without any
reasoning - but as an automatic response to the context of the question. Wason (1968) found
that even when he trained people how to answer this question, when he changed the context,
the same mistakes were made. For example, can you solve this one? Which cards would you have to turn over in order to test whether the following statement is true? If there is a male's name on one side of the card, then there is an IB subject on the other side of the card. If you got
this wrong, this shows how powerful System 1 can be. It can interfere with System 2, even
when you have learned the "right way to do things.” Griggs and Cox (1982) found that when
the task is not abstract, we do not tend to show the matching bias.
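A brute-force check of the abstract version described above (a sketch; the card labels 3, 8, red and orange follow the notes). It asks, for each visible face, whether any hidden face could break the rule "if even, then red", which is why the "8" and "orange" cards are the logically informative ones.

```python
# Wason selection task, abstract version: each card has a number on one side
# and a colour on the other. Rule: "if a card shows an even number on one
# face, then its opposite face is red."
visible = ["3", "8", "red", "orange"]
numbers, colours = ["3", "8"], ["red", "orange"]

def worth_turning(card):
    """A card is worth turning over only if some possible hidden face could
    falsify the rule (an even number paired with a non-red colour)."""
    hidden_options = colours if card in numbers else numbers
    for hidden in hidden_options:
        number = card if card in numbers else hidden
        colour = hidden if card in numbers else card
        if int(number) % 2 == 0 and colour != "red":
            return True
    return False

print([card for card in visible if worth_turning(card)])  # ['8', 'orange']
```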

Alter and Oppenheimer (2007)


Aim
– Investigate how font affects thinking
Method
– 40 Princeton students completed the Cognitive Reflections Test (CRT). This test is
made up of 3 questions, and measures whether people use fast thinking to answer the
question (and get it wrong) or use slow thinking (and get it right). Half the students
were given the CRT in an easy-to-read font, while the other half were given the CRT
in a difficult-to-read font
Results
– Among students given the CRT in easy font, only 10% of participants answered all
three questions correctly, while among the students given the CRT in difficult font,
65% of participants were fully correct
Conclusion
– When a question is written in a difficult-to-read font, this causes participants to slow
down, and engage in more deliberate, effortful System 2 thinking, resulting in
answering the question correctly.
– On the other hand, when the question is written in an easy-to-read font, participants
use quick, unconscious and automatic System 1 thinking to come up with the obvious
(but incorrect) answer
Evaluation
– This study provides strong evidence for dual processing theory, providing support
for Kahneman's model of fast System 1 and slow System 2 thinking
– The study only involved Princeton undergraduate students, who are clearly not representative of the general population. Therefore, the results may not generalize to other groups of participants
– The CRT is made up of "trick" questions, which rarely come up in everyday
life. Therefore, the ecological validity of this study is low, as the real-world
significance of these findings is unclear
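For context, the best-known CRT item is Frederick's bat-and-ball problem (not quoted in these notes, but a standard example): a bat and a ball cost $1.10 together and the bat costs $1.00 more than the ball. System 1 typically blurts out 10 cents; working through the constraint gives 5 cents.

```python
# CRT-style item: bat + ball = 1.10 and bat = ball + 1.00. How much is the ball?

intuitive_answer = 0.10  # the fast, automatic System 1 response
# System 2: solve ball + (ball + 1.00) = 1.10  ->  2 * ball = 0.10  ->  ball = 0.05
correct_answer = (1.10 - 1.00) / 2

for ball in (intuitive_answer, correct_answer):
    bat = ball + 1.00
    print(f"ball = {ball:.2f}, bat = {bat:.2f}, total = {bat + ball:.2f}")
# The intuitive answer makes the total 1.20, so only 0.05 satisfies both conditions.
```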

Bechara et al. (2000)


Aim
– The aim of the study was to compare decision making of participants with damage to
their ventromedial prefrontal cortices (vmPFC) with healthy controls. The vmPFC has
been shown to play a role in regulating impulsive behaviour and is therefore believed
to regulate behaviour through its ability to enable the use of system two processing.
Method
– Researchers compared decisions made by 17 healthy controls and 8 patients with
lesions in their vmPFC during the Iowa Gambling Task. Participants chose cards
from four different decks for 100 trials and won or lost money based on their decisions. Two decks gave high immediate rewards but larger long-term losses, and the other two decks gave lower rewards with low overall risk.
Results
– Healthy control participants typically took 20 to 30 trials before opting for the safe decks; they were able to resist the temptation of the high-reward decks, suggesting they were using System 2 thinking because they were thinking through their decisions.
– The vmPFC lesion participants did not demonstrate this change in behaviour and
continued to choose from the disadvantageous decks. Therefore, it can be said that
they are oblivious to the future and are guided predominantly by immediate prospects.
Conclusion
– This study suggests that the vmPFC plays a role in our ability to use system two
processing. If this part of our brain is damaged, we may not be able to think past
initial impulses, weigh up more factors, and base our decisions on consequences, which
are all fundamental characteristics of system two processing. This might lead to
decisions being made based on system one, which is impulsive and automatic.

This study provides evidence that system two processing might have a biological base in the
vmPFC. Damage to this part of the brain, therefore, could affect our thinking and decision
making.
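A toy simulation of the contrast described above (deck payoffs, penalty probabilities and the switch-over point are invented for illustration; the only figure taken from the notes is that controls settle on the safe decks after roughly 20 to 30 trials):

```python
import random

random.seed(0)

# Toy Iowa Gambling Task: two "risky" decks pay more per card but lose money
# in the long run; two "safe" decks pay less per card but win overall.
def draw(deck):
    if deck in ("A", "B"):  # risky: high reward, occasional large penalty
        return 100 - (1250 if random.random() < 0.1 else 0)
    return 50 - (250 if random.random() < 0.1 else 0)  # safe: low reward, small penalty

def play(switch_to_safe_after):
    """Model a player who starts on the risky decks and may learn to switch.
    Healthy controls switch after roughly 20-30 trials; vmPFC patients never do."""
    total = 0
    for trial in range(100):
        risky = trial < switch_to_safe_after
        deck = random.choice(("A", "B") if risky else ("C", "D"))
        total += draw(deck)
    return total

def average_outcome(switch_to_safe_after, n_players=2000):
    return sum(play(switch_to_safe_after) for _ in range(n_players)) / n_players

print("Healthy control (switches after 25 trials):", round(average_outcome(25)))
print("vmPFC patient (never switches):            ", round(average_outcome(100)))
# Averaged over many simulated players, the patient-style strategy, guided only
# by immediate rewards, loses money, while the switching strategy comes out ahead.
```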
Evaluating the Dual Process Model
Strengths
– There is biological evidence that different types of thinking may be processed in
different parts of the brain.
Limitations
– The model can seem to be overly reductionist as it does not clearly explain how (or
even if) these modes of thinking interact or how our thinking and decision making
could be influenced by emotion.
– The definitions of System 1 and System 2 are not always clear. For example, fast
processing indicates the use of System 1 rather than System 2 processes. However,
just because processing is fast does not mean it is done by System 1. Experience can make System 2 processing go faster.

The Theory of Reasoned Action (TRA)


– The theory of reasoned action (TRA) aims to explain the relationship between
attitudes and behaviours when making choices. This theory was proposed by Martin
Fishbein in 1967.
– The main idea of the theory is that an individual’s choice of a particular behaviour is
based on the expected outcomes of that behaviour.
– If we believe that a particular behaviour will lead to a particular (desired) outcome,
this creates a predisposition known as the behavioural intention.
– The stronger the behavioural intention, the stronger the effort we put into
implementing the plan and hence the higher the probability that this behaviour will
actually be executed.
– There are two factors that determine behavioural intention: attitudes and subjective
norms.
– An attitude describes your individual perception of the behaviour (whether this
behaviour is positive or negative) while the subjective norm describes the perceived
social pressure regarding this behaviour (if it is socially acceptable or desirable to do
it).
– Depending on the situation, attitudes and subjective norms might have varying
degrees of importance in determining the intention.
– In 1985 the TRA was extended and became what is known as the theory of planned
behaviour (TPB). This theory introduced the third factor that influences behavioural
intentions: perceived behavioural control. This was added to account for situations in
which the attitude is positive, and the subjective norms do not prevent you from
performing the behaviour; however, you do not think you are able to carry out the
action
– TPB predicts an individual's intention to engage in a behaviour at a specific time and
place. It posits that individual behaviour is driven by behaviour intentions, where
behaviour intentions are a function of three determinants: an individual’s attitude
toward behaviour, subjective norms, and perceived behavioural control
– Behavioural Intention
– This is a proxy measure for behaviour. It represents a person's motivation in
the sense of her or his conscious plan or decision to perform certain behaviour.
Generally, the stronger the intention is, the more likely the behaviour will be
performed.
– Attitude toward Behaviour
– This refers to the degree to which a person has positive or negative feelings of
the behaviour of interest. It entails a consideration of the outcomes of
performing the behaviour.
– Subjective Norm
– This refers to beliefs about whether significant others think he or she should perform the behaviour. It relates to a person's perception of the social environment surrounding the behaviour
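One way to picture the TPB's claim is as a weighted sum: behavioural intention is predicted from attitude, subjective norm and perceived behavioural control, with weights that vary by situation. The sketch below borrows the standardised coefficients reported by Pabian and Vandebosch in the study summarised further down (0.38, 0.28 and 0.03); treating them as simple linear weights on standardised scores is an illustrative simplification, not the authors' model.

```python
def behavioural_intention(attitude, subjective_norm, perceived_control,
                          w_a=0.38, w_sn=0.28, w_pbc=0.03):
    """Toy TPB sketch: intention as a weighted sum of its three determinants.
    Default weights are the standardised betas reported by Pabian and
    Vandebosch (2013) for cyberbullying intention; inputs are assumed to be
    standardised scores (0 = average for the sample)."""
    return w_a * attitude + w_sn * subjective_norm + w_pbc * perceived_control

# A strongly positive attitude shifts intention far more than strong perceived control.
print(behavioural_intention(attitude=2.0, subjective_norm=0.0, perceived_control=0.0))  # 0.76
print(behavioural_intention(attitude=0.0, subjective_norm=0.0, perceived_control=2.0))  # 0.06
```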

Studies
Pabian and Vandebosch (2013)
Aim
– The aim of Pabian and Vandebosch (2013) was to test which behavioural, normative and control beliefs are the best predictors of the three main factors of the Theory of Planned Behaviour (TPB), respectively attitudes (A), subjective norm (SN) and perceived behavioural control (PBC), with regard to cyberbullying.
Method
– A longitudinal study with a random stratified cluster sample was used in the study.
The sample was limited to adolescents in the first four grades of secondary education
in Belgium (95.6% participants of Belgian nationality), since this age group (11 – 17
years) has the highest involvement in cyberbullying. Before the surveys were
administered, parental consent was attained, and each student was assured anonymity
and confidentiality of their results verbally and in writing. In total 1814 students filled
in the questionnaire during school time in their school with the presence of a
researcher. Firstly, a questionnaire based on cyberbullying perpetration (involving A,
SN, PBC, intentions to cyberbully and underlying beliefs) was administered and
assessed on a seven-point scale. The participants were then given an explanation of
bullying to ensure common understanding and then another questionnaire was
administered on their frequency of cyberbullying during the last six months. These
surveys were conducted again 6 months later to observe various predictors of
behavioural, normative and control beliefs.
Results
– In total, 11.7% (n = 151) of respondents reported that they had cyberbullied someone
else within the past six months. The three main factors of the TPB—A, SN and
PBC—explain 28.8% of total variance of adolescents’ intention to cyberbully.
Intention is a significant predictor of self-reported cyberbullying six months later.
Attitude is the best predictor of intention (β = 0.38) followed by SN (β = 0.28). PBC
has no significant effect on intention (β = 0.03) or directly on behaviour (β = 0.01).
Conclusion
– The results reveal that the theoretical model of planned behaviour fits for the sample.
Intention to engage in cyberbullying is a strong predictor of self-reported
cyberbullying behaviour six months later. Attitude is the strongest direct predictor of
intention, followed by the SN.
Evaluation of Theory of Planned Behaviour
Strengths:
– High predictive validity. Ajzen and Fishbein (1973), as a result of their own meta-analysis of published research, report a 0.63 correlation between intentions and behaviour.
– Collectively, the four variables should be able to explain a significant portion of
variance in the responses to the target variable (future behaviour). In other words,
using the data it should be possible to build a mathematical formula that predicts
future behaviour from the other four variables with a high degree of probability. This
measure of probability is also referred to as the predictive validity of the model.
– Many applications…

Limitations:
– It assumes the person has acquired the opportunities and resources to be successful in
performing the desired behavior, regardless of the intention.
– It does not account for other variables that factor into behavioral intention and
motivation, such as fear, threat, mood, or past experience.
– While it does consider normative influences, it still does not take into account
environmental or economic factors that may influence a person's intention to perform
a behavior.
– It assumes that behavior is the result of a linear decision-making process, and does not
consider that it can change over time.
– While the added construct of perceived behavioral control was an important addition
to the theory, it doesn't say anything about actual control over behavior.
– The time frame between "intent" and "behavioral action" is not addressed by the
theory.

Example Exam Questions


SAQ
– Explain one study of thinking and decision making.
– Explain one theory or model of thinking and decision making.
ERQ
– Discuss one model of thinking and decision making, with reference to relevant research.
BIASES IN TDM – RELIABILITY OF COG. PROCESSES

Definitions
– Human beings are not always rational thinkers
– Instead, we often rely on intuitive thinking and take cognitive shortcuts
– These shortcuts and incomplete, simplified strategies are known as heuristics
– Heuristics can lead to errors known as “cognitive biases”

A cognitive bias is any of a wide range of observer effects identified in cognitive science and
social psychology including very basic statistical, social attribution, and memory errors that
are common to all human beings.
– A cognitive bias is an error in thinking that affects the decisions and judgments that
people make.
– A cognitive bias is a mistake in reasoning, evaluating, remembering, or other
cognitive process, often occurring as a result of holding onto one's preferences and
beliefs regardless of contrary information. Psychologists study cognitive biases as
they relate to memory, reasoning, and decision-making.

Cognitive Bias 1: Confirmation Bias


– the tendency to interpret new evidence as confirmation of one's existing beliefs or
theories.
– Imagine that you have tried to reach a friend (with whom you have an ambivalent
relationship) by phone (or email), leaving messages, yet have not received a call in
return. In a situation like this, it is easy to jump to the intuitive conclusion
that your friend wants to avoid you. The danger, of course, is that you leave this belief
unchecked and start to act as though it were true.

Studies – Confirmation Bias


Mendel et al. (2011)
Aim
– To study whether psychiatrists and medical students are prone to confirmation bias and whether confirmation bias leads to poor diagnostic accuracy in psychiatry
Method
– An experimental decision task was presented to 75 psychiatrists and 75 medical students. After making a preliminary diagnosis, participants searched for new information (in a confirmatory, disconfirmatory or balanced way) before reaching a final diagnosis.
Results
– A total of 13% of psychiatrists and 25% of students showed confirmation bias when
searching for new information after having made a preliminary diagnosis. Participants
conducting a confirmatory information search were significantly less likely to make
the correct diagnosis compared to participants searching in a disconfirmatory or
balanced way
Evaluation
– Experimental setting – does not reflect real-life diagnoses
– Only two alternative diagnoses were presented
– The task only supplied information about symptoms in written form. This differs from making
medical diagnoses in real life, where physicians examine real-life patients and obtain
important information from different sources (e.g. from visual cues or interaction with
the patient).
– Real-life diagnoses are often made under time pressure.
– Several studies from psychology suggest that time pressure increases confirmation
bias (Ask & Granhag, 2007; D. Frey, unpublished observations). Therefore, this bias
may occur even more frequently under natural conditions

Snyder and Swann (1978)


Aim
– To see whether female college students would create questions based on stereotypes.
Method
– The researchers told female college students that they would meet a person who was either
introverted (reserved, cool) or extroverted (outgoing, warm). The participants were
then asked to prepare a set of questions for the person they were going to meet.
Results
– In general, participants came up with questions that confirmed their perceptions of
introverts and extroverts. Those who thought they were going to meet an introvert
asked, “What do you dislike about parties?” or “Are there times you wish you could
be more outgoing?” and extroverts were asked, “What do you do to liven up a party?”
Conclusion
– The researchers concluded that the questions asked confirmed participants’
stereotypes of each personality type so that it became a self-fulfilling prophecy - for
example, because they believed he was an introvert they asked him questions which
made him appear to be one.
Evaluation
– Low representational generalisability → only female college students were used → results may not generalise to other populations
– Researcher bias → impacts the reliability of the results → the researchers judged whether the questions participants asked fitted the stereotypes
– Replicability → the method is easy to recreate

Cognitive Bias 2: Illusory Correlation


– Illusory correlation - people see a relationship between two variables even when
there is none.
– An example of this is when people form false associations between membership of a
social group and specific behaviours such as women’s inferior ability in mathematics.
The illusory correlation phenomenon causes people to overestimate a link between the
two variables, here “women” and “ability in mathematics”.
– Illusory correlations can come in many forms and culturally-based prejudice about
social groups can to some extent be classified as illusory correlations
– E.g. a person bitten by a dog assumes that all dogs are aggressive and develops a phobia towards them. This is due to incomplete knowledge about dog behaviour.
Studies
Hamilton and Gifford (1976)
Aim
– To investigate how our expectations of events can distort how we process the
information
Method
– Participants read descriptions of various people from imaginary groups: Group A &
Group B. Group A was considerably larger than Group B. The readings contained descriptions of each individual's group membership and a specific behaviour. These behaviours were either helpful or harmful (e.g. John, a teacher in Group B, screams
at his students). Participants were asked to give their impressions of a typical group
member.
Results
– When giving their descriptions, participants considered the behaviour of Group B members (the minority) to be considerably less desirable than that of Group A members. In the materials there was actually no correlation between group membership and desirability, so participants were forming an illusory correlation.
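The notes above do not give the stimulus frequencies, but the figures commonly cited for this study (treat them as an assumption here, not as part of the notes) make the point concrete: desirable and undesirable behaviours appear in the same proportion in both groups, so any perceived group difference is illusory. A quick check:

```python
# Commonly cited stimulus counts for Hamilton and Gifford (1976); these
# numbers are an assumption for illustration, not taken from the notes above.
behaviours = {
    "Group A (majority)": {"desirable": 18, "undesirable": 8},
    "Group B (minority)": {"desirable": 9, "undesirable": 4},
}

for group, counts in behaviours.items():
    total = counts["desirable"] + counts["undesirable"]
    share_undesirable = counts["undesirable"] / total
    print(f"{group}: {share_undesirable:.0%} of behaviours are undesirable")
# Both groups come out at about 31%, i.e. there is no real association between
# group membership and desirability, yet participants rated Group B as worse.
```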
Evaluation
– shows possible relationship between illusory correlation and stereotypes
– no cause and effect – correlational only
– Low ecological validity

Illusory correlations are the result of our brain's effort to find connections where none exist.
They are mere logical errors that can cause misconceptions and lead to stereotypes. However,
rational thinking can help rectify them and thus curb tendencies such as racial stereotyping,
bias, superstitions, forming opinions based on insufficient knowledge, and living with
preconceived notions that lack a logical base.

Example Exam Questions


SAQ
– Explain one study of one cognitive bias.
– Explain one cognitive bias, making use of one study.
ERQ
– Discuss research on cognitive biases
RECONSTRUCTIVE MEMORY – RELIABILITY OF COG. PROCESSES

Definitions
– Eyewitness testimony (EWT): The recall of observers of events previously
experienced.
– Memory: A cognitive process which is the encoding, storage and retrieval of
information. So the retention of experience.
– Reconstructive memory: The theory that when memories are accessed, they are not
retrieved as a single, whole memory, but rather as a collection of independent
memories put together. It is in this “reconstructive process” that distortions occur.
– Schema: Mental representations based on one’s past experiences, beliefs and
culture. Schema play a key role in the reconstructive process of memory. They
simplify reality, setting up expectations about what is probable in relation to particular
social and textual contexts. Information that is not relevant to our schema is often not
remembered; information that is familiar is often exaggerated and information that is
foreign to our culture may be changed to make it more personally relevant.

Reconstructive Memory
– It is based on the idea that memories are not saved as complete, coherent wholes.
– Retrieval of memory is influenced by our perception, our beliefs, past experience,
cultural factors and the context in which we are recalling the information.
– Schema influence what we encode and what we retrieve from memory.
– Bartlett argued that we try to make sense of the past by adding our interpretations of
events and deducing what most likely happened.
– He argued that memory is an imaginative reconstruction of experience.

– Loftus claims that the nature of questions asked by police or in a courtroom can
influence witnesses’ memory.
– Leading questions - that is, questions that are suggestive in some way - and post-event
information facilitate schema processing which may influence accuracy of recall.
This is called the misinformation effect.
– Witnesses are often quite confident of what they remember even though their
recollections don’t fit the actual facts. When witnesses try to retrieve a past event,
they may unknowingly fill in the gaps with information based on other past
experience, stereotypes or post-event information.
– Post-event information is any information that you are exposed to after you have
witnessed something. This information can come in the form of television or social
media reports - or from listening to other people tell their stories.
– When eyewitnesses' memories are distorted, it can have very damaging effects
Studies
Bartlett (1932)
Aim
– The aim of Bartlett's classic study was to investigate how memory of a story is
affected by previous knowledge. He wanted to see if cultural background and
unfamiliarity with a text would lead to distortion of memory when the story was
recalled. Bartlett’s hypothesis was that memory is reconstructive and that people
store and retrieve information according to expectations formed by cultural
schemas.
Method
– Bartlett told participants a Native American legend called The War of the Ghosts. The
participants were British; for them the story was filled with unfamiliar names and
concepts, and the style was foreign to them. Bartlett allocated the participants to one
of two conditions.
– One group was asked to use repeated reproduction, where participants heard the
story and were told to reproduce it after a short time and then to do so again
repeatedly over a period of days, weeks, months or years.
– The second group was told to use serial reproduction, in which they had to recall the
story and repeat it to another person.
Results
– Bartlett found that there was no significant difference between the way that the groups
recalled the story. Over time the story became shorter; Bartlett found that after six or
seven reproductions, it was reduced to 180 words. The story also became more
conventional - that is, it retained only those details that could be assimilated to the
social and cultural background of the participants. For example, instead of "hunting
seals," participants remembered that the men in the story were fishing; the word
"canoe" was changed to the word "boat."
– Bartlett found that there were three patterns of distortion that took place.
– The story became more consistent with the participants' own cultural expectations -
that is, details were unconsciously changed to fit the norms of British culture.
– The story also became shorter with each retelling as participants omitted information
which was seen as not important.
– Finally, participants also tended to change the order of the story in order to make
sense of it using terms more familiar to the culture of the participants. They also
added detail and/or emotions.
– The participants overall remembered the main themes in the story but changed the
unfamiliar elements to match their own cultural expectations so that the story
remained a coherent whole although changed.
Conclusion
– Remembering is not a passive but rather an active process, where information is
retrieved and changed to fit into existing schemas. This is done in order to create
meaning in the incoming information. According to Bartlett, humans constantly
search for meaning. Based on his research Bartlett formulated the theory of
reconstructive memory. This means that memories are not copies of experiences but
rather reconstructions. This does not mean that memory is unreliable but rather
that memory can be altered by existing schemas.
Evaluation
– High ecological validity → several applications; explains many real-life situations
– The methodology was not rigorously controlled. Participants did not
receive standardized instructions. There was no standardized time after which
participants had to recall the story. He also did not tell his participants to be as
accurate as possible.
– Although there were two conditions, there was no difference in the performance of
the two groups - in other words, the IV did not affect the DV. However, it appears
that culture did affect how they recalled the story. But if we focus on how cultural
schema affect the participants' memories, there are several limitations.
– When we consider culture as the IV, the study is quasi-experimental - that is, the
independent variable was not directly manipulated. Therefore, a cause-and-effect
relationship cannot be established.
– Secondly, there was no control group. There was no group of Native Americans
recalling the story to verify that, in fact, this distortion doesn't happen to people in
that cultural group.

Loftus and Pickrell (1995)


Aim
– The aim of the study was to determine if false memories of autobiographical events
can be created through the power of suggestion.
Method
– The participants were three males and 21 females. Before the study, a parent or
sibling of the participant was contacted and asked two questions. First, "Could you
tell me three childhood memories of the participant?" Second, "Do you remember a
time when the participant was lost in a mall?" Data was only used if the answer to the
second question was "no."
– The participants then received a questionnaire in the mail describing four memories
that they were asked to write about and then mail back to the psychologists. Three
events were real and one was “getting lost in the mall.” They
were instructed that if they didn’t remember the event, they should simply write “I do
not remember this.”
– The participants were interviewed twice over a period of four weeks. They were
asked to recall as much information as they could about the four events. Then they
were asked to rate their level of confidence about the memories on a scale of 1 -
10. After the second interview, they were debriefed and asked if they could guess
which of the memories was the false memory.
Results
– About 25% of the participants “recalled” the false memory. However, they also
ranked this memory as less confident than the other memories and they wrote less
about the memory on their questionnaire.
Conclusion
– Although this is often seen as strong evidence of the power of suggestion in creating
false memories, only 25% of the participants had them. The study does not tell us
why some participants were more susceptible to these memories than others, but it
does show that the creation of false memories is possible.
Evaluation
– High ecological validity → talking about childhood memories
– The fact that the questionnaire was filled out at home could lead to contamination -
that is, they could have consulted with someone
– Demand characteristics → social desirability effect
Loftus and Palmer (1974)
Aim: to investigate whether the use of leading questions would affect an eyewitness's
estimation of speed.
Experiment 1
Method
– 45 students participated in the experiment. They were divided into five groups of
nine students each. Seven short films of traffic accidents were shown. These films were
taken from driver’s education films.
– When the participants had watched a film they were asked to give an account of the
accident they had seen and then they answered a questionnaire with different
questions about the accident. There was one critical question which was the one
asking the participant to estimate the speed of the cars involved in the accident.
– The participants were all asked the same question, but the critical question included
different verbs. Nine participants were asked “About how fast were the cars going
when they hit each other?” The critical word ‘hit’ was replaced by ‘smashed’,
‘collided’, ‘bumped’ or ‘contacted’ in the other conditions, each of which also had
nine participants answering the question.
– The researchers predicted that using the word ‘smashed’ would result in higher
estimations of speed than using the word ‘hit’. The independent variable was the
intensity of the verb used in the critical question and the dependent variable was the
estimation of speed; a brief illustrative sketch of this comparison follows below.
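The analysis boils down to comparing the mean speed estimate in each verb condition. The following is a minimal illustrative sketch (Python); the numbers are invented purely for illustration and are not the study's actual raw data – it only shows how five sets of nine estimates would be averaged and compared.

    # Illustrative sketch only: hypothetical speed estimates (mph), nine per verb condition.
    # The IV is the verb in the critical question; the DV is the estimated speed.
    from statistics import mean

    estimates_mph = {
        "smashed":   [42, 39, 45, 40, 38, 44, 41, 40, 38],
        "collided":  [40, 38, 41, 39, 37, 40, 42, 38, 39],
        "bumped":    [38, 36, 39, 37, 40, 38, 37, 39, 38],
        "hit":       [35, 33, 36, 34, 32, 35, 34, 33, 34],
        "contacted": [31, 30, 33, 32, 29, 32, 31, 33, 30],
    }

    for verb, values in estimates_mph.items():
        # a higher mean for "smashed" than for "hit" is the pattern Loftus and Palmer predicted
        print(f"{verb:<9} mean estimate = {mean(values):.1f} mph")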
Results
– The mean estimates of speed were highest in the ‘smashed’ condition (40.8 mph) and
lowest in the ‘contacted’ group (31.8 mph). The results were significant at p ≤ 0.005.
– The findings were that the more intense the verb that was used, the higher the average
estimate
Conclusion
– In conclusion, it seems that participants’ memory of an incident could be changed by
using suggestive questions.
Evaluation
– The experiment was conducted in a laboratory and the participants were students. Lab
experiments may be problematic in the sense that they do not necessarily reflect how
people remember in real life. There may be a problem of ecological validity and it has
been argued that this is the case here. Support for this point is that the films
shown in the experiment were made for teaching purposes and therefore the
participants did not experience the event as they would a real accident.
– However, a strength of the experimental method is that confounding variables can be
controlled so that it is really the effect of the independent variable that is measured.
This was the case in this experiment and Loftus and Palmer could rightfully claim that
they had established a cause-effect relationship between the independent variable (the
critical words) and the dependent variable (estimation of speed). The fact that the
experiment used students as participants has also been criticized because students are
not representative of a general population. Another problem could be demand
characteristics since the participants knew they were participating in an experiment. This
could affect their answers because they responded to what they thought would be
appropriate answers. If this is the case it was not their memory that was tested.
Experiment 2
Method
– A second experiment used 150 students as participants. They were randomly
allocated to three groups and all saw a film of a car accident. They were then asked
questions about the accident, including an estimate of speed, but this time the critical
question used only “hit” in one group and “smashed” in another. The third group -
the control group - was not asked to estimate speed at all.
– The participants came back a week later and, without re-watching the film, were
asked a series of questions including the critical question: “Did you see any broken
glass?” (yes or no). There was no broken glass in the film.
Results
– The results showed that those who had originally been asked the question with the
more intense verb (smashed) were more likely to recall seeing broken glass than those
who had been asked with the less intense verb (hit); a brief illustrative sketch of this
comparison follows below.
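As a minimal sketch of how this result is usually summarised, the snippet below computes the proportion of “yes” answers to the broken-glass question in each condition. The counts are illustrative assumptions, not verified figures from the original paper.

    # Illustrative sketch only: hypothetical (yes, no) counts for "Did you see any broken glass?"
    responses = {
        "smashed": (16, 34),
        "hit":     (7, 43),
        "control": (6, 44),
    }

    for condition, (yes, no) in responses.items():
        proportion = yes / (yes + no)
        # a higher proportion in the "smashed" condition is what the misinformation effect predicts
        print(f"{condition:<8} reported broken glass: {proportion:.0%}")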
Conclusion
– Loftus argues that when the different verbs are used, they activate schemas that have
a different sense of meaning. When the question is asked using smashed, the
connotation of the verb influences how the memory is formed.
Evaluation
– This study can also be accused of lacking ecological validity and therefore it may be
difficult to generalize the findings to real life. See the evaluation of the first experiment.

Evaluation Research on Eyewitness Memory


– Studies by Loftus done under controlled conditions are open to criticism. They often
are artificial in nature. When watching a video of a car crash, one does not experience
the emotions that one would experience when actually seeing a real car accident.
Thus, emotion or stress, which are conditions normal for most eye-witnesses, are
absent in her research. Many say that the studies lack ecological validity.
– The studies have been replicated and show a high degree of reliability.
– There is evidence - for example, from the testimonies of Holocaust survivors - that
shows that what is seen in the laboratory is seen in real life. In the case of Holocaust
survivors, we have actual historical data which we can use to compare their memories
to actual events and establish the level of accuracy.
– There are ethical concerns about manipulating a participant's memories. In the Lost in
the Mall study, deception is a real concern.
– The research has been applied in order to improve the process of gathering data from
eyewitnesses. In addition, it has been applied to better understand false memories that
arise in therapy. This means that the research has had several different applications.

Application
The following changes have been implemented to criminal investigations as a result of
research on reconstructive memory.
1. Witnesses are more likely to pick someone wearing clothes similar to those worn by the
culprit than to select someone on physical characteristics in a line-up. Therefore, all
members of the line-up should wear the same clothing – and not clothing similar to what
was described at the scene of the crime.
2. There is usually the assumption made that the suspect is in the line-up. Therefore, the
witness tends to choose the person who most resembles their memory or schema of
the accused. Therefore, all members of the line-up should match their description. In
addition, witnesses should be told that the suspect may or may not be in the line-up.
Cutler & Penrod advocate sequential line-ups. The accuracy of identification
increases when suspects are seen one by one and an identification is made (yes/no)
after each person is presented. Finally, witnesses should not be given feedback that
confirms their identification.
3. When gathering evidence from a witness, researchers use a narrative interview style
called a Cognitive Interview. A narrative interview is an interview that asks a simple
question such as,"Could you please tell me what you remember about the night of the
murder?" The interviewee does most of the talking; there are very few questions,
except for clarification. In this way the interviewer does not alter schema and distort
memory by asking leading questions.

– The cognitive interview is a type of narrative interview that begins with context
reinstatement. We have better recall when we are in the same place, the same
emotional state, and/or the same context in which memory was encoded. This is based
on Tulving & Thomson's Encoding Specificity Hypothesis (1973). Before asking
them to retell what happened, the police would have the interviewee think about
where they were when they witnessed the crime and how they felt at the time.
– The cognitive interview often also uses the following strategies:
– Change the perspective. This involves asking the person to "think outside of their
schema." What do you think that the bank teller saw?
– Change the order. This breaks down the role of schema in “filling in” information.
Researchers have found that more information is obtained if the witness is asked to
recall events forward and backward than simply retelling the story

Counter Argument: Memory is Reliable


*Important when addressing a “To what extent” question.

Yuille and Cutshall (1986)


Aim
– To study whether leading questions would affect memory of eyewitnesses at a real
crime scene
Method
– The crime scene was in Vancouver. A thief entered a gun shop and tied up the owner
before stealing money and guns from the shop. The owner freed himself, and thinking
that the thief had escaped, went outside the shop. But the thief was still there and shot
him twice. Police had been called and there was gunfire - and the thief was eventually
killed. As the incident took place in front of the shop, there were eyewitnesses - 21
were interviewed by the police.
– The researchers chose this incident to study because there were enough witnesses and
there was forensic evidence available to confirm the stories of the eyewitnesses.
– The researchers contacted the eyewitnesses four months after the event. 13 of the
eyewitnesses agreed to be interviewed as part of a study. They gave their account of
the incident, and then they were asked questions. Two leading questions were
used. Half the group was asked if they saw a broken headlight on the getaway car.
– The other half was asked if they saw a yellow panel on the car (the panel was actually
blue). They were also asked to rate their stress on the day of the event on a seven-
point scale.
Results
– It was found that eyewitnesses were actually very reliable. They recalled a large
amount of accurate detail that could be confirmed by the original police reports. They
also did not make errors as a result of the leading questions. 10 out of 13 of them said
there was no broken headlight or yellow quarter panel, or that they hadn’t noticed
those particular details.
– The researchers found that the accuracy of the witnesses compared to the original
police reports was between 79% and 84%. It appears that this research contradicts the
study by Loftus & Palmer (1974). It could be that the lack of emotional response to
the video that was shown in their study played a key role in the influence of the
leading questions. The witnesses reported that they didn't remember feeling afraid
during the incident, but they did report having an "adrenaline rush."
Evaluation
– The study was a field study and thus has very strong ecological validity.
– There was archival evidence (police records of the original testimonies) to confirm
the accuracy of the memories.
– The study is not replicable and also not generalizable since it was a one-off incident.
There was no control of variables, so it is difficult to know the level of rehearsal that
was used by the different eyewitnesses. It could be that those who agreed to be in the
study had spent the most time thinking and reading about the case.
– Because the eyewitnesses' safety was threatened, it could be that this is a case of
flashbulb memory, which would mean that it cannot be compared to Loftus's original
research.
– There was an attempt at deceiving the participants. As consent was given by all
participants, the idea that undue stress or harm would be caused by being asked to
recall the incident is unfounded.
– The quantification of the qualitative responses from the participants is problematic
and may be open to researcher bias.

Example Exam Questions


SAQ
– Explain one study of reconstructive memory.
ERQ
– To what extent is one cognitive process reliable?
– Evaluate research on reconstructive memory.
EMOTION AND MEMORY – EMOTION AND COGNITION

Definitions
– Emotion and cognition are intertwined
– Emotions are believed to perform an adaptive function in that they shape the
experience of events and guide the individual in how to react to events, objects and
situations, with reference to personal relevance and well-being.
– Memories of emotional events sometimes have a persistence and vividness that others
seem to lack
– Cognitive process = MEMORY

Flashbulb Memories
– Brown & Kulik (1977) defined flashbulb memory as a highly detailed, exceptionally
vivid "snapshot" of the moment when a surprising and emotionally arousing event
happened
– They postulated the special-mechanism hypothesis, which argues for the existence
of a special biological memory mechanism that, when triggered by an event
exceeding critical levels of surprise, creates a permanent record of the details and
circumstances surrounding the experience
– People tend to remember six pieces of information:
■ where they were,
■ what they were doing,
■ who told them (the informant),
■ what they felt about it,
■ what others felt about it,
■ what happened immediately afterwards.
– Such detailed, permanent storage appears to contradict what we would expect from
normal short-term memory processing, where unrehearsed details are quickly lost.

How emotion may affect one cognitive process


– Flashbulb Memory (FBM) research spans all three levels of analysis. It is a cognitive
process, the brain is active, and culture acts as a mediator.
– FBMs are different from memories of the actual event. Rather, FBMs are very
clear personal memories of the context in which someone heard the news. This
context is called “the reception context” and may be more important than the news of
the actual event. Time affects the forgetting curve for flashbulb memories less than it
does for other memories.
– Today the most commonly accepted model of flashbulb memory is called
the importance-driven model. This model emphasizes that personal consequences
determine intensity of emotional reactions.

Amygdala and Emotions


– Amygdala – localization – emotion
– The amygdala, a small structure in the temporal lobe, appears to be critical in the
brain’s emotional circuit, and it is believed to play a key role in emotional memories.
– It makes sense that our brains would store information about fearful experiences in
good detail.
Studies
Brown and Kulik (1977)
Aim
– To investigate whether surprising and personally significant events can cause
flashbulb memories.
Method
– Interviews with 80 participants.
– The participants were given a series of nine events - for example, the assassination of
President Kennedy - and asked if they "recalled the circumstances in which you first
heard about the event."
– For those events which they said "yes," they were then asked to write an account of
their memory and rate it on a scale of personal importance.
– People in the study were also asked if they had flashbulb memories of personal
events. Of 80 participants, 73 said that they had flashbulb memories associated with a
personal shock such as the sudden death of a close relative.
Results
– Brown and Kulik found that people said that they had very clear memories of where
they were, what they did, and what they felt when they first learned about an
important public occurrence such as the assassination of John F. Kennedy or Martin
Luther King. 99% of the participants recalled the circumstances in which they heard
about the assassination of the president - thirteen years after the event.
– There was a much lower rate of flashbulb memories among white participants than
among black participants for the assassinations of Malcolm X and Martin Luther King Jr.
Conclusion
– The link between personal importance and the event is important in the creation of a
flashbulb memory.
Evaluation
– The study was one of the first to attempt to empirically test the existence of flashbulb
memories. It has led to a large amount of further research.
– The procedure could be replicated, allowing us to determine if the results are reliable.
– The questionnaire was retrospective in nature - that is, it was self-reported data that
relied on the memory of the individual and could not be verified for accuracy by the
researchers. Compare this to Neisser & Harsch's prospective study.
– The actual level of surprise or emotion at the moment of the historical event cannot be
measured or verified.
– It is not possible to actually measure the role of rehearsal in the creation of the
memories.
– Social desirability may have played a role in the responses given by the participants.
– The study shows sampling bias; it is difficult to generalize the findings as only
American males were studied. The study had both gender and cultural bias. More
recent findings show that collectivistic societies may have lower rates of FBM.
– Perhaps people have told the story of JFK's assassination so many times that the
memory seems detailed, but the details may have changed over time. People may "fill
in" missing details based on their best guess, as schema theory suggests.
Sharot et al. (2007)
Aim
– The aim of this study was to determine the potential role of biological factors on
flashbulb memories.
Method
– This quasi-experiment was conducted three years after the 9/11 terrorist attacks in
Manhattan. The sample was made up of 24 participants who were in New York City
on that day.
– Participants were put into an fMRI scanner. While in the scanner, they were presented
with word cues on a screen. In addition, the word "Summer" or "September" was
projected along with each cue word in order to have the participant link the word to
either their summer holidays or to the events of 9/11.
– Participants’ brain activity was observed while they recalled the event. The memories
of personal events from the summer served as a baseline of brain activity for
evaluating the nature of 9/11 memories.
– After the brain scanning session, participants were asked to rate their memories for
vividness, detail, confidence in accuracy and arousal. Participants were also asked to
write a description of their personal memories. Only half of the participants actually
reported having what would be called "flashbulb memories" of the event - that is, a
greater sense of detail and a strong confidence in the accuracy of the memory.
– Those that did report having flashbulb memories also reported that they were closer to
the World Trade Centre on the day of the terrorist attack. Participants closer to the
World Trade Centre also included more specific details in their written memories.
Results
– Sharot and her team found that the activation of the amygdala for the participants who
were downtown was higher when they recalled memories of the terrorist attack than
when they recalled events from the preceding summer, whereas those participants
who were further away from the event had equal levels of response in the amygdala
when recalling both events. The strength of amygdala activation at retrieval was
shown to correlate with flashbulb memories.
Conclusion
– These results suggest that close personal experience may be critical in engaging the
neural mechanisms that produce the vivid memories characteristic of flashbulb
memory.
Evaluation
– Supports the hypothesis that a special brain mechanism is involved in these
memories.
– The study is correlational in nature and does not establish a cause-and-effect
relationship that would explain how the memory is actually attributable to activity in
the amygdala.
– Research by McGaugh & Cahill supports the role of the amygdala in the creation of
emotional memories.
– The environment of the fMRI and the task that the participants were asked to do are
highly artificial - and thus low in ecological validity.
– Although the study demonstrates the role of the amygdala as a result of proximity to
the event, it does not explain why some people have vivid memories after seeing the
events on television or the Internet.
– The sample size is small and culturally biased. Research (such as Kulkofsky) indicates
that individualistic cultures are more likely to have flashbulb memories than
collectivistic cultures. This makes the findings difficult to generalize.

Neisser and Harsch (1992)


Aim
– To test the theory of FBM by investigating the extent to which memory for a shocking
event (the Challenger disaster) would be accurate after a period of time.
Method
– Neisser and Harsch (1992) investigated students’ memory accuracy of the incident 24
hours after the accident, and then again two and a half years later. The second
questionnaire asked questions such as: Where were you when you heard about the
Challenger disaster? Who were you with? What were you doing? The participants
were also asked how confident they were of these memories.
Results
– The participants were very confident that their memories were correct, but the
researchers found that 40 per cent of the participants had distorted memories in the
final reports they made. Possibly, post-event information had influenced their
memories.
Conclusion
– Emotional intensity was associated with greater memory confidence, but not with
accuracy.
Evaluation
– The study was a case study. The strength of this method is that it was
both longitudinal and prospective. There was also method triangulation - both
questionnaires and interviews were used. The limitation is that it cannot be
replicated. In addition, there was participant attrition - that is, participants who
dropped out of the study over time.
– The study has high ecological validity. The researcher did not manipulate any
variables and the study was not done under highly controlled conditions.
– The study was naturalistic. Although this is good for ecological validity, it is
difficult to eliminate the role of confounding variables. There was no control over the
participants' behaviour between the first questionnaire and the second. We have no
idea how often this memory was discussed or how often the participants were exposed
to media about the event.
– It is possible that the confidence levels were higher than they should have been as a
result of demand characteristics - that is, since the participants were asked to verify
their level of confidence, they could have increased their ratings to please the
researcher or avoid social disapproval for claiming not to remember an important day in
their country's history.
– As mentioned in the background section above, there are several studies of different
events - like September 11th - which seem to have the same results. This
demonstrates the transferability of the findings of this study to other situations.
Kulkofsky et al. (2011) – FBM and Culture
Aim
– Looked at the role of culture in flashbulb memory in five cultures: China, Germany,
Turkey, the UK and the USA.
Method
– Participants were given five minutes to recall as many memories as they could of
public events occurring in their lifetime.
– They were then asked to complete a "memory questionnaire" for each event where
they were asked if they remembered where they first heard of the event.
– If so, then they were asked a series of questions to determine the extent of the FBM.
– They were then asked to answer questions about the importance of the event to them
personally.
Results
– The researchers found that in a collectivistic culture like China, personal importance
and intensity of emotion played less of a role in predicting FBM, compared with more
individualistic cultures that place greater emphasis on an individual's personal
involvement and emotional experiences.
– Because focusing on the individual's own experiences is often de-emphasized in the
Chinese context, there would be less rehearsal of the triggering event compared with
participants from other cultures - and thus a lower chance of developing an FBM.
– However, it was found that national importance was equally linked to FBM formation
across culture.
Conclusion
– Collectivistic cultures have fewer personalized flashbulb memories than
individualistic cultures. Where the memory was based on an event of national
importance, however, the rate of FBMs was similar across cultures.
Evaluation
– A representative of the culture administered the test and the questionnaires were given
in the native languages of the participants. This avoids interviewer effects. It also
meant that since they were responding in their native language - and the language in
which these memories were mostly created - the participants were more likely to
recall these memories.
– The study used back-translation to make sure that the translation of the
questionnaires was not a confounding variable. This increases the credibility of the
study.
– There is the danger of the ecological fallacy - just because the participants come from
the culture being studied, this does not mean that they necessarily share the traits of
the culture's predominant dimensions - that is, just because I am American does not
mean that I process flashbulb memories like other Americans.
– It is an etic approach to researching cultural difference. It is possible that cultural
factors affected how information was self-reported. It cannot be verified in this study
whether those personal memories actually exist but were not reported.
Evaluation of FBM Theory
Strengths
– There is biological evidence that supports the role of emotion in memory formation -
for example, McGaugh & Cahill (1995) and Sharot (2007).
Limitations
– Neisser argues that it is one's level of confidence, not accuracy, which defines FBM.
– There are cultural differences that indicate that rehearsal may play the most important
role in the development of FBM.
– Often with real-life research on the topic, it is impossible to verify the accuracy of
memories.
– It is not possible to measure one's emotional state at the time of an event - thus
making it impossible to demonstrate a clear causal explanation.
– People do not always know that an event is important until later, so it is unclear how
flashbulb memories could be created at the moment of the event.
– Memories are so vivid because the event itself is rehearsed and reconsidered after the
event.
– According to Neisser, what is called a flashbulb memory may simply be a well-
rehearsed story. The flashbulb memories are governed by a storytelling schema
following a specific structure, such as place (where were we?), activity (what were we
doing?), informant (who told us?), and affect (how do we feel about it?).

Example Exam Questions


SAQ
– Explain one theory of how emotion may affect one cognitive process, using one
study.
ERQ
This may be asked using the command terms discuss, evaluate, contrast or "to what
extent" as appropriate:
– Discuss the influence of emotion on one cognitive process.
ETHICS

Ethical consideration – definition & why important

– Informed consent: Participants must be given information about the study before agreeing to
take part; for participants who are too young or too intellectually disabled to give consent, their
guardian must be given the information on their behalf.
– The use of deception: Only permitted if the results would be affected by participants' knowledge
of the true aim; participants must be debriefed when the study is complete.
– Right to withdraw: Participants have the right to leave at any stage regardless of the possible
effect on the results; they also have the right to withdraw their results after the study.
– No undue stress or harm to the participants: The researcher must always act in a professional
way, making sure that the best interests of the participants and of society in general are met.
– Participant data must be anonymised: Participants must not be identified in any way in terms of
results, involvement or any other confidential data; this should be described to participants at the
beginning.
– Debriefing: Occurs after completion of the study; participants are told the results and the
conclusion, and any erroneous beliefs are corrected, especially if there was deception. Participants
are informed of the availability of counsellors.

Cog. Processing
Bechara et al. – Anonymity
– In this particular case, one example of an ethical consideration based on the results of
the study could be anonymity. The results reveal interesting and unique features
about participants’ decision making based on the damage to the brain. This is
sensitive information and so participant details should be anonymous and confidential. If
this information were publicized, in extreme cases it could even lead to manipulation
of vmPFC lesion patients.
Brewer and Treyens – Deception
– Although the participants had given consent to be part of an experiment, they were not told the
true aim of the experiment and were not aware that the experiment had actually
begun. This was done to avoid demand characteristics. If the participants knew that
they were going to be asked to remember what was in an office, then they would have
tried to memorize as much as they could while sitting there.

Reliability of Cog. Processes


Loftus and Pickrell - Deception
Loftus and Palmer – Undue stress or harm

Emotion and Cognition


Neisser and Harsch – Undue stress or harm
Brown and Kulik – Withdrawal rights
RESEARCH METHODS

Interviews
An interview is a conversation with one or more interviewees in which the interviewer asks
questions and the interviewee gives answers.
– Structured interview: highly controlled
– Semi-structured interview: informal conversation with some set questions
– Unstructured: focus groups – informal
+ Isn't costly
+ Focus groups: quick and convenient way to collect data from several individuals
simultaneously
+ Semi-structured/unstructured = room for clarification
+ Rapport
+ More naturalistic
+ Qualitative and quantitative
-Demand characteristics: social desirability
-Researcher bias
-Correlational
-Difficult to quantify
-Greater room for confirmation bias

Case Studies
Case studies are in-depth investigations of a single person, group, event or community.
+Used when not many people are available
+high ecological validity
+Used to gain rich, qualitative data
+Longitudinal
+Method Triangulation
+Allows research into unique and possibly unethical conditions
-Low generalisability
-Possible bias
-Time consuming
-Cannot be replicated

Quasi Experiment
Experiment in which the DV is measured against a naturally occurring IV.
+Naturalistic
+High ecological validity
+Used in circumstances where IV cannot be changed
-Less control over IV
-Sampling bias
-Uncontrolled extraneous variables
-Correlational: No IV is changed
-Low internal validity

Observations
Any means by which a phenomenon or event is studied through watching and recording behaviour.
Naturalistic: Naturally occurring behaviour is recorded in an inconspicuous way
Covert: Observer conceals their presence whilst making observations
Overt: Participants know they are being observed; the observer does not conceal their presence
+Participants are in their natural environment
+Qualitative data
+No demand characteristics, naturally occurring behaviour
-Extraneous Variables
-Researchers do not have control over variables
-Can take a long time for something to happen
-Ethical considerations
-Researcher bias

Experiments
The manipulation of an IV resulting in a change in a DV, to show a cause and effect
relationship.
+Cause and effect relationship
+Low confounding variables
+Easy to replicate in the future to test for reliability of results
+Extraneous variables are controlled
+Easy to manipulate and control
+Usually quantitative data
-Demand characteristics
-Low ecological validity
-Prone to confirmation bias

Questionnaires
A series of written questions to gain either quantitative or qualitative data.
+Easy to conduct
+Can get both quantitative and qualitative data
+Replicable (if standardised)
+Quick, easy and cheap
+No researcher bias
-Closed questions cannot be further elaborated on
-Demand characteristics
-Can’t ask for clarification
-Lacks ecological validity
-Only correlational
-Self-reported data

Cog. Processing
Pabian and Vandebosch - Survey
Sperling - Experiment

Reliability of Cog. Processes


Yuille and Cutshall – Case study
Snyder and Swann - Experiment

Emotion and Cognition


Brown and Kulik – Interview
Neisser and Harsch – Questionnaire
SCAFFOLDS – MEMORY MODELS

Evaluate One Model of Memory


Introduction:
– Address question
– Define key terms
o Memory
o Encoding (types of encoding)
o Storage
o Retrieval
o Capacity
o Duration
– Introduce and Explain Multistore model
o Include linear process, characteristics of each store the model suggests, don’t forget
rehearsal loop, include how each store is linked to each other and how information
moves between them

Body 1:
– AMRC of Sperling (1960)

Body 2:
– AMRC of Glanzer and Cunitz (1966)

Body 3:
– Evaluation of Multistore model
o It clearly separates the stores of memory and explains the structure of how
memories are formed and recalled. However, it doesn’t show how memories are
acquired and suggests a very simple linear model → reductionist view on memory
o Highly supported by many pieces of research (Include how they support it)
o The model proposes that the transfer of information from short term to long term
memory is through rehearsal. However, in daily life, we very rarely rehearse
memories but they are being stored into LTM
o It has been argued that LTM is not a unitary store, and there are differences in the
way different types of information are stored → At least 3 types of memories
have been seen to be stored differently: episodic, procedural and semantic
memory

Conclusion:
– Sum up evaluation of the model as command term is evaluate
– Further Research is needed as memory is a complex cognitive process…
Contrast Two Models of Memory
Introduction:
– Address question
– Define key terms
o Memory
o Encoding (types of encoding) and Storage
o Retrieval, Capacity & Duration
– Introduce and Explain Multistore model
o Include linear process, characteristics of each store the model suggests, don’t
forget rehearsal loop, include how each store is linked to each other and how
information moves between them
– Introduce and Explain Working Memory model
o Include 3 components and the added component + general idea of what it is +
how ‘multi-tasking’ can occur

Body 1:
– AMRC of Sperling (1960)

Body 2:
– Evaluation of Multistore model
o It clearly separates the stores of memory and explains the structure of how
memories are formed and recalled. However, it doesn’t show how memories
are acquired and suggests a very simple linear model → reductionist view on
memory
o Highly supported by many pieces of research (Include how they support it)
o The model proposes that the transfer of information from short term to long
term memory is through rehearsal. However, in daily life, we very rarely
rehearse memories but they are being stored into LTM
o It has been argued that LTM is not a unitary store, and there are differences in
the way different types of information are stored → At least 3 types of
memories have been seen to be stored differently: episodic, procedural and
semantic memory

Body 3:
– AMRC of Landry and Bartling (2011)

Body 4:
– Evaluation of Working Memory Model (contrast to Multistore model)
o Complex model → can only test one component at a time → reduces validity
o Only tests STM and doesn't factor in LTM → doesn't show the connection or how
info is transferred to LTM
o The role of the central executive is unclear → most important part → has its own
limited capacity → impossible to measure separately from the other
components
o Does not explain memory distortion or role of emotion in memory formation.

Conclusion:
– Sum up evaluation of the models
– Summarise how they are different
SCAFFOLDS – THE SCHEMA THEORY

Evaluate Schema Theory


Introduction:
– Address the question
– Define key terms
o Schemas
o Schema theory - Explain
o Assimilation
o Accommodation
o Scripts
o Types of schemas
– Describe schemas and their effects
– Introduce studies

Body 1:
– AMRC of Bartlett
– Link to question

Body 2:
– AMRC of Brewer and Treyens
– Link to Schema Theory

Body 3: (Optional)
– AMRC of Martin and Halverson
– Link to question

Body 4:
– Evaluate Schema Theory (TEACUP)
– Testable: Yes, because of Bartlett & Brewer and Treyens
– Evidence: Yes → refer to studies (also bio evidence - Mahone et al.)
– Applications: “robust” theory – understanding how schemas affect memory → helped us
understand false memories and distortion
– Construct Validity: Vague + can’t be directly observed – also can’t explain why
schema-inconsistent info is sometimes recalled → not clear how and why schemas
are formed in the first place
– Unbiased: Applicable across many cultures → no bias evident
– Predictive Validity: Helps predict behaviour – we can predict what an individual will
recall when given a list of words, based on our understanding of schema theory →
trends in behaviour are common across individuals

Conclusion:
– Summarise findings of the studies
– Summarise evaluation
– Answer question
SCAFFOLDS – THINKING AND DECISION MAKING

Discuss Thinking and Decision Making, with reference to relevant research


Introduction:
– Address the question
– Define key terms
o Thinking
o Decision Making
o Problem Solving
o Heuristics
– Introduce and explain the dual process model
o System 1
o System 2

Body 1:
– AMRC of Wason (1968)
– Link to question

Body 2:
– AMRC of Alter and Oppenheimer (2007)
– Link to question

Body 3: (Optional)
– AMRC of Bechara et al. (2000)
– Link to question

Body 4:
– Evaluate the Dual Process Model
Strengths
– There is biological evidence that different types of thinking may be processed in
different parts of the brain.
Limitations
– The model can seem to be overly reductionist as it does not clearly explain how (or
even if) these modes of thinking interact or how our thinking and decision making
could be influenced by emotion.
– The definitions of System 1 and System 2 are not always clear. For example, fast
processing is taken to indicate the use of System 1 rather than System 2 processes. However,
just because processing is fast does not mean it is done by System 1. Experience can
make System 2 processing faster.

Conclusion:
– Summarise findings of the studies
– Summarise evaluation
– Link to question
SCAFFOLDS – BIASES IN TDM

Discuss Research on Cognitive Biases


Introduction:
– Address the question
– Define key terms
o Memory
o Thinking
o Decision making
o Heuristics
o Cognitive biases in general
o Confirmation bias
o Illusory Correlation

Body 1:
– Bias 1 – Confirmation bias – give example
– AMRC of Mendel et al. (2011)

Body 2:
– Evaluation of Mendel et al.
– Link to question

Body 3:
– AMRC of Snyder and Swann (1978)

Body 4:
– Evaluation of Snyder and Swann
– Link to question

Body 5:
– Bias 2 – Illusory Correlation – give example
– AMRC of Hamilton and Gifford (1976)

Body 6:
– Evaluation of Hamilton and Gifford
– Link to question

Conclusion:
– Summarise findings of the studies
– Summarise the two types of cognitive biases
– Link to question

*Body 3 and 4 optional


SCAFFOLDS – RECONSTRUCTIVE MEMORY

To What Extent is One Cognitive Process Reliable OR Evaluate Research on Reconstructive Memory


Introduction:
– Address the question
– Define key terms
o Cognitive process
o Eyewitness testimony
o Memory
o Reconstructive memory
o Schema
o Leading question, post-event information, post-event discussion
o Confidence of witnesses, changes in memory (impacts)

Body 1:
– AMRC of Loftus and Palmer experiment 1

Body 2:
– Loftus and Palmer experiment 1 – evaluation strength – further research supports it
– Loftus and Palmer experiment 2 AMRC
– Introduce evaluation of both Loftus and Palmer experiments

Body 3:
– Further evaluation of both Loftus and Palmer experiments

Body 4:
– Introduce false memories
– Loftus and Pickrell AMRC

Body 5:
– Evaluation of Loftus and Pickrell

Body 6:
– Recall is enhanced – FOR reliability
– Yuille and Cutshall AMRC

Body 7:
– Evaluation of Yuille and Cutshall

Options:
1. Keep everything
2. Remove Loftus & Palmer (body 1, 2, & 3)
3. Remove Loftus & Pickrell
4. Remove the Loftus & Palmer 2nd experiment and replace it with evaluation of the 1st study
(remove body 2 and only have the evaluation of Loftus & Palmer experiment 1)

Conclusion:
– Summarise findings of the studies
– Summarise Evaluation
– Answer question
SCAFFOLDS – EMOTION AND MEMORY

Discuss the influence of emotion on one cognitive process


Introduction:
– Address the question
– Define key terms
o Memory
o Emotions
o Flashbulb memories
– Discuss how emotion may affect one cognitive process
– Amygdala and its role on emotions

Body 1:
– AMRC of Brown and Kulik (1977)
– Evaluation
– Link to question

Body 2:
– AMRC of Sharot et al. (2007)
– Evaluation
– Link to question

Body 3:
– AMRC of Neisser and Harsch (1992)
– Evaluation
– Link to question

Conclusion:
– Summarise findings of the studies
– Summarise the evaluations
– Discuss why it is difficult to know whether flashbulb memory actually exists.
– Link to question
SCAFFOLDS – ETHICS

Introduction
– Define ethical considerations (assume the examiner knows nothing – be concise and
clear)
– Define CLOA (ppt. 1 will help with this)/specific topic of the question e.g. reliability
of cognitive processes (explain what these are)/emotion and cognition
– Explain why ethics are important to consider in CLOA research or topic of the
question e.g. The cognitive level of analysis looks at cognitive processes such as
memory and how this impacts on human behaviour. Because of the sensitive nature of
memory, it is important to consider ethical issues or concerns in order to investigate
the effects of cognitive processes on human behaviour. Within the CLOA ethical
considerations such as ….. are important.

Body
– Define and explain all ethical considerations (there is never one stand-alone ethical
consideration, they are all intertwined and linked together)
– Pick 2 studies and the 2 key ethical considerations for the study.
– The study will either 1) have considered the ethical issue, 2) not have considered it, or
3) have tried to consider it but not quite succeeded.
– Outline AMRC
– Application of ethics (this is critical thinking – criteria D):
o apply the main ethical consideration to the study
o explain how they did or did not consider it; if they did not consider it, was it
justified? (e.g. deception = lack of informed consent)
o what could potentially happen (e.g. psychological harm, would need to ensure
right to withdraw was upheld)
o what would they need to do to overcome this failure to consider (e.g.
debriefing)?

So you can see that from one study mainly focusing on deception I have incorporated many
other ethical considerations. The application of ethics MUST be detailed, go to town on this!
2nd study AMRC
Application again… different key ethical consideration will result in different application.

Conclusion
– Summarise study and application of ethics for both studies. (whether they took them
into consideration or not)
SCAFFOLDS – RESEARCH METHODS

Introduction
– Key Terms
– How and why research methods are used

Body 1
– Explain research methods 1

Body 2
– AMRC of study

Body 3
– Strengths and Limitations of Method

Body 4
– Explain research methods 2

Body 5
– AMRC of study

Body 6
– Strengths and Limitations of Method

Conclusion
– Summarise strengths and limitations of methods
– Concluding statement → each appropriate for studying different topics
