
Learning

1. The acquisition of knowledge or skill; as, the learning of languages; the learning
of telegraphy.
2. The knowledge or skill received by instruction or study; acquired knowledge or
ideas in any branch of science or literature; erudition; literature; science; as, he is
a man of great learning.

There are two types of possible conditioning:

1) Classical conditioning, where the behavior becomes a reflex response to a stimulus, as in
the case of Pavlov's Dogs. Pavlov was studying reflexes when he noticed that the dogs
drooled without the proper stimulus: although no food was in sight, their saliva
still dribbled. It turned out that the dogs were reacting to lab coats. Every time the dogs
were served food, the person who served the food was wearing a lab coat. Therefore, the
dogs reacted as if food was on its way whenever they saw a lab coat. In a series of
experiments, Pavlov then tried to figure out how these phenomena were linked. For
example, he struck a bell when the dogs were fed. If the bell was sounded in close
association with their meal, the dogs learned to associate the sound of the bell with food.
After a while, at the mere sound of the bell, they responded by drooling.

2) Operant conditioning, where the behavior is reinforced by a reward or a
punishment. The theory of operant conditioning was developed by B.F. Skinner and is
known as Radical Behaviorism. The word ‘operant’ refers to the way in which behavior
‘operates on the environment’. Briefly, a behavior may result either in reinforcement,
which increases the likelihood of the behavior recurring, or punishment, which decreases
the likelihood of the behavior recurring. It is important to note that a consequence is not
considered a punishment if it does not result in a reduction of the behavior; the terms
punishment and reinforcement are thus determined by their actual effects. Within
this framework, behaviorists are particularly interested in measurable changes in
behavior.

Educational approaches such as applied behavior analysis, curriculum-based
measurement, and direct instruction have emerged from this model.

The earliest challenge to the behaviorists came in a publication in 1929 by Bode, a gestalt
psychologist. He criticized behaviorists for being too dependent on overt behavior to
explain learning. Gestalt psychologists proposed looking at the patterns rather than
isolated events. Gestalt views of learning have been incorporated into what have come to
be labeled cognitive theories. Two key assumptions underlie this cognitive approach: (1)
that the memory system is an active organized processor of information and (2) that prior
knowledge plays an important role in learning. Cognitive theories look beyond behavior
to explain brain-based learning. Cognitivists consider how human memory works to
promote learning. For example, the physiological processes of sorting and encoding
information and events into short term memory and long term memory are important to
educators working under the cognitive theory. The major difference between gestaltists
and behaviorists is the locus of control over the learning activity. For gestaltists, it lies
with the individual learner; for behaviorists, it lies with the environment.

Once memory theories like the Atkinson-Shiffrin memory model and Baddeley's working
memory model were established as a theoretical framework in cognitive psychology, new
cognitive frameworks of learning began to emerge during the 1970s, 80s, and 90s. Today,
researchers are concentrating on topics like cognitive load and information processing
theory. These theories of learning are very useful as they guide instructional design.
Aspects of cognitivism can be found in learning how to learn, social role
acquisition, intelligence, learning, and memory as related to age.

Classical Conditioning (Pavlov)


Summary: Classical conditioning is a reflexive or automatic type of learning in which a
stimulus acquires the capacity to evoke a response that was originally evoked by another
stimulus.

Several types of learning exist. The most basic form is associative learning, i.e., making
a new association between events in the environment. There are two forms of associative
learning: classical conditioning (made famous by Ivan Pavlov’s experiments with dogs)
and operant conditioning.

Pavlov’s Dogs

In the early twentieth century, Russian physiologist Ivan Pavlov did Nobel prize-winning
work on digestion. While studying the role of saliva in dogs’ digestive processes, he
stumbled upon a phenomenon he labeled “psychic reflexes.” While an accidental
discovery, he had the foresight to see its importance. Pavlov's dogs, restrained in an
experimental chamber, were presented with meat powder and had their saliva
collected via a surgically implanted tube in their saliva glands. Over time, he noticed that
his dogs began salivating before the meat powder was even presented, whether it was
at the presence of the handler or merely at the clicking noise produced by the device that
distributed the meat powder.

Fascinated by this finding, Pavlov paired the meat powder with various stimuli such as
the ringing of a bell. After the meat powder and bell (auditory stimulus) were presented
together several times, the bell was used alone. Pavlov’s dogs, as predicted, responded by
salivating to the sound of the bell (without the food). The bell began as a neutral stimulus
(i.e. the bell itself did not produce the dogs’ salivation). However, by pairing the bell with
the stimulus that did produce the salivation response, the bell was able to acquire the
ability to trigger the salivation response. Pavlov therefore demonstrated how stimulus-
response bonds (which some consider as the basic building blocks of learning) are
formed. He dedicated much of the rest of his career to further exploring this finding.
In technical terms, the meat powder is considered an unconditioned stimulus (UCS) and
the dog’s salivation is the unconditioned response (UCR). The bell is a neutral stimulus
until the dog learns to associate the bell with food. Then the bell becomes a conditioned
stimulus (CS) which produces the conditioned response (CR) of salivation after repeated
pairings between the bell and food.

Social Learning Theory (Bandura)

People learn through observing others’ behavior, attitudes, and outcomes of those
behaviors. “Most human behavior is learned observationally through modeling: from
observing others, one forms an idea of how new behaviors are performed, and on later
occasions this coded information serves as a guide for action.” (Bandura). Social learning
theory explains human behavior in terms of continuous reciprocal interaction between
cognitive, behavioral, and environmental influences.

Necessary conditions for effective modeling:

1. Attention — various factors increase or decrease the amount of attention paid.
Includes distinctiveness, affective valence, prevalence, complexity, functional
value. One's characteristics (e.g. sensory capacities, arousal level, perceptual set,
past reinforcement) affect attention.
2. Retention — remembering what you paid attention to. Includes symbolic coding,
mental images, cognitive organization, symbolic rehearsal, motor rehearsal.
3. Reproduction — reproducing the image. Includes physical capabilities and self-
observation of reproduction.
4. Motivation — having a good reason to imitate. Includes motives such as past
(i.e. traditional behaviorism), promised (imagined incentives) and vicarious
(seeing and recalling the reinforced model).

Bandura believed in “reciprocal determinism”: the world and a person's behavior
cause each other. Behaviorism essentially states that one's environment causes
one's behavior; Bandura, who was studying adolescent aggression, found this too
simplistic, and so he suggested that behavior causes environment as well.
Bandura later came to consider personality as an interaction between three components:
the environment, behavior, and one's psychological processes (one's ability to entertain
images in mind and language).

Social learning theory has sometimes been called a bridge between behaviorist and
cognitive learning theories because it encompasses attention, memory, and motivation.
The theory is related to Vygotsky’s Social Development Theory and Lave’s Situated
Learning, which also emphasize the importance of social learning.

The four orientations can be summed up in the following table:

Four orientations to learning (after Merriam and Caffarella 1991: 138)


Learning theorists
  Behaviourist: Thorndike, Pavlov, Watson, Guthrie, Hull, Tolman, Skinner
  Cognitivist: Koffka, Kohler, Lewin, Piaget, Ausubel, Bruner, Gagne
  Humanist: Maslow, Rogers
  Social and situational: Bandura, Lave and Wenger, Salomon

View of the learning process
  Behaviourist: Change in behaviour
  Cognitivist: Internal mental process (including insight, information processing,
  memory, perception)
  Humanist: A personal act to fulfil potential
  Social and situational: Interaction/observation in social contexts; movement from
  the periphery to the centre of a community of practice

Locus of learning
  Behaviourist: Stimuli in external environment
  Cognitivist: Internal cognitive structuring
  Humanist: Affective and cognitive needs
  Social and situational: Learning is in the relationship between people and environment

Purpose in education
  Behaviourist: Produce behavioural change in desired direction
  Cognitivist: Develop capacity and skills to learn better
  Humanist: Become self-actualized, autonomous
  Social and situational: Full participation in communities of practice and
  utilization of resources

Educator's role
  Behaviourist: Arranges environment to elicit desired response
  Cognitivist: Structures content of learning activity
  Humanist: Facilitates development of the whole person
  Social and situational: Works to establish communities of practice in which
  conversation and participation can occur

Manifestations in adult learning
  Behaviourist: Behavioural objectives; competency-based education; skill
  development and training
  Cognitivist: Cognitive development; intelligence, learning and memory as function
  of age; learning how to learn
  Humanist: Andragogy; self-directed learning
  Social and situational: Socialization; social participation; associationalism;
  conversation

As can be seen from the above schematic presentation and the discussion on the linked
pages, these approaches involve contrasting ideas as to the purpose and process of
learning and education - and the role that educators may take. It is also important to
recognize that the theories may apply to different sectors of the acquisition-formalized
learning continuum outlined above. For example, the work of Lave and Wenger is
broadly a form of acquisition learning that can involve some more formal interludes.

Cognitive Learning Theory


Edward Tolman proposed a theory that had a cognitive flair. He was a
behaviorist but valued internal mental phenomena in his explanations of how
learning occurs.

Some of his central ideas were:

Behavior should be studied at a molar level.

Learning can occur without reinforcement.

Learning can occur without a change in behavior.

Intervening variables must be considered.

Behavior is purposive.

Expectations affect behavior.

Learning results in an organized body of information.


Based on his research with rats, Tolman proposed that rats and other organisms
develop cognitive maps of their environments. They learn where different parts
of the environment are situated in relation to one another. The concept of a
cognitive map, also called a mental map, has continued to be a focus of research.

Cognitivism is currently the predominant perspective within which human learning is
described and explained. Contemporary cognitivism emphasizes mental processes and
proposes that many aspects of learning may be unique to the human species. Cognitivism
has affected educational theory by emphasizing the role of the teacher in terms of the
instructor's effectiveness of presentation of instructional material in a manner that
facilitates students' learning (e.g., helping students to review and connect previous
learning on a topic before moving to new ideas about that topic, helping students
understand the material by organizing it effectively, understanding differences in
students' learning styles, etc.).

Classical conditioning:

is the process of reflex learning—investigated by Pavlov—through which an
unconditioned stimulus (e.g. food) which produces an unconditioned response
(salivation) is presented together with a conditioned stimulus (a bell), such that the
salivation is eventually produced on the presentation of the conditioned stimulus
alone, thus becoming a conditioned response.

This is a disciplined account of our common-sense experience of learning by
association (or "contiguity", in the jargon), although that is often much more
complex than a reflex process, and is much exploited in advertising. Note that it does
not depend on us doing anything.

Such associations can be chained and generalised (for better or for worse): thus
"smell of baking" associates with "kitchen at home in childhood" associates with
"love and care". (Smell creates potent conditioning because of the way it is perceived
by the brain.) But "sitting at a desk" associates with "classroom at school" and hence
perhaps with "humiliation and failure"...

Watson took these ideas further, beyond Pavlov, most famously in the "Little Albert" experiment.

Operant Conditioning

If, when an organism emits a behaviour (does something), the consequences of that
behaviour are reinforcing, it is more likely to emit (do) it again. What counts as
reinforcement, of course, is based on the evidence of the repeated behaviour, which
makes the whole argument rather circular.

Learning is really about the increased probability of a behaviour based on
reinforcement which has taken place in the past, so that the antecedents of the
new behaviour include the consequences of previous behaviour. Operant
conditioning is the use of consequences to modify the occurrence and form of behavior.
Operant conditioning is distinguished from classical conditioning (also called respondent
conditioning, or Pavlovian conditioning) in that operant conditioning deals with the
modification of "voluntary behavior" or operant behavior. Operant behavior "operates"
on the environment and is maintained by its consequences, while classical conditioning
deals with the conditioning of respondent behaviors which are elicited by antecedent
conditions. Behaviors conditioned via a classical conditioning procedure are not
maintained by consequences.[1] The main dependent variable is the rate of response that is
developed over a period of time. New operant responses can be further developed and
shaped by reinforcing close approximations of the desired response.

General Principles

There are 4 major techniques or methods used in operant conditioning. They result from
combining the two major purposes of operant conditioning (increasing or decreasing the
probability that a specific behavior will occur in the future), the types of stimuli used
(positive/pleasant or negative/aversive), and the action taken (adding or removing the
stimulus).

Outcome of Conditioning

                      Increase Behavior         Decrease Behavior
  Positive Stimulus   Positive Reinforcement    Response Cost
                      (add stimulus)            (remove stimulus)
  Negative Stimulus   Negative Reinforcement    Punishment
                      (remove stimulus)         (add stimulus)
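The four techniques are just a lookup from the type of stimulus and the action taken to the name of the technique and its effect on behavior. As a minimal illustrative sketch (the function and dictionary names are invented for this example, not standard terminology):

```python
# Map (stimulus type, action) -> (technique, effect on future behavior),
# following the 2x2 table above. Illustrative sketch only.
OPERANT_TECHNIQUES = {
    ("positive", "add"):    ("Positive Reinforcement", "increase"),
    ("positive", "remove"): ("Response Cost",          "decrease"),
    ("negative", "add"):    ("Punishment",             "decrease"),
    ("negative", "remove"): ("Negative Reinforcement", "increase"),
}

def classify(stimulus: str, action: str) -> str:
    """Name the operant-conditioning technique for a stimulus/action pair."""
    technique, effect = OPERANT_TECHNIQUES[(stimulus, action)]
    return f"{technique}: behavior will {effect}"

print(classify("negative", "remove"))
# Negative Reinforcement: behavior will increase
```

Note that "positive"/"negative" here describe the stimulus, while "add"/"remove" describe the action; the effect on behavior falls out of the combination.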

Schedules of consequences

Stimuli are presented in the environment according to a schedule of which there are two
basic categories: continuous and intermittent. Continuous reinforcement simply means
that the behavior is followed by a consequence each time it occurs. Intermittent schedules
are based either on the passage of time (interval schedules) or the number of correct
responses emitted (ratio schedules). The consequence can be delivered based on the same
amount of passage of time or the same number of correct responses (fixed), or it could be
based on a slightly different amount of time or number of correct responses that vary
around a particular number (variable). This results in four classes of intermittent
schedules. [Note: Continuous reinforcement is actually a specific example of a fixed ratio
schedule with only one response emitted before a consequence occurs.]

1. Fixed interval -- the first correct response after a set amount of time has passed is
reinforced (i.e., a consequence is delivered). The time period required is always the same.

Notice that in the context of positive reinforcement, this schedule produces a scalloping
effect during learning (a dramatic drop-off of responding immediately after
reinforcement). Also notice that relatively few behaviors occur in a 30-minute time
period compared with the ratio schedules.
2. Variable interval -- the first correct response after a varying amount of time has passed
is reinforced. After the reinforcement, a new time period (shorter or longer) is set, with
the average equaling a specific number over the sum total of trials.

Notice that this schedule reduces the scalloping effect and the number of behaviors
observed in the 30-minute time period is slightly increased.

3. Fixed ratio -- a reinforcer is given after a specified number of correct responses. This
schedule is best for learning a new behavior.

Notice that behavior is relatively stable between reinforcements, with a slight delay after
a reinforcement is given. Also notice that the number of behaviors observed during the
30-minute time period is larger than that seen under either of the interval schedules.

4. Variable ratio -- a reinforcer is given after a varying number of correct responses. After
reinforcement, the number of correct responses necessary for reinforcement changes. This
schedule is best for maintaining behavior.
Notice that the number of responses per time period increases as the schedule of
reinforcement is changed from fixed interval to variable interval and from fixed ratio to
variable ratio.
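The four intermittent schedules differ only in how the next reinforcement point is chosen: by a count of responses (ratio) or by elapsed time (interval), and with a fixed or varying target. A minimal sketch (the helper `make_schedule` and its parameters are invented for this example) that decides whether a given correct response earns a reinforcer:

```python
import random

def make_schedule(kind: str, n: int):
    """Return a function response(elapsed) -> bool telling whether the response
    at `elapsed` seconds since the last reinforcer earns reinforcement.
    kind: 'FR', 'VR', 'FI', or 'VI'; n: the ratio or interval parameter."""
    state = {"count": 0, "target": n}

    def response(elapsed: float) -> bool:
        if kind == "FR":   # fixed ratio: every n-th response is reinforced
            state["count"] += 1
            if state["count"] >= n:
                state["count"] = 0
                return True
            return False
        if kind == "VR":   # variable ratio: on average every n responses
            state["count"] += 1
            if state["count"] >= state["target"]:
                state["count"] = 0
                state["target"] = random.randint(1, 2 * n - 1)  # mean ~ n
                return True
            return False
        if kind == "FI":   # fixed interval: first response after n seconds
            return elapsed >= n
        if kind == "VI":   # variable interval: first response after a varying delay
            if elapsed >= state["target"]:
                state["target"] = random.uniform(0, 2 * n)      # mean ~ n
                return True
            return False
        raise ValueError(kind)

    return response

fr5 = make_schedule("FR", 5)
print([fr5(0) for _ in range(10)])
# [False, False, False, False, True, False, False, False, False, True]
```

Resetting the elapsed-time clock after each reinforcer is left to the caller; the point is only that the four schedules share one decision shape and differ in the target rule.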
In summary, the schedules of consequences are often called schedules of reinforcement
because there is only one schedule that is appropriate for administering response cost and
punishment: continuous (i.e., a fixed ratio of one). In fact, certainty of the application of a
consequence is the most important aspect of using response cost and punishment.
Learners must know, without a doubt, that an undesired or inappropriate target behavior
will be followed by removal of a positive/pleasant stimulus or the addition of a
negative/aversive stimulus. Using an intermittent schedule when one is attempting to
reduce a behavior may actually lead to a strengthening of the behavior, certainly an
unwanted end result.

Premack Principle

The Premack Principle, often called "grandma's rule," states that a high frequency activity
can be used to reinforce low frequency behavior. Access to the preferred activity is
contingent on completing the low-frequency behavior. The high frequency behavior to
use as a reinforcer can be determined by:

1. asking students what they would like to do;
2. observing students during their free time; or
3. determining what might be expected behavior for a particular age group.
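The Premack contingency can be sketched as a simple gate: access to the preferred, high-frequency activity is granted only once the low-frequency behavior is complete. The function and message strings below are hypothetical, purely for illustration:

```python
def premack_gate(low_frequency_done: bool, preferred_activity: str) -> str:
    """Grandma's rule: first the low-frequency task, then the preferred activity."""
    if low_frequency_done:
        return f"Access granted: {preferred_activity}"
    return "Finish the assigned task first"

print(premack_gate(False, "free reading time"))  # Finish the assigned task first
print(premack_gate(True, "free reading time"))   # Access granted: free reading time
```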

Analyzing Examples of Operant Conditioning

There are five basic processes in operant conditioning: positive and negative
reinforcement strengthen behavior; punishment, response cost, and extinction weaken
behavior.

1. Positive Reinforcement--the term reinforcement always indicates a
process that strengthens a behavior; the word positive has two cues
associated with it. First, a positive or pleasant stimulus is used in the
process, and second, the reinforcer is added (i.e., "positive" as in a "+" sign for
addition). In positive reinforcement, a positive reinforcer is added after a
response and increases the frequency of the response.
2. Negative Reinforcement-- the term reinforcement always
indicates a process that strengthens a behavior; the word negative has two
cues associated with it. First, a negative or aversive stimulus is used in the
process, and second, the reinforcer is subtracted (i.e., "negative" as in a "-"
sign for subtraction). In negative reinforcement, after the response the
negative reinforcer is removed which increases the frequency of the
response. (Note: There are two types of negative reinforcement: escape
and avoidance. In general, the learner must first learn to escape before he
or she learns to avoid.)
3. Response Cost--if positive reinforcement strengthens a response
by adding a positive stimulus, then response cost has to weaken a behavior
by subtracting a positive stimulus. After the response the positive
reinforcer is removed which weakens the frequency of the response.
4. Punishment--if negative reinforcement strengthens a behavior by
subtracting a negative stimulus, then punishment has to weaken a behavior
by adding a negative stimulus. After a response a negative or aversive
stimulus is added which weakens the frequency of the response.
5. Extinction--No longer reinforcing a previously reinforced
response (using either positive or negative reinforcement) results in the
weakening of the frequency of the response.

The Learning Cycle


Kolb's learning styles can also be used as a cycle whereby we learn. This is different
from the styles themselves, which describe the static positions that people prefer on
the cycle.

Experiencing -> Reflecting -> Theorizing -> Experimenting -> (back to Experiencing)

Experiencing

First of all, we have an experience. Most experiences are not worth further movement
around the cycle: we are already familiar with them, they need no further
interpretation, and hence there is no need for learning.

Reflecting

Having experienced something which does not fit well into our current system of
understanding, we then have to stop and think harder about what it really means. This
reflection is typically a series of attempts to fit the experience to memories and our
internal models (or schemata).

Reflecting on new experiences is first a process of explaining, as we try to use our
existing models to make sense of our experience. When we cannot fully explain what
happened because it does not fit in with existing models, reflecting also includes
confusion.

If we can explain what happened, then the cycle stops here as there is nothing to learn.
Much of life is like this. Many of us also avoid going past this stage as we fake and fix
our experiences so we do not have to go through the pain of learning.

Theorizing

When we find that we cannot fit what we have experienced into any of our memories or
internal models, then we have to build new models. This theorizing gives us a possible
answer to our puzzling experiences.

For some people, this is a wonderful stage as they consider all kinds of possibilities. For
others, it is a struggle as they try to make sense of the senseless.

Experimenting

After building a theoretical model, the next step is to prove it in practice, either in 'real
time' or by deliberate experimentation in some safe arena. Again, this can be enjoyable
or worrisome, depending on the individual personality and perspective.

If the model does not work, then we go through reflection-theorizing the loop again,
figuring out what happened and either adjusting the model or building a new one.

Learning Styles

David Kolb has defined one of the most commonly used models of learning. As in the
diagram below, it is based on two preference dimensions, giving four different styles of
learning.

                        Concrete Experience
        ACCOMMODATORS            |            DIVERGERS
                                 | Perception
Active Experimentation <---- Processing ----> Reflective Observation
                                 |
        CONVERGERS               |            ASSIMILATORS
                        Abstract Conceptualization

Preference dimensions
Perception dimension

In the vertical Perception dimension, people will have a preference along the continuum
between:

• Concrete experience: Looking at things as they are, without any
change, in raw detail.
• Abstract conceptualization: Looking at things as concepts and ideas,
after a degree of processing that turns the raw detail into an internal model.

People who prefer concrete experience will argue that thinking about something
changes it, and that direct empirical data is essential. Those who prefer abstraction will
argue that meaning is created only after internal processing and that idealism is a more
real approach.

This spectrum is very similar to the Jungian scale of Sensing vs. Intuiting.

Processing dimension

In the horizontal Processing dimension, people will take the results of their Perception
and process it in preferred ways along the continuum between:

• Active experimentation: Taking what they have concluded and trying
it out to prove that it works.
• Reflective observation: Taking what they have concluded and
watching to see if it works.

Four learning styles

The experimenter, like the concrete experiencer, takes a hands-on route to see if their
ideas will work, whilst the reflective observers prefer to watch and think to work things
out.
Divergers (Concrete experiencer/Reflective observer)

Divergers take experiences and think deeply about them, thus diverging from a single
experience to multiple possibilities in terms of what this might mean. They like to ask
'why', and will start from detail to constructively work up to the big picture.

They enjoy participating and working with others but they like a calm ship and fret over
conflicts. They are generally influenced by other people and like to receive constructive
feedback.

They like to learn via logical instruction or hands-on exploration with conversations
that lead to discovery.

Convergers (Abstract conceptualization/Active experimenter)

Convergers think about things and then try out their ideas to see if they work in
practice. They like to ask 'how' about a situation, understanding how things work in
practice. They like facts and will seek to make things efficient by making small and
careful changes.

They prefer to work by themselves, thinking carefully and acting independently. They
learn through interaction and computer-based learning is more effective with them than
other methods.

Accommodators (Concrete experiencer/Active experimenter)

Accommodators have the most hands-on approach, with a strong preference for doing
rather than thinking. They like to ask 'what if?' and 'why not?' to support their action-
first approach. They do not like routine and will take creative risks to see what happens.

They like to explore complexity by direct interaction and learn better by themselves
than with other people. As might be expected, they like hands-on and practical learning
rather than lectures.

Assimilators (Abstract conceptualizer/Reflective observer)

Assimilators have the most cognitive approach, preferring to think rather than to act.
They ask 'What is there I can know?' and like organized and structured understanding.

They prefer lectures for learning, with demonstrations where possible, and will respect
the knowledge of experts. They will also learn through conversation that takes a logical
and thoughtful approach.

They often have a strong control need and prefer the clean and simple predictability of
internal models to external messiness.
The best way to teach an assimilator is with lectures that start from high-level concepts
and work down to the detail. Give them reading material, especially academic stuff, and
they'll gobble it down. Do not teach through play with them, as they like to stay serious.

Classical Conditioning

This is learning by association. A Russian physiologist called Ivan Pavlov studied salivation in
dogs as part of his research programme. Normally, dogs will salivate when food is
presented, but Pavlov was interested in why the dogs had started to salivate when they saw the
people that usually fed them (they also responded to the sound of the dishes being used for their
meals). Pavlov set up an experiment to find out if the dogs could be trained to salivate at other
stimuli such as the sound of a bell or a light. At feeding times, Pavlov would ring a bell and the
amount of saliva produced by the dog was measured. After several 'trials', Pavlov rang the bell
without presenting the food and found that the dogs salivated in the same way as if food was
being presented.

You will note that the conditioned response is the same as the unconditioned response; the only
difference is that the response is evoked by a different stimulus.

The Classical Conditioning Procedure:

In scientific terms, the procedure for this is as follows.

1. Food is the unconditioned stimulus, or UCS. By this, Pavlov meant that the stimulus that
elicited the response occurred naturally.

2. The salivation to the food is an unconditioned response (UCR), that is, a response which
occurs naturally.

3. The bell is the conditioned stimulus (CS) because it will only produce salivation on condition
that it is presented with the food.

4. Salivation to the bell alone is the conditioned response (CR), a response to the conditioned
stimulus.
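The procedure above can be caricatured in code: an association strength between the CS and the UCS grows with each paired trial, and the CS alone elicits the response once that strength passes a threshold. The update rule, learning rate, and threshold below are purely illustrative assumptions, not a model from the text:

```python
class PavlovDog:
    """Toy sketch of classical conditioning. The incremental update rule is
    an illustrative assumption, not Pavlov's own formulation."""

    def __init__(self, threshold: float = 0.5, rate: float = 0.2):
        self.association = 0.0   # strength of the bell -> food link
        self.threshold = threshold
        self.rate = rate

    def trial(self, bell: bool, food: bool) -> bool:
        """Run one trial; return True if the dog salivates."""
        # UCS (food) always elicits the UCR; the CS (bell) only does so
        # once the association is strong enough (the CR).
        salivates = food or (bell and self.association >= self.threshold)
        if bell and food:    # each pairing strengthens the association
            self.association += self.rate * (1.0 - self.association)
        return salivates

dog = PavlovDog()
print(dog.trial(bell=True, food=False))   # False: bell is still a neutral stimulus
for _ in range(5):
    dog.trial(bell=True, food=True)       # repeated pairings
print(dog.trial(bell=True, food=False))   # True: bell alone now elicits the CR
```

Extinction also falls out of a model like this if repeated unpaired bell trials are made to weaken the association, though that step is omitted here for brevity.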

In Positive Reinforcement a particular behavior is strengthened by the 
consequence of experiencing a positive condition. For example: 
A hungry rat presses a bar in its cage and receives food. The food is a positive condition for the 
hungry rat. The rat presses the bar again, and again receives food. The rat's behavior of pressing 
the bar is strengthened by the consequence of receiving food.
In Negative Reinforcement a particular behavior is strengthened by the 
consequence of stopping or avoiding a negative condition. For example: 
A rat is placed in a cage and immediately receives a mild electrical shock on its feet. The shock is 
a negative condition for the rat. The rat presses a bar and the shock stops. The rat receives 
another shock, presses the bar again, and again the shock stops. The rat's behavior of pressing the 
bar is strengthened by the consequence of stopping the shock.
In Punishment a particular behavior is weakened by the consequence of 
experiencing a negative condition. For example: 
A rat presses a bar in its cage and receives a mild electrical shock on its feet. The shock is a 
negative condition for the rat. The rat presses the bar again and again receives a shock. The rat's 
behavior of pressing the bar is weakened by the consequence of receiving a shock.
In Extinction a particular behavior is weakened by the consequence of not 
experiencing a positive condition or stopping a negative condition. For example: 
A rat presses a bar in its cage and nothing happens. Neither a positive nor a negative condition 
exists for the rat. The rat presses the bar again and again nothing happens. The rat's behavior of 
pressing the bar is weakened by the consequence of not experiencing anything positive or 
stopping anything negative.

Reinforcement is the specialist term in operant conditioning for the ‘stamping-in’ of
stimulus associations and response habits that follows the experience of reward. Such
reinforcement is defined to occur when an event following a response causes an increase
reinforcement is defined to occur when an event following a response causes an increase
in the probability of that response occurring in the future. The response strength of a
reinforced behavior is assessed by such measures as the frequency with which the
response is made. Examples of this include the number of times a pigeon pecks a key in a
session, or the speed with which a rat runs a maze.

A reinforcer is the environmental change that happens contingent upon the behavioral
response.

Schedules of Reinforcement
In operant conditioning, schedules of reinforcement are an important component of the
learning process. When and how often we reinforce a behavior can have a dramatic impact on
the strength and rate of the response. Certain schedules of reinforcement may be more
effective in specific situations. There are two types of reinforcement schedules:

1. Continuous Reinforcement

In continuous reinforcement, the desired behavior is reinforced every single time it occurs.
Generally, this schedule is best used during the initial stages of learning in order to create a
strong association between the behavior and the response. Once the response is firmly
established, reinforcement is usually switched to a partial reinforcement schedule.

2. Partial Reinforcement

In partial reinforcement, the response is reinforced only part of the time. Learned behaviors
are acquired more slowly with partial reinforcement, but the response is more resistant to
extinction. There are four schedules of partial reinforcement:

1. Fixed-ratio schedules are those where a response is reinforced only after a specified
number of responses. This schedule produces a high, steady rate of responding with only
a brief pause after the delivery of the reinforcer.
2. Variable-ratio schedules occur when a response is reinforced after an unpredictable
number of responses. This schedule creates a high steady rate of responding. Gambling
and lottery games are good examples of a reward based on a variable ratio schedule.
3. Fixed-interval schedules are those where the first response is rewarded only after a
specified amount of time has elapsed. This schedule causes high amounts of responding
near the end of the interval, but much slower responding immediately after the delivery
of the reinforcer.
4. Variable-interval schedules occur when a response is rewarded after an
unpredictable amount of time has passed. This schedule produces a slow, steady rate of
response.
