Part Four: Applied Behavior Analysis

• Applied Behavior Analysis


• Behavioral Analysis
• Baselining
• Self-Monitoring as a Method of Behavior Change
• Using Reinforcement
• Using the Premack Principle
• Natural Social Reinforcers
• Secondary Reinforcement Using Tokens
• More about Shaping
• Prompting and Fading
• Differential Reinforcement
• DRO and DRL
• Using Negative Reinforcement
• Using Punishment
• Treatment of Uncontrollable Sneezing
• Punishment from the Environment
• The Punishment Trap
• Response Cost with a Rabbit
• Applied Analysis of Antecedents
• Summary: Applied Behavior Analysis

Applied Behavior Analysis


In preceding sections of this chapter you have been introduced to the main tools of the applied behavior
analyst, a behavioral psychologist who specializes in operant conditioning. There are two main tools: (1)
systematic arrangement of consequences (reinforcement and punishment) and (2) careful analysis and
arrangement of antecedents (S+ and S-). Together, these skills can be called contingency management. A
contingency is a dependency between events, such as the delivery of food when an animal performs a
behavior. Contingency management is used whenever people or animals are motivated by incentives (such as getting paid for a job) or penalties (such as paying a fine for doing something wrong).
Applied behavior analysis is the application of conditioning principles to any tasks or problems outside
the laboratory. We already discussed applications of classical conditioning in an earlier section of this
chapter. In this section we will concentrate on applications of operant conditioning.
How did a professor start a class on applied behavior analysis, and what was the point?
One professor started a graduate class on applied behavior analysis by generating a huge list of problems.
The professor told the students, "Think of any problem a person can have, and we will design a
behavioral therapy for it. Let's get a big list on the board."
At first the class responded slowly. Someone suggested "marriage problems" and the professor said, "Put
it in terms of behaviors." "OK," said the student, "let's say the problem is not enough time spent together."
The professor wrote that on the board. Another student suggested the problem of eliminating "writer's
block," defined behaviorally as the problem of increasing the frequency of certain types of writing.
Another student suggested the problem of eliminating unwanted involuntary movements (tics). Other
students mentioned problems like nailbiting and quitting cigarettes.
How are problems defined in behavioral terms?
As the list grew, the students realized this process could take quite a while. The list of possible human
problems is never-ending. Most can be defined in behavioral terms. In other words, most problems can be
described in terms of some observable, measurable activity (behavior) that must be made either more
frequent or less frequent. That is the essence of the behavioral approach: problems are defined as
behaviors that can be made more frequent or less frequent, and the therapy consists of applying
conditioning techniques to make the desired changes.
After the list filled the board, the professor gave the class its assignment. Each student had to select a
problem and, by the end of the term, design a behavioral treatment for it. The professor was making a
point, not just an assignment. Behavioral approaches can be applied to virtually any conceivable problem.

Behavioral Analysis
The first step in applied behavior analysis is to analyze the problem. The analysis must be behavioral; that
is, one must state the problem in terms of behaviors, controlling stimuli, reinforcers, punishers, or
observational learning...the concepts we have covered in this chapter. Antecedent and consequent stimuli
must be identified. After this analysis, one can make an educated guess about which intervention strategy
or "treatment" might be best.

Lindsley's Simplified Precision Model


Green and Morrow (1974) offered a convenient, four-step guide to successful behavior change.
Developed by Ogden Lindsley, it is called Lindsley's Simplified Precision Model.
What is Lindsley's Simplified Precision Model?
1. Pinpoint the target behavior to be modified.
2. Record the rate of that behavior.
3. Change consequences of the behavior.
4. If the first try does not succeed, try and try again with revised procedures.
The first step in Lindsley's list was to pinpoint the behavior to be modified. This is often the most crucial
step. If you cannot specify a behavior, how can you modify it?
What "heroic" efforts are exemplified by the list of speech problems?
Behavior modifiers and therapists sometimes go to heroic lengths to identify specific, modifiable
behaviors. For example, a team of behavior therapists at a speech clinic came up with 49 different,
specific verbal problems. The checklist included the following:
1. Overtalk. A person speaks considerably more than is appropriate.
2. Undertalk. A person doesn't speak enough.
3. Fast talk. A person talks too fast.
4. Slow talk. A person speaks too slowly.
5. Loud talk. A person speaks too loudly.
6. Quiet talk. A person speaks too softly.
7. Singsong speech. A person talks as if singing or chanting.
8. Monotone speech. A person speaks with unvarying tone.
9. Rapid latency. A person speaks too quickly after someone else.
10. Slow latency. A person responds only very slowly.
11. Affective talk. A person talks with great emotion, crying, whining, screaming, or speaking with a
shaky voice.
12. Unaffective talk. A person speaks in a flat, unemotional voice even when emotion is appropriate.
13. Obtrusions. A person too often "butts in" to conversation.


...and the list goes on, up to #49, which is "illogical talk."
(Adapted from Thomas, Walter & O'Flaherty, 1974, pp. 248-251.)
What happened when a client first entered the speech clinic? What happened once the problem was
specified?
When a client first entered the speech clinic, the therapists checked off which behaviors defined the
client's problem. Each category specifies a type of behavior: something that can be recognized, singled
out for reinforcement, extinction, or punishment. If a person clearly alternates between logical and
illogical talk, it is possible to reinforce one and extinguish the other. Once the problem is specified in
terms of something that can be measured or detected, a behavior, then a therapist can attempt to change
the antecedents or consequences of the behavior, to alter the frequency of the behavior.

Baselining
The next step, after specifying a behavior to be changed, is to measure its frequency of occurrence before
attempting to change it. Baselining is keeping a careful record of how often a behavior occurs, without
trying to alter it. The purpose of baselining is to establish a point of reference, so one knows later if the
attempt to change behavior had any effect. A genuine behavior change (as opposed to a random variation
in the frequency of a behavior) should stand out sharply from the baseline rate of the behavior.
What is baselining? What is its purpose? How long should baselining continue, as a rule?
As a general rule, baselining should continue until there is a definite pattern. If the frequency of the
behavior varies a lot, baseline observations should continue for a long time. If the behavior is produced at
a steady rate, the baseline observation period can be short.
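For readers who like procedures stated exactly, this stability rule can be sketched in a few lines of Python. It is a minimal illustration, not part of the original text; the seven-day minimum and the 20 percent variability cutoff are invented thresholds, not established standards.

```python
# Hypothetical sketch: deciding whether a baseline record is stable.
# Assumption (not from the text): stop baselining when day-to-day
# variability is small relative to the average rate.
from statistics import mean, stdev

def baseline_is_stable(daily_counts, min_days=7, max_cv=0.20):
    """Return True when there are enough observations and the
    coefficient of variation (stdev / mean) falls below max_cv."""
    if len(daily_counts) < min_days:
        return False                    # keep observing
    avg = mean(daily_counts)
    if avg == 0:
        return True                     # behavior never occurs at all
    return stdev(daily_counts) / avg <= max_cv

# A steady behavior needs only a short baseline period...
print(baseline_is_stable([12, 11, 13, 12, 12, 11, 13]))  # True
# ...a highly variable behavior needs a longer one.
print(baseline_is_stable([2, 19, 7, 0, 14, 3, 22]))      # False
```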
While taking baseline measurements of an operant rate (the frequency of some behavior), an applied
behavior analyst should pay careful attention to antecedents, the stimuli that come before the behavior. As
we saw earlier, discriminative stimuli (both S+s and S-s) act as if they control behavior, turning it on or
off.
In what important respect was Lindsley's model incomplete?
During the baseline period, one should keep a record of potentially important antecedent stimuli, as well
as the record of the frequency of the behavior itself. One weakness of Lindsley's Simplified Precision
Model (previous page) was that it did not mention antecedents. It only mentioned changing consequences
of a behavior. Often the relevance of antecedents will be obvious, once they are noticed. For example, if a
child's misbehavior occurs every day when the child is dropped off at a nursery school, a good behavior
analyst will target this period of the day and try to arrange for a reinforcing event to occur if the child
remains calm following the departure of the parent at such a time.

Self-Monitoring as a Method of Behavior Change


Sometimes behavior will change during a baseline observation period, due to the measurement itself.
Although old-time behaviorists never would have used this language, the likely explanation is that a
person becomes more conscious of the behavior and starts to alter it when it is measured. Measurement of
one's own behavior is called self-monitoring, and it can be an effective tool for behavior change. For
example, many people wish to lose weight, but few people keep detailed records of their calorie intake.
People who keep a record of every calorie consumed often find that they lose weight as a result. The act
of recording data focuses attention on each bite of food and its consequences, and this (or the fear of
having bad eating habits exposed) motivates a person to eat less.
What is self-monitoring? What sorts of problems respond well to self-monitoring?
Self-monitoring often works especially well with impulsive habits, like snacking, cigarette smoking, or
TV-watching. These are all things a person may start to do "without thinking." Of course, to really work,
self-monitoring must be done honestly without cheating. It forces a person to think about every
occurrence of a behavior. It also draws attention to the consequences of behavior. "If I eat this, I am over
my limit," or "If I start watching TV I won't get my homework done." Self-monitoring, as a behavior
change procedure, lacks any specially arranged reinforcement or punishment, but it forces attention to
natural reinforcements and punishments.

Using Reinforcement
In some cases, mere measurement and informative feedback are not enough. A direct form of intervention
is required. Therefore the third part of Lindsley's method consists of arranging a contingency involving
reinforcement or punishment.
Green and Morrow (1974) offer the following example of Lindsley's Simplified Precision Model in
action.
Jay, a twenty-year-old spastic, retarded man urinated in his clothes. A physician had ruled out any organic
basis for this behavior. Recently Jay was transferred from a state hospital to a nursing home.... The
nursing home director said that Jay must leave if he continued to be enuretic. He agreed, with reservation,
to let [a student] try a program to eliminate the wetting behavior.
Conveniently, the nurses had been routinely recording the number of times Jay wet his clothes each
day....Jay's daily rate of wets was...plotted on a standardized graph form...
Questionable punishment procedures, insisted upon by the nursing home director, were used in the first
two modification phases. First, Jay was left wet for thirty minutes following each wet. Second, Jay was
left in his room for the remainder of the day after he wet once. Throughout both punishment phases the
median rate remained unchanged.
What to do? Lindsley's step #4 is "if at first you don't succeed, try and try again with revised procedures."
That is a rather strange sounding rule, but it proves important. Behavior analysts are not automatically
correct in their first assessment of a situation. The first attempt at behavior change may not work. Like
other scientists, they must guess and test and try again. In the case of Jay, behavior analysts decided to
stop the "questionable punishment procedures" and try positive reinforcement instead.
How does the story of "Jay" illustrate step #4 of Lindsley's procedure?
In a fourth phase, following consultation by the student with the junior author, and with the nursing home
director's reluctant consent, Jay was given verbal praise and a piece of candy each time he urinated in the
toilet. No punishment was used. Candy and praise were chosen as consequences after discussion with the
nursing home personnel disclosed what Jay seemed to "go for." The procedure essentially eliminated
wetting.
In an "after" phase (after reinforcements were discontinued), the rate remained at zero except for one
lapse. Presumably, approved toilet behavior and nonwetting were now maintained by natural
consequences, such as social approval and the comfort of staying dry. (Green & Morrow, 1974, p. 47)
Note the "after" phase. Proper intervention procedures also include a follow-up to make sure the change is
permanent.

Using the Premack Principle


The search for effective reinforcers sometimes requires creative thinking. Here is a case history in which
the Premack principle was employed with good results. The Premack principle, discussed earlier, is the
idea that preferred or high frequency behaviors can be used to reinforce less preferred, low frequency
behaviors.
How was the Premack principle used to help Burton, the strange 13-year-old?
Case #S-13. Burton had been forced out of school because of his bizarre mannerisms, gestures, and
posturing. It was generally assumed that he was a severely schizophrenic child, albeit a highly intelligent
13-year-old. He acted belligerently toward his parents and was destructive of home property. He had been
known to punish his parents by such behaviors as pouring buckets of water over his head in the middle of
the living room, but his high probability behaviors were to publicly assume a semifetal position, and,
alternately, to lock himself alone in his room for long hours. Reading the homework assigned by his
visiting teacher was low probability behavior. Neither he nor his parents rated any objects or people as
reinforcing. Initially, therefore, the reinforcement of "retiring to his room" was used, contingent upon
completing his homework assignment.

Later, he was returned to a special education classroom. Low probability behavior was classwork, and
high probability was escaping alone to a corner of the schoolyard. A contingency was established in
which Burton was allowed to leave the class after completion of his assignment. Later, school attendance
became a high probability behavior. At that point, he was allowed to attend school only contingent upon
more acceptable behavior at home. (Tharp & Wetzel, 1969, p. 47)

Natural Social Reinforcers


Probably the most commonly used reinforcer in human and animal affairs is natural social reinforcement.
Natural social reinforcement includes all sorts of positive social contact, including (among humans)
smiles, hugs, compliments, or simple attention and company.
What are natural social reinforcers?
Common social reinforcers among non-human animals are attention, touching, grooming, and cleaning.
For example, small fish sometimes linger in the area of larger fish and clean them by eating parasites and
debris from the larger fish. Walcott and Green (1974) showed that this cleaning symbiosis was reinforcing
to the larger fish. In other words, the larger fish would perform a behavior more often when it was
followed by contact with the smaller, cleaning fish.
Among humans, social reinforcers can be ruined if they are perceived to be fake or manipulative. A
primary rule of social reinforcement is "be sincere." If flattery is honest and true, it is a powerful
reinforcer. Perhaps the word flattery is a bad choice. To some people, it implies deceit, as if flattering
someone means buttering them up, not really meaning what you say. The Dale Carnegie course, which
teaches "how to win friends and influence people," says flattery is not recommended as a technique for
winning friends, but appreciation is very effective!
Natural social reinforcement can be useful in professionally planned behavior modification programs. The
following example is from Tharp and Wetzel's book Behavior Modification in the Natural Environment
(1969).
How was natural social reinforcement used with Rena?
Case #50. Rena was referred by her parents who were very concerned about her inappropriate behavior at
school. Rena, an elementary school student, was known throughout the school for her aggressiveness
toward her peers, disruptive classroom behavior, and general defiance. After interviewing her parents, we
discovered that Rena was exhibiting, on a somewhat lesser scale, the same behavior at home...
An intervention plan was set up whereby Rena's teacher could inform the parents each day her behavior
was satisfactory. Since reinforcers at home were so limited, we had to rely on the positive attention her
father could give her when he got home. They would play simple card games or play in the yard skipping
rope, etc. Rena's father had occasionally done this with her, and by making it contingent, this interaction
became very meaningful to her. When Rena's behavior was not satisfactory at school, this reinforcement
did not occur.
The plan took effect rather rapidly, and before long Rena was no problem at school. And, as hoped, her
behavior at home also improved.

Secondary Reinforcement Using Tokens


Secondary reinforcers, you may recall, are learned or symbolic reinforcers. Money is a secondary
reinforcer, because you cannot eat or drink money or get any other primary reinforcement directly from it.
However, you can trade money for primary reinforcers such as food and drink. Secondary reinforcers get
their reinforcing properties from their association with primary reinforcers. Grades are an example. They
are worthless in themselves, but they can lead to primary reinforcers like pride, attention, and the fruits of
employment after graduation.
How are secondary reinforcers used in token economies?
One well-known application of secondary reinforcement is in token economies. Token economies are like
miniature economic systems, using plastic poker chips or similar tokens instead of money. Originally they
were used in mental hospitals, but more recently they have been found useful in institutions serving
learning-disabled individuals. One student writes:
Everyone has a need and a want for food, sleep, and love. These are just a few examples of what is known
as primary reinforcements. Sometimes reinforcements are earned to buy or attain primary reinforcements.
This kind of reinforcement is known as secondary reinforcement.
My sister is mentally retarded, and I can tell when she has been positively reinforced for something she
has done, especially at school. She attends Gordon County Training Center. The teachers there have set
up a system based on tokens. If the students at GCTC do their work well or get along with the other
students, they receive a certain amount of tokens. At the end of each week, the students can go to the
"store" in the school and spend their tokens on something that they want. My sister always comes home
telling us how many tokens she earned and what she spent them on.
She really enjoys getting tokens or any other kind of secondary reinforcement, such as trophies or
ribbons, for her achievements. The secondary reinforcements show her that she is doing something good
and acceptable in the eyes of others. [Author's files]
Why are tokens useful in institutional settings?
Tokens are useful in group settings like a training center for two reasons: (1) the same reinforcer can
be given to everybody, and (2) reinforcement can be given immediately after a behavior. In a treatment
facility, candy might be an effective reinforcer with some people...but not everybody, and different people
like different types of candy. With tokens, you do not need to worry about having the right type of
reinforcer on hand; you can give reinforcements immediately (in tokens) and the patient can "spend" the
tokens later at a store in the hospital.
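The bookkeeping behind a token economy is simple enough to sketch in Python. The following is a hypothetical illustration only; the class name, the student, and the "store" price are invented, not taken from any actual program.

```python
# Hypothetical sketch of a token-economy ledger.
class TokenEconomy:
    def __init__(self):
        self.balances = {}                     # person -> token count

    def reinforce(self, person, tokens=1):
        """Deliver tokens immediately after a desired behavior."""
        self.balances[person] = self.balances.get(person, 0) + tokens

    def spend(self, person, price):
        """Trade tokens for a backup reinforcer at the weekly 'store'."""
        if self.balances.get(person, 0) < price:
            return False                       # not enough tokens yet
        self.balances[person] -= price
        return True

store = TokenEconomy()
store.reinforce("Ann", 3)           # e.g., finished classwork, cooperated
print(store.spend("Ann", price=2))  # True; one token left over
```

The two advantages above show up directly in the sketch: reinforce can be called the instant a behavior occurs, while the choice of backup reinforcer is postponed until the tokens are spent.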

More about Shaping


Before you can reinforce a behavior, the behavior must occur. What if the behavior is not occurring?
Then you must use a technique called shaping, mentioned earlier in connection with teaching a rat to
press a bar in a Skinner Box.
What is the technical name for "shaping"?
Shaping is well described by its technical name: the method of successive approximations. To
approximate something is to get close to it. To do successive approximations is to get closer by small
steps. Shaping works by starting with whatever the organism can already do and reinforcing closer and
closer approximations to a goal.
Here are five simple rules for shaping.
What are five rules to observe, while using shaping?
1. Make sure the target behavior is realistic and biologically possible.
2. Specify the current (entering) behavior and desired (target) behavior.
3. Plan a small chain of behavioral changes leading from the entering behavior to the target behavior.
4. If a step proves too large, break it into smaller, more manageable steps.
5. Use reinforcers in small quantities, to avoid satiation (getting "full").
How are the five rules illustrated by teaching a dog to catch a Frisbee?
To illustrate the five rules, consider the task of teaching a dog to catch a Frisbee. If you have ever seen
dogs catch a Frisbee, you know it is quite impressive. Suppose you want to teach your dog this trick. How
do you do it?
According to rule #1, you have to decide whether your dog is physically capable of such an act. The
national champion Frisbee-catching dogs are usually dogs like whippets with a lean, muscular build
which permits them to leap high into the air to snatch Frisbees out of the breezes. Other breeds—bulldogs,
Pekinese, and dachshunds—might be less able to learn this skill.
Suppose you have a dog that is physically capable of catching a Frisbee. Rule #2 says "specify the current
(entering) behavior." This must be a behavior the dog can already perform. It should be a behavior that
can be transformed, in small steps, into the target behavior. Frisbee catching requires that the dog take a
Frisbee into its mouth, so you might start by reinforcing the dog for the entering behavior of holding the
Frisbee in its mouth. Most dogs are capable of doing this without any training. In fact, they will gladly
puncture a Frisbee with their canine teeth, so use a cheap Frisbee you do not mind ruining. The dog enters
the experimental situation with this behavior already in its repertoire. That is why it is called an entering
behavior.
Rule #3 says to devise a series of small steps leading from the entering behavior (holding the Frisbee in
his mouth) to the target behavior (snatching the Frisbee from the air). Finding such a sequence of steps is
the trickiest part of shaping. How can you get from "here" to "there"? One approach is to toss the Frisbee
about a foot in the air toward the dog, hoping it will perform the skill so you can reinforce it.
Unfortunately, this probably will not work. The dog does not know what to do when it sees the Frisbee
coming, even if the dog has chewed on it in the past. It hits the dog on the nose and falls to the ground.
This brings us to rule #4. If a step is too large (such as going directly from chewing the Frisbee to
snatching it out of the air) you must break it into smaller steps. In the Frisbee-catching project, a good
way to start is to hold the Frisbee in the air. The dog will probably rise up on his hind legs to bite it. You
let the dog grab it in his mouth, then you release it. That is a first, simple step. Next, you release the
Frisbee a split second before the dog grabs it. If you are holding the Frisbee above the dog, you might
drop it about an inch through the air, right into the dog's mouth.
Now the most critical part of the shaping procedure takes place. You gradually allow the Frisbee to fall a
greater and greater distance before the dog bites it. You might start one inch above the dog's mouth, work
up to two inches, then three, and so on, until finally the dog can grab the Frisbee when it falls a whole
foot from your hand to the dog's mouth. (For literate dogs outside the U.S. and Britain, use centimeters
and meters.) Keep rule #4 in mind; if the dog cannot grab the Frisbee when it falls 8 inches, you go back
to 6 inches for a while, then work back to 8, then 10, then a foot.
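Rules #3 and #4 amount to a simple adaptive procedure: raise the criterion one step after each success, and drop back one step after a failure. Here is a hypothetical Python sketch of that logic; the drop distances, the success probability, and the trial limit are invented for illustration and are not meant to model a real dog.

```python
# Hypothetical sketch of shaping with adaptive step size (rules #3 and #4).
import random

def shape(steps, p_success=0.85, max_trials=200):
    """steps: criteria ordered easiest to hardest,
    e.g. Frisbee drop distances in inches."""
    i = 0                                  # index of the current criterion
    for _ in range(max_trials):
        succeeded = random.random() < p_success
        if succeeded and i == len(steps) - 1:
            return True                    # met the target criterion
        if succeeded:
            i += 1                         # reinforce, then raise the criterion
        else:
            i = max(i - 1, 0)              # step too large: back up (rule #4)
    return False

drop_distances = [1, 2, 3, 4, 6, 8, 10, 12]   # inches of free fall
print(shape(drop_distances))                  # usually True
```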
Eventually, if the dog gets into the spirit of the game, you should be able to work up to longer distances.
Once the dog is lunging for Frisbees that you flip toward it from a distance of a few feet, you are in
business. From there to a full-fledged Frisbee retrieval is only a matter of degree.
Rule #5 says to have reinforcers available in small quantities to avoid satiation. Satiation (pronounced
SAY-see-AY-shun) is "getting full" of a reinforcer—getting so much of it that the animal (or person) no
longer wants it. If satiation occurs, you lose your reinforcer, and your behavior modification project
grinds to a halt. Suppose you are using Dog Yummies to reinforce your Frisbee-catching dog. If you use
50 yummies getting it up to the point where it is catching a Frisbee that falls eight inches, you will
probably not get much further that day. The dog is likely to decide it has enough Dog Yummies and crawl
off to digest the food.
Why is satiation unlikely to be a problem in this situation?
Actually, dogs respond well to social reinforcement (praise and pats), and that never gets old to a loving
dog, so dog trainers usually reserve their most powerful reinforcers for occasional use. Retrieval games
are themselves reinforcing to many dogs. When I took a dog obedience course, the trainer used retrieval of
a tennis ball to reinforce her dog at the end of a training session. That was a fine example of the Premack
Principle in action because a preferred behavior (retrieving a tennis ball) was used to reinforce non-
preferred behaviors (demonstrating obedience techniques).

Prompting and Fading


Prompting is the act of helping a behavior to occur. This is a useful way to start teaching a behavior. A
coach who helps a small child hold a baseball bat, to teach a proper swing, is using prompting. Fading is
said to occur when the trainer gradually withdraws the prompt. For example, the baseball coach gradually
allows the child to feel more and more of the bat's weight, until the coach is no longer holding it.
Eventually the child swings the bat alone. The prompt has been "faded away."
What is prompting and fading?
Prompting and fading is commonly used in dog obedience training. For example, to teach a dog to sit, one
gives the command (sit) then forces the dog to comply with it by gently sweeping the arm into the dog's
back knees from behind, so the dog's back legs buckle gently and its rump goes down to the ground.
Meanwhile one holds the dog's collar so the head stays up. This forces the dog to sit. When the dog sits,
the trainer praises it or offers it a morsel of food.
How is prompting and fading used in dog obedience training?
The command is a stimulus that eventually functions as an S+. The upward tug on the collar and the arm
behind the back knees are called a prompt because they help or prompt the behavior to occur. The
procedure of gradually removing the prompt is called fading. The prompt becomes weaker and weaker; it
is "faded out." After about 20 repetitions there is no need to touch the back of the dog's legs; one says
"sit" and the dog sits.
How did a city use prompting and fading?
Prompting and fading was used by one city when it switched from signs with the English words "No
Parking" to signs with only an international symbol (a circle with a parked car in it and a diagonal line
crossing it out). For the first three months, the new signs contained both the international symbol and the
English words. Then the words were removed. People hardly noticed the transition to the new signs,
because their behavior was transferred smoothly from one controlling stimulus to another.

Differential Reinforcement
Differential reinforcement is selective reinforcement of one behavior from among others. Unlike shaping,
differential reinforcement is used when a behavior already occurs and has good form (does not need
shaping) but tends to get lost among other behaviors. The solution is to single out the desired behavior
and reinforce it.

What is differential reinforcement? How is it distinguished from shaping? What is a "response class"?
Differential reinforcement is commonly applied to a group of behaviors. For example, if one was working
in a day care center for children, one might reinforce any sort of cooperative play, while discouraging any
fighting. The "cooperative play" behaviors would form a group singled out for reinforcement. Such a
group is labeled a response class. A response class is a set of behaviors—a category of operants—singled
out for reinforcement while other behaviors are ignored or (if necessary) punished. The only limitation on
the response class is that the organism being reinforced must be able to discriminate it. In the case of
preschoolers at a day care center, the concept of cooperative play could be explained to them in simple
terms. Children observed to engage in cooperative play would then be reinforced in some way that
worked, for example, given praise or a star on a chart or more time to do what they wanted.
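In programming terms, a response class is just a membership test: behaviors inside the class get the reinforcer, everything else is ignored. The sketch below is a hypothetical illustration; the particular behaviors and the "praise" consequence are invented examples.

```python
# Hypothetical sketch of differential reinforcement of a response class.
COOPERATIVE_PLAY = {"sharing", "taking turns", "helping"}   # the response class

def consequence(behavior):
    if behavior in COOPERATIVE_PLAY:
        return "praise"                 # reinforce members of the class
    return None                         # ignore everything else

for b in ["sharing", "grabbing", "helping"]:
    print(b, "->", consequence(b))      # grabbing earns nothing
```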
How did Pryor reinforce creative behavior?
Karen Pryor is a porpoise trainer who became famous when she discovered that porpoises could
discriminate the response class of novel behavior. Pryor reinforced two porpoises at the Sea Life Park in
Hawaii any time the animals did something new. The response class, in this case, was any behavior the
animal had never before performed. Pryor set up a contingency whereby the porpoise got fish only for
performing novel (new) behaviors. At first this amounted to an extinction period. The animals were
getting no fish.
How did the porpoises' natural reaction to extinction help Pryor?
As usual when an extinction period begins, the porpoises showed extinction-induced resurgence. In other
words, the variety of behavior increased, and the porpoises showed a higher level of activity than normal.
They tried their old tricks but got no fish. Then they tried variations of old tricks. These were reinforced
if the porpoise had never done them before. The porpoises caught on to the fact that they were being
encouraged to do new and different things. One porpoise "jumped from the water, skidded across 6 ft of
wet pavement, and tapped the trainer on the ankle with its rostrum or snout, a truly bizarre act for an
entirely aquatic animal" (Pryor, Haag, & O'Reilly, 1969). The animals also emitted four new behaviors—
the corkscrew, back flip, tailwave, and inverted leap—never before performed spontaneously by
porpoises.

DRO and DRL


A special form of differential reinforcement is differential reinforcement of other behavior, abbreviated
DRO. Other behavior basically means any behaviors except the one you want to get rid of. In the
behavioral laboratory, DRO is technically defined as "delivery of a reinforcer if a specified time elapses
without a designated response." In other words, the animal can do whatever it wants, and as long as it
does not do a particular behavior for a certain period of time, it receives a reinforcer.
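That laboratory definition is exact enough to express as a procedure. Here is a minimal Python sketch, assuming a simple session log; the times, the interval, and the session length are invented for illustration.

```python
# Hypothetical sketch of the DRO contingency: deliver a reinforcer whenever
# `interval` seconds pass without the designated response.
def dro_deliveries(response_times, interval, session_end):
    """response_times: times (s) at which the unwanted behavior occurred.
    Returns the times at which reinforcers would be delivered."""
    deliveries = []
    timer_start = 0.0              # start of the current response-free interval
    events = sorted(response_times)
    i = 0
    while timer_start + interval <= session_end:
        due = timer_start + interval
        if i < len(events) and events[i] < due:
            timer_start = events[i]     # response occurred: reset the timer
            i += 1
        else:
            deliveries.append(due)      # interval passed response-free
            timer_start = due
    return deliveries

# One response at t=35s resets the 30-second DRO clock.
print(dro_deliveries([35.0], interval=30.0, session_end=120.0))
# -> [30.0, 65.0, 95.0]
```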

What is DRO? What are situations in which DRO might be useful?


DRO is used to eliminate a behavior without punishment. Suppose you have a roommate who complains
constantly about poor health. You could say, "Stop talking about your health" but that would be rude. So
how can you encourage your roommate to stop talking about health? One approach is to use DRO. If the
roommate spends a minute without talking about health, you pay attention and act friendly. If the
conversation turns to aches and pains, you stop talking. Eventually, if the procedure works, your
roommate will stop talking about health problems.
As this example shows, DRO involves extinction of the problem behavior. You cut off reinforcements to
the behavior you want to get rid of (extinction) and you reinforce any other behavior (DRO).
How can DRO supplement or replace punishment?
Whenever punishment is used, DRO should be used as well. For example, if you feel you must discipline
a child, you should not merely punish the wrong responses, you should positively reinforce correct
responses. Most of the time, you can achieve what you want through positive reinforcement alone without
any punishment.
Another variation of differential reinforcement is DRL or differential reinforcement of a low rate of
behavior. DRL occurs when you reinforce slow or infrequent responses. Psychologists were initially
surprised that such a thing as DRL could exist. After all, reinforcement is defined as increasing the rate of
behavior. However, many animals can learn a contingency in which responding slowly produces
reinforcement.
What is DRL?
A student reports using DRL to deal with a roommate problem:
My experience with my roommate is an example of DRL. My roommate is a wonderful person, but she
talks too much. A simple yes or no question receives a "15 minute lecture" answer. She talks constantly.
After the psychology lecture on differential reinforcement for a low rate of behavior, I decided to try this
method. When I asked her a simple question and received a lengthy answer, I simply ignored her or left
the room. When she gave a simple reply, I tried to seem interested and even discussed her answer. Now
my roommate talks less and I don't get as aggravated with her. [Author's files]
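DRL can be stated just as exactly: a response is reinforced only when enough time has passed since the previous response. The sketch below is a hypothetical illustration; the response times and the ten-second criterion are invented.

```python
# Hypothetical sketch of DRL: reinforce a response only if at least
# `irt` seconds (the required inter-response time) have elapsed.
def drl_reinforced(response_times, irt):
    """Return the subset of responses that would earn a reinforcer."""
    reinforced = []
    last = float("-inf")                # time of the previous response
    for t in sorted(response_times):
        if t - last >= irt:
            reinforced.append(t)        # responded slowly enough
        last = t                        # every response resets the clock
    return reinforced

# With a 10-second DRL, rapid-fire responses go unreinforced.
print(drl_reinforced([2, 5, 18, 21, 40], irt=10))   # -> [2, 18, 40]
```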

Using Negative Reinforcement


So far we have considered variations on the theme of positive reinforcement. Negative reinforcement also
has many uses. One is to increase productivity in industry. Most people will work extra hard if they can
get some time off from work as a reward. This is negative reinforcement because the level of work is
increased (the behavior becomes more frequent) as the result of having an aversive stimulus removed: the
time off removes the worker from the aversive situation of ongoing work. Sometimes this works too well!

Here is a story told by a guest lecturer in a Behavior Modification class.


How does the story about the automatic transmission assembly line illustrate the potential power of
negative reinforcement?
A major automobile manufacturer asked some behavioral psychologists to help solve a morale and
production problem at an automobile assembly plant in Ypsilanti, Michigan. Workers on one of the lines
were going too slow, holding up the entire plant. They were supposed to manufacture 60 transmissions
per hour, but they seldom achieved this objective. When managers urged them to speed up, they
complained about being exploited, because (as they saw it) they were being asked to produce more work
for the same salary. They felt abused. The supervisors were disgusted. Bad feelings existed all around.
The psychologists suggested a contingency. If the workers produced their quota of 60 transmissions
before the end of the hour, they could take a break until the end of the hour. The psychologists conceived
of this as a negative reinforcement contingency. They arrived at the idea when they realized there was no
positive reinforcer available, because the management would not allow any extra salary incentives. OK,
said the psychologists, then we will use a negative reinforcer. If the workers hate being "pushed" all the
time, we will let them take time off (and avoid being pushed) when they meet their quota.
The program worked like magic. Productivity leaped. The workers produced 60 transmissions in 50 minutes,
leaving 10 minutes for a break every hour. The supervisors felt strange seeing the workers "goof off"
every hour, but at least the line was finally meeting its quota. Then a funny thing happened. The line
began to manufacture 60 transmissions in 45 minutes, then 60 transmissions in 40 minutes. Soon workers
on the other lines were grumbling, "Hey, how come those guys are getting a 20 minute break every
hour?"
The plant managers had not expected this to happen, and they had no plans for dealing with it. They
boosted the production quota to 80 transmissions per hour. The workers grumbled, but given the
alternative of going back to the old plan with no hourly break, they accepted the new quota. Nobody
expected the "problem" that occurred next. At first, producing 80 transmissions took almost the whole
hour. But soon 80 transmissions took only 50 minutes...and then only 45 minutes, and the workers were
back to taking a 15 minute break every hour. Workers on the other assembly lines started asking for a
similar system.
At this point, the plant managers decided the experiment had to end. It was unacceptable to have people
taking a break every hour. The managers sent the behavior modifiers home and went back to the old
system. "We knew it wouldn't work," the managers said. [Author's files]
The assembly line story is an example of negative reinforcement. The reinforcer (time off from work)
involved removal of an aversive stimulus so it was "negative" reinforcement. Like positive reinforcement,
it produced a higher frequency of behavior...until the managers refused to let it continue.

Eventually a simplified form of this incentive did find a home in the auto industry, and it even became
part of union contracts. In an echo of the above story (which I heard as an undergraduate in the early
1970s) news articles in 1997 reported that workers in a stamping plant at Flint, Michigan were going
home after only 4 hours instead of the usual 8. Apparently the union had negotiated a production quota
based on the assumption that certain metal-stamping machines were capable of stamping out 5 parts per
hour, but actually the machines were capable of 10 per hour. The workers speeded them up to 10 parts per
hour and met the quota specified in their union contract within 4 hours. GM decided to eliminate the "go
home early" policy, and this was one issue in a 1998 strike against General Motors. In the end, a modified
version of the policy with a higher rate of production was re-affirmed in a new contract.
What is reinforced by an hourly wage? By piecework? What is the disadvantage of piecework?
If you think about it, an hourly wage reinforces slow behavior. The less energy a worker puts into the job,
the more money the worker receives per unit of energy expended. By contrast, when people are paid
according to how much they produce (so-called "piecework" systems) they work very quickly to
maximize their gain. The obvious disadvantage is that workers who are rushing to complete their quota
might produce a poor quality product, or endanger themselves, unless steps are taken to maintain quality
control and safety on the job.

Babies as master behavior modifiers


Babies are masters of behavior modification who use negative reinforcement to increase the frequency of
parenting behaviors in adults. Non-human babies of all different types practice this form of operant
conditioning instinctively. For example, juvenile birds fresh out of the nest will fly from branch to branch,
following their parents, making unpleasant noises until fed.
How are babies "masters of behavior modification"?
Almost nothing is more aversive to parents than the cry of a baby, and adult humans will do almost
anything to eliminate or prevent that stimulus. They will feed a baby, change its diapers, dance around
with it, get up in the middle of a deep sleep…all to prevent crying. Fortunately for parents, crying (in
babies under one year old) is not reinforced by the application of love.
What finding surprised behavior modifiers?
This finding actually surprised behavior modifiers when it was first discovered, because it was so counter-
intuitive. Studies showed that babies whose parents responded less to their crying (leaving the baby
crying longer or more often) actually suffered more crying in the future. Perhaps this is because crying
cannot be completely extinguished. Parents must respond eventually, so parents who respond slowly are
essentially using intermittent reinforcement and are teaching their babies persistence, not quietude.

Using Punishment
What is punishment?
Punishment is the application of a stimulus after a behavior, with the consequence that the behavior
becomes less frequent or less likely. Most people assume the stimulus has to be unpleasant (aversive), but
that is not always the case. Any stimulus that has the effect of lowering the frequency of a behavior it
follows is a punisher, by definition, even if it does not seem like one.
Electric shock is often the most effective punishing stimulus. Perhaps because electricity is an unnatural
stimulus, or because it disrupts the activity of nerve cells, organisms never become accustomed to it, and
they will do almost anything to avoid it. Whatever the reason, electric shock "penetrates" when other
punishers fail to work.

Treatment of head-banging
Whaley and Malott (1971) tell of a nine-year-old, mentally retarded boy who caused himself serious
injury by head-banging. The boy had to be kept in a straitjacket or padded room to keep him from hurting
himself. This prevented normal development; he acted more like a three-year-old than a nine-year-old.
Left unrestrained in a padded room, the boy banged his head up to a thousand times in an hour.
Something had to be done.
How did punishment help the child who banged his head?
The researchers decided to try a punishment procedure. They placed shock leads (electrodes) on the boy's
leg, strapping them on so he could not remove them. Each time he banged his head, they delivered a mild
shock to his leg.
The first time he banged his head and was given a shock, Dickie stopped abruptly and looked about the
room in a puzzled manner. He did not bang his head for a full three minutes, and then made three contacts
with the floor in quick succession, receiving a mild shock after each one. He again stopped his head-
banging activity for three minutes. At the end of that time he made one more contact, received a shock,
and did not bang his head for the remainder of the one-hour session. On subsequent sessions, after a
shock was given the first time Dickie banged his head, he abandoned this behavior. Soon the head
banging had stopped completely and the mat was removed from the room. Later, a successful attempt was
made to prevent Dickie from banging his head in other areas of the ward.
Currently Dickie no longer needs to be restrained or confined and has not been observed to bang his head
since the treatment was terminated; therefore, in his case, punishment was a very effective technique for
eliminating undesirable behavior. The psychologist working with Dickie stressed that the shock used was
mild and, compared to the harm and possible danger involved in Dickie's head banging, was certainly
justified (Whaley & Malott, 1971).
Twenty years after this technique was developed, it was still being debated. It worked and spared the
child further self-injury, plus it stopped a destructive habit that might have persisted for years if left
unchecked. However, people who regarded any use of electric shock with humans as unacceptable
attacked this technique as cruel.

Treatment of Uncontrollable Sneezing


If you try to identify the common element in problems that respond well to aversive treatment, they often
involve a "stuck circuit"—a biologically based behavior pattern that, for some reason, is triggered again
and again with self-injurious consequences. In these cases, one could argue, punishment therapy is
justifiable, even merciful, because it is so quick and effective.
Consider another case reported by Whaley and Malott. It involves uncontrollable sneezing in a 17-year-
old girl. She sneezed every few minutes for six months. Her parents spent thousands of dollars consulting
with medical specialists, but nobody could help. The problem was solved (again) with mild electric
shocks.
How does the case history of the sneezing girl illustrate therapeutic use of electric shock?
The shock began as soon as the sneeze was emitted and lasted for half a second after its cessation. Within
a few hours the sneezing became less frequent, and six hours later it had stopped completely. For the first
time in six months...the girl spent a full night and day without a single sneeze. Two days later she was
discharged from the hospital and, with occasional subsequent applications of the shock apparatus, her
sneezing remained under control.
...The total time that the teenager actually received shocks during the entire treatment was less than three
minutes. (Whaley & Malott, 1971)

Punishment from the Environment


Punishment often has negative side effects. Animals lose their trust of humans who punish them, making
medical treatment and other interactions difficult. Punishment can cause avoidance and emotional
disturbance. When humans punish animals, the animals often fail to learn because they do not know
which specific behavior is being punished. To be effective, a punishment must occur immediately after a
behavior, and it need not be injurious. A mother wolf (or lion or tiger) shows effective punishment
procedures with its babies. Misbehavior is followed by a quick and largely symbolic act of dominance,
such as a swat or pretend bite. The punishment does not cause injury, but it conveys disapproval, and it
comes immediately after the problem behavior.

What are some negative side effects of punishment? What typically happens when a human tries to
punish a cat?
Dunbar (1988) noted that if a cat owner sees a cat performing a forbidden act such as scratching the
furniture, punishment is not usually effective. If the human punishes the cat, the cat merely learns to
avoid the human (so the human becomes an S-). Typically the cat will continue to perform the same
forbidden act when the human is not present. If the human discovers evidence of a cat's forbidden
behavior upon coming home, and punishes the cat, the cat learns to hide when the human comes home.
This does not mean the cat feels "guilt." It means the cat has learned that the human does unpleasant
things when first arriving home. The cat does not associate punishment with the forbidden behavior,
which typically occurred much earlier.
What is "punishment from the environment" and how can it be used to keep cats off the kitchen counter?
If punishment from a human does not work very well, a good alternative is punishment from the
environment. It works with all animals, even cats. Dunbar points out, "A cat will only poke its nose into a
candle flame once." For similar reasons, "a well-designed booby trap usually results in one-trial learning."
For example, a cat can be discouraged from jumping on a kitchen counter by arranging cardboard strips
that stick out about 6 inches from the counter, weighted down on the counter with empty soda cans. When
the cat jumps to the counter it lands on the cardboard. The cans go flying up in the air, and the whole kit
and caboodle crashes to the floor. The cat quickly learns to stay off the counter. Meanwhile the cat does
not blame this event on humans, so the cat does not avoid humans, just the kitchen counter.
What conditional response bedevils cat owners? How do automatic gadgets help?
Sometimes cats get into the nasty habit of defecating or urinating on a carpet. Once the problem starts, it
is likely to continue, because even enzyme treatments designed to eliminate the odor do not eliminate all
traces of it, and the odor "sets off" the cat in the manner of a conditional response. The behavior occurs
when no human is present, and punishment by a human does not deter it, for reasons discussed above
(punishment comes too late and the animal fails to connect the punishment with the behavior).
What to do? The problem is urgent and motivates online buying, so entrepreneurs have responded.
Gadgets designed to deter this behavior typically combine a motion sensor with a can of pressurized air or
a high-frequency audio alarm. The blast of air (or alarm) is triggered by the presence of the cat in the
forbidden area. According to reviews by troubled cat owners at places like amazon.com, these devices
sometimes work when all else has failed. They are also a good example of punishment from the
environment.
What are several reasons dog trainers recommend against harsh punishment?
Dog trainers also recommend not using harsh punishment. Some dogs will "take it," but some will
respond with an active defense reflex that could involve biting, even if the dog is normally friendly.
(Terrier breeds are particularly prone to this problem, and a usually-friendly dog can surprise a child with
a vicious response to being harassed.) Moreover, punishment is unnecessary with dogs. Dogs have been
bred to desire the approval of humans. They respond very well to positive reinforcement as simple as a
word of praise.
How should cat owners respond to unwanted morning awakenings?
When punishment is used with any pet or domesticated animal, it should be as mild as possible. For
example, if cat owners have a kitty that likes to wake them up too early in the morning, the simplest and
gentlest approach is negative punishment or response cost. Simply put the kitty out of the room. If that
fails, a squirt gun works. Gentle methods are to be preferred with all animals. Trainers who handle wild
horses no longer "break" them, the way they did a century ago. Modern horse trainers win horses over
with gentle and consistent positive reinforcement. It works just as well and results in a horse that enjoys
human company.
Is electric shock punishment ever justified? Some people argue against all use of electric shock, in
principle, as if shock is always inhumane. But electric shocks come in all sizes. Small shocks do not cause
physical injury, and they are very effective punishers that discourage a repetition of harmful behavior.
Sometimes this is necessary and desirable.
What is an example of "effective and humane" use of electric shock?
In the case of electric fences used by ranchers, for example, shock is effective and humane. You can
touch an electric fence yourself, and although you will get a jolt, you will not be harmed. But even large
animals like horses will not test an electric fence more than a few times. Then they avoid it. Avoidance
behaviors are self-reinforcing, so large animals will continue to avoid a thin electric fence wire, even if
the electricity is turned off. They do not "test" it the way they test non-electric fences (often bending or
breaking them in the process). Electric fences also allow farmers and ranchers to avoid using barbed wire,
which can injure animals severely.

The Punishment Trap


Ironically, stimuli intended as punishment may sometimes function as reinforcers. How can you tell when
something intended as punishment is functioning as reinforcement? Observe the frequency of the
behavior. If the behavior becomes more frequent, the intended punisher is actually a reinforcer. If a child
responds to punishment by doing more of the same bad behavior, most parents will step up the level of
punishment. Sometimes this only makes the behavior worse. If so, the parents are caught in the
punishment trap.
How can you tell when attempts at punishment are actually reinforcing a behavior? What is the
"punishment trap"? What typically happens when children are well behaved?

How could such a pattern occur? Consider these facts. The average parent is very busy, due partly to
having children. The parent enjoys peace and quiet when children are being good or playing peacefully.
Therefore, when children are well behaved, parents tend to ignore them. By contrast, when children
misbehave, parents must give attention. Parents must break up fights, prevent damage to furniture or walls
or pets, and respond to screams or crying. Most children are reinforced by attention. So there you have all
the necessary ingredients for the punishment trap. Children learn to misbehave in order to get attention.
One student noticed the misbehavior-for-attention pattern while visiting a friend:
I was at my friend's trailer one weekend visiting with her and her small daughter. I played with Jessie, the
little girl, for a while. Then Dee-Ann and I sat down to talk, leaving Jessie to play with her toys. She
played quietly for a while, but all of a sudden she got up, stuck her hand in the potted plant, and threw dirt
on the floor. Dee-Ann quickly scolded her and got her to play with her toys again. Dee-Ann then sat back
down and continued with our conversation. In a few minutes, Jessie was throwing dirt again. Again, Dee-
Ann got her to play with her toys, and then sat back down. But in a few minutes Jessie was throwing dirt.
Why did little Jessie throw dirt?
Dee-Ann could not understand why Jessie was acting like that. I then remembered the story about the
parents hitting the kids for messing with things, but the kids wanting attention and doing it more often. So
I thought maybe Jessie was being reinforced for throwing dirt because each time she threw dirt, Dee-
Ann's attention reverted to her. I explained this to Dee-Ann, and the next time Jessie messed with the
plant, Dee-Ann simply ignored her, picked up the plant, and set it out of Jessie's reach. That ended the
dirt-throwing problems. [Author's files]
Little Jessie probably got lots of loving attention when her mother was not engrossed in conversation with
a friend. But some children receive almost no attention unless they are "bad." In such cases, serious
behavior problems may be established. One student remembers this from her childhood:
When I was a little girl, I always told lies, even if I did not do anything wrong... I think the only reason I
lied was to get attention, which my parents never gave me. But one thing puzzles me. Why would I lie
when I knew my dad was going to spank me with a belt? It really hurt. [Author's files]
How can a stimulus intended as punishment actually function as a reinforcer, in this type of situation?
The answer to this student's question, probably, is that she wanted attention more than she feared pain.
Any attention, even getting hit with a belt, is better than being totally ignored. Similar dynamics can occur
in a school classroom. One of my students told about a teacher in elementary school who wrote the names
of "bad" children on the board. Some children deliberately misbehaved in order to see their names on the
board.
How can a parent avoid the punishment trap?

The solution? It is contained in the title of a book (and video series) called Catch Them Being Good.
Parents should go out of their way to give sincere social reinforcement (love, attention, and appreciation)
when it is deserved. When children are playing quietly or working on some worthy project, a parent
should admire what they are doing. When they are creative, a parent should praise their products. When
they invent a game, a parent should let them demonstrate it. If you are a parent with a child in a grocery
store, and you observe other children misbehaving, point this out to your own children and tell them how
grateful you are that they know how to behave in public. Don't wait for them to misbehave. Point out how
"mature" they are, compared to those kids in the next aisle who are yelling and screaming.
Sincere social reinforcement of desirable behavior is a very positive form of differential reinforcement. It
encourages a set of behaviors that might be called sweetness. With such a child, the occasional reprimand
or angry word is genuinely punishing. This reduces the overall level of punishment considerably. A child
who loves you and trusts you and looks forward to your support may be genuinely stricken by harsh
words. A loving parent realizes this and adopts a gentler approach, which is usually adequate when a
child cares about pleasing the parent.
What did Tanzer write about in the book titled Your Pet Isn't Sick ?
Pets are also capable of something like the punishment trap. They, too, can learn to misbehave or pretend
to be ill, in order to get attention from humans.
One veterinarian saw so many malingering animals trying to get attention by acting ill, coughing, or
limping, that he wrote a book called Your Pet Isn't Sick [He Just Wants You to Think So] (Tanzer, 1977).
It explained how owners who accidentally reinforced symptoms of illness caused pet problems. Owners
will run over to a pet and comfort it if it makes a funny noise like a cough. Soon the pet is coughing all
the time. Tanzer found that if the animals were not reinforced for the symptoms (after a
thorough check to rule out genuine problems) the symptom would go away.
What did one vet call the "single most common problem" he encountered?
Another vet specialized in house calls so he could see a pet misbehave in context. He said unwitting
reinforcement of undesired behavior was the single most common problem he encountered. The solution
was the same as with many child behavior problems: "catch them being good." Praise the pet and give it
lots of love when it acts healthy; ignore it when it starts coughing or limping. Once genuine medical
problems have been ruled out, the problem behavior usually goes away.

Response Cost with a Rabbit


Recall that there are two forms of reinforcement (positive and negative) just as there are two forms of
punishment (positive and negative). Negative punishment is more commonly called response cost and
consists of removing a reinforcing stimulus. This punishes the behavior, making it less frequent.
On the internet, several discussion groups cater to rabbit owners. Here is an example from one of them in
which the solution to a problem involved response cost.
[An American list member writes:]
I have a 1 1/2 year old French lop and for his entire 1 1/2 years he has been obsessed with peeing on the
bed. We discovered that if we kept the bed covered with a tarp, it would usually deter him from the bed,
though not always. Up until about a month ago, we thought we had him pretty well trained, with only a
few infrequent accidents. But then, my husband and I got married and Jordy (we call him Monster) and I
moved in with my husband. He seems to have adjusted quite well, with the exception of his bed habit...
How was response cost used with a rabbit?
...Please help us, we want to keep Jordy as happy as possible, but we can't keep washing bedding every
day.
[A British list member responded:]
We have two "outdoor" rabbits that come inside for about an hour a day. The older (male) rabbit used to
pee on the bed. Whenever he did this he would immediately be put back in the hutch outside. After about
10 times of peeing on our bed, he learnt that if he peed, quite simply, he wouldn't be able to play with
"Mum" and "Dad." We haven't had a problem since then. I imagine that if Jordy is put outside of the
bedroom and denied affection for the rest of the evening he'll learn pretty quickly. Good luck!
This is a fine example of response cost, because the rabbit's behavior was punished by removing a
reinforcing stimulus. In this case the reinforcing stimulus, being allowed in the house, was removed each
time the rabbit misbehaved. After about 10 repetitions, he learned the consequence of his behavior, and
the problem behavior was eliminated.
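The arithmetic of the rabbit example can be sketched in a few lines of Python. The starting strength and per-episode decrement below are invented numbers, not measurements from the anecdote; the point is only that a consistent response cost drives the behavior's strength down geometrically.

# Minimal sketch of response cost: every bed-peeing episode removes a
# reinforcer (time in the house), weakening the behavior. The starting
# strength and per-episode cost are assumptions made for illustration.
strength = 1.0        # strength of the problem behavior, arbitrary units
COST = 0.25           # assumed fractional decrement per lost reinforcer
THRESHOLD = 0.1       # below this, treat the behavior as eliminated

episodes = 0
while strength > THRESHOLD:
    episodes += 1
    strength *= (1 - COST)   # losing house time weakens the behavior

print(f"Behavior suppressed after {episodes} episodes")
# With these invented numbers the answer is 9 episodes, roughly in line
# with the "about 10 times" the British list member reported.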

Applied Analysis of Antecedents


So far most of our examples of applied behavior analysis have involved changing the consequences of
behavior. But antecedents of behavior are important, too. Antecedents are stimuli that occur before a
behavior, and they may control that behavior by signaling when it will or will not be followed by a
reinforcer.
Recall the discussion of discriminative stimuli. An S+ is a stimulus indicating reinforcement is available.
An S- is a signal that reinforcement is not available, or that punishment may be coming. Naturally,
animals learn to perform a behavior in the presence of an S+ and to suppress it in the presence of an S-.
Behavior reliably emitted or suppressed in the presence of a particular stimulus is said to be under
stimulus control.
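A toy simulation can make stimulus control concrete. The sketch below uses a simple delta-rule update; this is my own illustrative choice rather than a model from the text, and the learning rate and trial count are arbitrary.

import random

# Toy discrimination learning: the probability of responding is tracked
# separately for each stimulus. Responses in the presence of S+ are
# reinforced; responses in the presence of S- are not.
p_respond = {"S+": 0.5, "S-": 0.5}   # start indifferent to both stimuli
ALPHA = 0.1                          # learning rate (assumed)

for _ in range(500):
    stimulus = random.choice(["S+", "S-"])
    if random.random() < p_respond[stimulus]:      # behavior emitted?
        target = 1.0 if stimulus == "S+" else 0.0  # reinforced only on S+
        # nudge response probability toward the experienced outcome
        p_respond[stimulus] += ALPHA * (target - p_respond[stimulus])

print(p_respond)   # responding climbs toward 1.0 under S+, falls under S-

After training, the simulated organism responds reliably in the presence of S+ and rarely in the presence of S-, which is exactly what "under stimulus control" means.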
How can you manipulate antecedent stimuli to help study more?

[Photo: B.F. Skinner]
The power of daily habit can jump-start important life activities. Books about studying in college
typically advise students to set aside a particular time and place for study. The familiar time and
location trigger the studying behavior. That matters because, with studying, getting started is half the
battle. Studying is usually not too painful once one gets started. Problems occur when a person never gets
started or procrastinates until there is too much work for the remaining time.
How did B.F. Skinner apply this principle to increase his writing productivity?
B.F. Skinner, whose research on operant conditioning underlies virtually all of the second half of this
chapter, used stimulus control to encourage his scholarly work. He followed a rigid daily schedule: he got
up at 4 a.m., ate breakfast, and then wrote for about five hours. Time and the environment of his
home office served as discriminative stimuli to get him started on his writing. Around 10 a.m. Skinner
took a walk down to campus (Harvard) to meet his morning classes. Walking probably became a stimulus
for mulling over his lectures for the day. In the afternoon he attended meetings and scheduled
appointments. With this routine he was always able to put in a few good hours of writing every day
during his prime time, early morning, while also scheduling adequate time for his other activities.

Summary: Applied Behavior Analysis


Applied behavior analysis is the application of principles from operant conditioning to "real life"
problems outside the conditioning laboratory. Lindsley's Simplified Precision Model recommends
pinpointing the behavior to be modified, recording its rate, changing the consequences of the behavior,
and then, if the first attempt to change the behavior fails, trying again with revised procedures.
The first step in any behavioral intervention is to specify the behaviors targeted for change. Next,
baseline measurements should continue until a stable pattern of behavior is observed. During baselining,
antecedent stimuli should also be observed; they are often important in behavior analysis. Baseline
measurement may itself produce behavior change. When this is done deliberately (for example, to help
people stop smoking) it is called self-monitoring.
The Premack principle suggests that a preferred behavior can be used to reinforce less likely behaviors.
Shaping is a technique that employs positive reinforcement to encourage small changes toward a target
behavior. Prompting and fading is a technique in which a behavior is helped to occur, and then the help is
gradually withdrawn (faded out) until the organism performs the desired behavior on its own.
Differential reinforcement is the technique of singling out some behaviors for reinforcement, while
ignoring other behaviors.
Negative reinforcement works wonders when employees are given "time off" as a reinforcer for good
work. Babies are master behavior modifiers who use negative reinforcement to encourage nurturing
behavior in parents.
Punishment is effective in certain situations. Electric fences are arguably more humane than alternatives
such as barbed wire for horses and other grazing animals. In human child-rearing, parents must beware of
the "punishment trap," which occurs when children are ignored until they misbehave. The solution is to
"catch them being good." Animals can also learn to misbehave or act ill, if it gets them attention. They,
too, respond better to kindness than punishment.
Analysis of antecedents can prove helpful in changing behavior. Dieters are often advised to avoid eating
in front of the TV, so television does not become an S+ for eating. Time of day can be used as a
discriminative stimulus for desirable behaviors such as studying. B.F. Skinner used this technique when
he set aside a certain time every morning for writing.
