
LEARNING

THEORIES
Submitted by:
Earl Mathew A. Mag-alasin
BSIE 1-1

Submitted to:
Professor Ma. Corazon Constantino

Ivan Pavlov (1849 - 1936)


Classical Conditioning, Extinction Learning, and Discrimination
Classical Conditioning involves the formation (or strengthening) of an
association between a conditioned stimulus and a response through
repeated presentation of the conditioned stimulus together with an
unconditioned stimulus.
There are four variables involved: unconditioned stimulus, conditioned
stimulus, unconditioned response, and conditioned response. An
unconditioned stimulus (US) is any stimulus that has the ability to elicit
a response without previous training. A conditioned stimulus (CS) refers
to a stimulus which initially does not elicit the response under study
but comes to do so by being paired with the unconditioned stimulus. An
unconditioned response (UR) is the original response to an
unconditioned stimulus while the conditioned response (CR) is the
learned response to a conditioned stimulus.
Classical conditioning involves learning to associate an unconditioned stimulus that already brings about a
particular response (i.e. a reflex) with a new (conditioned) stimulus, so that the new stimulus brings about the
same response.
Extinction learning refers to the gradual decrease in response to a conditioned stimulus that occurs when the
stimulus is presented without reinforcement.
Discrimination, in classical conditioning, is the ability to differentiate between a conditioned stimulus and
other stimuli that have not been paired with an unconditioned stimulus.
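The acquisition and extinction processes defined above can be sketched as a simple associative-strength update. The rule below is the Rescorla-Wagner model, a later formalization of classical conditioning, not Pavlov's own mathematics; the learning rate of 0.3 is an arbitrary illustrative value.

```python
# Sketch of acquisition and extinction as an associative-strength update.
# This uses the Rescorla-Wagner rule (a later formal model, not Pavlov's own
# equations); alpha = 0.3 is an arbitrary illustrative learning rate.

def update(v, us_present, alpha=0.3):
    """Move associative strength v toward 1.0 (US present) or 0.0 (US absent)."""
    lam = 1.0 if us_present else 0.0
    return v + alpha * (lam - v)

v = 0.0
for _ in range(10):                  # acquisition: CS paired with US
    v = update(v, us_present=True)
acquired = v                         # approaches 1.0 -> strong CR

for _ in range(10):                  # extinction: CS presented alone
    v = update(v, us_present=False)
extinguished = v                     # decays back toward 0.0

print(round(acquired, 3), round(extinguished, 3))
```

Discrimination can be modelled the same way by keeping a separate strength per stimulus: only the CS that is paired with the US accrues strength, so other stimuli never come to elicit the response.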

Experiment

The classical conditioning of Pavlov


involves a dog in a soundproof room. The
dog's cheek has been operated on so that
part of the salivary gland is exposed. A
capsule is then attached to the cheek to
measure the salivary flow. The first
stimulus, a bell, is presented to the dog. The
bell is rung in the room but this does not
elicit any salivary response. The bell (CS) is
then presented together with food (US)
which elicits salivation. The food is an
unconditioned stimulus, since it elicits the
salivary response (UR) without training.
The pairing of the bell and the food is
repeated several times; this is the
conditioning process. After a certain point,
the bell is rung and presented without the food. The dog still salivates (CR). This salivary reaction to the ringing
bell is considered a conditioned response. The dog had learned an association between the bell and the food, and a
new behaviour had been learnt. Because this response was learned (or conditioned), it is called a conditioned
response. The neutral stimulus has become a conditioned stimulus.
Pavlov found that for associations to be made, the two stimuli had to be presented close together in time. He
called this the law of temporal contiguity. If the time between the conditioned stimulus (bell) and unconditioned
stimulus (food) is too great, then learning will not occur.

CONCLUSION
It can be said that when a neutral stimulus is repeatedly paired with an unconditioned stimulus, it comes to
elicit a conditioned response after several pairings.

GENERALISATIONS AND IMPLICATIONS


It can be generalized that organisms can be classically conditioned. Responses can occur as a result of
experience, such as braking when you see a red traffic light.

APPLICATION
Phobias can be described in classical conditioning terms as stimulus generalization that has gone too far.
Say, for instance, that you are someone who likes spiders. They don't make you afraid at all. They are neutral
stimuli. One day, however, a spider comes out of nowhere and bites you on the hand. The
bite is an unconditioned stimulus that causes an unconditioned response: startle, fear and pain. You don't need to
learn to react with fear to that kind of stimulus.

http://www.simplypsychology.org/pavlov.html
http://www.vcehelp.com.au/ivan-pavlov%E2%80%99s-classical-conditioning-dog-salivation-experiment-1330/
http://www.springerreference.com/docs/html/chapterdbid/319711.html
http://psychology.about.com/od/dindex/g/discrimination.htm
http://www.slideshare.net/coburgpsych/lesson-7-applications-of-classical-conditioning

Hermann Ebbinghaus (1850 - 1909)


Theory of memory
If you're looking for ways to optimize your study habits, Ebbinghaus's
research on memory should help you. He spent much of his time as his
own test subject, trying to memorize nonsense syllables in order to
see which methods of memorization would work best. These syllables
usually consisted of a consonant, followed by a vowel, and ending with
another consonant. Nonsense syllables were preferred because they were
not familiar and therefore could not involve prior learning. Through
this process, he discovered a number of trends in how the human mind
retains new information.

Memory experiments of Ebbinghaus:
> how to investigate retention of newly learnt material
> invented lists of 16 nonsense syllables to minimize the influence of the learner's history
> goal: study memory in pure form
> introduced a criterion for successful learning (2 errorless recitations)
> introduced the savings method to measure retention/forgetting of lists
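The savings method can be expressed as a simple calculation: the percentage of the original learning effort that is saved when relearning. A minimal sketch, in which the trial counts are made-up illustrations rather than Ebbinghaus's data:

```python
# Ebbinghaus's savings method measures retention by how much less effort
# relearning a list takes than learning it the first time.
# (The trial counts below are made-up illustrations, not Ebbinghaus's data.)

def savings_score(original_trials, relearning_trials):
    """Percentage of the original learning effort saved on relearning."""
    return 100.0 * (original_trials - relearning_trials) / original_trials

# A list first learned in 20 recitations and relearned a day later in 13
# shows 35% savings:
print(savings_score(20, 13))
```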

The forgetting curve

The forgetting curve describes how the ability of the brain
to retain information decreases over time.
Hermann Ebbinghaus was the first to study the
forgetting behaviour in an experimental, scientific way.
In his groundbreaking research he studied, on himself,
the memorization and forgetting of three-letter nonsense
syllables. Ebbinghaus performed a series of tests on
himself over various time periods. He then analyzed all
his recorded data to find the exact shape of the
forgetting curve. He found that forgetting is exponential
in nature.
Ebbinghaus published his findings in 1885 in his book
Über das Gedächtnis (Memory: A Contribution to
Experimental Psychology).

Memory experiments of Ebbinghaus:


Examination of forgetting curve with savings method
> calculation of savings scores to measure performance
> findings:
o most forgetting occurs right after learning
* approx. 50% in first 40 min
* relationship between delay and forgetting not linear
At the beginning your retention is 100%, since this is exactly the point in time when you actually learned the piece
of information. As time goes on, retention drops sharply, down to around 40% in the first couple of days.
The forgetting curve is exponential. That means that in the first days the memory loss is greatest; later you
still forget, but the rate at which you forget is much, much slower.

The forgetting curve clearly shows that in the first period after learning or reviewing a piece of information we
forget most!
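The exponential decay described above is commonly written as a retention function R = exp(-t/S). A minimal sketch; the stability value S = 1.2 days is an illustrative assumption, not a constant from Ebbinghaus's data:

```python
import math

# Forgetting curve in its common exponential form R = exp(-t/S), where R is
# retention, t is elapsed time and S is the stability of the memory.
# S = 1.2 days is an illustrative assumption, not a fitted constant.

def retention(t_days, stability=1.2):
    return math.exp(-t_days / stability)

for t in [0, 1, 2, 7]:
    print(t, round(retention(t), 2))   # retention falls steeply at first
```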
The speed with which we forget any information depends on a number of different factors:

How difficult is the learned material? How easy is it to relate the information to facts you already
know?
How is the information represented?
Under which condition are you learning the material? Are you stressed?
Are you fully rested and have you slept enough?
While all individuals differ in their capacity to learn and retain information, the shape of the forgetting curve for
base tests (such as nonsense syllables) is nearly identical.
Thus the differences in learning capacity come from different acquired learning behaviours. Some individuals
are able to transform a piece of information into a memory representation that is more suitable for them (for
example, audio-oriented learners or visually oriented learners). Some people also naturally have a better capacity
to use memory hooks and other mnemonic techniques to remember more easily and to relate new material to
information they already know.
The spacing effect
The spacing effect refers to the finding that information which is presented over spaced intervals is learned and
retained more easily and more effectively.
In particular it refers to remembering items in a list. You can study them a few times over a long period of
time (spaced presentation) or repeatedly in a short period of time (massed presentation). It was found that
spaced repetition is much more beneficial both time-wise and retention-wise.
In simple words:
If your reviews are farther apart in time, you will benefit much more than when the repetitions are close together.
This fact was first found by the German psychologist Hermann Ebbinghaus. He published his findings in his
book Über das Gedächtnis. Untersuchungen zur experimentellen Psychologie (Memory: A Contribution to
Experimental Psychology) in 1885.
The spacing effect was confirmed in many different memory tasks such as recognition, frequency estimation,
free recall and cued recall. The closer you are to forgetting a piece of information the more you will profit from
reviewing.
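The claim that reviews help most when a memory is close to being forgotten can be sketched as a toy model: each review resets retention to 100%, and the stability gain is larger the closer the memory was to being forgotten at review time. The update rule below is a hypothetical illustration, not a published model:

```python
import math

# Toy model of the spacing effect: each review resets retention to 100% and
# boosts the memory's stability, with a bigger boost the closer the memory
# was to being forgotten. This update rule is a hypothetical illustration,
# not a published model.

def final_retention(review_days, test_day, s0=1.0):
    s, last = s0, 0.0
    for d in review_days:
        r = math.exp(-(d - last) / s)   # retention just before this review
        s *= 1.0 + (1.0 - r)            # bigger boost when nearly forgotten
        last = d
    return math.exp(-(test_day - last) / s)

massed = final_retention([0.1, 0.2, 0.3], test_day=10)  # crammed reviews
spaced = final_retention([1, 3, 7], test_day=10)        # spread-out reviews
print(round(massed, 3), round(spaced, 3))               # spaced wins clearly
```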
Experiments suggest that the spacing effect is a fundamental property of biological learning.
Spaced repetition works in all tested animals, not just humans. The spacing effect works because that's how
the nerve cells in our bodies store information. Recent experiments in rats have
found that the spacing effect has a clear neurophysiological basis: Sisti et al. (2007) showed that neuronal
longevity in the hippocampus of rat brains improved significantly with spaced repetition. (The hippocampus is a
region of the brain which is important for long-term storage of information.)

The learning curve


The learning curve denotes a graphical representation of the rate at which you make progress learning new
information.
When you learn something new, repetition is essential. Through repetition you become more efficient and more
effective at any challenge you set yourself. The progress you make during the learning and repetition
phases can be represented graphically. Scientific studies on memory and acquisition of
motor skills have shown that the learning curve looks as follows: in the beginning, when what you have to learn
is very new, the progress you make is very slow. However, if you keep training and repeating, something
interesting happens. Your brain starts adjusting to the challenge and suddenly your progress
accelerates. This is the phase where you make the most progress. Once you reach a certain level of skill and
knowledge, you enter the phase of diminishing returns. The better you become at the task, the less additional
progress you can make. You start mastering the new knowledge or skill; your brain has adapted
and adjusted to the challenge; you hit the bounds of the skill, or you know all there is to know in that field:
you have reached a plateau. The plateau is generally not flat; it is just much, much harder to make significant
progress. Most people are then satisfied with the results and say that they have learned the new knowledge or
skill as well as it can be learned. You could call it the individual maximum competence for a given skill.
Most likely you have experienced the learning curve
first hand. Whenever you delve into a new and very
complex field, your progress at the beginning of the
learning curve is very slow. This is mostly due to the
fact that you first have to familiarize yourself with the
topic, get an overview of what's going on, understand the
basic definitions and terminology, etc. Once you are
over this initial phase, whatever you learn starts slowly
making sense. In fact, once you have
learned the basic vocabulary, your understanding
improves significantly and you make a lot of progress
learning what really matters in that field. This is when
all the scattered facts begin to interconnect and you
start putting it all together. You continue making fast
progress until you reach the end of what there is to
learn in the given field, when all that is left is the boring details, which do not change the big picture any more.
Then your progress in the learning curve slows down until you reach your individual plateau. The first scientific
study of the learning curve was performed by Hermann Ebbinghaus. Ebbinghaus learned nonsense three-letter
syllables such as "WID", "KOR", "ZIF", etc., and recorded his progress. He found that when learning lists of
such nonsense syllables, the time spent learning a given number of syllables increases drastically with the number
of syllables.
While the learning curve originally used in psychology and memory research has a clearly defined meaning, it
has become increasingly popular in other fields as well. Therefore different terms have been coined,
such as "experience curve", "progress function", "progress curve", "improvement curve", "cost improvement
curve", "start-up curve", and "efficiency curve". These different names, however, denote essentially the same
behaviour: a slow beginning, followed by an accelerated rate of increase and a subsequent reaching of a plateau.
All these processes are described by a so-called S-curve.
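The S-curve just described can be sketched with a logistic function: slow start, rapid middle, plateau. The midpoint (trial 10) and steepness (0.6) below are illustrative assumptions only:

```python
import math

# The S-shaped learning curve sketched as a logistic function: slow start,
# rapid middle, plateau. Midpoint (trial 10) and steepness (0.6) are
# illustrative assumptions, not measured values.

def skill(trial, midpoint=10.0, steepness=0.6):
    return 1.0 / (1.0 + math.exp(-steepness * (trial - midpoint)))

for t in [0, 5, 10, 15, 20]:
    print(t, round(skill(t), 2))   # near 0 at first, steepest in the middle
```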

The steep learning curve


A colloquial term that is frequently used is "this product has a steep learning curve". Interestingly, this is exactly
the opposite of the original definition. When you say, e.g., "quantum thermodynamics has a steep learning curve",
you intend to say that it is rather difficult to understand and master. A steep learning curve, however, as
defined in learning psychology, means that you make rapid progress. This is the middle part of the curve,
where things are starting to make sense and you move ahead quickly: this is where learning is easiest!
Probably the term came about through its figurative meaning: if you hike up a learning curve, "steep"
is associated with frustration and strenuous climbing.

CONCLUSION:
From his discovery regarding the "forgetting curve", Ebbinghaus came up with the effects of "overlearning".
Overlearning ensures that information is more impervious to being lost or forgotten, and the forgetting curve for
this overlearned material is shallower.
Ebbinghaus hypothesized that the speed of forgetting depends on a number of factors such as the difficulty of
the learned material (e.g. how meaningful it is), its representation and physiological factors such as stress and
sleep. He further hypothesized that the basal forgetting rate differs little between individuals. He concluded that
the difference in performance (e.g. at school) can be explained by mnemonic representation skills.
He went on to hypothesize that basic training in mnemonic techniques can help overcome those differences in
part. He asserted that the best methods for increasing the strength of memory are:
> better memory representation (e.g. with mnemonic techniques)
> repetition based on active recall (esp. spaced repetition).

APPLICATION:
When you search for your missing things, it may seem that the information about where you left them is
permanently gone from your memory. However, forgetting is generally not about actually losing or erasing this
information from your long-term memory. Forgetting typically involves a failure in memory retrieval. While the
information is somewhere in your long-term memory, you are not able to actually retrieve and remember it.

http://psychology.about.com/od/cognitivepsychology/p/forgetting.htm
http://www.flashcardlearner.com/articles/hermann-ebbinghaus-a-pioneer-of-memory-research/
http://users.ipfw.edu/abbott/120/Ebbinghaus.html

Edward Thorndike (1874 - 1949)


Thorndike is famous in psychology for his work on learning theory that
led to the development of operant conditioning within behaviourism.
He was one of the first investigators to come out with an association
theory of learning, called Connectionism. The theory of connectionism
states that behavioural responses to specific stimuli are established
through a process of trial and error that affects neural connections
between the stimuli and the most satisfying responses. It associated sense
impressions and impulses with action.
Connectionism is today defined as an approach in the fields of artificial
intelligence, cognitive psychology, cognitive science and philosophy of mind which models mental or
behavioural phenomena with networks of simple units. It is not a theory within behaviourism, but it
preceded and influenced the behaviourist school of thought. Connectionism represents psychology's first
comprehensive theory of learning.
Connectionism was based on principles of associationism, mostly claiming that elements or ideas become
associated with one another through experience and that complex ideas can be explained through a set of simple
rules. But modern connectionism further expanded these assumptions and introduced ideas like distributed
representations and supervised learning, and should not be confused with Thorndike's original theory.
The Law of Effect states that any behaviour that is followed by
pleasant consequences is likely to be repeated, and any behaviour
followed by unpleasant consequences is likely to be stopped.

Experiment
He placed a hungry cat in a puzzle box; the cat was encouraged
to escape in order to reach a scrap of fish placed outside. Thorndike would
put a cat into the box and time how long it took to escape. The
cats experimented with different ways to escape the puzzle box
and reach the fish.
Eventually they would stumble upon the lever which opened the cage. When a cat had escaped, it was put in again,
and once more the time it took to escape was noted. In successive trials the cats would learn that pressing the
lever had favourable consequences, and they would adopt this behaviour, becoming increasingly quick at
pressing the lever.
Edward Thorndike put forward a Law of effect which stated that any behaviour that is followed by pleasant
consequences is likely to be repeated, and any behaviour followed by unpleasant consequences is likely to be
stopped.

Thorndike says that this is also the same process humans use in learning; that is, man learns by trial and error.
In trying to open the box, the cat made many trials before it found the correct solution. In
searching for the correct solutions to the problems confronting him, man uses the same trial and error process. From his
experience, man learns the acts which lead to a satisfactory state of affairs and eliminates those which do
not.
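This trial-and-error process can be sketched as a toy simulation: a "cat" samples actions at random, and the action that leads to escape is strengthened, so it tends to be found faster on later trials. The action names and weighting scheme are hypothetical illustrations, not Thorndike's actual procedure or data:

```python
import random

# Toy trial-and-error simulation in the spirit of Thorndike's puzzle box.
# Each trial, the "cat" samples actions at random; the action that leads to
# escape is strengthened (Law of Effect), so later trials tend to be quicker.
# Action names and weights are hypothetical illustrations.

random.seed(1)
actions = ["scratch", "meow", "push_wall", "press_lever"]
weights = {a: 1.0 for a in actions}

def run_trial():
    attempts = 0
    while True:
        attempts += 1
        a = random.choices(actions, [weights[c] for c in actions])[0]
        if a == "press_lever":
            weights[a] += 1.0   # satisfying outcome strengthens the response
            return attempts

latencies = [run_trial() for _ in range(30)]
# escape tends to get quicker as the lever response is strengthened
print(latencies[:5], latencies[-5:])
```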

CONCLUSIONS:
It can be concluded that the cats learned instrumentally: pressing the lever was the means of escaping from the
puzzle box in order to get a food reward.
From his animal studies, Thorndike gave to the world his two laws of learning: the Law of Exercise and the Law
of Effect. Both are based on the theory of connectionism. The Law of Exercise states that stimulus-response
(S-R) connections are strengthened by practice or repetition. The Law of Effect simply states that S-R bonds
or connections are strengthened by rewards or satisfaction. An organism willingly approaches a reward or
satisfier.

GENERALISATIONS AND IMPLICATIONS


Behaviour which is followed by satisfying consequences is strengthened and is more likely to occur than
behaviour which is followed by negative consequences, which is usually weakened. When solving a maths
problem, many trials may occur until a satisfying outcome is obtained.

APPLICATION
An example is often portrayed in a child given candy. When a child eats candy for the first time and
receives a positive outcome, they are likely to repeat the behaviour due to the reinforcing consequence. Over
time, the child will search for more candy.
http://www.simplypsychology.org/edward-thorndike.html
http://www.vcehelp.com.au/thorndike%E2%80%99s-instrumental-learning-experiment-with-cats-1338/

http://dictionary.reference.com/browse/connectionism

Burrhus Skinner

(March 20, 1904 - August 18, 1990)

Skinner believed that we do have such a thing as a mind, but that it is simply
more productive to study observable behaviour rather than internal mental
events.
Skinner believed that the best way to understand behaviour is to look at the
causes of an action and its consequences. He called this approach operant
conditioning.
Operant conditioning (or instrumental conditioning) is a type of learning in
which an individual's behaviour is modified by its antecedents and consequences.

BF Skinner: Operant Conditioning


Skinner is regarded as the father of operant conditioning, but his work was based on Thorndike's Law of Effect.
Skinner introduced a new term into the Law of Effect: reinforcement. Behaviour which is reinforced tends to
be repeated (i.e. strengthened); behaviour which is not reinforced tends to die out or be extinguished (i.e.
weakened).
B.F. Skinner (1938) coined the term operant conditioning; it means roughly the changing of behaviour by the use of
reinforcement which is given after the desired response. Skinner identified three types of responses, or operants,
that can follow behaviour.
> Neutral operants: responses from the environment that neither increase nor decrease the probability of a
behaviour being repeated.
> Reinforcers: Responses from the environment that increase the probability of a behaviour being repeated.
Reinforcers can be either positive or negative.
> Punishers: Responses from the environment that decrease the likelihood of a behaviour being repeated.
Punishment weakens behaviour.
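The three consequence types above can be sketched as a toy update on the probability that a behaviour is repeated. The step size of 0.1 is an arbitrary illustrative choice, not a value from Skinner's work:

```python
# Toy illustration of Skinner's three consequence types acting on the
# probability that a behaviour is repeated. The 0.1 step size is an
# arbitrary illustrative choice.

def consequence(p, kind, step=0.1):
    if kind == "reinforcer":
        return min(1.0, p + step)   # reinforcement strengthens the behaviour
    if kind == "punisher":
        return max(0.0, p - step)   # punishment weakens it
    return p                        # a neutral operant leaves it unchanged

p = 0.5                             # initial chance of pressing the lever
for _ in range(4):
    p = consequence(p, "reinforcer")   # behaviour followed by food
print(round(p, 1))                     # strengthened

for _ in range(6):
    p = consequence(p, "punisher")     # behaviour followed by punishment
print(round(p, 1))                     # weakened
```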

Positive Reinforcement
Skinner showed how positive reinforcement worked by placing a hungry rat in his Skinner box. The box
contained a lever on the side, and as the rat moved about the box it would accidentally knock the lever.
Immediately it did so, a food pellet would drop into a container next to the lever. The rats quickly learned to go
straight to the lever after a few times of being put in the box. The consequence of receiving food if they pressed
the lever ensured that they would repeat the action again and again.
Positive reinforcement strengthens a behaviour by providing a consequence an individual finds
rewarding.

Negative Reinforcement
The removal of an unpleasant reinforcer can also strengthen behaviour. This is known as negative reinforcement
because it is the removal of an adverse stimulus which is rewarding to the animal. Negative reinforcement
strengthens behaviour because it stops or removes an unpleasant experience.
Skinner showed how negative reinforcement worked by placing a rat in his Skinner box and then subjecting it to
an unpleasant electric current which caused it some discomfort. As the rat moved about the box it would
accidentally knock the lever. Immediately it did so, the electric current would be switched off. The rats quickly
learned to go straight to the lever after a few times of being put in the box. The consequence of escaping the
electric current ensured that they would repeat the action again and again.
In fact Skinner even taught the rats to avoid the electric current by turning on a light just before the electric
current came on. The rats soon learned to press the lever when the light came on because they knew that this
would stop the electric current being switched on.
These two learned responses are known as Escape Learning and Avoidance Learning.
Skinner (1948) studied operant conditioning by conducting experiments using animals which he placed in a
"Skinner Box", which was similar to Thorndike's puzzle box.

Skinner's Operant Conditioning Experiment with Rats


EXPERIMENT: A hungry rat was placed in the Skinner box and
every time it pressed the lever it was rewarded with a food pellet
in the food dish which was used to reinforce its behaviour.

RESULT: The rat scurried around the box randomly,
touching parts of the floor and wall. Eventually the rat
accidentally touched the lever and a food pellet was released. The same sequence was repeated, and
with more trials the time taken to press the lever eventually decreased. The random movements of
the rat eventually became deliberate; the rats then ate the food as fast as they could press the lever.

CONCLUSION
Behaviourism is primarily concerned with observable behaviour, as opposed to internal events like thinking
and emotion. Note that Skinner did not say that the rats learned to press a lever because they wanted food. He
instead concentrated on describing the easily observed behaviour that the rats acquired.
The major influence on human behaviour is learning from our environment. In the Skinner study, because food
followed a particular behaviour the rats learned to repeat that behaviour, e.g. classical and operant conditioning.
There is little difference between the learning that takes place in humans and that in other animals. Therefore
research (e.g. classical conditioning) can be carried out on animals (Pavlov's dogs) as well as on humans (Little
Albert). Skinner proposed that the way humans learn behaviour is much the same as the way the rats learned to
press a lever.
Behaviourism and its offshoots tend to be among the most scientific of the psychological perspectives. The
emphasis of behavioural psychology is on how we learn to behave in certain ways. We are all constantly
learning new behaviours and how to modify our existing behaviour. Behavioural psychology is the
psychological approach that focuses on how this learning takes place.
It can be concluded that the rats operated on their environment to receive a food reward.

GENERALISATIONS AND IMPLICATIONS


Organisms can learn to operate on their environment to get a desired outcome. Kids learn that if they study hard
for a test they could get a passing grade.

APPLICATION
A kid plays a video game. A child studies hard for a test. A runner competes in a marathon. All of these
individuals are reinforced by the results they receive.
http://www.simplypsychology.org/operant-conditioning.html
http://www.vcehelp.com.au/skinner%E2%80%99s-operant-conditioning-experiment-with-rats-1340/

John B. Watson (1878 to 1958)


Behaviorism
John Watson proposed that the process of classical conditioning (based on
Pavlov's observations) was able to explain all aspects of human psychology.
Everything from speech to emotional responses were simply patterns of
stimulus and response. Watson denied completely the existence of the mind
or consciousness.
Behaviourism, according to Watson, was the science of observable
behaviour. Only behaviour that could be observed, recorded and measured
was of any real value for the study of humans or animals. Watson's thinking was significantly influenced by the
earlier classical conditioning experiments of Russian psychologist Ivan Pavlov and his now infamous dogs.
Watson believed that all individual differences in behaviour were due to different experiences of learning. He
famously said:

"Give me a dozen healthy infants, well-formed, and my own specified world to bring them up in and I'll
guarantee to take any one at random and train him to become any type of specialist I might select - doctor,
lawyer, artist, merchant-chief and, yes, even beggar-man and thief, regardless of his talents, penchants,
tendencies, abilities, vocations and the race of his ancestors."

Little Albert Experiment (Phobias)


Ivan Pavlov showed that classical conditioning applied to
animals. Did it also apply to humans? In a famous (though
ethically dubious) experiment Watson and Rayner (1920)
showed that it did.
Little Albert was a 9-month-old infant who was tested on his
reactions to various stimuli. He was shown a white rat, a rabbit,
a monkey and various masks. Albert, described as "on the whole
stolid and unemotional", showed no fear of any of these stimuli.
However, what did startle him and cause
him to be afraid was a hammer being struck against a steel bar behind his head. The sudden loud noise would
cause "Little Albert" to burst into tears.
When "Little Albert" was just over 11 months old, the white rat was presented and seconds later the hammer was
struck against the steel bar. This was done 7 times over the next 7 weeks, and each time "Little Albert" burst into
tears. By now "Little Albert" only had to see the rat and he immediately showed every sign of fear. He would cry
(whether or not the hammer was hit against the steel bar) and he would attempt to crawl away.
Watson and Rayner had shown that classical conditioning could be used to create a phobia. A phobia is an
irrational fear, i.e. a fear that is out of proportion to the danger. Over the next few weeks and months "Little
Albert" was observed, and 10 days after conditioning his fear of the rat was much less marked. This dying out of
a learned response is called extinction. However, even after a full month the fear was still evident.
Classical conditioning theory involves learning a new behaviour via the process of association. In simple terms
two stimuli are linked together to produce a new learned response in a person or animal.

CONCLUSION
It can be concluded that fear can be acquired through classical conditioning.

GENERALISATIONS AND IMPLICATIONS


Classical conditioning could cause some phobias in humans. Fear can be acquired through classical
conditioning in organisms. The fear can be generalised to other similar stimuli to the original
stimulus (e.g. rabbit, dog, cotton wool ball, Santa Claus mask). Conditioned fear response may be
acquired through classical conditioning and may be strong in young kids.

APPLICATION
Consider a child who won't stop playing. His mother takes a stick and tells the child to stop, but he
doesn't; then she spanks the child, and he stops. To condition the child, the next day the mother takes
a stick, spanks the child and tells him to stop. After the mother conditions the child, the result is that whenever
the mother takes a stick and tells the child to stop, the child will stop.

http://www.lifecircles-inc.com/Learningtheories/behaviorism/Watson.html
http://psychology.about.com/od/behavioralpsychology/f/behaviorism.htm

Edward C. Tolman (1886- 1959)


Cognitive theory is an approach to psychology that attempts to explain
human behaviour by understanding the thought processes. The assumption
is that in humans, thoughts are the primary determinants of emotions and
behaviour.
Tolman, a cognitive theorist, came up with his sign learning theory from
his experiments with rats. Sign Learning is defined as an acquired
expectation that one stimulus will be followed by another in a particular
context.

Tolman - Latent Learning


The behaviourists stated that psychology should study actual observable behaviour, and that nothing happens
between stimulus and response (i.e. no cognitive processes take place).
Edward Tolman (1948) challenged these assumptions by proposing that people and animals are active
information processors and not passive learners, as behaviourism had suggested. Tolman developed a cognitive
view of learning that has become popular in modern psychology.
Tolman believed individuals do more than merely respond to stimuli; they act on beliefs, attitudes and changing
conditions, and they strive toward goals. Tolman is virtually the only behaviourist who found the stimulus-response
theory unacceptable, because reinforcement was not necessary for learning to occur. He felt behaviour
was mainly cognitive.

Tolman coined the term cognitive map, which is an internal representation (or image) of external environmental
features or landmarks. He thought that individuals acquire large numbers of cues (i.e. signals) from the
environment and could use these to build a mental image of an environment (i.e. a cognitive map).
By using this internal representation of a physical space they could get to the goal by knowing where it is in a
complex of environmental features. Short cuts and changeable routes are possible with this model.
Tolman also worked on latent learning, defined as learning which is not apparent in the learner's behavior at the
time of learning, but which manifests later when a suitable motivation and circumstances appear. The idea of
latent learning was not original to Tolman, but he developed it further.

Tolman and Honzik (1930)


In their famous experiments Tolman and Honzik (1930) built a maze to investigate latent learning in rats. The
study also shows that rats actively process information rather than operating on a stimulus response relationship.

Procedure
In their study 3 groups of rats had to find their way
around a complex maze. At the end of the maze there
was a food box. Some groups of rats got to eat the food,
some did not.

Group 1: Rewarded
> Day 1-17: Every time they got to the end, given food (i.e. reinforced).

Group 2: Delayed Reward
> Day 1-10: Every time they got to the end, taken out.
> Day 11-17: Every time they got to the end, given food (i.e. reinforced).

Group 3: No Reward
> Day 1-17: Every time they got to the end, taken out.

RESULTS
The delayed reward group learned the route on days 1 to 10 and formed a cognitive map of the maze. They
took longer to reach the end of the maze because there was no motivation for them to perform. From day 11
onwards they had a motivation to perform (i.e. food) and reached the end before the reward group.
This shows that between stimulus (the maze) and response (reaching the end of the maze) a mediational
process was occurring: the rats were actively processing information in their brains by mentally using their
cognitive map.
In a paper that summarizes the study just described, "Cognitive maps in rats and men" (1948), Tolman
concludes with an argument that he calls "cavalier and dogmatic," proposing that humans have cognitive
maps that not only situate them in space, but within a broader network of causal, social and emotional
relationships. A narrow map can lead one to discount outsiders; a broader map, to understanding and empathy.

CONCLUSION
Tolman's study indicated that the rats had a cognitive map of the whole situation; thus, they knew where to pass
when one route was closed.
Reacting to behaviorism, Tolman said that when applied to human beings, behaviorism showed too little
appreciation for the cognitive aspect of behavior. He insisted that humans do not simply respond to stimuli but
rather act from beliefs and attitudes. Behavior is, therefore, goal-oriented and defined by a purpose: it is either
going towards something or getting away from something.
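The latent-learning pattern can be sketched with a toy model (the error scores below are invented for illustration, not Tolman and Honzik's data): every group builds its map each day it runs the maze, but only motivated rats convert that knowledge into fewer errors, so the delayed group's errors collapse as soon as food appears on day 11.

```python
# Toy model of the Tolman & Honzik design (illustrative numbers, not real data):
# map-building happens on every run, but performance depends on motivation.

def errors(day, reward_start):
    """Hypothetical error score for one group on a given day (1-17)."""
    knowledge = min(day, 10)              # map-building saturates after ~10 days
    motivated = reward_start is not None and day >= reward_start
    return (12 - knowledge) if motivated else 12   # knowledge shows only when motivated

rewarded   = [errors(d, 1)    for d in range(1, 18)]  # Group 1: food from day 1
delayed    = [errors(d, 11)   for d in range(1, 18)]  # Group 2: food from day 11
unrewarded = [errors(d, None) for d in range(1, 18)]  # Group 3: never fed

print("delayed group, days 10 and 11:", delayed[9], delayed[10])
```

The abrupt drop on day 11 illustrates that the learning was already there, latent, before the reward made it visible.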

APPLICATION
We can apply this to a lost cat. A teenager found a lost cat in front of his house, fed it, and took care of
it for a week. After that week, the teenager brought the cat to a faraway place. After a day, the cat could still find its way back to the
teenager's house to eat.
http://www.npr.org/blogs/13.7/2013/02/11/171578224/of-rats-and-men-edward-c-tolman
http://psychology.about.com/od/lindex/fl/What-Is-Latent-Learning.htm
http://faculty.frostburg.edu/mbradley/psyography/edwardtolman.html
http://psychology.ucdavis.edu/sommerb/sommerdemo/mapping/cogmap.htm

Wolfgang Kohler

(22 January 1887 11 June 1967)


In the 1920's, German psychologist Wolfgang Kohler was studying the
behavior of apes. He designed some simple experiments that led to the
development of one of the first cognitive theories of learning, which he
called insight learning.
Insight learning is the abrupt realization of a problem's solution. It
is not the result of trial and error, of responding to an
environmental stimulus, or of observing someone else attempt
the problem. It is a completely cognitive experience, which requires the
ability to visualize the problem and the solution internally, in the mind's
eye so to speak, before initiating a behavioral response.
Insight learning is considered a type of learning because it results in a long-lasting change. Following the
occurrence of insight, the realization of how to solve the problem can be repeated in future similar situations.

Experiment
In his experiment, Kohler hung a piece of fruit just
out of the reach of each chimp. He then provided the
chimps with either two sticks or three boxes, then
waited and watched. Kohler noticed that after the
chimps realized they could not simply reach or jump
up to retrieve the fruit, they stopped, had a seat, and
thought about how they might solve the problem.
Then after a few moments, the chimps stood up and
proceeded to solve the problem.
In the first scenario, the problem was solved by placing the smaller stick into the longer stick to create one very
long stick which could be used to knock the hanging fruit down. In the second scenario, the chimps solved
the problem by stacking the boxes on top of each other, which allowed them to climb to the top of the stack
and reach the fruit.

CONCLUSION
Learning occurs in a variety of ways. Sometimes it is the result of direct observation; other times it is the result
of experience through personal interactions with the environment. Kohler called this newly observed type of
learning insight learning.

APPLICATION
An example of insight learning in everyday life: a boy trying to get a toy from above the refrigerator uses a small chair but can't reach it. Next he uses a medium chair and almost reaches
it. Lastly he uses a tall chair and reaches the toy. After failing the first couple of times, he realizes that it is
best to use the tall chair instead of the small one when reaching a high place.

http://education-portal.com/academy/lesson/insight-learning-wolfgang-kohler-theory-definitionexamples.html#lesson

Robert Gagne (1916-2002)

Gagne is considered to be an experimental psychologist concerned with
learning and instruction.
He is known for his skills hierarchy, which presents simple skills and builds toward
complex ones.
His learning theory is summarized as the "Gagne Assumption," which consists of
five types of learning and nine events of instruction.

Taxonomy of Human learning


Gagné identifies five major categories of learning: verbal information, intellectual skills, cognitive strategies,
motor skills, and attitudes. Different internal and external conditions are necessary for each
type of learning. The following matrix is abstracted from Gredler's (1997) descriptions of Gagné's conditions of
learning:

Gagne developed three principles that he felt were integral for successful instruction:
1. Providing instruction on the set of component tasks that build toward a final task
2. Ensuring that each component task is mastered
3. Sequencing the component tasks to ensure optimal transfer to the final task
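Gagné's three principles amount to ordering component tasks so every prerequisite is mastered before the task that builds on it. That is a topological sort of the skills hierarchy. The sketch below uses a hypothetical hierarchy for teaching long division; the skill names and dependencies are invented for illustration.

```python
from graphlib import TopologicalSorter  # standard library, Python 3.9+

# Hypothetical skill hierarchy: each skill maps to the component skills
# that must be mastered first.
prerequisites = {
    "long division": {"multiplication", "subtraction"},
    "multiplication": {"addition"},
    "subtraction": {"counting"},
    "addition": {"counting"},
    "counting": set(),
}

# static_order() yields a teaching sequence in which no skill appears
# before its prerequisites - simple skills first, the final task last.
order = list(TopologicalSorter(prerequisites).static_order())
print(order)
```

Any ordering the sorter produces satisfies Gagné's sequencing principle: component tasks always precede the tasks that depend on them.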

Eight Conditions of Learning


1. Signal learning- the learner makes a general response to a signal
2. Stimulus- response learning- the learner makes a precise response to a signal
3. Chaining- the connection of a set of individual stimulus and responses in a sequence
4. Verbal association- the learner makes associations using verbal connections
5. Discrimination learning- the learner makes different responses to different stimuli that are somewhat alike
6. Concept learning- the learner develops the ability to make a generalized response based on a class of stimuli
7. Rule learning- a rule is a chain of concepts linked to a demonstrated behaviour
8. Problem solving- the learner discovers a combination of previously learned rules and applies them to solve a
novel situation

CONCLUSION
Gagné concluded that instructional theory should address the specific factors that contribute to the learning of
complex skills.
APPLICATION

We can apply this in a class: every week, the teacher gives a long exam to check whether the
students understood the week's discussions. After checking it, the teacher explains the correct and wrong
answers to the class and answers the students' questions.
http://www.icels-educators-for-learning.ca/index.php?option=com_content&view=article&id=54&Itemid=73
http://www.personal.psu.edu/wxh139/gagne.html

http://www.theoryfundamentals.com/gagne.htm

Albert Bandura
In social learning theory, Albert Bandura (1977) states that behaviour is
learned from the environment through the process of observational
learning.
Unlike Skinner, Bandura (1977) believes that humans are active
information processors and think about the relationship between their
behaviour and its consequences. Observational learning could not occur
unless cognitive processes were at work.
There are three core concepts at the heart of social learning theory. First is
the idea that people can learn through observation. Next is the notion that
internal mental states are an essential part of this process. Finally, this
theory recognizes that just because something has been learned, it does
not mean that it will result in a change in behaviour.

1. People can learn through observation.

Observational Learning
Bandura identified three basic models of observational learning:
A live model, which involves an actual individual demonstrating or acting out a behavior.
A verbal instructional model, which involves descriptions and explanations of a behavior.
A symbolic model, which involves real or fictional characters displaying behaviors in books, films, television
programs, or online media.

2. Mental states are important to learning.


Intrinsic Reinforcement
Bandura noted that external, environmental reinforcement was not the only factor to influence learning and
behavior. He described intrinsic reinforcement as a form of internal reward, such as pride, satisfaction, and a
sense of accomplishment. This emphasis on internal thoughts and cognitions helps connect learning theories to
cognitive developmental theories. While many textbooks place social learning theory with behavioral theories,
Bandura himself describes his approach as a 'social cognitive theory.'

3. Learning does not necessarily lead to a change in behavior.


While behaviorists believed that learning led to a permanent change in behavior, observational learning
demonstrates that people can learn new information without demonstrating new behaviors.

The Modeling Process


Not all observed behaviors are effectively learned. Factors involving both the model and the learner can play a
role in whether social learning is successful. Certain requirements and steps must also be followed. The
following steps are involved in the observational learning and modeling process:

Attention:
In order to learn, you need to be paying attention. Anything that distracts your attention is going to have a
negative effect on observational learning. If the model is interesting or there is a novel aspect to the situation, you
are far more likely to dedicate your full attention to learning.

Retention:
The ability to store information is also an important part of the learning process. Retention can be affected by a
number of factors, but the ability to pull up information later and act on it is vital to observational learning.

Reproduction:

Once you have paid attention to the model and retained the information, it is time to actually perform the
behavior you observed. Further practice of the learned behavior leads to improvement and skill advancement.

Bobo Doll Experiment


Bandura (1961) conducted a study to investigate if social behaviors (i.e. aggression) can be acquired by
observation and imitation.

Sample
Bandura, Ross and Ross (1961) tested 36 boys and 36 girls from the Stanford University Nursery School, aged
between 3 and 6 years.
The researchers pre-tested the children for how aggressive they were by observing the children in the nursery
and judged their aggressive behavior on four 5-point rating scales. It was then possible to match the children in
each group so that they had similar levels of aggression in their everyday behavior. The experiment is therefore
an example of a matched pairs design.
To test the inter-rater reliability of the observers, 51 of the children were rated by two observers independently
and their ratings compared. These ratings showed a very high reliability correlation (r = 0.89), which suggested
that the observers had good agreement about the behavior of the children.
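The reliability correlation of r = 0.89 quoted above is a Pearson product-moment correlation between the two observers' ratings. The sketch below computes Pearson's r from scratch; the ratings are invented for illustration, not Bandura's data.

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson product-moment correlation between two raters' scores."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical aggression ratings (1-5 scale) from two independent observers:
rater_a = [2, 4, 3, 5, 1, 4, 2, 5]
rater_b = [2, 5, 3, 4, 1, 4, 2, 5]
print(round(pearson_r(rater_a, rater_b), 2))  # close to 1.0 = high agreement
```

A value near 1.0, like the study's 0.89, means the two observers ranked the children's aggressiveness almost identically.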

Method
A lab experiment was used, in which the independent variable (type of model) was manipulated in three
conditions:

Aggressive model shown to 24 children

Non-aggressive model shown to 24 children

No model shown (control condition) - 24 children

Stage 1. Modeling
In the experimental conditions children were individually shown into a room containing toys and played with
some potato prints and pictures in a corner for 10 minutes while either:
1. 24 children (12 boys and 12 girls)
watched a male or female model
behaving aggressively towards a toy
called a 'Bobo doll'. The adults
attacked the Bobo doll in a
distinctive manner - they used a
hammer in some cases, and in others threw the doll in the air and shouted "Pow, Boom".

2. Another 24 children (12 boys and 12 girls) were exposed to a non-aggressive model who played in a quiet
and subdued manner for 10 minutes (playing with a tinker toy set and ignoring the bobo-doll).
3. The final 24 children (12 boys and 12 girls) were used as a control group and not exposed to any model at all.

Stage 2: Aggression Arousal


All the children (including the control group)
were subjected to 'mild aggression arousal'.
Each child was (separately) taken to a room
with relatively attractive toys. As soon as the
child started to play with the toys the
experimenter told the child that these were the
experimenter's very best toys and she had
decided to reserve them for the other children.

Stage 3: Test for Delayed Imitation


The next room contained some aggressive toys and
some non-aggressive toys. The non-aggressive toys
included a tea set, crayons, three bears and plastic farm
animals. The aggressive toys included a mallet and peg
board, dart guns, and a 3 foot Bobo doll.
The child was in the room for 20 minutes, and their
behavior was observed and rated through a one-way
mirror. Observations were made at 5-second intervals,
giving 240 response units for each child.
Behaviors that did not imitate the model
were also recorded, e.g. punching the Bobo doll on the
nose.

RESULTS
Children who observed the aggressive model made far more imitative aggressive responses than those who
were in the non-aggressive or control groups.
There was more partial and non-imitative aggression among those children who had observed aggressive
behavior, although the difference for non-imitative aggression was small.
The girls in the aggressive model condition also showed more physical aggressive responses if the model was
male but more verbal aggressive responses if the model was female. However, the exception to this general

pattern was the observation of how often they punched Bobo, and in this case the effects of gender were
reversed.
Boys were more likely to imitate same-sex models than girls. The evidence for girls imitating same-sex
models is not strong.
Boys imitated more physically aggressive acts than girls. There was little difference in the verbal aggression
between boys and girls.

CONCLUSION
Children learn social behavior such as aggression through the process of observational learning - through
watching the behavior of another person. This study has important implications for the effects of media violence
on children.
Bandura stressed two important concepts around which social learning revolves: modelling and imitation. Human
beings learn from the models they are exposed to. Children who often see aggressive behaviour display more
aggressive behaviour than those who are not exposed to such behaviour. Furthermore, studies show that the age,
sex, and status of models are also crucial factors. Imitation involves copying the behaviour of the model one is
exposed to. High-status models tend to be imitated more often. Oftentimes, the model's similarity to the subject
is an important factor in the imitation process.

APPLICATION
We can apply this to a teenager watching his father fix a broken electric fan.
From this the teenager gets an idea of how to fix an electric fan; therefore, he learned from
observing his father.
http://www.simplypsychology.org/bobo-doll.html

http://www.learning-theories.com/social-learning-theory-bandura.html
http://www.simplypsychology.org/bandura.html

Benjamin Samuel Bloom


(February 21, 1913 September 13, 1999)
Taxonomy of Learning Domains is a framework for classifying
educational objectives, which are the statements of what educators expect
their students to have learned by the end of instruction.
His theory can be summarized best with this quote by Bloom: "The
purpose of education is to change the thoughts, feelings, and actions of
students."
Bloom believed that there were three domains of learning:

Cognitive - mental skills (Knowledge)

> involves knowledge and the development of intellectual skills.


> (ex: recognition of specific facts)

Affective - growth in feelings or emotional areas (Attitude)

> includes the manner in which we deal with things emotionally


> (ex: feelings, appreciations, etc.)

Psychomotor - manual or physical skills (Skills)

> involves physical movement, coordination, and use of the motor skills
> (ex: perception & response)
1. Bloom's taxonomy - cognitive domain - (intellect - knowledge - 'think')

Knowledge
> Behaviour: recall or recognise information.
> Examples: multiple-choice test; recount facts or statistics; recall a process, rules, definitions; quote a law or procedure.
> Key words: arrange, define, describe, label, list, memorise, recognise, relate, reproduce, select, state.

Comprehension
> Behaviour: understand meaning, re-state data in one's own words, interpret, extrapolate, translate.
> Examples: explain or interpret meaning from a given scenario or statement; suggest treatment, reaction or solution to a given problem; create examples or metaphors.
> Key words: explain, reiterate, reword, critique, classify, summarise, illustrate, translate, review, report, discuss, re-write, estimate, interpret, theorise, paraphrase, reference.

Application
> Behaviour: use or apply knowledge, put theory into practice, use knowledge in response to real circumstances.
> Examples: put a theory into practical effect; demonstrate; solve a problem; manage an activity.
> Key words: use, apply, discover, manage, execute, solve, produce, implement, construct, change, prepare, conduct, perform, react, respond, role-play.

Analysis
> Behaviour: interpret elements, organizational principles, structure, construction, internal relationships; quality and reliability of individual components.
> Examples: identify constituent parts and functions of a process or concept, or de-construct a methodology or process, making a qualitative assessment of elements, relationships, values and effects; measure requirements or needs.
> Key words: analyse, break down, catalogue, compare, quantify, measure, test, examine, experiment, relate, graph, diagram, plot, extrapolate, value, divide.

Synthesis (create/build)
> Behaviour: develop new unique structures, systems, models, approaches, ideas; creative thinking, operations.
> Examples: develop plans or procedures, design solutions, integrate methods, resources, ideas, parts; create teams or new approaches; write protocols or contingencies.
> Key words: develop, plan, build, create, design, organise, revise, formulate, propose, establish, assemble, integrate, re-arrange, modify.

Evaluation
> Behaviour: assess effectiveness of whole concepts in relation to values, outputs, efficacy, viability; critical thinking, strategic comparison and review; judgement relating to external criteria.
> Examples: review strategic options or plans in terms of efficacy, return on investment or cost-effectiveness, practicability; assess sustainability; perform a SWOT analysis in relation to alternatives; produce a financial justification for a proposition or venture; calculate the effects of a plan or strategy; perform a detailed and costed risk analysis with recommendations and justifications.

2. Bloom's taxonomy - affective domain - (feeling, emotions - attitude - 'feel')

Receive
> Behaviour: open to experience, willing to hear.
> Examples: listen to teacher or trainer; take interest in session or learning experience; take notes; turn up; make time for the learning experience; participate passively.
> Key words: ask, listen, focus, attend, take part, discuss, acknowledge, hear, be open to, retain, follow, concentrate, read, do, feel.

Respond
> Behaviour: react and participate actively.
> Examples: participate actively in group discussion; active participation in an activity; interest in outcomes; enthusiasm for action; question and probe ideas; suggest interpretation.
> Key words: react, respond, seek clarification, interpret, clarify, provide other references and examples, contribute, question, present, cite, become animated or excited, help team, write, perform.

Value
> Behaviour: attach values and express personal opinions.
> Examples: decide worth and relevance of ideas and experiences; accept or commit to a particular stance or action.
> Key words: argue, challenge, debate, refute, confront, justify, persuade, criticise.

Organise or Conceptualize values
> Behaviour: reconcile internal conflicts; develop a value system.
> Examples: qualify and quantify personal views; state personal position and reasons; state beliefs.
> Key words: build, develop, formulate, defend, modify, relate, prioritise, reconcile, contrast, arrange, compare.

Internalize or characterise values
> Behaviour: adopt a belief system and philosophy.
> Examples: self-reliant; behave consistently with a personal value set.
> Key words: act, display, influence, solve, practice.

3. Bloom's taxonomy - psychomotor domain - (physical - skills - 'do')

Dave's psychomotor domain taxonomy

Imitation
> Behaviour: copy the action of another; observe and replicate.
> Examples: watch teacher or trainer and repeat an action, process or activity.
> Key words: copy, follow, replicate, repeat, adhere.

Manipulation
> Behaviour: reproduce an activity from instruction or memory.
> Examples: carry out a task from written or verbal instruction.
> Key words: re-create, build, perform, execute, implement.

Precision
> Behaviour: execute a skill reliably, independent of help.
> Examples: perform a task or activity with expertise and to high quality without assistance or instruction; able to demonstrate the activity to other learners.
> Key words: demonstrate, complete, show, perfect, calibrate, control.

Articulation
> Behaviour: adapt and integrate expertise to satisfy a non-standard objective.
> Examples: relate and combine associated activities to develop methods to meet varying, novel requirements.
> Key words: construct, solve, combine, coordinate, integrate, adapt, develop, formulate, modify, master.

Naturalization
> Behaviour: automated, unconscious mastery of the activity and related skills at a strategic level.
> Examples: define aim, approach and strategy for use of activities to meet a strategic need.
> Key words: design, specify, manage, invent, project-manage.
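The "key words" lists suggest a simple practical use of the taxonomy: classify a written learning objective by its leading verb. Below is a minimal sketch using only a small, hand-picked subset of the cognitive-domain verbs; the mapping is illustrative, not a complete or authoritative verb list.

```python
# Minimal sketch: map a few cognitive-domain key words to their level,
# then classify a learning objective by its leading verb.
# Only a small illustrative subset of verbs is included.
LEVELS = {
    "knowledge": {"define", "list", "state", "label"},
    "comprehension": {"explain", "summarise", "paraphrase"},
    "application": {"apply", "solve", "implement", "demonstrate"},
    "analysis": {"analyse", "compare", "examine"},
    "synthesis": {"design", "formulate", "assemble"},
    "evaluation": {"assess", "justify", "review"},
}

def classify(objective):
    """Return the cognitive level suggested by the objective's first verb."""
    verb = objective.lower().split()[0]
    for level, verbs in LEVELS.items():
        if verb in verbs:
            return level
    return "unknown"

print(classify("Explain the causes of extinction"))
print(classify("Design a maze experiment"))
```

A lesson planner could use such a lookup to check that a syllabus covers more than the Knowledge level.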

Bloom's Taxonomy
In order to reach the highest level of the
cognitive taxonomy, Evaluation (the ability
to evaluate), the student needs the necessary
knowledge at all the levels below Evaluation.

CONCLUSION
Bloom's Taxonomy is a wonderful reference model for all involved in teaching, training, learning, coaching - in
the design, delivery and evaluation of these development methods. At its basic level (refresh your memory of
the Bloom Taxonomy overview if helpful), the Taxonomy provides a simple, quick and easy checklist to start to
plan any type of personal development. It helps to open up possibilities for all aspects of the subject or need
concerned, and suggests a variety of the methods available for delivery of teaching and learning. As with any
checklist, it also helps to reduce the risks of overlooking some vital aspects of the development required. The
more detailed elements within each domain provide additional reference points for learning design and
evaluation, whether for a single lesson, session or activity, or training need, or for an entire course, programme
or syllabus, across a large group of trainees or students, or a whole organisation. And at its most complex,
Bloom's Taxonomy is continuously evolving, through the work of academics following in the footsteps of
Bloom's early associates, as a fundamental concept for the development of formalised education across the

world. As with so many of the classical models involving the development of people and organisations, you
actually have a choice as to how to use Bloom's Taxonomy. It's a tool - or more aptly - a toolbox. Tools are most
useful when the user controls them; not vice-versa. Use Bloom's Taxonomy in the ways that you find helpful for
your own situation.

APPLICATION
We can apply this in a PE class: the instructor must provide the objectives that need to be met after the
activity and give the students knowledge of what they will do by discussing it with them. Next, the
instructor starts the activity and sees whether the students understood the discussion. After the activity, the
instructor discusses the changes that occurred in the students as a result of the activity.

http://www.nwlink.com/~donclark/hrd/bloom.html
http://thesecondprinciple.com/instructional-design/threedomainsoflearning/

Neal Elgar Miller


(August 3, 1909 March 23, 2002)
Learning theory is a model of psychology that explains
human responses through the concept of learning. Learning
theory includes behaviorism, cognitive theory, cognitive-behavioral theory, and constructivism.
Motivation theory is a concept that describes the activation of
goal-oriented behaviors in humans.
Biofeedback is a patient-guided treatment that teaches an
individual to control muscle tension, pain, body temperature,
brain waves, and other bodily functions and processes through
relaxation, visualization, and other cognitive control techniques. The name biofeedback refers to
the biological signals that are fed back, or returned, to the patient in order for the patient to develop
techniques of manipulating them.
With respect to the above, Miller made the following main contributions:
1) Fear as a learned drive. He showed that fear, through learning, can become attached to cues and then
function to reinforce whatever responses escape or avoid these cues. His later efforts to show that hunger can
also become a learned drive failed.
2) Approach-Avoidance Conflict. By setting up in a rat both an approach to a goal for food and an avoidance of
the same goal for fear of shock, he studied the resulting conflict between the approach and avoidance
gradients, and at what proximity to the goal their intersection would cause the rat to hover.

3) Displacement. The point of hovering was then studied as a manifestation of displacement, i.e., to what
degree of similarity to the desired goal object would the rat approach another object before that
object's aversiveness, also suggested by its similarity to the goal object, balanced out the
approach.
4) In a study premonitory of the Rescorla-Wagner model, he and a student (David Egger) examined how
the informativeness of a reward affects its reinforcing value.
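The Rescorla-Wagner model mentioned here formalizes informativeness: associative strength changes by dV = a(lambda - V_total), so a cue added only after another cue already predicts the outcome gains almost no strength. The sketch below is a toy demonstration of that effect; the learning rate, lambda, and trial counts are arbitrary illustrative values (the US and CS salience rates are folded into a single alpha).

```python
# Toy Rescorla-Wagner update: dV = alpha * (lam - V_total), where lam is the
# maximum associative strength the US supports.  Cue 2 is introduced only
# after cue 1 already predicts the US well, so cue 2 is uninformative and
# gains little strength - echoing Egger and Miller's point about the
# informativeness of a reward.  All parameter values are arbitrary.

def train(trials, alpha=0.3, lam=1.0):
    v1 = v2 = 0.0
    # Phase 1: cue 1 alone, paired with the US on every trial.
    for _ in range(trials):
        v1 += alpha * (lam - (v1 + v2))
    # Phase 2: cues 1 and 2 together; the prediction error is already tiny.
    for _ in range(trials):
        err = lam - (v1 + v2)
        v1 += alpha * err
        v2 += alpha * err
    return v1, v2

v1, v2 = train(20)
print(round(v1, 3), round(v2, 3))  # cue 2 ends up far weaker than cue 1
```

This is the "blocking" pattern: reinforcement alone is not enough; a cue must carry new information to acquire strength.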

5) Discoveries emerging from his tests of Clark Hull's Strong Form of the Drive-Reduction Hypothesis of
Reward.

6) Some of Miller and his laboratory's other contributions, primarily regarding physiological aspects of
motivation, are covered in the Neuroscience section.

Biofeedback and Self-Regulation


Neal Miller was a founding father of the field of biofeedback. The following articles deal with biofeedback, its
history, and new directions. Included within the material are both some of Miller's own articles and unique
material that the committee has gathered. More recent works represent either projects that derived from Prof.
Miller's activity or that of his students.

Purpose
Biofeedback has been used to successfully treat a number of disorders and their symptoms, including
temporomandibular joint disorder (TMJ), chronic pain, irritable bowel syndrome (IBS), Raynaud's syndrome,
epilepsy, attention-deficit hyperactivity disorder (ADHD), migraine headaches, anxiety, depression, traumatic
brain injury, and sleep disorders.
Illnesses that may be triggered at least in part by stress are also targeted by biofeedback therapy. Certain types
of headaches, high blood pressure, bruxism (teeth grinding), post-traumatic stress disorder, eating disorders,
substance abuse, and some anxiety disorders may be treated successfully by teaching patients the ability to relax
and release both muscle and mental tension. Biofeedback is often just one part of a comprehensive treatment
program for some of these disorders.

How it all began


In 1957, Neal Miller read K. M. Bykov's book, The Cerebral Cortex and the Internal Organs, which reported
that autonomic responses in a wide variety of internal organs could be classically conditioned. Noting the close
concordance already established between the operations of classical conditioning and instrumental ones, Miller

was inspired to embark on attempts to show that autonomic response could also be instrumentally conditioned.
If successful, the medical benefits would be enormous. Heartened by an experiment finding that thirsty dogs
were able to increase or decrease salivation in order to obtain water rewards but wishing to rule out the
possibility that the salivation was somehow triggered automatically by the somatic postural changes also
adopted by the dogs, Miller paralyzed rats' somatic musculature with curare, which left heart-rate responding
relatively unaffected. Brain stimulation reward was then used as the reinforcement for any designated increase
or decrease in heart rate the rats made. After an initial series of successful experiments using this rat
preparation, the effect mysteriously disappeared despite repeated and highly sophisticated attempts to identify
the cause and reinstate the effect.
However, humans paralyzed by gunshot wounds proved better at gaining autonomic control, in their
case elevating a profoundly hypostatic blood pressure. This fits Miller and Dollard's four fundamentals necessary for
effective instrumental learning, as italicized: these patients had a high drive to do so, since otherwise they fainted
whenever they sat or stood up. Unlike the rats, they were shown their own amplified blood pressure readings,
thus providing them an informational biofeedback cue about their own performance. To this information, the
response they initially used to try to change their readings was to think emotional, often sexy, thoughts, to which
the desired blood pressure changes are normally reflexly connected. Whenever there was a desired response,
even if initially too small an increment to be clinically relevant, the mere detectable fact of it was a reward,
given the paralytics' high achievement motivation.
Similar success has been reported in intact humans, for example in causing one arm to blush and the other arm
to blanch, which makes it difficult to construe by what mediating cognitive construction these autonomic
responses could be reflexly mediated. And as Miller loved to point out, toilet training, particularly the learning
of control over the autonomic bladder sphincters, is a well-known (and rewarded) universal fact of life.
Certainly the application of biofeedback methodology promoted by Miller and his associates has proved highly
beneficial medically in treating a wide variety of problems, such as idiopathic scoliosis, enuresis, and migraine,
problems involving both voluntary and autonomic response systems. Biofeedback methodologies as applied to
neuroimaging and the like, uses not explored by Miller, have come very much into standard usage as successful
treatments for mood and other mentation disorders in recent years.

CONCLUSION
Compare a hungry, active dog with a weak, inactive dog. The hungry animal learns to get food by pressing a
bar, while the satiated animal goes to sleep. To demonstrate that the failure to learn is due to lack of motivation,
a mild electric shock is supplied; the satiated animal becomes active and learns to strike a lever which turns off
the shock. The animal also learns to rotate a wheel, bite a rubber tube, and strike another animal to avoid
electric shock.

APPLICATION
We can apply this to a boy in love. He will do anything, give everything, and face anything just for
one girl. This behavior is goal-oriented, because his goal is to be with the girl he loves.

http://nealmiller.org/?page_id=82

Edwin Ray Guthrie (1886-1959)


Law of Contiguity:
Guthrie's law of contiguity states that a combination of stimuli which has accompanied a movement will, on its
recurrence, tend to be followed by that movement. He said that all learning is based on a stimulus-response
association. Movements are small stimulus-response combinations. These movements make up an act. A
learned behaviour is a series of movements. It takes time for the movements to develop into an act. He believed
that learning is incremental. Some behaviour involves repetition of movements, and what is learned are
movements, not behaviours (Internet, 1999).
Guthrie stated that each movement produces stimuli and the stimuli then become conditioned. Every motion
serves as a stimulus to many sense organs in muscles, tendons and joints. Stimuli which are acting at the time of
a response become conditioners of that response. Movement-produced stimuli
have become conditioners of the succession of movements. The movements form a series often referred to as a
habit. Our movements are often classified as forms of conditioning or association. Some behaviour involves the
repetition of movements, so that conditioning can occur long after the original stimulus.
Guthrie rejected the law of frequency. He believed in one-trial learning. One-trial learning states that a stimulus
pattern gains its full associative strength on the occasion of its first pairing with a response. He did not believe
that learning is dependent on reinforcement. He defined reinforcement as anything that alters the stimulus
situation for the learner (Thorne and Henley, 1997). He rejected reinforcement because it occurs after the
association between the stimulus and the response has occurred. He believed that learning is the process of
establishing new stimuli as cues for some specified response (Sills, 1968).
Guthrie believed that the recency principle plays an integral role in the learning process. This principle states
that which was done last in the presence of a set of stimuli will be that which is done when the stimulus
combination occurs again. He believed that it is the time relation between the substitute stimulus and the
response that counts. Associative strength is greater when the association is novel. When two associations are
present with the same cue, the more recent will prevail. The stimulus-response connections tend to grow weaker
with elapsed time.
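The one-trial learning and recency principles described above can be illustrated with a toy sketch (this is an illustrative analogy, not Guthrie's own formalism): a stimulus pattern is bound to a response at full strength on its first pairing, and a newer pairing simply replaces the older one, so the most recent association to a cue always prevails.

```python
# Toy illustration of one-trial learning plus the recency principle.
# The class name and stimulus/response strings are hypothetical.

class ContiguityLearner:
    def __init__(self):
        # Maps a stimulus pattern to the last response made in its presence.
        self.associations = {}

    def experience(self, stimulus, response):
        # One-trial learning: the pairing gains full associative strength
        # immediately; a newer pairing overwrites the older one.
        self.associations[stimulus] = response

    def react(self, stimulus):
        # Recency: whatever was done last to this cue is done again.
        return self.associations.get(stimulus)

learner = ContiguityLearner()
learner.experience("puzzle box", "claw at door")
learner.experience("puzzle box", "press post")  # more recent pairing wins
print(learner.react("puzzle box"))  # press post
```

The overwrite in `experience` is the whole model: there is no gradual strengthening, only replacement, which is exactly what distinguishes Guthrie's account from frequency-based theories.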

Contiguity theory implies that forgetting is a form of retroactive or associative inhibition. Associative
inhibition occurs when one habit prevents another due to some stronger stimuli. Guthrie stated that forgetting is
due to interference because the stimuli become associated with new responses (Internet, 1999). He believed that
you can use sidetracking to change previous conditioning. This involves discovering the initial cues for the habit
and associating other behavior with those cues. Sidetracking causes the internal associations to break up. It is
easier to sidetrack than to break a habit. Other methods used to break habits include the threshold, fatigue, and
incompatible response methods. Fatigue is a change in behavior caused by altered chemical states in the
muscles and bloodstream. It has the effect of decreasing the conditioned response: the stimulus comes to
condition other responses, thus inhibiting the original one. The threshold method involves presenting cues at
such low levels that the response does not occur; the stimulus is then gradually increased, thus raising the
response threshold. The incompatible response method involves presenting the stimulus for the behavior we
want to remove when other aspects of the situation will prevent the response from occurring (Thorne and
Henley, 1997). Excitement facilitates learning and also the stereotyping of a habit. It is the conflict responsible
for the excitement that breaks up the old habit. Breaking up a habit involves finding the cues that initiate the
action and practicing another response to such cues.

Problem solving Experiment


Guthrie did a collaborative study with George P. Horton which involved the stereotyped behaviour of cats in
the puzzle box. Horton set up the trials and supervised the photography, while Guthrie took notes in shorthand.
The Guthrie-Horton experiment illustrated the associative theory of learning. They used a glass-panelled box
which allowed them to photograph the cats' movements. The box was constructed so that the cat could open
the door by touching a post. It took approximately 15 minutes for the cat to touch the post. The second time,
the cat had the tendency to duplicate its first behaviour. The photographs showed that the cats repeated the
same sequence of movements associated with their previous escape from the box. This showed an example of
stereotyped behaviour.

CONCLUSION
The Guthrie-Horton experiment allows us to assume that an animal learns an association between a
stimulus and a behavioural act after only one experience. Guthrie stated that numerous trials are not
duplications, but learning to respond to similar stimulus complexes. Only after we form several
associations can the behavioural criterion of learning be achieved.
"We learn only what we ourselves do" (Sills, 1968). The responses we wish to cue to various
stimuli must be made by the individual himself in the presence of those stimuli. Guthrie extends this
philosophy when emphasizing that circumstances must be changed in order to further learning.
Teachers often limit their involvement in the classroom in order to further student learning. By
doing this, they allow the student to make the desired responses without stimuli from the teacher.
Guthrie had a large interest in the evaluation of teaching ability. He stressed the idea that the
circumstances under which one wishes the desired response to be made in the future should be
approximated as closely as possible by the present circumstances.
APPLICATION

> A cat learns to repeat the same sequence of movements associated with its preceding escape
from the box, though improvement does take place.
> A boy used to always throw his bag around after coming back home from school, even after
repeated admonishment from his parents. One day his mother told him to go out of the house,
reenter, and put his bag in order. The throwing-on-the-floor habit disappeared, and the more recent
cleaning-up response became a new habit for the boy.
> One of your instructors smiles at you as she hands back the exam that she has just corrected. You
discover that you have gotten an A- on the exam, and you get a comfortable feeling in the pit of
your stomach. The next time your instructor smiles at you, the same comfortable feeling returns.
http://www.muskingum.edu/~psych/psycweb/history/guthrie.htm

Clark Leonard Hull


(May 24, 1884 - May 10, 1952)
Drive Reduction Theory postulates that behavior occurs in response to "drives" such as hunger, thirst, sexual
interest, feeling cold, etc. When the goal of the drive is attained (food, water, mating, warmth), the drive is
reduced, at least temporarily. This reduction of drive serves as a reinforcer for learning. Thus learning involves a
dynamic interplay between survival drives and their attainment. The bonding of the drive with the goal of the
drive was a type of reinforcement, and his theory was a reinforcement theory of learning.
Hull believed that these drives and behaviors to fulfill the drives were influential in the evolutionary process as
described by Darwin. Movement sequences lead to need reduction as survival adaptations. He assumed that
learning could only occur with reinforcement of the responses that lead to meeting of survival needs, and that
the mechanism of this reinforcement was the reduction of a biological drive. Hull believed that human behavior
is a result of the constant interaction between the organism and its environment. The environment provides the
stimuli and the organism responds, all of which is observable. Yet there is a component that is not observable,
the change or adaptation that the organism needs to make in order to survive within its environment. Hull
explains, "when survival is in jeopardy, the organism is in a state of need (when the biological requirements for
survival are not being met) so the organism behaves in a fashion to reduce that need" ( Schultz & Schultz, 1987,
p 238). Simply, the organism behaves in such a way that reinforces the optimal biological conditions that are
required for survival.
Hull was an objective behaviorist. He never considered the conscious, or any mentalistic notion. He tried to
reduce every concept to physical terms. He viewed human behavior as mechanical, automatic and cyclical,
which could be reduced to the terms of physics. Obviously, he thought in terms of mathematics, and felt that
behavior should be expressed according to these terms. "Psychologist must not only develop a thorough
understanding of mathematics, they must think in mathematics" (Schultz & Schultz, 1987, p 239). In Hull's time
three specific methods were commonly used by researchers: observation, systematic controlled observation,
and experimental testing of hypotheses. Hull believed that an additional method was needed: the
hypothetico-deductive method. This involves deriving
postulates from which experimentally testable conclusions could be deduced. These conclusions would then be
experimentally tested.
Hull viewed the drive as a stimulus, arising from a tissue need, which in turn stimulates behavior. The strength
of the drive is determined by the length of deprivation, or by the intensity/strength of the resulting behavior.
He believed the drive to be non-specific, which means that the drive does not direct behavior; rather, it functions
to energize it. In addition this drive reduction is the reinforcement. Hull recognized that organisms were
motivated by other forces, secondary reinforcements. " This means that previously neutral stimuli may assume
drive characteristics because they are capable of eliciting responses that are similar to those aroused by the
original need state or primary drive" (Schultz & Schultz, 1987, p 240). So learning must be taking place within
the organism.
Hull's learning theory focuses mainly on the principle of reinforcement: when an S-R relationship is followed by
a reduction of the need, the probability increases that in future similar situations the same stimulus will create
the same prior response. Reinforcement can be defined in terms of reduction of a primary need. Just as Hull
believed that there were secondary drives, he also felt that there were secondary reinforcements - " If the
intensity of the stimulus is reduced as the result of a secondary or learned drive, it will act as a secondary
reinforcement" ( Schultz & Schultz, 1987, p 241). The way to strengthen the S-R response is to increase the
number of reinforcements; this is habit strength.
Clark Hull's Mathematico Deductive Theory of Behaviour relied on the belief that the link between the
S-R relationships could be anything that might affect how an organism responds; learning, fatigue, disease,
injury, motivation, etc. He labelled this relationship as "E", a reaction potential, or as sEr. Hull's goal was to
make a science out of all of these intervening factors. He classified his formula

sEr = (sHr x D x K x V) - (sIr + Ir) +/- sOr


as the Global Theory of Behaviour. Habit strength, sHr, is determined by the number of reinforcements. Drive
strength, D, is measured by the hours of deprivation of a need. K is the incentive value of a stimulus, and V is a
measure of stimulus connectedness. Inhibitory strength, sIr, grows with the number of non-reinforcements.
Reactive inhibition, Ir,
is when the organism has to work hard for a reward and becomes fatigued. The last variable in his formula is
sOr, which accounts for random error. Hull believed that this formula could account for all behaviour, and that it
would generate more accurate empirical data, which would eliminate all ineffective introspective methods
within the laboratory (Thomson, 1968).
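Since Hull expressed the theory mathematically, the formula above can be computed directly. The sketch below is only an illustration: the function name and all numeric values are made up, and the random-error term sOr is reduced to a fixed offset for reproducibility.

```python
# A minimal sketch of Hull's global formula for reaction potential:
#   sEr = (sHr x D x K x V) - (sIr + Ir) +/- sOr
# All names and values here are illustrative, not Hull's own data.

def reaction_potential(habit_strength, drive, incentive, connectedness,
                       conditioned_inhibition, reactive_inhibition,
                       oscillation=0.0):
    """Excitatory factors multiply together; inhibitory factors are
    summed and subtracted; oscillation (sOr) adds or subtracts error."""
    excitation = habit_strength * drive * incentive * connectedness
    inhibition = conditioned_inhibition + reactive_inhibition
    return excitation - inhibition + oscillation

# Example: a moderately strong habit under high drive, light inhibition.
sEr = reaction_potential(habit_strength=0.6, drive=0.8, incentive=0.9,
                         connectedness=1.0, conditioned_inhibition=0.1,
                         reactive_inhibition=0.05)
print(round(sEr, 3))  # 0.6*0.8*0.9*1.0 - (0.1+0.05) = 0.282
```

Note how the multiplicative form captures a key prediction of the theory: if any excitatory factor is zero (no habit, no drive, or no incentive), the whole reaction potential collapses to the inhibitory side, and no response is expected.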

CONCLUSION
Hull was interested in applying mathematical formulas to psychology, and it is simple to see how
this works with the Drive Reduction Theory.
If you have achieved homeostasis, your motivation is 0, since you have no drives to reduce. If you
are hungry, then your drive increases to 1. If you are really hungry, your drive becomes 2. If you
are also thirsty, your combined drive to satisfy hunger and thirst becomes 3. As drives accumulate,
your overall motivation increases.
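The arithmetic above amounts to summing the strengths of the active drives; a few lines make that explicit (the numeric drive levels are arbitrary, chosen only to match the example):

```python
# Overall motivation as the sum of active drive strengths.
# "Really hungry" = 2, "thirsty" = 1, matching the example above.
drives = {"hunger": 2, "thirst": 1}
motivation = sum(drives.values())
print(motivation)  # 3
```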

APPLICATION
When you are bored, you learn that by playing games you have fun; by doing so, you are able to
satisfy your drive to play. As a result, you will repeat that behaviour the next time you are bored.
If, however, your game is confiscated by your mom and you are unable to play, you would then
have to learn a new way to satisfy yourself, such as by going to your friend's house.
Generally speaking, Drive Reduction applies to everything that involves satisfying biological needs
associated with food, water, safety, sex, etc. All of these are primitive animalistic drives.
http://www.muskingum.edu/~psych/psycweb/history/hull.htm
http://psychology.about.com/od/motivation/a/drive-reduction-theory.htm
