
SYNOPSIS OF THE SEMINAR

WORKSHOP
Definitions, Context, Objectives, and Functions
Characteristics of Evaluation Designs
Types of Evaluation Models
ABCD Model
CIPP Model
Provus Discrepancy Model
Kerrigan Model
Goal Based Evaluation (GBE)
Goal Free Evaluation (GFE)

Evaluation Designs:
1. Seeks to clarify planning alternatives and to improve the program
2. Loyalty is to a particular program or project; the choice of evaluation topics is determined by the information needs of decision makers.

Evaluation Designs:
3. Methodological choices are scientific but non-experimental, and quite often naturalistic.
4. The time frame for the production of results is set by the program.
5. Professional rewards consist of the utilization of findings by decision makers and demonstrated improvement in program implementation.

MODELS of Research Evaluation

1. ABCD - Jesus Ochave
2. CIPP - Daniel Stufflebeam
3. Provus Discrepancy - Malcolm Provus
4. Kerrigan Evaluation - John Kerrigan
5. Goal Based Evaluation - Ralph Tyler
6. Goal Free Evaluation - Michael Scriven

ABCD MODEL
By: Dr. JESUS OCHAVE

Background
An evaluation model developed by Jesus Ochave (1994)
Comparable with classical models of evaluation such as Stufflebeam's CIPP Model and Provus's Discrepancy Model
Comprehensive enough to allow the evaluator to evaluate the aspects of the program that bear on its effectiveness
Flexible, and allows the evaluator to trace the causal factors that explain program effects without necessarily going through a mass of other data

Four Major Components of the ABCD Model
A - the respondents/subjects/clientele
B - the program/operations
C - the effects
D - the social impact
Each of these components has two dimensions: the intents/plans and the actualities/observations.

STEPS
Identify who the respondents are.
Provide a program/operation that will best suit or answer the needs of the respondents.
Check whether the program/operation provided has better effects on the respondents' cognitive or affective capabilities.
Verify whether the operation provided makes the respondents useful members of the community.

The Model
Spherical structure - symbolizes the flexibility of the components, which will actually depend on the kind of program being evaluated
Polygon with many sides - represents the many faces or areas of concern of the program

Vectors/arrows - represent the direction of the effects. Component A affects components B, C, and D. This means that the program has to be oriented according to the type of clientele, and the set of learning experiences must conform with the objectives of the program to have at least an impact on the clientele.

Broken line arrow - from component A to component C, represents an actuality of the effects on students. No matter what, the effects will vary depending on the actual students/clientele.
The discrepancy between the standards (solid lines) and the actual performance (broken lines) is indicated by the gap between the two boxes.
The bigger the gap between the intents and the actualities, the less positive the evaluation.

The lesser the gap, the more positive the evaluation.
The gaps explain the nature of the effects and impact. The data on discrepancies between the intents and the actualities serve as inputs to the management of the educational program.
The two lines (solid and broken) approximating each other would indicate an outstanding or excellent dimension, which means that what is intended is actually being achieved.

However, if the gap between the two boxes is big, then the discrepancy between the "what is" and the "what should be" is also big. The intended effects are the stated objectives or goals of the program, and the intended social impact is the vision of the program.
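
To make the intent-versus-actuality gap concrete, here is a minimal sketch (not part of Ochave's model; the component ratings, the 1-5 scale, and the verdict labels are illustrative assumptions) that scores the discrepancy for each ABCD component:

# Minimal sketch of the ABCD intent-vs-actuality gap idea.
# Assumption: each component is rated on the same numeric scale
# (here 1-5), where intents are the planned targets and
# actualities are the observed results.

COMPONENTS = ["A_clientele", "B_program", "C_effects", "D_social_impact"]

def gap_report(intents: dict, actualities: dict) -> dict:
    """Return the per-component gap; smaller gaps mean a more
    positive evaluation, larger gaps a less positive one."""
    return {c: abs(intents[c] - actualities[c]) for c in COMPONENTS}

intents     = {"A_clientele": 5, "B_program": 5, "C_effects": 4, "D_social_impact": 4}
actualities = {"A_clientele": 4, "B_program": 3, "C_effects": 4, "D_social_impact": 2}

for component, gap in gap_report(intents, actualities).items():
    verdict = "outstanding" if gap == 0 else "acceptable" if gap == 1 else "needs attention"
    print(f"{component}: gap={gap} ({verdict})")

A zero gap on a component corresponds to the two lines approximating each other (an outstanding dimension), while larger gaps signal a less positive evaluation.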

Example: BOTIKA NG BARANGAY
A - Clients: acceptance, utilization, satisfaction
B - Implementation of the Program/Operation
C - Effects: savings, health-related benefits
D - Social Impact: mortality rate

CIPP MODEL
Daniel Stufflebeam
A decision-focused approach to evaluation
Emphasizes the systematic provision of information for program management
Makes evaluation directly relevant to the needs of decision makers during the different phases and activities of a program
Provides useful information to decision makers with the overall goal of program or project improvement

CIPP Model
Daniel Stufflebeam

Context: Vision, Mission, Philosophy, Objectives
Input: Faculty, Curricular, Facilities, Library
Process: Monitoring, Instruction, Management, Student Activities, Student Services
Product: Student Performance (Cognitive, Affective, Psychomotor)

Aspects of Evaluation

Aspect of Evaluation | Type of Decision       | Kind of Question Answered
Context evaluation   | Planning decisions     | What should we do?
Input evaluation     | Structuring decisions  | How should we do it?
Process evaluation   | Implementing decisions | Are we doing it as planned?
Product evaluation   | Recycling decisions    | Did it work?

Statement of the Problem:
The main purpose of the study is to evaluate the Community District Hospital Nursing Service from January 2007 to 2009.
The findings serve as baseline data for the envisioned hospital accreditation and expansion purposes.

Statement of the
Problem:
Specifically, it seeks to answer the following questions:
1. What is the existing status of
Community District Hospital
Nursing Service in terms of the
following variables as perceived
by the Nursing Service
personnel?
1.1 Context Variable
1.1.1 Philosophy and Objectives
1.1.2 Organization
1.1.3 Policies

Statement of the
Problem:
1.2 Input Variables
1.2.1 Resource Management
1.2.2 Material Management
1.2.3 Financial Management
1.3 Process Variables
1.3.1 Patient Care Management
1.3.2 Reporting / Recording
1.3.3 Interdepartmental
Relations
1.4 Product Variable
1.4.1 Quality Nursing Service

Six Essential Steps in Evaluation
1. Develop a list of standards that specify the characteristics of ideal implementation of a learning program.
2. Determine the information required to compare actual implementation with the defined standards.
3. Design methods to obtain the required information.

4. Identify the discrepancies between the standards and the actual learning program.
5. Determine the reasons for the discrepancies.
6. Eliminate the discrepancies by making changes to the implementation of the learning program.
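
As a minimal illustration of steps 1-4 (the checklist items and the yes/no format are assumptions for the example, not part of the model), a discrepancy list can be produced by comparing a standards checklist against observations of the actual program:

# Illustrative sketch: compare ideal-implementation standards (step 1)
# against observed practice (steps 2-3) and list discrepancies (step 4).

standards = {                      # step 1: characteristics of ideal implementation
    "class_size_max_40": True,
    "weekly_skills_lab": True,
    "licensed_instructors": True,
}

observed = {                       # steps 2-3: information gathered on the actual program
    "class_size_max_40": False,
    "weekly_skills_lab": True,
    "licensed_instructors": False,
}

# Step 4: a discrepancy exists wherever practice fails to meet the standard.
discrepancies = [k for k, required in standards.items() if required and not observed[k]]

print("Discrepancies to resolve (steps 5-6):", discrepancies)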

Four Program Stages

I. Program Definition - evaluation of the program design to describe the necessary inputs, including the outputs, and to understand the interrelationship and internal consistency of the design.
II. Program Installation - comparison of the actual program installation against the Stage 1 design.
III. Program Process - evaluation of the relationship between the processes being used and the interim results, to adjust the program as needed.
IV. Program Product - evaluation of whether the effects of the program achieved the stated standards.

4 Choices in Discrepancy:
1. Proceed to the next stage if evaluation is at an acceptable level.
2. If a discrepancy exists, recycle through the previous stage after there has been a change in either the program's standards or operations.
3. If #2 cannot be accomplished, then recycle back to Stage 1 (program definition), or redesign the program, then apply the appropriate evaluation steps from Stage 1.
4. If #3 cannot be accomplished, terminate the program.

Effectiveness of the Provus Discrepancy Model:
When the type of evaluation desired is formal and the program is in the formative, rather than summative, stage.
When evaluation is defined as continuous information management addressing program improvement and assessment, and where evaluation is a component of program development.
Where the purpose of evaluation is to determine whether to improve, maintain, or terminate a program.

Effectiveness of the Provus Discrepancy Model:
Where the key emphasis of evaluation is program definition and program installation.
Where the roles of the evaluator are those of facilitator, examiner of standards, observer of actual behaviors, and design expert.
When, at each stage of evaluation, program performance is compared with program objectives (standards) to determine discrepancies.

Effectiveness of the Provus Discrepancy Model:
Where the program evaluation procedure is designed to identify weaknesses and to make determinations about correction or termination.
Where the theoretical construct is that all stages of programs continuously provide feedback to each other.

Effectiveness of the Provus Discrepancy Model:
Where the criteria for judging programs include carefully evaluating whether:
The program meets established program criteria
The actual course of action taken can be identified, and
A course of action can be taken to resolve all discrepancies

Statement of the Problem
The study assessed the ABC College of Nursing based on CHED standards, pursuant to CMO No. 30, s. 2001, in an attempt to prepare a Five-Year Development Plan.

Specifically, the study sought answers to the following questions:
Is there a discrepancy between the standards and the existing conditions?
What problems are exhibited in the 10 areas identified?
What development plan for the College of Nursing could be evolved for the next 5 years?

Provus Discrepancy
Evaluation Model
by MALCOLM PROVUS

5 Stages
S - Standard
P - Program Performance
C - Comparison of S with P
D - Discrepancy Information Resulting from C
T - Terminate
A - Alteration of P or S

[Diagram: S and P feed into C; C yields D; D leads to A]
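
A rough sketch of this cycle (the numeric ratings, the tolerance threshold, and the decision hook are illustrative assumptions, not part of Provus's model) showing how D leads to proceeding, alteration (A), or termination (T):

# Illustrative sketch of the Provus cycle: compare Standard (S) with
# Program performance (P); the Comparison (C) yields Discrepancy
# information (D), which leads to proceeding, Alteration (A), or
# Termination (T).

def provus_cycle(standard: float, performance: float, tolerance: float = 0.5) -> str:
    discrepancy = standard - performance              # C: comparison of S with P -> D
    if discrepancy <= tolerance:
        return "proceed to the next stage"            # discrepancy at acceptable level
    if can_alter_program():
        return "alter P or S (A) and recycle the stage"
    return "terminate the program (T)"

def can_alter_program() -> bool:
    # Placeholder decision hook: in practice a judgment made by
    # program staff together with the evaluator.
    return True

print(provus_cycle(standard=4.5, performance=3.2))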

KERRIGAN
EVALUATION
MODEL
by John Kerrigan

Kerrigan Evaluation Model
(Pre-service Orientation Program for Staff Nurses)

1. Training Activity (Reaction Criteria): Did the trainees enjoy the training?
2. Trained Persons (Learning Criteria): What did the trainees learn?
3. The Job and Organization (Behavioral Criteria): Did the trainees' behavior change on the job?
4. Results in Job and Organizational Performance (Results Criteria): Did the organization or project improve in performance?
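
As a small illustration of how the four criteria might be summarized (the survey items, the 1-5 ratings, and the criterion keys are assumptions for the example, not part of Kerrigan's model):

# Illustrative sketch: summarize trainee evaluation data under the
# four Kerrigan criteria by averaging ratings per criterion.

from statistics import mean

responses = {
    "reaction": [5, 4, 4],   # Did the trainees enjoy the training?
    "learning": [4, 3, 4],   # What did the trainees learn?
    "behavior": [3, 3, 2],   # Did behavior change on the job?
    "results":  [3, 2, 3],   # Did organizational performance improve?
}

for criterion, scores in responses.items():
    print(f"{criterion:9s} mean rating: {mean(scores):.2f}")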

Sample Research
An Evaluation of the Pre-Service
Orientation Program for Newly
Employed Staff Nurses in
Hospitals: a Kerrigan Evaluation
Model

Statement of the Problem


The major aim of this study was to
evaluate the Pre-Service Orientation
Program for newly employed staff
nurses of government and private
hospitals in Cebu City from January
to June 2009.

Specifically, it sought to answer the following questions:

1. Reaction Criteria

1.1 What is the profile of the trainees?


1.2 What are the reactions of the trainees
regarding the program/training?

2. Learning Criteria
2.1 What did the trainees learn?
(cognitive, affective, psychomotor)

Specifically, it sought to answer the following questions:

3. Behavioral Criteria
3.1 Did the trainees' behavior change on the job? (before and after)
4. Results Criteria
4.1 Did the organization/institution improve in performance based on the training?

GOAL-FREE EVALUATION

PRESENTATION OUTLINE
A. Definition of Goal Free Evaluation (GFE)
B. Purpose and activities of GFE
C. Arguments for GFE
D. Four Reasons for Doing GFE
E. Implementation considerations
F. Framework

GOAL FREE
EVALUATION
An inductive and holistic
strategy aimed at
countering the logical
deductive limitations
inherent in the usual
quantitative goals-based
approach to evaluation

GOAL FREE
EVALUATION
It involves gathering data
on a broad array of actual
effects and evaluating the
importance of these effects
in meeting demonstrated
needs without being
constrained by a narrow
focus on stated goals.

GOAL FREE EVALUATION
The GF evaluator avoids learning the stated purposes/goals/intended achievements of the program prior to or during the evaluation, and interviews program consumers.
Only the program's actual outcomes and measurable effects are studied, and these are judged on the extent to which they meet demonstrated needs.

GOAL FREE
EVALUATION
This prevents tunnel vision, or only
looking at the program as it pertains to
the intended goals at the risk of
overlooking many positive and/or
negative unintended side-effects
The evaluator thus can be open to
whatever data emerge from the
phenomena of the program

GOAL FREE
EVALUATION

Lends itself to qualitative methods, as it relies heavily on description and direct experience with the program

GOAL FREE
EVALUATION
The GFE evaluator asks, "What does the program actually do?" rather than, "What does the program intend to do?"
Merit is determined by relating program effects to the relevant needs of the impacted population (Scriven, 1991, p. 180)

GOAL FREE EVALUATION

A comprehensive needs
assessment is conducted
simultaneously with data
collection.

GOAL FREE
EVALUATION
The evaluator should provide experiential accounts of program activity so that readers of the report can, through naturalistic generalization, arrive at their own judgments of quality in addition to those the evaluator provides (Stake, 2004, in Alkin, 2004, p. 215)

GOAL FREE
EVALUATION
Can be conducted along with goals-based evaluation, but with separate evaluators using each approach to maximize its strengths and minimize its weaknesses, as proposed by Scriven

GOAL FREE
EVALUATION
Defining question: What
are the actual effects of
the program on clients
(without regard to what
staff say they want to
accomplish)? To what
extent are real needs
being met?

Arguments for the Utilization of GFE
It may identify unintended positive and negative side effects and other context-specific information.
As a supplement to a traditional evaluation, it serves as a form of triangulation of both data collection methods and data sources.

Arguments for the Utilization of GFE
It circumvents the traditional
outcome evaluation and the
difficulty of identifying true
current goals and true
original goals, and then
defining and weighing them.
It is less intrusive to the
program and potentially less
costly to the client.

Arguments for the Utilization of GFE
It is adaptable to changes in needs or
goals.
By reducing interaction with program
staff, it is less susceptible to social,
perceptual, and cognitive biases.

Arguments for the Utilization of GFE
It is reversible; an evaluation may begin goal-free and later become goal-based, using the goal-free data for preliminary investigative purposes.

Arguments for the Utilization of GFE
It is less subject to bias introduced by intentionally or unintentionally trying to satisfy the client, because what the client is attempting to do is not made explicit to the evaluator; it offers fewer opportunities for evaluator bias or corruption because the evaluator is unable to clearly determine the stated goals of the program.

Arguments for the Utilization of GFE
For the evaluator, it requires increased effort, identifies incompetence, and enhances the balance of power among the evaluator, the evaluee, and the client.

Arguments for the Utilization of GFE
It focuses on human experience and what people actually do and feel, and allows for understanding how program implementers deal with nonprogrammed decisions [1] (Stake, 2004, in Alkin, 2004).
[1] Nonprogrammed decisions are decisions regarding relatively novel situations for which no established procedure exists.
4 Reasons for Doing GFE (Scriven, 1972)
1. To avoid the risk of narrowly studying stated program objectives and thereby missing important unanticipated outcomes;

4 Reasons for Doing GFE (Scriven, 1972)
2. To remove the negative connotations attached to the discovery of unanticipated effects, because the whole language of "side effect" or "secondary effect" or even "unanticipated effect" tended to be a put-down of what might well be the crucial achievement;

4 Reasons for Doing GFE (Scriven, 1972)
3. To eliminate the perceptual biases introduced into an evaluation by knowledge of goals;
4. To maintain evaluator objectivity and independence through goal-free conditions.

GFE as a Supplement to GBE
The weaknesses in exclusively using any one approach (either GBE or GFE) are significantly minimized by combining the two; consequently, the validity of the synthesized final evaluation is enhanced.
GBE and GFE are combined by having the GB and GF evaluators design and conduct their evaluations independently.

GFE as a Supplement to GBE
In synthesizing the GBE and GFE, evaluators interpret the data while discussing whether the evaluation methods, results, and conclusions support or contradict each other; and the evaluators (possibly with key stakeholders) weigh the data from both approaches to make an evaluative judgment.

Goal-Based Evaluation
Goals/Objectives → Implementation → Formative/Summative Evaluation
