
PUSPITA DIRGAHAYANI | 2014 | SLIDE 1

PLAN & POLICY EVALUATION

PUSPITA DIRGAHAYANI | 2014 | SLIDE 2

WHAT IS EVALUATION?
"The systematic assessment of the worth or
merit of some object" (Trochim, 2000)
"Evaluation research is a robust arena of
activity directed at collecting, analyzing,
and interpreting information on the need
for, implementation of, and effectiveness
and efficiency of intervention efforts to
better the lot of humankind" (Rossi and
Freeman, 1989).

PUSPITA DIRGAHAYANI | 2014 | SLIDE 3

MONITORING VS EVALUATION
MONITORING
Establishing factual premises about public
policies, answering "what happened, how,
and why?"

EVALUATION
Concerned with establishing the value
premises necessary to produce information
about the performance of policies,
answering "what difference does it make?"

(Dunn, 1994)

PUSPITA DIRGAHAYANI | 2014 | SLIDE 4

EVALUATING PLANS:
PLANNING TO PERFORM
Evaluation Models for City Planners
(Leora Susan Waldner, 2004)

PUSPITA DIRGAHAYANI | 2014 | SLIDE 5

SPATIAL & DEVELOPMENT PLANS

Source: PermenPU No. 17/2009

PUSPITA DIRGAHAYANI | 2014 | SLIDE 6

SPATIAL & DEVELOPMENT PLANS

PUSPITA DIRGAHAYANI | 2014 | SLIDE 7

LITTLE e AND BIG E EVALUATION

Waldner, 2004

Little e evaluation: goal attainment,
outcome, conformance, or compliance.

Big E evaluation: impact evaluation,
effectiveness; attempts to evaluate the
effectiveness of the plan along some
dimension.

PUSPITA DIRGAHAYANI | 2014 | SLIDE 8

LITTLE e AND BIG E EVALUATION

Waldner, 2004

Little e: Were the prescribed actions of
the plan carried out?
E.g. Did we issue permits in accordance
with the zoning ordinance?

Big E: Strategically evaluates the effect
of the plan on the community.
E.g. What kind of community resulted
from the zoning patterns we set forth?

PUSPITA DIRGAHAYANI | 2014 | SLIDE 9

LITTLE e AND BIG E EVALUATION

Waldner, 2004

Little e: may overlook the unintended
consequences of the plan.

Big E: simply assumes that implementation
has occurred as the plan prescribed.

PUSPITA DIRGAHAYANI | 2014 | SLIDE 10

HOW ABOUT PROCESS EVALUATION?

Innes and Booher's approach (1999) may
allow evaluation in an effort where process
is paramount and the plan is of secondary
importance (or absent entirely).
It provides a framework for evaluating
consensus-building efforts, with process
evaluation criteria as well as outcome
evaluation criteria.
Waldner, 2004

PUSPITA DIRGAHAYANI | 2014 | SLIDE 11

THE KEY DEBATE


Boils down to the question of how we
define SUCCESS in planning. Is it:

Goal attainment?
Effects of the plan?
Its use in future decision-making?
A successful process?
Some other criterion?
Or a combination thereof?

Waldner, 2004

PUSPITA DIRGAHAYANI | 2014 | SLIDE 12

GENERAL PLAN EVALUATION CRITERIA
An Approach to Making Better Plans
(William C. Baer, 1997)

PUSPITA DIRGAHAYANI | 2014 | SLIDE 13

HOW TO DEFINE PLAN QUALITY?


Planners can often differentiate high
quality plans from low quality ones, but
they are hard pressed to explicitly define
the key characteristics of plan quality.
The planning literature is surprisingly
narrow when it comes to what constitutes
a good plan.
The planning profession has generally
avoided this normative question and
focused instead on the methods and
processes of plan making.

Berke and French, 1994, 237-8 in Baer, 1997

PUSPITA DIRGAHAYANI | 2014 | SLIDE 14

PLAN EVALUATION CRITERIA


Appropriate criteria for plan evaluation
depend on distinguishing the different
stages in plan-making when evaluation
can be performed.
The rubric of evaluation has included:

Plan assessment
Plan testing and evaluation
Plan critique
Comparative research and professional
evaluations
Post hoc evaluation of plan outcomes

Baer, 1997

PUSPITA DIRGAHAYANI | 2014 | SLIDE 15

DISTINGUISH THE RUBRIC OF EVALUATION

Identify who is undertaking the evaluation and
their relation to the plan authors;
When the evaluation is undertaken (i.e. at what
stage during plan preparation or after its
completion);
The "what" of the evaluation.

Baer, 1997

PUSPITA DIRGAHAYANI | 2014 | SLIDE 16

THE "WHAT" OF EVALUATION

The substance of plan alternatives;
The plan as a package, including the document
that communicates:
Goals and objectives
Needs or problems
Assumptions and methods of reasoning
Specific proposals
Perhaps implementation devices (ordinances,
budgets, etc.)
The outcome following plan implementation.
Baer, 1997

PUSPITA DIRGAHAYANI | 2014 | SLIDE 17

PLAN CRITIQUE

WHO
By persons other than the plan's authors
(professional planners)

WHEN
Before the plan has had time to be put into
practice, and before any measurable results
have occurred

WHAT
The plan as a package

METHOD
Professional virtuosities: the art of judgement
or conceptive powers

Baer, 1997

PUSPITA DIRGAHAYANI | 2014 | SLIDE 18

PLAN TESTING & EVALUATION

WHO
By the team or group preparing the plan, not
by outside critics

WHEN
During the planning process (phases 5 and 6)

WHAT
Alternative ways to achieve a plan's goals

METHOD
Analytic devices that are explicit and
reproducible by others, e.g. cost-benefit
analysis (CBA)

Baer, 1997

PUSPITA DIRGAHAYANI | 2014 | SLIDE 19

RESEARCH & PROFESSIONAL EVALUATION

WHO
Plan preparation team, professional experts,
or pure researchers

WHEN
After plan adoption (either before outcomes
can be evaluated, or with outcomes not
intended to be part of the evaluation)

WHAT
Several plans are compared systematically,
in the context of plan quality

METHOD
A set of criteria defining plan quality, under the
assumptions that: (i) planning should be a rational
process; (ii) it should be a democratic
decision-making process; (iii) plans (and planning
reports) should add to the body of knowledge about
urban design

Baer, 1997

PUSPITA DIRGAHAYANI | 2014 | SLIDE 20

POST-HOC PLAN OUTCOMES EVALUATION

WHO
Plan preparation team, professional experts,
or pure researchers

WHEN
After plans are adopted and implemented
(10, 15, or 20 years), depending on: (i) when
the outcome should be determined; (ii) in what
terms its performance or effectiveness should
be cast

WHAT
To discover whether the plan was implemented
and, if so, how it performed or what its
effectiveness was

METHOD
Various permutations of post-hoc evaluation:
(i) reality vs. the null case; (ii) reality vs.
the intended outcome

Baer, 1997

PUSPITA DIRGAHAYANI | 2014 | SLIDE 21

Baer, 1997

PUSPITA DIRGAHAYANI | 2014 | SLIDE 22

Baer, 1997

PUSPITA DIRGAHAYANI | 2014 | SLIDE 23

EVALUATING POLICY
PERFORMANCE
Dunn, 1994: Chapter 9

PUSPITA DIRGAHAYANI | 2014 | SLIDE 24

WHAT IS POLICY?

Refers to the actions of government and the
intentions that determine those actions.

"Is whatever governments choose to do or not
to do" (Thomas Dye).

"A set of inter-related decisions taken by a
political actor or group of actors concerning the
selection of goals and the means of achieving
them within a specified situation where those
decisions should, in principle, be within the
power of those actors to achieve" (Jenkins, 1978).

PUSPITA DIRGAHAYANI | 2014 | SLIDE 25

EVALUATION IN POLICY ANALYSIS


Evaluation refers to the production of
information about the value or worth of
policy outcomes
Characteristics of evaluation

Value focus
Fact-value interdependence
Present and past orientation
Value duality

PUSPITA DIRGAHAYANI | 2014 | SLIDE 26

FUNCTIONS OF EVALUATION
Evaluation provides reliable and valid
information about policy performance
Evaluation contributes to the
clarification and critique of values
that underlie the selection of goals and
objectives
Evaluation may contribute to the
application of other policy-analytic
methods, including problem structuring
and recommendation

PUSPITA DIRGAHAYANI | 2014 | SLIDE 27

Criteria for Policy Evaluation

PUSPITA DIRGAHAYANI | 2014 | SLIDE 28

EFFECTIVENESS
Question: Has a valued outcome been achieved?
Illustrative criteria: Units of service

EFFICIENCY
Question: How much effort was required to achieve
a valued outcome?
Illustrative criteria: Unit cost; net benefits;
cost-benefit ratio (see the sketch after this table)

ADEQUACY
Question: To what extent does the achievement of a
valued outcome resolve the problem?
Illustrative criteria: Fixed costs (Type I problem);
fixed effectiveness (Type II problem)

EQUITY
Question: Are costs and benefits distributed equitably
among different groups?
Illustrative criteria: Pareto criterion; Kaldor-Hicks
criterion; Rawls criterion

RESPONSIVENESS
Question: Do policy outcomes satisfy the needs,
preferences, or values of particular groups?
Illustrative criteria: Consistency with citizen surveys

APPROPRIATENESS
Question: Are desired outcomes (objectives) actually
worthy or valuable?
Illustrative criteria: Public programs should be
equitable as well as efficient
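
To make the efficiency row concrete, here is a minimal Python sketch with
entirely hypothetical costs, benefits, and discount rate (none of these
figures come from Dunn), computing unit cost, net benefits, and the
cost-benefit ratio for a notional program.

```python
# Minimal sketch of Dunn's efficiency criteria with hypothetical numbers.
# Program costs, benefits, and the discount rate are all invented.

costs = [100_000, 20_000, 20_000]    # outlays in years 0..2 (currency units)
benefits = [0, 80_000, 90_000]       # monetized valued outcomes, years 0..2
units_of_service = 5_000             # e.g. households served
discount_rate = 0.05                 # assumed social discount rate

def present_value(flows, rate):
    """Discount yearly flows back to year 0."""
    return sum(f / (1 + rate) ** t for t, f in enumerate(flows))

pv_costs = present_value(costs, discount_rate)
pv_benefits = present_value(benefits, discount_rate)

unit_cost = pv_costs / units_of_service   # cost per unit of service
net_benefits = pv_benefits - pv_costs     # net benefits criterion
cb_ratio = pv_benefits / pv_costs         # cost-benefit ratio criterion

print(f"unit cost:          {unit_cost:,.2f}")
print(f"net benefits:       {net_benefits:,.2f}")
print(f"cost-benefit ratio: {cb_ratio:.2f}")  # > 1 favors the program
```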

PUSPITA DIRGAHAYANI | 2014 | SLIDE 29

Approaches to Evaluation

PUSPITA DIRGAHAYANI | 2014 | SLIDE 30

PSEUDO-EVALUATION
Aims: Use descriptive methods to produce reliable and
valid information about policy outcomes.
Assumptions: Measures of worth or value are
self-evident or uncontroversial.
Major forms: Social experimentation; social systems
accounting; social auditing; research and practice
synthesis.

FORMAL EVALUATION
Aims: Use descriptive methods to produce reliable and
valid information about policy outcomes that have been
formally announced as policy-program objectives.
Assumptions: Formally announced goals and objectives
of policy makers and administrators are appropriate
measures of worth or value.
Major forms: Developmental evaluation; experimental
evaluation; retrospective process evaluation;
retrospective outcome evaluation.

DECISION-THEORETIC EVALUATION
Aims: Use descriptive methods to produce reliable and
valid information about policy outcomes that are
explicitly valued by multiple stakeholders.
Assumptions: Formally announced as well as latent goals
and objectives of stakeholders are appropriate measures
of worth or value.
Major forms: Evaluability assessment; multiattribute
utility analysis.
PUSPITA DIRGAHAYANI | 2014 | SLIDE 31

PSEUDO-EVALUATION
An approach that uses descriptive methods to
produce reliable and valid information about
policy outcomes, without attempting to
question the worth or value of these outcomes
to persons, groups, or society as a whole
The major assumption is that measures of
worth or value are self-evident or
uncontroversial
Variety of methods: quasi-experimental
design, questionnaires, random sampling,
statistical techniques

PUSPITA DIRGAHAYANI | 2014 | SLIDE 32

FORMAL EVALUATION
An approach that uses descriptive methods to produce
reliable and valid information about policy outcomes,
but evaluates such outcomes on the basis of
policy-program objectives that have been formally
announced by policy makers and program administrators.
The major assumption is that formally announced goals
and objectives are appropriate measures of the worth
or value of policies and programs
The difference from pseudo-evaluation is that formal
evaluations use legislation, program documents, and
interviews with policy makers and administrators to
identify, define, and specify formal goals and
objectives

PUSPITA DIRGAHAYANI | 2014 | SLIDE 33

TYPES OF FORMAL EVALUATION


Summative evaluation
involves an effort to monitor the
accomplishment of formal goals and
objectives after a policy or program has
been in place for some period of time
Formative evaluation
involves efforts to continuously monitor
the accomplishment of formal goals and
objectives

PUSPITA DIRGAHAYANI | 2014 | SLIDE 34

CONTROL OVER POLICY INPUTS AND
PROCESSES IN FORMAL EVALUATION
Direct controls
Evaluators can directly manipulate
expenditure levels, the mix of programs,
or the characteristics of target groups
Indirect controls
Policy inputs and processes cannot be
directly manipulated, rather they must be
analyzed retrospectively on the basis of
actions that have already occurred

PUSPITA DIRGAHAYANI | 2014 | SLIDE 35

TYPES OF FORMAL EVALUATION

Control Over      Orientation Toward Policy Process
Policy Actions    Formative                  Summative

Direct            Developmental evaluation   Experimental evaluation

Indirect          Retrospective process      Retrospective outcome
                  evaluation                 evaluation

PUSPITA DIRGAHAYANI | 2014 | SLIDE 36

1. DEVELOPMENTAL EVALUATION
Refers to evaluation activities that are
explicitly designed to serve the day-to-day
needs of program staff.
Involves some measure of direct control
over policy actions.
Can be used to adapt immediately to new
experience acquired through systematic
manipulations of input and process variables.

PUSPITA DIRGAHAYANI | 2014 | SLIDE 37

2. RETROSPECTIVE PROCESS
EVALUATION
Involves the monitoring and evaluation of programs
after they have been in place for some time
Often focuses on problems and bottlenecks
encountered in the implementation of policies and
programs
Does not permit the direct manipulation of inputs
and processes, rather it relies on ex post facto
(retrospective) descriptions of ongoing program
activities, which are subsequently related to outputs
and impacts
Requires a well-established internal reporting system
that permits the continuous generation of
program-related information

PUSPITA DIRGAHAYANI | 2014 | SLIDE 38

3. EXPERIMENTAL EVALUATION
Involves the monitoring and evaluation of outcomes under conditions
of direct controls over policy inputs and processes
All factors that might influence policy outcomes, except one
(a particular input or process variable), are controlled, held
constant, or treated as plausible rival hypotheses
Must meet rather severe requirements before they can be carried out:
A clearly defined and directly manipulable set of treatment variables
that are specified in operational terms
An evaluation strategy that permits maximum generalizability of
conclusions about performance to many similar target groups or settings
(external validity)
An evaluation strategy that permits minimum error in interpreting policy
performance as the actual result of manipulated policy inputs and
processes (internal validity)
A monitoring system that produces reliable data on complex
interrelationships among preconditions, unforeseen events, inputs,
processes, outputs, impacts, and side effects and spillovers.

PUSPITA DIRGAHAYANI | 2014 | SLIDE 39

4. RETROSPECTIVE OUTCOME EVALUATIONS

Involve the monitoring and evaluation of outcomes,
but with no direct control over manipulable policy
inputs and processes
The evaluator attempts to isolate the effects of
many different factors by using quantitative
methods (see the sketch below)
There are two main variants of retrospective
outcome evaluation:
Longitudinal studies evaluate changes in the
outcomes of one, several, or many programs
at two or more points in time
Cross-sectional studies seek to monitor and
evaluate multiple programs at one point in time
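
A minimal sketch of the cross-sectional variant, on synthetic data: the
program "effect" is isolated by regressing the outcome on a program
indicator while controlling for one other factor via ordinary least
squares. Variable names and numbers are invented; a real retrospective
evaluation would use observed data and richer controls.

```python
# Hedged cross-sectional sketch: isolate a program "effect" statistically,
# since inputs cannot be manipulated directly. All data are synthetic.
import numpy as np

rng = np.random.default_rng(0)
n = 200
income = rng.normal(50, 10, n)        # control variable: baseline income
in_program = rng.integers(0, 2, n)    # 1 if the unit received the program

# Synthetic outcome with a true program effect of 3.0
outcome = 10 + 3.0 * in_program + 0.2 * income + rng.normal(0, 2, n)

# OLS with an intercept, the program indicator, and the control
X = np.column_stack([np.ones(n), in_program, income])
coef, *_ = np.linalg.lstsq(X, outcome, rcond=None)

print(f"estimated program effect: {coef[1]:.2f}")  # close to 3.0
```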

PUSPITA DIRGAHAYANI | 2014 | SLIDE 40

DECISION-THEORETIC EVALUATION


An approach that uses descriptive methods to produce
reliable and valid information about policy outcomes that
are explicitly valued by multiple stakeholders
The difference between this evaluation and pseudo- and
formal evaluation is that decision-theoretic evaluation
attempts to surface and make explicit the latent as well
as the manifest goals and objectives of stakeholders.
Decision-theoretic evaluation is a way to overcome
several deficiencies of pseudo-evaluation and formal
evaluation:
Underutilization and nonutilization of performance
information
Ambiguity of performance goals
Multiple conflicting objectives

PUSPITA DIRGAHAYANI | 2014 | SLIDE 41

DECISION-THEORETIC EVALUATION
One of the main purposes of this evaluation is to
link information about policy outcomes with the
values of multiple stakeholders.
The assumption is that formally announced as well
as latent goals and objectives of stakeholders are
appropriate measures of the worth or value of
policies and programs
The two major forms of decision-theoretic
evaluation are evaluability assessment and
multiattribute utility analysis, both of which
attempt to link information about policy outcomes
with the values of multiple stakeholders

PUSPITA DIRGAHAYANI | 2014 | SLIDE 42

1. EVALUABILITY ASSESSMENT (1/2)


Is a set of procedures designed to analyze the decision-making
system that is supposed to benefit from performance
information and to clarify the goals, objectives, and
assumptions against which performance is to be measured
For a policy program to be evaluable, at least three conditions
must be present:
A clearly articulated policy or program
Clearly specified goals and/or consequences
A set of explicit assumptions that link policy actions to goals
and/or consequences

PUSPITA DIRGAHAYANI | 2014 | SLIDE 43

1. EVALUABILITY ASSESSMENT (2/2)


A series of steps that clarify a policy or program from
the standpoint of the intended users of performance
information and of the evaluators themselves:
Policy-program specification
Collection of policy-program information
Policy-program modelling
Policy-program evaluability assessment
Feedback of evaluability assessment to users

PUSPITA DIRGAHAYANI | 2014 | SLIDE 44

2. MULTIATTRIBUTE UTILITY ANALYSIS (1/2)


Is a set of procedures designed to elicit from
multiple stakeholders subjective judgments about
the probability of occurrence and value of policy
outcomes
It explicitly surfaces the value judgments of
multiple stakeholders, it recognizes the presence of
multiple conflicting objectives in policy-program
evaluation, and it produces performance
information that is more usable from the
standpoint of intended users

PUSPITA DIRGAHAYANI | 2014 | SLIDE 45

2. MULTIATTRIBUTE UTILITY ANALYSIS (2/2)


The steps in conducting a multiattribute utility
analysis are (a worked sketch follows this list):

Stakeholder identification
Specification of relevant decision issues
Specification of policy outcomes
Identification of attributes of outcomes
Attribute ranking
Attribute scaling
Scale standardization
Outcome measurement
Utility calculation
Evaluation and presentation
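
A minimal sketch of the aggregation steps (scale standardization through
utility calculation), assuming invented attributes, weights, and scores;
in a real multiattribute utility analysis these would be elicited from
stakeholders in the earlier steps.

```python
# Minimal MAUA aggregation sketch (scale standardization, outcome
# measurement, utility calculation). Attributes, weights, and scores
# are hypothetical; in practice they are elicited from stakeholders.

# Importance weights from attribute ranking and scaling (sum to 1)
weights = {"cost_saving": 0.40, "equity": 0.35, "feasibility": 0.25}

# Raw attribute scores for two candidate policy outcomes
outcomes = {
    "policy_A": {"cost_saving": 40, "equity": 70, "feasibility": 90},
    "policy_B": {"cost_saving": 80, "equity": 50, "feasibility": 60},
}

def standardize(attr):
    """Rescale one attribute across all outcomes to a common 0-100 range."""
    vals = {name: scores[attr] for name, scores in outcomes.items()}
    lo, hi = min(vals.values()), max(vals.values())
    span = (hi - lo) or 1  # guard against all scores being equal
    return {name: 100 * (v - lo) / span for name, v in vals.items()}

scaled = {attr: standardize(attr) for attr in weights}

# Utility of each outcome: weighted sum of standardized attribute scores
utilities = {
    name: sum(weights[attr] * scaled[attr][name] for attr in weights)
    for name in outcomes
}

for name, u in sorted(utilities.items(), key=lambda kv: -kv[1]):
    print(f"{name}: utility {u:.1f}")
```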

PUSPITA DIRGAHAYANI | 2014 | SLIDE 46

Approach: Pseudo-Evaluation
Techniques:
- Graphic displays
- Tabular displays
- Index numbers
- Interrupted time-series analysis (sketched below)
- Control-series analysis
- Regression-discontinuity analysis

Approach: Formal Evaluation
Techniques:
- Objectives mapping
- Value clarification
- Value critique
- Constraints mapping
- Cross-impact analysis
- Discounting

Approach: Decision-Theoretic Evaluation
Techniques:
- Brainstorming
- Argumentation analysis
- Policy Delphi
- User-survey analysis
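
As one concrete instance of these techniques, a hedged sketch of
interrupted time-series analysis on a simulated series: fit a
pre-existing trend plus a post-intervention level shift, and read the
shift coefficient as the policy's apparent effect. Data and effect size
are invented; a real analysis would also address autocorrelation and
rival hypotheses.

```python
# Sketch of interrupted time-series analysis on a simulated monthly series.
# The series and the assumed policy effect are invented for illustration.
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(48, dtype=float)       # 48 monthly observations
post = (t >= 24).astype(float)       # policy takes effect at month 24

# Simulated outcome: upward trend, then a level drop of 8.0 after the policy
y = 50 + 0.3 * t - 8.0 * post + rng.normal(0, 2, t.size)

# Segmented regression: intercept, pre-existing trend, post-policy level shift
X = np.column_stack([np.ones_like(t), t, post])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

print(f"estimated level shift at the intervention: {coef[2]:.2f}")  # ~ -8.0
```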

PUSPITA DIRGAHAYANI | 2014 | SLIDE 47

References
1. Baer, W.C. (1997) General Plan Evaluation Criteria: An Approach to
Making Better Plans, Journal of the American Planning Association
63(3), American Planning Association, Chicago, IL.
2. Waldner, L.S. (2004) Planning to Perform: Evaluation Models for City
Planners, Berkeley Planning Journal 17(1), eScholarship, University
of California.
3. Dunn, W. (1994) Public Policy Analysis: 2nd Edition, Chapter 9:
Conclusion: Evaluating Policy Performances, Prentice-Hall, Inc., New
Jersey.
