
MONITORING & EVALUATION
IPDET GBEPM 740 /
MBAH 760 / MPA 720
Lecturer: Dr Joseph Mwape Chileshe
PhD (Agricultural Engineering), SCAU, 2012
MSc (Agricultural Mechanization), SCAU, 1996
B.Eng. (Mechanical Engineering), UNZA, 1991
Email: mwapejc@gmail.com
Cell: +260969538805
Section I

• 1.1 This section focuses on basic definitions:
• a) Concept of a project.
• b) Concept of monitoring.
• c) Concept of evaluation.
• 1.2 The relationship between monitoring and evaluation of projects in project management.
• 1.3 The significance of monitoring and evaluation in project management.

IPDET © 2009 2
Section II – Monitoring
• the monitoring process,
• types of monitoring,
• purpose of monitoring
• significance / importance of monitoring of
projects
• when monitoring is initiated
• the time frame for monitoring projects

Monitoring Cont’d
• which institutions or individuals undertake
monitoring in project management.
• roles of these monitoring institutions or individuals
• project types and monitoring procedures
• outcomes of monitoring i.e. monitoring
outputs
• communication lines
• case study of a specific project

SECTION III - EVALUATION OF PROJECTS
• Concept of Evaluation.
• Context of Application.
• Types of Evaluation.
• Purpose of Evaluation.
• Significance of Evaluation.

Evaluation Cont’d

• Process of Evaluation (Case Study)
• Planning the Evaluation.
• Requirements for Evaluation.
• Execution of the evaluation process, including fieldwork and data collection.
• Analysis of Evaluation Data.
• Formulation of conclusions and recommendations.
• Communication of Evaluation Results.
SECTION IV - STANDARD PRACTICE OF EVALUATION
• 1) International procedures and standards used in evaluation of projects:
• The World Bank Evaluation Formats, Criteria and Indicators.
• European Union Evaluation Formats, Criteria and Indicators.
• 2) National Standards and Procedures for Evaluation.
• 3) Application of the Different Standards and Procedures:
• Reporting Procedures and Verification Methods.

Evaluation Cont’d.
EMERGING CONTEMPORARY APPROACHES TO PROJECT EVALUATION
• Participatory Monitoring and Evaluation.
• Stakeholder involvement.
• Roles of stakeholders.
• The process of participatory evaluation.

SECTION V - MONITORING AND EVALUATION IN THE DEVELOPMENT PLANS
• a. SNDP
• b. MDG (Millennium Development Goals)

SECTION VI - Students' Project on Monitoring & Evaluation
• Group work

Project...
A collection of linked activities, carried out in an organised manner, with a clearly defined START POINT and END POINT, to achieve specific results desired to satisfy the needs of the organisation at the current time.
What is a Project?

Definition:
A temporary endeavor undertaken to create a unique product or service.

Source: PMI, A Guide to the Project Management Body of Knowledge, 2000


Project Management
A dynamic process that utilises the
appropriate resources of the
organisation in a controlled and
structured manner, to achieve some
clearly defined objectives identified as
needs.
It is always conducted within a
defined set of constraints
What does Project
Management Entail?
Planning: the most critical activity, yet it gets the least of our time
("Begin with the end in mind" – Stephen Covey)
Organizing: in an orderly fashion (contingent/prerequisites)
Controlling: critical if we are to use our limited resources wisely
Measuring: to determine if we accomplished the goal or met the target
Measuring…….
Are we efficient?
Are we productive?
Are we doing a good job?
What is the outcome?
Is it what we wanted it to be?
If you can’t plan it, you can’t do it.
If you can’t measure it, you can’t manage it.
What is a programme
1. It is a COORDINATED APPROACH to exploring
an organisation’s specific developmental area
2. A SYSTEM OF PROJECTS or services intended to
meet a public need;
3. The COORDINATED MANAGEMENT OF A
PORTFOLIO OF PROJECTS that change
organisations to achieve benefits that are of
strategic importance
What is a programme
The Programme comprised several different Projects which were intended to improve the quality of Roads (a public need!).
The IUCN Monitoring and Evaluation (M&E)
Initiative
•The mandate of the Monitoring and Evaluation Initiative
is to establish a Monitoring and Evaluation System for
International Union for Conservation of Nature (IUCN)
at regional and global levels that:
• Supports learning, improvement and accountability through a regular cycle of reviews of the relevance, effectiveness, efficiency and impact of IUCN’s work at project, programme and organizational level.
• Promotes a learning culture of self-assessment and reflection.
• Builds capacity of staff, partners and members in planning, monitoring and evaluation.
• Supports the implementation of the IUCN Evaluation Policy and Standards.
Publications from the M&E Initiative are available online on the IUCN website.

Objectives

• Explain the difference between monitoring and evaluation.
• Introduce the most common M&E terms.
• Review examples of each term.
Monitoring versus Evaluation

Monitoring:
• Data collected on program activities
• Ongoing, routine
• Focus on activities and outputs, compared to targets
• Asks: "Are we doing the work we planned?"

Evaluation:
• Data collected to answer specific questions
• Periodic
• Focus on outcomes, impact
• Asks: "How effective were our activities?"
Monitoring or Evaluation?

Local researchers conduct a study to determine if there are more people with possible TB symptoms coming to DOTS clinics as a result of a media campaign to promote TB screening.

Evaluation
Monitoring or Evaluation?

A district manager reports on how many nurses were trained on interpersonal communication skills for her quarterly donor report.

Monitoring
It Starts with QUESTIONS

• Monitoring and evaluation answer different questions.
• If we do not ask good questions about our activities, we will not get useful data!
What is a GOAL?

• The ultimate result of efforts at a broad, population level.
• Achieved over the long term (years) and through the combined efforts of multiple programs (not always related to advocacy, communication and social mobilization (ACSM)).

Examples
• Decrease morbidity and mortality due to TB in Country X.
• Reduce prevalence of TB by 50%.
• Eliminate stigma of TB in our communities.
OBJECTIVES

• How the results of your short-term program activities contribute to the big goal.
• Several objectives can relate to the same goal.
• The link between ACSM activities and the NTP.
Objective Examples

• Aggressively advocate to increase the NTP budget by 8% each year for the next four years.
• Double the percentage of secondary school students who can correctly identify TB symptoms by 2015.
• Design and pilot a treatment support program for newly released prisoners with TB by 2015.
INPUTS

• Resources needed to plan and implement ACSM
• The “raw materials” of an ACSM project

Examples
• Money
• Staff
• Policies, guidelines
• Equipment
• Partners
ACTIVITIES
• The work that we do, what we implement
• Also called “processes”

Examples
• Training events
• Meetings
• Events
• Outreach
• Home visits
OUTPUTS

• Immediate results of activities
• What we can measure/count right after the activity

Examples
• Number of people trained
• Number of brochures produced
• Number of policymakers reached with an advocacy activity
OUTCOMES

• “Ripple effects” of ACSM activities
• What changes after outputs are produced

Examples
• Increased funding for TB after a lobbying meeting
• Short term: improved attitudes toward TB patients among DOTS nurses after a training
• Medium term: increased satisfaction of TB clients
• Long term: TB clients stay in treatment longer
The Crow and the Pitcher
INDICATORS

• How we define our activities, outputs, or outcomes
• Signs or evidence we watch for to see if we have reached them

ACTIVITY: Meeting with the Finance Minister and NTP Director to lobby for more funding for the NTP

OUTPUT: Number of officials attending the meeting
INDICATOR: Number of officials attending the meeting compared to the number invited

OUTCOME: Increased funding
INDICATOR: Percentage of the NTP budget covered by the Ministry of Health
Focus on INDICATORS
Indicators
• An indicator is a measure that is used to show change in a situation, or the progress in/results of an activity, project, or programme.

• Indicators:
– enable us to be “watchdogs”;
– are essential instruments for monitoring and evaluation;
– are objectively verifiable measurements.
What are the Qualities of a Good
Indicator?
• Specific
• Measurable
• Achievable
• Relevant
• Time-bound

(The Sphere Project provides the most accepted indicators for nutrition and food security interventions in emergencies: see Module 21.)

And there is also the SMART Initiative: the Standardised Monitoring and Assessment in Relief and Transition Initiative, an interagency initiative to improve the M&E of humanitarian assistance.
Types of indicators
Indicators exist in many different forms:
• Direct: direct indicators correspond precisely to results at any performance level.
• Indirect / proxy: indirect or "proxy" indicators demonstrate the change or results when direct measures are not feasible.
• Quantitative: indicators are usually quantitative measures, expressed as a percentage or share, as a rate, etc.
• Qualitative: indicators may also be qualitative observations.
• Global / standardised: standardised global indicators are comparable in all settings.
• Locally developed: other indicators tend to be context specific and must be developed locally.
Impact

Outcome

Output

Input
• Impact: related to the Goal
• Outcome: related to Objectives (or Purposes)
• Output: related to Outputs
• Input: related to Activities/Resources
Example:
• Impact (related to Goal): malnutrition rates amongst young children reduced
• Outcome (related to Objectives or Purposes): % of young children getting appropriate complementary food
• Output (related to Outputs): X number of mothers know about good complementary food and how to prepare it
• Input (related to Activities/Resources): nutritional education to mothers on complementary food
IMPACT

• More related to the goal
• A very broad-scale result over the long term

Examples
• Higher rate of treatment success
• Reduction in deaths among MDR-TB patients
M&E terminology: performance, efficiency, outputs, effectiveness, appropriateness, outcomes, quantitative indicators, qualitative indicators, impact, target, logframes, DO NO HARM, assessment, coverage, inputs, connectedness, accountability, timeliness...

A WASP NEST?
Definition
Monitoring
‘The systematic and continuous assessment of the progress of a piece of work over time...’

‘To continuously measure progress against programme objectives and check on the relevance of the programme.’

It involves collecting and analysing data/information.

It is NOT only about PROCESS.
Purpose of monitoring
• to document the progress and results of the project
• to provide the necessary information to management for timely decision-taking and corrective action (if necessary)
• to promote accountability to all stakeholders of a project (beneficiaries, donors, etc.)
Information collected for monitoring
must be:
• Useful and relevant
• Accurate
• Regular
• Acted upon
• Shared
• Timely

Monitoring is an implicit part of an evaluation.

It is often done badly:
– Routine data collection not done routinely!
– Data collection done poorly
– Information not processed/used in a timely manner
– Focus only on process indicators, neglecting (lack of) preliminary impact
Can you give examples of Monitoring
in your current work?

For example
- From a CMAM programme?
- From a Micronutrient programme?
- From a General Food Distribution?
- From a Health programme?
- From a Livelihoods programme?

Monitoring
• Monitoring compares intentions with results
• It guides project revisions, verifies targeting criteria
and whether assistance is reaching the people
intended.
• It checks the relevance of the project to the needs.
• It integrates and responds to community feedback
• It enhances transparency and accountability
Difference between monitoring of:
• Process/activities
• Impact/results
The project cycle: Disaster → Assessment → Programme Design → Implementation → Evaluation, with Monitoring ongoing throughout.
Why would you do an evaluation of a programme?
Definitions
Evaluation
The aim is to determine the relevance and fulfilment of objectives, as well as the efficiency, effectiveness, impact and sustainability of a project.

It involves the objective assessment of an ongoing or completed project/programme: its design, implementation and results.
There has been an increased focus on evaluation of humanitarian action as part of efforts to improve quality and standards.
Monitoring versus Evaluation
MONITORING = tracking changes in program performance over time

EVALUATION = assessing whether objectives have been met, and the extent to which the program is responsible for observed changes
Evaluation

It aims to
– Improve policy and practice
– Enhance accountability

Evaluations are done when /
because:
– Monitoring highlights unexpected results
– More information is needed for decision making
– Implementation problems or unmet needs are identified
– Issues of sustainability, cost effectiveness or relevance
arise
– Recommendations for actions to improve performance
are needed
– Lesson learning is necessary for future activities
Evaluations
• Evaluation involves the same skills as assessment
and analysis
• Evaluation should be done impartially and ideally
by external staff
• Evaluation can also occur during (e.g. mid-term)
and after implementation of the project

One of the most important sources of information for evaluations is the data used for monitoring.

Sep 26, 2020 56
The OECD-DAC criteria
Organisation for Economic Co-operation and Development

• The Development Assistance Committee (DAC) evaluation criteria are currently at the heart of the evaluation of humanitarian action.
• The DAC criteria are designed to improve the evaluation of humanitarian action.
Evaluation looks at

• Relevance/Appropriateness: Doing the right thing in the right way at the right time.
• Connectedness (and coordination): Was there any replication, or gaps left in programming, due to a lack of coordination?
• Coherence: Did the intervention make sense in the context of the emergency and the mandate of the implementing agency? Are there detrimental effects of the intervention in the long run?
• Coverage: Who has been reached by the intervention, and where? (Linked to effectiveness.)
• Efficiency: Were the results delivered in the least costly manner possible?
• Effectiveness: To what extent has the intervention achieved its objectives?
• Impact: Doing the right thing, changing the situation more profoundly and in the longer term.
Example on General Food Distribution

• Relevance/Appropriateness: Doing the right thing in the right way at the right time.
Was food aid the right thing to do, rather than cash?
• Connectedness: Are there detrimental effects of the intervention in the long run?
Did food aid lower food prices? Did local farmers suffer from that?
• Coverage: Who has been reached by the intervention, and where? (Linked to effectiveness.)
Were those that needed food aid indeed reached?
• Efficiency: Were the results delivered in the least costly manner possible?
Was it right to import the food, or should it have been purchased locally? Could the results have been achieved with less (financial) resources? Food aid was provided; would cash have been more cost-effective?
• Effectiveness: To what extent has the intervention achieved its objectives?
Did food aid avoid undernutrition? (Assuming that was an objective.)
• Impact: Doing the right thing, changing the situation more profoundly and in the longer term.
Did the food aid prevent people from becoming displaced? Did people become dependent on food aid?
• Impact:
– Very much related to the general goal of the project
– Measures both positive and negative long-term effects, as well as intended and unintended effects.
GFD: did it lower general food prices, with long-term economic consequences for certain groups? Were people that received food aid attacked because of the ration (and therefore more deaths)?
– Need for baseline information (to measure results against)!
To evaluate projects well is a real skill!

And you often need a team…

M&E in emergencies?

Any project without Monitoring and/or Evaluation is a BAD project.

Help!
The “M” and the “E”…
Monitoring:
• Primary use of the data: project management
• Frequency of data collection: ongoing
• Type of data collected: information on process and effects
• Who collects the data: project staff

Evaluation:
• Primary use of the data: accountability; planning (future projects)
• Frequency of data collection: periodic
• Type of data collected: information on effects
• Who collects the data: external evaluators
Evaluations in Humanitarian Context
• Single-agency evaluation (during/after
project)
• There is an increasing move towards:
– Inter-agency evaluations: the objective is to
evaluate responses as a whole and the links
between interventions
– Real-time evaluations: carried out 8 to 12 weeks
after the onset of an emergency and are processed
within one month of data collection
Real-time evaluations (1)
• WHY?
Arose from concern that evaluations came too late to affect
the operations they were assessing
• Various groups of organizations aim to undertake real-time
evaluations
• Same purpose as any other evaluation
• Common characteristics:
– Takes place during the course of implementation
– In a short time frame

Real-time evaluations (2)
• It is an improvement-oriented review; it can be regarded
more as an internal function than an external process.
• It helps to bring about changes in the programme, rather
than just reflecting on its quality after the event.
• A real-time “evaluator” is a “facilitator”, working with staff
to find creative solutions to any difficulties they encounter.
• It helps to get closer to the people affected by crisis, which enables improved accountability to ‘beneficiaries’.
Monitoring & Evaluation systems
• Main components of M&E systems:
– M&E work plan for data collection and analysis,
covering baseline, on-going M&E
– Logical framework, including indicators and
means/source of verification
– Reporting flows and formats
– Feedback and review plan
– Capacity building design
– Implementation schedule
– Human resources and budget



Examples of data collection methods
for M&E
Quantitative methods:
• Administering structured oral or written interviews with closed questions
• Population-based surveys
• Reviewing medical and financial records
• Completing forms and tally sheets
• Direct measurement (anthropometry, biochemical analysis, clinical signs)
• Lot quality assessment

Qualitative methods:
• Semi-structured interviews, e.g. key informant
• Focus group discussions
• Observing
• Case studies
• Mapping, ranking, scoring
• Problem sorting, ranking
What is a Log Frame?
The logical framework or logframe is an analytical tool
used to plan, monitor, and evaluate projects.

Victim of a log frame?
Log Frames

IMPACT
↑
OUTCOMES
↑
OUTPUTS
↑
INPUTS

One impact may be supported by several outcomes, and each outcome may be delivered through several outputs.
Other terms that can be found in a logframe:

• The means of verification of progress towards achieving the indicators highlights the sources from which data is collected. Identifying the means of verification at this stage is useful, as discussions on where to find information or how to collect it often lead to reformulation of the indicator.

• Assumptions are external factors or conditions that have the potential to influence the success of a programme. They may be factors outside the control of the programme. The achievement of a programme’s aims depends on whether or not assumptions hold true or anticipated risks do not materialise.



Logical framework for M&E
Columns: Project description | Indicators | Source / means of verification | Assumptions / risks

Rows, with the if-then logic linking them (reading from the bottom up):
• Activities: if adequate RESOURCES/INPUTS are provided, then activities can be conducted.
• Deliverable outputs: if adequate ACTIVITIES are conducted, then OUTPUTS/RESULTS can be produced.
• Objectives / outcomes: if OUTPUTS/RESULTS are produced, then the OBJECTIVES are accomplished.
• Goal: if the OBJECTIVES are achieved, then this should contribute to the overall GOAL.
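As an illustration, the logframe rows and their if-then chain can be sketched as a small data structure. The class, field names and example entries below are hypothetical, not part of any standard logframe tool; the example content echoes the nutrition-education slides earlier in this deck.

```python
from dataclasses import dataclass

@dataclass
class LogframeRow:
    """One level of the logframe matrix (goal, objective, output, or activity)."""
    level: str
    description: str
    indicators: list[str]
    means_of_verification: list[str]
    assumptions: list[str]

# Hypothetical example, top row first (goal) down to activities
logframe = [
    LogframeRow("Goal", "Malnutrition rates amongst young children reduced",
                ["% reduction in child malnutrition"],
                ["Anthropometric surveys"],
                ["No new emergency disrupts the area"]),
    LogframeRow("Objective", "Young children get appropriate complementary food",
                ["% of children getting appropriate complementary food"],
                ["Household surveys"],
                ["Food remains available in markets"]),
    LogframeRow("Output", "Mothers know about good complementary food",
                ["Number of mothers trained"],
                ["Training attendance records"],
                ["Mothers are able to attend sessions"]),
    LogframeRow("Activity", "Nutritional education sessions for mothers",
                ["Number of sessions held"],
                ["Session reports"],
                ["Trainers and materials are in place"]),
]

# The if-then chain reads from the bottom row upward
rows_bottom_up = list(reversed(logframe))
for lower, upper in zip(rows_bottom_up, rows_bottom_up[1:]):
    print(f"If '{lower.description}' succeeds, then '{upper.description}' can follow.")
```

Keeping indicators, means of verification and assumptions attached to each row mirrors the four-column matrix above, so the same structure can drive both planning and later M&E reporting.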
Activities versus Results
Completed activities are not results.
• e.g. "a hospital was built" does not mean that injured and sick people can be treated in the hospital; maybe the hospital has no water and the beds have not been delivered.

Results are the actual benefits or effects of completed activities:
• e.g. injured and sick people have access to a fully functional health facility.
Logframe Examples
Key messages
• The monitoring of nutrition interventions in emergencies is an integral
part of saving lives and maintaining nutrition status of the affected
population.
• Successful monitoring systems allow for improvements in
interventions in ‘real time’.
• Evaluations are important tools for learning, assessing interventions, and comparing the costs of interventions with their impact. Essential evaluation parameters are: effectiveness; efficiency; relevance/appropriateness; impact; and coverage.
• Involving communities in M&E places the affected population at the
heart of the response, providing the opportunity for their views and
perceptions to be incorporated into programme decisions and
increases accountability towards them.
• A common mistake in designing M&E systems is creating a framework which is overly complex. Always make an M&E system practical and doable.
• The logical framework or logframe is an analytical tool used to plan,
monitor, and evaluate projects.
Monitoring for Community based
Management of Acute Malnutrition
(CMAM) interventions
• Types of monitoring, e.g.
– Individual case monitoring,
– Programme / activities monitoring



Individual monitoring for CMAM
• It is the basic follow-up of cases in SFP / OTP / SC services:
– Anthropometric / clinical assessment
• Tools for individual case follow-up include:
– Medical / nutrition and action protocols
– Individual follow-up card
– Referral forms
– ...



Objectives of monitoring
CMAM activities
• Assess service performance / outcomes
• Identify further needs
– Support decision-taking for quality improvement
(staffing, training, resources, site location,…)
• Contribute to the analysis of the general
situation
– Assessing the nutrition trends in the area



Methods and tools for monitoring
CMAM interventions
• Monthly / weekly reporting:
• Reporting needs to be done per site (service unit)
and compiled per area (district…) up to the
national level

• Routine supervision

• External evaluations
• Coverage surveys are one of the most important
tools for evaluation of CMAM interventions



Routine data collection for monitoring
CMAM interventions
• Routine data is collected for specified time periods:
– Number of new admissions
– Number of discharges (total and by category: cured, died, defaulted, non-recovered)
– Number of cases in treatment (number of beneficiaries registered at the end of the reporting period)

Data on admissions should be disaggregated by gender.



Admission categories and criteria (children 6–59 months):

Category: New admissions for children 6–59 months (or > 60 months but < 130 cm height)
Criteria: MUAC < 11.5 cm, or W/H < -3 Z-scores (WHO) or < 70% of median (NCHS), or bilateral pitting oedema grade + or ++; and the child is alert, has appetite, and is clinically well.

Category: Other new admissions
Criteria: Carer refuses inpatient care despite advice.

Category: Returned defaulter
Criteria: Child has previously defaulted and has returned to OTP (the child must meet admission criteria to be re-admitted).

Category: Readmissions/relapses
Criteria: A child treated in OTP until discharge after meeting discharge criteria, but who relapses, hence the need for readmission.

Category: Transfer from inpatient care (SC)
Criteria: From inpatient care after stabilisation treatment.

Category: Transfer from OTP
Criteria: Patients moved in from another OTP site.
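The new-admission criteria above combine an anthropometric test with a check that outpatient care is suitable. A minimal sketch of that logic follows; the function and parameter names are illustrative only, and real protocols handle borderline and referral cases in much more detail.

```python
def meets_otp_admission_criteria(muac_cm: float,
                                 whz: float,
                                 oedema_grade: int,
                                 alert: bool,
                                 has_appetite: bool,
                                 clinically_well: bool) -> bool:
    """Rough check of the 6-59 month OTP new-admission criteria from the slides.

    Anthropometric criterion: MUAC < 11.5 cm, or W/H < -3 Z-scores (WHO),
    or bilateral pitting oedema grade + or ++ (coded here as 1 or 2).
    The child must also be alert, have appetite, and be clinically well
    (otherwise inpatient care is indicated instead).
    """
    anthropometric = (muac_cm < 11.5) or (whz < -3) or (oedema_grade in (1, 2))
    outpatient_suitable = alert and has_appetite and clinically_well
    return anthropometric and outpatient_suitable

# Example: severely low MUAC, clinically well -> eligible for OTP admission
print(meets_otp_admission_criteria(11.0, -2.5, 0, True, True, True))  # True
```

Note how a child who meets the anthropometric criterion but is not clinically well fails this check: in the protocol above such a child would be referred to inpatient (SC) care rather than OTP.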



Discharge categories and criteria (children 6–59 months):

Category: Cured
Criteria: MUAC > 12.5 cm and WFH > -2 Z-scores and no oedema for two consecutive visits; and the child is clinically well.

Category: Defaulted
Criteria: Absent for 3 consecutive visits.

Category: Died
Criteria: Died during the time registered in OTP.

Category: Non-cured
Criteria: Has not reached discharge criteria within four months of treatment. Link the child to other programmes, e.g. SFP, IYCF, GMP, targeted food distributions.

Category: Transferred to SC
Criteria: Condition has deteriorated and requires inpatient care.

Category: Transfer to other OTP
Criteria: Child has been transferred to another OTP site.



Monitoring of CMAM interventions:
key indicators for SAM (Sphere)
• The proportion of discharges from therapeutic care should be:
– Recovered > 75%
– Deaths < 10%
– Defaulters < 15%
These are primarily applicable to the 6–59 month age group, although others may be part of the programme.
• Distance: > 90% of the target population is within less than one day’s return walk (including time for treatment) of the service / site.
• Coverage: > 50% in rural areas, > 70% in urban areas and > 90% in camp situations.
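As a rough sketch, the Sphere discharge thresholds above can be checked against the routine reporting data described earlier (discharges by category). The function name and input fields are illustrative, not part of any standard reporting tool, and the proportions are computed over all discharges in the period.

```python
def check_sphere_sam_thresholds(cured: int, died: int, defaulted: int,
                                non_recovered: int) -> dict:
    """Compare discharge proportions with the Sphere indicators for SAM:
    recovered > 75%, deaths < 10%, defaulters < 15% of all discharges."""
    total = cured + died + defaulted + non_recovered
    if total == 0:
        raise ValueError("no discharges in the reporting period")
    recovered_pct = 100 * cured / total
    death_pct = 100 * died / total
    defaulter_pct = 100 * defaulted / total
    return {
        "recovered_pct": recovered_pct,
        "death_pct": death_pct,
        "defaulter_pct": defaulter_pct,
        "meets_sphere": (recovered_pct > 75 and death_pct < 10
                         and defaulter_pct < 15),
    }

# Hypothetical reporting month: 160 cured, 8 died, 20 defaulted, 12 non-recovered
result = check_sphere_sam_thresholds(160, 8, 20, 12)
print(result["meets_sphere"])  # True (80% recovered, 4% deaths, 10% defaulters)
```

The same structure applies to the MAM (targeted SFP) indicators on the next slide, with the death threshold tightened from 10% to 3%.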



Monitoring of CMAM interventions:
key indicators for MAM (Sphere)
• The proportion of discharges from targeted SFP should be:
– Recovered > 75%
– Deaths < 3%
– Defaulters < 15%
These are primarily applicable to the 6–59 month age group, although others may be part of the programme.
• Distance: > 90% of the target population is within less than one day’s return walk (including time for treatment) of the programme site for dry-ration SFP, and no more than one hour’s walk for on-site wet SFP.
• Coverage: > 50% in rural areas, > 70% in urban areas and > 90% in camp situations.



Additional data for monitoring CMAM
interventions
Derived from routine monitoring and other sources:
• Average length of stay
• Average weight gain
• Relapse rate
• Distribution of admissions per type, per age, per origin...
• Causes of death
• Reasons for defaulting
• Investigation of non-recovery cases

Sources of data:
• Registration books
• Individual follow-up charts
• Interviews and focus group discussions
• Observation, home visits
• ...



M&E for CMAM interventions:
Supervision
Supportive supervision visits to sites are designed
to ensure / improve the quality of care offered by:
• Identifying weaknesses in the performance of activities,
taking immediate action and applying shared corrective
solutions
• Strengthening the technical capacity of health workers and
motivating staff through encouragement of good practices
Supervisors and managers ensure that the
performance of activities and organization of the
services meet quality standards.



Evaluation of SAM management
interventions
• Effectiveness: programme performance with a strong
focus on coverage
• Appropriateness: e.g. distribution and time of opening
of treatment sites
• Connectedness: relates to the links with health system
and shows levels of possible integration
• Cost-effectiveness has also been measured with various methods, showing large differences between contexts and approaches.



M&E of CMAM interventions:
population level assessments

• Community-level assessment can be done through:
– Repeated anthropometric surveys
– Programme coverage



Evaluation of coverage for CMAM
• Coverage is one of the most important
elements behind the success of the CMAM
approach.
– It is measured through studies using two main
approaches:
• The centric systematic area sampling (CSAS)
• The Semi-Quantitative Evaluation of Access and
Coverage (SQUEAC)
• Coverage should reach at least 90% of severe cases in camp situations, 70% in urban settings, and 50% in rural settings (Sphere standards)



Evaluation of management of MAM
interventions
• Same criteria as for all other interventions (relevance,
efficiency, etc.)
• SFP evaluations are rarely shared, but evidence
showed that defaulting and non-response are very
common
• There is a need to evaluate the use of Ready-to-Use Supplementary Food products in terms of efficiency: weight gain, effect on defaulting, ease of use for beneficiaries, etc.



Module 1:
Introducing
Development Evaluation
Introduction
• Evaluation, What Is It?
• The Origin and History of the Evaluation
Discipline
• The Development Evaluation Context
• Principles and Standards for Development
Evaluation



How Do You Define Evaluation?

OECD definition of evaluation:
• the process of determining the worth or significance of an activity, policy, or program
• an assessment, as systematic and objective as possible, of a planned, ongoing, or completed intervention

The Organisation for Economic Co-operation and Development (OECD)
Kinds of Evaluations

• Formative
• focus on improved performance before and during
implementation (project, program or policy)
• Summative
• focus on outcomes (results, consequences)
• Prospective
• assess the likely outcomes of proposed interventions
• Is this program/project/policy worth evaluating?
• Will the gains be worth the effort/resources expended?

IPDET © 2009 110


Purpose of Evaluation

•Ethical
•Managerial
•Decisional
•Educative and Motivational

IPDET © 2009 111


Uses of Evaluations
• Help make resource allocation decisions
• Help rethink the causes of a problem
• Identify emerging problems
• Support decision making on competing or best
alternatives
• Support public sector reform and innovation
• Build consensus on the causes of a problem and
how to respond

IPDET © 2009 112


What to Evaluate?

• Projects
• Programs
• Policies
• Organizations
• Sectors
• Themes
• Country assistance

IPDET © 2009 113


Evaluation Provides Information on:

• Strategy –
• Are the right things being done?
• Operations –
• Are things being done right?
• Learning –
• What are we learning?
• Are there better ways?

IPDET © 2009 114


Monitoring and Evaluation

• Monitoring
• routine, ongoing, internal activity of tracking key
indicators
• used to collect information on a program’s activities,
outputs, and outcomes to measure its performance
• Evaluation
• periodic and time-bound
• can be internal, external, or participatory
• provides periodic feedback to key stakeholders
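The routine, indicator-tracking character of monitoring described above can be illustrated with a minimal sketch; the indicator names, targets, and period values are hypothetical:

```python
# Routine monitoring: compare periodic indicator values against targets.
targets = {"children_screened": 500, "cases_admitted": 60}

def monitor(period, actuals, targets):
    """Return a simple target-vs-actual report for one reporting period."""
    report = {"period": period}
    for indicator, target in targets.items():
        actual = actuals.get(indicator, 0)
        report[indicator] = {
            "target": target,
            "actual": actual,
            "achievement_pct": round(100 * actual / target, 1),
        }
    return report

q1 = monitor("Q1", {"children_screened": 430, "cases_admitted": 66}, targets)
print(q1["children_screened"]["achievement_pct"])  # 86.0
print(q1["cases_admitted"]["achievement_pct"])     # 110.0
```

Evaluation, by contrast, would ask periodically whether these were the right indicators and targets in the first place, and what the results mean.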

IPDET © 2009 115


Who Conducts the Evaluation

•Internal evaluators
•External evaluators
•Participatory evaluators
•Each has advantages and disadvantages

IPDET © 2009 116


Evaluator Activities

• Consult with main client and all key stakeholders


• Reconstruct or construct theory of change
• Design the evaluation
• Manage evaluation budgets
• Perform or conduct the evaluation (or contract
staff to perform the evaluation)
• Identify standards for effectiveness
• Collect, analyze, interpret, and report on data and
findings

IPDET © 2009 117


Progression of Evaluation

• 2000 BC – Egypt and China
• 1600 – Modern evaluations
• 1880 – Social programs and education
• 1900 – Medical schools (US and Canada)
• 1940 – Reconstruction after WWII – Banks
• 1950s–60s – More routine (US and Europe)
• 1957 – Sputnik
• mid 1970s–80s – Professional status
• 1980s to now – Expansion into global activity

IPDET © 2009 118
Development Evaluation:

•A sub-discipline of classical evaluation


•Uses a variety of methodologies and
practices
•Mixed methodologies work best

IPDET © 2009 119


Origins of Development Evaluation

•Audit tradition
•Social science tradition

IPDET © 2009 120


Changing Development Concepts
Decade  Objectives             Approaches            Discipline
1950s   Reconstruction         Technical assistance  Engineering
1960s   Growth                 Projects              Finance
1970s   Basic needs            Sector investment     Planning
1980s   Adjustment             Adjustment lending    Neoclassical economics
1990s   Capacity               Country assistance    Multi-disciplinary
2000s   Poverty reduction      Partnerships          Results-based management
IPDET © 2009 121


Significant Changes

• From partial development to more


comprehensive development
• Toward global approaches to development
• From individual efforts to coordinated,
participatory development
• Toward using partnerships for large-scale
development challenges

IPDET © 2009 122


Growth of Professional Evaluation
Associations
•As of 2009, 35 associations
•Organizations create a support system
and allow for professionalism
•International Development Evaluation
Association (IDEAS)

IPDET © 2009 123


Development Assistance Committee (DAC)
Criteria for Evaluating Development
Assistance

• Relevance
• Effectiveness
• Efficiency
• Impact
• Sustainability
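One common way to operationalize the five DAC criteria is to rate an intervention against each of them. A minimal sketch follows; the 1–4 rating scale and the example scores are illustrative assumptions, not a DAC requirement:

```python
# The five DAC criteria for evaluating development assistance.
DAC_CRITERIA = ("relevance", "effectiveness", "efficiency",
                "impact", "sustainability")

def rate_intervention(ratings):
    """Check every DAC criterion is rated on a 1-4 scale; return the mean."""
    missing = [c for c in DAC_CRITERIA if c not in ratings]
    if missing:
        raise ValueError(f"unrated criteria: {missing}")
    for criterion, score in ratings.items():
        if not 1 <= score <= 4:
            raise ValueError(f"{criterion}: score {score} out of range 1-4")
    return sum(ratings[c] for c in DAC_CRITERIA) / len(DAC_CRITERIA)

# Hypothetical scores for one intervention.
avg = rate_intervention({"relevance": 4, "effectiveness": 3, "efficiency": 3,
                         "impact": 2, "sustainability": 3})
print(avg)  # 3.0
```

In practice many agencies report a narrative judgement per criterion rather than a numeric average; the point of the sketch is only that every criterion must be addressed.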

IPDET © 2009 124


Why Principles and Standards?

• Promote accountability
• Facilitate comparability
• Enhance reliability and quality of services provided

IPDET © 2009 125


DAC Principles for the Evaluation of
Development Assistance

• Purpose
• Impartiality and independence
• Credibility
• Usefulness
• Participation of donors and recipients
• Donor cooperation
• Evaluation programming
• Design and implementation of evaluations
• Reporting, dissemination, and feedback
• Application of these principles

IPDET © 2009 126


DAC Evaluation Quality Standards (test
phase)
• Rationale, purpose, and objectives
• Evaluation scope
• Context
• Evaluation methodology
• Information sources
• Independence
• Evaluation ethics
• Quality assurance
• Relevance of the evaluation results
• Completeness
• http://www.oecd.org/dataoecd/30/62/36596604.pdf

IPDET © 2009 127


Other Evaluation Standards

• American Evaluation Association:


• Standards http://www.eval.org/EvaluationDocuments/progeval.html
• AEA Guiding Principles
http://www.eval.org/Publications/GuidingPrinciples.asp
• African Evaluation Association:
• African Evaluation Guidelines
http://www.geocities.com/afreval/documents/aeg.htm
• Australasian Evaluation Society
• Guidelines for the Ethical Conduct of Evaluation
www.aes.asn.au/about/Documents%20-%20ongoing/guidelines_for_the_ethical_conduct_of_evaluations.pdf

IPDET © 2009 128


Evaluation Standards (cont.)

• Norms for Evaluation in the UN System


• http://www.unevaluation.org/papersandpubs/documentdetail.jsp?doc_id=21

• Standards for Evaluation in the UN System


• http://www.iom.int/jahia/webdav/site/myjahiasite/shared/shared/mainsite/about_iom/eva_techref/UNEG_Standards_for_Evaluation.pdf

IPDET © 2009 129


Evaluation Standards (cont.)

• IEG/DAC Sourcebook for Evaluating Global and Regional Partnership


Programs (GRPPs): Indicative Principles and Standards

IPDET © 2009 130


Evaluation and Independence

• Independent evaluation (OECD glossary):


• an evaluation carried out by entities and persons free of
the control of those responsible for the design and
implementation of the development intervention
• the credibility of an evaluation depends in part on how
independently it has been carried out

IPDET © 2009 131


Criteria of Evaluation Independence

• Organizational independence
• Behavioral independence
• Protection from external influence
• Avoidance of conflicts of interest

IPDET © 2009 132


A Final Note….

“Evaluation of the past is the first step towards vision for the future.”
-- Chris Widener, motivational speaker and author

Questions?
IPDET © 2009 133
