
M&E Overview

At the end of this session, participants will be able to:


- Understand the concepts of monitoring and evaluation
- Understand how to set goals and objectives
- Prepare an M&E plan

T. Bezabih

What Is Monitoring, and What Is Evaluation?


How are they different? How do they fit together?

Monitoring is the routine process of data collection and measurement of progress toward program objectives. It involves counting what we are doing and routinely looking at the quality of our services.

It is the regular follow-up of the implementation of planned activities, including documentation of project activities: the systematic and continuous process of tracking indicators to ensure that the project or program is proceeding according to plan, and of modifying the plan as necessary.

Monitoring answers questions such as:

- Are projected outputs being met?
- Are we heading in the right direction, and are we on schedule?
- Are the indicators appropriate?
- Did we identify the correct problem, and has this problem changed?
- Are the intervention strategies appropriate to the target population?
- What can be improved in our project?
- Are we utilizing resources efficiently?

It also reveals the strengths and weaknesses of the project and provides updates for stakeholders.

Monitoring:

- Assesses progress against set objectives/outputs
- Supervises implementation
- Assesses the effectiveness of implementation strategies
- Identifies new issues and/or unforeseen circumstances that may become obstacles
- Identifies necessary corrective measures (strategy modification)
- Verifies information first-hand for immediate feedback
- Strengthens relationships between collaborators (donors, implementers and beneficiaries)
- Serves as a motivation to implementers and beneficiaries
- Provides an opportunity to verify whether resources are being used effectively (cost-effectiveness)

Steps in developing and running a monitoring system:

1. Review existing information related to the project.
2. Make a conceptual framework of the project for monitoring.
3. Identify monitoring goals and objectives.
4. Identify indicators.
5. Determine which categories of workers, supervisors or others will be responsible for collecting each category of monitoring data.
6. Develop a timetable for the frequency of monitoring.
7. Develop or strengthen a management information system.
8. Develop monitoring instruments.
9. Conduct monitoring activities.
10. Analyze the monitoring data (see the sketch after this list).
11. Write a report and make recommendations.
12. Implement the recommendations.
13. Identify new indicators based on the recommendations.
14. Modify the monitoring system if necessary.
15. Continue to monitor.
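To make the "analyze the monitoring data" step concrete, here is a minimal sketch of the basic comparison monitoring performs: reported outputs against planned targets, flagging shortfalls for corrective action. The indicator names and figures are hypothetical, for illustration only.

```python
# A minimal sketch of analyzing monitoring data: comparing reported outputs
# against planned targets and flagging shortfalls. All names and figures
# are hypothetical.

monitoring_data = [
    # (indicator, planned target for the quarter, actual reported value)
    ("Clients counseled and tested", 1200, 950),
    ("Home-based care visits made", 800, 810),
    ("IEC messages delivered", 5000, 3200),
]

for indicator, target, actual in monitoring_data:
    achievement = actual / target * 100  # percent of target achieved
    status = "on track" if achievement >= 90 else "needs corrective action"
    print(f"{indicator}: {actual}/{target} ({achievement:.0f}%) - {status}")
```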

Evaluation is the use of social research methods to systematically investigate a program's effectiveness. Evaluation requires a study design; it sometimes requires a control or comparison group; it involves measurements over time; and it involves special studies.
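To make the comparison-group idea concrete, the sketch below works through a simple difference-in-differences calculation on entirely hypothetical figures: the program effect is estimated as the change in the intervention group over and above the change in the comparison group.

```python
# A hedged sketch of how a comparison-group design can be analyzed: a simple
# difference-in-differences on a behavioral outcome measured at baseline and
# endline. All figures are hypothetical.

# Proportion reporting the risk behavior (e.g., multiple partners)
baseline = {"intervention": 0.42, "comparison": 0.40}
endline = {"intervention": 0.28, "comparison": 0.37}

change_intervention = endline["intervention"] - baseline["intervention"]
change_comparison = endline["comparison"] - baseline["comparison"]

# The estimated program effect is the change in the intervention group
# over and above the change seen in the comparison group.
did_estimate = change_intervention - change_comparison

print(f"Change in intervention group: {change_intervention:+.2f}")
print(f"Change in comparison group:   {change_comparison:+.2f}")
print(f"Difference-in-differences estimate: {did_estimate:+.2f}")
```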

Why evaluate?

- It is a means of problem verification.
- It maximizes the utilization of resources.
- It identifies the strengths and weaknesses of the project.
- It provides information for planning and re-planning.
- It provides learning opportunities.
- It provides an opportunity for problem solving (strategy modification).
- It is a basis for maintaining and/or improving the existing strategy.
- It measures the effectiveness of the project/program.
- It checks whether the project was implemented according to the detailed plan/design.

Evaluation answers questions such as:

- Have the outcomes/objectives been met?
- What systems were actually in place?
- How effective were the strategies used to implement project activities?
- Were the needs met? Have the needs changed?
- What is the level of participation of the various stakeholders?
- What lessons have been learned from the project?

DIFFERENCES BETWEEN MONITORING AND EVALUATION


- Who: Monitoring is an internal management responsibility at all levels; evaluation usually incorporates external inputs (for objectivity).
- When: Monitoring is ongoing; evaluation is periodic (mid-term, completion, ex-post).
- Why: Monitoring checks progress, takes remedial action and updates plans; evaluation draws broad lessons applicable to other programs/projects, policy review, etc.
- Focus: Monitoring focuses on inputs, activities, outputs and process; evaluation focuses on outcomes and impact.

Common barriers to monitoring and evaluation:

- Lack of appreciation of the role of monitoring and evaluation
- Fear of finding mistakes; fear of failure
- Lack of transparency and accountability among project managers
- Lack of knowledge and skills in monitoring and evaluation
- The cost of re-designing the overall project
- Resistance to change by project staff
- Staff feeling overwhelmed by extra work and lack of time
- Restrictive budgets (lack of funds to accommodate monitoring and evaluation)
- Fear of piracy by external evaluators

Types of evaluation:

- Process evaluation
- Mid-term evaluation
- Impact evaluation
- Summative evaluation


For each type of evaluation, ask: What is it? Why is it done? When is it done? Who conducts it? How should the findings be used?

Examples of M&E in practice:

- The Global Fund wants to know how many PLHIV have been reached by your HBC program this year.
- A local community-based organization wants to start addressing HIV in its community with a comprehensive BCC program, and begins by collecting key pieces of information to find out who is most in need of the services.
- After a year of running your program, you want to know whether the budget is being spent in the most efficient way.
- NAP+ is interested in finding out whether the HBC services provided are being carried out according to national standards of quality.
- HAPCO wants to know whether the programs being carried out in Amhara Region are changing the risk behavior of having multiple sexual partners in the Region.

The core of any M&E system is the goals and objectives of the program to be monitored and evaluated. If the program goals and objectives are written in such a way that they can be easily distinguished from one another and measured, the job of the M&E specialist will be much easier.

A goal is a general statement that describes the hoped-for result of a program (e.g., a reduction in HIV incidence). Goals are achieved over the long term and through the combined efforts of multiple programs.


An objective is a specific, operationalized statement detailing the desired accomplishment of the program. A properly stated objective is action-oriented: it starts with the word "to," followed by an action verb. Objectives address questions of "what" and "when," but not "why" or "how." Objectives are stated in terms of results to be achieved, not processes or activities to be performed.

A SMART objective meets five criteria:

- (S) Specific: Does it cover only one activity rather than several?
- (M) Measurable: Can it be measured or counted in some way?
- (A) Attainable: Is the objective actually doable? Can we achieve it?
- (R) Relevant: How important is this objective to the work that we are doing? How relevant is it to achieving our goal?
- (T) Time-bound: Does the objective give a timeframe by when it will be achieved, or during which the activity will occur?
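As an illustration only, the mechanical parts of a SMART check can be captured in a simple record like the one below. This is a minimal sketch: the field names and the example objective are hypothetical, and attainability and relevance still require human judgment.

```python
# A minimal sketch of recording an objective with its SMART elements so an
# M&E plan can flag missing pieces. Field names and the example objective
# are hypothetical.

from dataclasses import dataclass

@dataclass
class Objective:
    statement: str          # action-oriented, starts with "to"
    measure: str            # how it will be counted (M)
    target_population: str  # who it covers (part of S)
    deadline: str           # timeframe (T)

def smart_gaps(obj: Objective) -> list[str]:
    """Flag only the mechanical SMART elements that can be checked automatically."""
    gaps = []
    if not obj.statement.lower().startswith("to "):
        gaps.append("statement should start with 'to' plus an action verb")
    if not obj.measure:
        gaps.append("no measurable indicator defined")
    if not obj.deadline:
        gaps.append("no timeframe given")
    return gaps

obj = Objective(
    statement="To reach 2,000 PLHIV with home-based care services",
    measure="number of PLHIV receiving at least one HBC visit",
    target_population="PLHIV in the program catchment area",
    deadline="by December 2025",
)
# Attainability (A) and relevance (R) must still be reviewed manually.
print(smart_gaps(obj) or "No mechanical gaps found.")
```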

When formulating objectives, start with the outcome of the intervention (its purpose), then define the outputs, then the activities that will achieve those outputs, and finally the impact.

MONITORING & EVALUATION INDICATORS

An indicator is the quantitative or qualitative evidence that will be used to assess progress towards an objective. Indicators provide the basis for monitoring progress and evaluating the achievement of outcomes.

CLASSIFICATIONS OF INDICATORS

- Input indicators measure all the investment and recurrent-cost resources (e.g., human, financial, facility, equipment, supplies) needed to enable the activities to be delivered.
- Process indicators attempt to set standards for the quality of the activities to be carried out, such as appropriate training methods or an adequate supervision plan.

- Output indicators measure the results of inputs and processes at two levels:
  - Operations: the level at which the program is functioning, e.g., the number of services delivered, medicines provided, or IEC messages delivered.
  - Performance: how well the program is functioning in terms of its objectives, e.g., the adequacy of outputs (such as services delivered) in terms of quality, equity, utilization/access, cost-effectiveness, efficiency and sustainability.

- Outcome indicators measure the main changes in the program population that are expected in the short term, such as improved knowledge, attitudes or behaviors.
- Impact indicators measure the long-term effects that the program aims to produce, such as a reduction in HIV prevalence.

Practical Examples
Examples of outcomes, indicators, baselines and targets for prevention (Table 2) and for treatment (Table 3).
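The arithmetic behind baseline/target tables like these is simple; the following sketch, using hypothetical values, expresses the latest measured value as the share of the baseline-to-target distance covered so far.

```python
# A hedged sketch of baseline/target arithmetic: percent progress from
# baseline toward target. The indicator and all values are hypothetical.

def progress_toward_target(baseline: float, target: float, current: float) -> float:
    """Percent of the baseline-to-target distance covered so far."""
    return (current - baseline) / (target - baseline) * 100

# e.g., % of eligible PLHIV on ART: baseline 35%, target 80%, currently 53%
print(f"{progress_toward_target(35, 80, 53):.0f}% of the way to target")  # prints 40%
```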

FEATURES OF A GOOD INDICATOR


- Valid: actually measures the phenomenon (e.g., self-reported vs. facility-based data)
- Reliable: produces the same results when used more than once to measure precisely the same phenomenon
- Specific: measures only the phenomenon it is intended to measure
- Sensitive: reflects changes in the state of the phenomenon
- Operational: measurable, with developed and tested definitions and reference standards

TYPES OF INDICATORS
- PROCESS indicators show whether the activities that were planned are actually being carried out, and carried out effectively.
- IMPACT indicators assess what progress is being made towards reaching the objectives, and what impact the work has had on the different groups of people affected by it.
- QUANTITATIVE indicators involve the definition of numerical measures, e.g., the number of meetings attended.
- QUALITATIVE indicators refer to defining characteristics that cannot be quantified, for example changes in behaviour or people's perceptions.
- PROXY indicators measure things that represent (or approximate) changes that cannot be measured directly.

CHOOSING INDICATORS

Select among possible indicators based on:

- the level of the system you're measuring (inputs, process, outputs, outcomes)
- data availability and quality
- the cost of collecting the data
- comparability with other programs and projects

CRITERIA FOR DEVELOPING AND SELECTING INDICATORS

- Result-oriented: should focus on measuring the results expected from the program.
- Direct: should be as direct as possible; proxy indicators should be preferred only when direct indicators are not feasible or are difficult to use.
- Objective: can be understood without differing interpretations.
- One-dimensional/Independent: output indicators cannot be used to prove achievement of the purpose, nor purpose indicators achievement of the goal.
- Quantitative (whenever possible): there are, however, instances where qualitative indicators are desirable or even more useful.

- Disaggregated (whenever possible): e.g., by sex, age, geographical location, education level, etc. (see the sketch after this list).
- Simple/Unambiguous: simple and clearly defined in the program's context.
- Valid/Consistent: the values of the indicator should stay valid as long as they are collected under identical conditions, no matter who does the collecting.
- Specific: should measure the specific conditions that the program aims to change.
- Sensitive: should be highly sensitive to changes in the program situation.

- Cost-effective/Easy to collect: identify and select indicators for which the data, time and resources (budget) are available.
- Relevant/Reliable: should be relevant to the program objectives and measure what the program is expected to achieve.
- Timely: it should be possible to collect the data reasonably quickly, at the time it is needed.
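As a small illustration of the "disaggregated" criterion above, the sketch below tallies a hypothetical output indicator (clients served) by sex and by region; the records and field names are invented for the example.

```python
# A minimal sketch of disaggregating an output indicator (clients served)
# by sex and by region. Records and field names are hypothetical.

from collections import Counter

records = [
    {"sex": "F", "region": "Amhara"},
    {"sex": "M", "region": "Amhara"},
    {"sex": "F", "region": "Oromia"},
    {"sex": "F", "region": "Amhara"},
    {"sex": "M", "region": "Oromia"},
]

by_sex = Counter(r["sex"] for r in records)
by_region = Counter(r["region"] for r in records)

print("Clients served, by sex:", dict(by_sex))        # {'F': 3, 'M': 2}
print("Clients served, by region:", dict(by_region))  # {'Amhara': 3, 'Oromia': 2}
```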

DATA GATHERING, ANALYSIS, AND REPORTING STRATEGY

The HIV/AIDS Monitoring and Evaluation Framework should clearly define:

- Indicators for the specific HIV/AIDS intervention areas
- Sources of data to generate the indicators
- Measurement tools
- Frequency of data collection
- The body responsible for data collection, compilation, analysis and reporting at each level

Suggested reporting schedules for EMSAP II and GF projects:

Level of indicator | Recommended reporting frequency | Example data collection methods
Input/process | Continuously | Health and non-health service statistics; program monitoring
Output | Quarterly, semi-annually, or annually | Health and non-health service statistics; program monitoring
Outcome | Every 1 to 3 years | Population-based surveys; health facility surveys; special studies
Impact | Every 2 to 5 years | Surveillance; population-based surveys; special studies
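One way such a schedule could be encoded in an M&E information system is as a simple lookup table, sketched below. The frequencies follow the table above; the data structure itself is an assumption for illustration, not a prescribed format.

```python
# A hedged sketch of the reporting schedule above as a lookup table.
# The structure is hypothetical; frequencies and sources follow the table.

REPORTING_SCHEDULE = {
    "input/process": {"frequency": "continuous",
                      "sources": ["service statistics", "program monitoring"]},
    "output":        {"frequency": "quarterly to annually",
                      "sources": ["service statistics", "program monitoring"]},
    "outcome":       {"frequency": "every 1-3 years",
                      "sources": ["population-based surveys", "facility surveys",
                                  "special studies"]},
    "impact":        {"frequency": "every 2-5 years",
                      "sources": ["surveillance", "population-based surveys",
                                  "special studies"]},
}

for level, plan in REPORTING_SCHEDULE.items():
    print(f"{level}: report {plan['frequency']} using {', '.join(plan['sources'])}")
```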


The proposal submitted to the Global Fund by NAP+ has five major core areas: ART adherence literacy and counseling, home-based care, job creation, food and nutrition support, and capacity building. Break into five groups and write a goal and three SMART objectives for each of the core areas for the M&E system.

Careful selection of the questions you want answered through monitoring and evaluation will greatly help you develop your M&E processes and work plan. At the outset of the planning process, program managers should ask themselves where they want the program to take them. Many of these questions will be reflected in the goals and objectives.

- Was the activity carried out as planned?
- Did it reach its target beneficiaries?
- Did any changes in exposure to HIV infection result?
- How will the risk behaviors of the target population be affected?
- What sort of coverage do you expect to have?
- Did STI/HIV incidence change?
- How much did it cost?

Refer back to your previous group activity where you developed a goal and a set of objectives. Now look at these goals and objectives and come up with at least three monitoring questions and at least two evaluation questions.

