
EVALUATION MANUAL

LEARNING FROM EXPERIENCE

Monitoring and Evaluation Unit, Médecins Sans Frontières, Amsterdam, June 1999


INDEX

1. PREFACE
2. THE EVALUATION FRAMEWORK
   2.1. Definition of Evaluation
   2.2. The evaluation criteria
   2.3. The logical framework
   2.4. Purpose and Principles
   2.5. Focus
3. DIFFERENT APPROACHES FOR THE EVALUATION PROCESS
   3.1. Self-evaluations
   3.2. Evaluations done by external evaluators
4. NINE STEPS FOR THE EVALUATION PROCESS
   4.1. Initiating the evaluation
   4.2. Defining the purpose
   4.3. Defining the scope
   4.4. Defining the key questions
   4.5. Choosing the methodology
   4.6. Selecting the evaluators
   4.7. Planning the evaluation
   4.8. Follow-up
   4.9. Defining the budget
ANNEX 1  FORMAT END OF PROJECT (CYCLE) EVALUATION REPORT
ANNEX 2  THE TERMS OF REFERENCE
ANNEX 3  EXAMPLES OF EVALUATION KEY-QUESTIONS


1. PREFACE

This is the first edition of the MSF-Holland Evaluation Manual, the final volume in a series of management-related manuals.[A] It is meant to ensure that evaluations become institutionalised and firmly integrated in the management cycle. The manual provides MSF staff with a theoretical basis as well as practical tools to plan and carry out their own evaluations.

Effective evaluation in MSF-Holland requires familiarity with the logical framework approach and depends on good management: proper operational planning based on sound assessments, adequate monitoring and explicit policy development. This manual builds further on the insights gained while developing the monitoring system. The evaluation framework and the tools have been successfully applied in the series of evaluations coordinated by the M&E unit.

Evaluating our work allows for more in-depth learning from our experiences and improved accountability. Structured analysis of the process and results of our projects should radically improve our learning capacity and help retain institutional operational experience. This will contribute to improving our performance in providing medical aid linked with advocacy.

We will continue to develop new insights as the tools are used in the organisation, and we will continue to rely on your feedback to ensure that the MSF-Holland evaluation system remains a useful aid to your work.

Monitoring and Evaluation Unit, June 1999

A. Manual for Exploratory Missions and Rapid Assessments, MSF-Holland. 1st ed. September 1999.
   Toward a Transparent Approach of Emergency Interventions, MSF-Holland. 1st ed. February 1995.
   Manual for Project Planning, MSF-Holland. 1st ed. November 1995.
   Monitoring in MSF Holland, Introduction and Tools. 2nd ed. April 1999.


2. THE EVALUATION FRAMEWORK

2.1. Definition of Evaluation

The OECD/DAC definition for evaluating development assistance provides the basis for the evaluation of humanitarian assistance:[B]

"An evaluation is an assessment, as systematic and objective as possible, of an ongoing or completed project, programme or policy, its design, implementation and results. The aim is to determine the relevance and fulfilment of objectives, efficiency, effectiveness, impact and sustainability. An evaluation should provide information that is credible and useful, enabling the incorporation of lessons learned into the decision-making process of both recipients and donors."

This definition has been adjusted to make it more applicable to humanitarian interventions. A number of additional evaluation criteria, proposed during a symposium organised by MSF-Holland in 1996, have been accepted for this purpose: appropriateness to complement relevance; coherence and coverage to complement effectiveness and efficiency; and connectedness instead of sustainability.[C] These criteria are discussed further in section 2.2.

The OECD/DAC definition implies that all aspects need to be taken into account in every evaluation. In MSF, we interpret the definition more flexibly and apply it according to the circumstances and needs, both in terms of comprehensiveness and approach. This is explained further in chapter 3.

2.2. The evaluation criteria [D]

Appropriateness: Usually three types of appropriateness are distinguished, which can be looked at separately: 1. whether the intervention was appropriate in relation to MSF policies (and vice versa); 2. whether it was appropriate according to the perception and/or demands of the target population; and 3. whether it was appropriate according to national policies. The timeliness of the response(s) is sometimes treated as a separate criterion.

Connectedness: Looks at whether activities of a short-term emergency nature were carried out in a context that takes longer-term problems into account. Even though sustainability is seldom planned and often not realistic in emergency interventions, we do have to take a longer-term perspective into account. This can be done, for instance, by preparing the programme in such a way that another organisation can take over, or that national staff or the local health authorities can manage the program with minimum dependency on MSF. It is also done by taking local resources and coping mechanisms into account in our planning and reinforcing them, for example through training. Building on this capacity can help to restore dignity and facilitates a smooth transition towards development once the crisis is over.

B. Organisation for Economic Co-operation and Development/Development Assistance Committee. Principles for Evaluation of Development Assistance, 1991.
C. Borton J. A Framework for Considering Humanitarian Aid Evaluation. Mini-Symposium on Evaluations and Impact Studies of Humanitarian Relief Interventions, MSF-H, Amsterdam, 1996.
D. Hallam A. Evaluating Humanitarian Assistance Programmes in Complex Emergencies. Good Practice Review No. 7, Relief and Rehabilitation Network (Overseas Development Institute), London, 1998; pp. 52-57.


Effectiveness: Measures the extent to which the project purpose was achieved and whether this happened on the basis of the activities and inputs of goods and services.

Impact: Looks at the wider effects on the target population or the country in general, intended or unintended, positive and negative, both in the short and the long term. Impact in our work is usually defined in terms of epidemiological health indicators like disease incidence or mortality rates.[E] Other areas of impact include, for example, changes in the underlying causes of human rights violations. Often it is not possible to collect impact data in a reliable way due to the nature of an emergency. Even when data are available, it is difficult to estimate the contribution of a single intervention to any changes identified, as these results are determined by multiple factors. Nevertheless it is feasible to arrive at conclusions on presumed impact through deduction based on the examination of other evaluation criteria: if an intervention was appropriate, if it addressed substantial needs, if it was implemented according to accepted standards and/or if it had high coverage, it is most likely that there was positive impact.

Coherence: Looks at whether the activities were carried out with an effective division of labour among the actors, maximising the comparative advantage of each, avoiding gaps and overlap, and acknowledging the responsibilities of all involved. The co-ordination of the response(s) is sometimes treated as a separate criterion.

Coverage: Concerns the extent to which project activities reached the specific target population and/or to what extent the beneficiaries had access to our services.

Efficiency: Looks at the relation between the verifiable outputs, qualitative and quantitative, and the inputs (human, material and financial resources). It is mainly used to place a value on the implementation process. The term managerial efficiency is sometimes used to look at, for instance, the decision-making process, human resource management, logistics, financial management, etc.

Cost-effectiveness: Looks at the relation between costs and effects or impact. Typically, it describes the intervention in terms of costs per patient cured, costs per life saved or costs per Disability Adjusted Life Year (DALY) averted. It is seldom feasible to do these types of cost-effectiveness analyses, as they are limited by the same constraints as described under impact. Other methods include describing costs per output, such as cost per child immunised.

2.3. The logical framework

The planning, based on the logical framework, is used as the reference for most evaluations. Each of the evaluation criteria tends to focus on a particular relationship within the logical framework.[F] For example, coherence looks at the relation between our intervention and the other actors defined under assumptions and/or preconditions. Impact looks at the relation between our achieved results and the influence they had on the overall objective and on the underlying causes of human rights violations as defined in the assumptions.

The LogFrame describes the intended project activities, the effects we planned to achieve, the relation to other actors (preconditions) and the elements in the environment that could influence our project (assumptions) or on which we were dependent (preconditions). Indicators should have been formulated for the specific objectives (activities or outputs), the project purpose (the results or outcome) and the overall objectives (impact).
E. Perea W. The use of epidemiological tools for evaluating impact in health interventions. Mini-Symposium on Evaluations and Impact Studies of Humanitarian Relief Interventions, MSF-H, Amsterdam, 1996.
F. Royal Ministry of Foreign Affairs, Norway. Evaluation of Development Assistance. November 1993.


The indicators are often based on standards as defined in our guidelines. This enables us to answer basic questions like "did we do what we intended to do?" and "did we do it in the right way (quality)?". Recently, the Sphere Project has formulated a set of minimum standards for humanitarian assistance that can be used in addition to our guidelines.[G]
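To illustrate how the criteria map onto the LogFrame, the sketch below shows a simplified, hypothetical LogFrame excerpt; the project, indicators and figures are invented for illustration only and are not taken from an actual MSF project plan:

   Overall objective (impact):   Mortality among the displaced population in district X is reduced.
      Indicator:                 crude mortality rate below 1/10,000/day.
   Project purpose (outcome):    The displaced population has access to basic curative care.
      Indicator:                 at least 80% of the target population lives within 5 km of a functioning clinic.
   Specific objective (output):  Two health posts are operational and staffed.
      Indicator:                 number of consultations per clinician per day.
   Assumptions/preconditions:    security allows access; other agencies cover water and sanitation.

In such an example, coverage would examine the purpose-level indicator, coherence the precondition concerning other agencies, and impact the indicator at the level of the overall objective, in line with the relationships described above.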

2.4. Purpose and Principles

The main purposes of evaluations are learning and accountability. These purposes are complementary, and all evaluations incorporate both aspects: one cannot learn if no account is given, and accountability only has added value if lessons are drawn from it. By indicating successes as well as mistakes, and communicating them openly among all stakeholders, we ensure that the former are consolidated and the latter avoided in future responses. All stakeholders can react to the intentions and perceived results, which enhances learning and engagement.

Evaluations are sometimes referred to as an action-oriented management tool.[H] Receiving evaluation information creates an obligation to act on its operational and strategic implications. As explained in the Monitoring manual, the type of information and the questions asked are similar in the monitoring and the evaluation process. The essential difference is that the questions can now be answered with the advantage of hindsight, and one can verify what has actually been accomplished. The time available for evaluations allows for more reflection and in-depth analysis of our activities.

Evaluating humanitarian assistance has ethical implications. We need to be aware that our understanding and beliefs about what is right and what constitutes good quality are determined by our background. As such, the evaluation framework itself reflects the ethical values we hold.

Evaluations should not and cannot compensate for absent or inadequate planning and monitoring. Evaluations cannot be used to settle disagreements or conflicts between different management levels. When evaluating, we primarily look at activities and events as they evolved over time. From this, conclusions are drawn regarding our performance. We look at the roles of the various people involved at the different management and support levels, not at their individual performance.

2.5. Focus

Those responsible at various levels for project planning, annual planning of the country program or policy development are also primarily responsible for initiating evaluations. Based on these planning and policy tools, evaluations are complementary to monitoring tools like the monthly project reports and the 4-monthly reports. Evaluation results are used as input for making choices or decisions. One should first ask whether these decisions can be made based on the information already available, as collected through the monitoring process, and, when the available information is insufficient, whether an evaluation is the most appropriate tool to collect it.

G. The Sphere Project. Humanitarian Charter and Minimum Standards in Disaster Response. 1998.
H. Planning and Organising Useful Evaluations. UNHCR Inspection and Evaluation Service, January 1998.


Evaluations in MSF commonly focus on the operational level (a single project, or all projects in a country: a country program evaluation) or on the policy level (the country policy, the Mid Term Policy, or a thematic policy such as TB, floods, AIDS, etc.). Nevertheless, a project evaluation should consider the country policy, to verify whether the project translated the values of MSF and whether it was the right thing for MSF to do. An evaluation focused on the country policy should likewise look at the projects implemented to realise this policy. In MSF-Holland, three types of evaluations are distinguished:[I]

1. Project Management Cycle Evaluations: These are routine evaluations integrated into the project management cycle. The primary responsibility lies with the project co-ordinator. The evaluation process focuses on the same areas as the monitoring process: project purpose, objectives and indicators in the logical framework in relation to the project environment (see the Monitoring manual). It makes use of the accumulated monitoring information at the project management level. Annex 1 shows a basic format for the End of Project (Cycle) Evaluation report.

2. Country Program Evaluations: A start has been made to systematically evaluate all programs of MSF in every country. The primary responsibility lies with the country manager. There is no fixed interval or period after which such evaluations should take place, but there is usually a specific reason to initiate this type of evaluation. This could be, for example, to identify whether or not the situation is such that MSF could leave the country based on the exit strategy, or when it is unclear whether MSF is achieving the objectives as stated in the country policy.

3. Thematic Evaluations: Thematic evaluations are usually not part of a particular planning cycle, contrary to the previous types. In principle they can focus on any topic. Preconditions are that: 1. we have (field) experience with the topic to be evaluated (this distinguishes them from operational research); 2. the theme needs attention (e.g. because a decision needs to be made at MT level and more insight is needed to do so); and 3. the theme has strategic importance. They can, for example, be used to investigate similar projects in various countries (such as AIDS or TB projects) or strategies like capacity building, advocacy, emergency preparedness and natural disaster policy, but also issues like strategic presence in a country. Thematic evaluations are often initiated when there are indications that organisational policies need to be revised or developed. Evaluations of organisational processes (e.g. the functioning of the International Emergency Team, the international Head of Mission, etc.) are outside the scope of this manual.

I. This typology is not exhaustive and evaluation in MSF-H can take many forms, including combinations of the types mentioned above.


3. DIFFERENT APPROACHES FOR THE EVALUATION PROCESS

There are different approaches to the evaluation process; the constant factor is the systematic use of the logical framework and the evaluation framework as references. There are no set rules for the approach. The intention is to use the process in a flexible way, adapted to the information needs. One should choose the approach that best fits the purpose of the evaluation and the circumstances under which it takes place. The approaches listed below are not exhaustive, and evaluation in MSF-H can use many different approaches, including combinations of the types mentioned below.

3.1. Self-evaluations

A. By a line-manager: The most basic form of self-evaluation is when the person responsible at any level for planning or policy development evaluates his/her own activities. The outcome of such an evaluation is not very different from the final report made during the monitoring process. Although this approach costs minimal additional resources, it is not recommended, as it lacks any interaction with other stakeholders (limited learning) and is not very objective (limited accountability).

B. As a team workshop: A better method of self-evaluation is when the process is planned together with the team responsible for the implementation. A good way to allow for reflection on achievements is to organise a team workshop, in order to free people from their daily work obligations.

C. Externally facilitated workshop: Such self-evaluation workshops can also be facilitated by someone who was not involved in the implementation. This has the added value that all involved, including the line-manager, can participate freely in the discussions. This is particularly helpful when there are strongly differing opinions in the team. Facilitation can be done by support staff from HQ. This approach takes more time from all involved, but the learning component is strong and it contributes to team-building.

D. Together with counterparts or beneficiaries: Self-evaluation together with representatives of the national health authorities and/or the target population allows us to learn about perspectives we cannot otherwise know. This approach also allows us to be accountable to our target population.

3.2. Evaluations done by external evaluators

For this approach, the responsibility for the collection and analysis of the data, and for reporting, is delegated to an external person who was not involved in the project planning or implementation, nor in policy development. This requires less time from the people involved in the programme.

A. By an internal MSF evaluator: The advantage of an internal evaluator is that this person knows the organisation and its policies. The organisational learning aspect will be strong and accountability higher compared to self-evaluations.

B. By an external evaluator: This can be useful when a particular expertise is not available within MSF. Using an external evaluator provides an objective second opinion.


4. NINE STEPS FOR THE EVALUATION PROCESS

For every project or program at the end of its planning cycle, the manager responsible for it should go through the evaluation process. If there are indications that a policy needs to be revised, it should be properly and systematically evaluated before doing so. Planning and implementing the steps of the evaluation is an interactive process between the manager who asked for it, all MSF staff involved and the evaluator(s). When indicated, counterparts or (representatives of) beneficiaries should be involved. The main points of the evaluation are written down in the Terms of Reference (TOR), which also serve as a planning tool for the evaluator (see annex 2). It is important to allow for adjustments along the way, depending on possible unexpected findings, constraints or opportunities. Major changes in the TOR need to be approved by the person responsible for the evaluation.

4.1. Initiating the evaluation

Evaluations are either planned routinely within the management cycle or requested for a specific reason. It should be clear who is directly responsible. Although in principle anyone can request an evaluation, it has to be commissioned by someone within the line-management.[J] This also ensures proper follow-up. This step is also used to define who is involved in the evaluation process and to determine their roles. Project team members, national staff, the CMT, support staff from HQ, support sections, as well as our counterparts and beneficiaries, are potential participants who need to be involved or at least be well informed.

4.2. Defining the purpose

It is important to be very clear about the purpose(s) of the evaluation. This can be achieved by describing the decisions or management choices that need to be made and for which the evaluation will provide the necessary information. Often these will be straightforward operational (project planning) decisions: continuing, stopping or handing over the project activities, and why, how and when this should be done. There may be a need to revise or develop operational policies, or to reconsider the position and role of MSF in a specific country or region (country policy). It is important that all involved are clear and open about the purpose(s). When there is disagreement about the purpose, this has to be sorted out before continuing the evaluation. The transparency thus achieved enhances the usefulness of the evaluation, and allowing sufficient time for this step is a wise investment. At this point, the general approach for the evaluation is also chosen (see chapter 3).

J. Managers referred to in this document include Operational Directors, Country Managers and Project Coordinators.


4.3. Defining the scope

The evaluation needs to focus on those aspects of the project, program or policy for which additional information and analysis are required to make decisions or choices. This can be achieved by emphasising any combination of evaluation criteria and/or the project's implementation process, its inputs, outputs or results. The scope is usually determined by noting which aspects of the project were unclear, debated, or went particularly well. It can be further narrowed down by describing the period and the geographical areas to be evaluated.

4.4. Defining the key-questions

Once the scope is clear, this can be translated into more concrete questions that need to be answered by the evaluation. Usually 3 to 5 key questions are formulated under each evaluation criterion included in the scope. By keeping the questions short and simple under those criteria that will receive less emphasis, the evaluation team can avoid spending too much time on them. Examples of frequently asked key-questions can be found in annex 3.

4.5. Choosing the methodology

When all the above is clear, it becomes relatively simple to choose the methodologies and determine the information sources that will provide the answers. Usually a mix of qualitative and quantitative methods provides the best results. The choice will always be a trade-off between the desired accuracy/precision of the outcomes (usually referred to in social research as reliability) and the limitations in time and resources. Whatever methodologies are chosen, it is important to use different sources of information to allow for cross-checking. Conclusions in the evaluation report require references to the methods and sources used to arrive at the findings.

A standard method is the review of the MSF information sources that were used for planning and monitoring, such as assessment reports, project plans, proposals, annual plans, country policies and project reports. Complementary information can be obtained by interviewing MSF staff (expat and national), stakeholders (including beneficiaries) and representatives of national and international organisations. Other methods include direct observations, team meetings, workshops or focus group discussions, analysis of available epidemiological surveillance data, coverage or KAP surveys, and PRA techniques.

4.6. Selecting the evaluators

It will now be possible to decide who will actually perform or facilitate the evaluation. This can range from the CM, the PC or someone from the support departments to someone from outside the organisation, or a combination of these. Whenever the purpose requires an external evaluator, it is advisable to draw up a job profile. Depending on the scope of the evaluation, the key questions and the methodology, one can define the evaluator's required language skills, areas of expertise and necessary prior experience. It should be kept in mind that the ideal evaluator rarely exists or, if found, happens to be unavailable. Communication skills, flexibility, and the ability to listen and to offer constructive criticism that balances strong and weak points are essential prerequisites. As the report will provide the basis for institutional memory (and thus learning for future generations), good writing skills are crucial.


4.7. Planning the evaluation

The evaluator will usually start with preparation at HQ or in the country capital. The TOR can be fine-tuned at this stage based on feedback from the evaluator. An outline of the desired final report is made. Discussions are held with the CM and PC to arrange a travel schedule in the field and to make appointments with key resource persons. At the end of the field visit, the evaluator will present and discuss the preliminary findings with the team. The evaluator may return to HQ for final interviews and will start writing the first draft of the report. This draft is circulated among all staff involved to correct factual errors, but not findings or opinions, after which the final report is written. Allow sufficient time for the reporting period. The findings will be presented and discussed at HQ and/or in the field.

4.8. Follow-up

Following up on evaluations is probably the most challenging issue in evaluation. The line-manager who commissioned the evaluation is primarily responsible. There is an obligation to act on the operational and strategic implications of the evaluation. After consultation with all involved, (s)he should provide formal feedback on the outcomes and recommendations of the evaluation. The decisions or choices to which the evaluation was meant to provide input should now be made and implemented. This is usually reflected in a new or revised planning document; approved policy changes should be reflected in, for example, a revised country policy. The response to recommendations can then be assessed through the regular monitoring process.

4.9. Defining the budget

If the evaluation is part of the project management cycle, funds for the evaluation should have been included in the original project proposal. When done as a self-evaluation, additional costs are low. If an internal MSF consultancy is required for facilitating or doing the evaluation, at least 260 DFL per day plus travel costs need to be taken into account. For external consultancies, the following costs need to be taken into account: travel costs to and within the country; daily fees for external consultants (500-1.500 DFL, average 750 DFL); living expenses; communication costs (500-1.000 DFL); printing costs for the report (500-750 DFL); and 5% for contingencies.
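For illustration only, the calculation below sketches a possible budget for a hypothetical 20-day external consultancy, using the average daily fee mentioned above; the travel and living-expense figures are assumptions, not prescribed amounts, and every item would need to be adjusted to the actual assignment:

   Consultancy fee: 20 days x 750 DFL           15.000 DFL
   Travel, international and in-country          4.000 DFL (assumed)
   Living expenses: 20 days x 100 DFL            2.000 DFL (assumed)
   Communication                                   750 DFL
   Printing of the report                          600 DFL
   Subtotal                                     22.350 DFL
   Contingencies (5%)                    approx. 1.120 DFL
   Total                                approx. 23.470 DFL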


ANNEX 1  FORMAT END OF PROJECT (CYCLE) EVALUATION REPORT

Country: X                                   Project: (name and cost centre)
Name PC:
EoP evaluation report: From:            Till:

1. SUMMARY (3 pages max.):
- Short description of the project (purpose, target group, project period, budget, etc.)
- Purpose and approach of the evaluation
- Conclusions and recommendations

2. CONTEXT, NEEDS AND RESPONSE


- Short description of the project environment (based on the description of the preconditions and assumptions) and changes over time.

3. RESULTS/OUTCOME:
- Project Purpose (PP): describe the main objective of the project.
- PP indicator (= planned result): planned target group, quantity, quality, time and place.
- Current value of PP indicator: achieved results.
- Constraints/opportunities (observations PC): incl. lessons learned.

4. OUTPUTS/ACTIVITIES:
- Specific Objective (SO) 1:
- SO indicator:
- Current value of SO indicator(s):
- Constraints/opportunities: incl. lessons learned.

5. RESOURCES:
- HRM: # expat/national staff, % volunteers, constraints, opportunities, etc.
- Finances: costing exercise of actual expenses per budget line (transport, expat, local staff, etc.).
- Logistics: constraints, opportunities, lessons learned.
- General management issues: constraints, opportunities, lessons learned.

6. EVALUATION FRAMEWORK:
- Were the activities and outcomes appropriate?
- Did the project reach its purpose on the basis of the project outputs? Was it effective?
- Did the project have impact on the wider context and needs (overall objectives, context)?
- Were inputs used in the most efficient way to generate the outputs?
- Was the project implemented in a coherent way? How was the co-ordination?
- Connectedness: were long-term problems taken into account?
- Did the project have good coverage of the target population?

7. CONCLUSIONS:
- Status of the project.
- Main findings.
- Lessons learned.

8. RECOMMENDATIONS:
- Decisions/choices made.
- Adjustment of planning for the next project cycle.

9. ANNEXES:
- Terms of Reference
- Methodology, list of people interviewed
- Reference documents (top 10 most relevant documents)
- Annexes with more detailed information


ANNEX 2  THE TERMS OF REFERENCE

1. Title for the evaluation
Which country, project or policy will be evaluated and which period will be covered? Indicate the author, the date and the version. The TOR should be a maximum of 3 to 4 pages.

2. Responsibilities and lines of communication
Who initiated the evaluation? Who will be directly responsible within the line-management for the co-ordination? Who will do the evaluation and to whom will this person report? Who should be involved and what are their roles? Who is/are responsible for the follow-up?

3. Context and history
A short description of the context and a brief history of the project(s). An outline of the current project or program, its objectives and capacity. Refer to other documents, such as the annual plan, country policy, etc., for further information.

4. Purpose
What is the main motive for the evaluation: what are the decisions or choices that need to be made, what will the evaluation be used for and by whom? Is it a routine project cycle management evaluation, or an ad-hoc evaluation, e.g. to revise a policy or to study similar projects in various settings?

5. Scope
What main aspects (evaluation criteria) will be addressed in the evaluation and which aspects will receive less or no emphasis? What degree of detail is sought for the different aspects? Which time period, areas, projects or policies will be covered, and which not?

6. Key questions
Make a list of the evaluation criteria and formulate key questions to be addressed under each of them. See annex 3 for examples of frequently asked key questions.

7. Methodology
What information is needed to answer the questions? What methods can be used to collect and analyse it? Available information sources should be mentioned, as well as organisations and key resource persons to be consulted. (Literature references on the various methods may be added.)

8. Profile of the evaluator(s)
Describe the profile(s) of those who will do or facilitate the evaluation. Will it be an internal or an external person, or a combination? What expertise, skills and experience are required?

9. Planning
A time schedule for preparation, implementation, report writing and debriefing. The maximum number of days per phase should be indicated, as well as the deadline for the final report.

10. Reporting and debriefing
The desired size of the evaluation report, including a summary, can be defined. Debriefings and presentation of the findings to different audiences, field and HQ, can be determined.

11. Budget
Estimate the costs for travel, transport, evaluator(s), accommodation, communication and printing of the report. Include a contingency.


ANNEX 3  EXAMPLES OF EVALUATION KEY-QUESTIONS

Appropriateness:
- Which MSF policies apply and to what extent did they determine the design and implementation of the project/programme? Was the MSF policy itself appropriate?
- Was the intervention appropriate according to the perception (expressed needs/demand) of the target population and/or according to national policies: how were the power relations (including gender), cultural perceptions and relevant customs of beneficiaries assessed and taken into account?
- Were our intervention choices appropriately prioritised to meet the most urgent needs first?

Connectedness:
- Was a phasing-out strategy designed and achieved? What did it consist of?
- Which local capacities and resources were identified? How did the project connect with them?
- To what extent were local resources and coping mechanisms strengthened to take responsibility for the health of the beneficiaries after MSF leaves, and to restore their capacity to make their own choices?

Effectiveness:
- Was the project purpose, in terms of medical aid and/or advocacy, achieved?
- Were the activities carried out as originally planned?
- Were there any foreseen or unforeseen negative or positive side-effects?
- Did we make the right and timely adaptations in response to changes in the project environment?
- How do the achieved results compare against quality standards, as defined in internal guidelines or by international organisations (WHO, Sphere Project, etc.)?

Impact:
- To what extent have the overall objectives been achieved? Can a contribution to the changes in the health status of the target population be attributed to the project?
- Were there any changes in the underlying causes of the human rights violations against our target population? To what extent can these changes be attributed to our advocacy strategies?
- Did we contribute to the protection of victims of the conflict? Did the project have an impact towards stabilisation of the conflict?
- Did our presence have any unforeseen harmful impact?

Coherence:
- Which were MSF's partners while implementing this programme and what were their roles?
- Which other humanitarian actors were involved, directly or indirectly?
- How were the respective activities/roles co-ordinated? Were there any gaps or overlaps in services?

Coverage:
- To what extent did the project activities reach the specific target population?
- To what extent did beneficiaries have access to project services?
- Was anyone excluded from our services?

Efficiency:
- What were the financial and human resources (overview of funds over the project period divided into the general budget lines; overview of expats and national staff involved) in relation to the various outputs of the project (overview of the quantitative and qualitative outputs)?
- Were management guidelines followed? To what extent did they facilitate the achievement of the objectives?
- Could the activities or results have been achieved at lower cost? Could we have done more within the same budget? Were inputs and resources used to their maximum potential?
- Were human resources managed well (timely filling of vacancies, percentage of volunteers, etc.)?
- How did the logistics function (e.g. timely delivery of goods, transport, etc.)?

