
PART ONE CHAPTER 4. OBJECTIVES

The focus of monitoring and evaluation on relevance, performance and success is strategically linked to the objective of ensuring that UNDP-assisted programmes and projects produce sustainable results that benefit the target groups and the larger communities of which they are a part. Both functions contribute to the achievement of this objective by supporting decision-making, accountability, learning and capacity development.

Decision-making

Decision-making may be linked to interventions at the macro, meso and micro levels. Macro-level decisions relate to policies that cut across sectors and affect the overall development process. Decisions made at the meso and micro levels pertain to programmes and projects, respectively. UNDP monitoring and evaluation actions support decision-making at all three levels, e.g., policy and strategic evaluations at the macro level and monitoring and evaluation of programmes and projects, individually and in clusters, at the other two levels. However, many of these actions are currently concentrated at the meso and micro levels.

The data and information collected during monitoring and evaluations constitute a critical foundation for action by programme managers and stakeholders, who need to be able to identify evolving problems and decide on crucial strategies, corrective measures, and revisions to plans and resource allocations pertaining to the activities in question. Even after the completion of a programme or project, monitoring and evaluation can contribute significantly to decision-making. For instance, terminal reports, considered to be part of the monitoring function, can contain recommendations for follow-up activities. Post-programme or post-project monitoring can lead to the recommendation of measures to improve the sustainability of results produced by the programme or project.

Accountability

Monitoring and evaluation provide critical assessments that demonstrate whether or not programmes or projects satisfy target group needs and priorities. They help to establish substantive accountability by generating answers to questions such as:

- What is the impact of the programme or project on the target groups and the broader development context?
- Are the required mechanisms in place to sustain the benefits in a dynamic, strategic way?

As for the question "Who is accountable?", monitoring and evaluation must be used to support accountability at different management levels within UNDP, i.e., the accountability of resident representatives, Senior Management at headquarters, the Administrator and the Executive Board.

Learning

The learning derived from monitoring and evaluation can improve the overall quality of ongoing and future programmes and projects. This is particularly significant when one considers UNDP support for innovative, cutting-edge programmes and projects with all the attendant risks and uncertainties. The learning that occurs through monitoring applies particularly to ongoing programmes or projects. Mistakes are made and insights are gained in the course of programme or project implementation. Effective monitoring can detect early signs of potential problems as well as areas of success. Programme or project managers must act on the findings, applying the lessons learned to modify the programme or project. This learning by doing serves the immediate needs of the programme or project, but it can also provide feedback for future programming.

On the other hand, the learning that results from terminal and ex-post evaluations is particularly relevant to future programmes and projects. In such cases, it can be more definitive, especially if evaluations are conducted for clusters of projects or programmes from which lessons can be extracted for broader application. The lessons, which may apply to a given sector, theme or geographical area, such as a country or region, can, of course, be adapted or replicated depending on the context. Learning from monitoring and evaluation must be incorporated into the overall programme or project management cycle through an appropriate feedback system (see chapters five and 15) and support decision-making at various levels, as described above.

Capacity Development

Monitoring and evaluation must contribute to the UNDP mission to achieve SHD by assisting programme countries to develop their capacity to manage development. Improving the decision-making process, ensuring accountability to target groups or stakeholders in general, and maximizing the benefits offered by learning from experience can all contribute to strengthening capacities at the national, local and grass-roots levels, including, in particular, the capacities for monitoring and evaluation. National execution, the current modality for UNDP-assisted programmes and projects, implies a corresponding shift to a bipartite mechanism for monitoring and evaluation, with the programme country Government and UNDP as major partners. UNDP monitoring and evaluation activities can serve as entry points for assisting Governments to strengthen their monitoring and evaluation capacities, since Governments bear primary responsibility for monitoring and evaluating their programmes and projects.

PART ONE CHAPTER 5. MONITORING AND EVALUATION AND THE PROGRAMME/PROJECT CYCLE

Monitoring and evaluation are integral parts of the programme/project management cycle. On the one hand, monitoring and evaluation are effective tools for enriching the quality of interventions through their role in decision-making and learning. On the other hand, the quality of project design (e.g., clarity of objectives, establishment of indicators) can affect the quality of monitoring and evaluation. Furthermore, the experience gained from implementation can contribute to the continuing refinement of monitoring and evaluation methodologies and instruments.

To maximize the benefits of monitoring and evaluation, the recommendations and lessons learned from those functions must be incorporated into the various phases of the programme or project cycle.

PRE-FORMULATION: SEARCHING FOR LESSONS LEARNED

At the identification and conceptualization stages of a programme or project, the people responsible for its design must make a thorough search of lessons learned from previous or ongoing UNDP-assisted programmes and projects and from the field of development cooperation at large. A wide variety of sources of information are available in UNDP, other donor institutions, government offices and elsewhere. Those sources take the form of printed material, electronic media such as the Internet and computerized databases (see chapter 15). Databases such as the UNDP CEDAB and the OECD/DAC database facilitate the search for relevant lessons extracted from evaluation reports, since the lessons can be sorted using multiple criteria (e.g., sector, country, region).

FORMULATION: INCORPORATING LESSONS LEARNED AND PREPARING A MONITORING AND EVALUATION PLAN

Relevant lessons learned from experience with other programmes and projects must be incorporated in the design of a new programme or project. A monitoring and evaluation plan must also be prepared as an integral part of the programme or project design. Those responsible for programme or project design must:

- construct baseline data describing the problems to be addressed;
- clarify programme or project objectives;
- set specific programme or project targets in accordance with the objectives;
- establish consensus among stakeholders on the specific indicators to be used for monitoring and evaluation purposes;
- define the types and sources of data needed and the methods of data collection and analysis required based on the indicators;
- reach agreement on how the information generated will be used;
- specify the format, frequency and distribution of reports;
- establish the monitoring and evaluation schedule;
- assign responsibilities for monitoring and evaluation;
- provide an adequate budget for monitoring and evaluation.

NOTE:

A monitoring and evaluation plan is not intended to be rigid or fixed from the outset; rather, it should be subject to continuous review and adjustment as required owing to changes in the programme or project itself.

BOX 3. MONITORING AND EVALUATION PLANNING FRAMEWORK

- Construct baseline data on problems to be addressed.
- Clarify programme or project objectives and set specific targets.
- Establish stakeholders' consensus on indicators.
- Define data collection process requirements and usage.
- Agree on the generation and utilization of information.
- Specify reporting requirements (format, frequency, distribution).
- Establish monitoring and evaluation schedule.
- Assign monitoring and evaluation responsibilities.
- Provide adequate budget for monitoring and evaluation.
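The planning framework in Box 3 can be recorded in a structured form so that the plan remains easy to review and adjust as the programme or project changes. The sketch below is illustrative only and is not part of the UNDP guidance; the field names and sample values are hypothetical, written here in Python.

```python
# Illustrative sketch only: one way to record the Box 3 planning framework
# as structured data. All field names and values are hypothetical.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class MonitoringEvaluationPlan:
    baseline: Dict[str, float]        # baseline data on the problems addressed
    objectives: List[str]             # clarified programme/project objectives
    targets: Dict[str, float]         # specific targets tied to the objectives
    indicators: List[str]             # indicators agreed with stakeholders
    data_sources: Dict[str, str]      # indicator -> source and collection method
    reporting: Dict[str, str]         # format, frequency, distribution
    schedule: Dict[str, str]          # M&E activity -> timing
    responsibilities: Dict[str, str]  # M&E activity -> responsible party
    budget: float                     # budget reserved for monitoring and evaluation
    revisions: List[str] = field(default_factory=list)  # the plan is reviewed, not fixed

# Hypothetical example of use:
plan = MonitoringEvaluationPlan(
    baseline={"primary_dropout_rate": 0.18},
    objectives=["Reduce primary school dropout"],
    targets={"primary_dropout_rate": 0.10},
    indicators=["primary_dropout_rate"],
    data_sources={"primary_dropout_rate": "district education records, annual"},
    reporting={"format": "progress report", "frequency": "quarterly", "distribution": "stakeholders"},
    schedule={"mid-term review": "month 18"},
    responsibilities={"data collection": "district sub-project team"},
    budget=25_000.0,
)
```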

The appraisal and approval of programmes and projects must ensure that appropriate lessons and a monitoring and evaluation plan are incorporated in the programme or project design.

IMPLEMENTATION: MONITORING AND EVALUATION AS SUPPORT TO DECISION-MAKING AND LEARNING

As noted earlier, since monitoring is an ongoing process, it can reveal early signs of problems in implementation. This information can serve as a basis for corrective actions to ensure the fulfilment of programme or project objectives. Areas of success can also be revealed through monitoring, enabling their reinforcement. The contribution made by both monitoring and evaluation to lessons learned was also noted earlier. Thus, programme managers and other stakeholders must make certain that a learning culture is maintained throughout the implementation of a programme or project. Such a culture should motivate those involved in programme or project management to learn from their experience and apply those lessons to the improvement of the programme or project. Learning can be enhanced through participatory mechanisms that enable the various stakeholders to share their views and provide feedback when and where it is needed (see chapters nine and 15).

PROGRAMME OR PROJECT COMPLETION: DISSEMINATION OF LESSONS LEARNED

Upon termination of a programme or project, stakeholders as a group must take stock of the experience that has been gained: successes and failures, best and worst practices, future challenges and constraints. Special emphasis should be placed on identifying the lessons that have the potential for wider application, determining which particular user groups could benefit most from such lessons, and ascertaining the best way to disseminate the lessons to the target groups (see chapter 15).

PART ONE CHAPTER 6. CONSTRAINTS AND CHALLENGES

Certain conceptual and methodological constraints and challenges are associated with the monitoring and evaluation functions. Effective monitoring and evaluation can be achieved only through a careful, pragmatic approach to addressing these limitations.

DEPENDENCE ON CLARITY OF OBJECTIVES AND AVAILABILITY OF INDICATORS

Monitoring and evaluation are of little value if a programme or project does not have clearly defined objectives and appropriate indicators of relevance, performance and success. Any assessment of a programme or project, whether through monitoring or evaluation, must be made vis-à-vis the objectives, i.e., what the interventions aim to achieve. Indicators are the critical link between the objectives (which are stated as results to be achieved) and the types of data that need to be collected and analysed through monitoring and evaluation. Hence, lack of clarity in stating the objectives and the absence of clear key indicators will limit the ability of monitoring and evaluation to provide critical assessments for decision-making, accountability and learning purposes.

TIME CONSTRAINTS AND THE QUALITY OF MONITORING AND EVALUATION

Accurate, adequate information must be generated within a limited time frame. This may not be a very difficult task in the case of monitoring actions, since programme or project managers should be able to obtain or verify information as necessary. However, the challenge is greater for UNDP evaluation missions conducted by external consultants. The average duration of such missions is three weeks; however, this should not be considered the norm. UNDP country offices, in consultation with the Government and UNDP units at headquarters (i.e., regional bureaux and OESP), should have the flexibility to establish realistic timetables for these missions, depending on the nature of the evaluations. Budgetary provisions must be made accordingly.

OBJECTIVITY AND INDEPENDENCE OF EVALUATORS AND THEIR FINDINGS

No evaluator can be entirely objective in his or her assessment. It is only natural that even external evaluators (i.e., those hired from outside the Government or UNDP) could have their own biases or preconceptions. The composition of the evaluation team is therefore important in ensuring a balance of views. It is also crucial that evaluators make a distinction between facts and opinions. External evaluators must seek clarification from the Government or other concerned parties on matters where there are seeming inconsistencies to ensure the accuracy of the information. This applies particularly to understanding the cultural context of the issues at hand. In cases where opinions diverge, the external evaluators must be willing to consider the views of others in arriving at their own assessments.

LEARNING OR CONTROL?

Traditionally, monitoring and evaluation have been perceived as forms of control, mainly because their objectives were not clearly articulated and understood. Thus, the learning aspect of monitoring and evaluation needs to be stressed along with the role that these functions play in decision-making and accountability. In the context of UNDP, the contribution of learning to the building of government capacity to manage development should be emphasized.

FEEDBACK FROM MONITORING AND EVALUATION

Monitoring and evaluation can provide a wealth of knowledge derived from experience with development cooperation in general and specific programmes and projects in particular. It is critical that relevant lessons be made available to the appropriate parties at the proper time. Without good feedback, monitoring and evaluation cannot serve their purposes. In particular, emphasis must be given to drawing lessons that have the potential for broader application, i.e., those that are useful not only to a particular programme or project but also to related interventions in a sector, thematic area or geographical location (see chapter nine).

RESPONSIBILITIES AND CAPACITIES

Governments usually must respond to a variety of monitoring and evaluation requirements from many donors, including UNDP. This situation is being partially addressed through the harmonization efforts of United Nations agencies, specifically those that are members of the Joint Consultative Group on Policy (JCGP). Within the context of national execution in particular, there should be only one monitoring and evaluation system, namely, the national monitoring and evaluation system of the Government. The UNDP monitoring and evaluation system and those of other donors should be built upon that national system to eliminate duplication and reduce the burden on all parties concerned. Not all governments, however, may have the full capacity to carry out the responsibilities for monitoring and evaluation adequately. In such cases, UNDP should assist the governments to strengthen their monitoring and evaluation capacities.

Monitoring and Evaluation: What Can They Do for Me?

Excerpts from: Bamberger, Michael and Hewitt, Eleanor, Monitoring and Evaluating Urban Development Programs: A Handbook for Program Managers and Researchers, World Bank Technical Paper No. 53. Washington, D.C.: World Bank, 1986.

Definitions

Monitoring: This type of evaluation is performed while a project is being implemented, with the aim of improving the project design and functioning while in action. An example given in the handbook describes a monitoring study that, by way of a rapid survey, was able to determine that the amount of credit in a micro-credit scheme for artisans in Brazil was too small.

The potential beneficiaries were not participating due to the inadequacy of the loan size for their needs. This information was then used to make some important changes in the project. Bamberger defines monitoring as "an internal project activity designed to provide constant feedback on the progress of a project, the problems it is facing, and the efficiency with which it is being implemented" (Bamberger 1).

Evaluation: An evaluation studies the outcome of a project (changes in income, housing quality, benefits distribution, cost-effectiveness, etc.) with the aim of informing the design of future projects. An example from the same handbook describes an evaluation of a cooperative program in El Salvador that determined that the cooperatives improved the lives of the few families involved but did not have a major impact on overall employment. Bamberger describes evaluation as mainly used "to help in the selection and design of future projects. Evaluation studies can assess the extent to which the project produced the intended impacts (increases in income, better housing quality, etc.) and the distribution of the benefits between different groups, and can evaluate the cost-effectiveness of the project as compared with other options" (Bamberger 1).

Monitoring and evaluation need not be expensive or complicated, nor do they require specialists or grand calculations. The complexity and extent of the studies can be adapted to fit the program needs. The job of the project manager in this process is to point out those areas in need of monitoring or evaluation. If this is left to the researchers, the studies may tend to be too academic and not as useful to project management. Evaluation and monitoring systems can be an effective way to:

- Provide constant feedback on the extent to which the projects are achieving their goals.
- Identify potential problems at an early stage and propose possible solutions.
- Monitor the accessibility of the project to all sectors of the target population.
- Monitor the efficiency with which the different components of the project are being implemented and suggest improvements.
- Evaluate the extent to which the project is able to achieve its general objectives.
- Provide guidelines for the planning of future projects (Bamberger 4).
- Influence sector assistance strategy. Relevant analysis from project and policy evaluation can highlight the outcomes of previous interventions, and the strengths and weaknesses of their implementation.
- Improve project design. Use of project design tools such as the logframe (logical framework) results in systematic selection of indicators for monitoring project performance. The process of selecting indicators for monitoring is a test of the soundness of project objectives and can lead to improvements in project design.
- Incorporate views of stakeholders. Awareness is growing that participation by project beneficiaries in design and implementation brings greater ownership of project objectives and encourages the sustainability of project benefits. Ownership brings accountability. Objectives should be set and indicators selected in consultation with stakeholders, so that objectives and targets are jointly owned. The emergence of recorded benefits early on helps reinforce ownership, and early warning of emerging problems allows action to be taken before costs rise.
- Show need for mid-course corrections. A reliable flow of information during implementation enables managers to keep track of progress and adjust operations to take account of experience (OED).

Framework for Project Monitoring and Evaluation

Figure 1-1 is a framework for project monitoring and evaluation from the World Bank technical paper Monitoring and Evaluating Urban Development Programs: A Handbook for Program Managers and Researchers. It breaks down the process into several levels of evaluation.

Evaluation Outputs and the Project Cycle

Table 1-1 explains the various types of evaluation and when they are performed (Monitoring and Evaluating Urban Development Programs: A Handbook for Program Managers and Researchers).

Good Design Has Five Components

Excerpted pieces from: Operations Evaluation Department, Designing Project Monitoring and Evaluation, Lessons and Practices No. 8, 6/1/96 (emphasis added).

Good monitoring and evaluation design during project preparation is a much broader exercise than just the development of indicators. Good design has five components:

1. Clear statements of measurable objectives for the project and its components, for which indicators can be defined.
2. A structured set of indicators, covering outputs of goods and services generated by the project and their impact on beneficiaries.
3. Provisions for collecting data and managing project records so that the data required for indicators are compatible with existing statistics, and are available at reasonable cost.
4. Institutional arrangements for gathering, analyzing, and reporting project data, and for investing in capacity building, to sustain the M&E service.
5. Proposals for the ways in which M&E findings will be fed back into decision making.

Examples

1. Project objectives

Projects are designed to further long-term sectoral goals, but their immediate objectives, at least, should be readily measurable. Thus, for example, a health project might be designed to further the sectoral goals of a reduction in child mortality and incidence of infectious diseases, but have an immediate, measurable objective of providing more equitable access to health services. Objectives should be specific to the project interventions, realistic in the timeframe for their implementation, and measurable for evaluation. India's District Primary Education Project, for example, set out its objectives at the district level in clear statements linked directly to indicators:

- Capacity building: District sub-project teams would be fully functional, implementing sub-project activities and reporting quarterly on progress. In-service teams would be functioning, with augmented staff and equipment, providing support for planning and management, teacher in-service training, development of learning materials, and program evaluation.
- Reducing dropout and improving learning achievement: School/community organizations would be fully functional for at least half the schools, and dropout rates would be reduced to less than 10 percent. Learning achievements in language and mathematics in the final year of primary school would be increased by 25 percent over baseline estimates.
- Improving equitable access: Enrollment disparities by gender and caste would be reduced to less than 5 percent.
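Because the District Primary Education Project states its targets in measurable terms, progress can be checked mechanically against them. The sketch below is illustrative only: the targets echo the statements above, while the measured values and the code itself are hypothetical and not drawn from the project's actual reporting system.

```python
# Illustrative only: comparing measurable objectives against monitoring data.
# Targets follow the DPEP statements above; measured values are hypothetical.
baseline_learning_score = 40.0  # hypothetical baseline estimate

targets = {
    "dropout_rate": ("<", 0.10),                                # less than 10 percent
    "learning_score": (">=", baseline_learning_score * 1.25),   # +25% over baseline
    "enrollment_disparity": ("<", 0.05),                        # less than 5 percent
}

measured = {  # hypothetical mid-term monitoring values
    "dropout_rate": 0.12,
    "learning_score": 47.0,
    "enrollment_disparity": 0.04,
}

for name, (op, threshold) in targets.items():
    value = measured[name]
    met = value < threshold if op == "<" else value >= threshold
    status = "on track" if met else "needs attention"
    print(f"{name}: {value} (target {op} {threshold:.2f}) -> {status}")
```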

2. Indicators

Input indicators are quantified and time-bound statements of resources to be provided. Information on these indicators comes largely from accounting and management records. Input indicators are often left out of discussions of project monitoring, though they are part of the management information system. A good accounting system is needed to keep track of expenditures and provide cost data for performance analysis of outputs. Input indicators are used mainly by managers closest to the tasks of implementation, and are consulted frequently, as often as daily or weekly. Examples: vehicle operating costs for the crop extension service; levels of financial contributions from the government or cofinanciers; appointment of staff; provision of buildings; status of enabling legislation.

Process indicators measure what happens during implementation. Often, they are tabulated as a set of contracted completions or milestone events taken from an activity plan. Examples: date by which building site clearance must be completed; latest date for delivery of fertilizer to farm stores; number of health outlets reporting family planning activity; number of women receiving contraceptive counseling; status of procurement of school textbooks.

Output indicators show the immediate physical and financial outputs of the project: physical quantities, organizational strengthening, initial flows of services. They include performance measures based on cost or operational ratios. Examples: kilometers of all-weather highway completed by the end of September; percentage of farmers attending a crop demonstration site before fertilizer topdressing; number of teachers trained in textbook use; cost per kilometer of road construction; crop yield per hectare; ratio of textbooks to pupils; time taken to process a credit application; number of demonstrations managed per extension worker; steps in the process of establishing water users' associations.

Impact indicators refer to medium- or long-term developmental change. (Some writers also refer to a further class of outcome indicators, more specific to project activities than impact indicators, which may be sectoral statistics, and deal more with the direct effect of project outputs on beneficiaries.) Measures of change often involve complex statistics about economic or social welfare and depend on data that are gathered from beneficiaries. Early indications of impact may be obtained by surveying beneficiaries' perceptions about project services. This type of leading indicator has the twin benefits of consultation with stakeholders and advance warning of problems that might arise. Examples of impact: (health) incidence of low birth weight, percentage of women who are moderately or severely anemic; (education) continuation rates from primary to secondary education by sex, proportion of girls completing secondary education; (forestry) percent decrease in area harvested, percent increase in household income through sales of wood and non-wood products. Examples of beneficiary perceptions: proportion of farmers who have tried a new variety of seed and intend to use it again; percentage of women satisfied with the maternity health care they receive.
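One way to make component 2, a structured set of indicators spanning the input, process, output and impact levels, concrete is to record each indicator with its level, unit, reporting frequency and main user. The sketch below is illustrative only; the structure and field names are hypothetical rather than prescribed by the OED guidance, and the entries reuse examples from the text.

```python
# Illustrative only: a minimal structure for a typed indicator set.
# The field names and example entries are hypothetical.
from dataclasses import dataclass

@dataclass
class Indicator:
    name: str
    level: str        # "input", "process", "output", or "impact"
    unit: str
    frequency: str    # how often it is consulted or reported
    used_by: str      # the management level that mainly relies on it

indicator_set = [
    Indicator("vehicle operating costs, extension service", "input", "USD/month", "weekly", "field managers"),
    Indicator("health outlets reporting family planning activity", "process", "count", "monthly", "project unit"),
    Indicator("cost per kilometer of road construction", "output", "USD/km", "quarterly", "project unit"),
    Indicator("continuation rate, primary to secondary, girls", "impact", "percent", "annual", "senior management"),
]

# Simple check that every level in the hierarchy is represented.
levels = {"input", "process", "output", "impact"}
missing = levels - {i.level for i in indicator_set}
if missing:
    print(f"warning: no indicators defined at level(s): {', '.join(sorted(missing))}")
```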

3. Collecting Data and Managing Project Records

The achievement of project objectives normally depends on how project beneficiaries respond to the goods or services delivered by the project. Evidence of their response and the benefits they derive requires consultation and data collection that may be outside the scope of management. It is important to identify how beneficiaries are expected to respond to project services, because managers will need evidence of that response if they are to modify their activities and strategy. Indications that beneficiaries have access to, are using, and are satisfied with project services give an early indication that the project is offering relevant services and that direct objectives are likely to be met. Such evidence (market research) may be available sooner and more easily than statistics of impact such as changes in health status or improvements in income. Market research information is an example of a leading indicator of beneficiary perceptions that can act as a proxy for later, substantive impact. Other leading indicators can be identified to give early warning about key assumptions that affect impact. Examples would include price levels used for economic analysis, passenger load factors in transport projects, and adoption of healthcare practices.

When planning the information needs of a project, there is a difference between the detail needed for day-to-day management by the implementing agency or, later, for impact evaluation, and the limited number of key indicators needed to summarize overall progress in reports to higher management levels. For example, during construction of village tubewells, project managers will need to keep records about the materials purchased and consumed, the labor force employed and their contracting details, the specific screen and pump fitted, the depth at which water was found, and the flow rate. The key indicators, however, might be just the number of wells successfully completed and their average costs and flow rates.

Exogenous indicators are those that cover factors outside the control of the project but which might affect its outcome, including risks (parameters identified during economic, social, or technical analysis that might compromise project benefits) and the performance of the sector in which the project operates. The need to monitor both the project and its wider environment calls for a data collection capacity outside the project and places an additional burden on the project's M&E effort. A recent example of a grain storage project in Myanmar demonstrates the importance of monitoring risk indicators. During project implementation, policy decisions about currency exchange rates and direct access by privately owned rice mills to overseas buyers adversely affected the profitability of private mills. Management would have been alerted to the deteriorating situation had these indicators of the enabling environment been carefully monitored. Instead, a narrow focus on input and process indicators missed the fundamental change in the assumptions behind the project.
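The tubewell example above separates detailed field records from the few key indicators reported upward. A minimal sketch of that aggregation step follows; the record fields and figures are hypothetical and serve only to show how field records might be condensed into key indicators.

```python
# Illustrative only: summarizing detailed field records into a few key
# indicators (wells completed, average cost, average flow rate).
# All field names and figures are hypothetical.
field_records = [
    {"village": "A", "completed": True,  "cost": 1200.0, "flow_rate_lpm": 35.0},
    {"village": "A", "completed": True,  "cost": 1350.0, "flow_rate_lpm": 28.0},
    {"village": "B", "completed": False, "cost": 900.0,  "flow_rate_lpm": 0.0},
    {"village": "B", "completed": True,  "cost": 1100.0, "flow_rate_lpm": 31.0},
]

completed = [r for r in field_records if r["completed"]]
key_indicators = {
    "wells_completed": len(completed),
    "average_cost": sum(r["cost"] for r in completed) / len(completed),
    "average_flow_rate_lpm": sum(r["flow_rate_lpm"] for r in completed) / len(completed),
}
print(key_indicators)
```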

The relative importance of indicators is likely to change during the implementation of a project, with more emphasis on input and process indicators at first, shifting to outputs and impact later on. This is a distinction between indicators of implementation progress and indicators of development results.

Data collection

Project field records. Indicators of inputs and processes will come from project management records originating from field sites. The quality of record keeping in the field sets the standard for all further use of the data and merits careful attention. M&E designers should examine existing record-keeping and the reporting procedures used by the project authorities to assess the capacity to generate the data that will be needed. At the same time, they should explain how and why the indicators will be useful to field, intermediate, and senior levels of project management. The design of field records about, say, farmers in extension groups, people attending a clinic, or villagers using a new water supply, will affect the scope for analysis later. The inclusion of simple socioeconomic characteristics such as age and sex may significantly improve the scope for analysis. A good approach is to structure reporting from the field so that aggregates or summaries are made at intermediate stages. In this way, field staff can see how averages or totals for specific villages or districts enable comparisons to be drawn and fieldwork improved.

Surveys and studies. Measuring output and impact may require the collection of data from sample surveys or special studies (including, where appropriate, participatory methods). Studies to investigate specific topics may call for staff skills and training beyond those needed for regular collection of data to create a time series. Where there is a choice, it is usually better to piggyback project-specific regular surveys on to existing national or internationally supported surveys than to create a new data collection facility. Special studies may be more manageable by a project unit directly, or may be subcontracted to a university or consultants. If the special studies are to make comparisons with data from other surveys, it is vital that the same methods be used for data collection (see below). In the project plan, proposals to collect data for studies should include a discussion of: the objectives of the study or survey; the source of data; the choices and proposed method of collection; and the likely reliability of the data.

Data comparability. Some desired indicators of impact, such as mortality rates, school attendance, or household income attributable to a project, may involve comparisons with the situation before the project, or in areas not covered by the project. Such comparisons may depend on the maintenance of national systems of vital statistics, or on national surveys. Before data from such sources are chosen as indicators of project impact, the designer needs to confirm that the data systems are in place and reliable, and that the data are valid for the administrative area in question and for any control areas.

Potential problems in making comparisons with existing data include incomplete coverage of the specific project area; the use of different methods to collect data, such as interviewing household members in one survey and only household heads in another; and changes in techniques, such as measuring crop output in one survey and collecting farmers' estimates in another. Problems such as these can invalidate any comparison intended to show changing performance. To give the comparability needed for evaluation, study proposals should explain and justify the proposed approach and ensure consistency in methods. The complexity of the statistics and the problems of attributing causality mean that it is often more appropriate to use the delivery of services and beneficiary response as proxy indicators than to attempt to measure impact. Participatory methods of data collection can bring new insights into people's needs for project planning and implementation, but are no less demanding on skills than questionnaire surveys. They are time-consuming and require substantial talent in communication and negotiation between planners and participants.

4. Institutional arrangements; capacity building

Good M&E should develop the capacity of the borrower and build on existing systems. Capacity building is widely acknowledged to be important but is often poorly defined. It means: upgrading skills in monitoring and evaluation, which include project analysis, design of indicators and reporting systems, socioeconomic data collection, and information management; improving procedures, to create functional systems that seek out and use information for decisions; and strengthening organizations to develop skilled staff in appropriate positions, accountable for their actions.

5. How Monitoring and Evaluation Findings Can Be Fed Back into Decision Making

In projects where operating performance standards are quoted as an objective, or where decentralized processes call for localized capacity to plan and manage work programs and budgets, designers will need to describe how and when M&E findings will be used to shape work plans and contribute to program or policy development. In Mexico, for example, the Second Decentralization and Regional Development Project plans to incorporate monitoring of implementation into its regular management procedures. Annual plans are to be prepared for each component, including an element of institutional development, and these will form the basis of annual monitoring. The analysis of implementation will depend on the functioning of a central database about sub-projects, created in each state from standardized data sheets. The database will produce the reports required for the project approval procedures, giving field staff an incentive to use the system. Results from the implementation database will be analyzed in order to target field reviews and a mid-term review.

The project has no specific monitoring and evaluation unit. Instead, each management sub-unit responsible for technical oversight of a component is responsible for ensuring the quality and timeliness of data collection, and for producing and analyzing reports. These reports will be presented by project component and be used to help diagnose technical and institutional implementation issues, propose and conduct studies, and plan institutional development and training.

Experience with Implementation

Even with a good design for M&E, the Bank's experience shows that success during implementation depends heavily on a sense of ownership by the borrower, adequate capacity in borrower institutions, and sustained interest from the task and project managers throughout the life of the project. Two factors are important here. One is that the borrower's sense of ownership of the project provides a stimulus to transparent management and good information about progress. The other is that borrowers often doubt the value of adopting what may be costly and time-consuming procedures to collect, analyze, and report information. In such circumstances sound design is especially important, with monitoring information providing a clear input to management decision making and, often, an emphasis on the early gains to be had from monitoring and on institutional procedures that encourage the use of monitoring data to trigger further implementation decisions.
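As a closing illustration of component 5, feeding M&E findings back into decision making along the lines of the Mexico example, the sketch below rolls up hypothetical sub-project records into a short report by component and flags lagging components for a field review. The component names, fields and threshold are invented for illustration and are not taken from the project itself.

```python
# Illustrative only: rolling up hypothetical sub-project records into a
# per-component progress report, flagging components for a field review.
from collections import defaultdict

subprojects = [  # hypothetical standardized data-sheet entries
    {"component": "rural roads",  "planned": 10, "completed": 4},
    {"component": "rural roads",  "planned": 8,  "completed": 7},
    {"component": "water supply", "planned": 12, "completed": 11},
    {"component": "water supply", "planned": 6,  "completed": 5},
]

totals = defaultdict(lambda: {"planned": 0, "completed": 0})
for sp in subprojects:
    totals[sp["component"]]["planned"] += sp["planned"]
    totals[sp["component"]]["completed"] += sp["completed"]

REVIEW_THRESHOLD = 0.75  # hypothetical cut-off for triggering a field review
for component, t in totals.items():
    progress = t["completed"] / t["planned"]
    flag = "schedule field review" if progress < REVIEW_THRESHOLD else "on track"
    print(f"{component}: {t['completed']}/{t['planned']} sub-projects ({progress:.0%}) -> {flag}")
```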

Suggested reading:

- Appleton, Simon, "Problems in Measuring Changes in Poverty over Time," IDS Bulletin, Vol. 27, No. 1, 1996. Brighton, UK: Institute of Development Studies.
- Casley, Dennis J. and Krishna Kumar, Project Monitoring and Evaluation in Agriculture. Washington, D.C.: World Bank, 1987.
- Operations Evaluation Department, World Bank, Building Evaluation Capacity, Lessons & Practices No. 4, November 1994.
- Operations Evaluation Department, World Bank, Monitoring and Evaluation Plans in Staff Appraisal Reports in Fiscal Year 1995, Report No. 15222, December 1995.
