Definitions of Evaluation
This definition is hardly perfect. There are many types of evaluations that do
not necessarily result in an assessment of worth or merit -- descriptive
studies, implementation analyses, and formative evaluations, to name a few.
Better perhaps is a definition that emphasizes the information-processing and
feedback functions of evaluation. For instance, one might say:
Both definitions agree that evaluation is a systematic endeavor and both use
the deliberately ambiguous term 'object' which could refer to a program,
policy, technology, person, need, activity, and so on. The latter definition
emphasizes acquiring and assessing information rather than assessing worth
or merit because all evaluation work involves collecting and sifting through
data, making judgements about the validity of the information and of
inferences we derive from it, whether or not an assessment of worth or merit
results.
Evaluation Strategies
Types of Evaluation
There are many different types of evaluations depending on the object being
evaluated and the purpose of the evaluation. Perhaps the most important
basic distinction in evaluation types is that between formative and
summative evaluation. Formative evaluations strengthen or improve the
object being evaluated -- they help form it by examining the delivery of the
program or technology, the quality of its implementation, and the assessment
of the organizational context, personnel, procedures, inputs, and so on.
Summative evaluations, in contrast, examine the effects or outcomes of some
object -- they summarize it by describing what happens subsequent to
delivery of the program or technology; assessing whether the object can be
said to have caused the outcome; determining the overall impact of the causal
factor beyond only the immediate target outcomes; and, estimating the
relative costs associated with the object.
• needs assessment determines who needs the program, how great the
need is, and what might work to meet the need
• evaluability assessment determines whether an evaluation is feasible
and how stakeholders can help shape its usefulness
• structured conceptualization helps stakeholders define the program
or technology, the target population, and the possible outcomes
• implementation evaluation monitors the fidelity of the program or
technology delivery
• process evaluation investigates the process of delivering the program
or technology, including alternative delivery procedures
Evaluators ask many different kinds of questions and use a variety of methods
to address them. These are considered within the framework of formative and
summative evaluation as presented above.
What is the definition and scope of the problem or issue, or what's the
question?
The most common method used here is "needs assessment" which can
include: analysis of existing data sources, and the use of sample surveys,
interviews of constituent populations, qualitative research, expert testimony,
and focus groups.
What is Monitoring?

Monitoring is the regular observation and recording of activities taking place in a project or program. It is a process of routinely gathering information on all aspects of the project. To monitor is to check on how project activities are progressing; it is systematic and purposeful observation.
..
Monitoring also involves giving feedback Reporting enables the gathered
about the progress of the project to the information to be used in making
donors, implementers and beneficiaries of
. decisions for improving project
the project. performance.
..
Purpose of Monitoring:

Monitoring is very important in project planning and implementation. It is like watching where you are going while riding a bicycle; you can adjust as you go along and ensure that you are on the right track.
Monitoring provides information that will be useful in:
• Determining whether the way the project was planned is the most appropriate
• Project evaluation.
Monitoring should be executed by all individuals and institutions which have an interest (stake holders) in the project. To efficiently implement a project, the people planning and implementing it should plan for all the interrelated stages from the beginning.
In the "Handbook for Mobilizers," we said the key questions of planning and management were: (1) What do we want? (2) What do we have? (3) How do we use what we have to get what we want? and (4) What will happen when we do? They can be modified, using "where" instead of "what," while the principles are the same.
The questions become:
Where are we?
Where do we want to go?
• Population characteristics (eg sex, age, tribe, religion and family sizes);
• Political and administrative structures (eg community committees and local
councils);
• Economic activities (including agriculture, trade and fishing);
• Cultural traditions (eg inheritance and the clan system), transitions
(eg marriages, funeral rites), and rites of passage (eg circumcision);
• On-going projects like those of sub-county, district, central Government,
non Governmental organizations (NGOs), and community based organizations
(CBOs);
• Socio-economic infrastructure or communal facilities,
(eg schools, health units, and access roads); and
The Meaning of Evaluation:

Evaluation is a process of judging value on what a project or program has achieved, particularly in relation to activities planned and overall objectives. It involves value judgment and hence is different from monitoring (which is observation and reporting of observations).
Purpose of Evaluation:

Evaluation is important to identify the constraints or bottlenecks that hinder the project in achieving its objectives. Solutions to the constraints can then be identified and implemented. Evaluation also means assessing the benefits and costs that accrue to the intended direct and indirect beneficiaries of the project. If the project implemented is, for example, the protection of a spring, evaluation highlights the people who fetch and use water and the people whose land is wasted and whose crops are destroyed during the process of water collection.
Other purposes include drawing lessons from the project implementation experience and using the lessons in the re-planning of projects in that community and elsewhere; and providing a clear picture of the extent to which the intended objectives of the activities and project have been realized.
The Process of Evaluation:

Evaluation can and should be done: (a) before, (b) during, and (c) after implementation. Before project implementation, evaluation is needed in order to:
• Assess the possible consequences of the planned project(s) to the people in the community over a period of time;
• Make a final decision on what project alternative should be implemented; and
Management Information and Information Management

Management information and information management are different; management information is a kind of information (the data); information management is a kind of management (the system). Information management is the process of analyzing and using information which has been collected and stored in order to enable managers (at all levels) to make informed decisions. Management information is the information needed in order to make management decisions.
Monitoring provides information about what is going on in the project. This information is collected during the planning and implementation phases. The information helps to detect if anything is going wrong in the project. Management can therefore find solutions to ensure success.
All stake holders have a stake in knowing how well things are going. Monitoring is a vital management and implementation role that cannot be left to only one stake holder. As many individuals and institutions as possible that have any interest in the project, at all levels, should participate in monitoring.
As with community participation and participatory management, participation in monitoring does not happen spontaneously. The persons whom you want to participate must be encouraged and trained to participate.
Advantages of Participation:

The advantages of participation in monitoring include: (a) a common understanding, (b) enhanced accountability, (c) better decisions, (d) performance improvement, (e) improved design, and (f) more information.
Common Understanding of Problems and Identification of Solutions: Participative monitoring helps stake holders to get a shared understanding of the problems facing the community or project (their causes, magnitude, effects and implications). This facilitates the identification of solutions. These solutions are more likely to be appropriate because they are derived from the current situation.
Benefits the Target Groups and Enhances Accountability: Participation in monitoring ensures that the people for whom the project was intended are the ones benefiting from it. It increases the awareness of people's rights, which elicits their participation in guarding against project resource misappropriation. Guarding against resource misappropriation makes project implementation less expensive.
Making Appropriate Decisions: Monitoring provides information necessary in making management decisions. When many people participate in monitoring, it means that they have participated in providing management information and contributed to decision making. The resulting decisions are more likely to be acceptable and relevant to the majority of the population. This makes human and resource mobilization for project implementation easier.
Performance Improvement: During monitoring, if a performance deviation is discovered, solutions can be devised. Finding appropriate decisions that can be implemented requires the participation of those people who will put the solution into practice. Therefore participation in monitoring can help improve project performance.
Design of Projects: The information generated during project monitoring helps in re-designing projects in that locality to make them more acceptable. The lessons learned can also be used in the design of similar projects elsewhere.
Collection of Information: If many people participate in monitoring, they are more likely to come up with more accurate information, because information that is omitted by one party can be collected by another. Each stake holder puts varying emphasis on the different aspects of the project, using different methods. Alternatively, one party, knowing that the information they are collecting will be verified, forestalls deliberate wrong reporting.
Challenges of Participation in Monitoring:

Whereas participation in monitoring has a number of virtues, it is likely to face a number of challenges. The challenges include: (a) high costs, (b) variations in information, and (c) inaccuracies.
High Initial Costs: Participation in monitoring requires many resources (eg time, transport and performance-related allowances). It is a demanding process that can over-stretch volunteer spirit at community level and financial resources at district and national levels. Therefore it must be simple and focussed on vital elements.
Quantity and Variety of Information: Monitoring requires collection, documentation and sharing of a wide range of information. This requires many skills that are lacking in the communities. It therefore necessitates much time and resources for capacity building. It also risks wrong reporting.
Inaccuracy of Information: Some stake holders, from the community to the national level, may intentionally provide wrong information to depict better performance and outputs, or because of community or project differences. Counteracting wrong or incorrect reporting needs sensitization and consensus building that is difficult to attain.
The advantages of participation in monitoring are evidently more than the challenges. It is therefore necessary to encourage and support participatory monitoring as we devise means to counteract the challenges.
Levels of Monitoring
Community, District, National, Donor
Monitoring methods differ at each level, and complement each other
There is no universal vocabulary for the varying levels of government and administration from the community level to the national level. Terminology varies from country to country. I can not, therefore, use a set of terms that can be applied in many countries, although the principles and methods of community empowerment are universally similar (with minor variations between countries). Since these training modules were mainly developed in Uganda, I am using the terminology of Uganda. When Museveni came to power, the levels ranged from Resistance Council Level One (community or village) up to Resistance Council Level Five (district). More recently, Uganda reverted to a former terminology with colonial vestiges: 1 = village, 2 = parish, 3 = sub-county, 4 = county and 5 = district. The precise terms are not important here; what is important is that there are monitoring roles that range from the village to the national level. Use whatever terms are applicable to your situation.
Monitoring should be carried out by all stake holders at all levels. Each level, however, has specific objectives for monitoring, methods and therefore roles. For monitoring to be effective, there is need for a mechanism of giving feedback to all people involved at all levels (community, district, national and donor).
Monitoring at Community Level:
Community level is where implementation and utilization of the benefits of the project take place. In most cases it is the village and parish level. At this level, the major purpose of monitoring is to improve the implementation and management of projects. The interest of the community as a whole in monitoring school construction, for example, is to ensure that the construction of the school (an output) is being done as planned. The specific objectives for monitoring at this level therefore include: (a) ensuring that the projects are implemented on time, (b) that they are of good quality, and (c) that the project inputs are well utilized.
Monitoring at this level involves: Identifying a community project. This should be identified in a participatory manner to reflect community needs and stimulate people's interest in its implementation and monitoring. If the process of project identification is not well done and does not reflect community interests, it is likely that the communities will not participate in the monitoring of the implementation activities;
Identifying the team(s) to spearhead the monitoring of the project in the community. The roles of each team, how they should carry out the monitoring process, and the use and sharing of information generated with other groups within and without the community, should be specified and explained;
Designing a work plan that guides project monitoring. The work plan should specify the activities in the order that they will be executed and the individuals to execute them. This helps the people monitoring to know the activities that should be carried out by particular individuals in a given period of time. If the activities are not carried out, the people monitoring get guidance in coming up with solution(s);
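For teams that also keep records electronically, a work plan of this kind can be sketched as a small data structure. This is only an illustrative sketch; the activities, responsible persons and dates below are hypothetical, not taken from the handbook.

```python
from datetime import date

# Hypothetical work plan: each activity names who executes it and the
# planned start and end dates, listed in execution order.
work_plan = [
    {"activity": "community mobilization", "who": "mobilizer",
     "start": date(2024, 1, 8), "end": date(2024, 1, 19)},
    {"activity": "brick making", "who": "work team",
     "start": date(2024, 1, 22), "end": date(2024, 2, 29)},
]

def overdue_activities(plan, today):
    """Activities whose planned end date has passed -- a signal that
    the people monitoring should come up with a solution."""
    return [a["activity"] for a in plan if a["end"] < today]

print(overdue_activities(work_plan, date(2024, 2, 1)))
# -> ['community mobilization']
```

Even where records stay on paper, the same comparison of planned end dates against the current date is what a weekly monitoring visit performs.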
Determining the major activities from the work plan. Whereas all activities in the work plan are necessary and should be monitored, it is useful to identify the major activities on the basis of which objectives and indicators would be set. For example, if the preparatory activities in a school construction project include community mobilization, borrowing of hoes from the neighboring village, digging of the soil and fetching of water for brick making, the major activity summarizing all the sub-activities could be brick making;
Determining the indicators for each activity objective. The indicators help the monitoring team to tell how far they have gone in achieving the objectives of each activity. In our example, one indicator could be the number of bricks made. Comparing what is happening with what was planned should be done throughout the process to tell whether the project is on schedule and as planned. The monitors should check the indicators to measure how far they have reached in achieving the objectives. This should involve looking at the quality of work to ensure that it is good. The monitoring team may need to involve a technical person, like a local artisan or a technician from the district, to ascertain the quality of the project (if it is a construction);
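Comparing an indicator such as the number of bricks made against its objective is simple arithmetic; a minimal sketch follows, using the hypothetical brick-making figures from the example (the count of 4,000 bricks is invented for illustration).

```python
def percent_achieved(indicator_value, target):
    """How far the team has gone in achieving an activity objective,
    expressed as a percentage of the target."""
    return round(100.0 * indicator_value / target, 1)

# Objective: ten thousand bricks; indicator: bricks made so far.
bricks_made = 4000  # hypothetical count recorded on a weekly visit
print(percent_achieved(bricks_made, 10000))  # -> 40.0
```

The same percentage, tracked visit by visit, tells the team whether the activity is on schedule.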
The monitoring team should then agree on how often they should visit the project site as a means of verifying what is taking place. For a community project, to avoid big deviations from the work plan, monitoring visits should be carried out at least once a week. During the project visits, the team should look at what is happening (observe) and talk to everybody who is involved in the project;
For each activity, the monitoring team should identify the objectives. For example, the objective of brick making as an activity during the school construction project could be to make ten thousand bricks by the end of February.
Whenever a monitoring visit is carried out, those monitoring should write down their findings. They can use the form attached in the annex or agree on any other reporting format that captures the findings of the exercise in relation to the work plan. The findings from the monitoring visits should be discussed with other members of the implementation committee. The monitoring and implementation teams should use the information collected to detect and solve the problems facing the project.
The district and sub-county officials should get information from the community monitoring (monitoring performance in relation to turning the inputs into outputs). They should also monitor the outcome of the project (eg the effect of school construction on enrolment levels). The district should also monitor the increase in strength, capacity and power of the target community to stimulate its own development.
The objectives therefore include: supporting the improvement in project performance, and measuring the applicability of the way the project was designed in relation to community strengthening. The methods for monitoring that can be adopted at this level include (a) routine monitoring and (b) qualitative support.
Routine Monitoring and Supervisory Support: This requires the District Project Coordinator, Community Development Assistant, other technical staff and politicians at the district and sub-county to visit the project sites to ascertain what is happening in relation to what was planned. A copy of the work plan and community monitoring reports should be kept in the project site file. This will help whoever wants to compare progress with the work plan and get the comments of the monitoring team to do so without necessarily tracing the members of the monitoring team, who may not be readily available.
During routine monitoring, discussions should be held with all the people involved in the implementation and monitoring of the project. Look at the manner in which each team performs its duties (as a means of verifying the increase in community capacity). Make and record comments about good and bad elements in the project. Recommend solutions, showing who should undertake them, together with the financial, time and other negative effects that may accrue to the project if they are not taken. A copy of the comments should be left in the project site file/book and the other discussed and filed at the district.
The sub-counties and districts should organize discussions of project progress at least once a month. They should also file and submit a project progress report as part of the routine monthly reporting to the district and national office respectively.
The major issues to look at during the district and sub-county routine monitoring include:
• Ensuring that the inputs are efficiently and effectively utilized;
• Checking that the planned activities are being realized;
• Measuring the applicability of the methodology to community strengthening; and
• Drawing lessons from the project intervention for future projects in the country and beyond. The lessons will provide the basis for project methodology replication.
The following tables look at monitoring at (1) community level, (2) sub-county and district level, and (3) national level, indicating the key issues at each level.
Community Level:

At the community level the three main actors who have a stake in the community strengthening intervention are the:
• CBO Executive or Implementing Committee (CIC) of the community project;
• Community mobilizers; and
• Parish Development Committee (PDC).

The following table looks at the main issues of interest, monitoring indicators, means of observing, frequency, and suggested monitoring procedures, for each of these three stake holders.
Executive Committee:
• Issue: Timely implementation of projects. Indicator: number of project activities implemented in time. Means of observing: routine project visits. Frequency: weekly. Procedure: members use the routine monitoring form.
• Issue: Appropriate use of project resources. Indicator: no materials misused. Means of observing: routine project visits and project quality checks. Frequency: weekly. Procedure: members use the routine monitoring form and check quality using the technician's guidelines.
• Issue: Proper collection and storage of project information. Indicator: percentage of projects with project site files; number of reports in site files. Means of observing: reviewing the project site files. Frequency: weekly. Procedure: members of the project committee review the project site file, reports and comments.

Community Mobilizers:
• Issue: Realistic project implementation work plan. Indicator: number of project work plans with well sequenced activities. Means of observing: comparing the sequence of activities in the work plan with how they are implemented. Frequency: monthly. Procedure: (1) mobilizers review project work plans with a technical person, and (2) conduct monthly project site visits.
• Issue: Community participation in project activities. Indicator: number of persons performing their roles. Means of observing: number of activities; amount of resources provided by the community. Frequency: monthly. Procedure: project site visits; discussions with people about their contributions.

Parish Development Committee:
• Issue: Accountability of project resources. Indicator: percentage of resources accounted for. Means of observing: resource accountability form. Frequency: quarterly. Procedure: PDC members use the project resource accountability form.
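A monitoring schedule of this kind can also be kept as simple records, so that each stake holder can look up the issues it should check at a given frequency. The sketch below abbreviates a few of the rows above; the record layout and function name are illustrative assumptions, not part of the handbook.

```python
# Abbreviated records from the community-level monitoring table.
monitoring_plan = [
    {"stakeholder": "Executive Committee",
     "issue": "Timely implementation of projects", "frequency": "weekly"},
    {"stakeholder": "Executive Committee",
     "issue": "Appropriate use of project resources", "frequency": "weekly"},
    {"stakeholder": "Community mobilizers",
     "issue": "Realistic project implementation work plan", "frequency": "monthly"},
    {"stakeholder": "Parish Development Committee",
     "issue": "Accountability of project resources", "frequency": "quarterly"},
]

def issues_for(plan, stakeholder, frequency):
    """Issues a given stake holder should observe at a given frequency."""
    return [r["issue"] for r in plan
            if r["stakeholder"] == stakeholder and r["frequency"] == frequency]

print(issues_for(monitoring_plan, "Executive Committee", "weekly"))
# -> ['Timely implementation of projects', 'Appropriate use of project resources']
```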
Sub-County and District Level:
District Project Coordinator and Planning Unit:
• Issue: Identification of projects that fall in the district plan and national priorities. Indicator: number of projects under the district plan. Means of observing: review of project identification reports; project visits. Frequency: twice a year. Procedure: the planning unit reviews the plans from the parishes, to establish if they fall under the district plan and national priority areas.
• Issue: Community leaders' acquisition of community management skills. Indicator: number of villages using community participation in planning and implementing projects. Means of observing: review of project reports; focus group discussions and other qualitative enquiry techniques. Frequency: twice a year. Procedure: the planning unit conducts qualitative enquiries to find out if communities are participating in project activities; district specific procedures must be designed when exercises take place.

Donors:
• Issue: Adaptation of implementation experiences by other projects and institutions in the country. Indicator: proportion of the project design aspects adapted. Means of observing: national and international discussions. Frequency: annually. Procedure: the agency or ministry conducts meetings with academic institutions and community projects to find out the methodological aspects that have been replicated.
The next module, Report Writing, looks in more detail at the writing of reports itself.
Reporting is a major activity during project monitoring. It is the way in which information about the process and output of activities, and not just the activities, is shared between the stake holders of the project. In the case of a school construction project, reporting does not end at mentioning the number of times the community met to make bricks and build the school walls, but also mentions the number of bricks and school walls that were constructed, plus the process through which they were accomplished.
In community projects, reporting is mainly done in two ways: verbal and written.
Verbal Reporting:

This is a process where reporting is done orally. It is the commonest means of reporting. Community members find it easier and more effective to communicate to others in words.
The advantages of verbal reporting are:
• Low cost. Verbal reporting cuts down significantly the time and other
resources spent on reporting.
..
The challenges of verbal reporting include:
Written Reporting:

During monitoring it is important to report about the results of activities, not just the activities. Write down what you observe, along with reviewing the reports of technical people.
The advantages of written reports are:
Evaluation
A Beginner's Guide
Dinosaurs would have had a better chance to survive
if they had evaluated climate changes a bit more carefully
Introduction
Often the demands put upon human rights activists and the urgency to act against human rights violations mean that no time is put aside to systematically evaluate Human Rights Education (HRE) projects and their effects.
However, an evaluation is a positive activity for several reasons:
• It is a chance for practitioners to test for themselves that their efforts are
working and are worthwhile as well as identifying weaknesses to be remedied.
• It is an essential step towards improving effectiveness.
• It gives the program credibility in the eyes of those directly and indirectly involved (such as students and funders).
• It is essential for good planning and goal setting.
• It can raise morale, motivate people and increase awareness of the importance
of the work being carried out. Or it can help solve internal disputes in an
objective and professional way.
• It allows others engaged in similar program work, in your country or abroad,
to benefit from previous experiences.
What is evaluation?
Evaluation is a way of reflecting on the work that has been done and the results achieved. A well thought out evaluation will help support and further develop any program; that is why evaluation should be an integrated component of any project or program plan and work implementation.
Organization centred when the goal of the evaluation is to look at the life of the
organization. This kind of evaluation would ask questions such as: Is the organization
functioning well? How is the organization perceived by the public or by the
authorities?
Learner centred when the goal of the evaluation is to look at the personal development of the learner. This means that the evaluation will assess whether the learner is achieving knowledge about human rights, and whether s/he is also acquiring other benefits such as self-worth, confidence, empowerment and commitment to human rights.
Although it might be helpful to focus on only one of these goals initially, it is
important to be aware that some overlap will take place. For example when evaluating
the community impact of a project you will inevitably have to look at the efficiency of
the organization concerned.
• take into account the internal as well as the external factors which may be
influencing the work and its outcomes
Planning an evaluation
Measurable objectives and evaluation should be included in the plan from the start.
Evaluation should not be an afterthought.
It is important to have a clear goal for the evaluation. Do we want to evaluate the impact the project is having on society? Or the way the project is functioning? Or the personal development learners are achieving through the project? In other words, should the evaluation be community impact centred, organization centred, or learner centred?
We need to ask what do we want to achieve with the evaluation? Evaluations should
be of interest and value to those involved in the implementation of the program and
those benefiting from it as well as for those funding the work.
To decide the rationale for the evaluation it is important to refer to the project's plan
and budget. This, if well elaborated, is a key factor in helping us to remember the
rationale behind the goals we are trying to achieve and why we have organized our
human rights education work the way we have.
What?
So as not to lose track of the rationale for our evaluation (the why?), we need to clearly define which problem or issue is to be evaluated (the what?). It is very important to set up well defined parameters for the evaluation by being clear on what is to be evaluated. One of the reasons why we don't evaluate is that we make the task too big -- it can be helpful to focus on only one or two elements, particularly if you are carrying out your first evaluation. We then need to decide what is the best type of evaluation for the purpose we have chosen.
For example, if we need to evaluate the effects we are having in the shaping of
national policy for primary education we need to ensure that the terms of reference of
the evaluation clearly direct the evaluator in this direction and that the methodology
we are using is adequate to collect the relevant data .
We need to analyse the different components of the program; determine the ones
which are key in the area that has been chosen for evaluation; and then include them
in the terms of reference as the parameters within which the evaluation will take
place.
When?
• define deadlines
• stages of the evaluation, how long each will take
• when will people need to be involved (check that they are available)
• how much time is going to be needed to analyse the data collected
• when should the final evaluation report be ready
Who?
How?
The choice of methodology for collecting information depends above all on what you
want to achieve with the evaluation, the cultural context in which you are working
and the resources (time and money) available. For those operating within a tight
budget and conducting the evaluation themselves, the most feasible method of data
collection is through the use of questionnaires and interviews (see section on
methodology).
The answers to the above questions (why, what, when, who and how) should provide you with the framework for your evaluation, which is then presented in what is called the 'Terms of Reference'. These should contain the parameters (the content and the limits) of the evaluation and serve as a guide to the person or persons carrying out the evaluation.
It is important to try to have a consensus on the ToR and that these be approved by the
decision making body of the organization.
Do not forget to make a budget for the evaluation. An evaluation does not
necessarily need to cost large amounts, but for accounting purposes it is important to
establish what expenses may result from the evaluation.
Methodology
These four methods can all be used, or combined in different ways, depending on the scope of the evaluation and the resources available. Below there is more information about preparing questionnaires and carrying out interviews.
A questionnaire or an interview
Questionnaires and interviews are the most common methods of collecting
information from people. Sometimes all participants are questioned and sometimes
just a sample. There are several reasons why such approaches are used so often when
collecting information for an evaluation:
• they can be very flexible and user-friendly methods when used with care and
imagination.
• they usually put fewer demands on the project.
• the same set of questions can be copied and used as often as needed, so
comparisons can be made over time or between groups of students (do current
learners have the same views as people on the program last year?).
Whether the list of questions will be used in an interview situation or will be distributed and filled in individually through a questionnaire largely depends on how much time you have available, how in-depth you want the information to be, and whether or not you feel respondents would be comfortable and would respond honestly in a personal interview.
The key to a successful interview lies in the relationship created between the
interviewer and the respondent:
• Try not to make the respondent feel that they are being tested.
• Try not to react defensively to any comments the respondent makes.
• Show the respondent that you are really listening by using attentive body
language (i.e., facing the speaker and making good eye contact) and by not
interrupting.
If you can make respondents feel that their opinions are valued and that they can talk
freely and honestly in the interview, then not only are you gathering much valuable
information for evaluation purposes but also possibly helping to create an interactive
environment that could filter through to the program itself.
For both questionnaires and interviews it is important to allow time to analyse the
responses and to ensure their confidentiality.
Preparing the questions
Once the aim of the evaluation has been decided and the Terms of Reference have
been approved, including the method(s) to gather information, it will be necessary to
decide on the style, content and format of the questions for either the questionnaire or
the interview.
Types of questions
There are several basic kinds of questions that can be used in a survey. As a general
rule one should use the type of question which best serves the aims of the evaluation.
Open questions get the subject talking. They leave an open field in which to answer.
Examples include
''Tell me about...''
''What happened when...?''
Probing questions are used to fill in the detail. They are questions which can tease out
the really interesting areas and detail of a particular topic.
Examples include:
''What exactly did you think of the lesson?''
''What happened at the workshop?''
Reflective questions are useful to obtain further information. They are a repeat of
something the person answering has said or implied. They are mostly used in
interviews.
Examples include:
''You say he overreacted - how?''
''You didn't seem to enjoy that experience - why was that?''
Reflective questions can also be used to help someone who is upset let off steam so
that rational decisions can then be taken. For example: ''You're obviously upset about
this; please tell me more about it.''
Closed questions require a one or two word answer and are used to establish single,
specific facts.
Examples include:
''Did you attend the workshop?''
''How many sessions did you complete?''
Different kinds of ''category scales'' can also be used.
For example:
1=frequently; 2=sometimes; 3=almost never;
1=strongly approve; 2=approve; 3=undecided; 4=disapprove;
5=strongly disapprove;
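As a minimal sketch (with invented responses and using the hypothetical five-point scale above), answers coded on a category scale can be tallied and turned into percentages with a few lines of Python:

```python
from collections import Counter

# Invented coded answers to one question, using the scale
# 1=strongly approve ... 5=strongly disapprove
responses = [1, 2, 2, 3, 1, 4, 2, 5, 2, 1]

labels = {1: "strongly approve", 2: "approve", 3: "undecided",
          4: "disapprove", 5: "strongly disapprove"}

counts = Counter(responses)
for code in sorted(labels):
    n = counts.get(code, 0)
    print(f"{labels[code]:20s} {n:2d}  ({100 * n / len(responses):.0f}%)")
```

The same tally, repeated per question, gives the frequency tables that comparisons over time or between groups are built on.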
How to choose what type of questions best serve your purpose
A combination of different questions can be used effectively both in a distributed
questionnaire or in an interview situation. However, the design and presentation of
these questions and how you combine them will depend on certain considerations:
The appropriateness of a strategy will of course depend on what you are trying to
achieve, how much time you have available, and on the relationship of those
communicating. One common technique is called ''The funneling technique''.
It involves:
Starting with broad open questions: ''Tell me about...''
Funneling down with a probing question: ''What exactly did you do...?''
Finishing off with a closed question: ''Was it you? Yes or no?''
Another technique involves starting with a closed question and expanding out.
There should be plenty of time to analyse all the information gathered during the
evaluation process and to write the report. The evaluation report should include the
Terms of Reference, a description of the methodology used, the conclusions reached,
including a justification of how these were reached, and a set of recommendations.
It is common practice to circulate a draft evaluation report amongst relevant people
(for example, paid and unpaid staff and members of the board), particularly if the
evaluation has been carried out by an external evaluator. This process allows time
for people involved in the project to raise questions about the findings of the
evaluation, and possibly to clear up any potential areas of confusion. Of course, this
opportunity to comment on the evaluation report does not mean that people can alter
or delete negative findings from the report.
The final report, when completed, should be delivered to those responsible for the
management of the project. They, in consultation with those responsible for the
administration and implementation of the project, should consider which
recommendations should be taken forward and how to implement them.
Don't wait until it is too late to start organising an evaluation of your HRE project!
Appendix I
Making sure you have answered all the most important questions:
A final checklist
Why?
Who?
• What are the interest groups involved in the work you want to evaluate?
• How are you going to assure their involvement and active participation?
• Who will gather the information and who will analyse it?
• Who is responsible for what and who takes the final decisions?
When?
Don't start an evaluation if you are not going to finish it or if you are going to ignore
its results and recommendations !!!
Appendix II
This is a sample evaluation designed to give you an example of how terms of
reference and questionnaire may read -- this is not an evaluation form per se.
INTRODUCTION
The aim of this document is to outline the process for evaluating Amnesty
International's Human Rights Education (HRE) work. The evaluation process will
follow the guidelines for evaluation as set out in Chapter 13 of the Amnesty
International Campaigning Manual published in 1997 (AI index ACT10/02/97).
''But I do not have time to be involved in this evaluation, I have other work to do''
If we do not evaluate, we cannot improve our work and be more successful
at future HRE activities. The time you spend contributing to the evaluation now will
save you time in the development of future HRE programs. Evaluation also gives us
an opportunity to learn from each other. For all structures this information is
invaluable. It is time well spent! And we promise to send you the results by ...
BACKGROUND
LONG TERM GOALS FOR AI's HRE PROGRAMS
To reach a wide and culturally diverse audience of all age groups, and so to realise
our belief that all individuals have both the right and the responsibility to promote
and protect human rights.
1. To develop and implement relevant and effective HRE programs in all countries
with AI structures. To ensure that the necessary resources are available for their full
implementation.
2. To work with relevant NGOs and other organizations to ensure that HRE becomes
an integral part of all educational work whether formal or informal. To encourage the
use of creative methods in teaching to enrich the learning processes.
The key strategies adopted to achieve the above goals were as follows:
Objective 1: To introduce human rights issues into formal educational
and training curricula and teaching practices in schools, universities,
professional training institutions and educational fora.
Objective 2: To develop and expand informal HRE programs.
Objective 3: To facilitate and increase the exchange of HRE expertise,
information and materials within AI and the broader human rights
movement.
Objective 4: To establish regional and sub-regional HRE networks.
Objective 5: To actively lobby the relevant IGOs and international
NGOs to recognise and carry out their responsibilities in regard to
HRE.
Objective 6: To secure adequate funding at the international and local
level for AI's HRE program.
Objective 7: To establish effective, comprehensive and systematic
review and evaluation mechanisms for AI's HRE work.
THE EVALUATION PROCESS
Please refer to the International HRE Strategy document (POL 32/02/96) and the
above goals for this evaluation when filling in the following questionnaire. Your
answers will form the main body of reference documents in the development of the
full evaluation report, along with the documents listed in the appendix, responses
from other evaluation participants and external contractors and service providers.
If you wish to supply any further information to aid the evaluation please do so. Your
input is not limited to the space provided by the questionnaire. Please attach any
supplementary information you wish.
When compiling your response to the questionnaires please consult widely with the
staff and volunteers within your section and/or structure so that input from all
programmes is reflected in your response.
An electronic copy of the evaluation form is also available via email on request.
HRE Evaluator
Campaigning Program
International Secretariat, Amnesty International
1 Easton Street,
London WC1X 8DJ
UK
Section/Structure.........................................................................................
Position.........................................................................................................
6. When and how did you start implementing an HRE project or program?
Please indicate the times of high and low HRE activity since the approval of the
International HRE Strategy in 1996 and the reasons for this.
1996 ..................................................................
1997 ..................................................................
1998 ..................................................................
1999 ..................................................................
2000 ..................................................................
7. What sectors have you worked with or targeted? Why?
8. Have you worked with non-literate individuals and communities? How have you
done this?
9. Please describe the most successful HRE project work you have implemented,
including the effects that this has had in improving the human rights situation of your
country/community (use extra paper if you need to).
10. Please describe the least successful HRE project work you have implemented,
including the effects that this had on the human rights situation of your
country/community (use extra paper if you need to).
11. What indicators have you used to measure the impact your project/program
has had? E.g., fewer incidents of bullying at school, fewer reported cases of torture,
a greater number of articles about human rights in the press, a greater number of
trials for human rights violations, or the results of tests/exams?
12. What HRE materials, if any, has your section produced? Please specify the format,
e.g. book, video, etc.
The evaluation phase also involves a sequence of stages that typically includes: the
formulation of the major objectives, goals, and hypotheses of the program or
technology; the conceptualization and operationalization of the major components of
the evaluation -- the program, participants, setting, and measures; the design of the
evaluation, detailing how these components will be coordinated; the analysis of the
information, both qualitative and quantitative; and the utilization of the evaluation
results.
You may have also heard the terms "qualitative" and "quantitative"
used to describe an evaluation. However, these terms, which are
defined in the glossary, refer to the types of information or data
that are collected during the evaluation and not to the type of
evaluation itself. For example, an outcome evaluation may involve
collecting both quantitative and qualitative information about
participant outcomes.
Step 2: Prepare for the evaluation. Before you begin, you will need
to build a strong foundation. This planning phase includes deciding
what to evaluate, building a program model, stating your objectives
in measurable terms, and identifying the context for the evaluation.
The more attention you give to planning the evaluation, the more
effective it will be. Chapter 5 will help you prepare for your
evaluation.
Highest cost evaluations. At the highest cost level, you will be able
to obtain all of the information available from the other cost options
as well as longer term outcome information on program
participants. The high cost of this type of evaluation is due to the
necessity of tracking or contacting program participants after they
have left the program. Although follow up activities often are
expensive, longer term outcome information is important because it
assesses whether the changes in knowledge, attitudes, or behaviors
that your participants experienced initially are maintained over
time.
Whatever team option you select, you must make sure that you,
the program manager, are part of the team. Even if your role is
limited to one of overall evaluation management, you must
participate in all phases of the evaluation effort.
Possible advantages:
Possible disadvantages:
Selecting this team does not mean that you or your staff need not
be involved in the evaluation. You and other staff members must
educate the evaluator about the program, participants, and
community. Other staff or advisory board members must also be
involved in planning the evaluation to ensure that it addresses your
program's objectives and is appropriate for your program's
participants.
Possible advantages:
Possible disadvantages:
This second option is a good choice if you feel that you have
sufficient staff resources to implement the evaluation, but need
assistance with the technical aspects. An evaluation consultant, for
example, may help with developing the evaluation design,
conducting the data analyses, or selecting or constructing
appropriate data collection tools. You will also want the consultant
to help you develop the evaluation plan to ensure that it is
technically correct and that what you plan to do in the evaluation
will allow you to answer your evaluation questions.
Possible advantages:
An in-house evaluation team may be the least expensive option, but
this is not always true. An in-house staff evaluation team promotes
maximum involvement and participation of program staff and can
contribute to building staff expertise for future evaluation efforts.
Possible disadvantages:
The checklist on the following page can help you decide what type
of team you may need. Answer the questions based on what you
know about your resources.
Whatever team you select, remember that you and your staff need
to work with the evaluation team and be involved in all evaluation
planning and activities. Your knowledge and experience working
with program participants and the community are essential for an
evaluation that will benefit the program, program participants,
community, and funders.
The checklist above can help you select your evaluation team in the
following ways:
If your answer to all the resource questions is "no," you may want
to consider postponing your evaluation until you can obtain funds to
hire an outside evaluator, at least on a consultancy basis. You may
also want to consider budgeting funds for evaluation purposes in
your future program planning efforts.
Creating a contract
A major step in managing an evaluation is the development of a
contract with your outside evaluator. It is important that your
contract include the following:
Who will perform evaluation tasks. The contract should clarify who
is to perform the evaluation tasks and the level of contact between
the evaluator and the program. Some program managers have
found that outside evaluators, after they are hired, delegate many
of their responsibilities to less experienced staff and have little
contact with the program managers or staff. To some extent, a
contract can protect your program from this type of situation.
You may also want your evaluator to assess whether some of your
participants are changing and whether there are any common
characteristics shared by participants that are or are not
demonstrating changes. However, be prepared to accept findings
that may not support your perceptions. Not every program will work
the way it was intended to, and you may need to make some
program changes based on your findings.
Potential Responsibilities of the Evaluator
• Develop an evaluation plan, in conjunction with program staff.
• Develop database.
• Analyze data.
Begin preparing for the evaluation when you are planning the
program, component, or service that you want to evaluate. This
approach will ensure that the evaluation reflects the program's
goals and objectives. The process of preparing for an evaluation
should involve the outside evaluator or consultant (if you decide to
hire one), all program staff who are to be part of the evaluation
team, and anyone else in the agency who will be involved. The
following steps are designed to help you build a strong foundation
for your evaluation.
After runaway and homeless youth reduce their AOD abuse, they
will seek services designed to help them resolve other
problems they may have.
After CPS workers improve their skills for working with families in
which AOD abuse and child maltreatment coexist, collaboration
and integration of services between the child welfare and the
substance abuse treatment systems will increase.
Program models are not difficult to construct, and they lay the
foundation for your evaluation by clearly identifying your program
implementation and participant outcome objectives. These models
can then be stated in measurable terms for evaluation purposes.
» Measurable objectives:
Who you plan to reach and how many — Classes will be provided to
all youth residing in the shelter during the time of the classes (from
8 to 14 youth for any given session) and to youth who are seeking
crisis intervention services from the youth services agency
(approximately 6 youth for each session). All youth will be between
13 and 17 years old. Youth seeking crisis intervention services will
be recruited to the classes by the intake counselors and the clinical
director.
Step 4: Identify the context for your evaluation. Part of planning for
an evaluation requires understanding the context in which the
evaluation will take place. Think again about building a house.
Before you can design your house, you need to know something
about your lot. If your lot is on a hill, you must consider the slope of
the hill when you design your house. If there are numerous trees on
the lot, you must design your house to accommodate the trees.
The staff context. The support and full participation of program staff
in an evaluation is critical to its success. Sometimes evaluations are
not successfully implemented because program staff who are
responsible for data collection do not consistently administer or
complete evaluation forms, follow the directions of the evaluation
team, or make concerted efforts to track participants after they leave
the program. The usual reason for staff-related evaluation problems
is that staff were not adequately prepared for the evaluation or
given the opportunity to participate in its planning and
development. Contextual issues relevant to program staff include
the following:
If you serve a particular cultural group, you may need to select the
individuals who are to collect the evaluation information from the
same cultural or ethnic group as your participants. If you are
concerned about the literacy levels of your population, you will need
to pilot test your instruments to make sure that participants
understand what is being asked of them. More information related
to pilot tests appears in Chapter 7.
When the objective pertains to who will do it, you must collect
information on the characteristics of program staff (including their
background and experience), how they were recruited and hired,
their job descriptions, the training they received to perform their
jobs, and the general staffing and supervisory arrangements for the
program.
When the objective concerns who will participate, you must collect
information about the characteristics of the participants, the
numbers of participants, how they were recruited, barriers
encountered in the recruitment process, and factors that facilitated
recruitment.
I. Evaluation framework
A. What you are going to evaluate
1. Program model (assumptions about target
population, interventions, immediate outcomes,
intermediate outcomes, and final outcomes)
2. Program implementation objectives (stated in
general and then measurable terms)
a. What you plan to do and how
b. Who will do it
c. Participant population and recruitment
strategies
3. Participant outcome objectives (stated in general
and then measurable terms)
4. Context for the evaluation
B. Questions to be addressed in the evaluation
1. Are implementation objectives being attained? If
not, why (that is, what barriers or problems have
been encountered)? What kinds of things
facilitated implementation?
2. Are participant outcome objectives being
attained? If not, why (that is, what barriers or
problems have been encountered)? What kinds of
things facilitated attainment of participant
outcomes?
a. Do participant outcomes vary as a function
of program features? (That is, which aspects
of the program are most predictive of
expected outcomes?)
b. Do participant outcomes vary as a function
of characteristics of the participants or staff?
C. Timeframe for the evaluation
1. When data collection will begin and end
2. How and why timeframe was selected
II. Evaluating implementation objectives - procedures and
methods
(question 1: Are implementation objectives being attained,
and if not, why not?)
A. Objective 1 (state objective in measurable terms)
1. Type of information needed to determine if
objective 1 is being attained and to assess
barriers and facilitators
2. Sources of information (that is, where you plan to
get the information including staff, participants,
program documents). Be sure to include your
plans for maintaining confidentiality of the
information obtained during the evaluation
3. How sources of information were selected
4. Time frame for collecting information
5. Methods for collecting the information (such as
interviews, paper and pencil instruments,
observations, records reviews)
6. Methods for analyzing the information to
determine whether the objective was attained
(that is, tabulation of frequencies, assessment of
relationships between or among variables)
B. Repeat this information for each implementation
objective being assessed in the evaluation
IV. Evaluating participant outcome objectives - procedures and
methods
(question 2: Are participant outcome objectives being
attained and if not, why not?)
A. Evaluation design
B. Objective 1 (state outcome objective in measurable
terms)
1. Types of information needed to determine if
objective 1 is being attained (that is, what
evidence will you use to demonstrate the
change?)
2. Methods of collecting that information (for
example, questionnaires, observations, surveys,
interviews) and plans for pilot-testing information
collection methods
3. Sources of information (such as program staff,
participants, agency staff, program managers,
etc.) and sampling plan, if relevant
4. Timeframe for collecting information
5. Methods for analyzing the information to
determine whether the objective was attained
(i.e., tabulation of frequencies, assessment of
relationships between or among variables using
statistical tests)
C. Repeat this information for each participant outcome
objective being assessed in the evaluation
V. Procedures for managing and monitoring the evaluation
A. Procedures for training staff to collect evaluation-related
information
B. Procedures for conducting quality control checks of the
information collection process
C. Timelines for collecting, analyzing, and reporting
information, including procedures for providing
evaluation-related feedback to program managers and
staff
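The analysis methods named in the outline, tabulation of frequencies and assessment of relationships between variables, can be sketched in a few lines; the records and variable names here are invented for illustration:

```python
from collections import Counter

# Invented records: (participant group, completed program? yes/no)
records = [("youth", "yes"), ("youth", "no"), ("adult", "yes"),
           ("youth", "yes"), ("adult", "yes"), ("adult", "no"),
           ("youth", "yes"), ("adult", "yes")]

# Tabulation of frequencies for a single variable
group_counts = Counter(group for group, _ in records)
print(group_counts)

# A simple cross-tabulation to look at the relationship
# between group membership and program completion
crosstab = Counter(records)
for group in ("youth", "adult"):
    yes = crosstab[(group, "yes")]
    no = crosstab[(group, "no")]
    print(f"{group}: {yes} completed, {no} did not")
```

Frequency tables like these answer the "is the objective being attained?" questions; the cross-tabulation is the simplest way to start asking whether outcomes vary with participant or staff characteristics.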
We will keep all of your answers confidential. Your name will never
be included in any reports and none of your answers will be linked
to you in any way. The information that you provide will be
combined with information from everyone else participating in the
study.
If you have any questions about the study you may call [name and
telephone number of evaluator, program manager or community
advocate].
By signing below, you confirm that this form has been explained to
you and that you understand it.
AGREE TO PARTICIPATE
Signed: __________________________________________
Participant or Parent/Guardian
Date: __________________________________________
Chapter 7: How Do You Get the Information You Need for Your
Evaluation?
As Chapter 6 noted, a major section of your evaluation plan
concerns evaluation information - what kinds of information you
need, what the sources for this information will be, and what
procedures you use to collect it. Because these issues are so critical
to the success of your evaluation effort, they are discussed in more
detail in this chapter.
At the end of the chapter, there are two worksheets to help you
plan out the data collection process. One is a sample worksheet
completed for a drug abuse prevention program for runaway and
homeless youth, and the other is a blank worksheet that you and
your evaluation team can complete together. The following sections
cover each column of the worksheet.
• Student grades
• Academic test scores
• Number of behavior or discipline reports
• Teacher assessments of classroom behaviors
• Student self-assessments of classroom behaviors
When you interview staff, you are relying on their memories of what
happened, but when you review case records or logs, you should be
able to get information about what actually did happen. If you
choose to use case records or program logs to obtain evaluation-
relevant data, you will need to make sure that staff are consistent
in recording evaluation information in the records. Sometimes case
record reviews can be difficult to use for evaluation purposes
because they are incomplete or do not report information in a
consistent manner.
Another strategy is to identify existing information on your
participants. Although your program may not collect certain
information, other programs and agencies may. You might want to
seek the cooperation of other agencies to obtain their data, or
develop a collaboration that supports your evaluation.
In addition, you can ask questions of participants after the pilot test
to obtain their comments on the instruments and procedures.
Frequently, after pilot testing the evaluation team will need to
improve the wording of some questions or instructions to the
respondent and delete or add items.
This chapter will not tell you how to analyze evaluation data.
Instead, it provides some basic information about different
procedures for analyzing evaluation data to help you understand
and participate more fully in this process. There are many ways to
analyze and interpret evaluation information. The methods
discussed in this chapter are not the only methods one can use.
Whatever methods the evaluation team decides to use, it is
important to realize that analysis procedures must be guided by the
evaluation questions. The following evaluation questions are
discussed throughout this manual:
Are program implementation objectives being attained? If not, why not? What types
of things were barriers to or facilitated attaining program implementation objectives?
Are participant outcome objectives being attained? If not, why not?
What types of things were barriers to or facilitated attaining
participant outcome objectives?
The following sections discuss procedures for analyzing evaluation
information to answer both of these questions.
Did some participants change more than others and, if so, what
explains this difference? (For example, characteristics of the
participants, types of interventions, duration of interventions,
intensity of interventions, or characteristics of staff.)
Most statistical tests can also answer whether any other factors
affected the relationship between the independent and dependent
variables. For example, was the variation in scores from before to
after the program affected by the ages of the persons taking the
test, their socioeconomic status, their ethnicity, or other factors?
The more independent and mediating variables you include in your
statistical analyses, the more you will understand about your
program's effectiveness.
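To illustrate the kind of before-and-after comparison such statistical tests make, here is a minimal sketch with invented scores that computes a paired t statistic by hand; a real analysis would use a statistics package and check the result against a t distribution for significance:

```python
import math
from statistics import mean, stdev

# Invented pre- and post-program knowledge test scores
# for the same eight participants
pre  = [52, 61, 47, 70, 55, 64, 58, 49]
post = [60, 66, 55, 74, 63, 65, 64, 58]

# Paired t statistic: mean change divided by its standard error
diffs = [b - a for a, b in zip(pre, post)]
t = mean(diffs) / (stdev(diffs) / math.sqrt(len(diffs)))
print(f"mean change = {mean(diffs):.2f}, t = {t:.2f}")
```

A large t value suggests the before-to-after change is unlikely to be chance alone; adding independent and mediating variables (age, socioeconomic status, and so on) requires the regression-style analyses the evaluation team would run in statistical software.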
Sample Outline
Final Evaluation Report
Executive Summary
1. Data collected
2. Method of data collection
3. Examples:
a. Case record reviews
b. Interviews
c. Self-report questionnaires or inventories (if
you developed an instrument for this
evaluation, attach a copy to the final report)
d. Observations
4. Data sources (for each evaluation question)
and sampling plans, when relevant
B. Discussion of issues that affected the outcome
evaluation and how they were addressed
1. Program-related issues
a. Staff turnover
b. Changes in target population
characteristics
c. Changes in services/interventions
during the course of the project
d. Changes in staffing plans
e. Changes in collaborative
arrangements
f. Characteristics of participants
2. Evaluation-related issues
a. Problems encountered in obtaining
participant consent
b. Change in numbers of participants
served requiring change in analysis
plans
c. Questionable cultural relevance of
evaluation data collection instruments
and/or procedures
d. Problems encountered due to
participant attrition
C. Procedures for data analyses
D. Results of data analyses
1. Significant and negative analyses results
(including statement of established level of
significance) for each outcome evaluation
question
2. Promising, but inconclusive analyses results
3. Issues/problems relevant to the analyses
4. Examples:
a. Issues relevant to data collection
procedures, particularly consistency in
methods and consistency across data
collectors
b. Issues relevant to the number of
participants served by the project and those
included in the analysis
c. Missing data or differences in size of sample
for various analyses
E. Discussion of results
1. Interpretation of results for each evaluation
question, including any explanatory
information from the process evaluation
a. The effectiveness of the project in
attaining a specific outcome objective
b. Variables associated with attainment
of specific outcomes, such as
characteristics of the population,
characteristics of the service provider
or trainer, duration, or intensity of
services or training, and
characteristics of the service or
training
2. Issues relevant to interpretation of results
B. Integration of Process and Outcome Evaluation
Information
A. Summary of process evaluation results
B. Summary of outcome evaluation results
C. Discussion of potential relationships between
program implementation and participant outcome
evaluation results
D. Examples:
1. Did particular policies, practices, or procedures
used to attain program implementation objectives
have different effects on participant outcomes?
2. How did practices and procedures used to recruit
and maintain participants in services affect
participant outcomes?
3. What collaboration practices and procedures were
found to be related to attainment of expected
community outcomes?
4. Were particular training modules more effective
than others in attaining expected outcomes for
participants? If so, what were the features of
these modules that may have contributed to their
effectiveness (such as characteristics of the
trainers, characteristics of the curriculum, the
duration and intensity of the services)?
C. Recommendations to Program Administrators or
Funders for Future Program and Evaluation Efforts
D. Examples:
1. Based on the evaluation findings, it is recommended
that the particular service approach developed for this
program be used to target mothers who are 25 years of
age or older. Younger mothers do not appear to benefit
from this type of approach.
2. The evaluation findings suggest that traditional
educational services are not as effective as self-esteem
building services in promoting attitude changes among
adolescents regarding substance abuse. We recommend
that future program development focus on providing
these types of services to youth at risk for substance
abuse.
3. Based on the evaluation findings, it is recommended
that funders provide sufficient funding for evaluation
that will permit a long-term follow-up assessment of
participants. The kinds of participant changes that the
program may bring about may not be observable until 3
or 6 months after participants leave the program.