
The ‘SO WHAT’ factor...

impact evaluation strategies for teacher educators


Purpose
This paper arose out of the Training and Development Agency for Schools (TDA)
information and communication technology (ICT) in initial teacher training (ITT)
impact evaluation project 2008/09. It aims to provide an accessible resource
with practical ideas and models for evaluating the impact of a technology
intervention from inception to completion.
One of the key findings of the evaluation was that ‘one size definitely doesn’t
fit all’ when selecting the framework or model of evaluation.
Although intended for teacher educators, this paper will be relevant to
anyone in education interested in assessing the impact of technology in
teaching and learning. The ideas should be useful to those implementing a
range of innovative projects who want a customisable evaluation that
covers the breadth of creative work occurring in ITT with ICT. The evaluation
methods used in the ICT in ITT project can be seen in the main report at
www.tda.gov.uk/techforteaching, where we also put forward an embryonic
model for determining project success factors.

Acknowledgments
This project was conceived and funded in support of the TDA evaluation by Becta. Becta has been working with the TDA to
support the evaluation advisory group. Becta also funded the evaluation team at the University of Wolverhampton to produce
additional materials on evaluating the impact of a technology intervention. Thanks are due to Malcolm Hunt at Becta, who
guided the process, and to Dr Michael Stokes at the University of Wolverhampton, who conceptualised the document and
drew the strands together. The complex logic model referred to in this document was derived from the University of Wisconsin-Extension (UW-Extension) logic model, with kind permission.

Contents

Section one: Evaluation – what is it and why do it?

Section two: Guiding principles for evaluation

Section three: Major frameworks and evaluation models
• Kirkpatrick’s evaluation of training model
• Guskey’s five levels of professional development evaluation
• Logic frame model evaluation
• Self-review framework for ICT
• The test-bed e-maturity model

Section four: Evaluation tools
• The five phases of ICT adoption
• E-maturity models
• Technology – a vehicle for enquiry-based learning

Section five: Information on evaluation
• Examples of resources available to support and guide evaluation of ICT

Section six: An example of the use of an evaluation framework
• Evaluation factors
• Findings

References

Section one: Evaluation – what is it and why do it?

People use different terminology when they are talking about evaluation, and people have different perspectives on the nature and purpose of evaluation. Bennett (2003, p15) recognises that, while there has been ongoing debate for several decades over the nature and purpose of evaluation, “…evaluation forms an important area of research in education.” Easterby-Smith (1986, p13) adds his own three reasons for evaluating more succinctly as:
• proving
• improving, and
• learning.
This document aims to make the purpose of evaluation and the approaches to
evaluation clearer by concentrating on the evaluation of ICT in education. Definitions
of evaluation abound and Bennett (2003) offers 13 without concluding an overall
definition. One biased towards education evaluation is from Nevo (1995, p11),
who suggests that it is an “act of collecting systematic information regarding the
nature and quality of educational objects”, which suggests that it is a combination
of description and judgement. The UK Evaluation Society (1994) also highlights the
collection of information in saying evaluation is “…an in-depth study which takes
place at a discrete point in time, and in which recognised research procedures are
used in a systematic and analytically defensible fashion to form a judgement on
the value of an intervention.”
How such collection of information or research is organised may direct us to Scriven’s
(1967) idea of having two forms of evaluation: formative evaluation, which would
support the development of your project, and summative evaluation, for assessing
the final impact of a project. Goodall et al (2005, p37) supported this: “Effective
evaluation of continuing professional development (CPD) will usually need to serve
two main purposes: summative evaluation (does the programme/activity improve
outcomes?) and formative assessment (how can the programme/activity be
improved?)”. They go on to be critical of CPD evaluation practice and offer their own
model of evaluation, the ‘route map’ (found in the examples of evaluation practice in
this report in section 5).
In considering ICT in education, the formative function will include the evaluation
of instructional materials and pedagogic processes. This may relate to either the
development or use of materials and delivery of learning. A definition that appears
to be relevant to ICT issues in education is from Stern (1988), who suggests:
“Evaluation is any activity that, throughout the planning and delivery of innovative
programmes, enables those involved to learn and make judgements about the
starting assumptions, implementation processes, and outcomes of the innovation
concerned.” Guskey (1998) offers his definition of evaluation (adapted from the
Joint Committee on Standards for Educational Evaluation, 1994, p1): “Evaluation
is the systematic investigation of merit or worth”, proposing that it is a structured, measured and measurable approach. Chelimsky (1997, p101) sums up why
we evaluate in stating: “We look to evaluation as an aid to strengthen our practice,
organisation and programmes.” In order to do this, these commentators agree that the reason or reasons for the evaluation should be stated before any evaluation takes place.
This is reinforced by Guskey (2002), who reminds us that good evaluation is built in
from the outset of the professional development programme or activity, not added

on at the end. The Research Councils UK (2005) emphasise this too in confirming
that evaluation is a process that takes place before, during and after a project.
It includes looking at the quality of the content, the delivery process, and the impact
of the project or programme on the audience(s). Some evaluation frameworks
incorporate a model planning process for a project as well as an evaluation
framework for the project, eg, logic frame models.

Guskey (2002, p1) helps to explain ‘Why evaluate?’: “The processes and procedures
involved in evaluation present an endless list of challenges that range from very
simple to extremely complex. Well-designed evaluations are valuable learning tools
that serve multiple audiences. They inform us about the effectiveness of current
policies or practices, and guide the content, form, and structure of future endeavours.
Poorly designed evaluations, on the other hand, waste time, energy and other
valuable resources…good evaluations do not have to be costly, nor do they require
sophisticated technical skills. What they require is the ability to ask good questions
and a basic understanding about how to find valid answers. Good evaluations provide
information that is sound, useful, and sufficiently reliable to use in making thoughtful
and responsible decisions about projects, programs, and policies.”

Section two: Guiding principles for evaluation

In carrying out evaluations, participants should decide why and how they will carry them out. Drawing on the experience of CeDARE, Hadfield (2008) proposes five sets of principles that participants should consider for any evaluation:

1. Identifying their focus and purpose of evaluation


Evaluations should:
• cover the four key levels of access and participation, participant learning,
participant behaviour, and organisational impact
• have clear foci that are at least in part co-constructed with participants and
address their needs as well as those of providers
• be directed towards outcomes which can be communicated to and used by
key stakeholders within the theme, and
• balance the amount of effort to conduct them with the potential benefit of
their outcomes.

2. Building on what is already known


Evaluations should:
• have convincing arrangements for accessing and building upon existing evidence
and knowledge of effective practice, and
• wherever possible, use existing frameworks and tools that are already ‘live’ within the system.

3. Gathering evidence
Evaluations should:
• try as far as possible to reuse and/or increase use of relevant evidence that has
already been collected
• ensure, as far as possible, that the process of collecting any new evidence is a
learning experience for those involved
• have clear strategies for triangulation, by collecting different sorts of evidence
from different groups in more than one context, and
• follow recognised ethical guidelines for both collection and storage.

4. Analysing and interpreting


Evaluations should:
• analyse existing data before collecting additional forms
• use or adapt existing frameworks if they are well recognised and regarded
• balance a search for consistent themes with contradictory messages and the
unexpected outcomes, and
• include practical arrangements for checking interpretations and summaries.

5. Communication and feedback


Evaluations should:
• report back in forms and ways that are accessible and appropriate to
key audiences
• where possible, use short timely feedback loops rather than rely on summative
feedback, and
• generate a short summary of key learning and impact that can be fed to others.

Section three: Major frameworks and evaluation models

This section (bearing in mind the principles above) identifies some major frameworks for evaluation and provides links to approaches and models of practice in evaluation for use in a variety of situations. It draws on methods of practice from the research in the CeDARE (2009) ICT in ITT survey analysis report and on selected examples from the literature and the internet.

The evaluation research that provided the stimulus for this paper used evaluation models developed by Kirkpatrick for evaluating training. It also included approaches for impact evaluation based on the work of Hooper and Rieber (1995) and Fisher et al (2006) for applying this evaluation to ICT development and its impact on trainees and trainers (the Kirkpatrick evaluation model is described below; the tools based on Hooper and Rieber and on Fisher et al are covered in section four).

Evaluation models
Kirkpatrick’s evaluation of training model
Kirkpatrick developed his four-step model for the evaluation of training and development in business organisations. According to this model, evaluation should begin at level one and then, as time and budget allow, move sequentially through levels two, three and four. Each successive level represents a
more precise measure of the effectiveness of the training programme, but at the
same time requires a more rigorous and time-consuming analysis. The model consists
of four stages, originally described as steps but since 1996 considered as levels,
and is applicable for all forms of programme evaluation, including ICT in ITT.
• Level one: reactions – what the participants in the programme felt about the
project/programme, normally measured by the use of reaction questionnaires
based upon their perceptions. Did they like it? Was the material relevant to
their work? A tool such as a ‘happy sheet’ is often utilised at this level. Level one
evaluation is viewed by Kirkpatrick as the minimum requirement, providing some
information for the improvement of the programme.
• Level two: learning – this moves the evaluation on to assessing the changes in
knowledge, skills or attitude with respect to the programme/project objectives.
Measurement at this level is more difficult, and formal or informal testing or
surveying is often used, preferably pre- and post-programme.
• Level three: behaviour – evaluating at this level attempts to answer the question:
are the newly acquired skills, knowledge or attitude being used in the everyday
environment of the learner? Measuring at this level is difficult as it is often not
easy to predict when the change of behaviour will occur, and therefore important
decisions may have to be made as to when to evaluate, how often to evaluate
and how to go about the evaluation. In the ICT in ITT project, questionnaires to
determine changes in practice were used, with questions based on a modified
e-maturity scale from the work of Hooper and Rieber (1995).
• Level four: results – this level seeks to evaluate the success of the programme in
terms of results for the organisation, usually stated in improvements in quality.
Determining the improvements in quality of practice is probably the most difficult aspect of this evaluation framework.
(Summary adapted from Tamkin, Yarnall and Kerrin, 2002)
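The four levels are often used to structure data collection. Purely as an illustration (and not part of the Kirkpatrick model, the TDA project or the CeDARE materials), the Python sketch below shows one way an evaluator might hold such a plan as data; the instruments listed are hypothetical examples.

from dataclasses import dataclass, field
from typing import List


@dataclass
class LevelPlan:
    level: int                  # Kirkpatrick level, 1 to 4
    focus: str                  # what the level measures
    instruments: List[str] = field(default_factory=list)  # hypothetical data sources


kirkpatrick_plan = [
    LevelPlan(1, "reactions", ["end-of-session questionnaire ('happy sheet')"]),
    LevelPlan(2, "learning", ["pre- and post-programme test or skills audit"]),
    LevelPlan(3, "behaviour", ["follow-up questionnaire on classroom practice",
                               "observation after an agreed interval"]),
    LevelPlan(4, "results", ["organisational quality indicators",
                             "review of learner outcomes"]),
]

for plan in kirkpatrick_plan:
    # Each successive level needs more time and a more rigorous analysis.
    print(f"Level {plan.level} ({plan.focus}): {'; '.join(plan.instruments)}")

Setting the plan out in this way simply makes explicit, before any data are collected, which instrument is expected to answer which level of question.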


Arguments for the use of this model


The four-level model can facilitate professional development evaluations because
it describes how evaluation can be conducted – and how it can be useful – at each
level. There are many examples of its use worldwide, and it is practical and simple to use.

Arguments against the use of this model


The main criticisms of the approach are that the model has been used mainly at level one – learners’ satisfaction with the training they have received. It is also argued that the immediate reactive response from learners at the end of their training does not clearly link to the other levels. Such an evaluation may be useful for gauging trainee satisfaction but may not help identify what has been learned.

Comment
According to the study by Tamkin et al (2002, p xiii), the overall conclusion “…is that the [Kirkpatrick] model remains very useful for framing where evaluation might be made.” The CeDARE ICT in ITT survey analysis used the levels of the Kirkpatrick model to determine what was already known from reviewing previous project evaluations of ICT, and to identify a suitable sampling framework for investigation at greater depth, ie at Kirkpatrick’s levels three and four.

Guskey’s five levels of professional development evaluation


Guskey modified Kirkpatrick’s model for use in evaluating staff development in education. He comments that the Kirkpatrick model had only limited use in education because it lacked ‘explanatory power’: it was seen as helpful in addressing a broad range of ‘what’ questions, but fell short when it came to explaining ‘why’. This five-level model (see table 1 below) was advocated in a study by Goodall et al (2005), who noted that Guskey’s model was adapted from Kirkpatrick’s (1959) model. Goodall et al went on to suggest their own ‘route map’, drawing on their experience of reviewing the work of Guskey.


Table 1.
Five levels of professional development evaluation

Level 1: Participants’ reactions
Questions addressed (examples): Did they like it?
Information gathered (examples): Usually a questionnaire at the end of the session
Measured or assessed (examples): Initial satisfaction with the experience

Level 2: Participants’ learning
Questions addressed: Did participants learn what was intended?
Information gathered: Assessments, demonstrations, reflections, portfolios
Measured or assessed: New knowledge and skills of participants

Level 3: Organisation support and change
Questions addressed: What was the impact on the organisation? Were sufficient resources made available? Were mentors or coaches used?
Information gathered: Questionnaires, minutes of meetings, interviews, focus groups
Measured or assessed: The organisation’s advocacy, support, accommodation, IT resources and facilitation

Level 4: Participants’ use of new knowledge and skills
Questions addressed: Did participants effectively apply the new skills?
Information gathered: Questionnaires, interviews, reflection, observation, portfolios
Measured or assessed: Degree and quality of implementation

Level 5: Student learning outcomes
Questions addressed: What was the impact on students? Did it affect student achievement? Did it influence student well-being? Is student attendance improving?
Information gathered: Student records/results, questionnaires, participant portfolios, focus groups
Measured or assessed: Student learning outcomes – performance and achievement; attitude and disposition; skills and behaviours

Adapted from: Guskey, TR (2000).


Arguments for the use of this model


It is designed for staff development in an educational context. The end product is a model that is very useful in guiding both the implementation and the evaluation of a programme. It is straightforward to use.

Arguments against the use of this model


As with Kirkpatrick’s model, this one is said to be simplistic. There is also no recognition of
the time-lag necessary between the first three levels and the last two. To evaluate
levels four and five requires the new knowledge or skills identified in levels one to
three to be applied in practice and to have an impact on students’ learning outcomes.
These learning outcomes will have to be recognised and measured over time in order
to evaluate whether the intervention has brought about new teaching approaches
that have been embedded and are successful.

Comment
In using this model Guskey suggests that you start with the questions at level five
as a basis for planning your evaluation. A recent study from Davis et al (2009, p146)
confirmed that “multi-level evaluation of professional development does indeed
apply to ICT-related teacher training. Therefore we recommend that all five of
Guskey’s levels be consistently adopted for the evaluation of ICT training…”.

Logic frame model evaluation


A logic model presents a picture of how your effort or initiative is supposed to work.
It explains why your strategy is a good solution to the problem at hand. Effective
logic models make an explicit, often visual, statement of the activities that will bring
about change and the results you expect to see for the community and its people.
A logic model helps maintain the momentum of the planning and evaluation process
and participant involvement by providing a common language and point of reference.
A detailed model indicates precisely how each activity will lead to desired changes.
In the UK, the logic frame model for evaluation has usually been used for planning and evaluating large-scale projects in developing countries; however, it is now seen as a relevant model whenever evaluation is being considered.
A logic model is a “plausible, sensible model of how a programme is supposed to
work” (Bickman, 1987, p5). It serves as a framework and a process for planning to
bridge the gap between where you are and where you want to be. It provides a
structure for clearly understanding the situation that drives the need for an initiative,
the desired end state, and how investments are linked to activities for targeted people
in order to achieve the desired results. A logic model is the first step in evaluation.
The logic model describes the sequence of events thought to bring about benefits
or change over time.


The elements of the logic frame model are resources, outputs, activities, participation, short-, medium- and longer-term outcomes, and the relevant external influences (Wholey, 1983, 1987). Sundra et al (2003, p6) describe the logic model as “…a visual link of programme inputs and activities to programme outputs and outcomes, and shows the basis (logic) for these expectations. The logic frame model is an interactive tool, providing a framework for programme planning, implementation and evaluation,” and it was one of the models reflected on by Glenaffric Ltd (2007) in constructing its evaluation model for the Joint Information Systems Committee (JISC). See the complete model in section five of this document.
At its simplest, the logic model may be illustrated by diagram 1.

Diagram 1.
A simple logic frame model

Input (programme investments) → Outputs (activities, participation) → Outcomes (short, medium and long term)

In practice the diagram is likely to end up being more complex, as each of the areas under consideration is set out in more detail. See diagram 2.


Diagram 2.
A more complex logic frame model (Program Action – Logic Model)

The full model sets out, under the headings Inputs, Outputs (activities and participation) and Outcomes – Impact (short, medium and long term), the situation and priorities that drive a programme; what is invested (eg staff, volunteers, time, money, research base, materials, equipment, technology, partners); what is done and who is reached; and the resulting learning, action and changed conditions (social, economic, civic, environmental). Assumptions and external factors frame the whole model, and evaluation runs across it: focus – collect data – analyse and interpret – report.

This diagram is taken from the UW-Extension logic model (2008), used with kind permission from UW-Extension.

Arguments for the use of this model


It integrates planning, performance measurement and evaluation in one model.
A logic frame model describes a programme and its theory of change. It is useful in
helping to focus an evaluation. Furthermore, suggest Taylor-Powell and Henert (2008),
the process can facilitate team building and stakeholder buy-in, as well as ensuring
that implicit program assumptions are made explicit.
Evaluators have found the logic frame model process useful in a wide range of small
and complex programmes and interventions in industrial, social and educational
contexts. A logic frame model presents a plausible and sensible model of how
the programme will work under certain conditions to solve identified problems
(Bickman, 1987). Thus the logic frame model is the basis for a convincing story of the
programme’s expected performance. A manager has to both explain the elements of
the programme and present the logic of how the programme works. Patton (1997) refers
to a programme description such as this as an “espoused theory of action,” that is,
stakeholder perceptions of how the programme will work.

Arguments against the use of this model


The logical approach can seem too simple as an evaluation framework, as it appears to assume that all projects are linear. It is perceived as rigid and can lead to the over-simplification of complex social processes.
The structure of the logic frame model suggests that everything will go according to
plan – programme activities, outcomes and goals are all laid out in advance, as are
indicators with which to monitor these. As such, there is no provision for a change
in project direction nor a space for learning to be fed into project implementation.
Although the logic frame model can be altered during the course of a project, many
commentators note that they are rarely revisited (Earle, 2003, p2).
The most common limitations are that a logic frame model represents intention rather than reality, and that, because it focuses on expected outcomes, people may overlook unintended outcomes (positive and negative).

Comment
Evaluators have played a prominent role in using and developing the logic frame
model. This may be why it is often called an “evaluation framework.” Development
and use of logic model concepts by evaluators continues to result in a broad array
of theoretical and practical applications, say Taylor-Powell and Henert (2008).

The self-review framework for ICT


This framework has been designed specifically for use by schools to assess the e-maturity of the school as an institution. The framework is divided into eight elements, which will support and challenge a school to consider how effectively it is using ICT. Staff from schools are able to sign up to use the framework on the Becta website: https://selfreview.becta.org.uk/about_this_framework
On registering to use the framework, you are offered clear guidelines for using it in your own context, together with case studies and video clips to support and challenge your school or organisation.


1. Leadership and management


• Develop and communicate a shared vision for ICT.
• Plan a sustainable ICT strategy.

2. Curriculum
• Plan and lead a broad and balanced ICT curriculum.
• Review and update the curriculum in the light of developments in technology
and practice.
• Ensure pupils’ ICT experiences are progressive, coherent, balanced and consistent.

3. Learning and teaching


• Plan the use of ICT to enhance learning and teaching.
• Meet pupils’ expectations for the use of ICT.
• Encourage teachers to work collaboratively in identifying and evaluating the
impact of ICT on learning and teaching.

4. Assessment
• Assess the capability of ICT to support pupils’ learning.
• Use assessment evidence and data in planning learning and teaching across the
whole curriculum.
• Assess the learning in specific subjects when ICT has been used.

5. Professional development
• Identify and address the ICT training needs of your school and individual staff.
• Provide quality support and training activities for all staff in the use of ICT, sharing effective practice.
• Review, monitor and evaluate professional development as an integral part of the
development of your school.

6. Extending opportunities for learning


• Understand the needs of your pupils and community in their extended use of ICT.
• Ensure provision is enhanced through informed planning, resulting in quality of
use of ICT within and beyond the school.
• Review, monitor and evaluate opportunities to extend learning within and beyond
your school.

7. Resources
• Ensure learning and teaching environments use ICT effectively and in line with
strategic needs.
• Purchase, deploy and review appropriate ICT resources that reflect your school
improvement strategy.
• Manage technical support effectively for the benefit of pupils and staff.

8. Impact on pupil outcomes


• Demonstrate how pupils can make good progress in ICT capability.
• Be aware of how the use of ICT can have a wider positive impact on pupils’ progress.
• Review pupil attitudes and behaviour and how the use of ICT can impact
positively on pupil achievement.

Arguments for the model


The Becta site offers a number of positive comments from users about the use of
the model.

Arguments against the model


There are no negative comments about the model on the Becta site.

Comment
The Next Generation Learning Charter is a four-level scheme to encourage schools’
engagement with, and progress through, the self-review framework. On registering
with the framework, a school is asked to sign the charter, saying that it will undertake a review of the use of ICT in the school during the next three years. When a school
has reached a benchmark level in three of the eight elements, it can receive a
recognition level certificate. The ICT mark accreditation is reached after an assessor’s
visit confirms that the school has reached the nationally agreed standard in all eight
elements of the framework. The criteria for judging the ICT excellence awards are
based on the highest levels in the framework, and form the top level of the charter.
https://selfreview.becta.org.uk/about_next_generation_learning_charter

The test-bed e-maturity model


This e-maturity model (eMM) has been identified and used successfully in other
project evaluations. The details of the e-maturity models developed by a team
from Manchester Metropolitan and Nottingham Trent Universities in their ICT
test-bed project can be viewed at:
www.evaluation.icttestbed.org.uk/methodology/maturity_model
The evaluation assessed the effectiveness of the implementation of ICT in
educational organisations in relation to five key themes. The evaluation comprises
a range of methodologies, including a survey, maturity model, action research,
qualitative investigation, and benchmarking performance data. The development of
the maturity models was funded by Becta/DfES and copyright of the models remains
with Jean Underwood and Gayle Dillon (authors). Permission to reproduce the
models, or any part of them, must be sought from the authors directly.

Section four: Evaluation tools

These are the elements of evaluation that provide the data to evaluate the indicators, processes and outcomes of ICT-based projects. Such evaluation tools sit within the broad structure of an evaluation model and provide the detailed data from which conclusions may be drawn.
In the development of evaluation methodology it is important to ensure that
“…we develop research designs that capture what is important rather than what is
measurable” (Coburn, 2003, p9). For this we have to consider a number of factors
and, in her research, Coburn has identified four aspects of ‘scale’ that she considers
are vital to the success of projects designed to bring about reform in practices. Scale
is usually considered as the increasing ‘take-up’ of a particular reform and, in her
research on teaching and learning reform in schools, she suggests that evaluators
should be redefining scale in four dimensions as current views are too limiting and
take-up does not indicate change. The four dimensions of scale are:
• depth – relates to the impact and recognition that the reform has on the individual, ie, has it changed their behaviour, and do they understand and use the new pedagogy of the reform?
• sustainability – is the capacity of the organisation increased to enable all staff
to maintain these changes?
• spread – describes the reform in terms of the understanding and acceptance
of its principles and norms, not just to schools but to local authorities and
collaborative groups, and
• shift in reform ownership – no longer an ‘external’ reform controlled by a reformer
but becomes an ‘internal’ reform with authority held by the school and teachers
within the school who have the capacity to sustain, spread and deepen the reform
principles themselves.
The identification and measurement of these dimensions requires a range of complex
tools, some of which are available and some of which have to be developed in order
to gather the data that will inform the evaluation of each of these dimensions.
Some of these issues were identified and measured in the CeDARE evaluation methodology. The following summary of evaluation methods is drawn from the CeDARE (2009a) ICT in ITT survey analysis. They formed some of the tools required to categorise data and to define and measure objectives in the survey.

The five phases of ICT adoption


The research of Hooper and Rieber (1995) has helped to support the recognition of indicators of the capacity of staff to spread and own changes in the use of ICT. They proposed a ‘model of technology in the classroom’, set out as what they defined as the five phases of adoption of ICT by staff. These are the phases:

1. Familiarisation
• A teacher’s initial experience with ICT. A teacher participates in an ICT training
programme but does not then go on to use the information.

2. Utilisation
• A teacher tries out the ICT in their classroom but does not expand on its use.
‘If the technology was taken away on Monday, hardly anyone would notice
on Tuesday’.

3. Integration
• This is the beginning of an understanding of ICT. The teacher decides to use
the technology for something specific in their lesson, and if the technology is
unavailable then the lesson is unable to proceed. If the teacher overcomes this
hurdle they are likely to move to the next phase.

4. Reorientation
• The teacher uses ICT to develop learner-centred approaches and change their
own approach to lessons.

5. Evolution
• The teacher uses ICT to continue to develop new approaches to teaching and
learning with such methods as enquiry-based learning, in which the whole
learning environment is changed.
• At the reorientation and evolution stages the capacity for reform is
clearly recognised.

Diagram 3.
A model of adoption of ICT in the classroom

5 Evolution
4 Reorientation
3 Integration
2 Utilisation
1 Familiarisation

Adapted from Hooper and Rieber (1995)

Using questionnaires built around this model would help the evaluator to recognise
the development and capacity for reform of a member of staff.
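As an illustration only (the ICT in ITT project’s actual questionnaire and scoring are not reproduced here), the Python sketch below shows one hypothetical way a set of item scores could be totalled and mapped onto the five phases; the equal-width score bands are invented for the example and any real instrument would need its own calibration.

from typing import List

# The five phases, in order of increasing adoption.
PHASES = ["familiarisation", "utilisation", "integration", "reorientation", "evolution"]


def adoption_phase(responses: List[int], max_per_item: int = 5) -> str:
    """Map Likert-style item scores onto one of the five phases (hypothetical banding)."""
    total = sum(responses)
    maximum = max_per_item * len(responses)
    if not 0 <= total <= maximum:
        raise ValueError("responses outside the expected range")
    band = int(total / maximum * len(PHASES))          # equal-width bands, 0..5
    return PHASES[min(band, len(PHASES) - 1)]


# Example: five items scored 4, 3, 4, 5 and 3 out of 5 suggest 'reorientation'.
print(adoption_phase([4, 3, 4, 5, 3]))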

E-maturity model
The use of a modified e-maturity model has helped to identify and measure the impact of ICT development, and to identify the improvements and capacity of organisations. Examples of the eMM evaluation tool may be found on the Becta website and on the ICT test-bed site: http://feandskills.becta.org.uk/display.cfm?resID=38834&page=1886&catID=1868
The link above is to the updated version of a tool capable of assessing and improving the use of technology across the further education (FE) and skills sector. This work

has resulted in the development of the Generator, a technology improvement leadership tool for further education and skills: http://feandskills.becta.org.uk/display.cfm?page=1897
A common approach for all providers – including colleges, work-based learning
organisations, and adult and community education centres – is now out
for consultation.
The initial model is built around four levels of maturity:
• beginning
• developing
• performing, and
• outstanding.

There is also a self-development package on the use and development of the eMM from a Victoria University of Wellington website: www.utdc.vuw.ac.nz/research/emm/
The underlying idea that guides the development of the eMM is that the ability of an institution to be effective in any particular area of work depends on its capability to engage in high-quality processes that are reproducible and able to be extended and sustained as demand grows.
This site provides a step-by-step guide to develop your evaluation questions
on capability.
A number of pilots have shown that an e-maturity model has the potential to
identify the development and capacity of reform in an organisation. Chapman (2006)
carried out a pre-event questionnaire using the levels in the e-maturity FE and skills
developmental model, followed up with a post-event questionnaire some 18 months
later. The effects of the training and its influence on change in pedagogy could be
clearly recognised from this evaluation.

Technology – a vehicle for enquiry-based learning


Fisher et al (2006) have endeavoured to determine ‘how teachers might learn with digital technologies’, using the work of Shulman and Shulman, who propose that ICT affords learners the opportunity to engage with activities. Trainee teachers and their learners may discover that technology provides a suitable vehicle for enquiry-based learning, in which teachers change their learning practice and collaborate in the learning process. In an evaluation of the use of technology, this might be recognised by noting whether teachers are ‘ready, willing and able’ to teach as a result of these affordances for learning, using what Loveless (2006) calls ‘clusters’ of purposeful activity. These are separated into vision for education; motivation to learn and develop practice; professional knowledge, understanding and practice; and reflection and learning in community, as a basis for questions to individuals or focus groups. There is an example of their use in a questionnaire in the CeDARE (2009a) ICT in ITT survey, see questions 13 and 14.

Section five: Information on evaluation

This section outlines sources of further information on tools and ideas for evaluating ICT from the UK and elsewhere.

Some ICT-specific models of evaluation practice, including data collection methods and approaches to assessment, may be found in handbooks from a number of sources. The methodologies are too detailed and comprehensive to review in this document, so this section provides a list of websites and titles that may be accessed for further information. As with other areas of ICT, the models are subject to change and development. Major sources of advice and information on evaluation methodology for ICT will be found on the Becta (www.becta.org.uk) and JISC (jisc.ac.uk) websites.

Examples of resources available to support and guide evaluation of ICT
• Educator’s guide to evaluating the use of technology in schools and classrooms
Link: www.gao.gov/policy/10_1_4.htm  
Sponsor: American Institutes for Research for US Dept of Education
Scope: Evaluating technology use in elementary and secondary schools
Audience: Anyone conducting technology evaluation in schools
Format: Available both as web pages and Adobe pdf document.

• The learning technology dissemination initiative (LTDI)


Link: www.icbl.hw.ac.uk/ltdi/evalstudies/es_all.pdf
Scope: A range of case-studies and ideas of evaluation in one downloadable text
Overview: The LTDI has put together a collection of papers – LTDI: evaluation
studies – on evaluation that offer a number of case-study examples. The paper from
Professor Barry Jackson, Middlesex University, ‘Evaluation of learning technology
implementation’, is particularly relevant.

• A guide to logic model development


Link: www.ojp.usdoj.gov/BJA/evaluation/guide/documents/cdc-logic-model-development.pdf
Scope: Sundra DL, Scherer J, Anderson LA (2003) present ‘A guide to logic model
development’ for CDC’s Prevention Research Center.
Overview: This is a website from the USA which has a very helpful guide to the
production of a logic model framework and lots of case-study examples.

• A practical guide to evaluation


Link: www.rcuk.ac.uk/aboutrcuk/publications/corporate/evaluationguide.htm
Scope: This is, as it states, a practical guide for anyone drawing up an evaluation of a technology project.
Overview: This guide is designed for those who lead projects intended to engage
general audiences in science, social science, engineering and technology and the
social, ethical and political issues that new research in these areas raises. It is
intended to help project managers evaluate individual projects, regardless of their
experience of evaluation.


• A practical guide to evaluation methods for lecturers


Link: www.icbl.hw.ac.uk/ltdi/ltdi-pub.htm#Cookbook
Scope: This offers step-by-step guides to a range of approaches to evaluation.
Overview: It includes guides to the time, resources and process involved in different
evaluation methods, with hints relating to the stages of the process and links to
related pages.
• Information pages aim to provide some basic practical suggestions and advice,
applicable to a range of different evaluation methods.
• Preparation pages provide a framework for the planning and preparation process involved prior to carrying out an evaluation.
These aim to encourage you to think in more detail about who the evaluation is
for, what you are going to be evaluating, and how best you might carry out such
an evaluation study.
• Testing, refining and presentation pages: encourage you to think of your
evaluation study as an ongoing process used to make improvements in teaching
and learning. Guidance is provided to encourage you to reflect on ways in which
you can act on your results and/or write up your findings in an evaluation report.

• The JISC handbook on evaluation, commissioned by JISC from Glenaffric Ltd (2007). Six steps to effective evaluation: a handbook for programme and project managers
Link: www.jisc.ac.uk/media/documents/programmes/digitisation/SixStepsHandbook.pdf
Scope: This offers a logic model framework approach for evaluating technology
projects (see diagram 4). Glenaffric Ltd (2007, p1) states that “this handbook may
be useful for anyone engaged in development activities in the innovative use of ICT
to support education and research.”


Diagram 4.
Six steps to effective evaluation

The handbook presents the six steps as a cycle, each producing an output that feeds the next: 1. Identify stakeholders (stakeholder analysis); 2. Describe project and understand programme (logic model); 3. Design evaluation (evaluation plan); 4. Gather evidence (evaluation data); 5. Analyse results (coding frame); 6. Report findings (evaluation reports).

This diagram is taken from Glenaffric Ltd (2007), found at www.jisc.ac.uk/media/documents/programmes/digitisation/SixStepsHandbook.pdf

• The evalkit
Link: www.jiscinfonet.ac.uk/Resources/evalkit/index_html
Scope: This is a directory of ICT evaluation tools and toolkits for use by the education sector, covering the broad areas of curriculum development, media selection, resource selection, quality assurance, and evaluation of ICT development projects. Please note that it is not the aim of JISC to review or rate the tools and toolkits held within the database, but to raise the education community’s awareness of the ICT evaluation toolkits and tools that are currently available. The toolkit database, with links and downloads to wide-ranging sources of evaluation tools, is available at www.jiscinfonet.ac.uk/Resources/evalkit/toolkit-database

• The Kellogg Foundation logic model development guide and evaluation handbook
Links: www.wkkf.org/Pubs/Tools/Evaluation/Pub3669.pdf
www.wkkf.org/Pubs/Tools/Evaluation/Pub770.pdf
Scope: They are useful guides to planning and evaluating projects with
step-by-step advice.


Overview: The program logic model is defined as a picture of how your


organisation does its work – the theory and assumptions underlying the programme.
A program logic model links outcomes (both short- and long-term) with programme
activities/processes and the theoretical assumptions/principles of the program.
The W.K. Kellogg Foundation logic model development guide, a companion
publication to the evaluation handbook, focuses on the development and use
of the program logic model. We have found the logic model and its processes
facilitate thinking, planning, and communications about program objectives and
actual accomplishments.
The Kellogg Foundation also has a useful ‘evaluation toolkit’ found at www.wkkf.org/Default.aspx?tabid=90&CID=281&ItemID=2810002&NID=2820002&LanguageID=0

• The National Science Foundation (2002), The 2002 user-friendly handbook for project evaluation
Link: www.nsf.gov/pubs/2002/nsf02057/nsf02057_1.pdf
Scope: A clear guide to setting out an evaluation framework for a project.
Although based on science projects there are approaches that are applicable to
the use of technology.
Overview: The handbook discusses quantitative and qualitative evaluation methods,
suggesting ways in which they can be used as complements in an evaluation strategy.
As a result of reading this handbook, it is expected that program managers will
increase their understanding of the evaluation process and NSF’s requirements for
evaluation, as well as gain knowledge that will help them to communicate with
evaluators and manage the actual evaluation.

• Online evaluation resource library (OERL)


Link: http://oerl.sri.com/
Scope: A collection of a range of resources for people seeking information on
evaluation.
Overview: OERL’s mission is to support the continuous improvement of project
evaluations. Sound evaluations are critical to determining project effectiveness.
To this end, OERL provides:
• a large collection of sound plans, reports and instruments from past and current project evaluations in several content areas, and
• guidelines for how to improve evaluation practice using the website resources.


OERL’s resources include instruments, plans and reports from evaluations that have
proved to be sound and representative of current evaluation practices. OERL also
includes professional development modules that can be used to better understand and
utilise the materials made available.

• The route map


Link: http://publications.dcsf.gov.uk/default.aspx?PageFunction=productdetails&PageMode=publications&ProductId=RR659&
Scope: These materials are intended for use by CPD leaders/coordinators,
participants and providers, departments, teams, schools and LEAs.
Overview: They are an edited version of materials produced as part of a
two-year, DfES-funded research project undertaken by the Universities of
Warwick and Nottingham.
Appendix 8 of the report – Evaluating the impact of continuing professional
development in schools – sets out a model for evaluating the impact of CPD
in schools. It offers a series of steps to follow and questions to ask.

Section six: An example of the use of an evaluation framework

This section gives a worked example of an evaluation model, the logic frame model, applied to an ICT in ITT project.

The example is based on a small-scale technology development in an employment-based initial teacher training (EBITT) programme making up the Dorset Teacher Education Partnership (DTEP).

Title of the evaluation – Evaluation of the use of a virtual learning environment (VLE): improving reflective practice and self-assessment of progress against the QTS standards and supporting practice.

The information for the evaluation is drawn from a short video of individuals talking about the use of the VLE in their practice (CeDARE, 2009b).
See the video case study at www.tda.gov.uk/techforteaching

Evaluation factors
Any evaluation should consider the five principles of evaluation:
• identify the focus and purpose of evaluation
• build on what is already known
• gather evidence
• analyse and interpret
• communicate and feed back.
(See section 2 for more detail)
The use of the logic frame model ensures that these principles are adhered to as it
encourages participants to clearly think about:
a. input – what is invested in the project
b. outputs – what is done as part of the project
c. outcomes – impact: what results are achieved in the project.
The logic framework model offers both a vehicle for planning and a framework
for evaluation.
“A logic model helps us match evaluation to the actual program so that we measure what is appropriate and relevant” (Taylor-Powell and Henert, 2008, p1).
In its simplest form the logic frame is made up of three elements which logically link
activities and effects.

Diagram 1.
A simple logic framework model

Input (programme investments) → Outputs (activities, participation) → Outcomes – impact (short, medium and long term)

To use this model the first step is to complete a flow model from need to final
impact. When this is completed it will:
• provide a plan for future evaluation
• identify the outcomes that should be measured, and
• provide a guide as to the evaluation tools to be used.
To draw up a simple logic frame model of the DTEP project you will need to
review the video and refer to the model below and the guidelines available at
www.uwex.edu/ces/lmcourse/
Ideally, the logic model should be drawn up at the beginning of a project and should
involve all stakeholders. This will identify the focus and purpose of the evaluation
from its outset.
Try to complete the model step by step using a blank flow chart (a more detailed
teach-yourself guide may be found at
www.uwex.edu/ces/pdande/evaluation/evallogicmodel.html)
You need to:
• identify why the project was set up – situation and priorities
• note what resources are anticipated for the project – input
• identify the activities to be carried out by the project and who will participate
in them – outputs, and
• state the proposed results from the project at short-, medium- and long-term
time-scales – outcomes: impacts.
We have detailed below a completed logic frame model (LFM) for the DTEP project as an example. We have also provided the evaluation outcomes from the project to illustrate how a logic model would have helped with both planning and evaluation.


Diagram 2.
The logic framework model for the DTEP project

Situation: a ‘wheelbarrow full of paper’ (portfolio) at the end of the year to be used in assessing the QTS standards.
Priorities (aims): to improve 1. reflective practice and 2. self-assessment of progress against the QTS standards.

Inputs (what we invest): TDA funding, equipment, consultants, staff time.

Outputs
Activities (what we do): install software and provide laptops; involve consultants; train trainees and some staff; encourage use of the VLE; facilitate communities of practice.
Participation (who we reach): all trainees receive training; some mentors and trainers receive training; managers and partners made aware of the use of the VLE; Ofsted and consultants to be made aware of the use of the VLE.

Outcomes – impact
Short term (learning): trainees actively use the VLE; assessable work is stored on the VLE; trainees use the VLE for reflection, communication and communities of practice; further issues are identified to maximise the use of the VLE.
Medium term (action): trainees wish to personalise their VLE access; more staff have more training on the VLE; assessment of work takes place on the VLE; other staff join the communities of practice; reflective practice is developed; trainees start to share materials and develop communities of practice.
Long term (conditions): all staff are actively involved in using the VLE; the organisation supports all staff in their use of the VLE; communities of practice are at work throughout the organisation, all using the VLE as the basis for communication.

Assumptions: all trainees will use the VLE effectively after a short training session; all mentors will actively use the VLE in their relationship with their trainee; Ofsted will be able to use the VLE ‘store of evidence’ for assessment.

External factors: current limits on the use of the VLE as a result of the supplier contract.

Note: the work of CeDARE (after Coburn, 2003) in evaluating ICT in ITT has identified the importance of considering the impact of technology in terms of:
• depth – how do you ensure it impacts on classrooms and in different contexts, and how does it impact on beliefs and attitudes? For this project: trainees change their pedagogy, using the VLE not only for storage of their evidence but also to develop their teaching resources, to reflect on their evidence and to carry out self-assessment against the QTS standards.
• scope – how do you actively involve a critical mass of people in trying out the technology and changing their practice? For this project: trainees share their evidence with other trainees and mentors for reflective comment, communities of practice develop using the VLE, and more staff, including other partner schools, become actively involved in using the VLE with trainees.
• transfer of ownership – how do you ensure people own the technology and the intervention set up by a project? For this project: trainees personalise the VLE, the VLE develops from a storage space into a pedagogic vehicle for all staff, and all staff change their working practices to make full use of the VLE.
The traditional ‘outcomes – impacts’ in the logic framework evaluation should therefore be considered under the headings of depth, scope and transfer of ownership rather than learning, action and conditions.

This model has been derived from the UW-Extension logic model (2008), used with kind permission from UW-Extension.
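For readers who find it helpful to keep a logic frame model alongside other project documentation in a machine-readable form, the Python sketch below re-expresses the headline elements of the DTEP model as a plain data structure. It is illustrative only: the field names are hypothetical and the entries are paraphrased from the diagram above, not taken from the DTEP project files.

# Illustrative only: headline elements of the DTEP logic frame model as data.
dtep_logic_model = {
    "situation": "QTS evidence held as a 'wheelbarrow full of paper' portfolio",
    "priorities": ["improve reflective practice",
                   "improve self-assessment against the QTS standards"],
    "inputs": ["TDA funding", "equipment", "consultants", "staff time"],
    "outputs": {
        "activities": ["install software and provide laptops", "involve consultants",
                       "train trainees and some staff", "encourage use of the VLE",
                       "facilitate communities of practice"],
        "participation": ["all trainees", "some mentors and trainers",
                          "managers and partners", "Ofsted and consultants"],
    },
    "outcomes": {
        "short_term": ["trainees actively use the VLE",
                       "assessable work stored on the VLE"],
        "medium_term": ["trainees personalise their VLE use",
                        "assessment of work on the VLE",
                        "reflective practice developed"],
        "long_term": ["all staff actively involved in using the VLE",
                      "communities of practice throughout the organisation"],
    },
    "assumptions": ["trainees use the VLE effectively after short training"],
    "external_factors": ["limits on VLE use arising from the supplier contract"],
}

# Listing the outcomes gives a ready-made checklist of what the evaluation
# should try to measure at each time-scale.
for horizon, results in dtep_logic_model["outcomes"].items():
    print(horizon, "->", "; ".join(results))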

Findings
The findings of the evaluation from the video were as follows.
Scope of the implementation
• Planning of the project was limited in its scope. There were a number of
assumptions made based on little evidence.
• The training provided for the trainees and other staff was very limited and
unsupported post-training.
• The VLE has meant that the ‘Wheelbarrow of paper’ is no longer needed.
• Trainees found the VLE very useful not only in storing their evidence but also in
developing other areas of their work.
• It is still uncertain whether Ofsted will accept assessment of trainees via a VLE.

Depth of engagement
• For some trainees the use of the VLE changed the way they worked and developed
their reflective practice.
• New communities of practice were established by trainees.
• Few mentors changed their practice.
• During the project a number of other issues were identified and now need to
be developed.
• The future potential of the VLE has been recognised by trainees and managers
involved in the project.

Transfer of ownership
• Some trainees were developing their own communities of practice with other
users of the VLE.
• Some trainees were changing their practice as a result of the availability of resources within the VLE.
• The partnership has recognised the potential of a VLE to develop, change and
improve practice for more than just trainees.

Recommendations
The project has met its aims, but it has also highlighted the limited outlook of those original aims. The project leaders now need to:
• involve more staff in the use of the VLE
• develop future training events to meet the needs of other groups of staff in using
the VLE
• involve Ofsted in the discussions of their future developments, and
• monitor the impact of the use of the VLE on teaching and learning for trainees
and other staff.

Comment
For those people who have watched the video and followed the steps in the model
it is anticipated that similar recommendations would be suggested. The Logic
Framework Model should enable a straightforward evaluation of any project.
This example has used a limited range of activities but the model has the potential
to be used in either a simple or multifaceted project.

References

Bennett, J, 2003. Evaluation Methods in Research, London: Continuum


Bickman, L, 1987. The Functions of Program Theory. In Bickman, L (Ed) Using Program Theory in Evaluation: New Directions for Program Evaluation, 33, San Francisco, CA: Jossey-Bass Publishers
CeDARE, 2009a, ICT in ITT Survey, Final Report, Wolverhampton: University
of Wolverhampton
CeDARE, 2009b, Teacher Trainees: Virtual Learning Environments (VLEs) in Learning
and Teaching, DTEP Case-Study, Wolverhampton: University of Wolverhampton
Chapman, R W C, 2006. From Coordinated to Innovative: Investigating Change
Management in the Use of Electronic Learning Technologies within a Large Further
Education College. Unpublished MA dissertation, University Of Wolverhampton
Chelimsky, E, 1996. Thoughts for a New Evaluation Society. Keynote speech at
UK Evaluation Society Conference, London 19-20 September
Davis, N, Preston, C, and Sahin, I, 2009. ICT Teacher Training: Evidence for Multi-Level Evaluation from a National Initiative, British Journal of Educational Technology, vol 40, no 1, pp135–148
Earle, L, 2003. Lost in the Matrix: The Logframe and The Local Picture, Paper for
INTRAC’s 5th Evaluation Conference: Measurement, Management and Accountability?,
31 March – 4 April, The Netherlands
Easterby-Smith, M, 1986. Evaluation of Management Education, Training and
Development, Hants, UK: Gower
Fisher, T, Higgins, C, and Loveless, A, 2006. Teachers Learning with Digital Technologies: A Review of Research and Projects, Report 14, Futurelab Series
Glenaffric Ltd, 2007. Six Steps to Effective Evaluation: A Handbook for Programme and Project Managers, http://www.jisc.ac.uk/media/documents/programmes/digitisation/SixStepsHandbook.pdf accessed 3 May 2009
Goodall, J, Day, C, Harris, A, and Lindsay, G, 2005. Evaluating the Impact of Continuing
Professional Development, DFES RB659, London: DFES
Guskey, T R, 1998. Evaluation Must Become an Integral Part of Staff Development,
Journal of Staff Development, vol 19, no 4
Guskey, T R, 2000. Evaluating Professional Development, Thousand Oaks, CA: Corwin Press
Guskey, T R, 2001. JSD Forum: The Backward Approach, Journal of Staff Development,
22(3), 60
Guskey, T R, 2002. The Age of Our Accountability, Course Outline, University
of Kentucky
Hooper, S, and Rieber, L P, 1995. Teaching with Technology. In Ornstein, A C (Ed)
Teaching Theory into Practice, pp154–170, Needham Heights, MA: Allyn and Bacon
Kirkpatrick, D L, 1959. Techniques for Evaluating Programmes. In Journal of the American Society of Training Directors, vol 13, no 11, pp3–9
Nevo, D, 2006. Evaluation in Education. In Shaw, I F, Greene, J C, and Mark, M M (Eds) The Sage Handbook of Evaluation, pp451–460, London: Sage Publications Ltd
Patton, M, 1997. Utilisation-Focused Evaluation, 3rd Edition, Thousand Oaks,
CA: Sage Publications


RCUK, 2008. An Introduction to Evaluation, http://www.rcuk.ac.uk/aboutrcuk/publications/corporate/evaluationguide.htm accessed 11 March 2009
Scriven, M, 1967. The Methodology of Evaluation. In R E Stake (Ed) AERA Monograph
Series on Curriculum Evaluation No. 1. Chicago: Rand McNally
Stern, E, 1990. The Evaluation of Policy and the Politics of Evaluation, in The Tavistock
Institute of Human Relations Annual Review
Shulman, L S, and Shulman, J H, 2004. How and What Teachers Learn: A Shifting Perspective, Journal of Curriculum Studies, vol 36, no 2, pp257–271
Sundra, D L, Scherer, J, and Anderson, L A, 2003. A Guide to Logic Model Development for CDC’s Prevention Research Centers, Centers for Disease Control and Prevention, www.ojp.usdoj.gov/BJA/evaluation/guide/documents/cdc-logic-model-development.pdf accessed 12 March 2009
Tamkin, P, Yarnall, J, and Kerrin, M, 2002. Kirkpatrick and Beyond: A Review of Models of Training Evaluation, Report 392, Brighton: Institute for Employment Studies
Taylor-Powell, E, and Henert, E, 2008. Developing a Logic Model: Teaching and Training
Guide, Madison, WI: University of Wisconsin-Extension, Cooperative Extension,
Program Development and Evaluation www.uwex.edu/ces/pdande accessed
12 March 2009
UK Evaluation Society www.evaluation.org.uk/resources/glossary accessed
5 March 2009
University of Wisconsin–Extension, 2008. Logic Model
http://www.uwex.edu/ces/pdande/evaluation/evallogicmodel.html accessed
12 March 2009
Wholey, J, 1983. Evaluation and Effective Public Management. Boston: Little, Brown
Wholey, J, 1987. Evaluability Assessment: Developing Program Theory. In Bickman, L (Ed) Using Program Theory in Evaluation: New Directions for Program Evaluation, 33, pp5–18, San Francisco, CA: Jossey-Bass Publishers

The TDA is committed to providing accessible information.
To request this item in another language or format, contact
TDA corporate communications at the address below or
e-mail: corporatecomms@tda.gov.uk

Please tell us what you require and we will consider with you how
to meet your needs.

Training and Development Agency for Schools


City Tower, Piccadilly Plaza, Manchester M1 4TD
TDA switchboard: t 0870 4960 123
Publications: t 0845 6060 323 e publications@tda.gov.uk

www.tda.gov.uk
© TDA 2009
TDA0731
