Purpose
This paper arose out of the Training and Development Agency for Schools (TDA)
information and communication technology (ICT) in initial teacher training (ITT)
impact evaluation project 2008/09. It aims to provide an accessible resource
with practical ideas and models for evaluating the impact of a technology
intervention from inception to completion.
One of the key findings of the evaluation was that ‘one size definitely doesn’t
fit all’ when selecting the framework or model of evaluation.
Although intended for teacher educators, this paper will be relevant to
anyone in education interested in assessing the impact of technology in
teaching and learning. The ideas should be useful to those implementing a
range of innovative projects who want a customisable evaluation that
covers the breadth of creative work occurring in ITT with ICT. The evaluation
methods used in the ICT in ITT project can be seen in the main report at
www.tda.gov.uk/techforteaching, where we also put forward an embryonic
model for determining project success factors.
Acknowledgments
This project was conceived and funded by Becta in support of the TDA evaluation. Becta has been working with the TDA to support the evaluation advisory group, and also funded the evaluation team at the University of Wolverhampton to produce additional materials on evaluating the impact of a technology intervention. Thanks are due to Malcolm Hunt at Becta, who guided the process, and to Dr Michael Stokes at the University of Wolverhampton, who conceptualised the document and drew the strands together. The complex logic model referred to in this document was derived from the University of Wisconsin–Extension (UW-Extension) logic model, with kind permission.
Contents
Section one: Evaluation – what is it and why do it?
Section two: Guiding principles for evaluation
Section three: Major frameworks and evaluation models
Section four: Evaluation tools
Section five: Information on evaluation
Section six: An example of the use of an evaluation framework
References
Section one: Evaluation – what is it and why do it?
People use different terminology when they are talking about evaluation, and people have different perspectives on the nature and purpose of evaluation. According to Bennett (2003, p15), while there has been ongoing debate for several decades over the nature and purpose of evaluation, “…evaluation forms an important area of research in education.” Easterby-Smith (1986, p13) adds, more succinctly, his own three reasons for evaluating:
• proving
• improving, and
• learning.
This document aims to make the purpose of evaluation and the approaches to
evaluation clearer by concentrating on the evaluation of ICT in education. Definitions
of evaluation abound: Bennett (2003) offers 13 without settling on an overall
definition. One oriented towards educational evaluation comes from Nevo (1995, p11),
who suggests that it is an “act of collecting systematic information regarding the
nature and quality of educational objects”, which suggests that it is a combination
of description and judgement. The UK Evaluation Society (1994) also highlights the
collection of information in saying evaluation is “…an in-depth study which takes
place at a discrete point in time, and in which recognised research procedures are
used in a systematic and analytically defensible fashion to form a judgement on
the value of an intervention.”
How such collection of information or research is organised may direct us to Scriven’s
(1967) idea of having two forms of evaluation: formative evaluation, which would
support the development of your project, and summative evaluation, for assessing
the final impact of a project. Goodall et al (2005, p37) supported this: “Effective
evaluation of continuing professional development (CPD) will usually need to serve
two main purposes: summative evaluation (does the programme/activity improve
outcomes?) and formative assessment (how can the programme/activity be
improved?)”. They go on to be critical of CPD evaluation practice and offer their own
model of evaluation, the ‘route map’ (see the examples of evaluation practice in
section five of this report).
In considering ICT in education, the formative function will include the evaluation
of instructional materials and pedagogic processes. This may relate to either the
development or use of materials and delivery of learning. A definition that appears
to be relevant to ICT issues in education is from Stern (1988), who suggests:
“Evaluation is any activity that, throughout the planning and delivery of innovative
programmes, enables those involved to learn and make judgements about the
starting assumptions, implementation processes, and outcomes of the innovation
concerned.” Guskey (1998) offers his definition of evaluation (adapted from the
Joint Committee on Standards for Educational Evaluation, 1994, p1): “Evaluation
is the systematic investigation of merit or worth”, proposing that it is a structured,
measured and measurable approach. Chelimsky (1997, p101) sums up why
we evaluate in stating: “We look to evaluation as an aid to strengthen our practice,
organisation and programmes.” To this end, commentators agree that the reason or
reasons for the evaluation should be stated before any evaluation takes place.
This is reinforced by Guskey (2002), who reminds us that good evaluation is built in
from the outset of the professional development programme or activity, not added
on at the end. The Research Councils UK (2005) emphasise this too in confirming
that evaluation is a process that takes place before, during and after a project.
It includes looking at the quality of the content, the delivery process, and the impact
of the project or programme on the audience(s). Some evaluation frameworks
incorporate a model planning process for a project as well as an evaluation
framework for the project, eg, logic frame models.
Guskey (2002, p1) helps to explain ‘Why evaluate?’: “The processes and procedures
involved in evaluation present an endless list of challenges that range from very
simple to extremely complex. Well-designed evaluations are valuable learning tools
that serve multiple audiences. They inform us about the effectiveness of current
policies or practices, and guide the content, form, and structure of future endeavours.
Poorly designed evaluations, on the other hand, waste time, energy and other
valuable resources…good evaluations do not have to be costly, nor do they require
sophisticated technical skills. What they require is the ability to ask good questions
and a basic understanding about how to find valid answers. Good evaluations provide
information that is sound, useful, and sufficiently reliable to use in making thoughtful
and responsible decisions about projects, programs, and policies.”
Section two: Guiding principles for evaluation
In carrying out evaluations, participants should decide why and how they will carry them out. Drawing on the experience of CeDARE, Hadfield (2008) proposes five sets of principles that participants should consider for any evaluation:
3. Gathering evidence
Evaluations should:
• try as far as possible to reuse and/or increase use of relevant evidence that has
already been collected
• ensure, as far as possible, that the process of collecting any new evidence is a
learning experience for those involved
• have clear strategies for triangulation, by collecting different sorts of evidence
from different groups in more than one context, and
• follow recognised ethical guidelines for both collection and storage.
Section three: Major frameworks and evaluation models
This section (bearing in mind the principles above) identifies some major frameworks for evaluation and provides links to approaches and models of practice in evaluation for use in a variety of situations. It will draw on methods of practice from the CeDARE (2009) ICT in ITT survey analysis report and on selected examples from the literature and the internet.
The evaluation research that provided the stimulus for this paper used evaluation
models developed by Kirkpatrick for evaluating training. This also included approaches
for impact evaluation based on the work of Hooper and Reiber (1995) and Fisher
(2006) for applying this evaluation to ICT development and impact on trainees and
trainers (details of the Kirkpatrick evaluation model are given below).
Evaluation models
Kirkpatrick’s evaluation of training model
Kirkpatrick developed his four-step model for the evaluation of training and
development in business organisations and, according to this model, evaluation
should begin at level one and then, as time and budget allows, should move
sequentially through levels two, three and four. Each successive level represents a
more precise measure of the effectiveness of the training programme, but at the
same time requires a more rigorous and time-consuming analysis. The model consists
of four stages, originally described as steps but since 1996 considered as levels,
and is applicable for all forms of programme evaluation, including ICT in ITT.
• Level one: reactions – what the participants in the programme felt about the
project/programme, normally measured by the use of reaction questionnaires
based upon their perceptions. Did they like it? Was the material relevant to
their work? A tool such as a ‘happy sheet’ is often utilised at this level. Level one
evaluation is viewed by Kirkpatrick as the minimum requirement, providing some
information for the improvement of the programme.
• Level two: learning – this moves the evaluation on to assessing the changes in
knowledge, skills or attitude with respect to the programme/project objectives.
Measurement at this level is more difficult, and formal or informal testing or
surveying is often used, preferably pre- and post-programme.
• Level three: behaviour – evaluating at this level attempts to answer the question:
are the newly acquired skills, knowledge or attitude being used in the everyday
environment of the learner? Measuring at this level is difficult as it is often not
easy to predict when the change of behaviour will occur, and therefore important
decisions may have to be made as to when to evaluate, how often to evaluate
and how to go about the evaluation. In the ICT in ITT project, questionnaires to
determine changes in practice were used, with questions based on a modified
e-maturity scale from the work of Hooper and Reiber (1995).
• Level four: results – this level seeks to evaluate the success of the programme in
terms of results for the organisation, usually stated as improvements in quality.
Determining the improvements in quality of practice is probably the most difficult
aspect of this evaluation framework.
(Summary adapted from Tamkin, Yarnall and Kerrin, 2002)
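As a minimal illustration of how the four levels might be held as a planning structure, the Python sketch below pairs each level with its key question and some example instruments, and reflects Kirkpatrick's advice to move up the levels only as time and budget allow. The data structure and instrument lists are our own illustrative assumptions, not part of the model itself.

from dataclasses import dataclass

@dataclass(frozen=True)
class KirkpatrickLevel:
    number: int
    name: str
    key_question: str
    example_instruments: tuple  # illustrative examples only

KIRKPATRICK_LEVELS = (
    KirkpatrickLevel(1, "Reactions",
                     "Did participants like it? Was it relevant to their work?",
                     ("reaction questionnaire", "'happy sheet'")),
    KirkpatrickLevel(2, "Learning",
                     "What changed in knowledge, skills or attitude?",
                     ("pre- and post-programme tests", "surveys")),
    KirkpatrickLevel(3, "Behaviour",
                     "Are the new skills used in the everyday environment?",
                     ("follow-up questionnaires", "observation")),
    KirkpatrickLevel(4, "Results",
                     "What results, usually quality improvements, did the organisation gain?",
                     ("quality indicators", "organisational records")),
)

def plan_evaluation(highest_affordable_level):
    """Kirkpatrick: begin at level one and move up sequentially
    as time and budget allow."""
    return [lvl for lvl in KIRKPATRICK_LEVELS
            if lvl.number <= highest_affordable_level]

# Example: budget only stretches to levels one and two.
for level in plan_evaluation(2):
    print(level.number, level.name, "-", level.key_question)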
Comment
According to the study by Tamkin et al (2002, p.xiii), the overall conclusion “…is that the [Kirkpatrick] model remains very useful for framing where evaluation might be made.” The CeDARE ICT in ITT survey analysis used the levels of the Kirkpatrick model to determine what was already known from previous project evaluations of ICT, and to identify a suitable sampling framework for investigating at greater depth, ie at Kirkpatrick’s levels three and four.
Table 1.
Five levels of professional development evaluation
(For each level: what questions are addressed; how information will be gathered; what is measured or assessed – examples in each case.)
1. Participants’ reactions – Did they like it? | Usually a questionnaire at the end of the session | Initial satisfaction with the experience
2. Participants’ learning – Did participants learn what was intended? | Assessments, demonstrations, reflections, portfolios | New knowledge and skill of participants
3. Organisation support and change – What was the impact on the organisation? Were sufficient resources made available? Were mentors or coaches used? | Questionnaires, minutes of meetings, interviews, focus groups | The organisation’s advocacy, support, accommodation, IT resources and facilitation
4. Participants’ use of new knowledge and skills – Did participants effectively apply the new skills? | Questionnaires, interviews, reflection, observation, portfolios | Degree and quality of implementation
5. Student learning outcomes – What was the impact on students? Did it affect student achievement? Did it influence student well-being? Is student attendance improving? | Student records/results, questionnaires, participant portfolios, focus groups | Student learning outcomes: performance and achievement; attitude and disposition; skills and behaviours
Adapted from: Guskey, TR (2000).
Comment
In using this model Guskey suggests that you start with the questions at level five
as a basis for planning your evaluation. A recent study from Davis et al (2009, p146)
confirmed that “multi-level evaluation of professional development does indeed
apply to ICT-related teacher training. Therefore we recommend that all five of
Guskey’s levels be consistently adopted for the evaluation of ICT training…”.
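Guskey's advice to plan 'backwards' from level five can be expressed as a small sketch: the questions are planned in the order five down to one, even though evidence is later gathered from level one upwards. The question texts below are paraphrased from Table 1; the structure itself is an illustrative assumption.

# Level names and questions paraphrased from Table 1.
GUSKEY_LEVELS = {
    1: "Participants' reactions - did they like it?",
    2: "Participants' learning - did they learn what was intended?",
    3: "Organisation support and change - what was the impact on the organisation?",
    4: "Use of new knowledge and skills - did participants apply them effectively?",
    5: "Student learning outcomes - what was the impact on students?",
}

def planning_order():
    """Plan from level five downwards, as Guskey suggests, even though
    evidence is later gathered from level one upwards."""
    return [GUSKEY_LEVELS[n] for n in sorted(GUSKEY_LEVELS, reverse=True)]

for question in planning_order():
    print(question)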
The elements of the logic frame model are resources, outputs, activities, participation,
short-, medium- and longer-term outcomes, and the relevant external influences,
(Wholey, 1983, 1987). Sunra et al (2003, p6) describe the logic model as “…a visual
link of programme inputs and activities to programme outputs and outcomes, and
shows the basic (logic) for these expectations. The logic frame model is an interactive
tool, providing a framework for programme planning, implementation and evaluation,”
and was one of the models reflected on by Glenaffric Ltd (2007) in constructing
its evaluation model for the Joint Information Systems Committee (JISC).
See the complete model in section five of this document.
At its simplest, the logic model may be illustrated by diagram 1.
Diagram 1.
A simple logic frame model
[Flow: programme investments → activities → participation → short-term, medium-term and long-term outcomes]
In practice the diagram is likely to end up being more complex as each of the areas
under consideration are set out in more detail. See diagram 2.
Diagram 2.
A more complex logic frame model
Program Action – Logic Model
• Situation: needs and assets; symptoms versus problems; stakeholder engagement.
• Priorities – consider: mission, vision, values, mandates, resources, local dynamics, collaboration, competition, intended outcomes.
• What we invest (inputs): staff, volunteers, time, money, research base, materials, equipment, technology, partners.
• What we do (outputs – activities): conduct workshops and meetings; deliver services; develop products, curriculum and resources; train; provide counselling; assess; facilitate; partner; work with media.
• Who we reach (outputs – participation): participants, clients, agencies, decision-makers, customers; satisfaction.
• What the short-term results are (outcomes – learning): awareness, knowledge, attitudes, skills, opinions, aspirations, motivations.
• What the medium-term results are (outcomes – action): behaviour, practice, decision-making, policies, social action.
• What the ultimate impacts are (outcomes – conditions): social, economic, civic, environmental.
• Evaluation: focus – collect data – analyse and interpret – report.
This diagram is taken from the UW-Extension logic model (2008), used with kind permission from UW-Extension.
Comment
Evaluators have played a prominent role in using and developing the logic frame
model. This may be why it is often called an “evaluation framework.” Development
and use of logic model concepts by evaluators continues to result in a broad array
of theoretical and practical applications, say Taylor-Powell and Henert (2008).
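As an illustration of why the logic frame model doubles as an evaluation framework, the sketch below holds the model's elements as a simple data structure in which each column is a point where evidence can be collected. The field names follow the element names used in this section; the structure itself is an assumption for illustration, not a prescribed format.

from dataclasses import dataclass, field

@dataclass
class LogicModel:
    investments: list = field(default_factory=list)    # what we invest
    activities: list = field(default_factory=list)     # what we do
    participation: list = field(default_factory=list)  # who we reach
    short_term: list = field(default_factory=list)     # learning
    medium_term: list = field(default_factory=list)    # action
    long_term: list = field(default_factory=list)      # conditions
    assumptions: list = field(default_factory=list)
    external_factors: list = field(default_factory=list)

    def evaluation_points(self):
        """Each column of the model is a place where evidence can be
        collected, which is why the model serves as an evaluation
        framework as well as a planning tool."""
        return {
            "outputs": self.activities + self.participation,
            "outcomes": self.short_term + self.medium_term + self.long_term,
        }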
2. Curriculum
• Plan and lead a broad and balanced ICT curriculum.
• Review and update the curriculum in the light of developments in technology
and practice.
• Ensure pupils’ ICT experiences are progressive, coherent, balanced and consistent.
4. Assessment
• Assess the capability of ICT to support pupils’ learning.
• Use assessment evidence and data in planning learning and teaching across the
whole curriculum.
• Assess the learning in specific subjects when ICT has been used.
5. Professional development
• Identify and address the ICT training needs of your school and individual staff.
• Provide quality support and training activities for all staff in the use of ICT,
sharing effective practice.
• Review, monitor and evaluate professional development as an integral part of the
development of your school.
7. Resources
• Ensure learning and teaching environments use ICT effectively and in line with
strategic needs.
• Purchase, deploy and review appropriate ICT resources that reflect your school
improvement strategy.
• Manage technical support effectively for the benefit of pupils and staff.
Comment
The Next Generation Learning Charter is a four-level scheme to encourage schools’
engagement with, and progress through, the self-review framework. On registering
with the framework, a school is asked to sign the charter, saying they will undertake
a review of the use of ICT in the school during the next three years. When a school
has reached a benchmark level in three of the eight elements, it can receive a
recognition level certificate. The ICT mark accreditation is reached after an assessor’s
visit confirms that the school has reached the nationally agreed standard in all eight
elements of the framework. The criteria for judging the ICT excellence awards are
based on the highest levels in the framework, and form the top level of the charter.
https://selfreview.becta.org.uk/about_next_generation_learning_charter
Section four: Evaluation tools
These are the elements of evaluation that provide the data to evaluate the indicators, processes and outcomes of ICT-based projects. Such evaluation tools sit within the broad structure of an evaluation model and provide the detailed data from which
conclusions may be drawn.
In the development of evaluation methodology it is important to ensure that
“…we develop research designs that capture what is important rather than what is
measurable” (Coburn, 2003, p9). For this we have to consider a number of factors
and, in her research, Coburn has identified four aspects of ‘scale’ that she considers
are vital to the success of projects designed to bring about reform in practices. Scale
is usually considered as the increasing ‘take-up’ of a particular reform and, in her
research on teaching and learning reform in schools, she suggests that evaluators
should be redefining scale in four dimensions as current views are too limiting and
take-up does not indicate change. The four dimensions of scale are:
• depth – the impact the reform has on the individual: have they changed their
behaviour, and do they understand and use the new pedagogy of the reform?
• sustainability – is the capacity of the organisation increased so that all staff
can maintain these changes?
• spread – the extent to which the reform’s principles and norms are understood
and accepted, not just in schools but by local authorities and collaborative
groups, and
• shift in reform ownership – the reform is no longer an ‘external’ reform controlled
by a reformer but becomes an ‘internal’ one, with authority held by the school and
by teachers within the school who have the capacity to sustain, spread and deepen
the reform principles themselves.
The identification and measurement of these dimensions requires a range of complex
tools, some of which are available and some of which have to be developed in order
to gather the data that will inform the evaluation of each of these dimensions.
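One way to make the four dimensions measurable is to require that each is recorded with both a rating and supporting evidence. The sketch below does this; the 0–4 rating scale and the validation rule are illustrative assumptions, since Coburn does not prescribe a particular scoring scheme.

COBURN_DIMENSIONS = ("depth", "sustainability", "spread", "shift_in_ownership")

def record_scale(ratings, evidence):
    """Combine a 0-4 rating and an evidence note for each of Coburn's
    four dimensions, refusing incomplete records so that no dimension
    is silently dropped from the evaluation."""
    missing = [d for d in COBURN_DIMENSIONS
               if d not in ratings or d not in evidence]
    if missing:
        raise ValueError("no rating/evidence for: " + ", ".join(missing))
    return {d: {"rating": ratings[d], "evidence": evidence[d]}
            for d in COBURN_DIMENSIONS}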
Some of these issues were identified and measured in the CeDARE evaluation methodology. The following summary of evaluation methods is from the CeDARE (2009a) ICT in ITT survey analysis; they formed some of the tools required to categorise data, and to define and measure objectives in the survey.
1. Familiarisation
• A teacher’s initial experience with ICT. A teacher participates in an ICT training
programme but does not then go on to use the information.
2. Utilisation
• A teacher tries out the ICT in their classroom but does not expand on its use.
‘If the technology was taken away on Monday, hardly anyone would notice
on Tuesday’.
3. Integration
• This is the beginning of an understanding of ICT. The teacher decides to use
the technology for something specific in their lesson, and if the technology is
unavailable then the lesson is unable to proceed. If the teacher overcomes this
hurdle they are likely to move to the next phase.
4. Reorientation
• The teacher uses ICT to develop learner-centred approaches and change their
own approach to lessons.
5. Evolution
• The teacher uses ICT to continue to develop new approaches to teaching and
learning with such methods as enquiry-based learning, in which the whole
learning environment is changed.
• At the reorientation and evolution stages the capacity for reform is
clearly recognised.
Diagram 3.
A model of adoption of ICT in the classroom
[Staircase: 1 familiarisation → 2 utilisation → 3 integration → 4 reorientation → 5 evolution]
Using questionnaires built around this model would help the evaluator to recognise
the development and capacity for reform of a member of staff.
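A questionnaire built around the model might, for example, score items on a 1–5 scale and map the mean score to the nearest stage. The sketch below shows that idea; it is an assumption about how such an instrument could be built, not the CeDARE instrument itself.

ADOPTION_STAGES = ("familiarisation", "utilisation", "integration",
                   "reorientation", "evolution")

def adoption_stage(item_scores):
    """Map questionnaire items scored 1-5 to the nearest of the five
    adoption stages, using the mean item score."""
    if not item_scores:
        raise ValueError("no responses")
    mean = sum(item_scores) / len(item_scores)
    index = min(max(int(round(mean)), 1), 5) - 1  # clamp onto stages 1-5
    return ADOPTION_STAGES[index]

# Example: a respondent scoring mostly 4s sits at 'reorientation'.
print(adoption_stage([4, 4, 3, 5, 4]))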
E-maturity model
The use of a modified e-maturity model has helped to identify and measure the impact of ICT development, and to identify the improvements and capacity of organisations. Examples of the eMM evaluation tool may be found on the Becta website and on the ICT test-bed site:
http://feandskills.becta.org.uk/display.cfm?resID=38834&page=1886&catID=1868
This link is to the updated version of a tool capable of assessing and improving the use of technology across the further education (FE) and skills sector.
There is also a self-development package on the use and development of the eMM on the Victoria University of Wellington website www.utdc.vuw.ac.nz/research/emm/
The underlying idea that guides the development of the eMM is that the ability of an institution to be effective in any particular area of work depends on its capability to engage in high-quality processes that are reproducible and can be extended and sustained as demand grows.
This site provides a step-by-step guide to develop your evaluation questions
on capability.
A number of pilots have shown that an e-maturity model has the potential to
identify the development and capacity of reform in an organisation. Chapman (2006)
carried out a pre-event questionnaire using the levels in the e-maturity FE and skills
developmental model, followed up with a post-event questionnaire some 18 months
later. The effects of the training and its influence on change in pedagogy could be
clearly recognised from this evaluation.
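The logic of Chapman's design can be shown in a few lines: the same maturity levels are recorded pre-event and roughly 18 months post-event, and the per-respondent change indicates movement in practice. The 1–5 level values and respondent keys below are illustrative assumptions.

def maturity_change(pre, post):
    """Return the change in e-maturity level for each respondent who
    answered both the pre-event and post-event questionnaires."""
    return {who: post[who] - pre[who] for who in pre if who in post}

# Example: respondents 'a' and 'b' moved up one level; 'c' did not change.
print(maturity_change({"a": 2, "b": 3, "c": 4},
                      {"a": 3, "b": 4, "c": 4}))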
Section five: Information on evaluation
This section outlines sources of further information on tools and ideas for evaluating ICT from the UK and elsewhere.
Some ICT-specific models of evaluation practice, including data collection methods and approaches to assessment, may be found in handbooks from a number of sources. The methodologies are too detailed and comprehensive to review in this
document, and this section provides you with a list of websites and titles that may
be accessed for further comprehensive information. As with other areas of ICT
the models are subject to change and development. Major sources of advice and
information on evaluation methodology for ICT will be found on the Becta (www.becta.org.uk) and JISC (jisc.ac.uk) websites.
Diagram 4.
Six steps to effective evaluation
[Cyclical diagram; the recoverable steps and their outputs are: 1. Identify stakeholders (stakeholder analysis); 3. Design evaluation (evaluation plan); 5. Analyse results (evaluation data); 6. Report findings (evaluation reports).]
• The evalkit
Link: www.jiscinfonet.ac.uk/Resources/evalkit/index_html
Scope: This is a directory of ICT evaluation tools and toolkits for use by the education
sector covering the broad area of curriculum development, media selection, resource
selection, quality assurance, and evaluation of ICT development projects. Please note it
is not the aim of JISC to review or rate the tools and toolkits held within the database
but to raise the awareness of the education community of the ICT evaluation toolkits and tools that are currently available. The toolkit database, with links and downloads from a wide range of sources for evaluation tools, may be found at
www.jiscinfonet.ac.uk/Resources/evalkit/toolkit-database
• The National Science Foundation (2002) user-friendly handbook for project evaluation
Link: www.nsf.gov/pubs/2002/nsf02057/nsf02057_1.pdf
Scope: A clear guide to setting out an evaluation framework for a project.
Although based on science projects there are approaches that are applicable to
the use of technology.
Overview: The handbook discusses quantitative and qualitative evaluation methods,
suggesting ways in which they can be used as complements in an evaluation strategy.
As a result of reading this handbook, it is expected that program managers will
increase their understanding of the evaluation process and NSF’s requirements for
evaluation, as well as gain knowledge that will help them to communicate with
evaluators and manage the actual evaluation.
• OERL – the Online Evaluation Resource Library
OERL’s resources include instruments, plans and reports from evaluations that have
proved to be sound and representative of current evaluation practices. OERL also
includes professional development modules that can be used to better understand and
utilise the materials made available.
Section six: An example of the use of an evaluation framework
This section gives a worked example of an evaluation model, the logic model framework, applied to an ICT in ITT project.
Title of the evaluation – Evaluation of the use of a virtual learning environment (VLE): improving reflective practice and self-assessment of progress against the QTS standards, and supporting practice.
The information for the evaluation is drawn from a short video of individuals talking about the use of the VLE in their practice (CeDARE, 2009b).
See video case study – www.tda.gov.uk/techforteaching
Evaluation factors
Any evaluation should consider the five principles of evaluation:
• identify the focus and purpose of evaluation
• build on what is already known
• gather evidence
• analyse and interpret
• communicate and feed back.
(See section 2 for more detail)
The use of the logic frame model ensures that these principles are adhered to, as it
encourages participants to think clearly about:
a. input – what is invested in the project
b. outputs – what is done as part of the project
c. outcomes – impact: what results are achieved in the project.
The logic framework model offers both a vehicle for planning and a framework
for evaluation.
“A logic model helps us match evaluation to the actual program so that we measure
what is appropriate and relevant” (Taylor-Powell and Henert, 2008, p1).
In its simplest form the logic frame is made up of three elements which logically link
activities and effects.
Diagram 1.
A simple logic framework model
[Flow: programme investments → activities → participation → short-term, medium-term and long-term outcomes]
To use this model the first step is to complete a flow model from need to final
impact. When this is completed it will:
• provide a plan for future evaluation
• identify the outcomes that should be measured, and
• provide a guide as to the evaluation tools to be used.
To draw up a simple logic frame model of the DTEP project you will need to
review the video and refer to the model below and the guidelines available at
www.uwex.edu/ces/lmcourse/
Ideally, the logic model should be drawn up at the beginning of a project and should
involve all stakeholders. This will identify the focus and purpose of the evaluation
from its outset.
Try to complete the model step by step using a blank flow chart (a more detailed
teach-yourself guide may be found at
www.uwex.edu/ces/pdande/evaluation/evallogicmodel.html)
You need to:
• identify why the project was set up – situation and priorities
• note what resources are anticipated for the project – input
• identify the activities to be carried out by the project and who will participate
in them – outputs, and
• state the proposed results from the project at short-, medium- and long-term
time-scales – outcomes: impacts.
We have detailed below a completed logic framework model for the DTEP project as an example. We have also provided the evaluation outcomes from the project to illustrate how a logic model would have helped with both planning and evaluation.
Diagram 2.
The logic framework model for the DTEP project
Situation – to improve the “wheelbarrow full of paper” (portfolio) produced at the end of the year and used in assessing QTS.
Priorities (aims): 1. reflective practice; 2. self-assessment of progress against the QTS standards.
• What we invest in the project: TDA funding; equipment; consultants; staff time.
• What we do: install software and provide laptops; involve consultants; train trainees and some staff; encourage use of the VLE; facilitate communities of practice.
• Who we reach: all trainees receive training; some mentors and trainers receive training; managers and partners made aware of the use of the VLE; Ofsted and consultants to be made aware of the use of the VLE.
• What the short-term results are (learning): trainees actively use the VLE; assessable work is stored on the VLE; trainees use the VLE for reflection, communication and communities of practice; more issues to be identified to maximise the use of the VLE.
• What the medium-term results are (action): trainees wish to personalise their VLE access; more staff to have more training on the VLE; assessment of work on the VLE; other staff to join the communities of practice; reflective practice developed; trainees start to share materials and develop communities of practice.
• What the ultimate impacts are (conditions): all staff actively involved in using the VLE; the organisation supports all staff in their use of the VLE; communities of practice at work throughout the organisation, all using the VLE as the basis for communication.
The outcomes can also be grouped under depth, scope and transfer of ownership:
• Depth – change in pedagogy: trainees not only use the VLE for storage of their evidence but also to develop their teaching resources; trainees use the VLE to reflect on their evidence and to carry out self-assessment against the QTS standards.
• Scope – trainees share their evidence with other trainees and mentors for reflective comments; communities of practice develop using the VLE; more staff actively involved in using the VLE with trainees, including other partner schools.
• Transfer of ownership – personalising the VLE; developing the potential of the VLE from storage to a pedagogic vehicle for all staff; all staff changing their working practices to fully utilise the VLE.
Assumptions: all trainees will use the VLE effectively after a short training session; all mentors will actively use the VLE in their relationship with their trainee; Ofsted will be able to use the VLE ‘store of evidence’ for assessment.
External factors: current limits on the use of the VLE as a result of the supplier contract.
Note: the work of CeDARE (after Coburn, 2003) in evaluating ICT in ITT has identified the importance of considering the impact of technology in terms of:
• depth – how do you ensure it impacts on classrooms and in different contexts? How does it impact on beliefs and attitudes?
• scope – how do you actively involve a critical mass of people in trying out the technology and changing their practice?
• transfer of ownership – how do you ensure people own the technology and the intervention set up by a project?
The traditional ‘outcomes – impacts’ in the logic framework evaluation should therefore be considered under the headings of depth, scope and transfer of ownership rather than learning, action and conditions, and the impact of the use of the VLE should be considered in those terms.
This model has been derived from the UW-Extension logic model (2008), used with kind permission from UW-Extension.
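For readers who want to work with the completed model rather than the diagram, the sketch below records its main entries as plain data, using the simple logic frame elements as field names. The selection and naming are our own illustrative condensation of the diagram above.

# The completed DTEP logic model as plain data (condensed from diagram 2).
dtep_logic_model = {
    "investments": ["TDA funding", "equipment", "consultants", "staff time"],
    "activities": ["install software and provide laptops", "involve consultants",
                   "train trainees and some staff", "encourage use of the VLE",
                   "facilitate communities of practice"],
    "participation": ["all trainees", "some mentors and trainers",
                      "managers and partners", "Ofsted and consultants"],
    "short_term": ["trainees actively use the VLE",
                   "assessable work stored on the VLE",
                   "VLE used for reflection and communication"],
    "medium_term": ["trainees personalise their VLE access",
                    "assessment of work on the VLE",
                    "other staff join the communities of practice"],
    "long_term": ["all staff actively involved in using the VLE",
                  "communities of practice throughout the organisation"],
    "assumptions": ["trainees will use the VLE after a short training session",
                    "mentors will actively use the VLE with their trainees",
                    "Ofsted will accept the VLE 'store of evidence'"],
    "external_factors": ["limits on the VLE from the supplier contract"],
}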
Findings
The findings of the evaluation from the video were as follows.
Scope of the implementation
• Planning of the project was limited in its scope. There were a number of
assumptions made based on little evidence.
• The training provided for the trainees and other staff was very limited and
unsupported post-training.
• The VLE has meant that the ‘Wheelbarrow of paper’ is no longer needed.
• Trainees found the VLE very useful not only in storing their evidence but also in
developing other areas of their work.
• It is still uncertain whether Ofsted will accept assessment of trainees via a VLE.
Depth of engagement
• For some trainees the use of the VLE changed the way they worked and developed
their reflective practice.
• New communities of practice were established by trainees.
• Few mentors changed their practice.
• During the project a number of other issues were identified and now need to
be developed.
• The future potential of the VLE has been recognised by trainees and managers
involved in the project.
Transfer of ownership
• Some trainees were developing their own communities of practice with other
users of the VLE.
• Some trainees were changing their practice as a result of the availability of
resources within the VLE.
• The partnership has recognised the potential of a VLE to develop, change and
improve practice for more than just trainees.
Recommendations
The project has met its aims but has also highlighted the limited outlook of
those original aims. The project leaders now need to:
• involve more staff in the use of the VLE
• develop future training events to meet the needs of other groups of staff in using
the VLE
• involve Ofsted in the discussions of their future developments, and
• monitor the impact of the use of the VLE on teaching and learning for trainees
and other staff.
Comment
Those who have watched the video and followed the steps in the model should arrive at similar recommendations. The logic framework model should enable a straightforward evaluation of any project. This example has used a limited range of activities, but the model has the potential to be used in either simple or multifaceted projects.
References
The TDA is committed to providing accessible information.
To request this item in another language or format, contact
TDA corporate communications at the address below or
e-mail: corporatecomms@tda.gov.uk
Please tell us what you require and we will consider with you how
to meet your needs.
www.tda.gov.uk
© TDA 2009
TDA0731