
The Clearing House: A Journal of Educational Strategies, Issues and Ideas

ISSN: 0009-8655 (Print) 1939-912X (Online) Journal homepage: http://www.tandfonline.com/loi/vtch20

Evaluating Staff Development

Dr. Donald C. Orlich

To cite this article: Dr. Donald C. Orlich (1989) Evaluating Staff Development, The
Clearing House: A Journal of Educational Strategies, Issues and Ideas, 62:8, 370-374, DOI:
10.1080/00098655.1989.10114097

To link to this article: http://dx.doi.org/10.1080/00098655.1989.10114097

Published online: 29 Jul 2010.



Evaluating Staff Development
Four Decision-Making Models

DONALD C. ORLICH

Dr. Orlich is a professor in the Department of Educational Administration and Supervision at Washington State University.

The goal of all inservice education, given adequate resources, is to offer the highest quality program possible. If staff developers are to achieve and maintain high-quality programs, they must collect trustworthy information so that their immediate and future decisions are based on empirical data. There are a variety of general evaluation models from which staff developers may select to assess their programs.

Ernest R. House (1978) identified eight major evaluation models, their assumptions, and uses. These eight models are applicable to staff development and subsume the program evaluation strategies described by Bruce W. Tuckman (1985). The models described in House's taxonomy are the following:

1. Systems Analysis
2. Behavioral Objectives
3. Decision Making
4. Goal-Free
5. Art Criticism
6. Accreditation
7. Adversary
8. Transaction

According to House, models one through four attempt to provide objectivity or explicit measures. Models five through eight tend to provide more subjective, or implicit, knowledge about that which is being judged.

For the purpose of this paper, only four decision-making models will be discussed, as these four tend to be the most practical and efficient methods of evaluating staff development programs and are the ones most often used.

One assumption this writer makes is that any evaluation activity must ultimately lead to better decision making. If an evaluation model is to be helpful, it must be viewed as a process by which staff development programs are improved through rational decision making based upon objectively collected data.

Decision-Making Models

The models selected have various proponents: Daniel L. Stufflebeam (CIPP Model), Marvin C. Alkin (CSE Model), Michael Scriven (formative and summative), Matthew B. Miles (temporary systems). Each model is expanded here and applied to staff development programs.

CIPP Model

Daniel L. Stufflebeam (1971) proposed the CIPP Model as having four major components: (1) context, (2) input, (3) process, and (4) product, leading to the acronym CIPP. Each component requires specific decision-making actions. Stufflebeam views the decision-making process as one that modifies, adjusts, sustains, or discontinues any program or any of its parts.

Context evaluation, according to Stufflebeam, is conducted at the activity site so that one may gather information concerning needs, problems, and objectives. The context evaluation is a reality check.

One uses input evaluation to gather data on the strengths and weaknesses of alternative strategies, each of which could probably accomplish the program's objectives. For example, if a series of workshops were to be conducted on the subject of preventing accidents in sports, then one would consider what resources would be needed to achieve the project's stated mission.

Process evaluation methods are used to determine the various techniques, strategies, and designs by which procedures are used in a program.

Product evaluation refers to a final, overall decision-making process in which one decides whether to (1) continue a project as is, (2) modify the project and continue to use it, or (3) terminate the project.

To use the CIPP Model, a school district must have personnel who can identify the specific elements of each of these CIPP components. These personnel have to specify exactly what they consider to be the context, input, process, and product components. Second, personnel must be designated to collect the data for each of the four areas. Finally, a series of standards must be developed by which the evaluation itself is judged either meaningful or useless. The CIPP Model is rather complex and requires well-trained evaluators.


The CSE Model

Marvin C. Alkin (1970) suggested the more eclectic CSE Evaluation Model. (CSE refers to UCLA's Center for the Study of Evaluation.) The basic principle behind the CSE Model is that evaluation is an ongoing process that helps decision makers to select among alternatives in a more informed way. (This element is found in all models in some form.) Yet it should be noted that to select among alternatives means that viable alternatives are in fact available. Evaluation must be viewed, argues Alkin, as a means by which directions can be changed, programs modified or eliminated, and personnel reshuffled as need be. In most cases, staff developers do not use evaluation to specify alternative directions.

Alkin identified five decision areas and their concomitant evaluation requirements.

Decisions                                  Evaluations
1. Selections of objectives or problems    Needs assessment
2. Programs to meet objectives             Plans
3. Program operations                      Implementation
4. Program improvement                     Progress
5. Program certification                   Outcomes

Alkin stresses that each of the five paired areas requires the collection of information, an evaluation of that information or data, and, finally, a decision based on the quantifiable information. In all steps, the evaluator must realize that the judgments are based on a probability of success.

When one judges a staff development program, the first two pairs would be most critical, that is, objectives/needs assessments and program/plans. However, the judging of instructional components of the program would rely chiefly on the last three pairs: operations/implementations, improvement/progress, and certification/outcome.

The CSE Model requires continuous interaction among all elements and providers of a staff development program. Feedback from all participants in an inservice project is critical in the CSE Model. The feedback is used by project directors who want to capitalize on their own human resources.

Formative-Summative Evaluation

The basic objectives of any evaluation system are to determine (1) the extent to which the project objectives are being achieved and (2) the impact of the project on the participants. To accomplish these evaluation objectives, staff developers may choose to use two additional evaluation methodologies: formative and summative. Michael Scriven (1967) and others have suggested these modes. Let us examine the components.

Formative evaluation is designed to provide ongoing feedback as quickly as possible. Formative instruments are specifically designed to monitor the activities or components of a program as they take place, in order to determine where problems are emerging. Formative evaluation allows problems to be speedily identified and rectified.

Only a few selected items need to be checked in the course of a formative evaluation. These would all be based on the stated learning objectives for the project. The important point is to collect feedback while enough time remains to make corrections.

Summative evaluation is conducted as the final assessment of a project (or part of a project). Summative evaluations may take several forms, as long as they are consistent with the prescribed objectives of the program. Summative data can be tabulated into absolute responses and then given as a percentage for each item. Comparisons between participants may be made on summative data (but not on the formative measures). Recall that formative evaluations are designed to give feedback, whereas summative evaluation is for grading. These evaluations are placed at logical points in the project, such as at the ends of units, learning activities, or program elements. Most important, a single summative evaluation is inadequate. The summative sets are arranged in profiles to illustrate the sum of evaluation activities.
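To make that tabulation step concrete, the following is a minimal sketch, not taken from the article, of how summative responses might be tallied into absolute counts and then percentages for each item. The item names, response categories, and sample data are hypothetical.

```python
from collections import Counter

def tabulate_summative(responses, categories):
    """Tally each item's answers into absolute counts and percentages."""
    summary = {}
    for item, answers in responses.items():
        counts = Counter(answers)
        total = len(answers)
        summary[item] = {
            category: (counts[category], round(100 * counts[category] / total, 1))
            for category in categories
        }
    return summary

# Hypothetical summative data from 20 workshop participants.
categories = ["Very adequate", "Adequate", "Inadequate", "Very inadequate"]
responses = {
    "Discussion time": ["Very adequate"] * 5 + ["Adequate"] * 12 + ["Inadequate"] * 3,
    "Session relevance": ["Very adequate"] * 8 + ["Adequate"] * 10 + ["Inadequate"] * 2,
}

for item, distribution in tabulate_summative(responses, categories).items():
    # e.g. Discussion time {'Very adequate': (5, 25.0), 'Adequate': (12, 60.0), ...}
    print(item, distribution)
```

Tabulations of this kind can then be compared item by item across participants or sessions, as the summative profiles described above require.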
Temporary Systems Approach

In any social system, permanent structures, such as institutions or organizations, endure beyond the lifetimes of their members. Humans expect their institutions to exist forever. Churches, the military, colleges, governments, families, and the schools are all permanent features of society. Thus, we live in a society marked by permanence and the knowledge that some things simply do not change very much.

Matthew B. Miles (1964) made these observations and then asked the question, How do permanent organizations change? One apparent solution was through the establishment of temporary systems within the permanent ones. Temporary systems, noted Miles, operate for only short durations, have well-established goals, and are expected to end after short periods of time. Workshops, conferences, clinics, seminars, and training sessions, in which a small number of people meet for a defined period of time to achieve a specified set of goals or objectives, operate as temporary systems.

While involved in a temporary system, participants temporarily drop most of their usual roles and responsibilities (from their permanent systems) and concentrate on a few short-term objectives. The participants know that they will be in the temporary system for only a brief period. During the temporary system phase of a training project, participants are free to try out new ideas, practice a new technique without the usual penalties for mistakes, and work in a generally supportive and noncompetitive climate removed from back-home interruptions.

To install a temporary system into the evaluation paradigm of any inservice education project, staff developers must consider five rather simple phases: (1) planning or preparing for the project; (2) organizing for the project's start-up; (3) operating the project; (4) closing the system, that is, preparing the participants for their customary roles; and (5) implementing the strategies learned in the project.

The evaluation of staff development activities is time-consuming and requires a commitment to using data that are generated. However, evaluations of all the activities of any inservice project should be carried out to ensure that the following take place:

Staff reactions and perceptions are obtained.
Adjustments are made as needed.
Successful activities are identified for future use.
Outstanding presenters are identified and used again.
Success or failure of a project can be determined early.
Long- or short-range profiles are compiled.
Participants learn that their evaluations have an impact on staff development.

When using a temporary systems model, data must be systematically collected from all participants. For example, if a workshop leader is unsure about the format and emphasis workshop participants want, he or she would query them, using a questionnaire such as that shown in figure 1. Decisions about the presentations could then be adjusted before the workshop begins. Feedback is an essential part of this model (figure 1).

FIGURE 1
Project Desires

Directions: Please place an X on the line between the paired words to indicate your desires for this workshop.

I desire that this workshop be
1. More information oriented --- More practice oriented
2. Conducted by lecture as the primary format --- Conducted with hands-on experiences
3. Oriented toward student problems --- Oriented toward teacher problems
4. Theoretically oriented --- Experientially oriented
5. Balanced: lectures, demonstrations, and activities --- Information-giving only

FIGURE 2
Feedback (Use after full-day session)

Directions: Please place an X on the line that best describes your response.

1. Do you think your time is being used effectively?
   Used very effectively --- Used effectively --- Used ineffectively --- Used very ineffectively
2. Is there adequate discussion following activities?
   Very adequate --- Adequate --- Inadequate --- Very inadequate
3. Are the project sessions informative enough?
   Very informative --- Informative --- Uninformative --- Very uninformative
4. To what extent is the material presented relevant to your classroom instruction?
   Very relevant --- Relevant --- Irrelevant --- Very irrelevant
5. Any suggestions or comments?
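The article does not prescribe a scoring procedure for these forms, but as one illustrative sketch, a few lines of Python could turn a day's worth of figure 2 responses into a quick check of where problems are emerging, in the spirit of the formative feedback described earlier. The item labels, the set of negative responses, and the 25 percent threshold are assumptions for the example, not part of the original instrument.

```python
def flag_problem_items(responses, negative_labels, threshold=0.25):
    """Flag items where the share of negative responses meets or exceeds a threshold.

    responses: {item: [response labels from one day's feedback forms]}
    negative_labels: labels counted as negative (the right-hand choices on each scale).
    """
    flagged = []
    for item, answers in responses.items():
        negative_share = sum(a in negative_labels for a in answers) / len(answers)
        if negative_share >= threshold:
            flagged.append((item, round(negative_share, 2)))
    return flagged

# Hypothetical full-day session with 16 participants.
day_one = {
    "Time used effectively": ["Used effectively"] * 13 + ["Used ineffectively"] * 3,
    "Adequate discussion": ["Adequate"] * 9 + ["Inadequate"] * 5 + ["Very inadequate"] * 2,
    "Sessions informative": ["Very informative"] * 6 + ["Informative"] * 10,
    "Relevant to classroom": ["Relevant"] * 12 + ["Irrelevant"] * 4,
}
negative = {"Used ineffectively", "Used very ineffectively", "Inadequate",
            "Very inadequate", "Uninformative", "Very uninformative",
            "Irrelevant", "Very irrelevant"}

# Items where a quarter or more of the responses fell on the negative side.
print(flag_problem_items(day_one, negative))
```

Flagged items would point the workshop leader to the elements that need adjustment before the next session, which is the purpose of collecting formative feedback while time remains to make corrections.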



FIGURE 3
Perceptions

You are having a variety of experiences during this project and, of course, these experiences affect what you learn. These experiences and your consequent learning will help the program director improve the project. For each item below, circle the number showing how well you think the management tasks have been done by the director and the staff.

Low 0 1 2 3 4 5 6 7 8 9 High

1. Project goals were not specified clearly.  0 1 2 3 4 5 6 7 8 9  Project goals were specified clearly.
2. The climate of this project was poor.  0 1 2 3 4 5 6 7 8 9  The climate of this project was very good.
3. The wrong people came to this project.  0 1 2 3 4 5 6 7 8 9  The right people came to this project.
4. The overall design of this project was ineffective.  0 1 2 3 4 5 6 7 8 9  The overall design of this project was quite effective.
5. This project did not get off to a good start.  0 1 2 3 4 5 6 7 8 9  This project did get off to a very good start.
6. This project will have no influence on how I teach.  0 1 2 3 4 5 6 7 8 9  This project will strongly influence how I teach.
7. Staff resources were poorly used in this project.  0 1 2 3 4 5 6 7 8 9  Staff resources were well used in this project.
8. No experiential or hands-on learning activities were used in this project.  0 1 2 3 4 5 6 7 8 9  Experiential or hands-on activities have been frequently used in this project.
9. I would definitely not recommend that this project be conducted for others.  0 1 2 3 4 5 6 7 8 9  I would definitely recommend that this project be repeated for others.

Note: See Orlich and Hannaford (1986) for a 14-year longitudinal study using this model. The elements of goals, climate, design, and influence tend to be critical for success.

Figure 2 shows how feedback can be obtained on four elements of a typical inservice session. A final, or summative, evaluation could be conducted using the model in figure 3.

A school district staff developer could compile the data obtained from the various instruments and prepare profiles of various inservice efforts. Decision-making evaluation models are used to find out how well the project was perceived, in order to make rational decisions in the future. Trends that are observed can be maintained, if desirable, or changed if the trends are considered undesirable.
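As a hedged sketch of what such profiles might look like in code (this is not a procedure given in the article), the example below averages figure 3-style ratings (0 to 9) for each trait of each project and flags low means on the four traits that the Orlich and Hannaford study, discussed below, found critical. The project names, scores, and the 6.0 cutoff are hypothetical.

```python
from statistics import mean

# The four elements reported as critical for success (goals, climate, design, influence).
CRITICAL_TRAITS = {"goals", "climate", "design", "influence"}

def project_profile(ratings):
    """Reduce one project's figure 3-style ratings (0-9) to a mean score per trait."""
    return {trait: round(mean(scores), 1) for trait, scores in ratings.items()}

def compare_projects(projects):
    """Build profiles for several inservice projects and flag weak critical traits."""
    profiles = {name: project_profile(ratings) for name, ratings in projects.items()}
    warnings = [
        (name, trait, score)
        for name, profile in profiles.items()
        for trait, score in profile.items()
        if trait in CRITICAL_TRAITS and score < 6.0
    ]
    return profiles, warnings

# Hypothetical participant ratings from two inservice projects.
projects = {
    "Science safety workshop": {
        "goals": [8, 9, 7, 8], "climate": [9, 8, 8, 9],
        "design": [7, 8, 6, 7], "influence": [8, 7, 9, 8],
    },
    "Grading practices seminar": {
        "goals": [5, 6, 4, 5], "climate": [7, 8, 7, 6],
        "design": [6, 5, 6, 7], "influence": [4, 5, 5, 6],
    },
}

profiles, warnings = compare_projects(projects)
print(profiles)
print(warnings)  # low means on critical traits, e.g. goals and influence for the seminar
```

Kept over a series of projects, profiles of this kind would show the long- or short-range trends that the staff developer can choose to maintain or change.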
The author has been using temporary systems management on all inservice projects that he has directed since 1972. There is no doubt that this method has improved the operation of the projects. In 1986, Marion Hannaford and the author reported on a 14-year longitudinal study that used the temporary systems approach. Of the traits listed in figure 3, four tended to be critical for success: (1) clear goals, (2) positive climate, (3) overall design, and (4) influence on instruction. These findings may not be surprising, for they represent elements critical for successful staff development programs.

Conclusion

There are many reasons to evaluate staff development programs and inservice education projects. Depending on the project objectives, one could use group tests, student achievement tests, classroom observations, attitude scales, or anecdotal records as mechanisms for determining the success of an inservice project. When using decision-making evaluation models, staff developers can make critical adjustments in any training program so that every participant masters the content and, more important, uses the skills or knowledge as was intended.

Bruce Joyce and Beverly Showers (1988) provided a practical and realistic rationale for evaluating inservice programs. They concluded that if we truly intend to increase student learning through staff development programs, serious evaluation of those programs will be necessary (p. 127). No question about it, systematic evaluation of inservice projects is essential.

REFERENCES

Alkin, M. C. 1970. Products for improving educational evaluation. Evaluation Comment, 3(1): 1-4.
House, E. R. 1978. Assumptions underlying evaluation models. Educational Researcher, 7(3): 4-12.
Joyce, B., and B. Showers. 1988. Student achievement through staff development. New York: Longman.
Miles, M. B. 1964. On temporary systems. In Innovations in education, edited by Matthew B. Miles. New York: Teachers College Press.
Orlich, D. C., and M. E. Hannaford. 1986. Longitudinal evaluation strategy for staff development. Paper presented at the National Staff Development Council Annual Conference, Atlanta, Georgia, December 15.
Scriven, M. 1967. The methodology of evaluation. AERA Monograph Series on Curriculum Evaluation, No. 1: 39-83.
Stufflebeam, D. L. 1971. The relevance of the CIPP evaluation model for educational accountability. Journal of Research and Development in Education, 5(1): 19-21.
Tuckman, B. W. 1985. Evaluating instructional programs. 2d ed. Boston: Allyn and Bacon.