
Evaluation Model Critique: An Objectives-Oriented Evaluation

For the Computer-Aided Drafting Program at Keiser University

John Peterson
Keiser University
EDL751
Professor Geffney
September 19, 2010

Evaluation Model Critique: An Objectives-Oriented Evaluation

For the Computer-Aided Drafting Program at Keiser University

Introduction

According to Worthen (1990), educational programs have grown steadily in both size and expense. “Not surprisingly, taxpayers and public officials have increasingly urged that these programs be made more accountable to their publics” (Worthen, 1990, p. 42).

To describe a program evaluation, it is essential to identify its elements, although, as Taylor-Powell, Steele, and Douglah (1996) cautioned, “There is no blueprint or recipe for conducting a good evaluation” (p. 2).

This paper describes an objectives-oriented evaluation of the Computer-Aided Drafting program at Keiser University.

Rationale for Selecting an Objectives-Oriented Evaluation Approach

Fitzpatrick, Sanders, and Worthen (2004) stated, “The distinguishing feature of an objectives-oriented evaluation is that the purposes of some activity are specified, and then evaluation focuses on the extent to which those purposes are achieved” (p. 71). This feature is a natural fit for the teaching and learning style of the Computer-Aided Drafting program at Keiser University. Computer-Aided Drafting courses are built around hands-on lab exercises, which students complete using material assigned specifically to fulfill the objectives of the curriculum.



Program Description

Keiser University’s Associate of Science degree in Computer-Aided Drafting

develops design techniques and skills that satisfy entry-level requirements as a

general designer in a CAD environment. Students explore the theoretical design

process in architecture, mechanical, civil, and structural engineering, together

with 3-D modeling principles. In addition to traditional design training, hands-on

computer-assisted design is applied to all disciplines (Keiser University, 2010, pp.

176-177).

The Strengths and Weaknesses of an Objectives-Oriented Evaluation

The strength of objectives-oriented evaluations resides in their simplicity: they are easily understood, easy to follow and implement, and produce information relevant to the program’s mission (Fitzpatrick, Sanders, & Worthen, 2004).

There are several weaknesses to an objectives-oriented evaluation. According to Fitzpatrick, Sanders, and Worthen (2004), such evaluations lack a real evaluative component; lack standards for judging the importance of observed discrepancies between objectives and performance levels; neglect the value of the objectives themselves; ignore important alternatives that should be considered; neglect transactions that occur within the program or activity being evaluated; neglect the context in which the evaluation takes place; ignore important outcomes other than those covered by the objectives; omit evidence of program value not reflected in its own objectives; and promote a linear, inflexible approach to evaluation.



Planning the Program Evaluation

Worthen (1990) stated, “In an educational context, a program can be thought of as

any educational enterprise aimed at the solution of a particular educational problem or the

improvement of some aspect of an educational system” (p. 42).

To evaluate the Computer-Aided Drafting program at Keiser University, it is important to follow the steps proposed by Taylor-Powell, Steele, and Douglah (1996), as shown in Table 1.

Table 1

Planning a program evaluation

Focusing the evaluation
• What are you going to evaluate?
• What is the purpose of the evaluation?
• Who will use the evaluation? How will they use it?
• What questions will the evaluation seek to answer?
• What information do you need to answer the questions?
• When is the evaluation needed?
• What resources will you need (time, money, people)?

Collecting the information
• What sources of information will you use?
• What data collection method(s) will you use?
• What collection procedures will you use?

Using the information
• How will the data be analyzed?
• How will the information be interpreted, and by whom?
• How will the evaluation be communicated and shared?

Managing the evaluation
• Implementing the plan: timeline and responsibilities, budget, and finalizing the plan

Note. Source: Taylor-Powell, E., Steele, S., & Douglah, M. (1996, February). Planning a program evaluation. Program Development and Evaluation, University of Wisconsin-Extension. Retrieved from http://learningstore.uwex.edu/Assets/pdfs/G3658-01.pdf

In order to answer the questions presented above, it is helpful to refer to Bennett’s hierarchy, as shown in Table 2. As explained by Taylor-Powell, Steele, and Douglah (1996), “The use of this hierarchy can help to describe a program’s logic and expected links from inputs to end results” (p. 6). For example, a program “may show evidence of accomplishments at the first five levels long before practices are changed, actions are taken or long term community improvements are made” (Taylor-Powell, Steele, & Douglah, 1996, p. 6).

Table 2

Bennett’s hierarchy of evidence for program evaluation

7. Impact: Social, economic, and environmental conditions intended as end results, impacts, or benefits of programs; public and private benefits.

6. Actions: Patterns of behavior and procedures, such as decisions taken, recommendations adopted, practices implemented, actions taken, technologies used, and policies enacted.

5. Learning: Knowledge (awareness, understanding, mental abilities); opinions (outlooks, perspectives, viewpoints); skills (verbal or physical abilities); aspirations (ambitions, hopes).

4. Reactions: Degree of interest; feelings toward the program; positive or negative interest in topics addressed; acceptance of activity leaders; attraction to educational methods of program activities.

3. Participation: Number of people reached; characteristics and diversity of people; frequency and intensity of contact and participation.

2. Activities: Events; educational methods used; subject matter taught; media work; promotional activities.

1. Resources: Staff and volunteer time; salaries; resources used (equipment, travel).

Note. Source: Bennett and Rockwell, 1995, Targeting Outcomes of Programs (TOP); slightly modified (as cited in Taylor-Powell, Steele, & Douglah, 1996, p. 6).

Conclusion

Taylor-Powell, Steele, and Douglah (1996) correctly observed that program development is an ongoing, systematic process. Chen (2005) stated, “The body of evaluation knowledge needs empirical feedback to nurture its growth” (p. 270).

Because of its technological nature, the Computer-Aided Drafting program needs to be evaluated on a regular basis. Society is changing rapidly as business practices, and the information technology that supports them, continue to advance. A program evaluation is not meant to be a punitive process; on the contrary, it is designed to ensure the continued progress of the program being analyzed, and it is good policy to set up protocols for frequent evaluations.



References

Chen, H. (2005). Practical program evaluation: Assessing and improving planning, implementation, and effectiveness. Thousand Oaks, CA: Sage.

Fitzpatrick, J. L., Sanders, J. R., & Worthen, B. R. (2004). Program evaluation:

Alternative approaches and practical guidelines (3rd ed.). Boston, MA: Pearson.

Keiser University. (2010, August). University-wide catalog and announcement bulletin, 10(1), 176-177.

Taylor-Powell, E., Steele, S., & Douglah, M. (1996, February). Planning a program evaluation. Program Development and Evaluation, University of Wisconsin-Extension. Retrieved from http://www.uwex.edu/ces/pdande/evaluation/evaldocs.html

Worthen, B. (1990). Program evaluation. In H. Walberg & G. Haertel (Eds.), The international encyclopedia of educational evaluation (pp. 42-47). Toronto, ON: Pergamon Press.