
What is assessment?

Assessment is...

A process
About collecting information
Conducted to improve educational programs
A way to demonstrate program effectiveness
Focused on student learning and development outcomes
A scholarly endeavor
Highly valued by university leadership, sponsors, and accreditors
Primarily useful for program faculty and leaders

Assessment is NOT...

Useless (except if poorly done)
An end goal
The same as course grades (why not? see "Why can't we use course grades for assessment?" below)
The only information considered when evaluating programs
Student satisfaction or opinions (why not? see "Why don't student opinions or satisfaction count as assessment?" below)

Why do assessment?

There are three types of reasons people conduct program assessment. Each type
addresses different program needs and each benefits different program stakeholders.
Program Improvement

Helps program developers identify areas of improvement for the program

Shows program developers the actual impact the program has on students

Recruitment

Provides parents with evidence of the value a program has for their child

Gives prospective students evidence of why they should participate

Accountability

Meets University annual program reporting and program review requirements

Addresses accrediting agency program evaluation requirements

May also satisfy the requirements of other funding or regulatory agencies

How do you do assessment?

Assessment is a six-step cyclical process. Each step of the cycle is described in detail
below.

Program Objectives
At this stage, program leaders clearly state the impact the program is expected to have
on participants (i.e., the program goals). These goals are broken down into more specific
statements in the form of learning objectives: statements that specify an observable
behavior and the expected level of that behavior after completing the program. For
example, an objective might state that, after completing the program, students will score
at least 80% on a test of the program's core concepts.
The observable behavior in the objective statement is what is measured for assessment.
The criterion in the objective statement is the standard that determines whether the
objective was achieved.
PASS provides a workshop on writing learning objectives. Also, please see
the "Objectives Dos and Don'ts List" and the "Effective Objectives Checklist" for more
information.

Assessment Design
Next, you must plan how the observable behavior stated in the objective will be
measured and when. The first challenge in this step is identifying an assessment
instrument to measure the observable behavior. For information about assessment
instruments, contact PASS.
After finding an instrument, decide how and when it will be used to gather assessment
data. Commonly, instruments are administered to program participants before and after
the program so that change in the scores can be examined (i.e., pre-post testing).
Ideally, the instrument will be given to a group of program participants and
nonparticipants so that scores can be compared across these groups (i.e., control vs.
treatment groups).
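As a rough, hypothetical illustration of how data from such a design might be analyzed, the Python sketch below compares pre- and post-program scores for the same participants (a paired comparison) and post-program scores for participants versus nonparticipants (an independent comparison). All scores are invented for the example, and this is only one of many possible approaches; PASS can advise on an analysis suited to your actual design.

```python
# Hypothetical sketch of a pre-post design with a comparison group.
# All scores are invented for illustration only.
import numpy as np
from scipy import stats

# Pre- and post-program scores for the same participants (pre-post testing)
pre_scores = np.array([62, 70, 55, 68, 74, 60, 66, 71])
post_scores = np.array([71, 78, 63, 72, 80, 69, 70, 77])

# Post-program scores for a comparison group of nonparticipants
control_scores = np.array([64, 69, 58, 66, 73, 61, 65, 70])

# Paired comparison: did participants' scores change from pre to post?
paired = stats.ttest_rel(post_scores, pre_scores)
print(f"Mean gain = {np.mean(post_scores - pre_scores):.1f}, "
      f"t = {paired.statistic:.2f}, p = {paired.pvalue:.3f}")

# Independent comparison: do participants outscore nonparticipants after the program?
independent = stats.ttest_ind(post_scores, control_scores)
print(f"Participants vs. nonparticipants: t = {independent.statistic:.2f}, "
      f"p = {independent.pvalue:.3f}")
```

A comparison group like this helps separate the program's effect from change that would have happened anyway, which is why it is described above as the ideal design.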
When designing the assessment plan, think about how different possible results will be
used. Move forward with an assessment plan if there is a clear plan of action for how the
results will be used.
Contact PASS for information about how to get help with selecting your assessment
instrument and designing an assessment plan.
Data Collection
Make the assessment happen! Administer the assessment instrument at the planned
times and collect trustworthy data.
Things to consider in this step include whether the instrument will be administered online
or on paper, who will administer the instrument, and under what conditions. The
conditions for administration should be standardized as much as possible. This means
that the conditions should be the same for all participants taking the instrument.
Standardization increases the comparability of individuals' scores.
Finally, how will students be motivated to complete the instrument? Often with
assessment, results on the instruments have little to no consequences for students.
Thus, motivating students to take the instrument seriously is the responsibility of the
program or the person administering the instrument. Consult with PASS for ideas about
ways to improve student motivation on assessments.
Analyze Data
Information gathered through the assessment process can be either qualitative (textual)
or quantitative (numeric). There are general guidelines for the appropriate analysis of
each type of data. To ensure that the data are analyzed in a way that maximizes the
trustworthiness of the conclusions drawn from them, a person skilled in data analysis
methodologies should guide this phase of assessment.
Contact PASS for information about how to get help with data analysis.
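As a minimal, hypothetical sketch of the two data types described above, the Python example below summarizes invented quantitative score gains (with a standardized effect size) and tallies invented theme codes from open-ended responses. It is an illustration only; appropriate methods for real assessment data should be chosen with an analyst.

```python
# Hypothetical sketch: basic summaries of quantitative and qualitative assessment data.
# All values are invented for illustration; an analyst should guide real analyses.
from collections import Counter
import numpy as np

# Quantitative: summarize score gains and a standardized effect size (Cohen's d on gains)
pre = np.array([62, 70, 55, 68, 74, 60, 66, 71])
post = np.array([71, 78, 63, 72, 80, 69, 70, 77])
gains = post - pre
effect_size = gains.mean() / gains.std(ddof=1)
print(f"Mean gain = {gains.mean():.1f} (SD = {gains.std(ddof=1):.1f}), d = {effect_size:.2f}")

# Qualitative: tally themes assigned to open-ended responses during coding
coded_themes = ["teamwork", "communication", "teamwork",
                "time management", "communication", "teamwork"]
print(Counter(coded_themes).most_common())
```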
Report Results
Every program has multiple stakeholders and different audiences with an interest in
one or more aspects of the assessment process. In order to reap the full benefits of
conducting assessment, programs should budget and plan for presenting their
assessment results to each relevant audience. At a minimum, programs should present the
results to everyone directly involved in the program and to their direct supervisors.


Reports of assessment results can be verbal or written and can take many different
forms. When reporting results to different stakeholders, customize the format and
presentation to the specific needs and interests of that audience. When reporting results,
the goal should always be to maximize the utility and applicability of the results for that
audience. Consider why that audience is interested in the assessment results and focus
on addressing that interest in your report. Also, make sure to use language appropriate
for the audience.
PASS can help you prepare reports and presentations of your assessment results for
multiple audiences!
Use Results
Assessment is only useless if the results are not used! When done well, assessment
provides valuable information that can inform decisions about a program. Regardless of
the results of a specific assessment cycle, if the other steps in the cycle were done well,
there should be some action that can be taken to improve the program for the next cycle.
Often, when a program is just beginning to conduct assessment, the results are used to
inform changes in the assessment design itself. After a few assessment cycles, however,
the assessment design should be strong enough that the results can be trusted to inform
program changes.

Why can't we use course grades for assessment?

Course grades are assigned to individual students to indicate the extent to which a
student has met the instructor's expectations for a given set of course requirements.
Assessment results are intended to reflect the extent to which all students achieve the
objectives of a program. Clearly, grades and assessment differ in that one deals with
individuals and courses and the other deals with groups and programs. Why does this
difference matter?
How grades are assigned varies across courses and course sections. This means we
can't compare grades across courses to make inferences about the program as a whole!
Often, factors unrelated to program objectives (such as attendance or participation) are
considered when assigning grades. This means that grades are not a pure measure of
student learning.
Grades are assigned based on one instructor's judgment only. This means that there is a
lot of subjectivity in grades. Assessment involves multiple raters and checks on the
reliability and validity of scores.
Grades don't tell us what information the student knows and doesn't know. Assessments
are intended to provide additional, more descriptive information so that we know more
about what each assessment score says about a student's ability level.

Why don't student opinions or satisfaction count as assessment?

Why can't we just ask students if they thought the program was effective? If students are
satisfied with their learning and the program quality, isn't that evidence of effectiveness?
The purpose of assessment is to provide information about the student learning and
development that occurs as a result of a program. Students' opinions and satisfaction
ratings are considered indirect measures of student learning. They provide some
information, but it is insufficient for making inferences about the program. In order to feel
comfortable basing decisions about a program on assessment results, the assessment
needs to use a direct measure of student learning, that is, a measure whose scores
reflect students' actual level of knowledge or development.

Center for Assessment and Research Studies
MSC 6806
Harrisonburg, VA 22807
540.568.6706
programassessment@jmu.edu
