Assessment is...
- A process
- Conducted to improve educational programs

Assessment is NOT...
- An end goal
- A scholarly endeavor
Why do assessment?
There are three types of reasons people conduct program assessment. Each type
addresses different program needs and each benefits different program stakeholders.
Program Improvement
Shows program developers the actual impact the program has on students
Recruitment
Provides parents with evidence of the value a program has for their child
Accountability
Demonstrates to administrators and other external stakeholders that the program achieves its intended outcomes
Assessment is a six-step cyclical process. The diagram shows the steps of the cycle, each of which is described in detail below.
Program Objectives
At this stage, program leaders clearly state the impact the program is expected to have
on participants (i.e., the program goals). These goals are broken down into more specific
statements called learning objectives: statements that specify an observable behavior
and the expected level of that behavior after completing the program.
The observable behavior in the objective statement is what is measured for assessment.
The criterion in the objective statement is the standard that determines whether the
objective was achieved.
PASS provides a workshop on writing learning objectives. Also, please see
the "Objectives Dos and Don'ts List" and the "Effective Objectives Checklist" for more
information.
Assessment Design
Next, you must plan how and when the observable behavior stated in the objective will
be measured. The first challenge in this step is identifying an assessment instrument to
measure the observable behavior (contact PASS for information about assessment
instruments).
After finding an instrument, decide how and when it will be used to gather assessment
data. Commonly, instruments are administered to program participants before and after
the program so that change in the scores can be examined (i.e., pre-post testing).
Ideally, the instrument will be given to a group of program participants and
nonparticipants, so that scores can be compared across these groups (i.e., control vs.
treatment groups).
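The pre-post, control-vs-treatment design described above can be sketched as a simple data layout. This is a minimal illustration with hypothetical scores and a hypothetical `mean_gain` helper, not data from any real program:

```python
from statistics import mean

# Hypothetical pre/post scores for a pre-post, control-vs-treatment design.
# Each record: (group, pre_score, post_score)
records = [
    ("treatment", 55, 72), ("treatment", 60, 78), ("treatment", 48, 65),
    ("control",   54, 58), ("control",   61, 63), ("control",   50, 52),
]

def mean_gain(group):
    """Average post-minus-pre change in scores for one group."""
    gains = [post - pre for g, pre, post in records if g == group]
    return mean(gains)

# Comparing the average gain of participants against nonparticipants
# is what lets us attribute score change to the program itself.
print("treatment gain:", mean_gain("treatment"))
print("control gain:  ", mean_gain("control"))
```

A pre-post comparison alone shows whether scores changed; adding the control group shows whether they changed more than they would have without the program.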
When designing the assessment plan, think about how each possible result will be
used. Proceed with an assessment plan only if there is a clear plan of action for how the
results will be used.
Contact PASS for information about how to get help with selecting your assessment
instrument and designing an assessment plan.
Data Collection
Make the assessment happen! Administer the assessment instrument at the planned
times and collect trustworthy data.
Things to consider in this step include whether the instrument will be administered online
or on paper, who will administer the instrument, and under what conditions. The
conditions for administration should be standardized as much as possible. This means
that the conditions should be the same for all participants taking the instrument.
Standardization increases the comparability of individuals' scores.
Finally, how will students be motivated to complete the instrument? Often with
assessment, results on the instruments have little to no consequences for students.
Thus, motivating students to take the instrument seriously is the responsibility of the
program or the person administering the instrument. Consult with PASS for ideas about
ways to improve student motivation on assessments.
Analyze Data
Information gathered through the assessment process can be either qualitative (textual)
or quantitative (numeric). There are general guidelines for appropriate analysis of each
type of data. To ensure that the data is analyzed in a way that maximizes the
trustworthiness of the conclusions drawn from it, a person skilled in data analysis
methodologies should guide this phase of assessment.
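As one small sketch of what quantitative analysis at this stage can look like, the snippet below computes a paired t-statistic on hypothetical pre/post scores (the same-participants design from the Assessment Design step). The data and variable names are assumptions for illustration; a real analysis would be chosen and interpreted by someone skilled in these methods:

```python
from math import sqrt
from statistics import mean, stdev

# Hypothetical paired pre/post scores for the same five participants.
pre  = [55, 60, 48, 62, 51]
post = [72, 78, 65, 70, 66]

# Per-participant change in score.
diffs = [b - a for a, b in zip(pre, post)]
n = len(diffs)

# Paired t-statistic: mean difference divided by its standard error.
# A large t suggests the average gain is unlikely to be chance alone.
t = mean(diffs) / (stdev(diffs) / sqrt(n))
print(f"mean gain = {mean(diffs):.1f}, t({n - 1}) = {t:.2f}")
```

In practice a statistics package would also report a p-value and effect size; the point here is only that quantitative assessment data supports this kind of formal comparison, which opinion surveys do not.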
Contact PASS for information about how to get help with data analysis.
Report Results
Every program has multiple stakeholders and audiences, each interested in
one or more aspects of the assessment process. In order to reap the full benefits of
conducting assessment, programs should budget and plan for presenting their
assessment to each relevant audience. At a minimum, programs should present the
results to those responsible for acting on them.

Grades vs. Assessment
Course grades are assigned to individual students to indicate the extent to which a
student has met the instructor's expectations for a given set of course requirements.
Assessment results are intended to reflect the extent to which all students achieve the
objectives of a program. Clearly, grades and assessment differ in that one deals with
individuals and courses and the other deals with groups and programs. Why does this
difference matter?
How grades are assigned varies across courses and course sections. This means we
can't compare grades across courses to make inferences about the program as a whole!
Often, things unrelated to program objectives are considered when assigning grades.
This means that grades are not a pure measure of student learning.
Grades are assigned based on one instructor's judgment only. This means that there is a
lot of subjectivity in grades. Assessment involves multiple raters and checks on the
reliability and validity of scores.
Grades don't tell us what a student does and doesn't know. Assessments are
intended to provide additional, more descriptive information, so that we know
more about what each assessment score says about a student's ability level.
Why can't we just ask students if they thought the program was effective? If students are
satisfied with their learning and the program quality, isn't that evidence of effectiveness?
The purpose of assessment is to provide information about the student learning and
development that occurs as a result of a program. Students' opinions and satisfaction
ratings are considered indirect measures of student learning. They provide some
information, but not enough to support inferences about the program. To feel
comfortable basing decisions about a program on assessment results, the assessment
needs to use a direct measure of student learning: one whose scores reflect students'
actual level of knowledge or development.