
VALIDITY

The degree to which a test really measures what it claims to assess; the extent to which that claim is truthful and genuine.

What is the connection between reliability and validity? Is one needed to achieve the other? (Yes.)

A test must be reliable if it is to be valid. Since reliability reflects the amount of error in a test, you can't say that you're accurately measuring some trait or characteristic if scores vary a great deal. Reliability is necessary for validity, but not sufficient (more information is needed).

You CAN have good reliability WITHOUT validity. You can attain consistent
scores, but the test might not be measuring what you think you're
measuring. For example:

Back in the 1960s and 1970s, the Frostig Test of Visual Perception yielded reliable scores on test-retest, but its authors claimed that it measured the degree to which a student had a reading impairment. We now know that troubles with visual perception DO NOT necessarily contribute to reading problems. (Many kids with good visual perception have poor reading ability, while many kids with very poor visual perception skills are excellent readers. Most poor readers have no visual perception problems.)

As mentioned in Key Concepts, reliability and validity are closely related. To better understand
this relationship, let's step out of the world of testing and onto a bathroom scale.

If the scale is reliable, it tells you the same weight every time you step on it as
long as your weight has not actually changed. However, if the scale is not
working properly, this number may not be your actual weight. If that is the
case, this is an example of a scale that is reliable, or consistent, but not
valid. For the scale to be valid and reliable, not only does it need to tell you the
same weight every time you step on the scale, but it also has to measure your
actual weight.
Switching back to testing, the situation is essentially the same. A test can be
reliable, meaning that the test-takers will get the same score no matter when or
where they take it (within reason, of course). But that doesn't mean that it is valid
or measuring what it is supposed to measure. A test can be reliable without being
valid. However, a test cannot be valid unless it is reliable.

“A valid test is always reliable but a reliable test is not necessarily valid”

Posted on November 29, 2011 by alhoward
Reliability and validity are two important characteristics of any measurement
procedure.

Reliability has been defined as ‘the extent to which results are consistent over time…
and if the results of a study can be reproduced under a similar methodology, then the
research instrument is considered to be reliable.’ (Joppe, 2000). This means a test is considered reliable if it produces the same results each time it is carried out. The more consistent the results produced, the higher the reliability of the measurement procedure.

It is a common mistake to assume the terms “reliability” and “validity” have the same
meaning. While they are related, the two concepts are very different. In an effort to clear up
any misunderstandings, I have defined each here for you.

Reliability
Of the two terms, reliability is the simpler concept to explain and understand. If you are
focusing on the reliability of a test, all you need to ask is—are the results of the test
consistent? If I take the test today, a week from now and a month from now, will my results
be the same?

If an assessment is reliable, your results will be very similar no matter when you take the
test. If the results are inconsistent, the test is not considered reliable.
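In practice, this kind of test-retest reliability is commonly summarized with a correlation coefficient between two administrations of the same test. The following is a minimal Python sketch using invented scores for ten hypothetical test-takers (none of these numbers come from a real assessment); a coefficient near 1.0 indicates consistent results.

# Minimal sketch of test-retest reliability with invented data.
import numpy as np

# Scores for the same ten hypothetical people, one week apart.
scores_week_1 = np.array([72, 85, 90, 66, 78, 88, 59, 95, 70, 81])
scores_week_2 = np.array([74, 83, 91, 65, 80, 86, 61, 94, 72, 79])

# Test-retest reliability is often reported as the Pearson correlation
# between the two administrations.
reliability = np.corrcoef(scores_week_1, scores_week_2)[0, 1]
print(f"Test-retest reliability (Pearson r): {reliability:.2f}")

With real data, the resulting coefficient would be judged against commonly cited reliability benchmarks (often 0.80 or higher for standardized tests).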

Validity
Validity is a bit more complex because it is more difficult to assess than reliability. There
are various ways to assess and demonstrate that an assessment is valid, but in simple
terms, validity refers to how well a test measures what it is supposed to measure.
There are several approaches to determine the validity of an assessment, including the
assessment of content, criterion-related and construct validity.

• An assessment demonstrates content validity when the criteria it measures align with the content of the job. The extent to which that content is essential to job performance (versus merely useful to know) is also part of determining how well the assessment demonstrates content validity.

For example, the ability to type quickly would likely be considered a large and crucial aspect
of the job for an executive secretary compared to an executive. While the executive is
probably required to type, such a skill is not nearly as important to performing that job.
Ensuring an assessment demonstrates content validity entails judging the degree to which
test items and job content match each other.
• An assessment demonstrates criterion-related validity if its results can be used to predict a facet of job performance. Determining whether an assessment predicts performance requires that assessment scores be statistically evaluated against a measure of employee performance.

For example, an employer interested in understanding how well an integrity test identifies individuals who are likely to engage in counterproductive work behaviors might compare applicants’ integrity test scores to how many accidents or injuries those individuals have on the job, whether they engage in on-the-job drug use, or how many times they ignore company policies. The degree to which the assessment is effective in predicting such behaviors is the extent to which it exhibits criterion-related validity (a small computational sketch follows this list).

• An assessment demonstrates construct validity if it is related to other assessments measuring the same psychological construct, a construct being a concept used to explain behavior (e.g., intelligence, honesty).

For example, intelligence is a construct that is used to explain a person’s ability to understand and solve problems. Construct validity can be evaluated by comparing intelligence scores on one test to intelligence scores on other tests (e.g., the Wonderlic Cognitive Ability Test to the Wechsler Adult Intelligence Scale).
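Both criterion-related and construct (convergent) validity are usually quantified as correlations between the assessment's scores and some external measure. The sketch below, referenced in the criterion-related example above, uses entirely invented numbers for eight hypothetical applicants: an integrity test score, a count of later policy violations (the criterion), and scores on a second test assumed to measure the same construct. It illustrates the arithmetic only, not a real validation study.

# Minimal sketch of criterion-related and convergent (construct) validity
# using invented data for eight hypothetical applicants.
import numpy as np

integrity_test    = np.array([55, 62, 48, 70, 66, 52, 75, 60])
policy_violations = np.array([4, 2, 6, 1, 2, 5, 0, 3])          # criterion measure
other_integrity   = np.array([58, 65, 45, 72, 63, 50, 78, 57])  # second test of the same construct

# Criterion-related validity: the test should correlate (negatively here)
# with the behavior it is meant to predict.
criterion_r = np.corrcoef(integrity_test, policy_violations)[0, 1]

# Convergent (construct) validity: the test should correlate positively
# with another measure of the same construct.
convergent_r = np.corrcoef(integrity_test, other_integrity)[0, 1]

print(f"Criterion-related validity coefficient: {criterion_r:.2f}")
print(f"Convergent validity coefficient:        {convergent_r:.2f}")

A strong negative correlation with the violation count would support criterion-related validity (higher integrity scores go with fewer violations), while a strong positive correlation with the second test would support convergent construct validity.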
Reliable and Valid?

The tricky part is that a test can be reliable without being valid. However, a test cannot be
valid unless it is reliable. An assessment can provide you with consistent results, making it
reliable, but unless it is measuring what it is supposed to measure, it is not valid.

What are the biggest questions you have surrounding reliability and validity?

Citation: Huitt, W. (1999, October). Reliability and validity. Educational Psychology Interactive.
Valdosta, GA: Valdosta State University. Retrieved [date],
from http://www.edpsycinteractive.org/topics/intro/relvalid.html
Collecting quantitative data (measurement) and doing research both raise the issues of reliability and validity. The issue of reliability is essentially the same for
both measurement and research design. Reliability attempts to answer our concerns
about the consistency of the information collected (i.e., can we depend on the data or
findings?), while validity focuses on accuracy. The relationship between reliability
and validity can be confusing because measurements (e.g., tests) and research can be
reliable without being valid, but they cannot be valid unless they are reliable. This
simply means that for a test or study to be valid it must consistently (reliability) do
what it purports to do (validity). For a measurement (e.g., a test score) to be judged reliable, it should produce a consistent score; for a research study to be considered reliable, it should produce similar results each time it is replicated.

Dictionary definitions of terms used in measurement often give one only part of the
picture. For example, validity is given as the noun form of valid, which means "strong."
Unfortunately, this type of definition is not specific enough when the term is used in
certain contexts such as testing or research. Additionally, education and psychology
use validity in multiple ways, each having several subvarieties.

In the area of testing (including standardized and teacher-made tests), educators and
psychologists are concerned with content, criterion-related (predictive), and construct
validity. Disciplines that conduct research are concerned with other types of validity:
internal and external. The issues of research validity are discussed from a general
perspective by Campbell and Stanley (1966).

