
1. Instrumentation is the process of constructing research instruments that can be used appropriately in gathering data for a study. The questionnaire, the interview, and observation are the most commonly used tools in gathering data.
2. The Questionnaire. A questionnaire is a set of orderly arranged questions carefully prepared to answer the specific problems of the study. Webster's Dictionary defines "questionnaire" as a list of questions to be answered by a group of people, especially designed to get facts or information. It is a list of written questions related to a particular topic, provided with space for respondents to fill in (Good, 1959).
3. There are two types of questions that can be used in preparing a questionnaire, namely: 1) Open-ended question. The question is phrased so that respondents can freely express themselves on the subject or issue; it does not enumerate alternative responses. Examples: "How are you affected by the change of leadership in your organization?" "What do you suggest to improve the management of your organization?"
4. 2) Fixed-alternative question. This is otherwise called the closed type of question, which provides a list of choices among enumerated alternatives; hence, the respondent's answers are limited to the specified alternatives. Example 1: "What is the status of the local programs and projects initiated by municipal/city government units toward the attainment of MDG 5 (improved maternal health)?" 5 - Very Much Initiated; 4 - Much Initiated; 3 - Moderately Initiated; 2 - Slightly Initiated; 1 - Not at All Initiated.
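Tabulating a fixed-alternative item of this kind usually means counting how often each option was chosen and computing a weighted mean. The sketch below uses hypothetical responses, and the round-to-nearest-label interpretation of the mean is one common convention, not a fixed rule.

```python
# Summarize responses to a 5-point fixed-alternative (Likert-type) item.
# The responses and the interpretation brackets are illustrative assumptions.
from collections import Counter

SCALE = {5: "Very Much Initiated", 4: "Much Initiated",
         3: "Moderately Initiated", 2: "Slightly Initiated",
         1: "Not at All Initiated"}

def summarize_item(responses):
    """Return the frequency of each option and the weighted mean."""
    counts = Counter(responses)
    mean = sum(responses) / len(responses)
    return counts, mean

def interpret(mean):
    """Map a weighted mean back to the nearest scale label (assumed convention)."""
    return SCALE[min(5, max(1, round(mean)))]

responses = [5, 4, 4, 3, 5, 2, 4, 3, 4, 5]  # one entry per respondent
counts, mean = summarize_item(responses)
print(counts, round(mean, 2), interpret(mean))
```

The same summary would be produced for each fixed-alternative item in the questionnaire, giving a per-item status that can be reported directly against the scale labels.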
5. Example 2: The items below are solutions adopted to address the problems encountered by government officials that affect their governance. Rank each item from 1 to 8, with 1 as the most effective solution and 8 as the least effective. a) Performance measures and managing for results; b) Combating corruption and promoting a new work culture; c) Encouraging partnership, cooperation, and collaboration for good governance; d) Establishing feedback mechanisms; e) Providing policy and institutional support; f) Improving and strengthening public confidence in the government and justice systems; g) Disclosure of public information; h) Using information technology and clear, simplified online transactions.
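Ranking items of this kind are often tabulated by averaging the rank each item receives across respondents, with the lowest average rank indicating the most effective solution. The responses below are hypothetical; the letters a-h stand for the eight solutions listed above.

```python
# Aggregate rank-order responses: lower average rank = more effective solution.
# Each inner list is one respondent's ranking, where position i holds the
# rank (1 = most effective, 8 = least) given to item i. Data are hypothetical.
items = list("abcdefgh")
responses = [
    [1, 3, 2, 5, 4, 7, 6, 8],
    [2, 1, 4, 3, 6, 5, 8, 7],
    [1, 2, 3, 4, 5, 6, 7, 8],
]

def average_ranks(responses):
    """Return each item's mean rank across all respondents."""
    n = len(responses)
    return {item: sum(r[i] for r in responses) / n
            for i, item in enumerate(items)}

avg = average_ranks(responses)
# Sort items from most to least effective (lowest average rank first).
order = sorted(avg, key=avg.get)
print(order)
```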
6. The results of a dry run should be tabulated to find out whether the answers provide meaningful responses to the specific questions of the study. This may also suggest how to improve the questionnaire, make tabulation of the results easier, and enable the researchers to tabulate the responses accurately.
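A minimal sketch of such a dry-run check might flag items that many respondents skipped (possibly unclear) or that everyone answered identically (possibly non-discriminating). The 20% non-response threshold and the sample data are assumptions for illustration, not a standard.

```python
# Tabulate dry-run responses item by item and flag potential problem items.
# None marks a skipped item; thresholds and data are illustrative assumptions.
def flag_problem_items(data, max_skip_rate=0.2):
    """data: list of response dicts, one per respondent."""
    items = data[0].keys()
    flags = {}
    for item in items:
        answers = [r[item] for r in data]
        skipped = answers.count(None) / len(answers)
        given = [a for a in answers if a is not None]
        if skipped > max_skip_rate:
            flags[item] = "high non-response; item may be unclear"
        elif len(set(given)) <= 1:
            flags[item] = "no variation; item may not discriminate"
    return flags

dry_run = [
    {"q1": 4, "q2": None, "q3": 3},
    {"q1": 5, "q2": None, "q3": 3},
    {"q1": 3, "q2": 2,    "q3": 3},
    {"q1": 4, "q2": None, "q3": 3},
]
print(flag_problem_items(dry_run))
```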
7. The questionnaire should include clear instructions or directions on what to do with the list. Each questionnaire should include a cover letter, cordially and courteously composed, neatly organized and typed, containing the following information: 1. introductory greetings, the subject of the study, and a brief description of the significance of the study; 2. the vital role of the respondent in answering the questions;
8. b) Preparation of the Questionnaire. •A review of related literature and studies will be very helpful in preparing the questionnaire. Once a research topic has been decided and approved by the appropriate body, the formulation of specific questions follows. Examine the questionnaires used in research studies similar to your approved topic. •They can serve as a guide in formulating the questions in your questionnaire. You may also talk to people who are knowledgeable in questionnaire construction. •Draft your questions and, once the draft is complete, submit the questionnaire to your adviser, or to anyone knowledgeable in questionnaire preparation, for comments, suggestions for improvement, and editing.
9. •Find time to rewrite the questionnaire, taking into consideration and integrating the corrections and suggestions given. •Test the reliability, effectiveness, and validity of your questionnaire through a dry run. Take into consideration the clarity of the items, vagueness of statements, the time needed to answer the questions, convenience in tabulating the answers, difficulties, and other related problems. •A dry run is administered to a group of at least 20 people with the same characteristics as the respondents of your study. If, for example, your respondents are Local Government Executives (LGEs) of a city or district, your questionnaire can be administered for a dry run to some 20-30 LGEs of another city or district.
10. •Where to return the questionnaire and when (deadline). To facilitate the return of completed questionnaires, a self-addressed stamped envelope may be included for out-of-town respondents; •a guarantee of confidentiality of the information and anonymity of the respondents; •a statement of gratitude for the cooperation and participation of the respondents; •an expression of willingness to supply the respondents with the results of the study; •the personal signature of the researcher; and •an endorsement from respected and influential person/s related to the study, which may help in the retrieval of the questionnaire.
11. Republic of the Philippines
Pangasinan State University
GRADUATE SCHOOL
Urdaneta City

September 8, 2014

____________________________
____________________________
____________________________

Sir/Madam:

The undersigned is a graduate school student of Pangasinan State University, Urdaneta City, presently working on a research entitled "Development Orientation and Governance of Municipal and City Government Officials in Pangasinan". In view of this, I would like to request permission from your good office to administer questionnaires to city/municipality appointive government officials so that the objectives of the study may be fully realized.

Thank you for your favorable response to this request. More power and God bless.

Very truly yours,

NOIME F. SALON
Researcher

Noted:

ZENAIDA U. SUYAT, ED. D.
Dean

Instrument, Validity,
Reliability
Part I: The Instrument
Instrument is the general term that researchers use for a measurement device (survey, test, questionnaire,
etc.). To help distinguish between instrument and instrumentation, consider that the instrument is the
device and instrumentation is the course of action (the process of developing, testing, and using the
device).
Instruments fall into two broad categories, researcher-completed and subject-completed, distinguished by whether researchers administer the instrument or participants complete it themselves. Researchers choose which type of instrument, or instruments, to use based on the research question. Examples are listed below:

Researcher-completed instruments: rating scales, interview schedules/guides, tally sheets, flowcharts, performance checklists, time-and-motion logs, and observation forms.

Subject-completed instruments: questionnaires, self-checklists, attitude scales, personality inventories, achievement/aptitude tests, projective devices, and sociometric devices.
Usability
Usability refers to the ease with which an instrument can be administered, interpreted by the participant,
and scored/interpreted by the researcher. Example usability problems include:
1. Students are asked to rate a lesson immediately after class, but there are only a few minutes before
the next class begins (problem with administration).
2. Students are asked to keep self-checklists of their after school activities, but the directions are
complicated and the item descriptions confusing (problem with interpretation).
3. Teachers are asked about their attitudes regarding school policy, but some questions are poorly worded, which results in low completion rates (problem with scoring/interpretation).
Validity and reliability concerns (discussed below) will help alleviate usability issues. For now, we can
identify five usability considerations:
1. How long will it take to administer?
2. Are the directions clear?
3. How easy is it to score?
4. Do equivalent forms exist?
5. Have any problems been reported by others who used it?
It is best to use an existing instrument, one that has been developed and tested numerous times, such as
can be found in the Mental Measurements Yearbook. We will turn to why next.

Part II: Validity


Validity is the extent to which an instrument measures what it is supposed to measure and performs as it is designed to perform. It is rare, if not impossible, for an instrument to be 100% valid, so validity is generally measured in degrees. As a process, validation involves collecting and analyzing data to assess
the accuracy of an instrument. There are numerous statistical tests and measures to assess the validity of
quantitative instruments, which generally involves pilot testing. The remainder of this discussion focuses
on external validity and content validity.
External validity is the extent to which the results of a study can be generalized from a sample to a population. Establishing external validity for an instrument, then, follows directly from sampling. Recall
that a sample should be an accurate representation of a population, because the total population may not
be available. An instrument that is externally valid helps obtain population generalizability, or the degree
to which a sample represents the population.
Content validity refers to the appropriateness of the content of an instrument. In other words, do the
measures (questions, observation logs, etc.) accurately assess what you want to know? This is particularly
important with achievement tests. Consider that a test developer wants to maximize the validity of a unit
test for 7th grade mathematics. This would involve taking representative questions from each of the
sections of the unit and evaluating them against the desired outcomes.
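One widely used way to quantify content validity is the item-level content validity index (I-CVI): a panel of experts rates each item's relevance on a 4-point scale, and the I-CVI is the proportion of experts rating it 3 or 4. The sketch below uses hypothetical expert ratings, and the 0.78 acceptance threshold is a commonly cited convention rather than a universal rule.

```python
# Item-level content validity index (I-CVI): share of experts who rate an
# item's relevance 3 or 4 on a 4-point scale. Ratings here are hypothetical,
# and the 0.78 cut-off is an assumed convention from common practice.
def i_cvi(ratings):
    relevant = sum(1 for r in ratings if r >= 3)
    return relevant / len(ratings)

expert_ratings = {            # six hypothetical expert panelists
    "item1": [4, 4, 3, 4, 3, 4],
    "item2": [2, 3, 2, 4, 2, 3],
}
for item, ratings in expert_ratings.items():
    cvi = i_cvi(ratings)
    print(item, round(cvi, 2), "keep" if cvi >= 0.78 else "revise")
```

Applied to the unit-test example above, each candidate question would be rated against the desired outcomes, and low-I-CVI questions revised or replaced.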

Part III: Reliability


Reliability can be thought of as consistency: does the instrument consistently measure what it is intended to measure? It is not possible to calculate reliability exactly; instead, there are four general estimators that you may encounter in reading research:
1. Inter-Rater/Observer Reliability: The degree to which different raters/observers give consistent
answers or estimates.
2. Test-Retest Reliability: The consistency of a measure evaluated over time.
3. Parallel-Forms Reliability: The reliability of two tests constructed the same way, from the same
content.
4. Internal Consistency Reliability: The consistency of results across items, often measured with
Cronbach’s Alpha.
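The last of these estimators can be computed directly from item scores: Cronbach's alpha is k/(k-1) * (1 - sum of item variances / variance of total scores), where k is the number of items. A minimal sketch with hypothetical data:

```python
# Cronbach's alpha for internal consistency:
#   alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))
# Population variance is used consistently for items and totals.
from statistics import pvariance

def cronbach_alpha(items):
    """items: list of per-item score lists, all over the same respondents."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]  # each respondent's total
    item_var = sum(pvariance(scores) for scores in items)
    return k / (k - 1) * (1 - item_var / pvariance(totals))

# Hypothetical 3-item scale answered by five respondents.
items = [
    [4, 5, 3, 4, 2],
    [4, 4, 3, 5, 2],
    [5, 4, 2, 4, 3],
]
print(round(cronbach_alpha(items), 3))
```

Values closer to 1 indicate that the items move together, i.e. higher internal consistency; what counts as "acceptable" varies by field and purpose.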

Relating Reliability and Validity


Reliability is directly related to the validity of the measure. There are several important principles. First, a test can be reliable but not valid. Consider the SAT, used as a predictor of success in college. It is a reliable test (scores are consistent across administrations), though only a moderately valid indicator of success, because college success also depends on factors the test cannot capture, such as class attendance, parent-regulated study, and sleeping habits in a less structured environment.
Second, validity is more important than reliability. Using the above example, college admissions may
consider the SAT a reliable test, but not necessarily a valid measure of other quantities colleges seek, such
as leadership capability, altruism, and civic involvement. The combination of these aspects, alongside the
SAT, is a more valid measure of the applicant’s potential for graduation, later social involvement, and
generosity (alumni giving) toward the alma mater.
Finally, the most useful instrument is both valid and reliable. Proponents of the SAT argue that it is both.
It is a moderately reliable predictor of future success and a moderately valid measure of a student’s
knowledge in Mathematics, Critical Reading, and Writing.

Part IV: Validity and Reliability in Qualitative Research


Thus far, we have discussed instrumentation as related mostly to quantitative measurement. Establishing validity and reliability in qualitative research can be less precise, though participant/member checks, peer evaluation (another researcher checks the researcher's inferences based on the instrument; Denzin & Lincoln, 2005), and multiple methods (keyword: triangulation) are convincingly used. Some qualitative researchers reject the concept of validity due to the constructivist viewpoint that reality is unique to the individual and cannot be generalized. These researchers argue for a different standard for judging research quality. For a more complete discussion of trustworthiness, see Lincoln and Guba's (1985) chapter.
