
Research Data Collection: I. Methods of Data Collection; II. Evaluating Data Quality


Janet Sullivan Wilson, PhD, RN

I. Data Collection Methods


After a researcher defines the things, phenomena, or variables to be studied, a problem and hypothesis are formulated. The next step is for the researcher to determine how the variables or things being studied will be measured, observed, or recorded. Appropriate data collection is essential to the validity of a study.

Data Collection Methods Vary According to:

- Structure
- Quantifiability
- Obtrusiveness
- Objectivity

Data Sources

A. Existing Data:
- Historical
- Secondary analysis of data collected by others
- Hospital records

Data Sources (cont.)

B. New Data:
- Self-reports: unstructured, semi-structured, and structured interviews; focus group interviews; life histories; diaries
- Direct observation
- Biophysiological measures

Types of Structured Self-Reports

- Questionnaires: closed-ended (fixed-alternative) and open-ended items
- Instruments: content-area questions developed and pretested
- Scales: social/psychological scales (e.g., Likert, semantic differential, summated rating scale, visual analog scale [VAS])
- Vignettes, projective techniques, Q-sorts (problem = response-set biases)
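The summated rating (Likert) scale named above yields a total score by adding a respondent's item ratings, with negatively worded items reverse-coded so that response-set biases (e.g., acquiescence) do not inflate the score. A minimal sketch, assuming hypothetical item names and a 5-point scale:

```python
def likert_total(responses, reverse_items=(), points=5):
    """Summated rating (Likert) scale score for one respondent.

    responses: dict mapping item id -> raw rating (1..points)
    reverse_items: ids of negatively worded items to reverse-code
    """
    total = 0
    for item, raw in responses.items():
        # Reverse-code: on a 5-point scale, 1 <-> 5, 2 <-> 4, etc.
        total += (points + 1 - raw) if item in reverse_items else raw
    return total

# Hypothetical 4-item scale; item "q3" is negatively worded
answers = {"q1": 4, "q2": 5, "q3": 2, "q4": 3}
print(likert_total(answers, reverse_items={"q3"}))  # 4 + 5 + (6 - 2) + 3 = 16
```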

Unstructured Observational Methods

- Participant observation of: the physical setting, the participants, the activities, the process, frequency/duration, the outcomes
- Recording: logs, field notes, theoretical notes, methodological notes, personal notes

Structured Observational Methods

- Category systems
- Checklists
- Rating scales
- Time sampling
- Event sampling

Biophysiologic Measures

- In vivo (measurements made within the body) and in vitro (performed outside the body, e.g., laboratory analysis of specimens)
- Advantages: precision; objective, valid measurement; may be less costly
- Disadvantages: the measuring tool itself may be an extraneous variable; possible harm to participants; interference; may be too precise to show the whole picture

II. Evaluating Quantitative Data Quality: Reliability & Validity


- Reliability = the degree of consistency or accuracy with which an instrument measures an attribute (the higher the reliability, the less error)
- Reliability coefficient = the numerical measure of this consistency (r = .00-1.00)
- Stability: assessed with the test-retest method
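The test-retest reliability coefficient is ordinarily the Pearson correlation between scores from two administrations of the same instrument to the same respondents. A minimal sketch, with hypothetical scores from five respondents:

```python
def pearson_r(x, y):
    """Pearson correlation coefficient between two paired score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical scale scores from the same respondents two weeks apart
test_scores = [20, 25, 31, 27, 22]
retest_scores = [22, 24, 30, 28, 21]
print(round(pearson_r(test_scores, retest_scores), 2))  # an r near 1 indicates a stable instrument
```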

Reliability (cont.)

- Internal consistency: tested with split-half techniques or Cronbach's alpha (coefficient alpha)
- Equivalence: determines the consistency (equivalence) of the instrument; tested with inter-rater (inter-observer) reliability
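Cronbach's alpha can be computed directly from the item variances and the variance of the summed total score: alpha = (k / (k - 1)) * (1 - sum of item variances / variance of totals), where k is the number of items. A minimal sketch using only the standard library; the item scores are hypothetical:

```python
from statistics import variance

def cronbach_alpha(items):
    """Cronbach's (coefficient) alpha for internal consistency.

    items: list of k lists, each holding one item's scores
           across the same n respondents.
    """
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]   # per-respondent total score
    item_var_sum = sum(variance(col) for col in items)
    return (k / (k - 1)) * (1 - item_var_sum / variance(totals))

# Three hypothetical 5-point Likert items answered by five respondents
item_scores = [
    [4, 5, 3, 4, 2],
    [4, 4, 3, 5, 2],
    [5, 4, 2, 4, 3],
]
print(round(cronbach_alpha(item_scores), 2))  # values near 1 indicate high internal consistency
```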

Validity

- Validity is the degree to which an instrument measures what it is supposed to measure
- Types of validity: content, criterion-related, predictive, construct

Ways to Assess Qualitative Data

- Credibility
- Dependability
- Confirmability
- Transferability

Assessment of Qualitative Data: Credibility

Credibility is established through:
- Prolonged engagement
- Persistent observation
- Triangulation (of data sources, investigators, theories, methods)
- External checks: peer debriefing, member checks
- Searching for disconfirming evidence
- Researcher credibility

Assessment of Qualitative Data: Dependability

Dependability refers to the stability of data over time and across conditions. Ways to assess dependability:
- Stepwise replication
- Inquiry audit

Assessment of Qualitative Data: Confirmability

- Confirmability refers to the neutrality or objectivity of the data
- An audit trail and inquiry audits are used to establish both confirmability and dependability
- Thick description

Assessment of Qualitative Data: Transferability


Transferability refers to the extent to which the findings can be transferred to other settings or groups
