
Chapter 10: Validity of Research Results in Quantitative, Qualitative, and Mixed Research

Answers to Review Questions

10.1. What is a confounding variable, and why do confounding variables create problems in research studies?
An extraneous variable is a variable that MAY compete with the independent variable in explaining the outcome of a study. A confounding variable (also called a third variable) is a variable that DOES cause a problem because it is empirically related to both the independent and dependent variable. A confounding variable is a type of extraneous variable (it's the type that we know is a problem, rather than the type that might potentially be a problem).

10.2. Identify and define the four different types of validity that are used to evaluate the inferences made from the results of quantitative studies.
1. Statistical conclusion validity. Definition: The degree to which one can infer that the independent variable (IV) and dependent variable (DV) are related and the strength of that relationship.
2. Internal validity. Definition: The degree to which one can infer that a causal relationship exists between two variables.
3. Construct validity. Definition: The extent to which a higher-order construct is well represented (i.e., well measured) in a particular research study.
4. External validity. Definition: The extent to which the study results can be generalized to and across populations of persons, settings, times, outcomes, and treatment variations.

10.3. What is statistical conclusion validity, and what is the difference between null hypothesis significance testing and effect size estimation?
Statistical conclusion validity is the degree to which one can infer that the independent variable (IV) and dependent variable (DV) are related and the strength of that relationship. Null hypothesis significance testing (a major topic in a later chapter) is used to determine whether we can reject the null hypothesis (which says there is NO relationship present) and accept the alternative hypothesis (which says there IS a relationship). Note that when we reject the null hypothesis, the researcher says that the relationship is statistically significant. Effect size estimation involves the use of some type of effect size indicator (such as the percentage of variance explained, the size of the correlation, the size of the difference between two group means, etc.) to inform you of the size or strength of an observed relationship.

In other words, null hypothesis testing tells us whether we have observed a real (i.e., non-chance) relationship, and an effect size indicator tells us how strong a significant relationship is.
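To make this distinction concrete, here is a minimal sketch in Python (the data, group sizes, and group means are invented for illustration; the textbook itself presents no code). It analyzes two simulated groups once with an independent-samples t test (null hypothesis significance testing) and once with Cohen's d (one common effect size indicator).

```python
# Minimal illustration (simulated, hypothetical data): statistical significance
# versus effect size for a two-group comparison.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
treatment = rng.normal(loc=52.0, scale=10.0, size=100)  # hypothetical posttest scores
control = rng.normal(loc=48.0, scale=10.0, size=100)

# Null hypothesis significance testing: can we reject the null of "no relationship"?
t_stat, p_value = stats.ttest_ind(treatment, control)

# Effect size estimation: Cohen's d, the standardized difference between group means.
pooled_sd = np.sqrt((treatment.var(ddof=1) + control.var(ddof=1)) / 2)
cohens_d = (treatment.mean() - control.mean()) / pooled_sd

print(f"t = {t_stat:.2f}, p = {p_value:.4f}   (is the relationship non-chance?)")
print(f"Cohen's d = {cohens_d:.2f}            (how strong is the relationship?)")
```

Note that with very large samples even a trivially small difference can be statistically significant, which is exactly why an effect size indicator is reported alongside the p value.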

10.4. What is internal validity, and why is it so important in being able to make causal inferences?
Internal validity is defined as the "approximate validity with which we infer that a relationship between two variables is causal" (Cook and Campbell, 1979, p. 37). Often in research we want to be able to make causal inferences (i.e., state that two variables are causally related). To do this, we must have internal validity, which is obtained through the use of design features and control techniques. The best designs are the strong experimental designs, and the best control technique is random assignment to groups (a minimal sketch of random assignment appears after question 10.6 below). Note that it is essential for us to be able to make causal inferences because doing so helps us to know how to improve the world (e.g., find effective teaching practices, find ways to help people reach positive mental health, etc.). If you listen to your everyday language, you will see that cause and effect is embedded in your daily thinking.

10.5. What are the two types of causal relationships, and how do these two types of causal relationships differ?
1. Causal description involves describing the consequences of manipulating an independent variable.
2. Causal explanation involves more than just causal description; it involves explaining the mechanisms through which and the conditions under which a causal relationship holds. That is, causal explanation includes the use of mediating/intervening variables and/or moderating variables. (Definitions of these terms are in Table 2.2.)

10.6. What type of evidence is needed to infer causality, and how does each type of evidence contribute to making a causal inference?
The three necessary conditions for cause and effect are (1) variable A and variable B must be related (the relationship condition), (2) proper time order must be established (the temporal antecedence condition), and (3) the relationship between variable A and variable B must not be due to some confounding extraneous or third variable (the lack of alternative explanation condition). If you are going to argue that causation is occurring, then you must address each of the three conditions. You must also make sure that none of the threats to internal validity discussed in the chapter represents an alternative explanation for the research results. (A handy table showing these three necessary conditions for inferring causal relationships is given in Table 13.1 in Chapter 13. This idea applies throughout the book.)
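As mentioned in the answer to question 10.4, the best control technique for internal validity is random assignment to groups. The sketch below is a hypothetical illustration (invented participant IDs, not from the textbook) of how simple the procedure is: shuffle the participant list and split it into groups.

```python
# Minimal illustration (hypothetical participants): random assignment to
# a treatment group and a control group.
import random

participants = [f"P{i:02d}" for i in range(1, 21)]  # 20 hypothetical participant IDs
random.shuffle(participants)                        # put the list in random order

midpoint = len(participants) // 2
treatment_group = participants[:midpoint]
control_group = participants[midpoint:]

print("Treatment group:", treatment_group)
print("Control group:  ", control_group)
```

With enough participants, chance alone tends to equate the groups on measured and unmeasured characteristics alike, which is what rules out most alternative explanations for a difference at the posttest.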

10.7. What is an ambiguous temporal precedence threat, and why does it threaten internal validity?
If you look again at the three necessary conditions for cause and effect listed in the last question, you will see that ambiguous temporal precedence simply means that you have not met condition two (i.e., you have not established proper time order with your variables). For example, if cancer was observed to occur before smoking, you would have failed to meet the requirement of proper time order (smoking must occur before the onset of cancer if you plan on arguing that smoking causes cancer). Ambiguous temporal precedence is formally defined as the inability of the researcher (based on the data) to specify which variable is the cause and which is the effect. If you cannot meet this necessary condition but your variables are related, then you need to just say that the two variables are related (i.e., you cannot say that they are causally related).

10.8. What is a history threat, and how does it operate?
Whenever you measure your dependent variable with a pretest, followed by implementation of a treatment, followed by the measurement of the dependent variable again at the posttest, you should worry about the history effect. You hope to conclude that the difference between the pretest and the posttest is due to the treatment, but the history threat can cause problems. The history threat refers to any event, other than the planned treatment event, that occurs between the pretest and posttest measurement and has an influence on the dependent variable. If both a treatment and a history event occur between the pretest and posttest in a one-group design, you will not know whether the observed difference between the pretest and posttest is due to the treatment or due to the history event. In short, those events are confounded.

10.9. What is a maturation threat, and how does it operate?
Let's assume again that you are using the design shown in Figure 10.1 (see your textbook). In that design, the effect of the treatment is estimated by the change measured from the pretest to the posttest on the outcome (i.e., dependent) variable. Maturation is a problem that can threaten the researcher's ability to conclude that the treatment caused or produced the change from pretest to posttest. Maturation is any physical or mental change that occurs over time and affects performance on the dependent variable. Children are especially prone to maturation because they are naturally changing so rapidly. In short, if a maturation effect is operating, it is confounded with the treatment, and you do not know whether the change observed from pretest to posttest is due to the treatment or simply due to maturation.

10.10. What is a testing threat, and why does it exist?
The testing effect is another threat that can occur when using the design shown in Figure 10.1. Testing is any change in the scores on the second administration of a test that results from having previously taken the test. Again, in the one-group pretest-posttest design shown in Figure 10.1, testing would be a threat if the participants were affected by having taken the pretest. That effect would be confounded with the treatment effect.

10.11. What is an instrumentation threat, and when would this threat exist?
An instrumentation effect is another problem that can occur when using the design shown in Figure 10.1. Instrumentation is any change that occurs over time in the way the dependent variable is measured. Again, in the one-group pretest-posttest design shown in Figure 10.1, instrumentation would be a threat to internal validity if the way the dependent variable was measured changed from time one (pretest) to time two (posttest). The effect would be confounded with the treatment effect.

10.12. What is a regression artifact threat, and why does this threat exist?
Another problem that can occur when using the design shown in Figure 10.1 is the regression artifact effect (sometimes called "regression to the mean"). A regression artifact is defined as the tendency of very high scores to become lower over time and of very low scores to become higher over time. Again, in the one-group pretest-posttest design shown in Figure 10.1, regression artifacts would be a threat if you had selected participants with extremely high scores (e.g., on the SAT). This is because some of these high scorers probably did a little better than they would normally do, and their scores will be a little lower when they take the test again. This regression artifact would be confounded with any treatment effect. (A short simulation illustrating regression to the mean appears after question 10.13 below.)

10.13. What is a differential selection threat, and when would this threat exist?
Differential selection is defined as selecting participants for the various treatment groups who have different characteristics. This is not a threat to the design we have been discussing so far (i.e., the one-group pretest-posttest design shown in Figure 10.1), because that design does give you a read on the change for the people in the study. It is a threat for the design shown in Figure 10.2 (see your textbook) as long as there is no random assignment to the groups (random assignment will usually prevent differential selection from occurring because it will, on average, make the groups the same on all variables other than the variable manipulated by the researcher). When you have two or more groups (and no random assignment to the groups), any difference observed between the groups might be due to the characteristics of the people in the different groups rather than to the treatment. In other words, selection variables such as those shown in Table 10.1 might be the real reason that the groups differ. In short, you cannot conclude that the observed difference between the groups at the posttest is due to the different treatments, because it is confounded with participant characteristics.
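The short simulation below (invented numbers, no real data, and no treatment at all) illustrates the regression artifact described in question 10.12: participants selected for extreme pretest scores score closer to the mean at posttest purely because of measurement error.

```python
# Minimal illustration (simulated data): regression to the mean.
# True ability is stable; each observed score is ability plus random error.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
ability = rng.normal(500, 80, n)           # stable "true score" on an SAT-like scale
pretest = ability + rng.normal(0, 40, n)   # observed pretest = true score + error
posttest = ability + rng.normal(0, 40, n)  # observed posttest = true score + new error

top = pretest > np.percentile(pretest, 95)  # keep only the extreme high pretest scorers
print(f"Selected group, pretest mean:  {pretest[top].mean():.1f}")
print(f"Selected group, posttest mean: {posttest[top].mean():.1f}  (closer to the overall mean)")
```

If a treatment had been given between the two testings in a one-group design, this drift toward the mean would be confounded with the treatment effect, which is why regression artifacts threaten internal validity.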

10.14. What is meant by an additive and interactive effect as a threat to internal validity?
Additive and interactive effects refers to the fact that the threats to internal validity can sometimes combine to produce a bias in the study that threatens our ability to conclude that the independent variable is the cause of the differences in the dependent variable. One example of this kind of threat is the selection-history effect, which occurs when an event in a multi-group design (such as the one shown in Figure 10.2) differentially affects the different comparison groups. For example, if someone came into one group's room and shouted that the president had been shot, but did not go into the other group's room, we would expect a differential effect. Another example is the selection-maturation effect, which occurs in a multi-group design when the participants in one group experience a different rate of maturation than the participants in another group.

10.15. How does differential attrition threaten internal validity?
Attrition simply refers to the fact that participants sometimes drop out of a research study. Differential attrition can occur in a multi-group design (not a single-group design), and it is defined as the differential loss of participants from the various comparison groups. This is a problem in the design shown in Figure 10.2 because the groups can become different because of the people dropping out rather than because of the treatment. In other words, the differences due to differential attrition and the differences due to the treatments are confounded.

10.16. What is external validity, and why is it important?
External validity is the degree to which the results of a study can be generalized to and across populations of persons, settings, times, outcomes, and treatment variations. In short, external validity has to do with generalizing.

10.17. What is population validity, and why is it difficult to achieve?
Population validity is the degree to which the results of the study can be generalized to individuals who were not included in the study. It is difficult to achieve because, first, in experimental research it is usually not feasible to randomly select from the target population (e.g., how would you get a random sample of people with dyslexia?). Also, even if we get a random sample of the accessible population (i.e., the research participants who are available for participation in the research study), we would still often find that the accessible population is different from the target population (the larger population to whom the study results are to be generalized).

10.18. What is ecological validity?
Ecological validity is the degree to which one can generalize the results of the study across different settings and different contexts.

10.19. What is temporal validity?
Temporal validity is the degree to which one can generalize the results of the study across time (e.g., do results found previously still apply, and will results found today apply in the future?).

10.20. What is treatment variation validity, and why can this be a threat to external validity?
Treatment variation validity is the degree to which one can generalize the results of the study across variations of the treatment (i.e., if the treatment were varied a little, would the results be similar?).

10.21. What is outcome validity?
Outcome validity is the degree to which one can generalize the results of the study across different but related dependent variables (e.g., if a study showed an effect on self-esteem, would it also show an effect on the related construct of self-efficacy?).

10.22. What is construct validity, and how is it achieved?
Construct validity is the degree to which a construct is represented (i.e., measured well) in a research study. Basically, in all research studies we want to have good measurement.

10.23. What is operationalism, and what is its purpose?
Operationalism refers to the process of representing constructs by a specific set of steps or operations. In other words, we want to measure things well, and we want to make it clear to our readers exactly how we carried out our measurement (so they can judge for themselves how good our measurement was).

10.24. What is multiple operationalism, and why is it used?
Multiple operationalism refers to the use of two or more measures (rather than just one measure) to represent a construct. The use of multiple measures of a single construct gives you your best chance of fully representing the construct. The worst way to measure something is to try to measure it with a single item; for example, you certainly would not want to measure IQ with a single item, right? (A minimal sketch of scoring a multi-item measure appears after question 10.26 below.)

10.25. What is treatment diffusion, and when is it most likely to occur?
Treatment diffusion means that the participants in one treatment condition are exposed to information from the other treatment condition. It is most likely to occur when the groups are in close proximity. It is also more likely to occur in field research than in laboratory research, because in the former the researcher has much less control over the environment in which the research takes place.

10.26. What is meant by research validity in qualitative research?
In any kind of research we want our research findings to be trustworthy and defensible. That is what we mean by research validity in qualitative research.
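As noted in the answer to question 10.24, a construct is best represented by multiple measures. The sketch below is a hypothetical illustration (the items and responses are invented, not taken from the textbook or from any published scale) of the simplest form of this idea: several item scores are combined into one composite score rather than relying on a single item.

```python
# Minimal illustration (hypothetical items and responses): scoring a
# multi-item measure of a construct such as self-esteem.
self_esteem_items = {  # 5-point ratings from one hypothetical participant
    "I feel good about myself": 4,
    "I have a number of good qualities": 5,
    "I can do things as well as most other people": 3,
    "Overall, I am satisfied with myself": 4,
}

composite = sum(self_esteem_items.values()) / len(self_esteem_items)
print(f"Composite self-esteem score (mean of {len(self_esteem_items)} items): {composite:.2f}")
```

In practice a researcher would also check the internal consistency of the items (e.g., coefficient alpha) before trusting the composite.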

10.27. Why is researcher bias a threat to validity, and what strategies are used to reduce this effect?
Researcher bias occurs when a researcher selectively notices only the results that are consistent with what he or she wants or expects to find. The researcher must be very careful to avoid this. One strategy is called reflexivity, which refers to self-reflection by the researcher on his or her biases and predispositions. The point of reflexivity is to see, and attempt to minimize, the influence of your personal biases. Another important strategy for minimizing researcher bias (in addition to reflexivity) is negative-case sampling (i.e., purposively looking for and, if present, carefully examining cases that disconfirm your expectations).

10.28. What is the difference between descriptive validity, interpretive validity, and theoretical validity?
Descriptive validity refers to the factual accuracy of the account as reported by the researcher. Interpretive validity means that the qualitative researcher accurately portrays the meanings given by the participants to what is being studied; this means the researcher understands how the participants think and can portray the participants' meanings about things. Theoretical validity refers to the degree to which a theoretical explanation developed to explain the data actually fits the data. As you can see, one has to do with accurate description (descriptive validity), one has to do with getting and representing the insider's view (interpretive validity), and one has to do with the explanation or theory fitting the data (theoretical validity).

10.29. What strategies are used for promoting descriptive, interpretive, and theoretical validity?
The strategies for promoting validity in qualitative research are shown in Table 10.2. For descriptive validity, investigator triangulation is especially helpful. For interpretive validity, participant feedback and the use of low-inference descriptors are especially helpful. For theoretical validity, the following are especially helpful: extended fieldwork, theory triangulation, pattern matching, and peer review.

10.30. How is external validity assessed in qualitative research, and why is qualitative research typically weak on this type of validity?
You will recall that external validity refers to the degree to which you can generalize your findings. External validity is often weak in qualitative research because only a few cases are typically examined. In fact, qualitative researchers are often far less interested in obtaining external validity than in carrying out a good in-depth examination of the cases or group and the context in which they are located.
