
THREE TRUE EXPERIMENTAL DESIGNS

1. THE PRETEST-POSTTEST CONTROL GROUP DESIGN

Pede I. Casing, PhD – I MS Math Ed


University of Science and Technology of Southern Philippines
Cagayan de Oro City
Defining Research

Research is a systematic and objective creation of knowledge (Creswell, 2013):
• systematic – works with a system or method
• objective – unbiased; all angles presented
• creation – a creative process

[Diagram: the research cycle – pose a question, collect data to answer the question, present the answer.]


Symbolism for Diagramming Experimental Designs

X = exposure of a group to an experimental treatment


O = observation or measurement of the dependent variable
If multiple observations or measurements are taken,
subscripts indicate temporal order, i.e., O1, O2, etc.
R = random assignment of test units;
individuals selected as subjects for the experiment
are randomly assigned to the experimental groups
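
As an illustration of the R symbol, the sketch below shows one way random assignment might be carried out in practice; the subject labels and group sizes are hypothetical and not part of the original notation.

```python
import random

# Hypothetical pool of test units (subjects); labels are placeholders.
subjects = [f"S{i:02d}" for i in range(1, 21)]

# Random assignment (R): shuffle the pool, then split it evenly
# between the experimental and control groups.
random.seed(42)  # fixed seed only so the example is reproducible
random.shuffle(subjects)
half = len(subjects) // 2
experimental_group = subjects[:half]   # will receive the treatment X
control_group = subjects[half:]        # will not receive X

print("Experimental group:", experimental_group)
print("Control group:     ", control_group)
```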
Pretest-Posttest Control Group Design
• A.K.A., Before-After with Control
• True experimental design
• Experimental group tested before and after treatment exposure
• Control group tested at same two times without exposure to
experimental treatment
• Includes random assignment to groups
• Effect of all extraneous variables assumed to be the same on both
groups
• Runs the risk of a testing effect
Pretest-Posttest Control Group Design
• Diagrammed as
– Experimental Group: R O1 X O2
– Control Group: R O3 O4
• Effect of the experimental treatment equals
(O2 – O1) – (O4 – O3)
• Example
– 20% brand awareness among subjects before an advertising treatment
– 35% in experimental group & 22% in control group after the treatment
– Treatment effect equals (0.35 – 0.20) – (0.22 – 0.20) = 13%
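
A minimal sketch of the (O2 – O1) – (O4 – O3) computation, using the brand-awareness figures from the example above; the variable names are mine, not from the slides.

```python
# Observations from the example: brand awareness as proportions.
o1 = 0.20  # experimental group, pretest
o2 = 0.35  # experimental group, posttest
o3 = 0.20  # control group, pretest
o4 = 0.22  # control group, posttest

# Treatment effect = change in the experimental group
# minus change in the control group.
treatment_effect = (o2 - o1) - (o4 - o3)
print(f"Treatment effect: {treatment_effect:.0%}")  # prints "Treatment effect: 13%"
```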
“PRETEST-POSTTEST” CONTROL GROUP DESIGN

        T1   T2   T3
Trt      O    X    O
Ctrl     O         O
Experimental Validity
• Internal Validity
– Indicates whether the independent variable was the sole cause of the
change in the dependent variable
• External Validity
– Indicates the extent to which the results of the experiment are applicable
to the real world
Controls for Internal Validity
• Maturation Effect
– Effect on experimental results caused by experimental subjects maturing
or changing over time
– Example: During a daylong experiment, subjects may grow hungry, tired,
or bored
• Testing Effect
– In before-and-after studies, pretesting may sensitize subjects when taking
a test for the 2nd time.
– Example: It may cause subjects to act differently than they
would have if no pretest measures were taken.
Controls for Internal Validity
• History Effect
– Specific events in the external environment between the 1st & 2nd
measurements that are beyond the experimenter’s control
– Example: Common history effect occurs when competitors change their
marketing strategies during a test marketing experiment
– e.g. a major employer closes its plant in a test market area
• Statistical Regression Effect
– Occurs when groups have been selected on the basis of their extreme scores;
on remeasurement, extreme scores tend to move back toward the mean.
Controls for Internal Validity
• Instrumentation Effect
– Caused by a change in the wording of questions, in interviewers, or in other
procedures used to measure the dependent variable.
– Example: newly worded questions about women are interpreted differently than the earlier questions were
• Selection Effect
– Sampling bias that results from differential selection of respondents for the
comparison groups.
– Example: control and experimental groups are formed from self-selected subjects grouped by
their preference for soft drinks.
Controls for Internal Validity

• Mortality or Sample Attrition


– Results from the withdrawal of some subjects from the experiment before it is
completed
– Affects randomization
– Especially troublesome if some withdraw from one treatment group and not from the
others (or at least at different rates) e.g. subjects in one group of students withdraw
from school
Factors Jeopardizing External Validity
• Reactive or Interaction Effect of Testing
– A pretest might increase or decrease the respondent’s sensitivity or responsiveness to
the experimental variable and thus make the results obtained for a pretested
population unrepresentative of the effects of the experimental variable for the
unpretested universe from which the experimental respondents were selected.
• Interaction Effects of Selection Biases and the Experimental Variable
• Reactive Effects of Experimental Arrangements
– Preclude generalization about the effect of the experimental variable upon persons
being exposed to it in non-experimental settings.
Tests of Significance
Tests of Significance for Design 4, the “Pretest-Posttest Control Group Design”
1. A wrong statistic in common use
2. Use of gain scores and covariance (t test & ANCOVA) – see the sketch below
4. Statistics for random assignment of intact classrooms to treatments (a covariance
analysis would use pretest means as the covariate)
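
Below is a hedged sketch of approaches 2 and 4 on simulated data: an independent-samples t test on gain scores, and an ANCOVA-style model with the pretest as covariate (here at the individual level; the intact-classroom analysis in item 4 would use classroom pretest means instead). The sample sizes, scores, and effect size are invented purely for illustration.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

rng = np.random.default_rng(0)
n = 30  # subjects per group (hypothetical)

# Simulate pretest and posttest scores with a ~5-point treatment effect.
group = np.repeat(["treatment", "control"], n)
pre = rng.normal(50, 10, size=2 * n)
post = pre + np.where(group == "treatment", 5.0, 0.0) + rng.normal(0, 5, size=2 * n)
df = pd.DataFrame({"group": group, "pre": pre, "post": post, "gain": post - pre})

# (2) Independent-samples t test on gain scores (posttest minus pretest).
t, p = stats.ttest_ind(df.loc[df.group == "treatment", "gain"],
                       df.loc[df.group == "control", "gain"])
print(f"Gain-score t test: t = {t:.2f}, p = {p:.4f}")

# (2/4) ANCOVA-style model: posttest regressed on group with pretest as covariate.
ancova = smf.ols("post ~ C(group) + pre", data=df).fit()
print(ancova.summary())
```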


