!"#$%&'&(# *+,-
./012( 3++,
4%5 61"712$71&2 8579552 :1;5< =2< >=2<&? @AA5$7"


It is important to know the distinction between fixed effects and random effects. These terms denote different ways of thinking about the levels of our independent or predictor variables (e.g., grouping factors in ANOVA designs). Although we have not noted this point explicitly, all the designs and analyses we have considered up to this point have involved fixed effects.

A given factor (e.g., treatment, gender, time) is considered a fixed effect when:
1. The levels of the factor included in the experiment are the only ones in which we are interested and to which we want to generalize our results.
2. The levels of the factor included in the experiment are the only ones we would manipulate or use if we were to repeat the experiment.
3. We consider the population effects of each level of the factor (e.g., the $\alpha_j$'s) to remain constant across different repetitions of the experiment.
4. Within a given level of a factor (e.g., level 1 of factor A), the only source of variability among observations is a random error component.

Concerning point 4 above, recall our model for the one-way ANOVA:

$Y_{ij} = \mu + \alpha_j + \varepsilon_{ij}$

It turns out this model itself does not indicate whether we consider the factor manipulated (let's say factor A) to be fixed or random. What counts is the stipulation that $\mu$ and the $\alpha_j$'s are fixed constants, while $\varepsilon_{ij}$ is considered a random variable. Based on these assumptions, it can easily be shown that

$\sigma^2_Y = \sigma^2_\varepsilon$


A factor is considered random when:
1. The levels of the factor included in a particular experiment are considered a random sample drawn from a much larger population of potential levels of the factor. Let me emphasize that I'm talking here about randomly sampling levels of a factor, which is distinct from random sampling of subjects.
2. We would likely use (i.e., sample) other levels of the factor if we were to repeat the experiment. For example, if we wanted to assess the effects of the particular experimenter used in a social psychological investigation and we used 5 experimenters in Experiment #1, we would probably use 5 other experimenters in Experiment #2.
3. As a result of point #2, the effects of a given factor (the $\alpha_j$'s) will vary from replication to replication.
4. The experimenter wants to generalize conclusions about the effects of that factor beyond the specific levels manipulated or sampled in a particular experiment.
5. Within a given level of a factor (e.g., level #1 of factor X, "Experimenter #1," "Therapist #3"), there are now 2 sources of variability: random error and the variability introduced by the random selection of the particular levels of that factor (recall that such levels will not be identical across experiments).

Concerning point #5 above, we can write the one-way random effects model as:

$Y_{ij} = \mu + \alpha_j + \varepsilon_{ij}$

Note how this is identical to the one-way fixed effects model. What makes it different is the stipulation that the $\alpha_j$'s are not fixed constants that are always the same across different samples of subjects. Rather, these effects vary across experiments. If we further stipulate that there is an infinite, or at least reasonably large, number of levels of the given factor, we can actually talk meaningfully about the variance of the effects. We will denote this as $\sigma^2_\alpha$. If we further assume that $\mu$, $\alpha_j$, and $\varepsilon_{ij}$ are independent of one another, we are led to the following model for the variability of the observed scores:

$\sigma^2_Y = \sigma^2_\alpha + \sigma^2_\varepsilon$    (1)

That is, there are two sources of variability in the scores: random error and variability due to the random selection of the given level of the factor.
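As an illustration (not part of the original handout), here is a minimal Python simulation of equation (1); the particular values $\mu = 50$, $\sigma^2_\alpha = 4$, and $\sigma^2_\varepsilon = 9$ are arbitrary choices for the sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
n_reps = 100_000                      # replications of the experiment
mu, sigma2_alpha, sigma2_eps = 50.0, 4.0, 9.0

# Each replication draws a fresh level effect (the factor is re-sampled)
# and a fresh error term, then forms one observed score.
alpha = rng.normal(0.0, np.sqrt(sigma2_alpha), n_reps)  # random level effects
eps = rng.normal(0.0, np.sqrt(sigma2_eps), n_reps)      # random error
Y = mu + alpha + eps

print(Y.var())                        # approximately 13.0
print(sigma2_alpha + sigma2_eps)      # exactly 13.0, per equation (1)
```

The observed variance matches the sum of the two components, as equation (1) requires.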

It's probably most helpful to think about the distinction between fixed and random effects designs from the perspective of a given example. In social psychological experiments involving false feedback (e.g., "you did wonderfully on that task," "you performed much worse than the average person"), it is reasonable to suppose that the specific experimenter who delivers the feedback may have subtle but significant influences on scores. Let's say that a psychologist wants to assess the effects of experimenter on subjects' mood after receiving positive (i.e., success) feedback. The psychologist conducts an experiment in which 5 research assistants who serve as experimenters deliver false success feedback to participants who've performed a concept formation task. After 5 minutes, subjects' mood is assessed.

The question here is whether "experimenter" is considered a fixed effect or a random effect. Based on what I've told you, it could be either. Let's say that the researcher is only interested in these 5 experimenters. For example, maybe these are the only people he has trained as experimenters and the only ones he will use in his most important subsequent experiment, the primary goal of which is actually to manipulate type of feedback. He's not really interested in experimenter effects in general. He just wants to know if these 5 experimenters will be a source of variability in the data. Furthermore, he doesn't consider these 5 experimenters a random sample from a population of potential experimenters: indeed, he's specifically selected these individuals because he thought they would be good experimenters (e.g., because they do not express emotion non-verbally, because they can maintain a neutral posture, because they can follow a standardized protocol, etc.). If he adopts these positions, he would probably consider experimenter to be a fixed effect.

On the other hand, he could consider experimenter to be a random effect. He could make the following argument. First, let's say his interest is not so much in these 5 experimenters but in whether experimenters in general contribute to variability in mood. If so, he would most likely consider these 5 experimenters as 5 out of a possibly infinite number of experimenters. It is likely the population of experimenters to which he would like to generalize his results. In addition, he might be willing to consider these 5 experimenters a random sample from a population of experimenters. Furthermore, if he replicated the study, he would most likely not use these 5 experimenters but rather sample another group of 5 experimenters. This approach would differ from the fixed effects approach: in the latter case, if the researcher attempted to replicate the study, he would use the same 5 experimenters (or 5 people who would essentially have to be considered clones of the original 5).

In some respects, the differing perspectives on replication are critical. Consider what the random effects perspective would imply for the variability of the score for subject #2 of the experimenter #3 group ($Y_{23}$) across replications of the experiment. One source of variability is simply at the level of the individual subject: different individuals will be chosen across different random samples. An index of this source of variability is $\sigma^2_\varepsilon$. A second source of variability, however, is due to the experimenter. It is very likely that different people will be chosen as "experimenter #3" if we assume that experimenters are randomly sampled as well as subjects. An index of this source of variability is $\sigma^2_\alpha$. If we assume that the 2 sources of variability are independent of one another, then we can decompose the total variability in the Y scores into two components, as indicated by equation (1) above. In the fixed effects context, variability in the scores of subject #2 for experimenter #3 across experiments is due only to variability in the sampling of subjects ($\sigma^2_\varepsilon$). Figure 10.1 in Maxwell & Delaney nicely illustrates the points made here about sources of variability in the random effects model. We will review this figure in class.

Let me note several points about fixed vs. random effects:

1. In multiway designs, you can have all possible combinations (e.g., in a two-way design, two fixed effects factors, two random effects factors, or one fixed and one random factor). The two-way designs that we talked about in previous lectures consisted of two fixed effects. Combinations of fixed and random effects are known as mixed effects designs. An example of a mixed effects design is Educational Intervention (Intervention A/Intervention B/No Intervention) X Classroom (6 classrooms nested within each of the intervention conditions). The former effect is fixed while the latter is random.

2. In mixed effects designs, random effects can be crossed with fixed effects or nested within fixed effects. For example, consider a Treatment (A/B) X Therapist design in which the former effect is fixed and the latter is random. Let's say we have 10 therapists. If each of these 10 therapists sees patients in both treatment groups, then Treatment and Therapist are crossed. If, however, 5 therapists treat patients with Treatment A alone and the other 5 therapists use Treatment B alone, then therapists are nested within treatment groups (see the sketch following this point). The choice of crossed vs. nested is based on conceptual, experimental design, or pragmatic considerations. For example, if 5 of the therapists are well-trained experts in Treatment A and the other 5 therapists are experts in Treatment B, and if the goal of the experiment is to assess which treatment is more effective when administered by experts, then a nested design would be called for. You have to be a bit careful about the terminology here. Sometimes, when people refer to "mixed designs," they refer specifically to the case in which fixed and random effects factors are completely crossed. In other cases, crossed and nested designs are each considered a type of mixed design. I will adopt the latter terminology but focus more on the crossed case in subsequent handouts. For a discussion of nested designs, see Maxwell & Delaney, pp. 433-448.
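To make the crossed/nested contrast concrete, here is a schematic sketch of the two data layouts (hypothetical, not drawn from Maxwell & Delaney):

```python
# Crossed: every one of the 10 therapists appears under both treatments,
# so each Treatment x Therapist cell exists (20 cells in all).
crossed = [(tx, th) for tx in ("A", "B") for th in range(1, 11)]

# Nested: therapists 1-5 deliver only Treatment A; therapists 6-10 only B.
# Each therapist appears under exactly one treatment (10 cells in all).
nested = [("A", th) for th in range(1, 6)] + [("B", th) for th in range(6, 11)]

print(len(crossed), len(nested))  # 20 10
```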

3. A term that is commonly applied to random effects designs, particularly those with more than one random effect, is variance components analysis. Indeed, a common goal of such random effects designs is to assess the proportions of variability in the data that are due to different random effects. For example, in a generalizability design, we might ask what proportion of the variability in non-verbal responses to emotional stimuli is due to the emotional stimuli used (e.g., films), the raters assessing non-verbal behavior, and subjects (see the sketch following this point).
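As a sketch of what a variance components analysis produces, the following hypothetical one-way random effects example estimates $\sigma^2_\alpha$ and $\sigma^2_\varepsilon$ by the method of moments (the approach this class will use; see point 10 below) and reports each component's proportion of the total. All numbers are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
a, n = 20, 15                                  # 20 sampled levels, 15 subjects each
sigma2_alpha, sigma2_eps = 2.0, 6.0            # true components (unknown in practice)

alpha = rng.normal(0.0, np.sqrt(sigma2_alpha), size=(a, 1))
Y = 50.0 + alpha + rng.normal(0.0, np.sqrt(sigma2_eps), size=(a, n))

ms_between = n * Y.mean(axis=1).var(ddof=1)    # MS for the random factor
ms_within = Y.var(axis=1, ddof=1).mean()       # pooled error MS (equal n)

# Method of moments: E(MS_between) = sigma2_eps + n * sigma2_alpha
#                    E(MS_within)  = sigma2_eps
var_alpha_hat = (ms_between - ms_within) / n
var_eps_hat = ms_within
total = var_alpha_hat + var_eps_hat
print(f"proportion due to the random factor: {var_alpha_hat / total:.2f}")
print(f"proportion due to random error:      {var_eps_hat / total:.2f}")
```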

4. Sometimes it is difficult to tell whether a factor is fixed or random. Avoid blind assumptions about which is operative in any particular setting. If there is one mistake that is commonly made in practice by researchers, it is the tendency to treat a factor as fixed when it really should be considered random.

5. It is very important to know whether a factor is considered fixed or random because the specific ratio used to conduct an F test varies depending on the designation. Making a mistake here can result in highly inaccurate test statistics. If you have any questions about your own data, think carefully (fixed or random?) and, once you have made up your mind, consult a textbook (or your class notes) if you cannot remember the appropriate test statistic to form. In the case of the most common mistake noted above (a factor is treated as fixed when it is really random), a common result in the traditional ANOVA context is excessive Type I errors. Oddly enough, inflated Type I errors are most likely to be observed not for the factor in question but for other factors in the design (i.e., those that are correctly designated as fixed). For more complicated designs, a more complex pattern of effects can be observed. We will discuss these points in class and in subsequent handouts. The sketch following this point illustrates how the F ratio changes with the designation.
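To illustrate the point with a hedged sketch (not package output): in a two-way design with A fixed and B random, the standard mixed-model expected mean squares imply that A should be tested against the A x B interaction rather than against within-cell error. All values below are invented for illustration:

```python
# Hypothetical mean squares and dfs from a two-way design with A fixed
# and B random (numbers invented for illustration).
ms_a, ms_ab, ms_error = 120.0, 30.0, 10.0
df_a, df_ab, df_error = 2, 8, 45

# If B were fixed, A would be tested against within-cell error:
f_if_b_fixed = ms_a / ms_error   # 12.0 on (2, 45) df

# With B random, E(MS_A) contains the A x B variance component, so the
# proper denominator is the interaction mean square:
f_b_random = ms_a / ms_ab        # 4.0 on (2, 8) df

print(f_if_b_fixed, f_b_random)
```

Using the first ratio when B is really random makes the test for A look far stronger than it should, which is exactly the inflated Type I error rate described above.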

6. Even if you're clear about whether a given effect is fixed or random, you need to carefully examine the output of statistical packages, particularly when a mixed-model design is used. In some cases, even when you correctly inform the program about which effects are fixed and which are random, you need to either insert additional statements or do hand computations to compute the correct ratio of mean squares. In many cases, the most salient (i.e., default) output is incorrect.

7. Both perspectives make pretty strong assumptions that might be acceptable in the abstract but questionable in practice. For example, in the example above, even if we used the same 5 experimenters across replications of the experiment, would it necessarily be the case that a given experimenter would have an exactly identical effect across two samples of subjects? This would be assumed by the fixed effects view. In the random effects case, it is often doubtful whether the levels of the factor are actually randomly sampled (note the comments above about experimenters).

8. An important way to identify random effects is to ascertain whether a case can be made that: (1) the levels of an independent variable are truly sampled from a larger universe of possibilities; and (2) we're not uniquely interested in the specific levels that we're using in our experiment. For example, in cognitive and educational psychology experiments, the specific items or task stimuli used can often legitimately be considered a random sample from a larger universe. Consider a study on reading comprehension in which the experimental stimuli are 6 sample paragraphs that are read before a comprehension test is administered. These paragraphs would very likely be considered samples from an effectively infinite number of possibilities. Furthermore, the experimenter would likely not be especially interested in these specific 6 paragraphs. In this case, a random effects model would be appropriate. Indeed, papers have been written over the years urging cognitive psychologists to use random effects or mixed (see below) models to analyze data from such experiments (a commonly cited paper is Clark, 1973, Journal of Verbal Learning and Verbal Behavior). If fixed effects models are used, the experimental manipulation might look stronger than it really is. There are other cases in which random effects models are commonly used. For example, in educational intervention research, classrooms are commonly sampled (e.g., 10 classrooms get intervention A, 10 others get intervention B). "Classroom" is typically considered a random effect here. As noted above, there are a number of cases in research in which random effects designs could and probably should be used but are not.

9. In the ANOVA context that is the focus of this class, it is common to refer to a given effect as either fixed or random. In other contexts, an effect can be both fixed and random. For example, in multilevel models assessing change over time, the fixed effect for time denotes the average change over time, and the random effects for time (one per subject) denote the deviations of individual subjects' patterns of change from the average pattern. In the ANOVA context, we typically assume that the average of the random effects in the population is 0. Thus, there really is no point in estimating fixed effects in addition to random effects. I should note, however, that there are even ANOVA-type models that can accommodate both fixed and random effects for one and the same factor.

10. Random effects models can be more restrictive than fixed effects models. For example, the methods of estimation that we'll be using in this class typically assume equal n's across the levels of the random effects. Secondly, in some mixed designs, exact F ratios are impossible to compute for certain combinations of fixed and random effects. In such contexts, what are known as quasi-F ratios are used. Furthermore, there are a variety of different methods for estimating random effects. The approach that we will use in this class (deriving expected values of mean squares) is known as the method of moments; the expected mean squares behind it are shown after this point. There are other approaches (maximum likelihood, restricted maximum likelihood) that in some contexts (unequal n's) are more appropriate and can lead to fairly different results. In general, designs with random effects are more complicated to analyze than designs that only include fixed effects.
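For reference, these are the standard expected mean squares that drive the method of moments in the one-way random effects design, with a levels of the random factor and n subjects per level:

$E(MS_{within}) = \sigma^2_\varepsilon, \qquad E(MS_{between}) = \sigma^2_\varepsilon + n\,\sigma^2_\alpha$

Setting the observed mean squares equal to their expectations and solving gives the estimators $\hat{\sigma}^2_\varepsilon = MS_{within}$ and $\hat{\sigma}^2_\alpha = (MS_{between} - MS_{within})/n$.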

11. Power is a salient consideration in designs with random effects, just as it is in designs with fixed effects. In general, however, power computations are more complicated. A significant determinant of power in random effects designs is not just the number of individual subjects but sufficient numbers of levels of the random effects. Sometimes designs with random effects don't use enough levels, and the result can be a significant compromise of power. One reason that power is especially important in designs with random effects is that the error term for some effects of interest is commonly larger than it is for designs with fixed effects only (the reason being that we need to take into account additional sources of random variability beyond simply the error due to the vagaries of sampling subjects). However, the degrees of freedom associated with the error term (the denominator of our F ratio) are often smaller. If you think about it, this combination of a larger error component with smaller df would likely translate into a significant loss of power in many contexts relative to the fixed-effects-only context (I'm assuming here that we're including the same effects either way and varying whether they are considered fixed or random).
