
\documentclass{article}

\begin{document}

\hspace{7 mm} Random effects are another approach to designing experiments and
modeling data. Random effects are appropriate when the treatments are random
samples from a population of potential treatments. They are also useful for random
subsampling from populations. Random-effects models make the same kinds of
decompositions into overall mean, treatment effects, and random error that we have
been using, but random-effects models assume that the treatment effects are random
variables. Also, the focus of inference is on the population, not the individual
treatment effects. This chapter introduces random-effects models.

\section{11.1 Models for Random Effects}


\hspace{1 mm} We have been using models for data that take the form:
\begin{center}
$y_{ij} = \mu_{i} + \epsilon_{ij} = \mu + \alpha_{i} + \epsilon_{ij}$
\end{center}
\hspace{13 mm} The parameters of the mean structure ($\mu_{i}$, $\mu$, and
$\alpha_{i}$) have been treated as fixed, unknown numbers with the treatment
effects summing to zero, and the primary thrust of our inference
has been learning about these mean parameters. These sorts of models are called
fixed-effects models, because the treatment effects are fixed numbers.\\

The basic random effects model begins with the usual decomposition: \\
\begin{center}
$y_{ij} = \mu + \alpha_{i} + \epsilon_{ij}$. \\
\end{center}

We assume that the $\epsilon_{ij}$ are independent normal with mean 0 and variance
$\sigma^2$, as we did in fixed effects. For random effects, we also assume that
the treatment effects $\alpha_{i}$ are independent normal with mean 0 and variance
$\sigma^2_{\alpha}$, and that the $\alpha_{i}$'s and the $\epsilon_{ij}$'s are
independent of each other. Random-effects models do not require that the sum of the
$\alpha_{i}$'s be zero. The variance of $y_{ij}$ is $\sigma^2_{\alpha} +
\sigma^2$. The terms $\sigma^2_{\alpha}$ and $\sigma^2$ are called components of
variance or variance components. Thus the random-effects model is
sometimes called a components of variance model.\\
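This decomposition is easy to check by simulation. The sketch below (the variance components, sample sizes, and overall mean are hypothetical values chosen only for illustration) draws random treatment effects $\alpha_i$ and errors $\epsilon_{ij}$ and confirms that the overall variance of the $y_{ij}$'s is close to $\sigma^2_{\alpha} + \sigma^2$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical variance components, chosen only for illustration
sigma2_alpha = 4.0   # sigma^2_alpha, the treatment variance component
sigma2 = 1.0         # sigma^2, the error variance component
g, n = 2000, 5       # number of treatments and replicates (g large for a stable check)
mu = 10.0            # overall mean

alpha = rng.normal(0.0, np.sqrt(sigma2_alpha), size=g)   # random treatment effects
eps = rng.normal(0.0, np.sqrt(sigma2), size=(g, n))      # random errors
y = mu + alpha[:, None] + eps                            # y_ij = mu + alpha_i + eps_ij

# The overall variance of the y_ij should be close to sigma2_alpha + sigma2
print(y.var())
```

Note that the mean $\mu$ shifts every observation equally and so contributes nothing to the variance; only the two components $\sigma^2_{\alpha}$ and $\sigma^2$ appear.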

The model for two-way random effects is:


\begin{center}
$y_{ijk} = \mu + \alpha_{i}+ \beta_{j} + \alpha\beta_{ij} + \epsilon_{ijk}$ ,\\
\end{center}
where $\alpha_{i}$ is a main effect for factor A, $\beta_{j}$ is a main effect for
factor B, $\alpha\beta_{ij}$ is an AB interaction, and $\epsilon_{ijk}$ is random
error. The model assumptions are that all the random effects $\alpha_{i}$,
$\beta_{j}$, $\alpha\beta_{ij}$, and $\epsilon_{ijk}$ are independent and normally
distributed with mean 0. Each effect has its own variance: Var$(\alpha_{i}) =
\sigma^2_{\alpha}$, Var$(\beta_{j}) = \sigma^2_{\beta}$, Var$(\alpha\beta_{ij})
= \sigma^2_{\alpha\beta}$, and Var$(\epsilon_{ijk}) = \sigma^2$. The variance of
$y_{ijk}$ is $\sigma^2_{\alpha} + \sigma^2_{\beta} + \sigma^2_{\alpha\beta}
+ \sigma^2$, and the correlation of two responses is the sum of the variances of the
random components that they share, divided by their common variance $\sigma^2_{\alpha}
+ \sigma^2_{\beta} + \sigma^2_{\alpha\beta} + \sigma^2$. \\
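This correlation rule can also be verified numerically. In the sketch below (the variance-component values are hypothetical, chosen for illustration), two responses in the same $(i, j)$ cell share $\alpha_i$, $\beta_j$, and $\alpha\beta_{ij}$ but have independent errors, so their correlation should be $(\sigma^2_{\alpha} + \sigma^2_{\beta} + \sigma^2_{\alpha\beta})/(\sigma^2_{\alpha} + \sigma^2_{\beta} + \sigma^2_{\alpha\beta} + \sigma^2)$:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical variance components for illustration
s2_a, s2_b, s2_ab, s2_e = 2.0, 3.0, 1.0, 4.0
total = s2_a + s2_b + s2_ab + s2_e

N = 200_000  # many independent cells, for a stable correlation estimate
a = rng.normal(0.0, np.sqrt(s2_a), N)
b = rng.normal(0.0, np.sqrt(s2_b), N)
ab = rng.normal(0.0, np.sqrt(s2_ab), N)

# Two responses in the same (i, j) cell share a, b, and ab but not the error
y1 = a + b + ab + rng.normal(0.0, np.sqrt(s2_e), N)
y2 = a + b + ab + rng.normal(0.0, np.sqrt(s2_e), N)

theory = (s2_a + s2_b + s2_ab) / total   # shared components / common variance
print(np.corrcoef(y1, y2)[0, 1], theory)
```

The sample correlation of the simulated pairs matches the shared-components formula, which with these illustrative values is $6/10 = 0.6$.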

The model for three-way random effects is


\begin{center}
$y_{ijkl} = \mu + \alpha_{i} + \beta_{j} + \alpha\beta_{ij} + \gamma_{k} +
\alpha\gamma_{ik} + \beta\gamma_{jk} + \alpha\beta\gamma_{ijk} + \epsilon_{ijkl}$\\
\end{center}

where $\alpha_{i}$, $\beta_{j}$, and $\gamma_{k}$ are main effects; $\alpha\beta_{ij}$,
$\alpha\gamma_{ik}$, $\beta\gamma_{jk}$, and $\alpha\beta\gamma_{ijk}$ are
interactions; and $\epsilon_{ijkl}$ is random error. The model assumptions remain
that all the random effects are independent and normally distributed with mean 0.
Each effect has its own variance: Var$(\alpha_{i}) = \sigma^2_{\alpha}$,
Var$(\beta_{j}) = \sigma^2_{\beta}$, Var$(\gamma_{k}) = \sigma^2_{\gamma}$,
Var$(\alpha\beta_{ij}) = \sigma^2_{\alpha\beta}$, Var$(\alpha\gamma_{ik}) =
\sigma^2_{\alpha\gamma}$, Var$(\beta\gamma_{jk}) = \sigma^2_{\beta\gamma}$,
Var$(\alpha\beta\gamma_{ijk}) = \sigma^2_{\alpha\beta\gamma}$, and
Var$(\epsilon_{ijkl}) = \sigma^2$. Generalization to more factors is
straightforward, and Chapter 12 describes some additional variations that can occur
for factorials with random effects.\\
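The shared-components correlation rule carries over to the three-way model. As one illustration (all variance-component values below are hypothetical), two responses that share their levels of A and C but sit at different levels of B share only $\alpha_i$, $\gamma_k$, and $\alpha\gamma_{ik}$, so their correlation should be $(\sigma^2_{\alpha} + \sigma^2_{\gamma} + \sigma^2_{\alpha\gamma})$ divided by the total variance:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical variance components for illustration
s2 = dict(a=1.0, b=2.0, c=1.5, ab=0.5, ac=0.5, bc=1.0, abc=0.5, e=3.0)
total = sum(s2.values())

N = 200_000

def draw(key):
    return rng.normal(0.0, np.sqrt(s2[key]), N)

# Effects shared by the pair: same level of A, same level of C
a, c, ac = draw("a"), draw("c"), draw("ac")

def unshared():
    # Different levels of B, so the B main effect, the AB, BC, and ABC
    # interactions, and the error are all drawn independently for each response
    return draw("b") + draw("ab") + draw("bc") + draw("abc") + draw("e")

y1 = a + c + ac + unshared()
y2 = a + c + ac + unshared()

theory = (s2["a"] + s2["c"] + s2["ac"]) / total   # shared components / total variance
print(np.corrcoef(y1, y2)[0, 1], theory)
```

With these illustrative values the shared components sum to 3 out of a total variance of 10, so the correlation is 0.3; the simulated correlation agrees.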

\end{document}
