
Week 2 PART III

POST-HOC TESTS

POST HOC TESTS


When we get a significant F test result in an ANOVA for a main effect of a factor
with more than two levels, this tells us we can reject H0,
i.e. the samples are not all from populations with the same mean.
We can use post hoc tests to tell us which groups differ from which others
(a sketch of this workflow follows below).
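The slides use SPSS, but the same workflow can be sketched in a few lines of Python (a minimal sketch: scipy and statsmodels are assumed, the data and group labels are invented, and Tukey HSD stands in for whichever post hoc test you choose):

```python
# Minimal sketch: omnibus one-way ANOVA followed by a post hoc test.
# The data and group labels are invented for illustration.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

g1 = np.array([12, 14, 11, 15, 13])
g2 = np.array([18, 20, 19, 22, 21])
g3 = np.array([13, 15, 14, 16, 12])

# Omnibus test: can we reject H0 that all population means are equal?
F, p = stats.f_oneway(g1, g2, g3)
print(f"F = {F:.3f}, p = {p:.4f}")

if p < 0.05:
    # Post hoc: Tukey HSD compares every pair of groups
    scores = np.concatenate([g1, g2, g3])
    labels = ["g1"] * len(g1) + ["g2"] * len(g2) + ["g3"] * len(g3)
    print(pairwise_tukeyhsd(scores, labels, alpha=0.05))
```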

POST HOC TESTS


There are a number of tests which can be used. SPSS offers them in the ONEWAY
and General Linear Model procedures.
For repeated measures factors, SPSS provides post hoc comparisons within the
Options menu.

Sample data

Post Hoc tests: click the Post Hoc button in the dialog and select the desired test.

ANOVA Table

Tests of Between-Subjects Effects
Dependent Variable: SCORE

Source          | Type III Sum of Squares | df | Mean Square | F       | Sig.
Corrected Model | 372.150a                | 3  | 124.050     | 7.254   | .003
Intercept       | 6777.992                | 1  | 6777.992    | 396.374 | .000
GROUP           | 372.150                 | 3  | 124.050     | 7.254   | .003
Error           | 273.600                 | 16 | 17.100      |         |
Total           | 7677.000                | 20 |             |         |
Corrected Total | 645.750                 | 19 |             |         |

a. R Squared = .576 (Adjusted R Squared = .497)

Post Hoc Tests

Multiple Comparisons
Dependent Variable: SCORE
LSD

(I) GROUP | (J) GROUP | Mean Difference (I-J) | Std. Error | Sig. | 95% CI Lower Bound | 95% CI Upper Bound
1         | 2         | -7.80*                | 2.77       | .013 | -13.68             | -1.92
1         | 3         | -3.00                 | 2.62       | .268 | -8.54              | 2.54
1         | 4         | -10.80*               | 2.50       | .001 | -16.11             | -5.49
2         | 1         | 7.80*                 | 2.77       | .013 | 1.92               | 13.68
2         | 3         | 4.80                  | 2.77       | .103 | -1.08              | 10.68
2         | 4         | -3.00                 | 2.67       | .278 | -8.66              | 2.66
3         | 1         | 3.00                  | 2.62       | .268 | -2.54              | 8.54
3         | 2         | -4.80                 | 2.77       | .103 | -10.68             | 1.08
3         | 4         | -7.80*                | 2.50       | .007 | -13.11             | -2.49
4         | 1         | 10.80*                | 2.50       | .001 | 5.49               | 16.11
4         | 2         | 3.00                  | 2.67       | .278 | -2.66              | 8.66
4         | 3         | 7.80*                 | 2.50       | .007 | 2.49               | 13.11

Based on observed means.
*. The mean difference is significant at the .05 level.

Choice of post-hoc test


There are many different post hoc tests, making different assumptions about
equality of variances, group sizes, etc.
The simplest is the Bonferroni procedure.

Bonferroni Test
first decide which pairwise comparisons you wish to test (with reasonable
justification)
get SPSS to calculate t-tests for each comparison
set your significance criterion alpha to .05 divided by the total number of
tests made (see the sketch below)
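A minimal sketch of this procedure in Python (assumptions: scipy is available, the groups and scores are invented, and ordinary independent-samples t-tests are used for the planned pairwise comparisons):

```python
# Bonferroni procedure: test only the planned pairwise comparisons,
# judging each against alpha = .05 / (number of tests).
from itertools import combinations
from scipy import stats

groups = {"control": [10, 12, 11, 13, 9],
          "drugA":   [14, 16, 15, 17, 13],
          "drugB":   [11, 13, 12, 14, 10]}

planned = list(combinations(groups, 2))   # here: all pairs were planned
alpha = 0.05 / len(planned)               # Bonferroni-adjusted criterion

for a, b in planned:
    t, p = stats.ttest_ind(groups[a], groups[b])
    verdict = "significant" if p < alpha else "not significant"
    print(f"{a} vs {b}: t = {t:.2f}, p = {p:.4f} -> {verdict} at alpha = {alpha:.4f}")
```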

Bonferroni test
repeated measures factors are best handled this way
ask SPSS to do related t-tests between all possible pairs of means
only accept results with p below .05/k as reliable (where k is the number of
comparisons made), as in the sketch below
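The same idea for a repeated measures factor, sketched with related (paired) t-tests and the .05/k criterion (invented condition data; scipy assumed):

```python
# Repeated measures version: paired t-tests between all pairs of conditions,
# accepted as reliable only if p < .05 / k, where k is the number of comparisons.
from itertools import combinations
from scipy import stats

conditions = {"time1": [10, 12, 11, 13, 9, 14],
              "time2": [12, 14, 12, 15, 11, 16],
              "time3": [15, 17, 16, 18, 14, 19]}

pairs = list(combinations(conditions, 2))
k = len(pairs)

for a, b in pairs:
    t, p = stats.ttest_rel(conditions[a], conditions[b])
    print(f"{a} vs {b}: t = {t:.2f}, p = {p:.4f}, reliable: {p < 0.05 / k}")
```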

PLANNED COMPARISONS/
CONTRASTS
It may happen that there are specific hypotheses which you plan to test in
advance, beyond the general rejection of the omnibus null hypothesis.

PLANNED COMPARISONS
For example:
a) you may wish to compare each of three patient groups with a control group
b) you may have a specific hypothesis about some subgroup of your design
c) you may predict that the means of the four groups of your design will be in
a particular order

PLANNED COMPARISONS
Each of these can be tested by specifying them beforehand - hence planned
comparisons.
The hypotheses should be orthogonal, that is, independent of each other.

PLANNED COMPARISONS
To compute the comparisons, calculate a t-test, taking the difference in means
and dividing by the standard error estimated from MSwithin in the ANOVA table
(see the formula below).
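For a simple pairwise comparison this takes the standard form (degrees of freedom taken from the within-groups term of the ANOVA):

$$ t = \frac{\bar{x}_i - \bar{x}_j}{\sqrt{MS_{\text{within}}\left(\frac{1}{n_i} + \frac{1}{n_j}\right)}}, \qquad df = df_{\text{within}} $$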

TEST OF LINEAR TREND


A test of linear trend is a type of planned contrast.
With more than two levels, we might predict a steadily increasing change across
the levels of a factor.
In this case we can try fitting a model to the data with the constraint that
the condition means are in a particular rank order and are equally spaced.

TEST OF LINEAR TREND


The Between Group Sum of Squares is
then partitioned into two components.
the best fitting straight line model through the
group means
the deviation of the observed group means
from this model
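For equally spaced levels and equal group sizes n, the linear component can be written with standard contrast weights c_i (for example -1, 0, 1 with three groups):

$$ SS_{\text{linear}} = \frac{\bigl(\sum_i c_i \bar{x}_i\bigr)^2}{\sum_i c_i^2 / n}, \qquad SS_{\text{between}} = SS_{\text{linear}} + SS_{\text{deviation}} $$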

TEST OF LINEAR TREND


The linear trend component will have one degree of freedom, corresponding to
the slope of the line.
Deviation from linearity will have (k-2) df.
Each of these components can be tested against the within-groups mean square to
see whether it is significant.

TEST OF LINEAR TREND


If there is a significant linear trend, and non-significant deviation from
linearity, then the linear model is a good one.
For k > 3, the same process can be done for a quadratic trend: a parabola is
fitted to the means. For example, you may be testing a hypothesis that as
dosage level increases, the measure initially rises and then falls (or vice
versa).
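For reference, the standard orthogonal polynomial contrast weights for four equally spaced levels are (-3, -1, 1, 3) for the linear trend and (1, -1, -1, 1) for the quadratic trend.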

TEST OF LINEAR TREND


Report
SCORE

GROUP | Mean    | N  | Std. Deviation
1.00  | 13.7352 | 8  | 2.3244
2.00  | 15.6401 | 8  | 1.8961
3.00  | 19.9698 | 8  | 2.5631
Total | 16.4484 | 24 | 3.4408

TEST OF LINEAR TREND


[Bar chart: Mean SCORE by GROUP (Group 1, Group 2, Group 3); the group means rise steadily from Group 1 to Group 3.]

TEST OF LINEAR TREND


ANOVA Table

SCORE * GROUP

Source                                   | Sum of Squares | df | Mean Square | F      | Sig.
Between Groups: (Combined)               | 163.319        | 2  | 81.660      | 15.736 | .000
Between Groups: Linearity                | 155.480        | 1  | 155.480     | 29.962 | .000
Between Groups: Deviation from Linearity | 7.840          | 1  | 7.840       | 1.511  | .233
Within Groups                            | 108.974        | 21 | 5.189       |        |
Total                                    | 272.293        | 23 |             |        |
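As a check on this partition, the linearity component can be reproduced from the group means in the Report table above (a Python sketch; the means and group size are taken from that table):

```python
# Reproduce the linear trend partition from the reported group means.
import numpy as np

means = np.array([13.7352, 15.6401, 19.9698])  # group means from the Report table
n = 8                                          # observations per group
c = np.array([-1.0, 0.0, 1.0])                 # linear contrast weights for 3 equally spaced groups

grand_mean = means.mean()                      # equal n, so this equals the overall mean
ss_between = n * np.sum((means - grand_mean) ** 2)
ss_linear = (c @ means) ** 2 / np.sum(c ** 2 / n)
ss_deviation = ss_between - ss_linear

print(f"SS between (Combined)       = {ss_between:.3f}")   # ~163.32
print(f"SS Linearity                = {ss_linear:.3f}")    # ~155.48
print(f"SS Deviation from Linearity = {ss_deviation:.3f}") # ~7.84
```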
