
What is NONPARAMETRIC Statistics?

Normality does not hold for all data. Similarly, some data may not follow any particular fixed distribution, such as the binomial or Poisson. Such data are called nonparametric or distribution-free, and we use nonparametric tests for these populations.

When do we use NONPARAMETRIC Statistics?

- The population distribution is highly skewed or very heavy-tailed. The median is then a better measure of the center than the mean.
- The sample size is small (usually less than 30) and the data are not normal (we can check this using SAS or other statistical programs).
14.1.1 Sign Test and Confidence Interval
Sign Test for a Single Sample

We want to test, at significance level \alpha, whether the true median \tilde{\mu} exceeds a certain known value \tilde{\mu}_0.
14.1.1 Sign Test and Confidence Interval
Example: THERMOSTAT DATA
Perform the sign test to determine whether the median setting differs from the design setting of 200°F.

202.2  203.4  200.5  202.5  206.3
198.0  203.7  200.8  201.3  199.0
14.1.1 Sign Test and Confidence Interval
STEP 1:
Find the sign of each observation by comparing it with 200.
STEP 2:
State the hypotheses:

H_0: \tilde{\mu} = 200 \quad vs. \quad H_a: \tilde{\mu} > 200 (the one-sided alternative suggested by the data)

202.2 > 200   203.4 > 200
200.5 > 200   202.5 > 200
206.3 > 200   198.0 < 200
203.7 > 200   200.8 > 200
201.3 > 200   199.0 < 200

s^+ = \#\{x_i > \tilde{\mu}_0\} = 8
s^- = \#\{x_i < \tilde{\mu}_0\} = 2
14.1.1 Sign Test and Confidence Interval
What do we do if there is a tie (x_i = \tilde{\mu}_0)?

1) We can break the tie at random, counting it with either s^+ or s^-. For a large sample this may not make a big difference, but the result may vary significantly for a small sample.

2) We can count 1/2 towards each of s^+ and s^-. However, we cannot calculate the p-value using fractional counts, so we should not do this.

3) We can exclude the ties. This reduces the sample size and hence the power of the test, but for a large sample it should not be a big deal.
14.1.1 Sign Test and Confidence Interval
STEP 3:
Why binomial? Each observation independently falls either above or below \tilde{\mu}_0, so s^+ and s^- count successes in n trials:

P(s^+) = p = \frac{s^+}{n}, \quad P(s^-) = 1 - p = 1 - \frac{s^+}{n} = \frac{s^-}{n}

(since s^+ + s^- = n implies \frac{s^+}{n} + \frac{s^-}{n} = 1).

Thus both S^+ and S^- are binomially distributed, with probabilities p and 1 - p respectively:

S^+ \sim Bin(n, p) \quad and \quad S^- \sim Bin(n, 1-p).

For our data the estimated distributions are s^+ \sim Bin(10, 8/10) and s^- \sim Bin(10, 2/10).
14.1.1 Sign Test and Confidence Interval
STEP 4:
Since S^+ and S^- have the same binomial distribution under H_0, we can denote a common r.v. S.

When H_0: \tilde{\mu} = \tilde{\mu}_0 is true, \tilde{\mu}_0 is the true median, so each observation is equally likely to fall above or below it. Therefore p = 1/2, and 1 - p = 1/2 too. Consequently,

S^+ \sim Bin(n, \tfrac{1}{2}) \quad and \quad S^- \sim Bin(n, \tfrac{1}{2}); \quad here \quad S \sim Bin(10, \tfrac{1}{2}).
14.1.1 Sign Test and Confidence Interval
Now we can calculate the p-value using the binomial distribution:

P\text{-value} = P\{S^+ \ge s^+\} = P\{S \ge 8\} = \sum_{i=8}^{10} \binom{10}{i} \left(\frac{1}{2}\right)^{10} = 0.055

or, alternatively,

P\text{-value} = P\{S^- \le s^-\} = P\{S \le 2\} = \sum_{i=0}^{2} \binom{10}{i} \left(\frac{1}{2}\right)^{10} = 0.055
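As a quick check, here is a minimal SAS sketch of this exact binomial p-value (the dataset name is illustrative):

data signtest;
   n = 10; splus = 8;
   /* one-sided p-value: P(S >= 8) = 1 - P(S <= 7) for S ~ Bin(n, 1/2) */
   p_value = 1 - cdf('BINOMIAL', splus - 1, 0.5, n);
   put p_value=;   /* prints 0.0546875, i.e. about .055 */
run;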

14.1.1 Sign Test and Confidence Interval
STEP 5:
We compare our p-value with the significance level: P-value = .055.

At \alpha = .05, we have .055 > .05, so we fail to reject the null hypothesis.
14.1.1 Sign Test and Confidence Interval
For large samples (n > 20) we can also use the normal approximation. Under H_0,

E(S^+) = E(S^-) = \frac{n}{2} \quad and \quad Var(S^+) = Var(S^-) = \frac{n}{4},

so

z = \frac{s^+ - n/2 - 1/2}{\sqrt{n/4}},

where 1/2 is the continuity correction. We reject the null hypothesis if z \ge z_\alpha. Equivalently, after rearranging, reject if

s^+ \ge \frac{n}{2} + \frac{1}{2} + z_\alpha \sqrt{\frac{n}{4}} = b_{n,\alpha}.
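As a sanity check, a minimal SAS sketch of this normal approximation applied to the thermostat data (dataset name illustrative; n = 10 is below the suggested n > 20, yet the result is close to the exact .055):

data signz;
   n = 10; splus = 8;
   z = (splus - n/2 - 0.5) / sqrt(n/4);
   p = 1 - probnorm(z);   /* one-sided normal-approximation p-value */
   put z= p=;             /* z=1.58, p=.057 */
run;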
14.1.1 Sign Test for Matched Pairs
Sign Test for Matched Pairs

When observations come in matched pairs (x_i, y_i), let:
- s^+ = the number of positive differences d_i = x_i - y_i
- s^- = the number of negative differences

Note: the magnitudes of the differences are not used, only their signs.

When the pairs are matched and H_0 holds, P(X > Y) = P(X < Y).
14.1.1 Sign Test for Matched Pairs

No.  Method A  Method B  Difference    No.  Method A  Method B  Difference
 i      x_i       y_i        d_i        i      x_i       y_i        d_i
 1      6.3       5.2        1.1       14      7.7       7.4        0.3
 2      6.3       6.6       -0.3       15      7.4       7.4        0
 3      3.5       2.3        1.2       16      5.6       4.9        0.7
 4      5.1       4.4        0.7       17      6.3       5.4        0.9
 5      5.5       4.1        1.4       18      8.4       8.4        0
 6      7.7       6.4        1.3       19      5.6       5.1        0.5
 7      6.3       5.7        0.6       20      4.8       4.4        0.4
 8      2.8       2.3        0.5       21      4.3       4.3        0
 9      3.4       3.2        0.2       22      4.2       4.1        0.1
10      5.7       5.2        0.5       23      3.3       2.2        1.1
11      5.6       4.9        0.7       24      3.8       4.0       -0.2
12      6.2       6.1        0.1       25      5.7       5.8       -0.1
13      6.6       6.3        0.3       26      4.1       4.0        0.1
14.1.1 Sign Test for Matched Pairs
Note that for the matched-pairs test all tied entries (x_i = y_i) are disregarded. Then

n = 23, since x_i = y_i for i = 15, 18, 21,
s^+ = 20, \quad s^- = 3.

Using

z = \frac{s^+ - n/2 - 1/2}{\sqrt{n/4}}:
14.1.1 Sign Test for Matched Pairs

z = \frac{20 - 23/2 - 1/2}{\sqrt{23/4}} = 3.336

Two-sided p-value: 2(1 - \Phi(3.336)) = 0.0008.

This indicates a significant difference between Method A and Method B.
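In SAS, assuming the paired data are in a dataset pairs with variables a and b (hypothetical names), PROC UNIVARIATE on the differences reports the sign test (and the signed rank test of the next section) directly:

data diffs;
   set pairs;
   d = a - b;   /* paired differences */
run;

proc univariate data=diffs mu0=0;
   var d;       /* the 'Tests for Location' table includes the sign test */
run;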



14.1.2 Wilcoxon Signed Rank Test
Who is Frank Wilcoxon?

Born September 2, 1892, to American parents in County Cork, Ireland, Frank Wilcoxon grew up in Catskill, New York, although he received part of his education in England. In 1917 he graduated from Pennsylvania Military College with a B.S. He then received his M.S. in chemistry in 1921 from Rutgers University, and in 1924 his PhD in physical chemistry from Cornell University.

In 1945 he published a paper containing the two tests he is most remembered for: the Wilcoxon signed rank test and the Wilcoxon rank sum test. His interest in statistics can be credited to R.A. Fisher's text, Statistical Methods for Research Workers (1925).

Over the course of his career Wilcoxon published 70 papers.
14.1.2 Wilcoxon Signed Rank Test
Alternative Method to the Sign Test

The Wilcoxon Signed Rank Test improves on the sign test: unlike the sign test, it not only looks at whether x_i > \tilde{\mu}_0 or x_i < \tilde{\mu}_0, but also considers the magnitude of the difference d_i = x_i - \tilde{\mu}_0.
14.1.2 Wilcoxon Signed Rank Test
Note: the Wilcoxon signed rank test assumes that the population distribution is symmetric. (Symmetry is not required for the sign test.)
14.1.2 Wilcoxon Signed Rank Test

Step 1:
Rank-order the differences d_i by their absolute values.

Step 2:
w^+ = sum of the ranks r_i of the positive differences;
w^- = sum of the ranks r_i of the negative differences.

If we assume no ties, then

w^+ + w^- = r_1 + r_2 + \dots + r_n = 1 + 2 + 3 + \dots + n = \frac{n(n+1)}{2}.
14.1.2 Wilcoxon Signed Rank Test

Step 3:
Reject H_0 if w^+ is large or, equivalently, if w^- is small!
14.1.2 Wilcoxon Signed Rank Test
The sizes of w^+ and w^- needed to reject H_0 at level \alpha are determined from the distributions of the corresponding random variables W^+ and W^- when H_0 is true. Since these null distributions are identical and symmetric, the common random variable is denoted by W.

p-value = P(W \ge w^+) = P(W \le w^-)

Reject H_0 if the p-value is \le \alpha.
14.1.2 Wilcoxon Signed Rank Test

Let Z_i = 1 if the ith rank corresponds to a positive sign and Z_i = 0 if it corresponds to a negative sign. Then

W^+ = \sum_{i=1}^{n} i Z_i,

where the Z_i \sim Bernoulli(p) with p = P(x_i > \tilde{\mu}_0), and p = 1/2 under H_0.

E(W^+) = E(\sum i Z_i)
       = E(1 Z_1 + 2 Z_2 + \dots + n Z_n)
       = 1 E(Z_1) + 2 E(Z_2) + \dots + n E(Z_n)      [E(Z_1) = E(Z_2) = \dots = E(Z_n)]
       = (1 + 2 + 3 + \dots + n) E(Z_1)
       = \frac{n(n+1)}{2} p
14.1.2 Wilcoxon Signed Rank Test

Var(W^+) = Var(\sum i Z_i)
         = Var(1 Z_1) + Var(2 Z_2) + \dots + Var(n Z_n)      [the Z_i are independent]
         = 1^2 Var(Z_1) + 2^2 Var(Z_2) + \dots + n^2 Var(Z_n)
         = (1^2 + 2^2 + \dots + n^2) Var(Z_1)
         = \frac{n(n+1)(2n+1)}{6} p(1-p)
14.1.2 Wilcoxon Signed Rank Test

A z-test is then based on the statistic

z = \frac{w^+ - n(n+1)/4 - 1/2}{\sqrt{n(n+1)(2n+1)/24}}.

H_0: \tilde{\mu} = \tilde{\mu}_0 vs. H_a: \tilde{\mu} > \tilde{\mu}_0: reject H_0 if z \ge z_\alpha.
H_0: \tilde{\mu} = \tilde{\mu}_0 vs. H_a: \tilde{\mu} < \tilde{\mu}_0: reject H_0 if z \le -z_\alpha.
H_0: \tilde{\mu} = \tilde{\mu}_0 vs. H_a: \tilde{\mu} \ne \tilde{\mu}_0: reject H_0 if (1) z \ge z_{\alpha/2} or (2) z \le -z_{\alpha/2}.

The two-sided p-value is 2 P(W \ge w_{max}) = 2 P(W \le w_{min}).
14.1.2 Summary
Signed Rank Test vs. Sign Test

The signed rank test:
- weighs each signed difference by its rank;
- if the positive differences are greater in magnitude than the negative differences, they receive higher ranks, resulting in a larger value of w^+;
- this improves the power of the signed rank test;
- but it also affects the type I error if the population distribution is NOT symmetric. YOU WOULDN'T WANT THAT TO HAPPEN!

The sign test merely counts the number of positive and negative differences.
14.1.2 Summary
Signed Rank Test vs. Sign Test: and the winner (the preferred test) is...

"I pity the fool that messes with the Wilcoxon Signed Rank Test!!!"

No.  Method A  Method B  Diff   Rank    No.  Method A  Method B  Diff   Rank
 i      x_i       y_i     d_i    r_i     i      x_i       y_i     d_i    r_i
 1      6.3       5.2     1.1   19.5    14      7.7       7.4     0.3     8
 2      6.3       6.6    -0.3    8      15      7.4       7.4     0       -
 3      3.5       2.3     1.2   21      16      5.6       4.9     0.7    16
 4      5.1       4.4     0.7   16      17      6.3       5.4     0.9    18
 5      5.5       4.1     1.4   23      18      8.4       8.4     0       -
 6      7.7       6.4     1.3   22      19      5.6       5.1     0.5    12
 7      6.3       5.7     0.6   14      20      4.8       4.4     0.4    10
 8      2.8       2.3     0.5   12      21      4.3       4.3     0       -
 9      3.4       3.2     0.2    5.5    22      4.2       4.1     0.1     2.5
10      5.7       5.2     0.5   12      23      3.3       2.2     1.1    19.5
11      5.6       4.9     0.7   16      24      3.8       4.0    -0.2     5.5
12      6.2       6.1     0.1    2.5    25      5.7       5.8    -0.1     2.5
13      6.6       6.3     0.3    8      26      4.1       4.0     0.1     2.5
14.1.2 Wilcoxon Signed Rank Test
w^- = 8 + 5.5 + 2.5 = 16, so

w^+ = \frac{23(24)}{2} - 16 = 260.

z = \frac{260 - 23(24)/4 - 1/2}{\sqrt{23(24)(47)/24}} = 3.695

Two-sided p-value: 2(1 - \Phi(3.695)) = 0.0002.
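A minimal SAS data-step check of this large-sample calculation (numbers taken from the example above):

data wsr;
   n = 23; wplus = 260;
   z = (wplus - n*(n+1)/4 - 0.5) / sqrt(n*(n+1)*(2*n+1)/24);
   p = 2*(1 - probnorm(z));   /* two-sided p-value */
   put z= p=;                 /* z=3.695, p=0.0002 */
run;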
14.1.2 Wilcoxon Signed Rank Test

If d_i = 0, the observation is dropped and only the nonzero differences are retained.

When several |d_i|'s are tied for the same rank, each is assigned the average of the ranks they occupy, called the midrank.
14.1.2 Wilcoxon Signed Rank Test

Observations sorted by |d_i| (midranks assigned to ties):

No.   A     B    Diff   Rank     No.   A     B    Diff   Rank
 i   x_i   y_i   d_i    r_i       i   x_i   y_i   d_i    r_i
15   7.4   7.4    0      -        8   2.8   2.3    0.5   12
18   8.4   8.4    0      -       10   5.7   5.2    0.5   12
21   4.3   4.3    0      -       19   5.6   5.1    0.5   12
12   6.2   6.1    0.1    2.5      7   6.3   5.7    0.6   14
22   4.2   4.1    0.1    2.5      4   5.1   4.4    0.7   16
25   5.7   5.8   -0.1    2.5     11   5.6   4.9    0.7   16
26   4.1   4.0    0.1    2.5     16   5.6   4.9    0.7   16
 9   3.4   3.2    0.2    5.5     17   6.3   5.4    0.9   18
24   3.8   4.0   -0.2    5.5      1   6.3   5.2    1.1   19.5
 2   6.3   6.6   -0.3    8       23   3.3   2.2    1.1   19.5
13   6.6   6.3    0.3    8        3   3.5   2.3    1.2   21
14   7.7   7.4    0.3    8        6   7.7   6.4    1.3   22
20   4.8   4.4    0.4   10        5   5.5   4.1    1.4   23
In the sorted table we see that for i = 12, 22, 25, 26 we have |d_i| = 0.1: four differences tied for ranks 1 through 4. Their midrank is

\frac{1 + 2 + 3 + 4}{4} = \frac{10}{4} = 2.5,

so the ranks of these four differences are not 1, 2, 3, 4 but rather 2.5 each.
14.2 Inferences for Independent Samples

1. Wilcoxon rank sum test
Assumption: there are no ties in the two samples.
Hypotheses: H_0: \theta_1 = \theta_2 \quad vs. \quad H_a: \theta_1 > \theta_2

Step 1: Rank all N = n_1 + n_2 observations together.
Step 2: Sum the ranks of the two samples separately (w_1 = sum of the ranks of the x's, w_2 = sum of the ranks of the y's).
Step 3: Reject the null hypothesis if w_1 is large or if w_2 is small.

Problem: the distributions of W_1 and W_2 are not the same when n_1 \ne n_2.
14.2.1 Wilcoxon-Mann-Whitney Test

2. Mann-Whitney test

Step 1: Compare each x_i with each y_j
(u_1 = number of pairs in which x_i > y_j, u_2 = number of pairs in which x_i < y_j).
Step 2: Reject H_0 if u_1 is large or u_2 is small.

Relation to the rank sum statistics:

u_1 = w_1 - \frac{n_1(n_1+1)}{2}, \quad u_2 = w_2 - \frac{n_2(n_2+1)}{2}.

P-value: P(U_1 \ge u_1) = P(U_2 \le u_2).

For large samples we approximate by the normal distribution. When H_0 is true,

E(U) = \frac{n_1 n_2}{2}, \quad Var(U) = \frac{n_1 n_2 (N+1)}{12}.

Rejection rule: reject H_0 if

z = \frac{u_1 - E(U) - 1/2}{\sqrt{Var(U)}} \ge z_\alpha.
14.2.1 Wilcoxon-Mann-Whitney Test

Example: Failure Times of Capacitors

Table 1: Times to Failure
Control Group        Stressed Group
 5.2   17.1           1.1    7.2
 8.5   17.9           2.3    9.1
 9.8   23.7           3.2   15.2
12.3   29.8           6.3   18.3
                      7.0   21.1

Table 2: Ranks of Times to Failure
Control Group        Stressed Group
 4     13             1      7
 8     14             2      9
10     17             3     12
11     18             5     15
                      6     16

Let F_1 be the c.d.f. of the control group and F_2 the c.d.f. of the stressed group.

H_0: F_1 = F_2 \quad vs. \quad H_a: F_1 < F_2

Test statistics: w_1 = 95, w_2 = 76, u_1 = 59, u_2 = 21.
P-value = .051 from Table A.11.

Compare with the large-sample normal approximation:

z = \frac{59 - (8)(10)/2 - 1/2}{\sqrt{(8)(10)(19)/12}} = 1.643

P-value = .052.
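Assuming the capacitor data are stacked in one dataset (the variable names group and time are illustrative), PROC NPAR1WAY with the WILCOXON option reproduces the rank sum test and its normal approximation:

data capacitors;
   input group $ time @@;
   datalines;
C 5.2  C 8.5  C 9.8  C 12.3 C 17.1 C 17.9 C 23.7 C 29.8
S 1.1  S 2.3  S 3.2  S 6.3  S 7.0  S 7.2  S 9.1  S 15.2
S 18.3 S 21.1
;

proc npar1way data=capacitors wilcoxon;
   class group;   /* control vs. stressed */
   var time;
run;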
14.2.1 Wilcoxon-Mann-Whitney Test

Null Distribution of the Wilcoxon-Mann-Whitney Test Statistic

Assumption: under H_0, all N = n_1 + n_2 observations come from the common distribution F_1 = F_2.

All possible orderings of these observations, with n_1 coming from F_1 and n_2 coming from F_2, are equally likely.
14.2.1 Wilcoxon-Mann-Whitney Test

Example: Find the null distribution of W_1 and U_1 when n_1 = 2 and n_2 = 2.

Ranks 1 2 3 4    w_1   u_1        Null distribution of W_1 and U_1
x x y y           3     0          w_1   u_1    p
x y x y           4     1           3     0    1/6
x y y x           5     2           4     1    1/6
y x x y           5     2           5     2    2/6
y x y x           6     3           6     3    1/6
y y x x           7     4           7     4    1/6
14.2.2 Wilcoxon-Mann-Whitney Confidence Interval

Assume the F_1 and F_2 distributions belong to a location parameter family with location parameters \theta_1 and \theta_2, respectively:

F_1(x) = F(x - \theta_1) \quad and \quad F_2(y) = F(y - \theta_2),

where F is a common unknown distribution function and \theta_1, \theta_2 are the respective population medians.
14.2.2 Wilcoxon-Mann-Whitney Confidence Interval

A CI for \theta_1 - \theta_2 can be obtained by inverting the Mann-Whitney test. The procedure is as follows:

Step 1: Calculate all N = n_1 n_2 differences

d_{ij} = x_i - y_j \quad (1 \le i \le n_1, \; 1 \le j \le n_2)

and rank them:

d_{(1)} \le d_{(2)} \le \dots \le d_{(N)},

where d_{(i)} is the ith ordered value of the differences.

Step 2: Let u_{\alpha/2} be the lower \alpha/2 critical point of the null distribution of the U statistic. Then a 100(1 - \alpha)% CI for \theta_1 - \theta_2 is given by

[\, d_{(u_{\alpha/2}+1)}, \; d_{(N - u_{\alpha/2})} \,].
14.2.2 Wilcoxon-Mann-Whitney Confidence Interval

Example: Find a 95% CI for the difference between the median failure times of the control group and the thermally stressed group of capacitors, using the data from Example 14.7.

n_1 = 8, n_2 = 10, N = n_1 n_2 = 8 \times 10 = 80.

The lower 2.2% critical point of the distribution of U is 17; by symmetry, the upper 2.2% critical point is 80 - 17 = 63. Setting \alpha/2 = 0.022, we get 1 - \alpha = 1 - 0.044 = 0.956, so we obtain a 95.6% CI for the difference between the median failure times:

[\, d_{(18)}, \; d_{(63)} \,],

where the d_{(i)} are the ordered values of the differences d_{ij} = x_i - y_j. The differences are calculated in an array form in Table 14.7. Counting the 18th ordered difference from the lower end and the 18th from the upper end gives the 95.6% CI for the difference of the two medians:

[\, d_{(18)}, \; d_{(63)} \,] = [-1.1, \; 14.7].
14.2.2 Wilcoxon-Mann-Whitney Confidence Interval
Table A.11 (pg. 684): Upper Tail Probabilities P(W \ge w_1)

n_1   n_2   u_1 = upper critical point (80 - u_1 = lower critical point)   P(W \ge w_1)
 8    10    59 (80 - 59 = 21)                                              0.051
 8    10    62 (80 - 62 = 18)                                              0.027
 8    10    63 (80 - 63 = 17)                                              0.022
 8    10    66 (80 - 66 = 14)                                              0.010
 8    10    68 (80 - 68 = 12)                                              0.006
14.3 Inferences for Several Independent Samples
One-Way Layout Experiment

Completely randomized design:
- Compares a > 2 treatments.
- The available experimental units are randomly assigned to the treatments.
- The numbers of experimental units in the different treatment groups need not be the same.
- The data are classified according to the level of a single treatment factor.

                 Treatment
         1         2        ...      a
       x_11      x_21       ...    x_a1
       x_12      x_22       ...    x_a2
        ...       ...       ...     ...
       x_1n_1    x_2n_2     ...    x_an_a

Sample median:  \tilde{x}_1, \tilde{x}_2, ..., \tilde{x}_a
Sample SD:      s_1, s_2, ..., s_a
14.3 Inferences for Several Independent Samples
Examples of one-way layout experiments:
- Comparing the effectiveness of different pills on migraine.
- Comparing the durability of different tires.
- etc.
14.3 Inferences for Several Independent Samples
Assumptions

1. The data on the ith treatment form a random sample from a continuous c.d.f. F_i.
2. The random samples are independent.
3. F_i(y) = F(y - \theta_i), where \theta_i is the location parameter of F_i; here \theta_i = median of F_i.
14.3 Inferences for Several Independent Samples
Hypotheses

H_0: F_1 = F_2 = \dots = F_a
H_1: F_i < F_j \; for some \; i \ne j

Under the location model this is equivalent to:

H_0: \theta_1 = \theta_2 = \dots = \theta_a
H_1: \theta_i > \theta_j \; for some \; i \ne j

In other words: can we say that all the F_i's are the same?
14.3.1 Kruskal-Wallis Test
STEP 1:
Rank all N = \sum_{i=1}^{a} n_i observations in ascending order, assigning midranks in case of ties. Let r_{ij} = rank(y_{ij}). Then

\sum_{i,j} r_{ij} = 1 + 2 + \dots + N = \frac{N(N+1)}{2}, \quad E[r] = \frac{N+1}{2}.

STEP 2:
Calculate the rank sums r_i = \sum_{j=1}^{n_i} r_{ij} and the rank averages \bar{r}_i = r_i / n_i, i = 1, 2, ..., a.
14.3.1 Kruskal-Wallis Test
STEP 3:
Calculate the Kruskal-Wallis test statistic

kw = \frac{12}{N(N+1)} \sum_{i=1}^{a} n_i \left( \bar{r}_i - \frac{N+1}{2} \right)^2 = \frac{12}{N(N+1)} \sum_{i=1}^{a} \frac{r_i^2}{n_i} - 3(N+1).

STEP 4:
Reject H_0 for large values of kw. If the n_i are large, kw follows a chi-square distribution with a - 1 degrees of freedom under H_0.
14.3.1 Kruskal-Wallis Test
Example:
NRMA, the world's biggest car insurance company, has decided to test the durability of tires from 4 major companies.
14.3.1 Kruskal-Wallis Test
Example:

Average Test Scores (tires from 4 major companies)

Tire 1   Tire 2   Tire 3   Tire 4
14.59    20.27    27.82    33.16
23.44    26.84    24.92    26.93
25.43    14.71    28.68    30.43
18.15    22.34    23.32    36.43
20.82    19.49    32.85    37.04
14.06    24.92    33.90    29.76
14.26    20.20    23.42    33.88

Ranks of Average Test Scores (the two 24.92's are tied and receive midrank 14.5)

Tire 1   Tire 2   Tire 3   Tire 4
  3        8       19       24
 13       17       14.5     18
 16        4       20       22
  5       10       11       27
  9        6       23       28
  1       14.5     26       21
  2        7       12       25

Rank sums: 49, 66.5, 125.5, 165
14.3.1 Kruskal-Wallis Test
Example (continued):

kw = \frac{12}{N(N+1)} \sum_{i=1}^{a} \frac{r_i^2}{n_i} - 3(N+1)
   = \frac{12}{28(29)} \left[ \frac{(49)^2}{7} + \frac{(66.5)^2}{7} + \frac{(125.5)^2}{7} + \frac{(165)^2}{7} \right] - 3(29)
   = 18.134
Example (continued):

kw = 18.134 > \chi^2_{3,.005} = 12.837,

so we reject H_0 at \alpha = .005: the tire brands differ significantly in durability.
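Assuming the scores are stacked in a dataset tires with variables brand and score (hypothetical names), PROC NPAR1WAY with the WILCOXON option reports this same Kruskal-Wallis chi-square statistic when there are more than two groups:

proc npar1way data=tires wilcoxon;
   class brand;
   var score;
run;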
14.3.2 Pairwise Comparisons
Comparing two treatment groups:

Under H_0, E(\bar{R}_i - \bar{R}_j) = 0 and

Var(\bar{R}_i - \bar{R}_j) = \frac{N(N+1)}{12} \left( \frac{1}{n_i} + \frac{1}{n_j} \right).

For large n_i, \bar{R}_i - \bar{R}_j is approximately normally distributed, giving

z_{ij} = \frac{\bar{r}_i - \bar{r}_j}{\sqrt{\frac{N(N+1)}{12}\left(\frac{1}{n_i} + \frac{1}{n_j}\right)}}.
14.3.2 Pairwise Comparisons
To control the familywise type I error rate at level \alpha, the |z_{ij}| statistics should be referred to the appropriate Studentized range distribution: the Tukey method (Chapter 12).
14.3.2 Pairwise Comparisons
Declare treatments i and j different if

|z_{ij}| > \frac{q_{a,\infty,\alpha}}{\sqrt{2}}, \quad or equivalently \quad |\bar{r}_i - \bar{r}_j| > \frac{q_{a,\infty,\alpha}}{\sqrt{2}} \sqrt{\frac{N(N+1)}{12}\left(\frac{1}{n_i} + \frac{1}{n_j}\right)},

where a = number of treatment groups compared and the degrees of freedom = \infty (assumption: the samples are large). Compare with the critical constant q_{a,\infty,\alpha}.
Example:

Ranks of Average Test Scores (tires from 4 major companies)

Tire 1   Tire 2   Tire 3   Tire 4
  3        8       19       24
 13       17       14.5     18
 16        4       20       22
  5       10       11       27
  9        6       23       28
  1       14.5     26       21
  2        7       12       25

Rank sums: 49, 66.5, 125.5, 165
14.3.2 Pairwise Comparisons
Example (continued): Let \alpha = .05. The critical difference for |\bar{r}_i - \bar{r}_j| is

\frac{q_{4,\infty,.05}}{\sqrt{2}} \sqrt{\frac{(28)(29)}{12}\left(\frac{1}{7} + \frac{1}{7}\right)} = \frac{3.63}{\sqrt{2}} \sqrt{19.33} = 11.29.

Since |\bar{r}_1 - \bar{r}_4| = |49/7 - 165/7| = 16.57 > 11.29, tire brands 1 and 4 differ significantly. We differ from GOODYEAR!!!
14.4 Inferences for Several Matched Samples
Randomized Block Design: the Friedman Test

The Friedman test is a distribution-free rank-based test for comparing a \ge 2 treatments in a randomized block design with b \ge 2 blocks.

Hypotheses:
H_0: F_{1j} = F_{2j} = \dots = F_{aj} \; (for each block j)
H_1: F_{ij} < F_{kj} \; for some \; i \ne k

Under the location model this is equivalent to:
H_0: \theta_1 = \theta_2 = \dots = \theta_a
H_1: \theta_i > \theta_k \; for some \; i \ne k
14.4.1 Friedman Test

STEP 1:
Within each block j, rank the a observations in ascending order, assigning midranks in case of ties: r_{ij} = rank of y_{ij} within block j.

STEP 2:
Calculate the rank sums r_i = \sum_{j=1}^{b} r_{ij}, i = 1, 2, ..., a.
14.4.1 Friedman Test
STEP 3:
Calculate the Friedman test statistic

fr = \frac{12}{ab(a+1)} \sum_{i=1}^{a} \left( r_i - \frac{b(a+1)}{2} \right)^2 = \frac{12}{ab(a+1)} \sum_{i=1}^{a} r_i^2 - 3b(a+1).

STEP 4:
Reject H_0 for large values of fr. If b is large, fr follows a chi-square distribution with a - 1 degrees of freedom under H_0.
14.4.1 Friedman Test
Example: Drip Loss in Meat Loaves

Oven      Batch 1        Batch 2        Batch 3        Rank
Position  Value  Rank    Value  Rank    Value  Rank    Sum
1         7.33   8       8.11   8       8.06   7       23
2         3.22   1       3.72   1       4.28   1        3
3         3.28   2.5     5.11   4       4.56   2        8.5
4         6.44   7       5.78   6       8.61   8       21
5         3.83   4       6.50   7       7.72   5       16
6         3.28   2.5     5.11   4       5.56   3        9.5
7         5.06   6       5.11   4       7.83   6       16
8         4.44   5       4.28   2       6.33   4       11
14.4.1 Friedman Test
Example (continued): The Friedman test statistic equals

fr = \frac{12}{ab(a+1)} \sum_{i=1}^{a} r_i^2 - 3b(a+1)
   = \frac{12}{8 \cdot 3 \cdot 9} \left[ 23^2 + 3^2 + 8.5^2 + 21^2 + 16^2 + 9.5^2 + 16^2 + 11^2 \right] - 3 \cdot 3 \cdot 9
   = 17.583 > \chi^2_{7,.025} = 16.012,

indicating significant differences between the oven positions. However, the number of blocks is only 3, so the large-sample chi-square approximation may not be accurate.
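A minimal SAS data-step check of this statistic from the rank sums above:

data friedman;
   a = 8; b = 3;   /* 8 oven positions, 3 batches */
   array r[8] _temporary_ (23 3 8.5 21 16 9.5 16 11);   /* rank sums */
   sumsq = 0;
   do i = 1 to a;
      sumsq + r[i]**2;
   end;
   fr = 12/(a*b*(a+1)) * sumsq - 3*b*(a+1);
   put fr=;   /* fr=17.583 */
run;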
14.4.2 Pairwise Comparisons
Comparing two treatments:

Under H_0, E(\bar{R}_i - \bar{R}_j) = 0 and Var(\bar{R}_i - \bar{R}_j) = \frac{a(a+1)}{6b}.

As in the case of the Kruskal-Wallis test, treatments i and j can be declared different at significance level \alpha if

|\bar{r}_i - \bar{r}_j| > \frac{q_{a,\infty,\alpha}}{\sqrt{2}} \sqrt{\frac{a(a+1)}{6b}}.
14.5 Rank Correlation Methods
What is Correlation?

Correlation indicates the strength and direction of a linear relationship between two random variables.

In general statistical usage, correlation refers to the departure of two variables from independence.

Correlation does not imply causation.
14.5.1 Spearman's Rank Correlation Coefficient
Charles Edward Spearman

Born September 10, 1863; died September 17, 1945 (aged 82).

An English psychologist known for his work in statistics, as a pioneer of factor analysis, and for Spearman's rank correlation coefficient.

BTW, he looks like Sean Connery.
A 19-country study: yearly alcohol consumption from wine vs. yearly heart disease deaths (per 100,000).
14.5.1 Spearman's Rank Correlation Coefficient
What are we correlating?

No.  Country       Alcohol from  Heart Disease  Rank X  Rank Y   D_i
                   Wine (X_i)    Deaths (Y_i)   (U_i)   (V_i)    (U_i - V_i)
 1   Australia        2.5           211          11      12.5     -1.5
 2   Austria          3.9           167          15       6.5      8.5
 3   Belgium          2.9           131          13.5     5        8.5
 4   Canada           2.4           191          10       9        1
 5   Denmark          2.9           220          13.5    14       -0.5
 6   Finland          0.8           297           3      18      -15
 7   France           9.1            71          19       1       18
 8   Iceland          0.8           211           3      12.5     -9.5
 9   Ireland          0.7           300           1      19      -18
10   Italy            7.9           107          18       3       15
11   Netherlands      1.8           167           8       6.5      1.5
12   New Zealand      1.9           266           9      16       -7
13   Norway           0.8           227           3      15      -12
14   Spain            6.5            86          17       2       15
15   Sweden           1.6           207           7      11       -4
16   Switzerland      5.8           115          16       4       12
17   UK               1.3           285           6      17      -11
18   US               1.2           199           5      10       -5
19   W. Germany       2.7           172          12       8        4
14.5.1 Spearman's Rank Correlation Coefficient
Spearman's Rank Correlation Coefficient

A nonparametric (distribution-free) rank statistic proposed in 1904 as a measure of the strength of the association between two variables. It provides a measure of monotone association, used when the distribution of the data makes Pearson's correlation coefficient undesirable.

With u_i = rank(x_i) and v_i = rank(y_i):

r_s = \frac{\sum_{i=1}^{n}(u_i - \bar{u})(v_i - \bar{v})}{\sqrt{\left(\sum_{i=1}^{n}(u_i - \bar{u})^2\right)\left(\sum_{i=1}^{n}(v_i - \bar{v})^2\right)}}

If there are no ties, this simplifies (with d_i = u_i - v_i) to:

r_s = 1 - \frac{6\sum_{i=1}^{n} d_i^2}{n(n^2 - 1)}
14.5.1 Spearman's Rank Correlation Coefficient
Relevant Formulas

r_s = 1 - \frac{6\sum_{i=1}^{n} d_i^2}{n(n^2 - 1)}

Example: From the previous data we calculate

r_s = 1 - \frac{(6)(2081.5)}{(19)(360)} = -0.826
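Assuming the table above is stored in a dataset wine with variables alcohol and deaths (hypothetical names), PROC CORR reproduces this coefficient directly:

proc corr data=wine spearman nosimple;
   var alcohol deaths;
run;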
14.5.1 Spearman's Rank Correlation Coefficient
Hypothesis Testing Using Spearman

H_0: X and Y are independent
H_a: X and Y are positively (or, as here, negatively) associated
14.5.1 Spearman's Rank Correlation Coefficient
For large n (> 10), R_s is approximately normally distributed with

E(R_s) = 0 \quad and \quad Var(R_s) = \frac{1}{n-1},

so we use the test statistic z = r_s \sqrt{n-1}.
14.5.1 Spearman's Rank Correlation Coefficient
Example: From the previous data,

z = r_s \sqrt{n-1} = -0.826 \sqrt{18} = -3.504,

giving a two-sided P-value of about 0.0005: wine consumption and heart disease deaths show a strong negative association.
14.5.2 Kendall's Rank Correlation Coefficient
Who was Maurice Kendall?

Born September 6, 1907; died March 29, 1983 (aged 75).

Maurice Kendall was born in Kettering, Northamptonshire. He studied mathematics at St. John's College, Cambridge, where he played cricket and chess. After graduating as a Mathematics Wrangler in 1929, he joined the British Civil Service in the Ministry of Agriculture, where he became increasingly interested in using statistics.

He developed his rank correlation coefficient (Kendall's tau) in 1938.
14.5.2 Kendall's Rank Correlation Coefficient
Kendall's Rank Correlation Coefficient

Consider a pair of bivariate random observations (X_i, Y_i) and (X_j, Y_j).

Concordant: (X_i - X_j)(Y_i - Y_j) > 0, which implies

X_i > X_j AND Y_i > Y_j, \quad or \quad X_i < X_j AND Y_i < Y_j.
14.5.2 Kendall's Rank Correlation Coefficient
Kendall's Rank Correlation Coefficient

Discordant: (X_i - X_j)(Y_i - Y_j) < 0, which implies

X_i > X_j AND Y_i < Y_j, \quad or \quad X_i < X_j AND Y_i > Y_j.
14.5.2 Kendall's Rank Correlation Coefficient
Kendall's Rank Correlation Coefficient

Tied pair: (X_i - X_j)(Y_i - Y_j) = 0, which implies

X_i = X_j \quad OR \quad Y_i = Y_j \quad (or both).
14.5.2 Kendall's Rank Correlation Coefficient
Relevant Formulas

\pi_c = P(concordant) = P((X_i - X_j)(Y_i - Y_j) > 0)
\pi_d = P(discordant) = P((X_i - X_j)(Y_i - Y_j) < 0)

\tau = \pi_c - \pi_d, \quad -1 \le \tau \le +1
14.5.2 Kendall's Rank Correlation Coefficient
Relevant Formulas

N_c = number of concordant pairs
N_d = number of discordant pairs
N = \binom{n}{2} = total number of pairs

\hat{\tau} = \frac{N_c - N_d}{N}
14.5.2 Kendall's Rank Correlation Coefficient
Formula Continued

If there are ties, the formula is modified. Suppose there are g groups of tied X_i's with a_j tied observations in the jth group, and h groups of tied Y_i's with b_j tied observations in the jth group. Let

T_x = \sum_{j=1}^{g} \binom{a_j}{2}, \quad T_y = \sum_{j=1}^{h} \binom{b_j}{2}.

Then

\hat{\tau} = \frac{N_c - N_d}{\sqrt{(N - T_x)(N - T_y)}}.
14.5.2 Kendall's Rank Correlation Coefficient
Formula Explanation

Five pairs of observations: (x, y) = (1,3), (1,4), (1,5), (2,5), (3,4).

There is g = 1 group of a_1 = 3 tied x's equal to 1, and there are h = 2 groups of tied y's: group 1 has b_1 = 2 tied y's equal to 4, and group 2 has b_2 = 2 tied y's equal to 5.
14.5.2 Kendall's Rank Correlation Coefficient
Formula Example (continued)

For the wine-consumption data of the 19-country study (three x's tied at 0.8 and two at 2.9; two y's tied at 211 and two at 167):

T_x = \binom{3}{2} + \binom{2}{2} = 3 + 1 = 4, \quad T_y = \binom{2}{2} + \binom{2}{2} = 1 + 1 = 2.
Data (countries sorted by X_i; N_ci, N_di, N_ti count the concordant, discordant, and tied pairs formed with the observations listed below each country):

 i  Country        X_i   Y_i   N_ci  N_di  N_ti
 1  Ireland        0.7   300    0    18    0
 2  Iceland        0.8   211    3    11    3
 3  Norway         0.8   227    2    13    1
 4  Finland        0.8   297    0    15    0
 5  US             1.2   199    5     9    0
 6  UK             1.3   285    0    13    0
 7  Sweden         1.6   207    3     9    0
 8  Netherlands    1.8   167    5     5    1
 9  New Zealand    1.9   266    0    10    0
10  Canada         2.4   191    2     7    0
11  Australia      2.5   211    1     7    0
12  West Germany   2.7   172    1     6    0
13  Belgium        2.9   131    2     4    0
14  Denmark        2.9   220    0     5    0
15  Austria        3.9   167    0     4    0
16  Switzerland    5.8   115    0     3    0
17  Spain          6.5    86    1     1    0
18  Italy          7.9   107    0     1    0
19  France         9.1    71    0     0    0

Totals: N_c = 25, N_d = 141, N_t = 5

\hat{\tau} = \frac{N_c - N_d}{\sqrt{(N - T_x)(N - T_y)}} = \frac{25 - 141}{\sqrt{(171 - 4)(171 - 2)}} = -0.690
14.5.2 Kendall's Rank Correlation Coefficient
Hypothesis Testing

H_0: \tau = 0 \quad vs. \quad H_a: \tau \ne 0

Under H_0,

E(\hat{\tau}) = 0 \quad and \quad Var(\hat{\tau}) = \frac{2(2n+5)}{9n(n-1)},

so the test statistic is

z = \hat{\tau} \sqrt{\frac{9n(n-1)}{2(2n+5)}}.

Testing example: with \hat{\tau} = -0.690 and n = 19,

z = -0.690 \sqrt{\frac{(9)(19)(18)}{2(43)}} = -4.128,

two-sided P-value < 0.0001.
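A minimal SAS data-step check of this z statistic (values from the example above):

data kendallz;
   n = 19; tau = -0.690;
   z = tau * sqrt( 9*n*(n-1) / (2*(2*n+5)) );
   p = 2*probnorm(z);   /* two-sided p-value (z is negative) */
   put z= p=;           /* z=-4.128, p<0.0001 */
run;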
14.5.3 Kendall's Coefficient of Concordance

Kendall's Coefficient of Concordance
Q: Why do we need Kendall's coefficient of concordance?
A: It is a measure of association between several matched samples.

Q: Why not use Kendall's rank correlation coefficient instead?
A: Because it only works for two samples.

How can you apply this to real life?
A common and interesting example: a taste-testing experiment used four tasters to rank eight recipes, with the following results. Are the tasters in agreement? Hmm, let's find out!
14.5.3 Kendall's Coefficient of Concordance
Kendall's Coefficient of Concordance

            Taster Rank
Recipe    1    2    3    4    Sum
1         5    4    5    4     18
2         7    5    7    5     24
3         1    2    1    3      7
4         3    3    2    1      9
5         4    6    4    6     20
6         2    1    3    2      8
7         8    7    8    8     31
8         6    8    6    7     27
14.5.3 Kendall's Coefficient of Concordance

How does it work?
It is closely related to Friedman's test statistic (Section 14.4):
- The a treatments are the candidates (recipes).
- The b blocks are the judges (tasters).
- Each judge ranks the a candidates.
14.5.3 Kendall's Coefficient of Concordance
Kendall's Coefficient of Concordance

The discrepancy of the actual rank sums r_i from their common expected value b(a+1)/2 under complete lack of agreement,

d = \sum_{i=1}^{a} \left( r_i - \frac{b(a+1)}{2} \right)^2,

is a measure of the agreement between the judges.

The maximum value of this measure is attained when there is perfect agreement (the rank sums are then b, 2b, ..., ab in some order). It is given by:

d_{max} = \sum_{i=1}^{a} \left( ib - \frac{b(a+1)}{2} \right)^2 = \frac{b^2 a (a^2 - 1)}{12}
14.5.3 Kendall's Coefficient of Concordance
Kendall's Coefficient of Concordance

Kendall's w statistic is the discrepancy d of the rank sums divided by the maximum possible value it can take (attained when all judges are in perfect agreement):

w = \frac{d}{d_{max}} = \frac{12}{b^2 a (a^2 - 1)} \sum_{i=1}^{a} \left( r_i - \frac{b(a+1)}{2} \right)^2, \quad 0 \le w \le 1.
14.5.3 Kendall's Coefficient of Concordance
Kendall's Coefficient of Concordance

What relationship do w and fr, Friedman's statistic, have?

w = \frac{fr}{b(a - 1)}

Does Kendall's w statistic relate to Spearman's rank correlation coefficient? Only when there are b = 2 judges:

r_s = 2w - 1
14.5.3 Kendall's Coefficient of Concordance
Kendall's Coefficient of Concordance

Q: How can we perform statistical tests? What distribution does it follow?
To test w for statistical significance, refer fr = b(a-1)w to the chi-square (\chi^2) distribution with a - 1 degrees of freedom.
Kendall's Coefficient of Concordance
To find out whether the tasters are in agreement, we calculate Kendall's coefficient of concordance. Friedman's statistic is fr = 24.667, so

w = \frac{fr}{b(a-1)} = \frac{24.667}{(4)(7)} = 0.881.

Comparing fr = 24.667 with \chi^2_{7,.05} = 14.067: since fr exceeds this critical value, we conclude that the tasters agree.
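A minimal SAS data-step check using the rank sums from the taster table:

data concord;
   a = 8; b = 4;   /* 8 recipes, 4 tasters */
   array r[8] _temporary_ (18 24 7 9 20 8 31 27);   /* rank sums */
   sumsq = 0;
   do i = 1 to a;
      sumsq + r[i]**2;
   end;
   fr = 12/(a*b*(a+1)) * sumsq - 3*b*(a+1);
   w  = fr / (b*(a-1));
   put fr= w=;   /* fr=24.667, w=0.881 */
run;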
14.6.1 Permutation Tests
Permutation Test
1) General Idea
A permutation test is a type of statistical significance test in which a reference distribution is obtained by calculating all possible values of the test statistic under rearrangements of the labels on the observed data points. Confidence intervals can also be derived from these tests.
2) Inventor
The theory evolved from the works of R.A. Fisher and E.J.G. Pitman in the 1930s.
14.6.1 Permutation Tests
Major Theory and Derivation
The permutation test finds a p-value as the proportion of regroupings that would lead to a test statistic as extreme as the one observed. We'll consider the permutation test based on sample averages, although one could compute and compare other test statistics.

We have two samples that we wish to compare.

Hypotheses:
H_0: differences between the two samples are due to chance.
H_a: sample 2 tends to have higher values than sample 1, not due simply to chance.
H_a: sample 2 tends to have smaller values than sample 1, not due simply to chance.
H_a: there are differences between the two samples, not due simply to chance.
14.6.1 Permutation Tests
To see whether the observed difference d from our data supports H_0 or one of the selected alternatives, do the following steps of a permutation test:
1. Compute the observed difference d between the two sample means.
2. Pool all the observations and form every regrouping (or a large random sample of regroupings) into two samples of the original sizes.
3. Compute the difference in means for each regrouping.
4. The p-value is the proportion of regroupings whose difference is at least as extreme as d, in the direction(s) of H_a. A sketch in SAS follows below.
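A minimal Monte Carlo sketch of these steps in SAS (the data are made up; CALL RANPERM shuffles the pooled values in place):

data _null_;
   array v[8] v1-v8 (12 7 22 15 9 14 18 6);   /* pooled sample: first 4 = sample 1 */
   obs = mean(of v1-v4) - mean(of v5-v8);     /* observed difference in means */
   seed = 20240101;
   B = 10000; extreme = 0;
   do rep = 1 to B;
      call ranperm(seed, of v[*]);            /* random regrouping of the labels */
      d = mean(of v1-v4) - mean(of v5-v8);
      if abs(d) >= abs(obs) then extreme + 1;
   end;
   pvalue = extreme / B;                      /* two-sided permutation p-value */
   put obs= pvalue=;
run;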
14.6.2 Bootstrap Method
1) General Idea
Bootstrapping is a statistical method for estimating the sampling distribution of an estimator by sampling with replacement from the original sample, most often with the purpose of deriving robust estimates of standard errors and confidence intervals of a population parameter such as a mean, median, proportion, odds ratio, correlation coefficient, or regression coefficient.
2) Inventor
Bradley Efron (1938-present) introduced the bootstrap in 1979. His work has spanned both theoretical and applied topics, including empirical Bayes analysis, applications of differential geometry to statistical inference, the analysis of survival data, and inference for microarray gene expression data.
Homepage: http://stat.stanford.edu/~brad/
E-mail: brad@stat.stanford.edu
14.6.2 Bootstrap Method
3) Major Theory and Derivation
Consider the case where a random sample of size n is drawn from an unspecified probability distribution. The basic steps in the bootstrap procedure are as follows:
1. Draw a resample of size n, with replacement, from the original sample and compute the estimator of interest.
2. Repeat step 1 a large number B of times, yielding B bootstrap replicates of the estimator.
3. Use the empirical distribution of the B replicates (e.g., its standard deviation or its percentiles) as an estimate of the sampling distribution of the estimator.
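A minimal sketch of these steps in SAS, using PROC SURVEYSELECT to draw the resamples (the dataset and variable names mydata and x are hypothetical):

proc surveyselect data=mydata out=boot seed=12345
                  method=urs          /* unrestricted random sampling = with replacement */
                  samprate=1 outhits  /* each resample has the original size n */
                  reps=1000;          /* B = 1000 bootstrap resamples */
run;

proc means data=boot noprint;
   by replicate;
   var x;
   output out=bootmeans mean=xbar;    /* one mean per resample */
run;

proc means data=bootmeans std;        /* std dev of the resample means = */
   var xbar;                          /* bootstrap standard error of the mean */
run;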
14.6.3 Jackknife Method
1) General Idea
The jackknife is a statistical method for estimating and compensating for bias and for deriving robust estimates of standard errors and confidence intervals. Jackknifed statistics are created by systematically dropping out subsets of the data one at a time and assessing the resulting variation in the studied parameter.
2) Inventor
The jackknife was introduced by Maurice Quenouille (1949) as a bias-reduction technique and was later extended, and given its name, by John Tukey (1958), who used it for variance estimation.
14.6.3 Jackknife Method
3) Major Theory and Derivation
Now we briefly describe how to obtain the standard deviation of a generic estimator using the jackknife method. For simplicity we consider the average estimator. Let us consider the pseudo-values

x_{(i)} = n\bar{X} - (n-1)\bar{X}_{(i)}, \quad i = 1, ..., n,

where \bar{X} is the sample average and \bar{X}_{(i)} is the sample average of the data set deleting the ith point. Then we can define the average of the x_{(i)}:

\bar{x}_{(\cdot)} = \frac{1}{n} \sum_{i=1}^{n} x_{(i)}.

The jackknife estimate of the standard deviation is then defined as:

\hat{\sigma}_{jack} = \sqrt{ \frac{1}{n(n-1)} \sum_{i=1}^{n} \left( x_{(i)} - \bar{x}_{(\cdot)} \right)^2 }.
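A minimal data-step sketch of these formulas (made-up data; for the average estimator the pseudo-values reduce to the observations themselves, so this mainly illustrates the mechanics):

data _null_;
   array x[6] x1-x6 (3 7 4 9 5 8);   /* illustrative sample */
   n = 6;
   xbar = mean(of x[*]);
   sum = 0; sumsq = 0;
   do i = 1 to n;
      xbar_i = (n*xbar - x[i]) / (n - 1);   /* leave-one-out mean */
      ps = n*xbar - (n-1)*xbar_i;           /* pseudo-value x_(i) */
      sum + ps; sumsq + ps**2;
   end;
   psbar = sum / n;
   se_jack = sqrt( (sumsq - n*psbar**2) / (n*(n-1)) );
   put se_jack=;   /* equals s/sqrt(n) for the mean */
run;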

SAS program
%macro _SASTASK_DROPDS(dsname);
%IF %SYSFUNC(EXIST(&dsname)) %THEN %DO;
DROP TABLE &dsname;
%END;
%IF %SYSFUNC(EXIST(&dsname, VIEW)) %THEN %DO;
DROP VIEW &dsname;
%END;
%mend _SASTASK_DROPDS;

%LET _EGCHARTWIDTH=0;
%LET _EGCHARTHEIGHT=0;

PROC SQL;
%_SASTASK_DROPDS(WORK.SORTTempTableSorted);
QUIT;

PROC SQL;
CREATE VIEW WORK.SORTTempTableSorted
AS SELECT ScoreChange FROM MIHIR.AMS572;
QUIT;
TITLE;
TITLE1 "Distribution analysis of: ScoreChange";
Title2 " Wilcoxon Rank Sum Test";

ODS EXCLUDE CIBASIC BASICMEASURES EXTREMEOBS MODES MOMENTS QUANTILES;
PROC UNIVARIATE DATA = WORK.SORTTempTableSorted
MU0=0
;
VAR ScoreChange;
HISTOGRAM / NOPLOT ;

RUN; QUIT;
PROC SQL;
%_SASTASK_DROPDS(WORK.SORTTempTableSorted);
QUIT;
SAS program
Distribution analysis of: ScoreChange (Sign Test)

The UNIVARIATE Procedure
Variable: ScoreChange (Change in Test Scores)

Tests for Location: Mu0=0

Test          Statistic        p Value
Student's t   t   -0.80079     Pr > |t|    0.4402
Sign          M   -1           Pr >= |M|   0.7744
Signed Rank   S   -8.5         Pr >= |S|   0.5278
/**Kruskal-Wallis Test and Wilcoxon-Mann-Whitney Test **/
%macro _SASTASK_DROPDS(dsname);
%IF %SYSFUNC(EXIST(&dsname)) %THEN %DO;
DROP TABLE &dsname;
%END;
%IF %SYSFUNC(EXIST(&dsname, VIEW)) %THEN %DO;
DROP VIEW &dsname;
%END;
%mend _SASTASK_DROPDS;

%LET _EGCHARTWIDTH=0;
%LET _EGCHARTHEIGHT=0;

PROC SQL;
%_SASTASK_DROPDS(WORK.TMP0TempTableInput);
QUIT;

PROC SQL;
CREATE VIEW WORK.TMP0TempTableInput
AS SELECT PreTest, Gender FROM MIHIR.AMS572;
QUIT;

TITLE;
TITLE1 "Nonparametric One-Way ANOVA";

PROC NPAR1WAY DATA=WORK.TMP0TempTableInput WILCOXON
;
VAR PreTest;
CLASS Gender;

RUN; QUIT;
PROC SQL;
%_SASTASK_DROPDS(WORK.TMP0TempTableInput);
QUIT;
SAS program
Nonparametric One-Way ANOVA

The NPAR1WAY Procedure
Wilcoxon Scores (Rank Sums) for Variable PreTest, Classified by Variable Gender

Gender  N   Sum of Scores   Expected Under H0   Std Dev Under H0   Mean Score
F       7       40.0             45.50              6.146877        5.714286
M       5       38.0             32.50              6.146877        7.600000
Average scores were used for ties.

Wilcoxon Two-Sample Test
Statistic                     38.0000
Normal Approximation
  Z                            0.8134
  One-Sided Pr > Z             0.2080
  Two-Sided Pr > |Z|           0.4160
t Approximation
  One-Sided Pr > Z             0.2166
  Two-Sided Pr > |Z|           0.4332
Z includes a continuity correction of 0.5.

Kruskal-Wallis Test
Chi-Square                     0.8006
DF                             1
Pr > Chi-Square                0.3709
SAS program
/** Wilcoxon Signed Rank Test **/
%macro _SASTASK_DROPDS(dsname);
%IF %SYSFUNC(EXIST(&dsname)) %THEN %DO;
DROP TABLE &dsname;
%END;
%IF %SYSFUNC(EXIST(&dsname, VIEW)) %THEN %DO;
DROP VIEW &dsname;
%END;
%mend _SASTASK_DROPDS;

%LET _EGCHARTWIDTH=0;
%LET _EGCHARTHEIGHT=0;

PROC SQL;
%_SASTASK_DROPDS(WORK.SORTTempTableSorted);
QUIT;

PROC SQL;
CREATE VIEW WORK.SORTTempTableSorted
AS SELECT ScoreChange FROM MIHIR.AMS572;
QUIT;
TITLE;
TITLE1 "Distribution analysis of: ScoreChange";
TITLE2 "Wilcoxon Signed Rank Test";

ODS EXCLUDE CIBASIC BASICMEASURES EXTREMEOBS MODES MOMENTS QUANTILES;
PROC UNIVARIATE DATA = WORK.SORTTempTableSorted
MU0=0
;
VAR ScoreChange;
HISTOGRAM / NOPLOT ;
RUN; QUIT;
PROC SQL;
%_SASTASK_DROPDS(WORK.SORTTempTableSorted);
QUIT;
SAS program
Distribution analysis of: ScoreChange (Wilcoxon Signed Rank Test)

The UNIVARIATE Procedure
Variable: ScoreChange (Change in Test Scores)

Tests for Location: Mu0=0

Test          Statistic        p Value
Student's t   t   -0.80079     Pr > |t|    0.4402
Sign          M   -1           Pr >= |M|   0.7744
Signed Rank   S   -8.5         Pr >= |S|   0.5278
SAS program
/**Friedman Test **/
%macro _SASTASK_DROPDS(dsname);
%IF %SYSFUNC(EXIST(&dsname)) %THEN %DO;
DROP TABLE &dsname;
%END;
%IF %SYSFUNC(EXIST(&dsname, VIEW)) %THEN %DO;
DROP VIEW &dsname;
%END;
%mend _SASTASK_DROPDS;
%LET _EGCHARTWIDTH=0;
%LET _EGCHARTHEIGHT=0;

PROC SQL;
%_SASTASK_DROPDS(WORK.SORTTempTableSorted);
QUIT;

PROC SQL;
CREATE VIEW WORK.SORTTempTableSorted
AS SELECT Emotion, Subject, SkinResponse FROM WORK.HYPNOSIS1493;
QUIT;
TITLE; TITLE1 "Table Analysis";
TITLE2 "Results";

PROC FREQ DATA = WORK.SORTTempTableSorted
ORDER=INTERNAL
;
TABLES Subject * Emotion * SkinResponse /
NOROW
NOPERCENT
NOCUM
CMH SCORES=RANK
ALPHA=0.05;
RUN; QUIT;
PROC SQL;
%_SASTASK_DROPDS(WORK.SORTTempTableSorted);
QUIT;
SAS program
Table Analysis Results:

The FREQ Procedure:
Summary Statistics for Emotion by SkinResponse, Controlling for Subject

Cochran-Mantel-Haenszel Statistics (Based on Rank Scores)
Statistic   Alternative Hypothesis    DF   Value    Prob
1           Nonzero Correlation        1   0.2400   0.6242
2           Row Mean Scores Differ     3   6.4500   0.0917
3           General Association       84     .        .
At least 1 statistic not computed--singular covariance matrix.

Total Sample Size = 32

(With rank scores, the "Row Mean Scores Differ" CMH statistic corresponds to the Friedman test.)
SAS program
/*Spearman correlation*/
%macro _SASTASK_DROPDS(dsname);
%IF %SYSFUNC(EXIST(&dsname)) %THEN %DO;
DROP TABLE &dsname;
%END;
%IF %SYSFUNC(EXIST(&dsname, VIEW)) %THEN %DO;
DROP VIEW &dsname;
%END;
%mend _SASTASK_DROPDS;

%LET _EGCHARTWIDTH=0;
%LET _EGCHARTHEIGHT=0;

PROC SQL;
%_SASTASK_DROPDS(WORK.SORTTempTableSorted);
QUIT;

PROC SQL;
CREATE VIEW WORK.SORTTempTableSorted
AS SELECT Arts, Economics FROM WORK.WESTERNRATES5171;
QUIT;

TITLE1 "Correlation Analysis";

/*Spearman Method*/

PROC CORR DATA=WORK.SORTTempTableSorted
SPEARMAN
VARDEF=DF
NOSIMPLE
NOPROB
;
VAR Arts;
WITH Economics;
RUN;
SAS program
/*Kendall Method*/

PROC CORR DATA=WORK.SORTTempTableSorted
KENDALL
VARDEF=DF
NOSIMPLE
NOPROB
;
VAR Arts;
WITH Economics;
RUN; QUIT;

PROC SQL;
%_SASTASK_DROPDS(WORK.SORTTempTableSorted);
QUIT;
SAS program

Correlation Analysis
The CORR Procedure
1 With Variables: Economics
1 Variables: Arts

Spearman Correlation Coefficients, N = 52
             Arts
Economics    0.27926

Correlation Analysis
The CORR Procedure
1 With Variables: Economics
1 Variables: Arts

Kendall Tau-b Correlation Coefficients, N = 52
             Arts
Economics    0.18854