
Basic Statistical Procedures and How to Compute (4)

Date: 13th April 2011

First of all, I would like to apologize for the delayed post about statistical
tests on Facebook. Now we focus on the popular chi-square statistical test.
Types of Chi-Squares
Pearson's chi-square is by far the most common type of chi-square
significance test. If simply "chi-square" is mentioned, it is probably
Pearson's chi-square. This statistic is used to test the hypothesis of no
association between the columns and rows of tabular data. It can be used
even with nominal data. Note that chi-square is more likely to establish
significance to the extent that (1) the relationship is strong, (2) the
sample size is large, and/or (3) the number of values of the two associated
variables is large. A chi-square probability of .05 or less is commonly
interpreted by social scientists as justification for rejecting the null
hypothesis that the row variable is unrelated (that is, only randomly
related) to the column variable. Its calculation is discussed below.
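As a sketch of the computation outside SPSS (using Python's scipy here, which is my own choice — the article itself works with SPSS and WinPEPI), Pearson's chi-square for a contingency table of counts can be obtained with `scipy.stats.chi2_contingency`; the table below is a hypothetical example:

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical 2x2 table of counts (rows = groups, columns = outcomes)
table = np.array([[20, 30],
                  [30, 20]])

# correction=False gives the plain Pearson statistic; the default (True)
# applies Yates' continuity correction for 2x2 tables (discussed below)
chi2, p, dof, expected = chi2_contingency(table, correction=False)

print(chi2)      # 4.0 here (all expected counts are 25)
print(dof)      # (2-1)*(2-1) = 1
print(p < 0.05)  # True: reject the null of no association at the .05 level
```

The function also returns the table of expected counts, which is useful for checking the small-cell caveats mentioned throughout this post.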
(Yates' correction is an arbitrary, conservative adjustment to chi-square
when applied to tables with one or more cells with expected frequencies less
than five. It is only applied to 2 by 2 tables; some authors apply it to
all 2 by 2 tables, since the correction gives a better approximation to the
binomial distribution. Yates' correction is conservative in the sense of
making it more difficult to establish significance. Some computer
packages, including SPSS, label Yates' correction as continuity-corrected
chi-square in their output.)
Chi-square goodness-of-fit test. The goodness-of-fit test is simply a
different use of Pearsonian chi-square. It is used to test if an observed
distribution conforms to any other distribution, such as one based on
theory (ex., if the observed distribution is not significantly different from a
normal distribution) or one based on some other known distribution (ex., if
the observed distribution is not significantly different from a known
national distribution based on Census data). The Kolmogorov-Smirnov
goodness-of-fit test is preferred for interval data, for which it is more
powerful than chi-square goodness-of-fit.
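A minimal sketch of the goodness-of-fit use, again in Python (scipy, my assumption) with made-up data: 100 rolls of a die, tested against the uniform distribution expected if the die is fair.

```python
from scipy.stats import chisquare

# Hypothetical data: 100 rolls of a die; does it deviate from fair?
observed = [16, 18, 16, 14, 12, 24]
expected = [100 / 6] * 6   # uniform counts under the null hypothesis

# chisquare computes sum((O - E)^2 / E) with df = k - 1 categories
stat, p = chisquare(observed, f_exp=expected)
print(stat)      # 5.12
print(p > 0.05)  # True: no evidence here that the die is unfair
```

Note that the observed and expected counts must sum to the same total, since the test compares two distributions over the same number of cases.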
Likelihood ratio chi-square is an alternative procedure to test the
hypothesis of no association of columns and rows in nominal-level tabular
data. It is supported by SPSS output and is based on maximum likelihood
estimation. Though computed differently, likelihood ratio chi-square is
interpreted the same way. For large samples, likelihood ratio chi-square
will be close in results to Pearson chi-square. Even for smaller samples, it
rarely leads to different substantive results.
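In scipy (again, my choice of tool, not the article's), the same `chi2_contingency` call can produce the likelihood ratio statistic via its `lambda_` parameter, which makes the closeness to the Pearson value easy to see on the hypothetical table from earlier:

```python
import numpy as np
from scipy.stats import chi2_contingency

table = np.array([[20, 30],
                  [30, 20]])

# lambda_="log-likelihood" selects the likelihood ratio (G) statistic,
# 2 * sum(O * ln(O / E)), instead of Pearson's sum((O - E)^2 / E)
g, p, dof, expected = chi2_contingency(table, correction=False,
                                       lambda_="log-likelihood")
print(g)  # ~4.03, close to the Pearson value of 4.0 for the same table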
Mantel-Haenszel chi-square, also called the Mantel-Haenszel test for
linear association or linear-by-linear association chi-square, unlike
ordinary and likelihood ratio chi-square, is an ordinal measure of
significance. It is preferred when testing the significance of a linear
relationship between two ordinal variables because it is more powerful
than Pearson chi-square (more likely to establish linear association).
Mantel-Haenszel chi-square is not appropriate for nominal variables. If
found significant, the interpretation is that increases in one variable
are associated with increases (or decreases, for negative relationships)
in the other, beyond what would be expected by chance under random
sampling. Like other chi-square statistics, M-H chi-square should not be
used with tables with small cell counts.
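The linear-by-linear association statistic (as reported by SPSS) can be computed as (n - 1) * r^2, where r is the Pearson correlation between the ordinal scores of the two variables; it always has 1 degree of freedom. A sketch in Python with a hypothetical 2x3 ordinal table (dose vs. severity; the scores are my assumption):

```python
import numpy as np
from scipy.stats import chi2

# Hypothetical 2x3 ordinal table: dose (scores 0/1) vs. severity (1/2/3)
table = np.array([[10, 8, 4],
                  [4, 8, 10]])
row_scores = np.array([0, 1])
col_scores = np.array([1, 2, 3])

# Expand the table into one (x, y) score pair per subject
x, y = [], []
for i, r in enumerate(row_scores):
    for j, c in enumerate(col_scores):
        x += [r] * table[i, j]
        y += [c] * table[i, j]

n = len(x)
r_xy = np.corrcoef(x, y)[0, 1]   # Pearson correlation of the scores
m2 = (n - 1) * r_xy ** 2         # linear-by-linear association statistic
p = chi2.sf(m2, df=1)            # always 1 degree of freedom

print(m2)       # ~5.03 for this table
print(p < 0.05)  # True: higher dose is associated with higher severity
```

Because the statistic is built from a correlation, it only makes sense when both variables have a meaningful order, which is why it is inappropriate for nominal data.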
SPSS Output. To obtain chi-square in SPSS: Select Analyze, Descriptive Statistics,
Crosstabs; select row and column variables; click Statistics; select Chi-square.
Chi-square calculation in SPSS data format is very simple, as mentioned
above, but it has a limitation: you must have the whole set of raw data
for your study subjects. Sometimes we have only the number of subjects in
each category, or a data file constructed as tables instead of raw data.
For that reason we can use other software packages such as WinPEPI or
OpenEpi (both are free and very useful).
http://www.brixtonhealth.com/pepi4windows.html
http://www.openepi.com/
Predictor Variable ---------------> Outcome Variable
Categorical ---------------> Categorical
Sex (M/F) ---------------> Disease (+/-)
Sex (M/F) ---------------> Disease Status (DS A, DS B and DS C)

We want to know whether the distribution (number or counts) of disease
(+/-) differs between males and females.
         Diabetes Mellitus (+)   Diabetes Mellitus (-)   Total
Male              12                      67               79
Female            12                      19               31
Total             24                      86              110
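Before turning to WinPEPI, this table can also be cross-checked with a short Python sketch (scipy is my assumption here, not part of the original workflow); it shows the expected counts (row total * column total / n) and the Pearson chi-square:

```python
import numpy as np
from scipy.stats import chi2_contingency

# The 2x2 table above: rows = Male/Female, columns = DM (+)/DM (-)
table = np.array([[12, 67],
                  [12, 19]])

# correction=False matches the uncorrected Pearson statistic
chi2, p, dof, expected = chi2_contingency(table, correction=False)

print(expected)   # expected counts under no association
print(chi2)       # ~7.22 with 1 degree of freedom
print(p < 0.01)   # True: sex and DM status are associated in these data
```

Note that one expected cell (Female, DM+) is below 10, which is the kind of situation where WinPEPI's Fisher exact result, shown below, is a useful companion to the chi-square.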

First, open the WinPEPI program and click on COMPARE 2.

Since our data is in 2x2 format, choose A.

Set Male as A, Female as B, DM (+) as Yes and DM (-) as No. Then click
Run.

The first results shown are the Fisher exact results and other alternative
methods; choose the two-tailed result from Fisher's p. If you want the
Pearson chi-square results, scroll down the result page.

If you want the odds ratio for a 2x2 table, scroll down to the bottom of
this result page.
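The same Fisher exact p and odds ratio can be reproduced with scipy (again an assumption of mine, as a cross-check on the WinPEPI output):

```python
from scipy.stats import fisher_exact

# Same 2x2 table: rows = Male/Female, columns = DM (+)/DM (-)
table = [[12, 67],
         [12, 19]]

oddsratio, p = fisher_exact(table, alternative="two-sided")
print(oddsratio)  # sample odds ratio = (12*19)/(67*12) ~ 0.284
print(p < 0.05)   # True: the two-tailed Fisher exact p is significant
```

An odds ratio below 1 here means the odds of DM (+) are lower for males than for females in this table.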

Sometimes our table is not a 2x2 table but, say, 2x3 or 2x4 or 2xk (2
columns and multiple rows). Go back to the COMPARE 2 page and choose F
(2xk tables). Enter the data and run.

For computation of tables larger than 2x2 or 2xk (say, a 3x4 table),
select ETCETERA on the first page of the WinPEPI program and select (G)
analysis of a large contingency table.

Fill in the required number of columns and press Enter.

Fill in the data from your table and click Run.
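For such larger r x c tables, the same `chi2_contingency` call sketched earlier handles any table shape; a hypothetical 3x4 example (my own numbers, just to show the degrees of freedom):

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical 3x4 contingency table of counts
table = np.array([[10, 15,  5, 10],
                  [20, 10, 15,  5],
                  [ 5, 10, 10, 15]])

# No continuity correction is applied to tables larger than 2x2
chi2, p, dof, expected = chi2_contingency(table)
print(dof)  # (3-1)*(4-1) = 6 degrees of freedom
```

As with the 2x2 case, it is worth inspecting `expected` for small cells before trusting the chi-square p-value on a large sparse table.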
