
CHI-SQUARE TEST

SHARVIL NAIK 11BPE037 SAGAR HASSANI 11BPE071 DIVYAM SINGH 11BPE063 ROHIT SHARMA 10BPE054 SARIN NAMALA 10BPE031

INDEX
INTRODUCTION
FORMULA
CHI-SQUARE DISTRIBUTION
APPROACH

EXAMPLES
REFERENCES

Chi-square test
Karl Pearson introduced a test to determine whether an observed set of frequencies differs from a specified frequency distribution. The chi-square test uses frequency data to generate a statistic.
Karl Pearson

INTRODUCTION
A chi-squared test, also referred to as a chi-square test, is any statistical hypothesis test in which the sampling distribution of the test statistic is a chi-squared distribution when the null hypothesis is true. It helps us determine whether experimentally obtained data fit the results expected from theory. The chi-square test is a goodness of fit test: it answers the question of how well experimental data fit expectations.

The chi-square test uses frequency data to generate a statistic.

CONTINUED
Some examples of chi-squared tests where the chi-squared distribution is approximately valid are as follows:
1) Pearson's chi-squared test.
2) Yates' chi-squared test.
Applications of this test vary widely. Some of them are as follows:
1) Goodness of fit of distributions (Pearson's chi-squared test).
2) Test of independence of attributes.
3) Test of homogeneity.

FORMULA
First we determine the number of individuals of each type (the counts) that have been observed and how many would be expected given the basic theory.

Then we calculate the chi-square statistic using the formula

χ² = Σ (o - e)² / e


The χ is the Greek letter chi; the Σ is a sigma; it means to sum the following terms over all classes of individuals/counts. o is the number of individuals of the given type observed; e is the number of individuals of that type expected from the null hypothesis. Note that you must use the number of individuals (the counts), and NOT the proportions, ratios, or frequencies.
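For readers who prefer to see the formula in code, here is a minimal Python sketch (an addition, not part of the original slides); the function name and example counts are purely illustrative.

```python
# A minimal sketch of the chi-square formula: sum of (o - e)^2 / e over all classes.
# The inputs must be raw counts of individuals, never proportions or ratios.
def chi_square_statistic(observed, expected):
    """observed and expected are lists of counts, one entry per class."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Hypothetical counts for two classes (observed vs. expected under the null hypothesis):
print(chi_square_statistic([46, 14], [45, 15]))  # about 0.089
```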

Chi-Square Distribution
Although the chi-square distribution can be derived through mathematical theory, we can also obtain it experimentally. Let's say we do the same experiment 1000 times: the same self-pollination of a Pp heterozygote, which should give the 3/4 : 1/4 ratio. For each experiment we calculate the chi-square value and then plot them all on a graph. The x-axis is the chi-square value calculated from the formula. The y-axis is the number of individual experiments that got that chi-square value.
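The thought experiment above can also be sketched in a few lines of Python (an illustration added here, not from the original slides); the 400 offspring per experiment is an assumed sample size.

```python
# Simulate the 3/4 : 1/4 self-pollination many times and tabulate the chi-square values.
import random
from collections import Counter

def simulated_chi_square(n_offspring=400, p_dominant=0.75):
    # One experiment: each offspring shows the dominant phenotype with probability 3/4.
    dominant = sum(random.random() < p_dominant for _ in range(n_offspring))
    observed = [dominant, n_offspring - dominant]
    expected = [n_offspring * p_dominant, n_offspring * (1 - p_dominant)]
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

values = [simulated_chi_square() for _ in range(1000)]
histogram = Counter(round(v) for v in values)   # bucket values by nearest integer
for bucket in sorted(histogram):
    print(bucket, "#" * histogram[bucket])      # hump near 0, long tail to the right
```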

Chi-Square Distribution (continued)
You see that there is a range here: if the results were perfect you would get a chi-square value of 0 (because obs = exp). This rarely happens: most experiments give a small chi-square value (the hump in the graph). Note that all the values are greater than or equal to 0: that's because we squared the (obs - exp) term, and squaring always gives a non-negative number. Sometimes you get really wild results, with obs very different from exp: the long tail on the graph. Really odd things occasionally do happen by chance alone (for instance, you might win the lottery).

When to Use the Chi-Square Goodness of Fit Test


The chi-square goodness of fit test is appropriate when the following conditions are met:
1) The sampling method is simple random sampling.
2) The population is at least 10 times as large as the sample.
3) The variable under study is categorical.
4) The expected value of the number of sample observations in each level of the variable is at least 5.
This approach consists of four steps: (1) state the hypotheses, (2) formulate an analysis plan, (3) analyze sample data, and (4) interpret results.

State the Hypotheses


Every hypothesis test requires the analyst to state a null hypothesis and an alternative hypothesis. The hypotheses are stated in such a way that they are mutually exclusive: if one is true, the other must be false, and vice versa. For a chi-square goodness of fit test, the hypotheses take the following form:
H0: The data are consistent with a specified distribution.
Ha: The data are not consistent with a specified distribution.
Typically, the null hypothesis specifies the proportion of observations at each level of the categorical variable. The alternative hypothesis is that at least one of the specified proportions is not true.

Formulate an Analysis Plan


The analysis plan describes how to use sample data to accept or reject the null hypothesis. The plan should specify the following elements:
Significance level. Often, researchers choose significance levels equal to 0.01, 0.05, or 0.10; but any value between 0 and 1 can be used.
Test method. Use the chi-square goodness of fit test to determine whether observed sample frequencies differ significantly from the expected frequencies specified in the null hypothesis. The chi-square goodness of fit test is described in the next section and demonstrated in the sample problem at the end of this lesson.

Analyze Sample Data


Using sample data, find the degrees of freedom, expected frequency counts, test statistic, and the P-value associated with the test statistic.
Degrees of freedom. The degrees of freedom (DF) is equal to the number of levels (k) of the categorical variable minus 1: DF = k - 1.
Expected frequency counts. The expected frequency counts at each level of the categorical variable are equal to the sample size times the hypothesized proportion from the null hypothesis: Ei = n * pi

where Ei is the expected frequency count for the ith level of the categorical variable, n is the total sample size, and pi is the hypothesized proportion of observations in level i.

Test statistic. The test statistic is a chi-square random variable (χ²) defined by the following equation:
χ² = Σ [ (Oi - Ei)² / Ei ]
where Oi is the observed frequency count for the ith level of the categorical variable, and Ei is the expected frequency count for the ith level of the categorical variable.
P-value. The P-value is the probability of observing a sample statistic as extreme as the test statistic. Since the test statistic is a chi-square, use the Chi-Square Distribution Calculator to assess the probability associated with the test statistic. Use the degrees of freedom computed above.
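As a supplementary sketch (not part of the original lesson), the four quantities above can be computed directly in Python with SciPy; the observed counts and hypothesized proportions below are made-up placeholders.

```python
# Degrees of freedom, expected counts, chi-square statistic, and P-value in one pass.
from scipy.stats import chi2

observed = [18, 55, 27]             # hypothetical counts for a 3-level categorical variable
proportions = [0.25, 0.50, 0.25]    # hypothesized proportions under the null hypothesis
n = sum(observed)

df = len(observed) - 1                                   # DF = k - 1
expected = [n * p for p in proportions]                  # Ei = n * pi
statistic = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
p_value = chi2.sf(statistic, df)                         # P(chi-square > statistic)
print(df, expected, round(statistic, 2), round(p_value, 3))
```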

Interpret Results
If the sample findings are unlikely, given the null hypothesis, the researcher rejects the null hypothesis. Typically, this involves comparing the P-value to the significance level, and rejecting the null hypothesis when the P-value is less than the significance level.

EXAMPLE
Let's start with a theory for how the offspring will be distributed: the null hypothesis. We will discuss the offspring of a self-pollination of a heterozygote. The null hypothesis is that the offspring will appear in a ratio of 3/4 dominant to 1/4 recessive. As an example, we count offspring and get 290 purple and 110 white flowers. This is a total of 400 (i.e. 290 + 110) offspring. We expect a 3/4 : 1/4 ratio. We need to calculate the expected numbers (we MUST use the numbers of offspring, NOT the proportions!); this is done by multiplying the total offspring by the expected proportions. Thus we expect 400 * 3/4 = 300 purple, and 400 * 1/4 = 100 white.

CONTINUED
Thus, for purple, o = 290 and e = 300. For white, o = 110 and e = 100. Now it's just a matter of plugging into the formula:
χ² = (290 - 300)² / 300 + (110 - 100)² / 100 = (-10)² / 300 + (10)² / 100 = 100 / 300 + 100 / 100 = 0.333 + 1.000 = 1.333
This is our chi-square value. Now we need to see what it means and how to use it.
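As a quick cross-check (an addition, not from the original slides), the same numbers can be run through scipy.stats.chisquare, which returns both the statistic and a P-value.

```python
# Verify the worked example: 290 purple and 110 white against expected 300 and 100.
from scipy.stats import chisquare

result = chisquare(f_obs=[290, 110], f_exp=[300, 100])
print(round(result.statistic, 3))   # 1.333, matching the hand calculation
print(round(result.pvalue, 3))      # about 0.248, comfortably above 0.05
```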

CONTINUED
Let's say we do the same experiment 1000 times: the same self-pollination of the same heterozygote, which should give the 3/4 : 1/4 ratio. For each experiment we calculate the chi-square value and then we plot them all on a graph. The x-axis is the chi-square value calculated from the formula. The y-axis is the number of individual experiments that got that chi-square value.

CONTINUED
As we can see, there is a range here. If the results had been perfect, we would have obtained a chi-square value of 0 (because o = e). This rarely happens, as most experiments give a small chi-square value. Note that all the values are greater than or equal to 0: that's because we squared the (o - e) term, and squaring always gives a non-negative number.


CONTINUED
For most work (and for the purposes of this class), a result is said to not differ significantly from expectations if it could happen at least 1 time in 20. That is, if the difference between the observed results and the expected results is small enough that it would be seen at least 1 time in 20 over thousands of experiments, we fail to reject the null hypothesis. For technical reasons, we use "fail to reject" instead of "accept". 1 time in 20 can be written as a probability value p = 0.05, because 1/20 = 0.05.

CONTINUED
A critical factor in using the chi-square test is the degrees of freedom, which is essentially the number of independent random variables involved. The degrees of freedom is simply the number of classes of offspring minus 1. For our example, there are 2 classes of offspring: purple and white. Thus, degrees of freedom (d.f.) = 2 - 1 = 1.

CONTINUED
Critical values for chi-square are found in tables, sorted by degrees of freedom and probability levels. Be sure to use p = 0.05. If our calculated chi-square value is greater than the critical value from the table, we reject the null hypothesis.

If our chi-square value is less than the critical value, we fail to reject the null hypothesis (that is, we accept that our genetic theory about the expected ratio is correct).

CONTINUED
In our example of 290 purple to 110 white, we calculated a chi-square value of 1.333, with 1 degree of freedom. Looking at the table, 1 d.f. is the first row, and p = 0.05 is the sixth column. Here we find the critical chi-square value of 3.841. Since our calculated chi-square, 1.333, is less than the critical value of 3.841, we fail to reject the null hypothesis. Thus, an observed ratio of 290 purple to 110 white is a good fit to a 3/4 : 1/4 ratio.
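If a printed table is not at hand, the same critical value can be computed with SciPy (this snippet is an addition, not part of the original slides).

```python
# Critical value for p = 0.05 with 1 degree of freedom, and the comparison made above.
from scipy.stats import chi2

critical_value = chi2.ppf(1 - 0.05, df=1)
print(round(critical_value, 3))     # 3.841
print(1.333 < critical_value)       # True -> fail to reject the null hypothesis
```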

Problem
Acme Toy Company prints baseball cards. The company claims that 30% of the cards are rookies, 60% are veterans, and 10% are All-Stars. The cards are sold in packages of 100. Suppose a randomly selected package of cards has 50 rookies, 45 veterans, and 5 All-Stars. Is this consistent with Acme's claim? Use a 0.05 level of significance.

Solution
The solution to this problem takes four steps: (1) state the hypotheses, (2) formulate an analysis plan, (3) analyze sample data, and (4) interpret results. We work through those steps below.
State the hypotheses. The first step is to state the null hypothesis and an alternative hypothesis.
1. Null hypothesis: The proportion of rookies, veterans, and All-Stars is 30%, 60%, and 10%, respectively.
2. Alternative hypothesis: At least one of the proportions in the null hypothesis is false.
Formulate an analysis plan. For this analysis, the significance level is 0.05. Using sample data, we will conduct a chi-square goodness of fit test of the null hypothesis.

Analyze sample data. Applying the chi-square goodness of fit test to sample data, we compute the degrees of freedom, the expected frequency counts, and the chi-square test statistic. Based on the chi-square statistic and the degrees of freedom, we determine the P-value.
DF = k - 1 = 3 - 1 = 2
Ei = n * pi
E1 = 100 * 0.30 = 30
E2 = 100 * 0.60 = 60
E3 = 100 * 0.10 = 10
χ² = Σ [ (Oi - Ei)² / Ei ]
χ² = [ (50 - 30)² / 30 ] + [ (45 - 60)² / 60 ] + [ (5 - 10)² / 10 ]
χ² = (400 / 30) + (225 / 60) + (25 / 10) = 13.33 + 3.75 + 2.50 = 19.58

where DF is the degrees of freedom, k is the number of levels of the categorical variable, n is the number of observations in the sample, Ei is the expected frequency count for level i, Oi is the observed frequency count for level i, and χ² is the chi-square test statistic.

The P-value is the probability that a chi-square statistic having 2 degrees of freedom is more extreme than 19.58. We use the Chi-Square Distribution Calculator to find P(χ² > 19.58) = 0.0001.
Interpret results. Since the P-value (0.0001) is less than the significance level (0.05), we reject the null hypothesis: the observed counts are not consistent with Acme's claim.
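As a supplementary check (not part of the original solution), the entire computation can be reproduced with a single call to scipy.stats.chisquare.

```python
# Chi-square goodness of fit for the baseball-card problem: observed counts vs. Acme's claim.
from scipy.stats import chisquare

observed = [50, 45, 5]                              # rookies, veterans, All-Stars
expected = [100 * p for p in (0.30, 0.60, 0.10)]    # [30, 60, 10] under the null hypothesis

result = chisquare(f_obs=observed, f_exp=expected)  # uses df = k - 1 = 2
print(round(result.statistic, 2))   # 19.58
print(round(result.pvalue, 4))      # 0.0001 -> reject H0 at the 0.05 level
```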

REFERENCES
Chi-Square Statistics. math.hws.edu
Pearson's chi-squared test. Wikipedia, the free encyclopedia. en.wikipedia.org

THANK YOU.
