chi-squared statistic
Let $X$ be a discrete random variable with $m$ possible outcomes $x_1,\ldots,x_m$, with probability $p_i$ of each outcome $x_i$.
Suppose $n$ independent observations are obtained, where each observation has the same distribution as $X$. Bin the observations into $m$ groups, so that each group contains all observations having the same outcome $x_i$. Next, count the number of observations in each group to get $n_1,\ldots,n_m$ corresponding to the outcomes $x_1,\ldots,x_m$, so that $n=n_1+\cdots+n_m$. It is desired to find out how close the actual numbers of outcomes $n_i$ are to their expected values $np_i$.
Intuitively, this "closeness" depends on how big the sample is, and how large the deviations are between the observed and the expected, for all categories. The value

$$\chi^2=\sum_{i=1}^{m}\frac{(n_i-np_i)^2}{np_i} \qquad (1)$$

called the $\chi^2$ statistic, or the chi-squared statistic, is such a measure of "closeness". It is also known as the Pearson chi-squared statistic, in honor of the English statistician Karl Pearson, who showed that (1) has approximately a chi-squared distribution (http://planetmath.org/ChiSquaredRandomVariable) with $m-1$ degrees of freedom. The number of degrees of freedom depends on the number of free variables in $\chi^2$, and is not always $m-1$, as we will see in Example 3.
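As a quick illustration of (1), the statistic is simply a sum of squared deviations, each scaled by its expected count. The following is a minimal Python sketch (the function name and the sample counts are ours, purely for illustration):

```python
def chi_squared(observed, expected):
    """Pearson chi-squared statistic: sum over i of (n_i - n*p_i)^2 / (n*p_i)."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Illustrative counts: 60 rolls of a die assumed fair, so each face has
# expected count 60 * (1/6) = 10.
print(chi_squared([8, 12, 9, 11, 10, 10], [10] * 6))  # → 1.0
```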
Usually, the $\chi^2$ statistic is utilized in hypothesis testing, where the null hypothesis specifies that the actual counts equal the expected counts. A large value of $\chi^2$ means either that the deviations from the expectations are large or that the sample is small; therefore, either the null hypothesis should be rejected or there is not enough information to give a meaningful interpretation. How large a deviation, compared to the sample size, is enough to reject the null hypothesis depends on the degrees of freedom of the chi-squared distribution of $\chi^2$ and the specified critical values.
Examples.
- 1.
Suppose a coin is tossed 10 times and 7 heads are observed. We would like to know if the coin is fair based on the observations. We have the following hypotheses:

$$H_0\colon p=\tfrac{1}{2} \qquad \text{versus} \qquad H_1\colon p\neq\tfrac{1}{2},$$

where $p$ is the probability of getting a head in a single toss. Break up the observations into two groups: heads and tails. Then, according to (1),

$$\chi^2=\frac{(7-5)^2}{5}+\frac{(3-5)^2}{5}=1.6.$$

Checking the table of critical values of chi-squared distributions, we see that at degree of freedom $=1$, there is a 0.100 chance that the $\chi^2$ value is higher than 2.706. Since $1.6<2.706$, we may not want to reject the null hypothesis. However, we may not want to outrightly accept it either, simply because the sample size is not very large.
- 2.
Now, a coin is tossed 100 times and 70 heads are observed. Using the same null hypothesis as above,

$$\chi^2=\frac{(70-50)^2}{50}+\frac{(30-50)^2}{50}=16.$$

Even at p-value $=0.005$, the corresponding critical value of 7.879 is quite a bit smaller than 16. So we will reject the null hypothesis even at confidence level 99.5% (p-value $=0.005$).
- 3.
The $\chi^2$ statistic can be used in non-parametric situations as well, particularly in contingency tables. Three dice of varying sizes are each tossed 100 times and the top faces are recorded. The results of the count of each possible value of the top face, for each die, are summarized in the following table:

top face     1    2    3    4    5    6   all
Die 1       16   19   17   15   19   14   100
Die 2       17   18   14   13   22   16   100
Die 3       12   20   19   18   20   11   100
All dice    45   57   50   46   61   41   300

Let $X=$ count of top face, and $Y=$ Die $i$. Next, we want to test the following hypotheses:

$$H_0\colon X \text{ is independent of } Y \qquad \text{versus} \qquad H_1\colon \text{otherwise}.$$

Since we do not know the exact distribution of the top faces, we approximate the distribution by using the last row. For example, the (marginal) probability that top face $=1$ is $45/300=0.15$. This says that the expected count of top face $=1$ in Die $i$ is $100\times 0.15=15.0$. Then, based on the null hypothesis, we have the following table of "expected count":
top face     1      2      3      4      5      6
Die 1       15.0   19.0   16.7   15.3   20.3   13.7
Die 2       15.0   19.0   16.7   15.3   20.3   13.7
Die 3       15.0   19.0   16.7   15.3   20.3   13.7

For each die, we can compute the $\chi^2$ statistic. For instance, for the first die,
$$\chi^2=\frac{(16-15.0)^2}{15.0}+\frac{(19-19.0)^2}{19.0}+\frac{(17-16.7)^2}{16.7}+\frac{(15-15.3)^2}{15.3}+\frac{(19-20.3)^2}{20.3}+\frac{(14-13.7)^2}{13.7}\approx 0.176.$$

The results are summarized in the following table:
            $\chi^2$   degrees of freedom
Die 1        0.176             5
Die 2        1.636             5
Die 3        1.969             0
All dice     3.781            10

Note that the number of degrees of freedom for the last die is 0, because the expected counts in its row are completely determined by those in the first two rows (and the totals). Looking up the table of critical values, we see that there is a 95% chance that the value of a $\chi^2(10)$ random variable will be greater than 3.940. Since $3.781<3.940$, we accept the null hypothesis: the outcomes of the tosses have no bearing on which die is tossed.
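All three examples can be verified numerically. The following Python sketch (the variable names are ours, not from the text) recomputes each statistic directly from the definitions above:

```python
# Example 1: 10 tosses, 7 heads; a fair coin expects 5 heads and 5 tails.
chi2_small = (7 - 5) ** 2 / 5 + (3 - 5) ** 2 / 5
print(chi2_small)  # → 1.6 (below the critical value 2.706 at 1 degree of freedom)

# Example 2: 100 tosses, 70 heads; a fair coin expects 50 of each.
chi2_large = (70 - 50) ** 2 / 50 + (30 - 50) ** 2 / 50
print(chi2_large)  # → 16.0 (well above the critical value 7.879)

# Example 3: the three-dice contingency table.
observed = [
    [16, 19, 17, 15, 19, 14],  # Die 1
    [17, 18, 14, 13, 22, 16],  # Die 2
    [12, 20, 19, 18, 20, 11],  # Die 3
]
row_totals = [sum(row) for row in observed]        # 100, 100, 100
col_totals = [sum(col) for col in zip(*observed)]  # 45, 57, 50, 46, 61, 41
n = sum(row_totals)                                # 300

# Expected count in cell (i, j) under independence: row_i * col_j / n.
expected = [[r * c / n for c in col_totals] for r in row_totals]

per_die = [
    sum((o - e) ** 2 / e for o, e in zip(o_row, e_row))
    for o_row, e_row in zip(observed, expected)
]
print([round(x, 3) for x in per_die])  # → [0.176, 1.636, 1.969]
print(round(sum(per_die), 3))          # → 3.781
```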
Remark. In general, for an $r\times c$ 2-way contingency table, the $\chi^2$ statistic is given by

$$\chi^2=\sum_{i=1}^{r}\sum_{j=1}^{c}\frac{(n_{ij}-e_{ij})^2}{e_{ij}}, \qquad (2)$$

where $n_{ij}$ and $e_{ij}$ are the actual and expected counts in cell $(i,j)$. When the sample is large, $\chi^2$ has a chi-squared distribution with $(r-1)(c-1)$ degrees of freedom. In particular, when testing for the independence between two categorical variables, the expected count is

$$e_{ij}=\frac{n_{i\cdot}\,n_{\cdot j}}{n},$$

where $n_{i\cdot}$ is the total count of row $i$, $n_{\cdot j}$ is the total count of column $j$, and $n$ is the overall sample size.
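This expected-count computation (row total times column total, divided by the grand total) is a one-liner in Python; the sketch below (the function name is ours) checks it against the dice table above, where the cell for Die 1 and top face 1 has expected count $100\times 45/300=15.0$:

```python
def expected_count(row_total, col_total, grand_total):
    """Expected cell count under independence: e_ij = (row total * column total) / n."""
    return row_total * col_total / grand_total

# Die 1 row total is 100, the "top face = 1" column total is 45, n = 300.
print(expected_count(100, 45, 300))  # → 15.0
```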