
INLS 98, Spring 2004

HYPOTHESIS TESTING; ANOVA

The null hypothesis
- States the research question as no difference between the treatment group and the control group
- Examples: state the null hypothesis for each of the analyses we've already done
  - Chi square
  - Correlation
  - t test

The steps in hypothesis testing (a code sketch of these steps follows the list)
1. State the null hypothesis
2. Specify a significance level
   - The criterion/level used to reject the null hypothesis
   - The probability that you're wrong to reject the null hypothesis (i.e., there is not a relationship/difference when you conclude that there is one)
   - Usually .05 or .01 (a 5% chance of being wrong in rejecting the null hypothesis, or a 1% chance)
3. Calculate the statistic(s) being evaluated
   - Means, correlation, etc.
4. Calculate the p value: "the probability of obtaining a statistic as different or more different from the parameter specified in the null hypothesis as the statistic computed from the data" (Lane, http://davidmlane.com/hyperstat/logic_hypothesis.html)
5. Compare the p value with the significance level determined in step 2
   - If the p value < the criterion, then reject the null hypothesis
   - If the p value >= the criterion, then do not reject the null hypothesis
6. If the null hypothesis is rejected, evaluate the strength of the relationship or the size of the effect/difference observed
   - In other words, evaluate the practical significance of the results
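A minimal sketch of these steps in Python, assuming scipy is installed. The group scores are invented for illustration, and the statistic computed here happens to be an independent-samples t test:

    # Step 1 (implicit): null hypothesis = no difference between group means
    from scipy import stats

    alpha = 0.05  # Step 2: significance level

    # Hypothetical scores for a treatment group and a control group
    treatment = [12, 15, 14, 16, 13, 17]
    control = [10, 11, 12, 10, 13, 11]

    # Steps 3-4: calculate the statistic and its p value
    t_stat, p_value = stats.ttest_ind(treatment, control)

    # Step 5: compare the p value with the significance level
    if p_value < alpha:
        print(f"p = {p_value:.4f} < {alpha}: reject the null hypothesis")
    else:
        print(f"p = {p_value:.4f} >= {alpha}: do not reject the null hypothesis")

Note that rejecting the null hypothesis is step 5, not the end of the analysis; step 6 (practical significance) still requires looking at the size of the difference itself.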


Mistakes in hypothesis testing: Type I and Type II errors

- Type I error: the null hypothesis was rejected when it should not have been
- Type II error: the null hypothesis should have been rejected, but it was not
- Most people guard against Type I errors more carefully than Type II errors, because Type I errors lead to wrong conclusions, while Type II errors merely miss possibly correct conclusions

Analysis of variance (ANOVA)
- Used to test differences between two or more means (the t test can only be used with two means)
- Factors: variables
  - Multiple factors might be considered
- Levels: possible values of a variable/factor
  - Each factor/variable will have two or more levels
- Factors and levels are used to set up a table
  - Each factor is a dimension of the table
  - Each level of a factor is a row or column on that factor's dimension
- Logic of the calculation (see the sketch after this list)
  - The variability in the scores is partitioned
    - MSE: mean square error, the variability among the participants within a particular cell of the table
    - MSB: mean square between, the variability between the cells of the table
  - Both are understood as estimates of the population variance
  - If the null hypothesis is true, then MSE and MSB are expected to be about the same
  - If the null hypothesis is false, then MSB will be larger than MSE
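A minimal sketch of this partitioning for a one-way ANOVA (one factor with three levels), computing MSB and MSE directly from the definitions above. The scores are invented for illustration:

    # Three invented groups of scores: one factor, three levels
    groups = [
        [4, 5, 6, 5],    # level 1
        [7, 8, 6, 7],    # level 2
        [10, 9, 11, 10], # level 3
    ]

    k = len(groups)                       # number of levels (cells)
    n_total = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n_total

    # MSB: variability between the cells of the table
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    msb = ss_between / (k - 1)            # df between = k - 1

    # MSE: variability among the participants within each cell
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    mse = ss_within / (n_total - k)       # df within = N - k

    print(f"MSB = {msb:.2f}, MSE = {mse:.2f}, F = {msb / mse:.2f}")

With these invented scores the group means clearly differ, so MSB comes out much larger than MSE, which is exactly the pattern expected when the null hypothesis is false.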


The significance tests associated with analysis of variance are based on the ratio of MSB to MSE
- If the ratio is large enough, then the null hypothesis that the population means are equal can be rejected (Lane, http://davidmlane.com/hyperstat/intro_ANOVA.html)
- F = MSB/MSE
- Two types of degrees of freedom (between groups and within groups)
- p value will be given in results

Post hoc tests (see the sketch after this list)
- If ANOVA finds that the means differ among several means, then which are different from which?
- Fisher's least significant difference (LSD) method
- Tukey's honestly significant difference (HSD) method (less powerful, more conservative)
- Newman-Keuls method
- Duncan's procedure (like Newman-Keuls, but with more liberal critical values)

An example: Is a person's job related to:
- their salary?
- the length of time they've used computers?
- the frequency with which they use computers?
- perceived usefulness of computers?
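A minimal sketch of the overall F test followed by a Tukey HSD post hoc test, applied to a question like the salary one above. The job categories and salary values are invented, and pairwise_tukeyhsd is assumed to be available from the statsmodels package:

    from scipy import stats
    from statsmodels.stats.multicomp import pairwise_tukeyhsd

    # Hypothetical salaries (in thousands) for three job categories
    librarians = [42, 45, 44, 43]
    programmers = [55, 58, 57, 56]
    managers = [60, 63, 61, 62]

    # Overall F test: are the mean salaries equal across jobs?
    f_stat, p_value = stats.f_oneway(librarians, programmers, managers)
    print(f"F = {f_stat:.2f}, p = {p_value:.4f}")

    # If the F test is significant, ask which means differ from which
    salaries = librarians + programmers + managers
    jobs = ["librarian"] * 4 + ["programmer"] * 4 + ["manager"] * 4
    print(pairwise_tukeyhsd(salaries, jobs, alpha=0.05))

The Tukey HSD output lists each pair of groups with the mean difference and whether the null hypothesis of equal means is rejected for that pair, which is what the post hoc question asks.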
