
IQRA UNIVERSITY (IU)

Quantitative Techniques in Analysis
Report on Tests of Regression Analysis and Differences Between Groups

Submitted To: Syed Hammad Ali
Submitted By: Syed Asfar Ali Kazmi (14807)

Date of Submission: 13-05-2012

CONTENTS:

Other Tests
1. Test of Normality
2. Test of Linearity

Tests of Regression Analysis
3. Linear Regression
4. Multiple Regression

Differences Between Groups
5. One Sample T Test
6. Independent Sample T Test
7. Paired Sample T Test
8. One-Way ANOVA
9. Two-Way ANOVA
10. SPANOVA
11. MANOVA

OTHER TESTS

Test of Normality:

Normality tests are used to determine whether a data set is well modeled by a normal distribution, or to compute how likely it is that an underlying random variable is normally distributed.
Question:
Is the variable AGE OF RESPONDENT in GSS2000R.sav normally distributed or not?

Hypothesis:
H0: The null hypothesis states that the data are normally distributed.
HA: The alternative hypothesis states that the data are not normally distributed.

Interpretation:
Case Processing Summary

                      Valid           Missing         Total
                      N     Percent   N     Percent   N     Percent
AGE OF RESPONDENT     270   100.0%    0     .0%       270   100.0%

This table shows that the sample size is N = 270; there are no missing cases, and the data are 100% valid.

Tests of Normality

                      Kolmogorov-Smirnov(a)            Shapiro-Wilk
                      Statistic   df    Sig.           Statistic   df    Sig.
AGE OF RESPONDENT     .083        270   .000           .957        270   .000

a. Lilliefors Significance Correction

Decision: The test statistics are shown in the table above, where two tests for normality are run. Since the sample size for AGE OF RESPONDENT is N = 270, which is greater than 50, we use the Kolmogorov-Smirnov test (with Lilliefors correction). Its p-value of 0.000 is less than 0.05, so we reject the null hypothesis and conclude that the sample is not normally distributed.

The histogram indicates that the shape of the distribution departs from a clean bell shape; these data are clearly not normally distributed.

From the normal Q-Q plot we can conclude that the data do not appear to be normally distributed, as the points do not follow the diagonal line closely and show a non-linear pattern.
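For readers who want to reproduce this kind of check outside SPSS, the sketch below runs the same two normality tests in Python with SciPy. It is only an illustration: the `age` array is synthetic stand-in data, not the GSS2000R.sav variable, and SciPy's plain Kolmogorov-Smirnov test does not apply the Lilliefors correction that SPSS uses.

```python
# Minimal sketch of the two normality tests, assuming SciPy is available.
# `age` is synthetic stand-in data; replace it with the AGE OF RESPONDENT column.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
age = rng.normal(loc=45, scale=17, size=270)

# Shapiro-Wilk test (often preferred for smaller samples)
w_stat, w_p = stats.shapiro(age)

# Kolmogorov-Smirnov test against a normal with the sample's own mean and SD.
# Estimating the parameters from the data makes this p-value approximate;
# SPSS's Lilliefors correction handles that, plain scipy.stats.kstest does not.
ks_stat, ks_p = stats.kstest(age, "norm", args=(age.mean(), age.std(ddof=1)))

for name, p in [("Shapiro-Wilk", w_p), ("Kolmogorov-Smirnov", ks_p)]:
    verdict = "reject H0: not normal" if p < 0.05 else "fail to reject H0"
    print(f"{name}: p = {p:.4f} -> {verdict}")
```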

Test of Linearity:
Linearity means that the amount of change, or rate of change, between scores on two variables is constant over the entire range of scores: as one variable (X) increases, the other variable (Y) increases in the same way. A linearity test checks whether the relationship between the variables is linear or not.
Question:
Based on a hypothesis test of the correlation coefficient, the relationship between "total hours spent on chat" and "total hours spent on the Internet" is not linear. However, the square transformation of the independent variable "total hours spent on the Internet" does result in a relationship that is linear.

Interpretation:

The value of R² Linear (0.656) suggests that the relationship between total time spent on the Internet and total hours spent on chat is strong (R = 0.810).
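As a rough illustration of the linearity check described above, the following sketch compares the R² of a straight-line fit with the R² obtained after squaring the predictor. The `internet` and `chat` arrays are synthetic placeholders, not the report's actual survey columns.

```python
# Sketch: compare R^2 for the raw predictor vs. its square transformation.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
internet = rng.uniform(1, 10, size=100)                  # hours on the Internet (placeholder)
chat = 0.4 * internet**2 + rng.normal(0, 2, size=100)    # hours on chat, deliberately curved

r_raw, _ = stats.pearsonr(internet, chat)
r_sq, _ = stats.pearsonr(internet**2, chat)

print(f"R^2 with raw predictor:     {r_raw**2:.3f}")
print(f"R^2 with squared predictor: {r_sq**2:.3f}")   # higher value -> transformation linearizes the relationship
```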

Tests for Regression Analysis

Regression analysis is used to measure the relationship between two or more variables. One variable is called the dependent (response, or outcome) variable and the others are called independent (explanatory or predictor) variables. It is used to check how much the dependent variable changes for a one-unit change in the independent variable(s). Put differently, the use of regression to make quantitative predictions of one variable from the values of another is called regression analysis. A researcher may use any of the following types of regression.

Linear regression
Multiple linear regression
Quadratic / curvilinear regression
Logistic / binary logistic regression
Multivariate logistic regression

Linear Regression:
When one dependent variable depends on a single independent variable, their dependency is called linear regression. It is a measure of how strongly the independent variable predicts the dependent variable, and its model is given by y = a + bx.
Assumptions:

Variables are measured at the interval or ratio level (continuous).
Variables are approximately normally distributed.
There is a linear relationship between the two variables.

Question:
Can we predict math achievement from grades in high school?
Variables:
D.V. = Math achievement
I.V. = Grades in high school
Hypothesis:

H0: There is no relationship between V1 and V2.
HA: There is a relationship between V1 and V2.

Interpretation:
Variables Entered/Removed(b)

Model   Variables Entered    Variables Removed   Method
1       grades in h.s.(a)    .                   Enter

a. All requested variables entered.
b. Dependent Variable: math achievement test

The above table tells us about the independent variable and the regression method used. Here we see that the independent variable, grades in high school, is entered for the analysis, as we selected the Enter method.

Model Summary

Model   R         R Square   Adjusted R Square   Std. Error of the Estimate
1       .504(a)   .254       .244                5.80018

a. Predictors: (Constant), grades in h.s.

This table gives us the R-value, which represents the correlation between the observed and predicted values of the dependent variable. R-Square is called the coefficient of determination and indicates the adequacy of the model. Here R = 0.504 and R-Square = 0.254, which means the independent variable in the model predicts about 25% of the variance in the dependent variable. Adjusted R-Square gives more accurate information about model fit, since it adjusts for the number of predictors in the model.

ANOVA(b)

Model        Sum of Squares   df   Mean Square   F        Sig.
Regression   836.606          1    836.606       24.868   .000(a)
Residual     2455.875         73   33.642
Total        3292.481         74

a. Predictors: (Constant), grades in h.s.
b. Dependent Variable: math achievement test

The above table gives the ANOVA results for the regression. The results are given in three rows. The first row, labeled Regression, gives the variability in the model due to known reasons. The second row, labeled Residual, gives the variability due to random error or unknown reasons. The F-value in this case is 24.868 and the p-value is 0.000, which is less than 0.05, so we reject the null hypothesis and conclude that there is a relationship between math achievement and grades in high school.

Coefficients(a)

                       Unstandardized Coefficients   Standardized Coefficients
Model                  B        Std. Error           Beta                        t       Sig.
1   (Constant)         .397     2.530                                            .157    .876
    grades in h.s.     2.142    .430                 .504                        4.987   .000

a. Dependent Variable: math achievement test

The above table gives the regression constant and coefficient and their significance. These can be used to construct an ordinary least squares (OLS) equation and also to test the hypothesis about the independent variable. Using the regression coefficient and the constant term given under the column labeled B, one can construct the OLS equation for predicting math achievement, i.e.

Math achievement = .397 + (2.142)(grades in h.s.)
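A comparable simple regression can be run in Python with statsmodels. The sketch below is only illustrative: the data are synthetic and the column names (`grades`, `mathach`) are assumptions, not the labels in the report's data file.

```python
# Sketch: simple linear regression (one predictor) with statsmodels OLS.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(2)
df = pd.DataFrame({"grades": rng.integers(1, 9, size=75).astype(float)})
df["mathach"] = 0.4 + 2.14 * df["grades"] + rng.normal(0, 5.8, size=75)  # synthetic outcome

X = sm.add_constant(df[["grades"]])          # adds the intercept column
model = sm.OLS(df["mathach"], X).fit()

print(model.summary())                       # R-squared, ANOVA F, coefficients table
a, b = model.params["const"], model.params["grades"]
print(f"Prediction equation: mathach = {a:.3f} + {b:.3f} * grades")
```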

Multiple Regression


Multiple regression is the most commonly used technique to assess the relationship between one dependent variable and several independent variables. There are three major types of multiple regression i.e.

1. Simultaneous regression
2. Hierarchical or sequential regression
3. Stepwise or statistical regression

Assumptions:
1) The dependent variable should be scale (continuous).
2) The relationship between the predictor variables and the dependent variable is linear.
3) The errors/residuals are approximately normally distributed.
4) Multicollinearity should not exist among the predictors.
5) Homogeneity of variance should exist.

Question:

How well can we predict current salary from a combination of three variables: beginning salary, educational level, and months since hire?

Variables:
Dependent variable = Current Salary
Independent variables = (1) Beginning Salary, (2) Educational Level, (3) Months since Hire

Interpretation:
Descriptive Statistics

                            Mean         Std. Deviation   N
Current Salary              $34,419.57   $17,075.661      474
Beginning Salary            $17,016.09   $7,870.638       474
Months since Hire           81.11        10.061           474
Educational Level (years)   13.49        2.885            474

Correlations

                                                   Current   Beginning   Months       Educational
                                                   Salary    Salary      since Hire   Level (years)
Pearson Correlation   Current Salary               1.000     .880        .084         .661
                      Beginning Salary             .880      1.000       -.020        .633
                      Months since Hire            .084      -.020       1.000        .047
                      Educational Level (years)    .661      .633        .047         1.000
Sig. (1-tailed)       Current Salary               .         .000        .034         .000
                      Beginning Salary             .000      .           .334         .000
                      Months since Hire            .034      .334        .            .152
                      Educational Level (years)    .000      .000        .152         .
N (all variables)                                  474       474         474          474

Variables Entered/Removed(b)

Model   Variables Entered                                                  Variables Removed   Method
1       Educational Level (years), Months since Hire, Beginning Salary(a)  .                   Enter

a. All requested variables entered.
b. Dependent Variable: Current Salary

Model Summary

Model   R         R Square   Adjusted R Square   Std. Error of the Estimate
1       .895(a)   .801       .800                $7,645.998

a. Predictors: (Constant), Educational Level (years), Months since Hire, Beginning Salary

ANOVA(b)

Model        Sum of Squares   df    Mean Square   F         Sig.
Regression   1.104E11         3     3.681E10      629.703   .000(a)
Residual     2.748E10         470   5.846E7
Total        1.379E11         473

a. Predictors: (Constant), Educational Level (years), Months since Hire, Beginning Salary
b. Dependent Variable: Current Salary

Coefficients(a)

                               Unstandardized Coefficients   Standardized
Model                          B            Std. Error       Coefficients (Beta)   t        Sig.   Tolerance   VIF
1  (Constant)                  -19986.502   3236.616                               -6.175   .000
   Beginning Salary            1.689        .058              .779                 29.209   .000   .597        1.676
   Months since Hire           155.701      35.055            .092                 4.442    .000   .994        1.006
   Educational Level (years)   966.107      157.924           .163                 6.118    .000   .595        1.679

a. Dependent Variable: Current Salary

Collinearity Diagnostics(a)

                                                Variance Proportions
Model   Dimension   Eigenvalue   Condition Index   (Constant)   Beginning Salary   Months since Hire   Educational Level (years)
1       1           3.847        1.000             .00          .01                .00                 .00
        2           .125         5.542             .01          .57                .02                 .00
        3           .020         13.734            .02          .42                .12                 .92
        4           .007         23.392            .97          .00                .86                 .08

a. Dependent Variable: Current Salary

Result:
Simultaneous multiple regression was conducted to investigate the best predictors of current salary. The means, standard deviations, and intercorrelations can be found in the tables above. The combination of beginning salary, educational level, and months since hire predicted current salary significantly, F = 629.703, p < 0.05. The beta coefficients are presented in the last table. Note that all three independent variables (beginning salary, educational level, and months since hire) significantly predict current salary when all three are included. The adjusted R² value was 0.800, indicating that 80% of the variance in current salary is explained, which is a large effect.
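The same kind of simultaneous (Enter-method) multiple regression, plus the collinearity check, can be sketched in Python with statsmodels. The column names (`salary`, `salbegin`, `jobtime`, `educ`) and the synthetic data are assumptions for illustration, not the actual employee data.sav variables.

```python
# Sketch: simultaneous multiple regression with VIFs for the collinearity check.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(3)
n = 474
df = pd.DataFrame({
    "salbegin": rng.normal(17000, 7800, size=n),   # beginning salary (synthetic)
    "jobtime": rng.normal(81, 10, size=n),         # months since hire (synthetic)
    "educ": rng.normal(13.5, 2.9, size=n),         # years of education (synthetic)
})
df["salary"] = (-20000 + 1.7 * df["salbegin"] + 150 * df["jobtime"]
                + 950 * df["educ"] + rng.normal(0, 7600, size=n))

X = sm.add_constant(df[["salbegin", "jobtime", "educ"]])
model = sm.OLS(df["salary"], X).fit()
print(model.summary())

# Variance inflation factors (values near 1 indicate little multicollinearity)
for i, col in enumerate(X.columns):
    if col != "const":
        print(col, round(variance_inflation_factor(X.values, i), 3))
```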

Differences Between Groups


T-TEST
The t-test is used to compare two groups to answer difference research questions. It determines whether the groups differ by comparing their means.
Hypotheses for a t-test:
H0: There is no difference between variable 1 and variable 2 (accepted when the significance value is greater than 0.05).
H1: There is a difference between variable 1 and variable 2 (accepted when the significance value is less than 0.05).
Types of t-test:
There are three types of t-tests:
1) One sample t-test
2) Independent sample t-test
3) Paired sample t-test

1) ONE SAMPLE T-TEST:
A one sample t-test is used to determine whether there is a difference between the population mean (test value) and the sample mean.
Assumptions and conditions of the one sample t-test:
1. The dependent variable should be normally distributed within the population.
2. The data are independent (scores of one participant do not depend on scores of another; participants are independent of one another).

Question: Is the average salary of employees in employee data.sav equal to $30,000 (per month) in the US, or not?
The hypotheses are:
1) The null hypothesis states that the average salary of the employees is equal to 30000: H0: μ = 30000
2) The alternative hypothesis states that the average salary of the employees is not equal to 30000: HA: μ ≠ 30000

Interpretation:

One-Sample Statistics

                 N     Mean         Std. Deviation   Std. Error Mean
Current Salary   474   $34,419.57   $17,075.661      $784.311

In the above table, N shows the total number of observations (sample size N = 474 employees). The average salary of the employees is 34,419.57, the standard deviation of the data is 17,075.661, and the standard error of the mean is 784.311.

One-Sample Test (Test Value = 30000)

                                                                    95% Confidence Interval of the Difference
                 t       df    Sig. (2-tailed)   Mean Difference   Lower        Upper
Current Salary   5.635   473   .000              $4,419.568        $2,878.40    $5,960.73

From the above table we can observe that:
i. The t-value (5.635) is positive, which shows that the sample mean is greater than the test value of 30000.
ii. The degrees of freedom are (N - 1) = 473.
iii. The p-value is 0.000, which is less than 0.05.
iv. The difference between the sample mean and the test value is 4,419.568.
v. The confidence interval has lower and upper limits of 2,878.40 and 5,960.73 respectively; the confidence interval does not contain zero.

Decision: On the basis of the following observations, I reject the null hypothesis and accept the alternative hypothesis; I am almost 100% sure of this decision.
i. The p-value is 0.000, which is less than 0.05.
ii. The confidence interval does not contain zero.
Therefore, the average salary of employees is not equal to 30000.
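The decision above can be reproduced with a one-sample t-test in SciPy. The sketch below uses synthetic salary values in place of the employee data.sav column; only the test value of 30000 is taken from the report.

```python
# Sketch: one-sample t-test against a test value of 30000.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
salary = rng.normal(34400, 17000, size=474)      # synthetic stand-in for Current Salary

t_stat, p_value = stats.ttest_1samp(salary, popmean=30000)

# 95% confidence interval for the mean difference (sample mean minus test value)
diff = salary.mean() - 30000
half_width = stats.t.ppf(0.975, df=salary.size - 1) * stats.sem(salary)

print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
print(f"mean difference = {diff:.2f}, 95% CI = ({diff - half_width:.2f}, {diff + half_width:.2f})")
print("reject H0" if p_value < 0.05 else "fail to reject H0")
```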

2) INDEPENDENT SAMPLE T-TEST:
The independent sample t-test is used to compare two independent groups (e.g., male and female) with respect to their effect on the same dependent variable.

Assumptions and conditions of the independent t-test:
1. The variance of the dependent variable for the two categories of the independent variable should be equal.
2. The dependent variable should be scale (continuous).
3. Data on the dependent variable should be independent.

Question: Do the average salaries of male and female employees differ significantly, or are they equal, with respect to current salary in the US? (employee data.sav)
The hypotheses are:
1) The null hypothesis states that the average salary of male employees is equal to the average salary of female employees: H0: μ(male) = μ(female)
2) The alternative hypothesis states that the average salary of male employees is not equal to the average salary of female employees: HA: μ(male) ≠ μ(female)

Interpretation:-

Group Statistics

                 Gender   N     Mean         Std. Deviation   Std. Error Mean
Current Salary   Female   216   $26,031.92   $7,558.021       $514.258
                 Male     258   $41,441.78   $19,499.214      $1,213.968

From the above table we can observe that:
i. The total number of males is 258 and of females is 216.
ii. The mean salary of male employees is 41,441.78 and of female employees is 26,031.92.
iii. The standard deviation of male employees' salaries is 19,499.214 and of female employees' salaries is 7,558.021.
iv. The standard error of the mean is 1,213.968 for male employees and 514.258 for female employees.
Independent Samples Test (Current Salary)

Levene's Test for Equality of Variances: F = 119.669, Sig. = .000

t-test for Equality of Means
                              t        df        Sig. (2-tailed)   Mean Difference   Std. Error Difference   95% CI Lower   95% CI Upper
Equal variances assumed       10.945   472       .000              $15,409.862       $1,407.906              $12,643.322    $18,176.401
Equal variances not assumed   11.688   344.262   .000              $15,409.862       $1,318.400              $12,816.728    $18,002.996

Interpretation: In the above table we have two parts, (a) Levene's test and (b) the t-test, from which we can observe that:
i. The F-value is 119.669 with a significance value of 0.000, which is less than 0.05.
ii. On the basis of the p-value of Levene's test, we conclude that the variances of the two populations are not equal, so we read the "equal variances not assumed" row.
iii. The t-value is positive, which shows that the mean salary of male employees is greater than the mean salary of female employees.
iv. The degrees of freedom are 344.262.
v. The p-value is 0.000, which is less than 0.05.
vi. The difference between the two group means is 15,409.862.
vii. The standard error of the difference is 1,318.400.
viii. The confidence interval has lower and upper limits of 12,816.728 and 18,002.996 respectively; it does not contain zero.

Decision: On the basis of the following observations, I reject the null hypothesis and accept the alternative hypothesis; I am almost 100% sure of this decision.
i. The p-value is 0.000, which is less than 0.05.
ii. The confidence interval does not contain zero.
Therefore, the average salaries of male and female employees are not equal.
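An equivalent independent-samples comparison can be sketched with SciPy: Levene's test first, then a t-test with or without the equal-variance assumption. The two arrays below are synthetic stand-ins for the male and female salary groups.

```python
# Sketch: Levene's test followed by an independent-samples t-test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
male = rng.normal(41400, 19500, size=258)       # synthetic stand-in
female = rng.normal(26000, 7600, size=216)      # synthetic stand-in

lev_stat, lev_p = stats.levene(male, female)    # H0: equal variances
equal_var = lev_p >= 0.05
t_stat, p_value = stats.ttest_ind(male, female, equal_var=equal_var)  # Welch's t when variances differ

print(f"Levene p = {lev_p:.4f} -> equal_var = {equal_var}")
print(f"t = {t_stat:.3f}, p = {p_value:.4f}, mean difference = {male.mean() - female.mean():.2f}")
```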

3) PAIRED T-TEST:
A paired (samples) t-test is used when you have two related observations (i.e., two observations per subject) and you want to see whether the means of these two normally distributed interval variables differ from one another.

Assumptions and conditions of the paired sample t-test:
1) The independent variable is dichotomous and its levels (or groups) are paired, or matched, in some way (husband-wife, pre-post, etc.).
2) The dependent variable is normally distributed in the two conditions.

Question:
Is the mean difference between the two paired variables (current and beginning salary) significant, or are the two means equal?

Interpretation:

Paired Samples Statistics

                            Mean         N     Std. Deviation   Std. Error Mean
Pair 1   Current Salary     $34,419.57   474   $17,075.661      $784.311
         Beginning Salary   $17,016.09   474   $7,870.638       $361.510

From the above table we can observe that:
i. The mean values of current and beginning salary are 34,419.57 and 17,016.09 respectively.
ii. The total number of cases in each group is 474.
iii. The standard deviations of current and beginning salary are 17,075.661 and 7,870.638 respectively.
iv. The standard errors of the mean for current and beginning salary are 784.311 and 361.510 respectively.

Paired Samples Correlations

                                             N     Correlation   Sig.
Pair 1   Current Salary & Beginning Salary   474   .880          .000

From the above table we can observe that:
i. The total number of pairs is 474.
ii. The correlation of 0.88 shows that the two variables are highly correlated, which indicates that employees with a higher beginning salary also tend to have a higher current salary.
iii. The p-value is 0.000, which is less than 0.05.

Paired Samples Test (Pair 1: Current Salary - Beginning Salary)

Paired Differences
Mean          Std. Deviation   Std. Error Mean   95% CI Lower   95% CI Upper   t        df    Sig. (2-tailed)
$17,403.481   $10,814.620      $496.732          $16,427.407    $18,379.555    35.036   473   .000

Interpretation: From the above table we can observe that:
i. The mean of the paired differences is 17,403.481.
ii. The standard deviation of the paired differences is 10,814.620.
iii. The standard error of the mean difference is 496.732.
iv. The confidence interval has lower and upper limits of 16,427.407 and 18,379.555 respectively; it does not contain zero.
v. The t-value is 35.036.
vi. The degrees of freedom are (N - 1) = 473.
vii. The p-value is 0.000, which is less than 0.05.

Decision: On the basis of the following observations, I reject the null hypothesis and accept the alternative hypothesis; I am almost 100% sure of this decision.
i. The p-value is 0.000, which is less than 0.05.
ii. The confidence interval does not contain zero.
Therefore, the mean difference between the two paired variables, current and beginning salary, is significant; the means are not the same.
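The paired analysis above maps onto SciPy's `ttest_rel`, as in the sketch below. The two arrays are synthetic and must be aligned person-by-person, as with the beginning/current salary pairs in the report.

```python
# Sketch: correlation between the paired measures, then a paired-samples t-test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
beginning = rng.normal(17000, 7800, size=474)                 # synthetic stand-in
current = 1.9 * beginning + rng.normal(0, 5000, size=474)     # correlated second measure

r, r_p = stats.pearsonr(current, beginning)      # the "Paired Samples Correlations" step
t_stat, p_value = stats.ttest_rel(current, beginning)

print(f"r = {r:.3f} (p = {r_p:.4f})")
print(f"paired t = {t_stat:.3f}, p = {p_value:.4f}, "
      f"mean difference = {np.mean(current - beginning):.2f}")
```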

One-way ANOVA: A one-way analysis of variance (ANOVA) is used when you have a categorical independent variable (with two or more categories) and a normally distributed interval dependent variable and you wish to test for differences in the means of the dependent variable broken down by the levels of the independent variable. Assumptions:

The independent variable consists of two or more categorical independent groups.
The dependent variable is either interval or ratio (continuous).
The dependent variable is approximately normally distributed for each category of the independent variable.
There is equality of variances between the independent groups (homogeneity of variances).
Observations are independent.

Question: Are there differences among the ethnicity groups (Euro-American, African-American, Latino-American, Asian-American) on the competence scale?

Dependent variable: Competence scale
Independent variable: Ethnicity groups

Interpretation:

Descriptives (competence scale)

                                                           95% Confidence Interval for Mean
               N    Mean     Std. Deviation   Std. Error   Lower Bound   Upper Bound        Minimum   Maximum
Euro-Amer      40   3.4000   .54243           .08577       3.2265        3.5735             1.75      4.00
African-Amer   14   3.1786   .90101           .24080       2.6583        3.6988             1.00      4.00
Latino-Amer    10   2.8750   .72887           .23049       2.3536        3.3964             1.00      3.50
Asian-Amer     7    3.3214   .51467           .19453       2.8454        3.7974             2.50      4.00
Total          71   3.2746   .66300           .07868       3.1177        3.4316             1.00      4.00

The Descriptives table provides the descriptive statistics: mean, standard deviation, and 95% confidence interval for the dependent variable (competence scale) for each separate ethnicity group (Euro-American, African-American, Latino-American, Asian-American) as well as for all groups combined (Total).
Test of Homogeneity of Variances (competence scale)

Levene Statistic   df1   df2   Sig.
1.323              3     67    .274

The assumption of equal variances (homogeneity) has been met, because the Sig. value (.274) is more than 0.05 (we accept the null hypothesis of equal variances).

ANOVA (competence scale)

                 Sum of Squares   df   Mean Square   F       Sig.
Between Groups   2.370            3    .790          1.864   .144
Within Groups    28.399           67   .424
Total            30.769           70

The above table gives the test results for the one-way ANOVA. The results are given in three rows. The first row, labeled Between Groups, gives the variability due to the different ethnicity groups (known reasons). The second row, labeled Within Groups, gives the variability due to random error (unknown reasons), and the third row gives the total variability. In this case, the F-value is 1.864 and the corresponding p-value (0.144) is greater than 0.05. Therefore we accept the null hypothesis and conclude that the group means do not differ across the four ethnicity categories.

There is no significant difference among the ethnicity groups on the competence scale.
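The one-way ANOVA and its homogeneity check can be sketched in SciPy as below. The four group arrays are synthetic stand-ins for the ethnicity groups' competence scores; their sizes roughly mirror the table above.

```python
# Sketch: Levene's test for homogeneity, then a one-way ANOVA across four groups.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
group_specs = [(3.40, 40), (3.18, 14), (2.88, 10), (3.32, 7)]     # (mean, n) per group, synthetic
groups = [rng.normal(mean, 0.65, size=n) for mean, n in group_specs]

lev_stat, lev_p = stats.levene(*groups)      # homogeneity of variances
f_stat, p_value = stats.f_oneway(*groups)    # one-way ANOVA

print(f"Levene p = {lev_p:.4f}")
print(f"F = {f_stat:.3f}, p = {p_value:.4f}")
print("groups differ" if p_value < 0.05 else "no evidence the group means differ")
```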

Two-way ANOVA:
The two-way ANOVA compares the mean differences between groups that have been split on two independent variables (called factors). You need two independent, categorical variables and one continuous, dependent variable

Assumptions to use two-way ANOVA:

As with other parametric tests, we make the following assumptions when using two-way ANOVA:

The dependent variable is either interval or ratio (continuous).
The dependent variable is approximately normally distributed for each combination of categories of the independent variables.
The variances among populations must be equal (homogeneity).
The two independent variables consist of categorical (nominal) groups.

Interpretation:

Between-Subjects Factors

                                Value Label       N
math grades             0      less A-B          43
                         1      most A-B          30
father's educ revised    1.00   HS grad or less   38
                         2.00   Some College      16
                         3.00   BS or More        19

This table provides the sample sizes (N) for our independent variables (math grades and father's education revised groups).

Descriptive Statistics (Dependent Variable: math achievement test)

math grades   father's educ revised   Mean      Std. Deviation   N
less A-B      HS grad or less         9.8261    5.03708          23
              Some College            12.8149   5.05553          9
              BS or More              12.3636   7.18407          11
              Total                   11.1008   5.69068          43
most A-B      HS grad or less         10.4889   6.56574          15
              Some College            16.4284   3.43059          7
              BS or More              21.8335   2.84518          8
              Total                   14.9000   7.00644          30
Total         HS grad or less         10.0877   5.61297          38
              Some College            14.3958   4.66544          16
              BS or More              16.3509   7.40918          19
              Total                   12.6621   6.49659          73

This table is very useful, as it provides the mean and standard deviation for the groups formed by both independent variables. In addition, the table provides "Total" rows, which give the means and standard deviations for groups split by only one independent variable, or not split at all. The six cell means are what will be shown in the plot.

Levene's Test of Equality of Error Variances (Dependent Variable: math achievement test)

F       df1   df2   Sig.
2.548   5     67    .036

Tests the null hypothesis that the error variance of the dependent variable is equal across groups.
a. Design: Intercept + mathgr + faedRevis + mathgr * faedRevis

The Levene statistic p-value (.036) is less than α = 0.05, so we reject the null hypothesis that the variances are all equal; the assumption of homogeneity of variances is violated. Because ANOVA is reasonably robust and we have random, independent samples, we continue with the ANOVA, but the results should be interpreted with some caution.
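A factorial (two-way) ANOVA of this design can be sketched with statsmodels' formula interface. The data frame below is synthetic, and the column names (`mathgr`, `faed`, `mathach`) are assumptions, not the exact SPSS variable names.

```python
# Sketch: two-way ANOVA (main effects + interaction) via OLS and an ANOVA table.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(8)
n = 73
df = pd.DataFrame({
    "mathgr": rng.choice(["less A-B", "most A-B"], size=n),
    "faed": rng.choice(["HS or less", "Some College", "BS or More"], size=n),
})
df["mathach"] = rng.normal(12, 6, size=n)      # synthetic outcome

model = ols("mathach ~ C(mathgr) * C(faed)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))         # Type II sums of squares
```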

One-way repeated measures ANOVA: A one-way repeated measures analysis of variance is used when you have one categorical independent variable and a normally distributed interval dependent variable that is measured at least twice for each subject. It is the equivalent of the paired samples t-test, but allows for two or more levels of the categorical variable. It tests whether the mean of the dependent variable differs across the levels of the categorical variable.
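For the one-way repeated measures case just described, statsmodels provides `AnovaRM`. The sketch below assumes long-format data with one row per subject per time point; the column names (`id`, `time`, `fear`) are illustrative, not taken from the report's file.

```python
# Sketch: one-way repeated measures ANOVA with statsmodels' AnovaRM.
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(9)
n_subjects = 30
# One row of scores per subject per time point (wide matrix flattened row-wise)
scores = np.column_stack([rng.normal(m, 5, size=n_subjects) for m in (40.0, 37.5, 35.0)])

df = pd.DataFrame({
    "id": np.repeat(np.arange(n_subjects), 3),
    "time": np.tile(["time1", "time2", "time3"], n_subjects),
    "fear": scores.ravel(),        # row-wise flattening keeps each subject's three scores together
})

res = AnovaRM(data=df, depvar="fear", subject="id", within=["time"]).fit()
print(res)                         # F test for the within-subject factor `time`
```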

SPANOVA / MIXED ANOVA (split-plot ANOVA):

The test is fairly robust, so it can still be run even if the assumptions are not fully met. A plain ANOVA looks at differences between groups (e.g., boys and girls; couples and non-couples), while a SPANOVA also looks at differences within the groups over time (e.g., the anxiety level of boys from the first class to the last class, i.e., across time intervals).

Question: Which intervention, developing maths skills or building confidence, is more effective in reducing students' fear of statistics (test score) across three time periods: pre-intervention, post-intervention, and follow-up (three months later)?

Assumptions:
One variable between the groups.
One variable within the groups (repeated over time).
A continuous dependent variable.
Homogeneity of variance-covariance (checked with Box's M test, via the General Linear Model): the pattern of change between the two groups is the same across the three time intervals.
Plus the usual ANOVA assumptions.

Procedure: Analyze > General Linear Model > Repeated Measures. Name the within-subject factor (e.g., Time) with 3 levels; add the three repeated measures (fear1, fear2, fear3) as the within-subject variables; add type of class as the between-subjects factor.

Options: check estimates of effect size and homogeneity tests. Plots: put group on separate lines and time on the horizontal axis.

Interpretation:

Between-Subjects Factors

                        Value Label           N
type of class    1      maths skills          15
                 2      confidence building   15

The three repeated measurements (fear of stats time1, time2, time3) are the dependent variables.

Descriptive Statistics

                       type of class         Mean    Std. Deviation   N
fear of stats time1    maths skills          39.87   4.596            15
                       confidence building   40.47   5.817            15
                       Total                 40.17   5.160            30
fear of stats time2    maths skills          37.67   4.515            15
                       confidence building   37.33   5.876            15
                       Total                 37.50   5.151            30
fear of stats time3    maths skills          36.07   5.431            15
                       confidence building   34.40   6.631            15
                       Total                 35.23   6.015            30

The scale ranges from 20 to 60, where 60 is the highest fear. Looking at the group means, fear is highest at Time 1 (40.17), lower at Time 2 (37.50), and lowest at Time 3 (35.23), so fear of statistics decreases across the three time points.

Box's Test of Equality of Covariance Matrices(a)

Box's M   F      df1   df2        Sig.
1.520     .224   6     5680.302   .969

Tests the null hypothesis that the observed covariance matrices of the dependent variables are equal across groups.
a. Design: Intercept + group; Within Subjects Design: time

Since the Sig. value (.969) is greater than 0.05, the null hypothesis of equal covariance matrices is retained and the assumption of homogeneity of variance-covariance is met.

Multivariate Tests(c)

Effect                              Value   F           Hypothesis df   Error df   Sig.   Partial Eta Squared   Noncent. Parameter   Observed Power(b)
time           Pillai's Trace       .663    26.593(a)   2.000           27.000     .000   .663                  53.185               1.000
               Wilks' Lambda        .337    26.593(a)   2.000           27.000     .000   .663                  53.185               1.000
               Hotelling's Trace    1.970   26.593(a)   2.000           27.000     .000   .663                  53.185               1.000
               Roy's Largest Root   1.970   26.593(a)   2.000           27.000     .000   .663                  53.185               1.000
time * group   Pillai's Trace       .131    2.034(a)    2.000           27.000     .150   .131                  4.067                .382
               Wilks' Lambda        .869    2.034(a)    2.000           27.000     .150   .131                  4.067                .382
               Hotelling's Trace    .151    2.034(a)    2.000           27.000     .150   .131                  4.067                .382
               Roy's Largest Root   .151    2.034(a)    2.000           27.000     .150   .131                  4.067                .382

a. Exact statistic
b. Computed using alpha = .05
c. Design: Intercept + group; Within Subjects Design: time

Time: the Wilks' Lambda significance is less than 0.05, which means there are differences in the fear scores across the three time intervals. For time * group (whether the effect of time depends on group), the significance value is greater than 0.05, which means there is no interaction: the change over time is the same for the two groups. Partial eta squared gives the effect size of the time effect on the dependent variable (fear); its value of 0.663 (eta = √0.663 ≈ 0.81) indicates a very large difference between the time intervals.

Mauchly's Test of Sphericity(b) (Measure: MEASURE_1)

                                                                  Epsilon(a)
Within Subjects Effect   Mauchly's W   Approx. Chi-Square   df   Sig.   Greenhouse-Geisser   Huynh-Feldt   Lower-bound
time                     .348          28.517               2    .000   .605                 .640          .500

Tests the null hypothesis that the error covariance matrix of the orthonormalized transformed dependent variables is proportional to an identity matrix.
a. May be used to adjust the degrees of freedom for the averaged tests of significance. Corrected tests are displayed in the Tests of Within-Subjects Effects table.
b. Design: Intercept + group; Within Subjects Design: time

Sphericity asks whether the spread of the scores stays the same across the time intervals. For example: is the variance of the students' scores the same at time 1 (e.g., 10, 20, 30, 40, 50), at time 2 (e.g., 20, 30, 40, 50, 60), and at time 3 (e.g., 30, 40, 50, 60, 70)? If the intervals have the same variance, the assumption of sphericity is met. Here Mauchly's test is significant (Sig. = .000), so sphericity is violated and the corrected tests (e.g., Greenhouse-Geisser) in the next table should be used.

Tests of Within-Subjects Effects (Measure: MEASURE_1)

Source                            Type III Sum of Squares   df       Mean Square   F        Sig.   Partial Eta Squared   Noncent. Parameter   Observed Power(a)
time           Sphericity Assumed 365.867                   2        182.933       43.286   .000   .607                  86.571               1.000
               Greenhouse-Geisser 365.867                   1.210    302.247       43.286   .000   .607                  52.397               1.000
               Huynh-Feldt        365.867                   1.281    285.632       43.286   .000   .607                  55.445               1.000
               Lower-bound        365.867                   1.000    365.867       43.286   .000   .607                  43.286               1.000
time * group   Sphericity Assumed 19.467                    2        9.733         2.303    .109   .076                  4.606                .449
               Greenhouse-Geisser 19.467                    1.210    16.082        2.303    .134   .076                  2.788                .342
               Huynh-Feldt        19.467                    1.281    15.198        2.303    .132   .076                  2.950                .352
               Lower-bound        19.467                    1.000    19.467        2.303    .140   .076                  2.303                .311
Error(time)    Sphericity Assumed 236.667                   56       4.226
               Greenhouse-Geisser 236.667                   33.894   6.983
               Huynh-Feldt        236.667                   35.865   6.599
               Lower-bound        236.667                   28.000   8.452

a. Computed using alpha = .05

Tests of Within-Subjects Contrasts (Measure: MEASURE_1)

Source         time        Type III Sum of Squares   df   Mean Square   F        Sig.   Partial Eta Squared   Noncent. Parameter   Observed Power(a)
time           Linear      365.067                   1    365.067       49.222   .000   .637                  49.222               1.000
               Quadratic   .800                      1    .800          .772     .387   .027                  .772                 .136
time * group   Linear      19.267                    1    19.267        2.598    .118   .085                  2.598                .344
               Quadratic   .200                      1    .200          .193     .664   .007                  .193                 .071
Error(time)    Linear      207.667                   28   7.417
               Quadratic   29.000                    28   1.036

a. Computed using alpha = .05

There is a significant (linear) difference in the fear scores across the time intervals.

Levene's Test of Equality of Error Variances(a)

                      F      df1   df2   Sig.
fear of stats time1   .893   1     28    .353
fear of stats time2   .767   1     28    .389
fear of stats time3   .770   1     28    .388

Tests the null hypothesis that the error variance of the dependent variable is equal across groups.
a. Design: Intercept + group; Within Subjects Design: time

Here the assumption is met: the error variances are equal across groups at the 1st, 2nd, and 3rd time intervals (all Sig. values are greater than 0.05).

Tests of Between-Subjects Effects (Measure: MEASURE_1, Transformed Variable: Average)

Source      Type III Sum of Squares   df   Mean Square   F          Sig.   Partial Eta Squared   Noncent. Parameter   Observed Power(a)
Intercept   127464.100                1    127464.100    1531.757   .000   .982                  1531.757             1.000
Group       4.900                     1    4.900         .059       .810   .002                  .059                 .056
Error       2330.000                  28   83.214

a. Computed using alpha = .05

Here the significance level for group (.810) is greater than 0.05, which says that the two groups affect the fear level in the same way; there is no between-group difference. The partial eta squared for group is .002 (eta ≈ 0.04), which is very small and says that group has essentially no effect on fear.

In the profile plot, the lines for the two groups run in the same direction across all three time intervals, which again says that there are no group differences.
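If the pingouin package is available, the whole split-plot design can be reproduced with its `mixed_anova` function, as in the hedged sketch below. The long-format data frame and its column names are assumptions for illustration, not the report's data file.

```python
# Sketch of a split-plot (mixed) ANOVA, assuming the pingouin package is installed.
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(10)
rows = []
for g, group in enumerate(["maths skills", "confidence building"]):
    for subj in range(15):
        for time, mean in zip(["time1", "time2", "time3"], (40.0, 37.5, 35.0)):
            rows.append({"id": g * 15 + subj, "group": group,
                         "time": time, "fear": rng.normal(mean, 5)})
df = pd.DataFrame(rows)

# between = group (maths skills vs confidence building); within = time (three measurements)
aov = pg.mixed_anova(data=df, dv="fear", within="time", subject="id", between="group")
print(aov)     # rows for the group effect, the time effect, and the group x time interaction
```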

One-way MANOVA:
Multivariate analysis of variance (MANOVA) is used to model two or more continuous dependent variables with one or more categorical predictor variables.

Assumptions:

One independent variable consisting of two or more categorical independent groups.
Two or more dependent variables that are either interval or ratio (continuous).
Multivariate normality.
Equality of variances between the independent groups (homogeneity of variances).
Independence of cases.

Question:
Do males and females differ in terms of overall well-being? In other words, are males better adjusted than females in terms of their positive and negative mood states and levels of perceived stress?

Interpretation:
Between-Subjects Factors

         Value Label   N
sex   1  MALES         184
      2  FEMALES       248

This table shows the independent variable, sex (gender). The sample sizes are N = 184 males and N = 248 females.

Descriptive Statistics

                         sex       Mean    Std. Deviation   N
Total positive affect    MALES     33.62   6.985            184
                         FEMALES   33.69   7.439            248
                         Total     33.66   7.241            432
Total negative affect    MALES     18.71   6.901            184
                         FEMALES   19.98   7.178            248
                         Total     19.44   7.082            432
Total perceived stress   MALES     25.79   5.414            184
                         FEMALES   27.42   6.078            248
                         Total     26.72   5.854            432

The Descriptive Statistics table shows the sample sizes, means, and standard deviations of all the dependent variables.

Box's Test of Equality of Covariance Matrices(a)

Box's M   F       df1   df2           Sig.
6.942     1.148   6     1074771.869   .331

Tests the null hypothesis that the observed covariance matrices of the dependent variables are equal across groups.
a. Design: Intercept + sex

One of the assumptions of MANOVA is homogeneity of covariance matrices, which is tested by Box's Test of Equality of Covariance Matrices. If the Sig. value is less than .05 (p < 0.05), the assumption of homogeneity of covariances is violated. Here p = .331, so the assumption of homogeneity of covariances has been met.

Multivariate Tests(c)

Effect                             Value    F              Hypothesis df   Error df   Sig.   Partial Eta Squared   Noncent. Parameter   Observed Power(b)
Intercept   Pillai's Trace         .987     10841.625(a)   3.000           428.000    .000   .987                  32524.875            1.000
            Wilks' Lambda          .013     10841.625(a)   3.000           428.000    .000   .987                  32524.875            1.000
            Hotelling's Trace      75.993   10841.625(a)   3.000           428.000    .000   .987                  32524.875            1.000
            Roy's Largest Root     75.993   10841.625(a)   3.000           428.000    .000   .987                  32524.875            1.000
sex         Pillai's Trace         .024     3.569(a)       3.000           428.000    .014   .024                  10.707               .788
            Wilks' Lambda          .976     3.569(a)       3.000           428.000    .014   .024                  10.707               .788
            Hotelling's Trace      .025     3.569(a)       3.000           428.000    .014   .024                  10.707               .788
            Roy's Largest Root     .025     3.569(a)       3.000           428.000    .014   .024                  10.707               .788

a. Exact statistic
b. Computed using alpha = .05
c. Design: Intercept + sex

The Multivariate Tests table is where we find the actual result of the one-way MANOVA. We look at the second effect, labelled sex, and in particular at the Wilks' Lambda row. To determine whether the one-way MANOVA is statistically significant, we look at the Sig. column. We can see from the table that the Sig. value is .014, which is less than 0.05. Therefore we can conclude that males and females differ significantly on the combined dependent variables. For the follow-up univariate tests, a Bonferroni-adjusted alpha of 0.05/3 = 0.0167 is used.

Levene's Test of Equality of Error Variances(a)

                         F       df1   df2   Sig.
Total positive affect    1.065   1     430   .303
Total negative affect    1.251   1     430   .264
Total perceived stress   2.074   1     430   .151

Tests the null hypothesis that the error variance of the dependent variable is equal across groups.
a. Design: Intercept + sex

As shown in the Levene's Test of Equality of Error Variances table above, all dependent variables have homogeneity of variances (p > .05).

Tests of Between-Subjects Effects

Source            Dependent Variable       Type III Sum of Squares   df    Mean Square   F          Sig.   Partial Eta Squared   Noncent. Parameter   Observed Power(b)
Corrected Model   Total positive affect    .440(a)                   1     .440          .008       .927   .000                  .008                 .051
                  Total negative affect    172.348(c)                1     172.348       3.456      .064   .008                  3.456                .458
                  Total perceived stress   281.099(d)                1     281.099       8.342      .004   .019                  8.342                .822
Intercept         Total positive affect    478633.634                1     478633.634    9108.270   .000   .955                  9108.270             1.000
                  Total negative affect    158121.903                1     158121.903    3170.979   .000   .881                  3170.979             1.000
                  Total perceived stress   299040.358                1     299040.358    8874.752   .000   .954                  8874.752             1.000
sex               Total positive affect    .440                      1     .440          .008       .927   .000                  .008                 .051
                  Total negative affect    172.348                   1     172.348       3.456      .064   .008                  3.456                .458
                  Total perceived stress   281.099                   1     281.099       8.342      .004   .019                  8.342                .822
Error             Total positive affect    22596.218                 430   52.549
                  Total negative affect    21442.088                 430   49.865
                  Total perceived stress   14489.121                 430   33.696
Total             Total positive affect    512110.000                432
                  Total negative affect    184870.000                432
                  Total perceived stress   323305.000                432
Corrected Total   Total positive affect    22596.657                 431
                  Total negative affect    21614.435                 431
                  Total perceived stress   14770.220                 431

a. R Squared = .000 (Adjusted R Squared = -.002)
b. Computed using alpha = .05
c. R Squared = .008 (Adjusted R Squared = .006)
d. R Squared = .019 (Adjusted R Squared = .017)
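To close, the one-way MANOVA itself can be sketched in Python with statsmodels' `MANOVA` class. The data below are synthetic, and the column names are illustrative stand-ins for the three well-being scores and the sex variable.

```python
# Sketch: one-way MANOVA of three continuous outcomes on a two-level factor.
import numpy as np
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

rng = np.random.default_rng(11)
n = 432
df = pd.DataFrame({
    "sex": rng.choice(["male", "female"], size=n),
    "pos_affect": rng.normal(33.7, 7.2, size=n),
    "neg_affect": rng.normal(19.4, 7.1, size=n),
    "stress": rng.normal(26.7, 5.9, size=n),
})

manova = MANOVA.from_formula("pos_affect + neg_affect + stress ~ sex", data=df)
print(manova.mv_test())    # Pillai's trace, Wilks' lambda, Hotelling's trace, Roy's root
```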
