
Mann-Whitney U Test

Ranks (spending)
group     N    Mean Rank   Sum of Ranks
Rakesh    30   27.97       839.00
Vishal    30   33.03       991.00
Total     60

Test Statistics (spending)
Mann-Whitney U           374.000
Wilcoxon W               839.000
Z                        -1.124
Asymp. Sig. (2-tailed)   .261
a. Grouping Variable: group

H0: the spending distributions of the two groups (Rakesh and Vishal) are identical. The p-value is .261 (about 26%), which is greater than alpha = 5%, so we accept H0. Equivalently, the calculated |Z| = 1.124 is less than the table value 1.96, which leads to the same conclusion.
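The same test can be reproduced outside SPSS. Below is a minimal sketch in Python using scipy, assuming the 30 raw spending values per group are available; the lists shown are placeholders, not the actual data, and the asymptotic p-value should be comparable to the SPSS result.

from scipy import stats

# Placeholder spending values; substitute the 30 observations per group.
rakesh = [25, 40, 55, 70, 90, 110, 130, 65, 80, 95]
vishal = [30, 45, 65, 85, 100, 120, 140, 75, 90, 105]

# Two-sided Mann-Whitney U test on the two independent samples.
u_stat, p_value = stats.mannwhitneyu(rakesh, vishal, alternative='two-sided')
print(u_stat, p_value)   # accept H0 when p_value > 0.05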

Kruskal-Wallis Test
Descriptive Statistics
           N    Mean      Std. Deviation   Minimum   Maximum
spending   90   74.2333   40.71166         10.00     154.00
group      90   2.0000    .82107           1.00      3.00

Ranks (spending)
group     N    Mean Rank
Rakesh    30   58.38
Vishal    30   54.70
3.00      30   23.42
Total     90

Test Statistics (spending)
Chi-Square    32.460
df            2
Asymp. Sig.   .000
a. Kruskal Wallis Test
b. Grouping Variable: group

H0: all three populations are identical. The p-value (.000) is less than alpha = 5%, so reject H0. Equivalently, the table (critical) chi-square value for df = 2 at alpha = 5% is about 5.99, and the calculated chi-square of 32.460 is greater than this table value, so H0 is rejected.
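A minimal Python sketch of the same test, assuming the raw spending values for the three groups are available (the lists below are placeholders); scipy's kruskal reports the H statistic, which SPSS labels Chi-Square.

from scipy import stats

# Placeholder spending values; substitute the 30 observations per group.
rakesh = [120, 95, 140, 110, 130, 105, 150, 88, 97, 125]
vishal = [100, 118, 115, 90, 135, 142, 108, 99, 121, 133]
group3 = [20, 35, 15, 40, 25, 30, 18, 45, 22, 38]

h_stat, p_value = stats.kruskal(rakesh, vishal, group3)
critical = stats.chi2.ppf(0.95, df=2)   # table value, about 5.99
print(h_stat, p_value, critical)        # reject H0 when h_stat > critical (p_value < 0.05)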

Kolmogorov-Smirnov (KS) Test
Descriptive Statistics
           N    Mean      Std. Deviation   Minimum   Maximum
spending   30   87.5333   41.01365         25.00     152.00

One-Sample Kolmogorov-Smirnov Test (spending)
N                                           30
Normal Parameters          Mean             87.5333
                           Std. Deviation   41.01365
Most Extreme Differences   Absolute         .153
                           Positive         .153
                           Negative         -.145
Kolmogorov-Smirnov Z                        .840
Asymp. Sig. (2-tailed)                      .481
a. Test distribution is Normal.
b. Calculated from data.

H0: the distribution of spending is normal. The p-value (.481) is greater than alpha = 5%, so we accept H0. Equivalently, the calculated Z = 0.840 is less than the table value 1.96, so H0 is not rejected and the spending data can be treated as normally distributed.
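A minimal Python sketch, assuming the 30 spending values are available (the array below is a placeholder). Estimating the normal parameters from the sample, as SPSS does here, makes the asymptotic p-value only approximate (a Lilliefors-type correction would be needed for an exact match).

import numpy as np
from scipy import stats

# Placeholder spending sample; substitute the 30 observed values.
spending = np.array([25, 40, 52, 60, 68, 75, 83, 90, 97, 105, 118, 126, 134, 145, 152])

mean, sd = spending.mean(), spending.std(ddof=1)   # parameters estimated from the data
d_stat, p_value = stats.kstest(spending, 'norm', args=(mean, sd))
print(d_stat, p_value)   # accept H0 (normality) when p_value > 0.05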

Simple Linear Regression
Variables Entered/Removed
Model   Variables Entered   Variables Removed   Method
1       Vishvesh            .                   Enter
a. Dependent Variable: Sushant
b. All requested variables entered.

Model Summary
Model   R      R Square   Adjusted R Square   Std. Error of the Estimate   Durbin-Watson
1       .021   .000       -.035               30.59104                     2.580
a. Predictors: (Constant), Vishvesh
b. Dependent Variable: Sushant

ANOVA (Model 1)
             Sum of Squares   df   Mean Square   F      Sig.
Regression   11.570           1    11.570        .012   .912
Residual     26202.730        28   935.812
Total        26214.300        29
a. Dependent Variable: Sushant
b. Predictors: (Constant), Vishvesh

Coefficients (Model 1)
             B        Std. Error   Beta    t       Sig.
(Constant)   91.380   11.206               8.155   .000
Vishvesh     -.027    .241         -.021   -.111   .912
a. Dependent Variable: Sushant

Residuals Statistics
                                    Minimum     Maximum    Mean      Std. Deviation   N
Predicted Value                     89.2912     91.1124    90.3000   .63162           30
Std. Predicted Value                -1.597      1.286      .000      1.000            30
Standard Error of Predicted Value   5.586       10.654     7.776     1.409            30
Adjusted Predicted Value            83.8102     92.8783    90.0140   2.39510          30
Residual                            -50.30893   43.31614   .00000    30.05898         30
Std. Residual                       -1.645      1.416      .000      .983             30
Stud. Residual                      -1.673      1.471      .004      1.017            30
Deleted Residual                    -52.04409   47.52524   .28598    32.18797         30
Stud. Deleted Residual              -1.731      1.504      .005      1.031            30
Mahal. Distance                     .000        2.551      .967      .689             30
Cook's Distance                     .000        .132       .036      .035             30
Centered Leverage Value             .000        .088       .033      .024             30
a. Dependent Variable: Sushant

The equation of the regression line is Sushant = 91.380 - 0.027 * Vishvesh

The Durbin-Watson statistic is about 2.58 (a value of 2 indicates no autocorrelation), so autocorrelation is minimal. R Square is essentially zero (.000, with R = .021), meaning the simple linear regression model explains almost none of the variation in Sushant. From the ANOVA table, the p-value is .912 (91.2%), much higher than alpha = 5%, so we accept H0: the model is not significant; the same conclusion follows from the p-value of the Vishvesh coefficient (.912). From the histogram and P-P plot, the residuals are approximately normally distributed. From the residual scatterplot, the error terms lie fairly evenly on either side of zero, so the assumption E(Ui) = 0 is reasonable; however, the plot shows a pattern, which suggests the error variances are not homogeneous and the error terms are not independent.
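The same model can be fitted outside SPSS. Below is a minimal sketch in Python using statsmodels, assuming the 30 pairs of Vishvesh (predictor) and Sushant (response) values are available; the simulated arrays are placeholders for the real data.

import numpy as np
import statsmodels.api as sm
from statsmodels.stats.stattools import durbin_watson

# Placeholder data; substitute the 30 observed values of each variable.
rng = np.random.default_rng(0)
vishvesh = rng.uniform(10, 80, size=30)
sushant = 90 + rng.normal(0, 30, size=30)

X = sm.add_constant(vishvesh)        # adds the intercept term
model = sm.OLS(sushant, X).fit()
print(model.summary())               # R-square, ANOVA F and p-value, coefficients
print(durbin_watson(model.resid))    # values near 2 indicate no autocorrelation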

Multiple Regression

Descriptive Statistics
           Mean      Std. Deviation   N
Vishal     54.3000   30.33167         30
Rakesh     45.3000   27.29930         30
Sushant    90.3000   30.06562         30
Suja       87.5333   41.01365         30
VIshvesh   40.3333   23.58453         30

Correlations
Pearson Correlation   Vishal   Rakesh   Sushant   Suja    VIshvesh
  Vishal              1.000    -.195    -.075     -.072   .059
  Rakesh              -.195    1.000    .124      -.221   -.140
  Sushant             -.075    .124     1.000     .108    -.021
  Suja                -.072    -.221    .108      1.000   .019
  VIshvesh            .059     -.140    -.021     .019    1.000
Sig. (1-tailed)
  Vishal              .        .151     .347      .354    .378
  Rakesh              .151     .        .256      .120    .230
  Sushant             .347     .256     .         .286    .456
  Suja                .354     .120     .286      .       .461
  VIshvesh            .378     .230     .456      .461    .
N = 30 for every pair.

Variables Entered/Removed
Model   Variables Entered                 Variables Removed   Method
1       VIshvesh, Suja, Sushant, Rakesh   .                   Enter
a. Dependent Variable: Vishal
b. All requested variables entered.

Model Summary
Model   R      R Square   Adjusted R Square   Std. Error of the Estimate   Durbin-Watson
1       .233   .054       -.097               31.77139                     2.078
a. Predictors: (Constant), VIshvesh, Suja, Sushant, Rakesh
b. Dependent Variable: Vishal

ANOVA (Model 1)
             Sum of Squares   df   Mean Square   F      Sig.
Regression   1444.767         4    361.192       .358   .836
Residual     25235.533        25   1009.421
Total        26680.300        29
a. Dependent Variable: Vishal
b. Predictors: (Constant), VIshvesh, Suja, Sushant, Rakesh

Coefficients (Model 1)
             B        Std. Error   Beta    t        Sig.   Tolerance   VIF
(Constant)   74.076   26.696               2.775    .010
Rakesh       -.236    .226         -.212   -1.041   .308   .911        1.097
Sushant      -.036    .200         -.036   -.179    .859   .965        1.036
Suja         -.085    .149         -.115   -.572    .572   .932        1.073
VIshvesh     .040     .253         .031    .157     .877   .980        1.020
a. Dependent Variable: Vishal

Coefficient Correlations (Model 1)
Correlations   VIshvesh     Suja    Sushant      Rakesh
  VIshvesh     1.000        .012    .002         .138
  Suja         .012         1.000   -.140        .237
  Sushant      .002         -.140   1.000        -.151
  Rakesh       .138         .237    -.151        1.000
Covariances    VIshvesh     Suja    Sushant      Rakesh
  VIshvesh     .064         .000    9.495E-005   .008
  Suja         .000         .022    -.004        .008
  Sushant      9.495E-005   -.004   .040         -.007
  Rakesh       .008         .008    -.007        .051
a. Dependent Variable: Vishal

Collinearity Diagnostics (Model 1)
                                             Variance Proportions
Dimension   Eigenvalue   Condition Index   (Constant)   Rakesh   Sushant   Suja   VIshvesh
1           4.350        1.000             .00          .01      .00       .01    .01
2           .303         3.786             .00          .48      .00       .05    .23
3           .214         4.507             .00          .07      .01       .32    .55
4           .097         6.701             .01          .29      .57       .42    .04
5           .036         11.001            .99          .16      .41       .20    .16
a. Dependent Variable: Vishal

Residuals Statistics
                                    Minimum     Maximum    Mean      Std. Deviation   N
Predicted Value                     38.6706     65.0253    54.3000   7.05830          30
Std. Predicted Value                -2.214      1.520      .000      1.000            30
Standard Error of Predicted Value   7.096       18.215     12.773    2.294            30
Adjusted Predicted Value            30.4443     70.3962    54.1595   8.98856          30
Residual                            -45.11292   62.71774   .00000    29.49900         30
Std. Residual                       -1.420      1.974      .000      .928             30
Stud. Residual                      -1.487      2.195      .002      1.013            30
Deleted Residual                    -49.83715   77.55574   .14052    35.22242         30
Stud. Deleted Residual              -1.526      2.394      .010      1.039            30
Mahal. Distance                     .480        8.565      3.867     1.689            30
Cook's Distance                     .001        .228       .039      .049             30
Centered Leverage Value             .017        .295       .133      .058             30
a. Dependent Variable: Vishal

The equation of the regression line is Vishal = 74.076 - 0.236 * Rakesh - 0.036 * Sushant - 0.085 * Suja + 0.040 * VIshvesh

Inferences: None of the coefficients is significant: the p-values are Rakesh 30.8%, Sushant 85.9%, Suja 57.2% and VIshvesh 87.7%, all greater than alpha = 5%. The same conclusion holds for the standardised coefficients, since all the variables are measured in the same unit. The model has an R Square of 5.4%, which is very low. None of the VIF values is greater than 5, so there is no collinearity among the independent variables.

Validation of assumptions:
1. The model is linear.
2. The Xi's are deterministic.
3. Ui is a random variable; this is quite evident from the residual and scatter plots.
4. E(Ui) = 0: the standardised residuals are distributed evenly on either side of zero, which makes this assumption reasonably fair.
5. The Ui's are normally distributed: the P-P plot and histogram plotted for the error terms show a distribution close to normal. Most points in the P-P plot lie near the normal line, and the histogram is approximately bell-shaped.
6. The variances of the error terms are equal: the residual plots show a more or less uniform spread; otherwise they would have taken a particular shape (fanning out, converging, etc.).
7. There is no autocorrelation between the error terms: the Durbin-Watson statistic is 2.078, which is close to 2, so autocorrelation is practically non-existent.
8. Cov(Ui, Uj) = 0: since autocorrelation is absent, the covariance (and correlation) between error terms is zero.
9. The Xi's take distinctly many values, as can be seen from the multiple values of each independent variable in the sample data.
10. The number of sample points must be greater than the number of independent variables: here there are 4 independent variables and 30 observations. Ideally there should be at least 5 observations per independent variable, and the sample of 30 satisfies this.
11. No multicollinearity: all VIF values (Rakesh 1.097, Sushant 1.036, Suja 1.073, VIshvesh 1.020) are below 5, so there is no multicollinearity among the independent variables (see the sketch below for how this can be checked).
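For reference, a minimal Python sketch of this multiple regression and the VIF check, using statsmodels and assuming the 30 observations are loaded into a pandas DataFrame named df with columns Vishal, Rakesh, Sushant, Suja and VIshvesh (the data themselves are not reproduced here).

import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

def fit_and_check(df: pd.DataFrame):
    # Regress Vishal on the four predictors, with an intercept.
    X = sm.add_constant(df[['Rakesh', 'Sushant', 'Suja', 'VIshvesh']])
    model = sm.OLS(df['Vishal'], X).fit()

    # VIF for each predictor (the constant in column 0 is skipped);
    # values below 5 indicate no serious multicollinearity.
    vifs = {col: variance_inflation_factor(X.values, i)
            for i, col in enumerate(X.columns) if col != 'const'}
    return model, vifs

# Example usage, once df holds the 30 observations:
# model, vifs = fit_and_check(df)
# print(model.summary()); print(vifs)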
