
Table 2

Based on the regression model, the positive slope of 0.301 indicates that variable x and variable y are positively related, so y (stock prices) increases over the period under study. This is well illustrated in the graph, which rises from left to right.
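As a rough illustration of where the slope comes from, the sketch below fits a simple linear regression with SciPy on hypothetical period/price data (the numbers are placeholders, not the actual series behind Table 2):

```python
# Minimal sketch: fit y = intercept + slope * x and read off the slope.
# The arrays are hypothetical placeholder data.
from scipy import stats

periods = [1, 2, 3, 4, 5, 6, 7, 8]                          # independent variable x (time period)
prices = [10.1, 10.4, 10.7, 11.0, 11.3, 11.6, 11.9, 12.2]   # dependent variable y (stock price)

result = stats.linregress(periods, prices)
print(f"slope     = {result.slope:.3f}")     # positive slope: y rises as x increases
print(f"intercept = {result.intercept:.3f}")
```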
The Multiple R (r) is the correlation coefficient; it shows the strength of the linear relationship between the two variables. The result of 1 (100%) is interpreted as a very strong correlation, since a value of 1 means a perfect positive relationship.
The R Squared (r²) is the coefficient of determination; it gives the proportion of variance in the dependent variable that can be explained by the independent variable, and so indicates how well the data fit the regression model. Here it is 1 (100%), meaning all of the variation in y is explained by x, i.e. a perfect fit between the variables.
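For a single-predictor regression like this one, Multiple R is simply the Pearson correlation between x and y, and R Squared is its square. The sketch below illustrates that relationship with NumPy on hypothetical data:

```python
# Sketch: Multiple R as the Pearson correlation, R Squared as its square.
# The data are hypothetical, not the Table 2 values.
import numpy as np

x = np.array([1, 2, 3, 4, 5, 6], dtype=float)
y = np.array([2.0, 4.1, 5.9, 8.2, 9.8, 12.1])

r = np.corrcoef(x, y)[0, 1]   # Multiple R (correlation coefficient)
r_squared = r ** 2            # coefficient of determination

print(f"Multiple R = {r:.4f}")
print(f"R Squared  = {r_squared:.4f}")   # share of the variance in y explained by x
```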
The Adjusted R-squared is a modified version of R-squared that accounts for the number of predictors in the model. If R-squared is close to zero, the adjusted value can be slightly negative. Here the result is 1. Adjusted R-squared increases only when a new term improves the model more than would be expected by chance, and it decreases when a predictor improves the model by less than expected by chance.
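The adjustment can be written explicitly: with n observations and k predictors, Adjusted R-squared = 1 - (1 - R²)(n - 1)/(n - k - 1). A minimal sketch, with illustrative values rather than the Table 2 figures:

```python
# Sketch of the Adjusted R-squared formula; the inputs are illustrative only.
def adjusted_r_squared(r_squared: float, n: int, k: int) -> float:
    """n = number of observations, k = number of predictors."""
    return 1 - (1 - r_squared) * (n - 1) / (n - k - 1)

# A near-zero R-squared with few observations can push the adjusted value
# slightly below zero, as noted above.
print(adjusted_r_squared(r_squared=0.02, n=12, k=1))   # about -0.078
```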
The Standard Error indicates how precisely the predicted y values match the actual y values of the overall population. Based on the regression model, the standard error is 2.29311E-15, an extremely small value that is effectively zero. When the standard error increases, the predictions are more spread out and any given prediction is more likely to misrepresent the true population value; when it decreases, the projection is more accurate relative to the true data.
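One common way to compute this quantity (the standard error of the regression, i.e. the typical distance between actual and predicted y) is from the residuals of the fitted line. The sketch below uses NumPy on hypothetical data:

```python
# Sketch: standard error of the regression from the residuals of a fitted line.
# Hypothetical data; not the series behind Table 2.
import numpy as np

x = np.array([1, 2, 3, 4, 5, 6], dtype=float)
y = np.array([2.0, 4.1, 5.9, 8.2, 9.8, 12.1])

slope, intercept = np.polyfit(x, y, 1)        # coefficients, highest degree first
residuals = y - (intercept + slope * x)
n, k = len(x), 1                              # observations, predictors
std_error = np.sqrt(np.sum(residuals ** 2) / (n - k - 1))
print(f"standard error of the regression = {std_error:.4f}")
```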
The F Value is the result of a test whose null hypothesis is that all of the regression coefficients equal zero, which would make the regression insignificant. The analysis gives an F value of 1.42146E+30, an extremely large value, indicating that the model is significant.
The Significance F is the probability of obtaining such an F value if the null hypothesis were true. The result of 2.7433E-118 is far below 0.05, so the null hypothesis is rejected in favour of the alternative hypothesis and the regression is significant.
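Both the F value and its Significance F can be read from a fitted model; the sketch below uses statsmodels on hypothetical data to show where the two numbers come from:

```python
# Sketch: F value and Significance F from an ordinary least squares fit.
# Hypothetical data; not the Table 2 series.
import numpy as np
import statsmodels.api as sm

x = np.array([1, 2, 3, 4, 5, 6, 7, 8], dtype=float)
y = np.array([10.2, 10.3, 10.8, 11.1, 11.2, 11.7, 11.8, 12.3])

model = sm.OLS(y, sm.add_constant(x)).fit()
print(f"F value        = {model.fvalue:.4f}")
print(f"Significance F = {model.f_pvalue:.4g}")   # reject the null hypothesis if below 0.05
```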
The t-statistic measures how many standard errors a coefficient is away from zero. Any t-value greater than +2 or less than -2 is generally acceptable; the higher the t-value, the greater the confidence in the coefficient as a predictor. The larger the absolute value of the t-value, the smaller the p-value and the greater the evidence against the null hypothesis. Here the intercept has a t-value of 1.18255E+16 with a very small p-value (2.9287E-126), and the X variable has a t-value of 1.19225E+15 with a very small p-value (2.7433E-118); both are therefore significant.

Lastly, the P-value is the probability that an observed difference could have occurred purely by random chance; the lower the p-value, the greater the statistical significance of the observed difference. In conclusion, the intercept's p-value of 2.9287E-126 and the X variable's p-value of 2.7433E-118 are both below 0.05, indicating that both are significant.
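The per-coefficient t statistics and p-values discussed above can likewise be read from a fitted model. A minimal sketch with statsmodels and hypothetical data:

```python
# Sketch: t statistics (coefficient / standard error) and p-values per coefficient.
# Hypothetical data; not the Table 2 series.
import numpy as np
import statsmodels.api as sm

x = np.array([1, 2, 3, 4, 5, 6, 7, 8], dtype=float)
y = np.array([10.2, 10.3, 10.8, 11.1, 11.2, 11.7, 11.8, 12.3])

model = sm.OLS(y, sm.add_constant(x)).fit()

print(model.params / model.bse)   # t = coefficient / its standard error
print(model.tvalues)              # same values, as computed by statsmodels
print(model.pvalues)              # compare each against 0.05
```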

Table 3

Based on the regression model, the positive slope of 0.177 indicates that variable x and variable y are positively related, so y (stock prices) increases over the period under study. This is illustrated in the graph, which rises from left to right.
The Multiple R (r) is the correlation coefficient; it shows the strength of the linear relationship between the two variables. The result of 18.76% is interpreted as a weak or negligible correlation, meaning the projected data and the actual data have a low degree of correlation with each other.
The R Squared (r²) is the coefficient of determination; it gives the proportion of variance in the dependent variable that can be explained by the independent variable, and so indicates how well the data fit the regression model. Here only 3.52% of the variation in y is explained by x, which signifies a weak linear correlation between the two variables.
The Adjusted R-squared is a modified version of R-squared that accounts for the number of predictors in the model. If R-squared is close to zero, the adjusted value can be slightly negative, which is the case here: the result is -0.08539604. Adjusted R-squared increases only when a new term improves the model more than would be expected by chance, and it decreases when a predictor improves the model by less than expected by chance.
The Standard Error indicates how precisely the predicted y values match the actual y values of the overall population. Based on the regression model, the standard error is 0.896614001, a relatively high value. When the standard error increases, the predictions are more spread out and any given prediction is more likely to misrepresent the true population value; a smaller standard error is therefore more desirable.
The F Value is the result of a test whose null hypothesis is that all of the regression coefficients equal zero, which would make the regression insignificant. The analysis gives an F value of 0.291904219, a low result; a large F value is more desirable, as it indicates that the model is significant.
The Significance F is the probability of obtaining such an F value if the null hypothesis were true. The result of 0.6037 is higher than 0.05, so the null hypothesis cannot be rejected and the regression is not significant.
The t-statistic measures how many standard errors a coefficient is away from zero. Any t-value greater than +2 or less than -2 is generally acceptable; the higher the t-value, the greater the confidence in the coefficient as a predictor, while low t-values indicate low reliability of the coefficient's predictive power. The larger the absolute value of the t-value, the smaller the p-value and the greater the evidence against the null hypothesis. Here the intercept (t = 1.2886) has a p-value of 0.0444, which is below 0.05 and therefore significant, while the X variable (t = -0.5790) has a p-value of 0.6037, which is above 0.05 and therefore not significant.

Lastly, the P-value is the probability that an observed difference could have occurred purely by random chance; the lower the p-value, the greater the statistical significance of the observed difference. In conclusion, the intercept's p-value of 0.0444 is below 0.05, indicating that it is significant, while the X variable's p-value of 0.6037 is above 0.05, indicating that it is not significant.
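All of the quantities interpreted above for Tables 2 and 3 (Multiple R, R Square, Adjusted R Square, Standard Error, F, Significance F, coefficients, t statistics, and p-values) appear together in a standard regression summary. As a sketch, a single statsmodels call produces the same kind of output table, here on hypothetical data:

```python
# Sketch: one regression summary containing all of the statistics discussed above.
# Hypothetical data standing in for the Table 2 / Table 3 series.
import numpy as np
import statsmodels.api as sm

x = np.array([1, 2, 3, 4, 5, 6, 7, 8], dtype=float)
y = np.array([10.2, 10.3, 10.8, 11.1, 11.2, 11.7, 11.8, 12.3])

model = sm.OLS(y, sm.add_constant(x)).fit()
print(model.summary())   # R-squared, Adj. R-squared, F-statistic, Prob (F-statistic), t, P>|t|, ...
```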

The t-statistic (or t-value) is a measure of the statistical significance of an independent variable X in explaining the dependent variable Y; it measures how many standard errors the coefficient is away from zero. Any t-value greater than +2 or less than -2 is acceptable: the higher the t-value, the greater the confidence we have in the coefficient as a predictor, while low t-values indicate low reliability of that coefficient's predictive power. A negative coefficient suggests that as the independent variable increases, the dependent variable tends to decrease.
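For contrast with the positive slopes in Tables 2 and 3, a short sketch of a negative coefficient, using hypothetical data in which y falls as x rises:

```python
# Sketch: a fitted slope below zero indicates that y tends to decrease as x increases.
# Hypothetical data.
from scipy import stats

x = [1, 2, 3, 4, 5, 6]
y = [12.0, 11.4, 10.9, 10.1, 9.6, 9.0]

print(stats.linregress(x, y).slope)   # negative value, roughly -0.6 for this data
```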
