HARSHAD BAJPAI
Other statistics:
Standard error of the estimate (the residual error) is the best single measure of validity, with the criterion variable plotted on the Y axis.
Correlation
Correlation is a statistical tool for studying the relationship between two variables, and correlation analysis comprises the methods and techniques used to study and measure the extent of that relationship. Two variables are said to be correlated if a change in one variable results in a change in the other.
Types of Correlation
There are two important classifications of correlation: (1) positive and negative correlation, and (2) linear and non-linear correlation.
Positive correlation
Correlation between two variables is said to be positive or direct if the variables deviate in the same direction, that is, if an increase (or decrease) in the values of one variable results, on average, in a corresponding increase (or decrease) in the values of the other variable.
Examples
Some examples of series of positive correlation are: (i) Heights and weights; (ii) Household income and expenditure; (iii) Price and supply of commodities; (iv) Amount of rainfall and yield of crops.
Negative correlation
Correlation between two variables is said to be negative or inverse if the variables deviate in opposite directions, that is, if an increase (or decrease) in the values of one variable results, on average, in a corresponding decrease (or increase) in the values of the other variable.
Examples
Some examples of series of negative correlation are: (i) volume and pressure of a perfect gas; (ii) current and resistance (keeping the voltage constant); (iii) price and demand of goods.
Non-linear correlation
The relationship between two variables is said to be non-linear if, corresponding to a unit change in one variable, the other variable changes at a fluctuating rather than a constant rate. In such cases, plotting the data on a graph sheet does not yield a straight line. For example, the relation may have the form y = a + bx + cx², or a more general polynomial.
The Coefficient of Correlation
One of the most widely used statistics is the coefficient of correlation r, which measures the degree of association between the values of two related variables in a data set. It takes values from +1 to −1. If two sets of data have r = +1, they are said to be perfectly positively correlated; if r = −1, they are perfectly negatively correlated; and if r = 0, they are uncorrelated.
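The definition of r above can be sketched directly in Python. The height/weight figures below are purely illustrative, echoing the positive-correlation example given earlier:

```python
import math

def pearson_r(xs, ys):
    """Pearson coefficient of correlation between two equal-length sequences."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # covariance term (numerator) and the two standard-deviation terms
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = math.sqrt(sum((x - mean_x) ** 2 for x in xs))
    sd_y = math.sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sd_x * sd_y)

heights = [150, 160, 165, 170, 180]   # illustrative data (cm)
weights = [50, 56, 61, 66, 75]        # illustrative data (kg)
r = pearson_r(heights, weights)       # close to +1: strong positive correlation
```

Perfectly linear data gives r = +1 (increasing) or r = −1 (decreasing), matching the limits stated above.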
P-value
In statistical significance testing, the p-value is the probability of obtaining a test statistic at least as extreme as the one actually observed, assuming that the null hypothesis is true. In this context, a value a is considered more "extreme" than b if a is less likely to occur under the null. One "rejects the null hypothesis" when the p-value is less than the significance level α (alpha), which is often 0.05 or 0.01. When the null hypothesis is rejected, the result is said to be statistically significant.
Example: Null hypothesis H0: μ ≤ 24; Alternative hypothesis Ha: μ > 24. Other combinations are formed similarly.
Errors
Type I error: rejecting H0 when H0 is true.
Type II error: accepting H0 when Ha is true.
Level of significance
Level of significance is the probability of making a Type I error. It is denoted by the symbol α. The person doing the hypothesis testing specifies the value of α.
Value of alpha
If the cost of making a Type I error is high, lower values of α are preferred; if the cost is low, higher values may be used. For example, for critical components in automobiles, α should be kept low to ensure such errors are not entertained.
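The claim that α is the Type I error probability can be checked by simulation: repeatedly draw samples when H0 is actually true and count how often it is (wrongly) rejected. The μ0, σ, and n values are the same illustrative numbers used earlier; the rejection rate should come out close to α:

```python
import random
from statistics import NormalDist

random.seed(42)
alpha = 0.05
mu0, sigma, n = 24, 3, 36
z_crit = NormalDist().inv_cdf(1 - alpha)   # one-tailed critical value, about 1.645

trials, rejections = 20000, 0
for _ in range(trials):
    # draw a sample while H0 is actually true (true mean equals mu0)
    sample = [random.gauss(mu0, sigma) for _ in range(n)]
    z = (sum(sample) / n - mu0) / (sigma / n ** 0.5)
    if z > z_crit:            # rejecting a true H0 is a Type I error
        rejections += 1

type1_rate = rejections / trials   # should be close to alpha (0.05)
```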
Standard error
Standard error is the standard deviation of the sampling distribution of a statistic. For the sample mean, SE = σ/√n. The SE of the mean (X̄) signifies how much the mean varies from sample to sample.
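The formula SE = σ/√n is straightforward to express in code; the σ = 15 and n = 100 figures below are illustrative:

```python
import math

def standard_error(sigma, n):
    """Standard error of the sample mean: SE = sigma / sqrt(n)."""
    return sigma / math.sqrt(n)

se = standard_error(sigma=15, n=100)   # 15 / 10 = 1.5
```

Note that quadrupling the sample size only halves the standard error, since SE shrinks with √n rather than n.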
Criterion
The p-value is a probability, computed from the test statistic (e.g. z), that provides the acceptance/rejection criterion.
Two approaches
1) p-value approach 2) critical value approach
Criteria
Reject H0 if p-value < α (the level of significance). The p-value is called the observed level of significance. The p-value calculation depends upon the type of the test.
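The two approaches can be compared side by side. The sketch below assumes an upper-tailed z-test with an illustrative observed statistic z = 2.1; both approaches always lead to the same decision:

```python
from statistics import NormalDist

alpha = 0.05
z = 2.1                                    # observed test statistic (illustrative)

# 1) p-value approach: reject H0 if the p-value is below alpha
p_value = 1 - NormalDist().cdf(z)          # upper-tailed test
reject_p = p_value < alpha

# 2) critical value approach: reject H0 if z exceeds the critical value
z_crit = NormalDist().inv_cdf(1 - alpha)   # about 1.645 for alpha = 0.05
reject_crit = z > z_crit

# the two criteria are equivalent restatements of the same test
assert reject_p == reject_crit
```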