E.g.: rupee cost of bags of produce, weights of shipments, shop numbers, etc.
Disparate use: Although 80 kg is twice as much as 40 kg, shop no. 80 is not twice as big as shop no. 40. So the appropriateness of a data analysis depends on the level of data measurement.
Measurement is the process of assigning numbers to objects in such a way that specific properties of the objects are faithfully represented by specific properties of the numbers. Measurement is used to capture some construct: for example, if research is needed on the construct of depression, some systematic measurement tool will likely be needed to assess depression.
In business research, measurement of variables is an indispensable requirement. The problem lies in defining what is to be measured, and how it is to be measured accurately and reliably. Concepts that are inherently abstract in nature (e.g. job satisfaction, employee morale, brand loyalty of consumers) are more difficult to measure than concepts that can be assigned numerical values (e.g. sales volume for employees X, Y and Z).
The values of nominal data are classifications or categories. These classes have no quantitative properties, so no comparison can be made in terms of one category being higher than another. E.g. responses to questions about marital status, coded as: Single = 1, Married = 2, Divorced = 3, Widowed = 4. Because the numbers are arbitrary, arithmetic operations don't make any sense (e.g. does Widowed ÷ 2 = Married?!). Nominal data are also called qualitative or categorical.
This level is higher than nominal: in addition to categorization, we can also rank or order objects. E.g. a college course rating system: poor = 1, fair = 2, good = 3, very good = 4, excellent = 5. While it is still not meaningful to do arithmetic on this data (e.g. does 2 × fair = very good?!), we can say things like: excellent > poor, or fair < very good. That is, order is maintained no matter what numeric values are assigned to each category. The difference between first and second is not necessarily equivalent to the difference between second and third, or between third and fourth.
The ordinal level does not assume that the intervals between numbers are equal. Example: finishing place in a race (first place, second place).
[Figure: finishing places in a race (1st to 4th) plotted against finishing times (1 to 8 hours), showing that equal steps in rank need not correspond to equal intervals in time.]
Interval data are always numerical; they are also referred to as quantitative or numerical. Distances between consecutive numbers have meaning, and arithmetic operations can be performed on interval data, so it is meaningful to talk about 2 × Height, or Price + Rs. 100, and so on.
These distances are also equal, i.e. interval data have equal intervals. Example: Fahrenheit temperature. The zero point is a matter of convenience, not a natural or fixed zero point; it is just another point on the scale and does not mean the absence of the phenomenon. E.g. zero degrees F is not the lowest temperature possible.
E.g. heights, weights, prices, volume, etc. 80 kg is twice as much as 40 kg, so the ratio 80:40 is meaningful.
Multiple Regression
Descriptive Analysis, Chi-Square, ANOVA, MANOVA, Canonical Correlation
A scale is a type of composite measure composed of several items that have a logical or empirical structure among them. A scale takes advantage of differences in intensity among the indicators of a variable. For example, when a question has the response choices of "always," "sometimes," "rarely," and "never," this is a scale because the answer choices are rank-ordered and differ in intensity. Another example would be "strongly agree," "agree," "neither agree nor disagree," "disagree," "strongly disagree."
A scale is basically a continuous spectrum or series of categories, and has been defined as any series of items that are arranged progressively according to value or magnitude, into which an item can be placed according to its quantification.
Rating scales have several response categories and are used to elicit responses with regard to the object, event, or person studied. Ranking scales make comparisons between or among objects, events, or persons and elicit the preferred choices and ranking among them.
Dichotomous Scale, Category Scale, Semantic Differential Scale, Numerical Scale, Itemized Rating Scale, Likert/Summated Rating Scale, Fixed or Constant Sum Rating Scale, Stapel Scale, Graphic Rating Scale, Consensus Scale (Thurstone Scale), Magnitude Scaling
1. Dichotomous Scale
1. Used to elicit a Yes or No answer 2. Nominal Scale used
2. Category Scale
1. Uses multiple items to elicit response 2. Nominal Scale used
Where in India do you reside? o Delhi o Mumbai o Kolkata o Chennai
3. Likert Scale
This is an interval scale: the differences in responses between any two points on the scale remain the same. It is designed to examine how strongly subjects agree or disagree with statements on a 5-point scale, as follows:
5. Summative Ratings
A number of items collectively measure one construct (e.g. job satisfaction); or a number of items collectively measure a dimension of a construct, and a collection of dimensions measures the construct (e.g. self-esteem).
7. Magnitude Scaling
Magnitude scaling attempts to measure constructs along a numerical, ratio-level scale. The respondent is given an item with a pre-assigned numerical value attached to it, to establish a norm, and is then asked to rate other items with numerical values as a proportion of the norm. It is very powerful if reliability is established.
9. Numerical Scale
It is similar to the semantic differential scale, with the difference that numbers on a 5-point or 7-point scale are provided, as illustrated in the following example: How pleased are you with your new job? Extremely pleased 5 4 3 2 1 Extremely displeased
Ranking scales are used to tap preferences between two or among more objects or items (ordinal in nature). However, such ranking may not give definitive clues to some of the answers sought. Example: there are 4 product lines, and the manager seeks information that would help decide which product line should get the most attention. Assume: 35% of respondents choose the 1st product, 25% choose the 2nd, 20% choose the 3rd, and 20% choose the 4th (total: 100%).
The manager cannot conclude that the first product is the most preferred. Why? Because 65% of respondents did not choose that product. We have to use alternative methods, such as Forced Choice, Paired Comparisons, and the Comparative Scale.
Paired Comparison:
Paired comparison scales ask a respondent to pick one of two objects from a set based upon some stated criteria.
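As a rough sketch of how paired-comparison responses can be aggregated (the product names and vote counts below are invented for illustration), each product can be scored by its number of pairwise wins:

```python
# Hypothetical tallies for four product lines (names and counts invented):
# for each pair shown, how many of 100 respondents preferred each member.
pair_votes = {
    ("A", "B"): (60, 40), ("A", "C"): (55, 45), ("A", "D"): (30, 70),
    ("B", "C"): (35, 65), ("B", "D"): (20, 80), ("C", "D"): (25, 75),
}

# Score each product by its number of pairwise wins.
wins = {p: 0 for p in "ABCD"}
for (left, right), (left_n, right_n) in pair_votes.items():
    wins[left if left_n > right_n else right] += 1

# Most preferred product first.
ranking = sorted(wins, key=wins.get, reverse=True)
```

Note that, unlike a single "pick your favorite" question, every product is compared against every other, so the winner here is preferred in a majority of its head-to-head comparisons.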
Comparative Scale: it provides a benchmark or a point of reference to assess attitude towards the subject under study.
We need to assess the goodness of the measures developed. That is, we need to be reasonably sure that the instruments we use in our research do indeed measure the variables they are supposed to, and that they measure them accurately.
Reliability deals with the consistency of the instrument. A reliable test is one that yields consistent scores when a person takes two alternate forms of the test, or when an individual takes the same test on two or more different occasions. Reliability of a measure indicates the extent to which it is without bias, and hence ensures consistent measurement across time (stability) and across the various items in the instrument (internal consistency). Ordinal measures always yield the same order; interval measurements always yield the same order and the same distance between the measured items.
Stability: ability of a measure to remain the same over time, despite uncontrollable testing conditions or the state of the respondents themselves.
Test-Retest Reliability: the reliability coefficient obtained with a repetition of the same measure on a second occasion. Parallel-Form Reliability: responses on two comparable sets of measures tapping the same construct are highly correlated.
When a questionnaire containing some items that are supposed to measure a concept is administered to a set of respondents now, and again to the same respondents, say several weeks to 6 months later, then the correlation between the scores obtained is called the test-retest coefficient. The higher the coefficient, the better the test-retest reliability, and consequently, the stability of the measure across time.
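As a minimal sketch, the test-retest coefficient is just the Pearson correlation between the two administrations (the scores below are invented; NumPy is assumed available):

```python
import numpy as np

# Hypothetical 5-point-scale scores for 8 respondents, collected twice
# several weeks apart (illustrative data only).
time1 = np.array([4, 5, 3, 2, 4, 5, 1, 3])
time2 = np.array([4, 4, 3, 2, 5, 5, 2, 3])

# Test-retest coefficient = Pearson correlation between administrations.
r = np.corrcoef(time1, time2)[0, 1]
print(round(r, 2))
```

A coefficient near 1 indicates the measure is stable across time; a low coefficient suggests the responses drifted between administrations.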
When responses on two comparable sets of measures tapping the same construct are highly correlated, we have parallel-form reliability. Both forms have similar items and the same response format, the only changes being the wording and the order or sequence of the questions. What we try to establish in the parallel-form is the error variability resulting from wording and ordering of the questions. If two such comparable forms are highly correlated (say .8 and above), we may be fairly certain that the measures are reasonably reliable, with minimal error variance caused by wording, ordering, or other factors.
Internal consistency is consistency in the type of result a test yields across time, space, and participants. The results need not be perfectly similar, but very close to similar. It is indicative of the homogeneity of the items in the measure that tap the construct.
Split-Half Reliability: randomly divide the items into 2 subsets and examine the consistency in total scores across the 2 subsets (any drawbacks?). It involves scoring two halves of a test separately for each subject and calculating the correlation coefficient between the two scores. Split-half reliability reflects the correlation between the two halves of an instrument.
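A small sketch of split-half reliability (item scores are invented; odd- vs even-numbered items serve as the two halves; NumPy assumed). The Spearman-Brown step-up, a standard correction not mentioned above, is included because the raw half-test correlation underestimates full-length reliability:

```python
import numpy as np

# Hypothetical 6-item test, scored 1-5, for 6 respondents (rows).
scores = np.array([
    [4, 5, 4, 4, 5, 4],
    [2, 1, 2, 2, 1, 2],
    [3, 3, 4, 3, 3, 3],
    [5, 5, 5, 4, 5, 5],
    [1, 2, 1, 2, 2, 1],
    [4, 4, 3, 4, 4, 4],
])

# Split items into two halves (odd vs even positions) and total each half.
half_a = scores[:, ::2].sum(axis=1)
half_b = scores[:, 1::2].sum(axis=1)

# Correlation between the two half-scores...
r_half = np.corrcoef(half_a, half_b)[0, 1]
# ...stepped up with the Spearman-Brown formula to estimate
# the reliability of the full-length test.
r_full = 2 * r_half / (1 + r_half)
```

One drawback the slide hints at: the result depends on which split is chosen; different random splits give different coefficients, which is one motivation for Cronbach's alpha.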
INTERNAL CONSISTENCY RELIABILITY is relevant for measures that consist of more than one item (e.g., total scores on scales, or when several behavioral observations are used to obtain a single score). Internal consistency refers to inter-item reliability, and assesses the degree of consistency among the items in a scale, or among the different observations used to derive a score. We want to be sure that all the items (or observations) are measuring the same construct. Cronbach's alpha is the most popular measure.
Cronbach's alpha is an index of reliability associated with the variation accounted for by the true score of the underlying construct. It allows a researcher to measure the internal consistency of scale items, based on the average inter-item correlation. It indicates the extent to which the items in your questionnaire are related to each other, and whether a scale is one-dimensional or multidimensional.
Cronbach's alpha ranges between 0 and 1. The more items, generally the higher the internal reliability will be. The higher the score, the more reliable the generated scale is. General guide: a score of .70 or greater is generally considered acceptable.
.6 = marginal reliability; .7 = good; .8 = very good; .9 = excellent; >.95 = too high (items are too inter-related, and therefore some are redundant).
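A minimal sketch of computing Cronbach's alpha from an item-score matrix, using the standard formula α = (k/(k−1))·(1 − Σ item variances / variance of total scores). The data below is invented and deliberately consistent; NumPy is assumed available:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x n_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical, deliberately consistent 4-item scale for 5 respondents.
scores = np.array([
    [5, 4, 5, 4],
    [2, 2, 1, 2],
    [3, 3, 3, 3],
    [4, 4, 5, 4],
    [1, 2, 1, 1],
])
alpha = cronbach_alpha(scores)
```

Because the respondents who score high on one item score high on the others, the item variances are small relative to the total-score variance, and alpha comes out near the top of the "excellent" band above.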
Item-total Statistics (all 10 items)

Item      Alpha if Item Deleted
MATHS1    .8629
MATHS2    .8661
MATHS3    .9021
MATHS4    .8589
MATHS5    .8622
MATHS6    .8587
MATHS7    .8662
MATHS8    .8637
MATHS9    .8623
MATHS10   .8638

Alpha = .8790

Item-total Statistics (MATHS3 deleted)

Item      Scale Mean if    Scale Variance if   Corrected Item-
          Item Deleted     Item Deleted        Total Correlation
MATHS1    22.2694          24.0699             .6821
MATHS2    22.0280          25.2710             .6078
MATHS4    21.9727          24.4372             .7365
MATHS5    22.4605          24.2235             .6801
MATHS6    22.0753          23.5423             .7255
MATHS7    22.0849          25.0777             .6166
MATHS8    22.8642          24.3449             .6562
MATHS9    22.0280          24.5812             .7015
MATHS10   22.4590          24.3859             .6524

Alpha = .9024
Quality of items: concise statements, homogeneous wording (some sort of uniformity). Adequate sampling of the content domain; comprehensiveness of items. Longer assessments are less distorted by chance factors. Develop a scoring plan (especially for subjective items: rubrics). Ensure VALIDITY.
Validity has been defined as referring to the appropriateness, correctness, meaningfulness, and usefulness of the specific inferences researchers make based on the data they collect. Validation is the process of collecting and analyzing evidence to support such inferences. It is the most important idea to consider when preparing or selecting an instrument. Validity tests show how well an instrument that is developed measures the particular concept it is intended to measure. Validity is concerned with whether we measure the right concept, e.g. is absenteeism from work a valid measure of job satisfaction, or are there other influences, like a flu epidemic, keeping employees away from work?
Unlike reliability, validity is not absolute. Validity is the degree to which variability (individual differences) in participants' scores on a particular measure reflects individual differences in the characteristic or construct we want to measure. It depends on the PURPOSE: validity is specific to a particular purpose! E.g. a ruler may be a valid measuring device for length, but isn't very valid for measuring volume. Validity means measuring what it is supposed to measure; it must be inferred from evidence and cannot be directly measured.
Content validity ensures that the measure includes an adequate and representative set of items that tap the concept. The more the scale items represent the domain of the concept being measured, the greater the content validity. In other words, content validity is a function of how well the dimensions and elements of a concept have been delineated. It is a major concern for achievement tests (where content is emphasized).
How closely does the content of the questions in the test relate to the content of the curriculum? Can you test students on things they have not been taught?
Face validity refers to the extent to which a measure APPEARS to measure what it is supposed to measure. It is not statistical; it involves the judgment of the researcher (and the participants). A measure has face validity if people think it does. Just because a measure has face validity does not ensure that it is a valid measure (and measures lacking face validity can be valid). Does it appear to measure what it is supposed to measure? Example: let's say you are interested in measuring "propensity towards violence and aggression." By simply looking at the following items, state which ones qualify to measure the variable of interest:
Have you been arrested? Have you been involved in physical fighting? Do you get angry easily? Do you sleep with your socks on? Is it hard to control your anger? Do you enjoy playing sports?
Criterion-related validity is the degree to which the predictor is adequate in capturing the relevant aspects of the criterion; it uses correlation analysis. Criterion-related validity is established when the measure differentiates individuals on a criterion it is expected to predict. This can be done by establishing two types of validity: concurrent validity and predictive validity.
Concurrent criterion validity: how well does performance on a test estimate current performance on some valued measure (criterion)? (E.g. a test of dictionary skills can estimate students' current skills in the actual use of a dictionary, as assessed by observation.) The measure and the criterion are assessed at the same time. Predictive criterion validity: how well does performance on a test predict future performance on some valued measure (criterion)? (E.g. a reading readiness test might be used to predict students' achievement in reading.) The elapsed time between the administration of the measure to be validated and the criterion is relatively long (e.g., months or years).
High school seniors who score high on the CBSE Class 12th exams are better prepared for college than low scorers (concurrent validity). Probably of greater interest to college admissions administrators, CBSE Class 12th scores predict academic performance three years later (predictive validity).
Construct validity testifies to how well the results obtained from the use of the measure fit the theories around which the test is designed. It measures what accounts for the variance and attempts to identify the underlying constructs. It is assessed through convergent and discriminant validity.
Convergent validity is established when the scores obtained with two different instruments measuring the same concept are highly correlated. Discriminant validity is established when, based on theory, two variables are predicted to be uncorrelated, and the scores obtained by measuring them are indeed empirically found to be so.
To have construct validity, a measure should both: Correlate with other measures that it should be related to (convergent validity) And, not correlate with measures that it should not correlate with (discriminant validity) Construct Validity: Techniques used
Correlation of the proposed test with other existing tests; factor analysis; multitrait-multimethod analysis. Convergent validity calls for high correlation between different measures of the same construct. Discriminant validity calls for low correlation between sub-scales within a construct.
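A toy sketch of checking convergent and discriminant validity via correlations (all scores below are invented; NumPy assumed). Two instruments for the same construct should correlate highly, while a theoretically unrelated measure should not:

```python
import numpy as np

# Hypothetical scores for 10 respondents on three measures:
# two different instruments tapping the same construct (job satisfaction),
# and one measure of a construct that theory says is unrelated.
sat_scale_a = np.array([4, 2, 5, 3, 4, 1, 5, 3, 2, 4])
sat_scale_b = np.array([5, 2, 4, 3, 4, 2, 5, 3, 1, 4])
unrelated   = np.array([9, 7, 8, 10, 6, 9, 8, 7, 10, 8])

# Convergent validity: the two satisfaction measures should correlate highly.
r_convergent = np.corrcoef(sat_scale_a, sat_scale_b)[0, 1]

# Discriminant validity: satisfaction should show near-zero correlation
# with the theoretically unrelated measure.
r_discriminant = np.corrcoef(sat_scale_a, unrelated)[0, 1]
```

High `r_convergent` together with `r_discriminant` near zero is the correlation pattern construct validity requires; in practice this is examined more formally in a multitrait-multimethod matrix.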
Does the test measure the human CHARACTERISTIC(s) it is supposed to? Examples of constructs or human characteristics:
Mathematical reasoning Verbal reasoning Musical ability Spatial ability Mechanical aptitude Motivation
Each construct is broken down into its component parts E.g. motivation can be broken down to:
Interest Attention span Hours spent Assignments undertaken and submitted, etc. All of these sub-constructs put together measure motivation
Unclear directions; difficult reading vocabulary and sentence structure; ambiguity in statements; inappropriate level of difficulty; poorly constructed test items; test items inappropriate for the outcomes being measured; tests that are too short; improper arrangement of items (complex to easy?); identifiable patterns of answers; administration and scoring; nature of the criterion.
Reliability vs Validity
Performance-based assessment forms are high in both validity and reliability. (true/false)
A test item is said to be unreliable when most students answer the item wrongly. (true/false)
When a test contains items that do not represent the content covered during instruction, it is known as an unreliable test. (true/false)
Test items that do not successfully measure the intended learning outcomes (objectives) are invalid items. (true/false)
Assessment that does not represent student learning well enough is definitely invalid and unreliable. (true/false)
A valid test can sometimes be unreliable. (true/false)
If a test is valid, it is reliable! (by-product)