- External teams can offer several alternative ways of looking at the problem because of their extensive problem-solving experience in various other organizational setups.
- The external teams might have more knowledge because of their periodic training programs,
which the teams within the organization may not have access to.
Disadvantages of external consultants/researchers
- The cost of hiring an external team is usually high.
- It takes them more time to understand the organization, and they are not readily accepted by employees.
- They charge additional fees for their assistance in the implementation and evaluation phases.
Knowledge of research greatly enhances the decision-making skills of the manager.
Ethics in business research refers to a code of conduct or expected societal norm of behavior
while conducting research.
6. Objectivity: the conclusions drawn through the interpretation of the results of data analysis should be objective. The more objective the interpretation, the more scientific the research.
7. Generalizability: refers to the scope of applicability of the research findings in one organizational setting to other settings. The wider the range of applicability of the solutions generated by research, the more useful the research is to the users.
8. Parsimony: simplicity in explaining the problems that occur, and in generating solutions for the problems, is always preferred to complex research frameworks that consider an unmanageable number of factors.
-Deductive reasoning: we start with a general theory and then apply this theory to a specific case.
Example: we know that all high performers are highly proficient in their jobs. If John is a high performer, we then conclude that he is highly proficient in his job.
Hypothesis testing is deductive in nature.
-Inductive reasoning is a process where we observe certain phenomena and on this basis arrive at
conclusions. In other words in induction we logically establish a general proposition based on
observed facts.
The hypothetico-deductive method provides a useful, systematic approach to solving basic and
managerial problems.
The seven steps of the Hypothetico-deductive method:
1. Identify a broad problem area
2. Define the problem statement
3. Develop hypotheses
a. A scientific hypothesis must meet two requirements:
i. It must be testable
ii. It must be falsifiable
4. Determine measures
5. Data collection
6. Data analysis
a. The data gathered are statistically analyzed to see whether the hypotheses that were generated are supported.
7. Interpretation of data
a. Decide whether the hypotheses are supported or not by interpreting the meaning of the results of the data analysis.
A problem does not necessarily mean that something is seriously wrong with a current
situation that needs to be rectified immediately. A problem could also indicate an interest in an
issue where finding the right answers might help to improve an existing situation.
Secondary data: data gathered through existing sources (the data already exist and do not have to be collected by the researcher).
Primary data: data gathered for research from the actual site of occurrence of events.
It is important for the researcher to be well acquainted with the background of the company
studied.
A problem statement is a clear, precise, and succinct statement of the specific issue that a
researcher wishes to investigate. It should be relevant, feasible and interesting.
A problem statement is relevant if it is meaningful from a managerial perspective, an academic perspective, or both.
A problem statement is feasible if you are able to answer the problem statement within the restrictions of the research project.
The research proposal drawn up by the investigator is the result of a planned, organized, and
careful effort, and basically contains the following:
1. The purpose of the study
2. The specific problem to be investigated
3. The scope of the study
4. The relevance of the study
5. The research design offering details on:
a. The sampling design
b. Data collection methods
c. Data analysis
6. Time frame of the study
7. The budget
8. Selected bibliography
The theoretical framework is the foundation on which the entire research project is based.
A hypothesis can be defined as a logically conjectured relationship between two or more variables expressed in the form of a testable statement. By convention in the social sciences, to call a relationship statistically significant we should be confident that 95 times out of 100 the observed relationship will hold true (a 5% significance level).
-If-then statements: if employees are healthier, then they will take sick leave less frequently.
-If terms such as positive, negative, more than and less than, and the like are used then these
hypotheses are directional, because the direction of the relationship between the variables is
postulated.
-Non-directional hypotheses are those that do postulate a relationship or difference, but offer no indication of the direction of these relationships or differences.
-The null hypothesis is a hypothesis set up to be rejected in order to support an alternative
hypothesis.
-The alternate hypothesis, which is the opposite of the null hypothesis, is a statement expressing
a relationship between two variables or indicating differences between groups.
-In deduction, a theoretical model is first developed, hypotheses are formulated, data are gathered, and the hypotheses are then tested. In induction, new hypotheses are based on data already collected and are then tested.
Data analysis:
1. Feel for data
2. Goodness of data
3. Hypotheses testing
The case study, which is an examination of studies done in other similar organizational
situations, is also a method of solving problems.
-An exploratory study is undertaken when not much is known about the situation at hand, or no
information is available on how similar problems or research issues have been solved in the past.
Exploratory studies are also necessary when some facts are known, but more information is
needed for developing a viable theoretical framework. It is important to note that doing a study
for the first time in a particular organization does not make the research exploratory in nature;
only when knowledge is scant and a deeper understanding is sought, does the study become
exploratory.
-A descriptive study is undertaken in order to ascertain and be able to describe the characteristics
of the variables of interest in a situation.
The goal of a descriptive study is to describe relevant aspects of the phenomenon of interest from
an individual or other perspective.
-Hypothesis testing is undertaken to explain the variance in the dependent variable or to predict
organizational outcomes.
-It is not difficult to see that in exploratory studies, the researcher is basically interested in
exploring the situational factors so as to get a grip on the characteristics of the phenomenon of
interest. Descriptive studies are undertaken when the characteristics or the phenomena to be
tapped in a situation are known to exist, and one wants to be able to describe them better by
offering a profile of the factors. Hypothesis testing offers an enhanced understanding of the
relationship that exists among variables.
The study in which the researcher wants to delineate the cause of one or more problems is
called a causal study. When the researcher is interested in delineating the important variables
associated with the problem, the study is called a correlational study.
The intention of the researcher conducting a causal study is to be able to state that variable X
causes variable Y.
-A correlation study is conducted in the natural environment of the organization with minimum
interference by the researcher with the normal flow of work.
-In studies conducted to establish cause-and-effect relationship, the researcher tries to
manipulate certain variables so as to study the effects of such manipulation on the dependent
variable of interest.
-Correlational studies done in organizations are called field studies. Studies conducted to establish cause-and-effect relationships using the same natural environment in which employees normally function are called field experiments.
Experiments done to establish a cause-and-effect relationship beyond the possibility of the least doubt require the creation of an artificial, contrived environment in which all the extraneous factors are strictly controlled; such a study is called a lab experiment.
In summary:
1. Field studies: where various factors are examined in the natural setting in which daily
activities go on as normal with minimal researcher interference.
2. Field experiments, where cause-and-effect relationships are studied with some amount of
researcher interference, but still in the natural setting where work continues in the
normal fashion.
3. Lab experiments, where the researcher explores cause-and-effect relationships, not only
exercising a high degree of control but also in an artificial and deliberately created
setting.
The unit of analysis refers to the level of aggregation of the data collected during the subsequent
data analysis stage.
-Two-person groups are called dyads (e.g., husband and wife). If we want to study buying behaviours of individuals, we have to collect data from, let's say, 60 individuals; if we want to study groups, we have to collect data from, say, 6 different groups.
-A study can be undertaken in which data are gathered just once, perhaps over a period of days or weeks or months, in order to answer a research question; such studies are called one-shot or cross-sectional studies.
Studies in which data on the dependent variable are gathered at two or more points in time to answer the research question are called longitudinal studies.
Longitudinal studies take more time and effort and cost more than cross-sectional studies.
-An interval scale lets us measure the distance between any two points on the scale. This helps us
to compute the means and the standard deviations of the responses on the variables. The interval
scale not only groups individuals according to certain categories and taps the order of these
groups; it also measures the magnitude of the differences in the preferences among the
individuals. (a thermometer is a good example of an interval-scaled instrument.)
-The ratio scale not only measures the magnitude of the differences between points on the scale
but also taps the proportions in the differences. It is the most powerful scale of the four scales
because it has a unique zero origin and subsumes all the properties of the other three scales
(the weighing balance is a good example of a ratio scale).
Thus: the nominal scale highlights the differences by classifying objects or persons into groups,
and provides the least amount of info on the variable. The ordinal scale provides some additional
info by rank-ordering the categories of the nominal scale. The interval scale, not only ranks, but
also provides us with info on the magnitude of the differences in the variable. The ratio scale
indicates not only the magnitude of the differences, but also the proportion.
There are 2 main categories of scales: the rating scale and the ranking scale.
-Rating scales have several response categories and are used to elicit responses with regard to the
object, event, or person studied.
-Ranking scales make comparisons between or among objects, events, or persons and elicit the
preferred choices and ranking among them.
The following rating scales are often used in organizational research:
-The dichotomous scale is used to elicit a Yes or No answer, as in the example below. Note that a
nominal scale is used to elicit the response.
Example: Do you own a car? Yes No
-Category scale uses multiple items to elicit a single response. This also uses the nominal scale.
Example: Where do you live?
__ Best
__ Eindhoven
__ Tilburg
-Likert scale (summated scale) is designed to examine how strongly subjects agree or disagree with statements on a 5-point scale with the following anchors (interval scale):
Strongly disagree (1)  Disagree (2)  Neither agree nor disagree (3)  Agree (4)  Strongly agree (5)
-Semantic differential scale is used to assess respondents' attitudes toward a particular brand, advertisement, object, or individual (interval scale). Example:
Beautiful ....... Ugly
Courageous ....... Timid
-Numerical scale is similar to the Semantic Differential Scale, with the difference that numbers
on a 5-point or 7-point scale are provided, with bipolar adjectives at both ends, as illustrated
below. (Interval scale)
Extremely pleased 7 6 5 4 3 2 1 Extremely displeased
-Itemized rating scale: a 5-point or 7-point scale with anchors, as needed, is provided for each item and the respondent states the appropriate number on the side of each item, or circles the relevant number against each item. (interval scale) Example:
1 = very unlikely
2 = unlikely
3 = neither unlikely nor likely
4 = likely
5 = very likely
When a neutral point is provided, it is a balanced rating scale, and when it is not, it is an unbalanced rating scale. (An increase from 5 to 7 or 9 points on a rating scale does not improve the reliability of the ratings.)
-Fixed or constant sum scale: the respondents are asked to distribute a given number of points across various items (ordinal scale). Example: in choosing a toilet soap, indicate the importance you attach to each of the following aspects by allotting points for each to total 100 in all:
Fragrance ____
Color ____
Shape ____
Size ____
Total 100
-Stapel scale: this scale simultaneously measures both the direction and intensity of the attitude toward the items under study (interval scale). Example: each item is rated on a scale running from +3 down to -3, with the characteristic being measured placed between the positive and negative values:
+3
+2
+1
(item under study)
-1
-2
-3
-Graphic rating scale: a graphical representation helps the respondents to indicate on this scale their answers to a particular question by placing a mark at the appropriate point on the line (ordinal scale). Example: a line anchored from very bad (1) at one end, through all right in the middle, up to excellent at the other end.
The faces scale, which depicts faces ranging from smiling to sad, is also a graphic rating scale.
-Consensus scale: a panel of judges selects certain items, which in its view measure the relevant
concept. The items are chosen particularly based on their pertinence or relevance to the concept.
One such consensus scale is the Thurstone Equal Appearing Interval Scale, where a concept is
measured by a complex process followed by a panel of judges.
-In multidimensional scaling, objects, people or both are visually scaled and a conjoint analysis
is performed. This provides a visual image of the relationship in space among the dimensions of
a construct.
-Ranking scales are used to tap preferences between two or among more objects or items (ordinal
in nature). Alternative methods used are:
The paired comparison scale is used when among a small number of objects, respondents are
asked to choose between two objects at a time. This helps to assess preferences. This is a good
method if the number of stimuli presented is small.
The forced choice enables respondents to rank objects relative to one another among the
alternatives provided.
Example: rank the following magazines that you would like to subscribe to in order of preference, assigning 1 to the most preferred choice and 5 to the least preferred.
Fortune ____
Playboy ____
Time ____
People ____
Prevention ____
-The comparative scale provides a benchmark or a point of reference to assess attitudes toward
the current object, event, or situation under study.
Example: how useful is it to invest in treasury bonds?
More useful 1 2 3 4 5 Less useful
Thus: rating scales are used to measure most behavioral concepts. Ranking scales are used to
make comparisons or rank the variables that have been tapped on a nominal scale.
Different cultures react differently to issues of scaling.
-Goodness of measures means that we need to be reasonably sure that the instruments we use in our research do indeed measure the variables they are supposed to measure, and that they measure them accurately.
-reliability is a test of how consistently a measuring instrument measures whatever concept it is
measuring. (concerned with stability and consistency of measurement)
- validity is a test of how well an instrument that is developed measures the particular concept it
is intended to measure. (concerned with whether we measure the right concept)
Several types of validity tests are used to test the goodness of measures.
- Content validity ensures that the measure includes an adequate and representative set of items
that tap the concept. The more the scale items represent the domain or universe of the concept
being measured, the greater the content validity.
- Face validity indicates that the items that are intended to measure a concept, do, on the face of
it, look like they measure the concept.
1. Focus groups are a source of primary data for research purposes and are often used in exploratory studies.
2. Panels, like focus groups, are another source of primary information for research
purposes. Panels meet more than once. A panel is static (the same members over
extended periods of time) or dynamic (panel members change from time to time as
various phases of the study are in progress ).
Panels are typically used when several aspects of a product are to be studied from time to
time.
The Delphi technique is a forecasting method that uses a carefully selected panel of experts in a systematic, interactive manner.
3. Unobtrusive measures originate from a primary source that does not involve people.
Example: the number of soft drink cans of different brands found in trash bags provides a measure of their consumption levels.
Open-ended questions allow respondents to answer them in any way they choose. A closed question, in contrast, asks the respondent to make a choice among a set of alternatives given by the researcher. Closed questions help the respondent to make quick choices.
Leading questions: questions should not be phrased in ways that lead the respondent to give the answers that the researcher would like them to give. By asking a leading question we are signaling to and pressuring respondents to give a particular answer (e.g., to say no).
Questions should not be worded such that they elicit socially desirable responses.
Classification data/personal information elicit such information as age, educational level, marital status, and income.
The researcher may also play the role of the participant-observer. Here, the researcher enters the organization and becomes a part of the work team.
Ch. 10 Sampling
The process of selecting the right individuals, objects or events as representatives for the entire
population is known as sampling.
-Population refers to the entire group of people, events, or things of interest that the researcher
wishes to investigate.
-An element is a single member of the population.
-The population frame is a listing of all the elements in the population from which the sample is
drawn.
-A sample is a subset of the population; it is thus a subgroup of the population.
-A sampling unit is the element that is available for selection in some stage of the sampling
process.
-A subject is a single member of the sample, just as an element is a member of the population.
The reasons for using a sample, rather than collecting data from the entire population, are self-evident.
Attributes or characteristics of the population are generally normally distributed.
Sampling is the process of selecting a sufficient number of the right elements from the
population. The major steps include:
1. Define the population
2. Determine the sample frame
3. Determine the sampling design
a. In probability sampling, the elements in the population have some known, non-zero chance of being selected as sample subjects.
b. In nonprobability sampling, the elements do not have a known or predetermined chance of being selected as subjects.
4. Determine the appropriate sample size
5. Execute the sampling process
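The last step above, executing the sampling process, can be sketched with Python's standard library. The population frame and sample size here are hypothetical; simple random sampling gives every element a known, equal chance of selection.

```python
import random

# Hypothetical population frame: a numbered listing of 1,000 employees.
population_frame = [f"employee_{i}" for i in range(1, 1001)]

# Simple random sampling without replacement: each element has a known,
# equal chance (100/1000 = 0.1) of being selected as a subject.
random.seed(42)  # fixed seed only to make the illustration reproducible
sample = random.sample(population_frame, k=100)

print(len(sample))       # 100 subjects
print(len(set(sample)))  # 100: no element is drawn twice
```

In a real study the frame would come from, e.g., a personnel roster, and the seed would not be fixed.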
Non-response error exists to the extent that those who did respond to your survey are different from those who did not on characteristics of interest in your study. Two important sources of non-response are not-at-homes and refusals.
Probability sampling can either be unrestricted or restricted.
a. Unrestricted or simple random sampling: every element in the population has a known and equal chance of being selected as a subject.
Judgment sampling: used when a limited number or category of people have the information that is sought.
Quota sampling: ensures that certain groups are adequately represented in the study through the assignment of a quota.
A reliable and valid sample should enable us to generalize the findings from the sample to the
population under investigation.
Precision refers to how close our estimate is to the true population characteristics.
Confidence denotes how certain we are that our estimates will really hold true for the
population.
Efficiency in sampling is attained when, for a given level of precision (standard error), the sample size could be reduced, or when, for a given sample size, the level of precision could be increased.
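A minimal sketch of precision and confidence, assuming a hypothetical sample of satisfaction scores: the standard error measures precision, and the 1.96 multiplier corresponds to a 95% confidence level.

```python
import math
import statistics

# Hypothetical sample of 25 customer-satisfaction scores (1-10 scale).
scores = [7, 8, 6, 9, 7, 8, 7, 6, 8, 9, 7, 7, 8, 6, 7,
          9, 8, 7, 6, 8, 7, 8, 9, 7, 8]

n = len(scores)
mean = statistics.mean(scores)
se = statistics.stdev(scores) / math.sqrt(n)  # standard error = s / sqrt(n)

# 95% confidence interval: we are 95% confident the population mean lies here.
ci_low, ci_high = mean - 1.96 * se, mean + 1.96 * se
print(f"mean={mean:.2f}, 95% CI=({ci_low:.2f}, {ci_high:.2f})")
```

A larger sample shrinks the standard error, so the same confidence level yields a tighter (more precise) interval.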
Grounded theory: expresses the idea that theory will emerge from data through an iterative process that involves repeated sampling, collection of data, and analysis of data until theoretical saturation is reached.
Check whether the formulas come up in old exams; add them if necessary!!!
Data transformation is also necessary when several questions have been used to measure a single
concept.
Frequencies simply refer to the number of times various subcategories of a certain phenomenon
occur. Frequencies can also be visually displayed as bar charts, histograms or pie charts.
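Frequencies can be tallied directly, for instance with Python's collections.Counter; the department data below are made up for illustration.

```python
from collections import Counter

# Hypothetical nominal variable: department of 12 respondents.
departments = ["sales", "hr", "sales", "it", "hr", "sales",
               "it", "it", "sales", "hr", "sales", "it"]

freq = Counter(departments)  # counts per subcategory: sales 5, it 4, hr 3
print(freq)
```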
There are 3 measures of central tendency: the mean, median, and the mode.
* mean (average) is a measure of central tendency that offers a general picture of the data
without unnecessarily inundating one with each of the observations in a data set,
* the median is the central item in a group of observations when they are arrayed in either an
ascending or descending order.
* the mode is the most frequently occurring phenomenon.
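The three measures of central tendency can be computed with Python's statistics module; the observations are made up for illustration.

```python
import statistics

observations = [2, 3, 3, 5, 7, 10]

mean = statistics.mean(observations)      # 5: the average of the values
median = statistics.median(observations)  # 4.0: midpoint of 3 and 5
mode = statistics.mode(observations)      # 3: the most frequent value
print(mean, median, mode)
```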
Measures of dispersion include the range, the standard deviation, the variance and the
interquartile range.
* range refers to the extreme values in a set of observations.
* the variance is calculated by subtracting the mean from each of the observations in the data set,
taking the square of this difference and dividing the total of these by the number of observations.
* the standard deviation offers an index of the spread of a distribution or variability in the data.
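A sketch of the dispersion measures for a hypothetical data set. Note that pvariance/pstdev divide by the number of observations, matching the variance definition above.

```python
import statistics

observations = [2, 4, 4, 4, 5, 5, 7, 9]

# Range: distance between the extreme values.
rng = max(observations) - min(observations)  # 7

# Variance as defined above: mean squared deviation (dividing by N).
var = statistics.pvariance(observations)     # 4.0
sd = statistics.pstdev(observations)         # 2.0 (square root of variance)

# Interquartile range: spread of the middle 50% of the observations.
q1, q2, q3 = statistics.quantiles(observations, n=4)
iqr = q3 - q1
print(rng, var, sd, iqr)
```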
We would like to see the nature, direction and significance of the bivariate relationships of the
variables used in the study. A correlation matrix is used to examine relationships between
interval and/or ratio variables.
Relationship between two nominal variables: chi-square test.
The chi-square test of significance helps us to see whether or not two nominal variables are related. Besides that test, other tests such as the Fisher exact probability test and the Cochran Q test are used to determine the relationship between two nominal variables.
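A minimal chi-square computation for a hypothetical 2x2 cross-tabulation of two nominal variables, comparing observed counts against the counts expected if the variables were unrelated.

```python
# Hypothetical cross-tab: gender (rows) against brand preference (columns).
observed = [[10, 20],
            [20, 10]]

row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
grand_total = sum(row_totals)

chi_square = 0.0
for i, row in enumerate(observed):
    for j, obs in enumerate(row):
        # Expected count under independence: row total * column total / N.
        expected = row_totals[i] * col_totals[j] / grand_total
        chi_square += (obs - expected) ** 2 / expected

print(round(chi_square, 3))  # 6.667
```

The statistic would then be compared against a chi-square distribution with (rows-1)*(columns-1) degrees of freedom.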
A Pearson correlation matrix will indicate the direction, strength, and significance of the bivariate relationships among all the variables that were measured on an interval or ratio scale. The correlation can range from -1.0 to +1.0.
Cronbach's alpha is a reliability coefficient that indicates how well the items in a set are positively correlated with one another. The closer Cronbach's alpha is to 1, the higher the internal consistency reliability. Reliabilities less than 0.6 are considered poor, those in the 0.7 range acceptable, and those over 0.8 good.
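Cronbach's alpha can be sketched from its standard formula, alpha = k/(k-1) * (1 - sum of item variances / variance of total scores); the three-item scale below is hypothetical, and population variances are used consistently throughout.

```python
import statistics

def cronbach_alpha(items):
    """items: one list of scores per scale item, aligned by respondent."""
    k = len(items)
    item_variances = sum(statistics.pvariance(item) for item in items)
    totals = [sum(scores) for scores in zip(*items)]  # per-respondent totals
    return (k / (k - 1)) * (1 - item_variances / statistics.pvariance(totals))

# Hypothetical 3-item scale answered by five respondents.
item1 = [4, 5, 3, 5, 4]
item2 = [4, 4, 3, 5, 4]
item3 = [5, 5, 3, 4, 4]
alpha = cronbach_alpha([item1, item2, item3])
print(round(alpha, 2))  # 0.84: in the "good" range per the rule of thumb above
```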
Factorial validity can be established by submitting the data for factor analysis.
Criterion-related validity can be established by testing the power of the measure to differentiate
individuals who are known to be different.
Convergent validity can be established when there is a high degree of correlation between 2
different sources responding to the same measure.
Discriminant validity can be established when 2 distinctly different concepts are not correlated to
each other.
Alpha: the significance level chosen for the statistical test.
Effect size: the size of a difference or the strength of a relationship in the population.
Sample size: the number of observations in the sample.
Bivariate statistical techniques are used when you want to examine two-variable relationships. If you are interested in the relationships between many variables, multivariate statistical techniques are required.
The one-sample t-test is used to test the hypothesis that the mean of the population from which a sample is drawn is equal to a comparison standard. (formula on page 339)
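The one-sample t statistic, t = (sample mean - mu) / (s / sqrt(n)), can be computed directly; the sample and the comparison standard mu below are hypothetical.

```python
import math
import statistics

# Hypothetical sample tested against a comparison standard mu = 6.
sample = [5, 6, 7, 8, 9]
mu = 6

n = len(sample)
mean = statistics.mean(sample)
s = statistics.stdev(sample)          # sample standard deviation (n - 1)
t = (mean - mu) / (s / math.sqrt(n))  # t = (mean - mu) / (s / sqrt(n))
print(round(t, 3))  # 1.414, with n - 1 = 4 degrees of freedom
```

The resulting t would be compared against a critical value from the t distribution with n - 1 degrees of freedom.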
The Wilcoxon signed-rank test is a nonparametric test for examining significant differences between two related samples or repeated measurements on a single sample.
McNemar's test is a nonparametric method used on nominal data. It assesses the significance of the difference between two dependent samples when the variable of interest is dichotomous. McNemar's test is a rather straightforward technique to test marginal homogeneity. Marginal homogeneity refers to equality between one or more of the marginal row totals and the corresponding marginal column totals.
An independent samples t-test is carried out to see if there are any significant differences in the
means for two groups in the variable of interest. That is, a nominal variable that is split into two
subgroups is tested to see if there is a significant mean difference between the two split groups
on a dependent variable.
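A sketch of the pooled-variance independent samples t statistic (which assumes equal group variances) for two hypothetical groups split on a nominal variable.

```python
import math
import statistics

# Hypothetical scores on the dependent variable for the two subgroups.
group_a = [4, 5, 6, 5, 5]
group_b = [7, 8, 6, 8, 7]

n1, n2 = len(group_a), len(group_b)
m1, m2 = statistics.mean(group_a), statistics.mean(group_b)
v1, v2 = statistics.variance(group_a), statistics.variance(group_b)

# Pooled variance: weighted average of the two sample variances.
pooled = ((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)
t = (m1 - m2) / math.sqrt(pooled * (1 / n1 + 1 / n2))
print(round(t, 3))  # -4.491, with n1 + n2 - 2 = 8 degrees of freedom
```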
An ANOVA helps to examine the significant mean differences among more than two groups on an interval or ratio-scaled dependent variable.
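The one-way ANOVA F statistic can be computed by partitioning the variation into between-groups and within-groups sums of squares; the three groups below are hypothetical.

```python
import statistics

# Hypothetical interval-scaled scores for three groups.
groups = [[4, 5, 6], [6, 7, 8], [9, 10, 11]]

all_scores = [s for g in groups for s in g]
grand_mean = statistics.mean(all_scores)

# Between-groups variation: how far each group mean sits from the grand mean.
ss_between = sum(len(g) * (statistics.mean(g) - grand_mean) ** 2 for g in groups)
# Within-groups variation: spread of scores around their own group mean.
ss_within = sum(sum((s - statistics.mean(g)) ** 2 for s in g) for g in groups)

df_between = len(groups) - 1               # k - 1
df_within = len(all_scores) - len(groups)  # N - k

f = (ss_between / df_between) / (ss_within / df_within)
print(round(f, 2))  # 19.0: a large F suggests the group means differ
```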
Simple regression analysis is used in a situation where one independent variable is hypothesized
to affect one dependent variable.
The basic idea of multiple regression analysis is similar to that of simple regression analysis.
Only in this case, we use more than one independent variable to explain variance in the dependent variable.
Standardized regression coefficients or beta coefficients are the estimates resulting from a multiple regression analysis performed on variables that have been standardized.
A dummy variable is a variable that has two or more distinct levels, which are coded 0 or 1. Dummy variables allow us to use nominal or ordinal variables as independent variables to explain, understand, or predict the dependent variable.
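A minimal sketch of dummy coding a nominal variable; the to_dummies helper and department data are made up for illustration. With k levels and a reference category, k - 1 dummy variables are needed.

```python
# Hypothetical nominal variable with three levels; "hr" is chosen as the
# reference category, so its rows get 0 on every dummy.
departments = ["sales", "hr", "it", "sales", "it"]

def to_dummies(values, reference):
    """Code each value as 0/1 indicators for every non-reference level."""
    levels = sorted(set(values) - {reference})
    return [{level: int(v == level) for level in levels} for v in values]

dummies = to_dummies(departments, reference="hr")
print(dummies)
```

The resulting 0/1 columns can then enter a multiple regression as independent variables.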
Multicollinearity is an often encountered statistical phenomenon in which two or more
independent variables in a multiple regression model are highly correlated. A common cutoff is a tolerance value of 0.1, which corresponds to a VIF of 10.
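The tolerance/VIF correspondence in the cutoff above is a simple reciprocal: tolerance is 1 - R-squared from regressing one predictor on the others, and VIF = 1 / tolerance.

```python
# VIF = 1 / tolerance, where tolerance = 1 - R^2 of a predictor regressed
# on the other predictors. A tolerance of 0.1 gives the VIF-of-10 cutoff.
def vif(tolerance):
    return 1 / tolerance

print(vif(0.1))   # 10.0
print(vif(0.25))  # 4.0
```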
-Steps:
Checking the reliability of measures: Cronbach's alpha
Obtaining descriptive statistics: frequency distributions
Descriptive statistics: measures of central tendencies and dispersions
Inferential statistics: Pearson correlation
Hypothesis testing
Overall interpretation and recommendations to the President