
RESEARCH METHODOLOGY (METOPEN) STUDY GUIDE

H1. Introduction to research


-Research is simply the process of finding solutions to a problem after a thorough study and
analysis of the situational factors. To be a successful manager it is important for you to know
how to go about making the right decisions by being knowledgeable about various steps involved
in finding solutions to problematic issues.
-Business research can be described as a systematic and organized effort to investigate a specific
problem encountered in the work setting, that needs a solution. The first step in research is to
know where the problem areas exist in the organization, and to identify as clearly and
specifically as possible the problems that need to be studied and resolved. Once the problem that
needs attention is clearly defined, then steps can be taken to gather information, analyze the data,
determine the factors that are associated with the problem, and solve it by taking the necessary
corrective measures.
Research thus encompasses the process of inquiry, investigation, examination and
experimentation. We can now define business research as an organized, systematic, data-based,
critical, objective, scientific inquiry or investigation into a specific problem, undertaken with the
purpose of finding answers or solutions to it.
Data can be quantitative (generally gathered by structured questions) or qualitative (generated
from broad answers).
-Applied research is research undertaken to solve a current problem faced by the manager in the
work setting, demanding a timely solution. Example: a product is not selling well; the manager
wants to know why, so he can take corrective action.
-Basic research (also called fundamental or pure research) aims to generate a body of knowledge
by trying to comprehend how certain problems that occur in organizations can be solved.
-So applied research is done when the problems are currently experienced in your organization,
basic research is done chiefly to make a contribution to existing knowledge.
-A group of research methods will enable managers to understand, predict and control their
environment. Knowledge of research and problem-solving processes helps managers to identify
problem situations before they get out of control.
Research and research methods help managers to:
1. Identify and effectively solve minor problems in the work setting
2. Know how to discriminate good from bad research
3. Take calculated risks in decision making
4. Combine experience with scientific knowledge while making decisions
When hiring researchers, the manager should make sure that:
1. The roles and expectations of both parties are made explicit.
2. Relevant philosophies and value systems of the organization are clearly stated and
constraints are communicated.
3. A good rapport is established with the researchers.
Advantages of Internal Consultants/Researchers
- The internal team would stand a better chance of being readily accepted by the employees in
the sub-unit of the organization where research needs to be done.
- The team would require less time to understand the structure, the philosophy, climate,
functioning and work systems of the organization.
- They would be available for implementing their recommendations after the research findings
are accepted. They would also be available for evaluating the effectiveness of the changes, and
considering further changes if and when necessary.
- The internal team might cost less, because they need less time to understand the system. For
problems that are of low complexity, the internal team would be ideal.
Disadvantages of Internal Consultants/Researchers
- The internal team may fall into a stereotyped way of looking at the organization and its
problems, because of their long tenure as internal consultants. This would inhibit any fresh ideas
and perspectives that might be needed to correct the problem.
- There is a possibility that even the most highly qualified internal research teams are not
perceived as experts by the staff and management, so that their recommendations don't get the
attention they deserve.
Advantages of External Consultants/Researchers
- The external team has a wealth of experience from having worked with different types of
organizations that have had the same or similar problems. They would be able to ponder over
several alternative ways of looking at the problem because of their extensive problem-solving
experiences in various other organizational setups.
- The external teams might have more knowledge because of their periodic training programs,
which the teams within the organization may not have access to.
Disadvantages of External Consultants/Researchers
- The cost of hiring an external team is usually high.
- It takes them more time to understand the organization, and they are not readily accepted by
employees.
- They charge additional fees for their assistance in the implementation and evaluation phases.
Knowledge of research greatly enhances the decision-making skills of the manager.
Ethics in business research refers to a code of conduct or expected societal norm of behavior
while conducting research.

H2. Scientific investigation


-Scientific research focuses on solving problems and pursues a step-by-step logical, organized,
and rigorous method to identify problems, gather data, analyze them, and draw valid conclusions
from them. This applies to both basic and applied research.
Scientific investigation tends to be more objective and helps managers to highlight the most
critical factors in the workplace that need specific attention.
-The hallmarks or main distinguishing characteristics of scientific research may be listed as
follows:
1. Purposiveness: there is a good reason and a definite aim for undertaking the research.
2. Rigor: a good theoretical base and a sound methodological design add rigor to a
purposive study. Rigor connotes carefulness, scrupulousness, and the degree of exactitude in
research investigations.
3. Testability: scientific research lends itself to testing logically developed hypotheses to see if
the data support the educated conjectures or hypotheses that are developed after a careful study
of the problem situation.
4. Replicability: the results should be supported again when the research is repeated, to see
whether the hypotheses reflect the true state of affairs in the population.
5. Precision and confidence: precision (reflected in the confidence interval, e.g. 20-30) refers to
the closeness of the findings to reality based on a sample. Confidence (reflected in the
confidence level, e.g. 95%) refers to the probability that our estimations are correct.

6. Objectivity: the conclusions drawn through the interpretation of the results of data analyses
should be objective. The more objective the interpretation, the more scientific the research.
7. Generalizability: refers to the scope of applicability of the research findings in one
organizational setting to other settings. The wider the range of applicability of the solutions
generated by research, the more useful the research is to the users.
8. Parsimony: simplicity in explaining the problems that occur, and in generating solutions for
the problems, is always preferred to complex research frameworks that consider an unmanageable
number of factors.
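Hallmark 5 (precision and confidence) can be made concrete with a small computation. The sketch below, on hypothetical sick-leave data, estimates a sample mean and a 95% confidence interval: the width of the interval reflects precision, the 95% level reflects confidence.

```python
import math
import statistics

# Hypothetical sample: days of sick leave taken last year by 25 employees.
sample = [4, 6, 5, 7, 3, 5, 6, 4, 5, 8, 6, 5, 4, 7, 5,
          6, 5, 4, 6, 5, 7, 5, 6, 4, 5]

n = len(sample)
mean = statistics.mean(sample)
sem = statistics.stdev(sample) / math.sqrt(n)   # standard error of the mean

# 95% confidence level -> z is about 1.96 (normal approximation).
lower, upper = mean - 1.96 * sem, mean + 1.96 * sem
print(f"estimated mean = {mean:.2f}, 95% CI = ({lower:.2f}, {upper:.2f})")
```

A narrower interval would mean higher precision; choosing a higher confidence level (say 99%) widens the interval, illustrating the trade-off between the two hallmarks.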
-Deductive reasoning: we start with a general theory and then apply this theory to a specific case.
Example: we know that all high performers are highly proficient in their jobs. If John is a high
performer, we then conclude that he is highly proficient in his job.
Hypothesis testing is deductive in nature.
-Inductive reasoning is a process where we observe certain phenomena and on this basis arrive at
conclusions. In other words, in induction we logically establish a general proposition based on
observed facts.
The hypothetico-deductive method provides a useful, systematic approach to solving basic and
managerial problems.
The seven steps of the Hypothetico-deductive method:
1. Identify a broad problem area
2. Define the problem statement
3. Develop hypotheses
a. A scientific hypothesis must meet two requirements:
i. It must be testable
ii. It must be falsifiable: it must be possible to disprove the hypothesis

4. Determine measures
5. Data collection
6. Data analysis

a. The data gathered are statistically analyzed to see if the hypotheses that were
generated have been supported.
7. Interpretation of data
a. Decide whether the hypotheses are supported or not by interpreting the meaning of
the results of the data analysis.
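Steps 3-7 of the hypothetico-deductive method can be sketched in code. The example below is a minimal illustration on invented data (the hypothesis, group names, and numbers are all hypothetical), using a pooled-variance t-test as the analysis step:

```python
import math
import statistics

# Step 3 -- hypothesis: employees with flexible working hours take less
# sick leave than employees with fixed hours (testable and falsifiable).
# Steps 4/5 -- measure and collect: days of sick leave per year
# (hypothetical data for two groups of ten employees).
flexible = [2, 3, 1, 4, 2, 3, 2, 1, 3, 2]
fixed    = [5, 4, 6, 3, 5, 4, 5, 6, 4, 5]

# Step 6 -- data analysis: pooled-variance t statistic for two groups.
n1, n2 = len(flexible), len(fixed)
m1, m2 = statistics.mean(flexible), statistics.mean(fixed)
v1, v2 = statistics.variance(flexible), statistics.variance(fixed)
sp2 = ((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)
t_stat = (m1 - m2) / math.sqrt(sp2 * (1 / n1 + 1 / n2))

# Step 7 -- interpretation: compare |t| with the two-tailed 5% critical
# value for df = n1 + n2 - 2 = 18, which is about 2.101.
supported = abs(t_stat) > 2.101
print(f"t = {t_stat:.3f}, hypothesis supported: {supported}")
```

The point of the sketch is the workflow, not the particular test: the hypothesis is stated before the data are analyzed, and the interpretation step decides whether the data support it.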

Other types of research are Case studies and action research


Case studies: involve in-depth, contextual analyses of similar situations in other organizations,
where the nature and definition of the problem happen to be the same as experienced in the
current situation.
Action research: is sometimes undertaken by consultants who want to initiate change processes
in organizations. Thus, action research is a constantly evolving project with interplay among
problem, solution, effect or consequences and new solutions.

H3. The problem area and the problem statement

A problem does not necessarily mean that something is seriously wrong with a current
situation that needs to be rectified immediately. A problem could also indicate an interest in an
issue where finding the right answers might help to improve an existing situation.
Secondary data: data gathered through existing sources (the data already exist and do not have to
be collected by the researcher).
Primary data: data gathered for research from the actual site of occurrence of events.
It is important for the researcher to be well acquainted with the background of the company
studied.
A problem statement is a clear, precise, and succinct statement of the specific issue that a
researcher wishes to investigate. It should be relevant, feasible and interesting.
A problem statement is relevant if it is meaningful from a managerial perspective, an academic
perspective, or both.
A problem statement is feasible if you are able to answer the problem statement within the
restrictions of the research project.

The research proposal drawn up by the investigator is the result of a planned, organized, and
careful effort, and basically contains the following:
1. The purpose of the study
2. The specific problem to be investigated
3. The scope of the study
4. The relevance of the study
5. The research design offering details on:
a. The sampling design
b. Data collection methods
c. Data analysis
6. Time frame of the study
7. The budget
8. Selected bibliography

H4. The research process: step 1, 2 and 3


-Scientific inquiry in the hypothetico-deductive mode can be discussed in terms of two aspects:
- the process of developing the conceptual framework and the hypotheses for testing
- the design which involves the planning of the actual study, dealing with such aspects as the
location of the study, the selection of the sample, and collection and analysis of the data.
-The research process for basic and applied research (fig. 4.1):
1. Observation
2. Preliminary data gathering
3. Problem definition
4. Theoretical framework
5. Generation of hypotheses
6. Scientific research

7. Data collection, analysis, and interpretation
8. Deduction
9. Report writing
10. Report presentation
11. Managerial decision making
-The identification of the broad problem area
This refers to the entire situation where one sees a possible need for research and problem
solving
1. currently existing problems
2. areas that need to be improved
3. conceptual or theoretical issue that needs to be tightened up
4. some research questions that a basic researcher wants to answer empirically
-Preliminary data collection (step 2)
The nature of information needed by the researcher for the purpose could be broadly classified
under three headings:
1. Background information on the organization (the origin and history of the company; size in
terms of employees and assets; charter; location; resources; finances)
2. Information on structural factors and management philosophy ( ask questions to management,
roles and positions, specialization, communications, control, rewards )
3. Perceptions, attitudes and behavioural responses of organizational members and client systems;
this is also called primary data. (By establishing good rapport with the individuals and following
the right questioning techniques the researchers will be able to obtain useful information: nature
of work, superiors, participation in decision making, rewards, opportunities.)
The main idea in gathering information on values, structures and processes is that these might
often reveal the root of the real problem.
Literature survey is the documentation of a comprehensive review of the published and
unpublished work from secondary sources of data in the areas of specific interest to the
researcher. The purpose of the literature review is to ensure that no important variable is ignored.
A good literature survey ensures that:
1. important variables that are likely to influence the problem situation are not left out of the
study.
2. a clearer idea emerges as to what variables would be most important to consider, why they
would be considered important, and how they should be investigated to solve the problem.

3. the problem statement can be made with precision and clarity
4. testability and replicability of the findings of the current research are enhanced
5. the wheel is not reinvented
6. the problem investigated is perceived by the scientific community as relevant and
significant.
A literature survey is carried out in three steps:
1. identify the various published and unpublished materials that are available on the topics of
interest, and gaining access to these.
2. gathering the relevant information
3. writing up the literature review
Three forms of databases come in handy while reviewing the literature:
- The bibliographic databases which display only the bibliographic citations ( the name of the
author, title of the article etc.)
- The abstract databases, which in addition provide an abstract or summary of the article
- The full-text databases which provide the full text of the article
Problem Definition:
-It is fruitful to define a problem as any situation where a gap exists between the actual and the
desired ideal states.
It is very important that symptoms of problems are not defined as the real problem. One way of
checking that the problem, rather than a symptom, is being addressed is to ask the question: is
the factor I have identified an antecedent, the real problem, or a consequence? Problem
definition, or problem statement as it is also often referred to, is a clear, precise, and succinct
statement of the question or issue that is to be investigated with the goal of finding an answer or
solution.

H4. Theoretical framework and hypothesis development


A theoretical framework represents your beliefs on how certain phenomena are related to each
other and an explanation of why you believe that these variables are associated with each other.
From this framework we can develop testable hypotheses to examine whether the theory
formulated is valid or not.
A variable is anything that can take on differing or varying values, 4 main types of variables:
1. dependent variables (criterion variable)
2. independent variables (predictor variable)
3. moderating variables
4. mediating or intervening variables
Variables can be discrete (male/female) or continuous (age of an individual).
Ad 1. The researcher's goal is to understand and describe the dependent variable, or to explain its
variability, or predict it. It is the main variable that lends itself to investigation.
Ad 2. An independent variable is one that influences the dependent variable in either a positive
or a negative way.
Ad 3. The moderating variable is one that has a strong contingent effect on the independent
variable-dependent variable relationship
Ad 4. An intervening variable is one that surfaces between the time the independent variables
start operating to influence the dependent variable and the time their impact is felt on it.
THUS: the independent variable helps to explain the variance in the dependent variable; the
mediating variable surfaces at time t2 as a function of the independent variable, which also helps
us to conceptualize the relationship between the independent and the dependent variables; and
the moderating variable has a contingent effect on the relationship between the variables.
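A moderating effect can be illustrated numerically: if, say, willingness to learn moderates the relationship between training and performance, the correlation between training hours and performance should differ across the levels of the moderator. The variable names and data below are hypothetical.

```python
import math

def pearson_r(x, y):
    """Plain Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical data: training hours (independent variable) and job
# performance (dependent variable), split by the moderator
# "willingness to learn".
hours_high = [2, 4, 6, 8, 10]        # employees high in willingness
perf_high  = [55, 62, 70, 78, 85]    # performance rises with hours

hours_low  = [2, 4, 6, 8, 10]        # employees low in willingness
perf_low   = [60, 58, 61, 59, 60]    # performance barely moves

print(pearson_r(hours_high, perf_high))  # strong positive correlation
print(pearson_r(hours_low, perf_low))    # correlation near zero
```

The independent-dependent relationship holds only for one level of the moderator, which is exactly what "contingent effect" means.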
A conceptual model helps you to structure your discussion of the literature.

The theoretical framework is the foundation on which the entire research project is based.
A hypothesis can be defined as a logically conjectured relationship between two or more
variables expressed in the form of a testable statement. By convention in the social sciences, to
call a relationship statistically significant we should be confident that in 95 cases out of 100 the
observed relationship is true (i.e., that there is only a 5% chance that it is not).
-If-then statements: if employees are more healthy, then they will take sick leave less frequently.
-If terms such as positive, negative, more than and less than, and the like are used then these
hypotheses are directional, because the direction of the relationship between the variables is
postulated.
-Non-directional hypotheses are those that do postulate a relationship or difference, but offer no
indication of the direction of these relationships or differences.
-The null hypothesis is a hypothesis set up to be rejected in order to support an alternative
hypothesis.
-The alternate hypothesis, which is the opposite of the null hypothesis, is a statement expressing
a relationship between two variables or indicating differences between groups.
-In deduction, a theoretical model is first developed, hypotheses are formulated, data are
gathered, and the hypotheses are then tested. In induction, new hypotheses are formulated on the
basis of data already collected, and are then tested.
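The practical difference between directional and non-directional hypotheses shows up in one-tailed versus two-tailed tests. A minimal sketch, assuming a large-sample test whose z statistic (the value 1.80 is invented for illustration) has already been computed:

```python
import math

def normal_cdf(z):
    """Standard normal cumulative distribution via the error function."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

# Hypothetical z statistic from testing H0 (no difference) against H1.
z = 1.80

# Non-directional H1 ("there is a difference") -> two-tailed p-value.
p_two_tailed = 2 * (1 - normal_cdf(z))
# Directional H1 (difference in the predicted direction) -> one-tailed.
p_one_tailed = 1 - normal_cdf(z)

print(f"two-tailed p = {p_two_tailed:.4f}, one-tailed p = {p_one_tailed:.4f}")
```

At the conventional 5% level the directional hypothesis would be supported here while the non-directional one would not, which is why a direction should only be postulated when theory justifies it.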

H5. Elements of research design


-The research design involves a series of rational decision-making choices. It is important to note
that the more sophisticated and rigorous the research design is, the greater the time, cost and
other resources expended on it will be.
Details of study:
1. Purpose of study(exploration, description, hypothesis testing)
2. Types of investigation(establishing: causal relationships, correlations, group differences, ranks,
etc.)
3. Extent of researcher interference(minimal, manipulation and/or control and/or simulation)
4. Study setting(contrived, non-contrived)
5. Unit of analysis(individuals, dyads, groups, organizations, machines, etc.)
6. Sampling design(probability/non-probability, sample size(n))
7. Time horizon(one-shot(cross sectional), longitudinal)
Measurement:
1. Measurement and measures(operational definition: items, scaling, categorizing, coding)
2. Data-collection method(observation interview, questionnaire, physical measurement,
unobtrusive)

Data analysis:
1. Feel for data
2. Goodness of data
3. Hypotheses testing
The case study, which is an examination of studies done in other similar organizational
situations, is also a method of solving problems.
-An exploratory study is undertaken when not much is known about the situation at hand, or no
information is available on how similar problems or research issues have been solved in the past.
Exploratory studies are also necessary when some facts are known, but more information is
needed for developing a viable theoretical framework. It is important to note that doing a study
for the first time in a particular organization does not make the research exploratory in nature;
only when knowledge is scant and a deeper understanding is sought, does the study become
exploratory.
-A descriptive study is undertaken in order to ascertain and be able to describe the characteristics
of the variables of interest in a situation.
The goal of a descriptive study is to describe relevant aspects of the phenomenon of interest from
an individual or other perspective.
-Hypothesis testing is undertaken to explain the variance in the dependent variable or to predict
organizational outcomes.
-It is not difficult to see that in exploratory studies, the researcher is basically interested in
exploring the situational factors so as to get a grip on the characteristics of the phenomenon of
interest. Descriptive studies are undertaken when the characteristics or phenomena to be
tapped in a situation are known to exist, and one wants to be able to describe them better by
offering a profile of the factors. Hypothesis testing offers an enhanced understanding of the
relationship that exists among variables.
The study in which the researcher wants to delineate the cause of one or more problems is
called a causal study. When the researcher is interested in delineating the important variables
associated with the problem, the study is called a correlational study.
The intention of the researcher conducting a causal study is to be able to state that variable X
causes variable Y.
-A correlational study is conducted in the natural environment of the organization with minimum
interference by the researcher with the normal flow of work.
-In studies conducted to establish cause-and-effect relationship, the researcher tries to
manipulate certain variables so as to study the effects of such manipulation on the dependent
variable of interest.

-Correlational studies done in organizations are called field studies. Studies conducted to
establish cause-and-effect relationships using the same natural environment in which employees
normally function are called field experiments.
Experiments done to establish a cause-and-effect relationship beyond the possibility of the least
doubt require the creation of an artificial, contrived environment in which all the extraneous
factors are strictly controlled; such a study is called a lab experiment.
In summary:
1. Field studies: where various factors are examined in the natural setting in which daily
activities go on as normal with minimal researcher interference.
2. Field experiments, where cause-and-effect relationships are studied with some amount of
researcher interference, but still in the natural setting where work continues in the
normal fashion.
3. Lab experiments, where the researcher explores cause-and-effect relationships, not only
exercising a high degree of control but also in an artificial and deliberately created
setting.
The unit of analysis refers to the level of aggregation of the data collected during the subsequent
data analysis stage.
-Two-person groups are called dyads (husband-wife). If we want to study buying behaviours we
have to collect data from, let's say, 60 individuals; if we want to study groups, we have to collect
data from, say, six different groups.
-A study can be undertaken in which data are gathered just once, perhaps over a period of days or
weeks or months, in order to answer a research question, such studies are called one-shot or
cross-sectional studies.
Studies in which data on the dependent variable are gathered at two or more points in time to
answer the research questions are called longitudinal studies.
Longitudinal studies take more time and effort and cost more than cross-sectional studies.

H6. Measurement of variables

Measurement is the assignment of numbers or other symbols to characteristics of objects
according to a prespecified set of rules. You cannot measure objects; you measure their
characteristics!
There are at least 2 types of variables: one lends itself to objective and precise measurement; the
other is more nebulous and does not lend itself to accurate measurement because of its abstract
and subjective nature.
Reduction of abstract concepts to render them measurable in a tangible way is called
operationalizing the concepts.
Operationalizing a concept involves a series of steps. The first step is to come up with a
definition of the construct that you want to measure. Then it is necessary to think about the
content of the measure. Finally, the validity and reliability of the measurement scales have to be
assessed.
-Dimensions are typical characteristics of certain people. An operational definition consists of
reducing the concept from its level of abstraction by breaking it into its dimensions and
elements, as discussed. An operational definition does not describe the correlates of the concept.
One cannot apply the concepts unless one has understood them and retained them in memory. A
scale is a tool or mechanism by which individuals are distinguished as to how they differ from
one another on the variables of interest to our study.

H7. Measurement: scaling, reliability, validity


A scale is a tool or mechanism by which individuals are distinguished as to how they differ from
one another on the variables of interest to our study.
There are 4 basic types of scales: nominal, ordinal, interval and ratio.
A nominal scale is one that allows the researcher to assign subjects to certain categories or
groups. For example, gender: (1) male; (2) female. Nominal scales categorize individuals or
objects into mutually exclusive and collectively exhaustive groups; in other words, there is no
third category into which respondents would normally fall. This makes it easy to compute
percentages or frequencies.
The nominal scale gives some basic, categorical, gross information.
-An ordinal scale not only categorizes the variables in such a way as to denote differences among
the various categories, it also rank-orders the categories in some meaningful way. For example:
rank the following five characteristics of a job in terms of how important they are to you; rank
the most important item as 1, the next as 2, and so on until you have ranked all of them from 1 to 5.
The ordinal scale does not give any indication of the magnitude of the differences among the
ranks.

-An interval scale lets us measure the distance between any two points on the scale. This helps us
to compute the means and the standard deviations of the responses on the variables. The interval
scale not only groups individuals according to certain categories and taps the order of these
groups; it also measures the magnitude of the differences in the preferences among the
individuals. (A thermometer is a good example of an interval-scaled instrument.)
-The ratio scale not only measures the magnitude of the differences between points on the scale
but also taps the proportions in the differences. It is the most powerful of the four scales
because it has a unique zero origin and subsumes all the properties of the other three scales.
(The weighing balance is a good example of a ratio scale.)
Thus: the nominal scale highlights the differences by classifying objects or persons into groups,
and provides the least amount of info on the variable. The ordinal scale provides some additional
info by rank-ordering the categories of the nominal scale. The interval scale, not only ranks, but
also provides us with info on the magnitude of the differences in the variable. The ratio scale
indicates not only the magnitude of the differences, but also the proportion.
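The four scale types determine which statistics are meaningful for a variable. A small sketch on hypothetical employee data:

```python
import statistics

# Hypothetical responses from ten employees, one variable per scale type.
gender   = ["M", "F", "F", "M", "F", "M", "F", "F", "M", "F"]   # nominal
job_rank = [1, 3, 2, 5, 4, 2, 3, 1, 4, 5]                       # ordinal
temp_c   = [21, 23, 22, 24, 21, 23, 22, 24, 23, 22]             # interval
salary   = [2000, 2500, 2200, 3000, 2800,
            2400, 2600, 2100, 2900, 2500]                       # ratio

print(statistics.mode(gender))       # nominal: only counts/mode make sense
print(statistics.median(job_rank))   # ordinal: median/ranks, not means
print(statistics.mean(temp_c))       # interval: means and std devs apply
print(max(salary) / min(salary))     # ratio: "1.5 times as much" is valid
```

Note that the ratio statement in the last line is only meaningful because salary has a true zero point; saying 24 °C is "not twice as warm" as 12 °C is the classic interval-scale caveat.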
There are 2 main categories of scales: the rating scale and the ranking scale.
-Rating scales have several response categories and are used to elicit responses with regard to the
object, event, or person studied.
-Ranking scales make comparisons between or among objects, events, or persons and elicit the
preferred choices and ranking among them.
The following rating scales are often used in organizational research:
-The dichotomous scale is used to elicit a Yes or No answer, as in the example below. Note that a
nominal scale is used to elicit the response.
Example: Do you own a car? Yes No
-Category scale uses multiple items to elicit a single response. This also uses the nominal scale.
Example: Where do you live? Best / Eindhoven / Tilburg

-Likert scale (summated scale) is designed to examine how strongly subjects agree or disagree
with statements on a 5-point scale with the following anchors (interval scale):
Strongly disagree = 1; Disagree = 2; Neither agree nor disagree = 3; Agree = 4; Strongly agree = 5
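The "summated" in summated scale means the item scores are added into one attitude score, after reverse-scoring any negatively worded items. A minimal sketch (the responses and the choice of reversed item are hypothetical):

```python
# Hypothetical responses of one subject to five Likert statements,
# 1 = strongly disagree ... 5 = strongly agree.
answers = [4, 5, 2, 4, 3]

# Index of the (hypothetical) negatively worded statement; its score is
# reverse-coded (6 - score on a 5-point scale) so that a high value
# always means a favourable attitude.
negatively_worded = {2}

score = sum((6 - a) if i in negatively_worded else a
            for i, a in enumerate(answers))
print(score)  # summated attitude score
```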

-Semantic differential scale is used to assess respondents' attitudes toward a particular brand,
advertisement, object, or individual (interval scale). Example:
Beautiful ........ Ugly
Courageous ........ Timid
-Numerical scale is similar to the semantic differential scale, with the difference that numbers
on a 5-point or 7-point scale are provided, with bipolar adjectives at both ends, as illustrated
below. (interval scale) Example:
Extremely pleased 7 6 5 4 3 2 1 Extremely displeased

-Itemized rating scale: a 5-point or 7-point scale with anchors, as needed, is provided for each
item and the respondent states the appropriate number on the side of each item, or circles the
relevant number against each item. (interval scale) Example:
1 = very unlikely; 2 = unlikely; 3 = neither unlikely nor likely; 4 = likely; 5 = very likely
a) I will be changing my job within the next 12 months ----
b) I will take on new assignments in the near future ----

When a neutral point is provided, it is a balanced rating scale, and when it is not, it is an
unbalanced rating scale. (An increase from 5 to 7 or 9 points on a rating scale does not improve
the reliability of the ratings.)
-Fixed or constant sum scale: the respondents are asked to distribute a given number of points
across various items. (ordinal scale) Example: in choosing a toilet soap, indicate the importance
you attach to each of the following five aspects by allotting points to each so that they total 100
in all.
Fragrance ----
Color ----
Shape ----
Size ----
Texture of lather ----
Total points: 100

-Stapel scale: this scale simultaneously measures both the direction and intensity of the attitude
toward the items under study. (interval scale) Example: each characteristic is rated on a scale
running from +3 to -3, with no neutral point:
+3 +2 +1 Adopting modern technology -1 -2 -3
+3 +2 +1 Product innovation -1 -2 -3
+3 +2 +1 Interpersonal skills -1 -2 -3

-Graphic rating scale: a graphical representation helps the respondents to indicate on this scale
their answers to a particular question by placing a mark at the appropriate point on the line.
(ordinal scale) Example: How would you rate your supervisor on a scale of 1 to 10?
10 - excellent
5 - all right
1 - very bad
The faces scale, which depicts faces ranging from smiling to sad, is also a graphic rating scale.
-Consensus scale: a panel of judges selects certain items, which in its view measure the relevant
concept. The items are chosen particularly based on their pertinence or relevance to the concept.
One such consensus scale is the Thurstone Equal Appearing Interval Scale, where a concept is
measured by a complex process followed by a panel of judges.
-In multidimensional scaling, objects, people or both are visually scaled and a conjoint analysis
is performed. This provides a visual image of the relationship in space among the dimensions of
a construct.
-Ranking scales are used to tap preferences between two objects or among more than two objects
or items (ordinal in nature). Alternative methods used are:
The paired comparison scale is used when, among a small number of objects, respondents are
asked to choose between two objects at a time. This helps to assess preferences. It is a good
method if the number of stimuli presented is small.
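The reason paired comparison only suits a small number of stimuli is that a respondent must judge n(n-1)/2 pairs. A sketch with hypothetical brands:

```python
from itertools import combinations

# Hypothetical stimuli for a paired comparison task.
brands = ["A", "B", "C", "D"]

# Every unordered pair the respondent has to judge.
pairs = list(combinations(brands, 2))
n = len(brands)
print(f"{n} objects -> {len(pairs)} paired comparisons")  # n*(n-1)/2 = 6
```

With 4 objects that is 6 judgments, but with 10 objects it is already 45, which quickly fatigues respondents.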
The forced choice enables respondents to rank objects relative to one another among the
alternatives provided.

Example: rank the following magazines you would like to subscribe to in order of preference,
assigning 1 to the most preferred choice and 5 to the least preferred.
Fortune ---
Playboy ---
Time ---
People ---
Prevention ---

-The comparative scale provides a benchmark or a point of reference to assess attitudes toward
the current object, event, or situation under study.
Example: how useful is it to invest in treasury bonds?
More useful = 1; About the same = 3; Less useful = 5

Thus: rating scales are used to measure most behavioral concepts. Ranking scales are used to
make comparisons or rank the variables that have been tapped on a nominal scale.
Different cultures react differently to issues of scaling.
-The goodness of measures: we need to be reasonably sure that the instruments we use in our
research do indeed measure the variables they are supposed to, and that they measure them
accurately.
-reliability is a test of how consistently a measuring instrument measures whatever concept it is
measuring. (concerned with stability and consistency of measurement)
- validity is a test of how well an instrument that is developed measures the particular concept it
is intended to measure. (concerned with whether we measure the right concept)
Several types of validity tests are used to test the goodness of measures.
- Content validity ensures that the measure includes an adequate and representative set of items
that tap the concept. The more the scale items represent the domain or universe of the concept
being measured, the greater the content validity.
- Face validity indicates that the items that are intended to measure a concept, do, on the face of
it, look like they measure the concept.

-Criterion-related validity is established when the measure differentiates individuals on a
criterion it is expected to predict. This can be done by establishing concurrent validity or
predictive validity:
Concurrent validity is established when the scale discriminates individuals who are known to be
different.
Predictive validity indicates the ability of the measuring instrument to differentiate among
individuals with reference to a future criterion.
-Construct validity testifies to how well the results obtained from the use of the measure fit the
theories around which the test is designed. This is assessed through convergent and discriminant
validity, which are explained below.
Example convergent validity: is established when the scores obtained with two different
instruments measuring the same concept are highly correlated.
Example discriminant validity: is established when, based on theory, two variables are
predicted to be uncorrelated, and the scores obtained by measuring them are indeed empirically
found to be so.
Reliability of a measure indicates the extent to which it is without bias (error free) and hence
ensures consistent measurement across time and across the various items in the instrument.
The ability of a measure to remain the same over time is indicative of its stability and low
vulnerability to changes in the situation.
The reliability coefficient obtained by repetition of the same measure on a second occasion is
called test-retest reliability. The higher it is, the better the test-retest reliability and consequently,
the stability of the measure across time.
When responses on 2 comparable sets of measures tapping the same construct are highly
correlated, we have parallel-form reliability.
The internal consistency of measures is indicative of the homogeneity of the items in the
measure that tap the construct.
The interitem consistency reliability is a test of the consistency of respondents' answers to all
items in a measure.
Split-half reliability reflects the correlations between two halves of an instrument. The estimates
will vary depending on how the items in the measure are split into two halves.
In a reflective scale, the items are expected to correlate. Each item is assumed to share a common
basis.

A formative scale is used when a construct is viewed as an explanatory combination of its
indicators. A good formative scale is one that represents the entire domain of the construct.

H. 10 Data collection methods


-Data can be obtained from primary or secondary sources. Primary data refer to information
obtained firsthand by the researcher on the variables of interest for the specific purpose of the
study. Secondary data refer to information gathered from sources already existing.
Examples of sources of primary data are individuals, focus groups, panels and unobtrusive
sources such as trash cans.
1. Focus groups consist typically of 8 to 10 members with a moderator leading the
discussions for about 2 hours on a particular topic, concept, or product. Members are
generally chosen on the basis of their expertise in the topic.
The focus sessions are aimed at obtaining respondents' impressions and opinions about
the concept or product.
They are relatively inexpensive and provide fairly dependable data within a short time.
a. The moderator steers the group persuasively to obtain all the info.
b. The content analysis of the data so obtained provides only qualitative info.
c. Focus groups are used for:
i. exploratory studies
ii. making generalizations based on the info generated by them
iii. conducting sample surveys

2. Panels, like focus groups, are another source of primary information for research
purposes. Panels meet more than once. A panel is static (the same members over
extended periods of time) or dynamic (panel members change from time to time as
various phases of the study are in progress).
Panels are typically used when several aspects of a product are to be studied from time to
time.
The Delphi technique is a forecasting method that uses a cautiously selected panel of
experts in a systematic, interactive manner.
3. Unobtrusive measures originate from a primary source that does not involve people.
Example: the number of cans of different brands of soft drinks found in trash bags provides
a measure of their consumption levels.

Examples of secondary sources are company records or archives, government publications.


Secondary data refer to information gathered by someone other than the researcher conducting
the current study.
Data collection methods: (p. 212)
1. Interviewing:
a. unstructured interviews are so labeled because the interviewer does not enter the interview
setting with a planned sequence of questions to be asked of the respondent.
The objective is to bring some preliminary issues to the surface so that the researcher can
determine what variables need further in-depth investigation.
b. structured interviews are those conducted when it is known at the outset what information is
needed.
Bias refers to errors or inaccuracies in the data collected.
Certain strategies in how questions are posed also help the participants to offer less biased
responses:
The questioning technique: the funneling technique involves a transition from broad to narrow
themes.
Face-to-face interviews provide rich data, offer the opportunity to establish rapport with the
interviewees, and help to explore and understand complex issues. On the negative side, face-to-face
interviews have the potential for introducing interviewer bias and can be expensive if a
large number of subjects are involved. They are best suited to the exploratory stages of research,
when the researcher is trying to get a handle on concepts.
Telephone interviews help to contact subjects dispersed over various geographic regions and
obtain immediate response from them. On the negative side, the interviewer cannot observe the
nonverbal responses of the respondents, and the interviewee can block the call. Telephone
interviews are best suited for asking structured questions where responses need to be obtained
quickly from a sample that is geographically spread.
2. Questionnaires:
A questionnaire is a pre-formulated written set of questions to which respondents record their
answers, usually within rather closely defined alternatives

Questionnaires can be administered personally, mailed to the respondents or electronically
distributed.
Personally administering questionnaires to groups of individuals helps to 1) establish rapport with
the respondents while introducing the survey, 2) provide clarification sought by the respondents
on the spot and 3) collect the questionnaires immediately after they are completed. It is
expensive, especially if the sample is geographically dispersed.
The main advantage is that the researcher can collect all the completed responses within a short
period of time.
Mail questionnaires are advantageous when responses to many questions have to be obtained
from a sample that is geographically dispersed. On the negative side, mailed questionnaires
usually have a low response rate and one cannot be sure if the data obtained are biased since the
non-respondents may be different from those who did respond.
The language of the questionnaires should approximate the level of understanding of the
respondents.
Type and form of questions:

Open-ended versus closed questions: open-ended questions allow respondents to answer them in
any way they choose.
A closed question asks the respondent to make choices among a set of alternatives given
by the researcher. Closed questions help the respondent to make quick choices.

Positively and negatively worded questions:
It is advisable to include some negatively worded questions as well.
The use of double negatives and excessive use of the words "not" and "only" should be
avoided, because they tend to confuse the respondent.

A double-barreled question is a question that lends itself to different possible responses to
its subparts. Such questions should be avoided and 2 or more separate questions asked
instead.

Responses to ambiguous questions have built-in bias, inasmuch as different respondents
might interpret such items in the questionnaire differently.

Recall-dependent questions: some questions might require respondents to recall
experiences from the past that are hazy in their memory.

Leading questions: questions should not be phrased in a way that they lead the respondent
to give the answers that the researcher would like them to give.
By asking a leading question we are signaling and pressuring respondents to answer in a certain way.

Loaded questions: when a question is phrased in an emotionally charged manner.

Questions should not be worded such that they elicit socially desirable responses.

Length of questions: simple, short questions are preferable to longer ones.

Classification data / personal information elicit such information as age, educational level, marital
status and income.

3. Other methods of data collection:


a. Observational surveys: people can be observed in their natural work environment.
Observational studies help to comprehend complex issues through direct observation and then, if
possible, asking questions to seek clarification on certain issues. The data obtained are rich and
uncontaminated by self-report bias. On the negative side, they are expensive, since long periods
of observation are required, and observer bias may well be present in the data.
The researcher can play one of two roles while gathering field observational data:

The researcher may act as a nonparticipant-observer by collecting the necessary data
without becoming an integral part of the organizational system.
The researcher may also play the role of the participant-observer. Here, the researcher
enters the organization and becomes a part of the work team.

Where the observer has a predetermined set of categories of activities to be studied, it is a
structured observational study.
At the beginning of the study it is possible that the observer has no definite idea regarding the
particular aspects that need focus. The observer will then record practically everything that is
observed: an unstructured observational study.

H. 10 Sampling
The process of selecting the right individuals, objects or events as representatives for the entire
population is known as sampling.

-Population refers to the entire group of people, events, or things of interest that the researcher
wishes to investigate.
-An element is a single member of the population.
-The population frame is a listing of all the elements in the population from which the sample is
drawn.
-A sample is a subset of the population; it is thus a subgroup of the population.
-A sampling unit is the element that is available for selection in some stage of the sampling
process.
-A subject is a single member of the sample, just as an element is a member of the population.
The reasons for using a sample, rather than collecting data from the entire population, are self-evident.
Attributes or characteristics of the population are generally normally distributed.
Sampling is the process of selecting a sufficient number of the right elements from the
population. The major steps include:
1. Define the population
2. Determine the sample frame
3. Determine the sampling design
a. In probability sampling, the elements in the population have some known, nonzero chance of being selected as sample subjects.
b. In nonprobability sampling the elements do not have a known or predetermined
chance of being selected as subjects.
4. Determine the appropriate sample size
5. Execute the sampling process

Non-response error exists to the extent that those who did respond to your survey are different
from those who did not on characteristics of interest in your study. Two important sources of
non-response are not-at-homes and refusals.
Probability sampling can either be unrestricted or restricted.
a. unrestricted or simple random sampling: every element in the population has a known and
equal chance of being selected as a subject.
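Unrestricted (simple random) sampling can be sketched with the standard library; the frame and sample size below are invented, and the seed only makes the illustration reproducible:

```python
import random

random.seed(1)  # reproducible illustration only

population_frame = list(range(1, 1001))  # hypothetical frame of 1000 element IDs

# random.sample gives every element an equal chance of selection, without replacement
sample = random.sample(population_frame, 30)
print(len(sample))  # 30
```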

b. restricted or complex probability sampling: complex probability designs are used because they
can offer more information for a given sample size.
The five most common complex probability sampling designs are: (p. 279)
a. systematic sampling: drawing every nth element in the population starting with a randomly
chosen element between 1 and n.
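A systematic sampling sketch (population and sample size invented): pick a random start between 1 and the sampling interval k, then take every k-th element:

```python
import random

population = list(range(1, 101))  # hypothetical frame of 100 element IDs

n = 10                     # desired sample size
k = len(population) // n   # sampling interval

random.seed(3)             # reproducible illustration only
start = random.randint(1, k)

# Every k-th element, beginning at the randomly chosen start
sample = population[start - 1::k]
print(len(sample))  # 10
```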
b. stratified random sampling: first divide the population into strata, then select elements from
each of these strata.
Proportionate and disproportionate stratified random sampling:
Proportionate: select, say, 20% of the members from each stratum. That is, the members
represented in the sample from each stratum will be proportionate to the total number of
elements in the respective strata.
Disproportionate stratified random sampling: the number of subjects from each stratum is
altered, while keeping the sample size unchanged. It is used either when some strata are too
small or too large.
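A proportionate stratified sampling sketch (the strata and their sizes are invented): the same fraction is drawn from every stratum, so each stratum stays proportionally represented:

```python
import random

random.seed(7)  # reproducible illustration only

# Hypothetical strata: employee IDs grouped by department
strata = {
    "production": list(range(1, 101)),    # 100 elements
    "sales": list(range(101, 151)),       # 50 elements
    "admin": list(range(151, 176)),       # 25 elements
}

fraction = 0.20  # draw 20% from each stratum
sample = {
    name: random.sample(members, int(len(members) * fraction))
    for name, members in strata.items()
}

print({name: len(drawn) for name, drawn in sample.items()})
# {'production': 20, 'sales': 10, 'admin': 5}
```

A disproportionate design would simply override these per-stratum counts while keeping the total sample size.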
c. cluster sampling: the target population is first divided into clusters. Then, a random sample of
clusters is drawn and for each selected cluster either all the elements or a sample of elements are
included in the sample.
Single-stage and multistage cluster sampling: cluster sampling can also be done in several
stages and is then known as multistage cluster sampling.
d. area sampling: (a specific type of cluster sampling) clusters consist of geographic areas such as
counties, city blocks, etc.
e. double sampling: a sampling design where initially a sample is used in a study to collect some
preliminary information of interest and later a subsample of this primary sample is used to
examine the matter in more detail.
-In Non-probability sampling, the elements in the population do not have any probabilities
attached to their being chosen as sample subjects.
-Convenience sampling refers to the collection of information from members of the population
who are conveniently available to provide it. It is most often used during the exploratory phase
of a research project and is perhaps the best way of getting some basic info quickly and
efficiently.
- Purposive sampling: the sampling here is confined to specific types of people who can provide
the desired information, either because they are the only ones who have it, or conform to some
criteria set by the researcher. 2 major types of purposive sampling:

Judgment sampling: used when a limited number or category of people have the info that
is sought.
Quota sampling: ensures that certain groups are adequately represented in the study
through the assignment of a quota.
A reliable and valid sample should enable us to generalize the findings from the sample to the
population under investigation.
Precision refers to how close our estimate is to the true population characteristics.
Confidence denotes how certain we are that our estimates will really hold true for the
population.
Efficiency in sampling is attained when, for a given level of precision (standard error), the
sample size could be reduced, or for a given sample size, the level of precision could be
increased.
Grounded theory: expresses the idea that theory will emerge from data through an iterative
process that involves repeated sampling, collection of data and analysis of data until theoretical
saturation is reached.
Check whether formulas appear in old exams; add them here if necessary!!!

H 11. Quantitative data analysis


The first step in data preparation is data coding. Data coding involves assigning a number to the
participants' responses so they can be entered into a database. After the data are keyed in, they
need to be edited.
Data editing deals with detecting and correcting illogical, inconsistent or illegal data and
omissions in the info returned by the participants of the study.
Inconsistent responses are responses that are not in harmony with other information.
Illegal codes are values that are not specified in the coding instructions.
Omissions may occur because respondents did not understand the questions, did not know the
answer or were not willing to answer the question.
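This kind of editing is easy to mechanize; a small sketch (the valid range and the keyed-in values are invented) that flags illegal codes and omissions:

```python
# Valid answers for this hypothetical item run from 1 to 5; None marks an omission
valid = set(range(1, 6))
keyed = [3, 5, 7, 1, None, 4]

illegal = [(i, v) for i, v in enumerate(keyed) if v is not None and v not in valid]
omissions = [i for i, v in enumerate(keyed) if v is None]
print(illegal, omissions)  # [(2, 7)] [4]
```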
Data transformation, a variation of data coding, is the process of changing the original numerical
representation of a quantitative value to other values.
Another type of data transformation is reverse scoring.
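Reverse scoring on a 5-point scale can be sketched as follows (the responses are invented): a recorded 1 becomes 5, a 2 becomes 4, and so on, via the transformation (points + 1) − response:

```python
points = 5                   # number of scale points
responses = [1, 2, 4, 5, 3]  # hypothetical answers to a negatively worded item

reversed_scores = [points + 1 - r for r in responses]
print(reversed_scores)  # [5, 4, 2, 1, 3]
```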

Data transformation is also necessary when several questions have been used to measure a single
concept.
Frequencies simply refer to the number of times various subcategories of a certain phenomenon
occur. Frequencies can also be visually displayed as bar charts, histograms or pie charts.
There are 3 measures of central tendency: the mean, median, and the mode.
* mean (average) is a measure of central tendency that offers a general picture of the data
without unnecessarily inundating one with each of the observations in a data set,
* the median is the central item in a group of observations when they are arrayed in either an
ascending or descending order.
* the mode is the most frequently occurring phenomenon.
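All three measures of central tendency are available in the standard library; a quick sketch on invented data:

```python
from statistics import mean, median, mode

# Hypothetical number of store visits reported by nine respondents
visits = [2, 3, 3, 4, 5, 3, 7, 2, 6]

print(mean(visits))    # arithmetic average
print(median(visits))  # middle value of the ordered observations
print(mode(visits))    # most frequently occurring value
```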
Measures of dispersion include the range, the standard deviation, the variance and the
interquartile range.
* the range refers to the extreme values in a set of observations (the difference between the
highest and the lowest value).
* the variance is calculated by subtracting the mean from each of the observations in the data set,
taking the square of this difference and dividing the total of these by the number of observations.
* the standard deviation offers an index of the spread of a distribution or variability in the data.
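The same measures of dispersion, computed on invented data; `pvariance` and `pstdev` divide by the number of observations, matching the definition of the variance given above:

```python
from statistics import pvariance, pstdev

scores = [2, 3, 3, 4, 5, 3, 7, 2, 6]  # hypothetical observations

data_range = max(scores) - min(scores)  # spread between the extreme values
var = pvariance(scores)                 # mean of squared deviations from the mean
sd = pstdev(scores)                     # square root of the variance
print(data_range, round(var, 2), round(sd, 2))  # 5 2.77 1.66
```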
We would like to see the nature, direction and significance of the bivariate relationships of the
variables used in the study. A correlation matrix is used to examine relationships between
interval and/or ratio variables.
Relationship between 2 nominal variables: chi-square test.

The chi-square test of significance helps us to see whether or not two nominal variables are
related. Besides that test, other tests such as the Fisher exact probability test and the Cochran Q
test are used to determine the relationship between two nominal variables.
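For a 2x2 cross-tabulation, the chi-square statistic can be computed by hand from observed and expected frequencies; the counts below are invented:

```python
# Hypothetical 2x2 cross-tabulation: gender (rows) by brand preference (columns)
observed = [[30, 20],
            [20, 30]]

row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
grand_total = sum(row_totals)

chi2 = 0.0
for i, row in enumerate(observed):
    for j, obs in enumerate(row):
        # Expected count under independence of the two nominal variables
        expected = row_totals[i] * col_totals[j] / grand_total
        chi2 += (obs - expected) ** 2 / expected

print(round(chi2, 2))  # 4.0
```

With 1 degree of freedom the 5% critical value is 3.84, so these invented counts would indicate a significant relationship.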
A Pearson correlation matrix will indicate the direction, strength, and significance of the
bivariate relationship among all the variables that were measured at an interval or ratio scale.
The correlation can range from -1.0 to +1.0.
Cronbach's alpha is a reliability coefficient that indicates how well the items in a set are
positively correlated to one another. The closer Cronbach's alpha is to 1, the higher the internal
consistency reliability. Reliabilities less than 0.6 are considered to be poor, those in the 0.7
range acceptable and those over 0.8 good.
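Cronbach's alpha follows directly from the item variances and the variance of the total score: alpha = k/(k-1) * (1 - sum of item variances / variance of total). A sketch on invented responses:

```python
from statistics import pvariance

# Hypothetical responses of six people to a three-item scale
items = [
    [4, 4, 3],
    [5, 4, 5],
    [2, 3, 2],
    [3, 3, 4],
    [5, 5, 4],
    [1, 2, 2],
]

k = len(items[0])  # number of items
item_vars = [pvariance([row[i] for row in items]) for i in range(k)]
total_var = pvariance([sum(row) for row in items])

alpha = (k / (k - 1)) * (1 - sum(item_vars) / total_var)
print(round(alpha, 2))  # 0.91
```

By the rule of thumb above, these invented items would count as good.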
Factorial validity can be established by submitting the data for factor analysis.
Criterion-related validity can be established by testing the power of the measure to differentiate
individuals who are known to be different.
Convergent validity can be established when there is a high degree of correlation between 2
different sources responding to the same measure.
Discriminant validity can be established when 2 distinctly different concepts are not correlated to
each other.

H. 12 quantitative data analysis: hypothesis testing


The null hypothesis is presumed true until statistical evidence, in the form of a hypothesis test,
indicates otherwise.
There are 2 kinds of errors classified as type 1 error and type 2 error. A type 1 error, also referred
to as alpha, is the probability of rejecting the null hypothesis when it is actually true. A type 2
error, also referred to as beta, is the probability of failing to reject the null hypothesis given that
the alternative hypothesis is actually true.
Statistical power (1 − β) is the probability of correctly rejecting the null hypothesis.
Statistical power depends on:

Alpha

Effect size: is the size of a difference or the strength of a relationship in the population.

Sample size

Univariate statistical techniques are used when you want to examine one variable at a time;
bivariate techniques are used for two-variable relationships. If you are interested in the
relationship between many variables, multivariate statistical techniques are required.

The one sample t-test is used to test the hypothesis that the mean of the population from which a
sample is drawn is equal to a comparison standard. (formula on page 339)
The Wilcoxon signed-rank test is a nonparametric test for examining significant differences
between two related samples or repeated measurements on a single sample.
McNemar's test is a nonparametric method used on nominal data. It assesses the significance of
the difference between two dependent samples when the variable of interest is dichotomous.
McNemar's test is a rather straightforward technique to test marginal homogeneity. Marginal
homogeneity refers to equality between one or more of the marginal row totals and the
corresponding marginal column totals.
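The McNemar chi-square statistic depends only on the two discordant cell counts b and c (invented here): chi2 = (b - c)^2 / (b + c).

```python
# Hypothetical paired nominal data: b respondents switched yes->no,
# c respondents switched no->yes between the two measurements
b, c = 4, 16

# McNemar's statistic (without continuity correction)
chi2 = (b - c) ** 2 / (b + c)
print(round(chi2, 2))  # 7.2
```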
An independent samples t-test is carried out to see if there are any significant differences in the
means for two groups in the variable of interest. That is, a nominal variable that is split into two
subgroups is tested to see if there is a significant mean difference between the two split groups
on a dependent variable.
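The independent samples t-statistic with a pooled variance (the equal-variance assumption) can be computed by hand; the two groups below are invented:

```python
from math import sqrt
from statistics import mean, variance  # variance() is the sample variance (n - 1)

# Hypothetical scores on the dependent variable for two subgroups
group1 = [5, 6, 7, 6, 5, 7]
group2 = [4, 5, 4, 6, 5, 4]

n1, n2 = len(group1), len(group2)
# Pooled variance assumes equal population variances in the two groups
pooled = ((n1 - 1) * variance(group1) + (n2 - 1) * variance(group2)) / (n1 + n2 - 2)
t = (mean(group1) - mean(group2)) / sqrt(pooled * (1 / n1 + 1 / n2))
print(round(t, 2))  # 2.7
```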
An ANOVA helps to examine he significant mean differences among more than two groups on
an interval or ratio-scaled dependent variable.
Simple regression analysis is used in a situation where one independent variable is hypothesized
to affect one dependent variable.
The basic idea of multiple regression analysis is similar to that of simple regression analysis.
Only in this case, we use more than one independent variable to explain variance in the
dependent variable.
Standardized regression coefficients or beta coefficients are the estimates resulting from a
multiple regression analysis performed on variables that have been standardized.
A dummy variable is a variable that has two or more distinct levels, which are coded 0 or 1.
Dummy variables allow us to use nominal or ordinal variables as independent variables to
explain, understand or predict the dependent variable.
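A dummy-coding sketch (the categories are invented): a nominal variable with k levels is represented by k - 1 dummy variables, with one level left out as the reference group:

```python
# Hypothetical nominal variable: employment category
categories = ["clerical", "manual", "professional", "manual", "clerical"]

# "clerical" serves as the reference group, so it gets no dummy of its own
levels = ["manual", "professional"]
dummies = [{lvl: int(cat == lvl) for lvl in levels} for cat in categories]

print(dummies[1])  # {'manual': 1, 'professional': 0}
```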
Multicollinearity is an often encountered statistical phenomenon in which two or more
independent variables in a multiple regression model are highly correlated. A common cutoff
value is a tolerance value of 0.1, which corresponds to a VIF of 10.

H. 12 Data analysis and interpretation


-Negative questions need to be turned around on the scale, so that, for example, a 7 becomes a 1;
this is called reverse scoring.
-Cronbach's alpha is a reliability coefficient that indicates how well the items in a set are
positively correlated to one another.

-Steps:
Checking the reliability of measures: Cronbach's alpha
Obtaining descriptive statistics: frequency distributions
Descriptive statistics: measures of central tendencies and dispersions
Inferential statistics: Pearson correlation
Hypothesis testing
Overall interpretation and recommendations to the President