
Chapter 4

The Research Process


Stage 1: Clarifying the Research Question
Management-research question hierarchy
1. Management dilemma – a symptom of an actual problem or an early signal of an
opportunity
Exploratory research must be done to further define the management
question and research question
2. Management question – a restatement of the manager's dilemma in question
form
3. Research question – the hypothesis that best states the objective of the
research; it focuses the researcher's attention
4. Investigative questions – questions the researcher must answer to
satisfactorily answer the research question
5. Measurement questions – questions asked of participants to capture what is
specifically observed in a research study

Stage 2: Proposing Research


Resource Allocation and Budget
Rule-of-thumb budgeting – involves a fixed percentage of some criterion
Departmental or functional area budgeting – allocates a portion of total
expenditures in the unit to research activities
Task budgeting – selects specific research projects to support on an ad hoc basis
Evaluation Methods
Ex Post Facto Evaluation – an after-the-fact evaluation; a cost-benefit analysis
that informs future research proposals
Prior or Interim Evaluation – conduct a thorough management audit of
operations
Option Analysis – a choice between well-defined options; a formal analysis with
each alternative judged in terms of estimated costs and benefits
Decision Theory – assessment based on the outcomes of each action
a. Decision rule – may be to choose the course of action with the lowest loss
possibility
b. Decision variable – expressed in dollars, representing sales, costs, some
form of profit or contribution, or some other quantifiable measure
Evaluation of alternatives requires that:
a. Alternatives are explicitly stated
b. The decision variable is defined by an outcome that may be measured
c. The decision rule determines how the outcomes may be compared

The Research Proposal
- an activity that incorporates decisions made during early project-planning
phases of the study, including the management-research question hierarchy and
exploration
- may be oral, or written when a study is suggested
- may serve as a legally binding contract

Stage 3: Designing the Research Project

Research design – the blueprint for fulfilling objectives and answering questions
Target population – must be defined when planning the research project
Sample
Census – a count of all elements in a target population
Pilot test – detects weaknesses in design and instrumentation

Stage 4: Data Collection and Preparation


Pretesting – conducted to refine a measuring instrument
Data – facts presented to the researcher from the study environment
- characterized by their abstractness, verifiability, and elusiveness
- processed by our senses
- elusive to capture

Stage 5: Data Analysis and Interpretation


Data Analysis – reducing accumulated data to a manageable size, developing
summaries, looking for patterns, and applying statistical techniques

Stage 6: Reporting the Results


A Research Report should contain:
Executive Summary – synopsis of the problem, findings, and recommendations
Overview – background of the research problem
Implementation Strategies – recommendations
Technical Appendix – materials necessary to replicate the project

Chapter 5
Clarifying the Research Question through Secondary Data and Exploration
Exploration – useful when researchers lack a clear idea of the problem
Exploratory phase:
o Discovery and analysis of secondary sources
o Interviews with experts knowledgeable about the problem
o Interviews with individuals involved in the problem (individual depth
interviews)
o Group discussions with individuals involved in the problem

Objectives of the exploratory phase:
o Expand your understanding of the management dilemma by examining
similar solutions
o Gather background information on your topic to refine the research question
o Identify information that should be gathered to formulate investigative
questions
o Identify sources for and actual questions that might be used as measurement
questions
o Identify sources for and actual sample frames (lists of potential participants)
that might be used in sample design

Literature search – where exploratory research begins; a review of books and
articles in journals or professional literature
Steps of a literature search:
1. Define your management dilemma or management question
2. Consult encyclopedias, dictionaries, handbooks, and textbooks to identify key
terms, people, or events relevant to your management dilemma or
management question
3. Apply these key terms, names of people, or events in searching indexes,
bibliographies, and the Web to identify specific secondary sources
4. Locate and review specific secondary sources for relevance
5. Evaluate the value of each source

Levels of Information

Primary Sources – original works of research or raw data without interpretation, or
pronouncements that represent an official opinion or position
Secondary Sources – interpretations of primary data
Annual report – can be either primary (to outsiders) or secondary
Tertiary Sources – interpretations of secondary sources; generally
represented by indexes, bibliographies, and other finding aids
Dictionaries – ubiquitous; used to define terms
Encyclopedias – provide background and historical information on a topic, or names
and terms that can enhance search results
Handbooks – collections of facts unique to a topic
Directories – used for finding names and addresses
Source evaluation – needed for secondary sources; based on five factors:
1. Purpose – the explicit or hidden agenda of the information source
2. Scope – breadth of topic coverage, including time period, geographic
limitations, and the criteria for information inclusion
3. Authority – the level of the data and the credentials of the source
4. Audience – characteristics and background of the people or groups served
5. Format – how the information is presented
Data mining – the process of discovering knowledge from databases; mined
results become secondary data
Data warehouse – an electronic repository for databases that organizes large
volumes of data into categories to facilitate retrieval, interpretation, and sorting by
end users
Data marts – departmental or cross-functional storage facilities

Evolution of Data Mining


Data Collection (1960s) – retrospective, static
Data Access (1980s) – retrospective, dynamic, record level
Data Navigation (1990s) – retrospective, dynamic, multi-level
Data Mining (2000s) – prospective, proactive

Data-Mining Process
Sample – decide between census and sample data
Explore – identify relationships within the data
Modify – modify or transform the data
Model – develop a model that explains the data relationships
Assess – test the model's accuracy
Data reduction programs:
1. Factor analysis
2. Correspondence analysis
3. Clustering
Modeling techniques:
1. Neural networks
2. Decision trees
3. Sequence-based models
4. Classification
5. Estimation
6. Genetic-based models

Developing investigative questions:
1. Performance considerations – relative costs of options, speed of packing
serviced laptops
2. Attitudinal issues – perceived service quality
3. Behavioral issues – ease of use in packing
2 types of measurement questions:
1. Predesigned measurement questions – formulated and tested previously by
other researchers
2. Custom-designed measurement questions – formulated specifically for the
project at hand

Chapter 11
Measurement – consists of assigning numbers to empirical events, objects,
properties, or activities in compliance with a set of rules
Process of measurement:
1. Selecting observable empirical events
2. Developing mapping rules – a scheme for assigning numbers or symbols
3. Applying the mapping rules
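The three steps above can be sketched in code: a mapping rule is just a scheme that carries observed responses to numbers. The response categories below are hypothetical, not from the text.

```python
# Sketch of the measurement process. The mapping rule assigns a number
# to each empirical observation; category names are hypothetical.

# Step 2: develop a mapping rule (scheme for assigning numbers)
mapping_rule = {"strongly disagree": 1, "disagree": 2, "neutral": 3,
                "agree": 4, "strongly agree": 5}

# Step 1: observed empirical events (participants' responses)
observed = ["agree", "neutral", "strongly agree"]

# Step 3: apply the mapping rule
measured = [mapping_rule[r] for r in observed]
print(measured)  # [4, 3, 5]
```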
Objects – concepts of ordinary experience, such as tangible items like furniture
Properties – characteristics of the object, such as weight, height, and posture,
among others
Concept – a bundle of meanings or characteristics associated with certain events,
objects, conditions, situations, or behaviors
Construct – an image or idea specifically invented for a given research purpose
Variable – an event, act, characteristic, trait, or attribute that can be measured and
to which we assign numerals or values
Operational definition – a definition for a construct stated in terms of specific
criteria for testing or measurement

Measurement
Four mapping assumptions:
1. Numbers are used to classify, group, or sort responses
2. Numbers are ordered
3. Differences between numbers are ordered
4. The number series has a unique origin indicated by the number zero

Measurement Scales
1. Nominal Scales – classification (mutually exclusive and collectively
exhaustive) but no order, distance, or natural origin
2. Ordinal Scales – classification and order, but no distance or natural origin
3. Interval Scales – classification, order, and distance, but no natural origin
4. Ratio Scales – classification, order, distance, and natural origin
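A quick sketch of what each scale level permits statistically (the data values are hypothetical): nominal data support only counting, ordinal data add the median, interval data add the mean, and ratio data make ratios of values meaningful.

```python
# Illustrative sketch: summary statistics that become meaningful as the
# scale gains order, distance, and a natural origin. Values are hypothetical.
from statistics import mean, median, mode

nominal = ["red", "blue", "red", "green"]  # classification only
ordinal = [1, 2, 2, 3]                     # e.g., small/medium/large ranks
interval = [20.0, 22.5, 25.0]              # e.g., temperature in Celsius (no natural zero)
ratio = [0.0, 5.0, 10.0]                   # e.g., weight in kg (natural zero)

print(mode(nominal))        # nominal supports counting/mode only
print(median(ordinal))      # ordinal adds order, so the median is meaningful
print(mean(interval))       # interval adds distance, so the mean is meaningful
print(ratio[2] / ratio[1])  # ratio adds a natural origin, so ratios are meaningful
```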
Nonparametric methods (distribution-free statistics) – measures of statistical
significance that make no assumptions about the underlying distribution
4 major error sources:
1. Respondent – characteristics of the respondent
2. Situation – conditions and distractions
3. Measurer – e.g., leading questions
4. Data Collection Instrument – defects in the instrument

Characteristics of a good measurement tool:
1. Validity – the extent to which a test measures what we actually wish to measure
2. Reliability – has to do with the accuracy and precision of a measurement
procedure
3. Practicality – a wide range of factors of economy, convenience, and
interpretability
2 Forms of Validity:
1. Internal validity – the ability of a research instrument to measure what it is
purported to measure
2. External validity – the data's ability to be generalized across persons, settings,
and times
Content Validity – the extent to which a measuring instrument provides
adequate coverage of the investigative questions guiding the study
Criterion-related validity – reflects the success of measures used for prediction
or estimation
Concurrent validity – description of the present; criterion data are available at the
same time as predictor scores
Predictive validity – prediction of the future; criterion data are measured after the
passage of time
Construct validity – attempts to identify the underlying constructs being
measured and determine how well the test represents them
o Convergent validity – the degree to which scores on one scale correlate
with scores on other scales designed to measure the same construct
o Discriminant validity – the degree to which scores on a scale do not
correlate with scores from scales designed to measure different
constructs
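In practice both checks reduce to a correlation: a minimal sketch, with hypothetical scores, using a hand-rolled Pearson correlation so it runs on the standard library alone.

```python
# Hedged sketch: convergent validity assessed as the Pearson correlation
# between two scales intended to measure the same construct. Scores are
# hypothetical.

def pearson(x, y):
    """Pearson product-moment correlation of two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

scale_a = [2, 4, 4, 5, 7, 8]  # scores on one scale of the construct
scale_b = [1, 3, 5, 5, 8, 9]  # scores on a second scale of the same construct

r = pearson(scale_a, scale_b)
# A high positive r supports convergent validity; a near-zero correlation
# with a scale for a *different* construct would support discriminant validity.
print(round(r, 2))
```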

A validity criterion must be judged by the following:
1. Relevance – proper measures of success
2. Freedom from bias – equal opportunity to score well
3. Reliability – stable or reproducible
4. Availability – ease of securing the data

Reliability Approaches/Estimates
Stability – an instrument secures consistent results with repeated measurements of
the same person with the same instrument
o Test-retest methodology – a comparison between two administrations of a
test; can introduce a downward bias from the following:
- Time delay between measurements – situational factors change
- Insufficient time between measurements – permits remembering
previous questions
- Respondent's discernment of a study's disguised purpose – may
introduce bias if the respondent holds related opinions
- Topic sensitivity – the respondent seeks to learn more about the
topic

Note: a longer interval between testings is recommended


Equivalence – variations at one point in time among observers and sample items
o Interrater reliability – used to correlate the observations or scores of
different raters
o Parallel forms – alternative forms of the same test administered to the same
persons simultaneously
o Delayed equivalent forms – a composite of the test-retest and
equivalence methods

Internal Consistency – homogeneity among items; uses one administration of an
instrument or test to assess consistency
o Split-half technique – used when the measuring tool has many similar
questions or statements to which the participant can respond
o Spearman-Brown correction formula – used to adjust for the effect of
test length and to estimate the reliability of the whole test
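The split-half technique plus the Spearman-Brown correction can be sketched as follows. The item scores are hypothetical; the halves are formed from odd- versus even-numbered items, and the full-test reliability is estimated as r_full = 2·r_half / (1 + r_half).

```python
# Hedged sketch of split-half reliability with the Spearman-Brown correction.
# Item scores below are hypothetical.

def pearson(x, y):
    """Pearson correlation of two equal-length lists of scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Each inner list holds one participant's scores on six similar items.
responses = [
    [4, 5, 4, 4, 5, 4],
    [2, 1, 2, 2, 1, 1],
    [3, 3, 4, 3, 3, 4],
    [5, 5, 5, 4, 5, 5],
    [1, 2, 1, 2, 2, 1],
]

odd_half = [sum(p[0::2]) for p in responses]   # items 1, 3, 5
even_half = [sum(p[1::2]) for p in responses]  # items 2, 4, 6

r_half = pearson(odd_half, even_half)  # correlation between the two halves
r_full = 2 * r_half / (1 + r_half)     # Spearman-Brown: full-length estimate
print(round(r_full, 3))
```

Note that the correction always raises the estimate toward 1, reflecting the fact that a longer test is more reliable than either half alone.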

Chapter 12
Attitude – a learned, stable predisposition to respond to oneself, other persons,
objects, or issues in a consistently favorable or unfavorable way
o Cognitively based attitude – represents memories, evaluations, and beliefs
about the properties of the object
o Belief – an estimate about the truth of something
o Affectively based attitude – represents feelings, intuitions, values, and
emotions toward the object
o Conative (behaviorally based) attitude – reflects expectations and
behavioral intentions

Factors affecting the applicability of attitudinal research:
o Specific attitudes are better predictors of behavior than general ones
o Strong attitudes are better predictors of behavior than weak ones
o Direct experiences with the attitude object produce behavior more reliably
o Cognitive-based attitudes influence behavior better than affective-based
attitudes
o Affective-based attitudes are often better predictors of consumption
behavior
o Using multiple measurements of attitude or several behavioral
assessments across time and environments improves prediction
o The influence of reference groups, and the individual's inclination to conform
to these influences, improves the attitude-behavior linkage

Attitude scaling – assessing an attitudinal disposition using a number that
represents a person's score on an attitudinal continuum ranging from an extremely
favorable disposition to an extremely unfavorable one
Scaling – a procedure for the assignment of numbers (or other symbols) to a
property of objects in order to impart some of the characteristics of numbers to
the properties in question
Considerations in selecting a measurement scale:
o Research objectives
o Response types
o Data properties
o Number of dimensions
o Balanced or unbalanced
o Forced or unforced choices
o Number of scale points
o Rater errors

2 General Scaling Objectives:
o To measure characteristics of the participants who take part in the study
o To use participants as judges of the objects or indicants presented to them

4 General Types of Measurement Scales:
o Rating – participants score an object or indicant without making a direct
comparison to another object or attitude
o Ranking – constrains the study participant to making comparisons and
determining order among two or more properties
o Categorization – participants put themselves or property indicants into
groups or categories
o Sorting – requires sorting using criteria established by the researcher

Measurement Dimensions:
o Unidimensional – measures only one attribute of the participant or object
o Multidimensional – objects are described in several dimensions

Balanced rating scale – an equal number of categories above and below the midpoint
Unbalanced rating scale – an unequal number of favorable and unfavorable
response choices
Error of leniency – consistently easy or hard raters; compensated for by an
unbalanced scale
Unforced-choice rating scale – provides the opportunity to express no opinion
when the participant cannot choose among the alternatives offered
Forced-choice rating scale – requires participants to select one of the offered
alternatives

Recommended Number of Scale Points:
o 3-point scale – sufficient for simple products that require little effort
o 5- to 11-point scale – for complex products that play an important role in the
consumer's life; yields valid results
o 10-point scale – where cultural practices favor it

Error of central tendency – raters' reluctance to give extreme judgments

Addressing Rater Errors:
o Adjust the strength of the descriptive adjectives
o Space the intermediate descriptive phrases farther apart
o Provide smaller differences in meaning between the steps near the ends of
the scale than between the steps near the center
o Use more points in the scale

Halo effect – a systematic bias that the rater introduces by carrying over a
generalized impression of the subject from one rating to another

Rating Scales
1. Simple Attitude Scales – inexpensive, highly specific; produce nominal
data
o Simple category scale – a dichotomous scale that offers two mutually
exclusive response choices, e.g., yes or no
o Multiple-choice, single-response scale – multiple options but only one answer
o Multiple-choice, multiple-response scale – a checklist; the participant selects
one or several alternatives
2. Likert Scale – developed by Rensis Likert; a summated rating scale; 7 to 9
scale points; produces interval data
Summated rating scales – consist of statements that express either a
favorable or an unfavorable attitude toward the object of interest
Item analysis – a procedure used in creating a Likert scale
3. Semantic Differential Scales – measure the psychological meanings of an
attitude object using bipolar adjectives; used for brand and image studies;
developed by Osgood and associates; produce interval data
Semantic space – meanings are located in multidimensional property space
3 factors contributing to meaningful judgments by participants in SD:
a) Evaluation
b) Potency
c) Activity
4. Numerical scale – equal intervals separate the numeric scale points; 5, 7, or
10 points; ordinal or interval data
5. Multiple rating list scale – accepts a circled response from the rater; the
layout facilitates visualization of the results; ordinal or interval data
6. Stapel scale – an alternative to the semantic differential when it is difficult to
find bipolar adjectives that match the investigative question; uses plus and
minus values around a specific phrase; ordinal or interval data
7. Constant-sum scale – discovers proportions; participants allocate more
points to the preferred attribute or property indicant; points must sum to 100
8. Graphic rating scale – enables researchers to discern fine differences;
often used with children; interval data
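The "summated" part of a Likert-type scale can be sketched directly: item scores are summed into a total attitude score, with negatively worded items reverse-scored first. The items and scores below are hypothetical.

```python
# Hedged sketch of a summated (Likert-type) rating scale: each item is
# scored 1-5, reverse-worded items are flipped (1<->5, 2<->4), and the
# item scores are summed into one attitude score. Data are hypothetical.

SCALE_MAX = 5

def summated_score(item_scores, reverse_items=()):
    """Sum item scores, flipping any reverse-worded items first."""
    total = 0
    for i, score in enumerate(item_scores):
        if i in reverse_items:
            score = SCALE_MAX + 1 - score  # e.g., 2 on a 5-point item becomes 4
        total += score
    return total

# Four items; the item at index 2 is negatively worded, so it is reversed.
print(summated_score([4, 5, 2, 4], reverse_items={2}))  # 4 + 5 + 4 + 4 = 17
```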

Ranking Scales
1. Paired Comparison Scale – participants express attitudes unambiguously by
choosing between two objects; ordinal data
Number of judgments = n(n-1)/2, where n is the number of stimuli or objects
2. Forced Ranking Scale – objects are ranked relative to each other; ordinal data
3. Comparative Scale – benchmarking, or comparison with a standard; the data
produced by comparing the standard and a new object may be treated as
interval if linearity can be supported; otherwise ordinal
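The n(n-1)/2 judgment count above is just the number of unordered pairs, which can be verified by enumerating them. The object labels are hypothetical.

```python
# Sketch of the paired-comparison judgment count: with n objects, each
# participant makes n(n-1)/2 pairwise choices. Object labels are hypothetical.
from itertools import combinations

def judgment_count(n):
    """Number of pairwise judgments for n stimuli: n(n-1)/2."""
    return n * (n - 1) // 2

objects = ["A", "B", "C", "D", "E"]
pairs = list(combinations(objects, 2))  # every unordered pair of objects
print(len(pairs), judgment_count(len(objects)))  # both 10 for n = 5
```

This quadratic growth is why paired comparisons become burdensome quickly: 10 objects already require 45 judgments per participant.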

Sorting
Q-sorts – the sorting of a deck of cards into piles that represent points along a
continuum; the purpose is to get a conceptual representation of the sorter's
attitude toward the attitude object and to compare the relationships between people
Q-sorts must resolve the following:
1. Item selection
2. Structured or unstructured choices
3. Data analysis
Cumulative Scales – scored by accumulating the person's total score
Scalogram Analysis – a procedure for determining whether a set of items forms
a unidimensional scale

Chapter 13
Types of Scales for the Desired Analysis
Communication-based research – conducted by personal interview, telephone,
mail, computer, or some hybrid of these
Disguised questions – designed to conceal a question's true purpose; used to
obtain unbiased data
Four situations for disguising the study:
1. Willingly shared, conscious-level information
2. Reluctantly shared, conscious-level information
3. Knowable, limited-conscious-level information
4. Subconscious-level information

Dummy table – a cross-tabulation between two or more variables


Preliminary analysis plan – serves as a check on whether the planned
measurement questions meet the data needs of the research question
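A dummy table can be sketched as a small cross-tabulation built before any data are collected, then filled with responses. The variables and categories below (age group vs. purchase intent) are hypothetical.

```python
# Hedged sketch of a dummy table: a cross-tabulation of two planned
# variables, built with the stdlib Counter. Categories are hypothetical.
from collections import Counter

# Planned responses as (age group, intends to purchase?) pairs.
responses = [("18-34", "yes"), ("18-34", "no"), ("35-54", "yes"),
             ("35-54", "yes"), ("55+", "no"), ("18-34", "yes")]

crosstab = Counter(responses)  # counts each (row, column) cell

rows = ["18-34", "35-54", "55+"]
cols = ["yes", "no"]
print("age     " + "  ".join(cols))
for r in rows:
    print(f"{r:8}" + "  ".join(f"{crosstab[(r, c)]:3}" for c in cols))
```

Laying the empty table out first shows whether the planned measurement questions can actually fill every cell the research question needs.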

Refining measurement questions should accomplish the following:
o Encourage each participant to provide accurate responses
o Encourage each participant to provide an adequate amount of information
o Discourage each participant from refusing to answer specific questions
o Discourage each participant from early discontinuation of participation
o Leave the participant with a positive attitude about survey participation

Questionnaires/interview schedules – instruments used in interviews

3 categories of questionnaire questions:
o Administrative questions – identify the participant, interviewer, interview
location, and conditions
o Classification questions – sociological-demographic variables that allow
participants' answers to be grouped so that patterns are revealed and can be
studied
o Target questions (structured or unstructured) – address the investigative
questions of a specific study
a. Structured – a fixed set of choices
b. Unstructured – do not limit responses; open-ended questions

Question Design
1. Question Content – dictated by the investigative questions guiding the study
2. Question Wording – the need to be explicit, present alternatives, and explain
meanings
Leading question – injects significant error by implying what the response
should be

3. Response Strategy – the degree and form of structure imposed on the
participant
The decision whether to use open-ended or closed-ended questions depends on:
o Objectives of the study
o Participant's level of information about the topic
o Degree to which the participant has thought through the topic
o Ease with which the participant communicates
o Participant's motivation level to share information

Free-response (open-ended) question – asks the participant a question; either the
interviewer pauses for the answer or the participant records his or her own
words in the space provided on a questionnaire
Dichotomous questions – suggest opposing responses; may offer a middle-ground
alternative
Multiple-Choice Questions – more than two alternatives; generate nominal data,
but if numeric, may produce interval or ratio data
Double-barreled Question – asks about two or more issues at once (e.g., a
store-safety question that the participant must divide into several questions)
Primacy effect – participants choose the first alternative; common in visual surveys
Recency effect – participants choose the last alternative; common in telephone
surveys
Split-ballot technique – counteracts order bias by varying the order of alternatives
across segments of the sample
Participants can give multiple responses to a single question in the following ways:
o Checklist – generates nominal data
o Rating
o Ranking strategy

Rating Questions – position each factor on a companion scale
Ranking Questions – ask for the relative order of the alternatives
Drafting and Refining the Instrument:
1. Develop the participant-screening process, along with the introduction
2. Arrange the measurement question sequence:
o Identify groups of target questions by topic
o Establish a logical sequence for the question groups and the questions
within each group
o Develop transitions between these question groups

3. Prepare and insert instructions for the interviewer
4. Create and insert a conclusion, including a survey disposition statement
5. Pretest specific questions and the instrument as a whole
