Conjoint analysis
Conjoint analysis is a statistical technique used in market research to
determine how people value different features that make up an individual
product or service.
The objective of conjoint analysis is to determine what combination of a limited
number of attributes is most influential on respondent choice or decision
making. A controlled set of potential products or services is shown to
respondents and by analyzing how they make preferences between these
products, the implicit valuation of the individual elements making up the
product or service can be determined. These implicit valuations (utilities or
part-worths) can be used to create market models that estimate market share,
revenue and even profitability of new designs.
Conjoint analysis originated in mathematical psychology and was developed by
marketing professor Paul Green at the University of Pennsylvania and Data
Chan. Other prominent conjoint analysis pioneers include professor V. Seenu
Srinivasan of Stanford University, who developed a linear programming
(LINMAP) procedure for rank-ordered data as well as a self-explicated
approach; Richard Johnson (founder of Sawtooth Software), who developed the
Adaptive Conjoint Analysis technique in the 1980s; and Jordan Louviere
(University of Iowa), who invented and developed choice-based approaches to
conjoint analysis and related techniques such as MaxDiff.
Today it is used in many of the social sciences and applied sciences including
marketing, product management, and operations research. It is used frequently
in testing customer acceptance of new product designs, in assessing the appeal
of advertisements and in service design. It has been used in product positioning,
but there are some who raise problems with this application of conjoint analysis
(see disadvantages).
Conjoint analysis techniques may also be referred to as multiattribute
compositional modelling, discrete choice modelling, or stated preference
research, and are part of a broader set of trade-off analysis tools used for
systematic analysis of decisions. These tools include Brand-Price Trade-Off,
Simalto, and mathematical approaches such as evolutionary algorithms or Rule
Developing Experimentation.
Conjoint Design
A product or service area is described in terms of a number of attributes. For
example, a television may have attributes of screen size, screen format, brand,
price and so on. Each attribute can then be broken down into a number of
levels. For instance, levels for screen format may be LED, LCD, or Plasma.
Respondents would be shown a set of products, prototypes, mock-ups, or
pictures created from a combination of levels from all or some of the
constituent attributes and asked to choose from, rank or rate the products they
are shown. Each example is similar enough that consumers will see them as
close substitutes, but dissimilar enough that respondents can clearly determine a
preference. Each example is composed of a unique combination of product
features. The data may consist of individual ratings, rank orders, or preferences
among alternative combinations.
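As a minimal sketch (the attribute names and levels below are hypothetical, not drawn from any real study), the full set of candidate profiles is simply the Cartesian product of the attribute levels:

```python
from itertools import product

# Hypothetical television attributes and levels (illustrative only)
attributes = {
    "screen_size": ['42"', '50"', '60"'],
    "screen_format": ["LED", "LCD", "Plasma"],
    "brand": ["Brand A", "Brand B"],
    "price": ["$499", "$799"],
}

# Every profile is one combination of levels, one level per attribute.
names = list(attributes)
profiles = [dict(zip(names, combo)) for combo in product(*attributes.values())]

print(len(profiles))  # 3 * 3 * 2 * 2 = 36 profiles
```

Even this small design yields 36 profiles, which is why the number of profiles to show respondents must usually be reduced.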
As the number of attributes and levels increases, the number of
potential profiles grows exponentially. Consequently, fractional factorial
design is commonly used to reduce the number of profiles that have to be
evaluated, while ensuring enough data is available for statistical analysis,
resulting in a carefully controlled set of "profiles" for the respondent to consider.
Later developments presented the task in the form of an actual choice between
alternatives rather than the more artificial ranking and rating originally used.
Jordan Louviere pioneered an approach that used only a choice task, which
became the basis of choice-based conjoint and
discrete choice analysis. This stated preference research is linked to
econometric modeling and can be linked to revealed preference, where choice
models are calibrated on the basis of real rather than survey data. Originally,
choice-based conjoint analysis was unable to provide individual level utilities as
it aggregated choices across a market. This made it unsuitable for market
segmentation studies. With newer hierarchical Bayesian analysis techniques,
individual level utilities can be imputed back to provide individual level data.
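The fractional factorial reduction mentioned above can be illustrated with a textbook half-fraction; the three unnamed two-level attributes and the defining relation I = ABC below are a standard illustration, not tied to any particular study:

```python
from itertools import product

# Full factorial for three two-level attributes, coded -1/+1.
full = list(product([-1, 1], repeat=3))   # 8 profiles

# Half fraction via the defining relation I = ABC: keep profiles
# whose coded levels multiply to +1, giving 4 profiles in which
# every attribute still appears at each level equally often.
half = [p for p in full if p[0] * p[1] * p[2] == 1]

print(half)
```

The fraction halves the respondent burden while keeping the main effects estimable (at the cost of confounding them with higher-order interactions).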
Information collection
Data for conjoint analysis is most commonly gathered through a market
research survey, although conjoint analysis can also be applied to a carefully
designed configurator or to data from an appropriately designed test market
experiment. Market research rules of thumb apply with regard to statistical
sample size and accuracy when designing conjoint analysis interviews.
The length of the research questionnaire depends on the number of attributes to
be assessed and the method of conjoint analysis in use. A typical Adaptive
Conjoint questionnaire with 20-25 attributes may take more than 30 minutes to
complete. Choice-based conjoint, by using a smaller profile set distributed
across the sample as a whole, may be completed in less than 15 minutes. Choice
exercises may be displayed as a store front type layout or in some other
simulated shopping environment.
Analysis
Any number of algorithms may be used to estimate utility functions. These
utility functions indicate the perceived value of the feature and how sensitive
consumer perceptions and preferences are to changes in product features. The
actual mode of analysis will depend on the design of the task and profiles for
respondents. For full-profile tasks, linear regression may be appropriate; for
choice-based tasks, maximum likelihood estimation, usually with logistic
regression, is typically used. The original methods were monotonic analysis of
variance or linear programming techniques, but these are largely obsolete in
contemporary marketing research practice.
In addition, hierarchical Bayesian procedures that operate on choice data may
be used to estimate individual level utilities from more limited choice-based
designs.
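For a balanced full-profile ratings task, main-effect part-worths can be estimated very simply: each level's part-worth is the mean rating of the profiles containing that level, minus the grand mean (for balanced designs this matches effects-coded linear regression). A sketch with hypothetical ratings from one respondent:

```python
# Hypothetical ratings for a balanced 2x2 design (brand x price).
levels = {"brand": ["A", "B"], "price": ["low", "high"]}
ratings = {  # profile -> respondent's rating (illustrative only)
    ("A", "low"): 9, ("A", "high"): 6,
    ("B", "low"): 7, ("B", "high"): 2,
}

grand_mean = sum(ratings.values()) / len(ratings)

# Part-worth of a level = mean rating of profiles containing it,
# minus the grand mean (valid because the design is balanced).
part_worths = {}
for i, (attr, lvls) in enumerate(levels.items()):
    for lvl in lvls:
        rows = [r for prof, r in ratings.items() if prof[i] == lvl]
        part_worths[(attr, lvl)] = sum(rows) / len(rows) - grand_mean

print(part_worths)
```

Here the respondent values brand A at +1.5 and a low price at +2.0 relative to the average profile; real studies estimate such utilities per respondent or via the hierarchical Bayesian procedures described above.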
Advantages
Disadvantages
does not take into account the number of items per purchase, so it can give
a poor reading of market share
Marketing research process
1. Problem audit and problem definition - What is the problem? What are the
various aspects of the problem? What information is needed?
2. Conceptualization and operationalization - How exactly do we define the
concepts involved? How do we translate these concepts into observable and
measurable behaviours?
3. Hypothesis specification - What claim(s) do we want to test?
4. Research design specification - What type of methodology to use? - examples:
questionnaire, survey
5. Question specification - What questions to ask? In what order?
6. Scale specification - How will preferences be rated?
7. Sampling design specification - What is the total population? What sample size
is necessary for this population? What sampling method to use? - examples:
Probability sampling (cluster sampling, stratified sampling, simple random
sampling, multistage sampling, systematic sampling) and nonprobability
sampling (convenience sampling, judgement sampling, purposive sampling,
quota sampling, snowball sampling, etc.)
8. Data collection - Use mail, telephone, internet, mall intercepts
9. Codification and re-specification - Make adjustments to the raw data so it is
compatible with statistical techniques and with the objectives of the research -
examples: assigning numbers, consistency checks, substitutions, deletions,
weighting, dummy variables, scale transformations, scale standardization
10. Statistical analysis - Perform various descriptive and inferential techniques
(see below) on the raw data. Make inferences from the sample to the whole
population. Test the results for statistical significance.
11. Interpret and integrate findings - What do the results mean? What conclusions
can be drawn? How do these findings relate to similar research?
12. Write the research report - Report usually has headings such as: 1) executive
summary; 2) objectives; 3) methodology; 4) main findings; 5) detailed charts
and diagrams. Present the report to the client in a 10-minute presentation. Be
prepared for questions.
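The sampling designs listed in step 7 can be sketched as follows; the frame, strata, and sample sizes are hypothetical, and the proportionate allocation here divides evenly by construction (real allocations need a rounding rule):

```python
import random

# Hypothetical sampling frame of 100 customers in two strata.
frame = [("urban", i) for i in range(60)] + [("rural", i) for i in range(40)]
rng = random.Random(42)  # fixed seed so the sketch is reproducible

# Simple random sampling: every unit has equal selection probability.
srs = rng.sample(frame, 10)

def stratified(frame, n, rng):
    """Proportionate stratified sampling: sample each stratum in
    proportion to its share of the frame (here 60% urban, 40% rural)."""
    strata = {}
    for unit in frame:
        strata.setdefault(unit[0], []).append(unit)
    out = []
    for name, units in strata.items():
        k = round(n * len(units) / len(frame))
        out.extend(rng.sample(units, k))
    return out

sample = stratified(frame, 10, rng)
print(len(sample))  # 6 urban + 4 rural = 10
```

Stratification guarantees each subgroup its proportionate representation, which simple random sampling achieves only in expectation.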
The design step may involve a pilot study in order to discover any hidden
issues. The codification and analysis steps are typically performed by computer,
using statistical software. The data collection step can in some instances be
automated.
Reliability and validity
Test-retest reliability checks how similar the results are if the research is
repeated under similar circumstances. Stability over repeated measures is assessed
with the Pearson correlation coefficient.
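The Pearson coefficient used for test-retest checks can be computed directly; the two score lists below are hypothetical responses from the same respondents at two points in time:

```python
from math import sqrt

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical scores at time 1 and time 2 (illustrative only).
t1 = [4, 7, 6, 8, 5]
t2 = [5, 7, 6, 9, 4]
print(round(pearson(t1, t2), 3))  # close to 1 indicates stable measurement
```

A coefficient near 1 suggests the instrument yields stable results across repeated administrations.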
Alternative forms reliability checks how similar the results are if the research is
repeated using different forms.
Content validation (also called face validity) checks how well the content of
the research is related to the variables to be studied; it seeks to answer whether
the research questions are representative of the variables being researched. It is a
demonstration that the items of a test are drawn from the domain being measured.
Criterion validation checks how meaningful the research criteria are relative to
other possible criteria. When the criterion is collected later the goal is to establish
predictive validity.
random errors
bias introduced
measurement error
scaling error
Interviewer errors:
recording errors
cheating errors
questioning errors
Respondent errors:
non-response error
inability error
falsification error
Hypothesis errors:
type I error (also called alpha error) - the study results lead to the rejection
of the null hypothesis even though it is actually true
type II error (also called beta error) - the study results lead to the failure to
reject the null hypothesis even though it is actually false
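The type I error rate can be illustrated by simulation: when the null hypothesis is true, the long-run rejection rate approximates the test's significance level. The critical region below is an arbitrary illustration, not a recommended test:

```python
import random

rng = random.Random(0)  # fixed seed for reproducibility

def reject_null(n=100, crit=10):
    """Test H0: the coin is fair. Reject when the number of heads
    deviates from n/2 by more than crit."""
    heads = sum(rng.random() < 0.5 for _ in range(n))
    return abs(heads - n / 2) > crit

# The simulated coin IS fair, so every rejection is a type I (alpha) error.
trials = 2000
type_i_rate = sum(reject_null() for _ in range(trials)) / trials
print(type_i_rate)  # roughly a few percent for this critical region
```

Tightening the critical region lowers the type I rate but raises the type II rate; the trade-off between the two is what significance levels formalize.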
Qualitative research
Qualitative research is a method of inquiry employed in many different
academic disciplines, traditionally in the social sciences, but also in market
research and further contexts.[1] Qualitative researchers aim to gather an
in-depth understanding of human behavior and the reasons that govern such
behavior. The qualitative method investigates the why and how of decision
making, not just what, where, and when. Hence, smaller but focused samples are
more often needed, rather than large samples.
In the conventional view, qualitative methods produce information only on the
particular cases studied, and any more general conclusions are only
propositions (informed assertions). Quantitative methods can then be used to
seek empirical support for such research hypotheses. This view has been
disputed by Oxford University professor Bent Flyvbjerg, who argues that
qualitative methods and case study research may be used both for hypothesis
testing and for generalizing beyond the particular cases studied.[2]
Data collection
Qualitative researchers may use different approaches in collecting data, such as
the grounded theory practice, narratology, storytelling, classical ethnography, or
shadowing. Qualitative methods are also loosely present in other
methodological approaches, such as action research or actor-network theory.
Forms of the data collected can include interviews and group discussions,
observation and reflection field notes, various texts, pictures, and other
materials.
Qualitative research often categorizes data into patterns as the primary basis for
organizing and reporting results.[citation needed] Qualitative researchers typically rely
on the following methods for gathering information: Participant Observation,
Non-participant Observation, Field Notes, Reflexive Journals, Structured
Interview, Semi-structured Interview, Unstructured Interview, and Analysis of
documents and materials.[3]
The ways of participating and observing can vary widely from setting to setting.
Participant observation is a strategy of reflexive learning, not a single method
of observing.[4] In participant observation,[5] researchers typically become
members of a culture, group, or setting, and adopt roles to conform to that
setting. In doing so, the aim is for the researcher to gain a closer insight into the
culture's practices, motivations and emotions. It is argued that the researchers'
ability to understand the experiences of the culture may be inhibited if they
observe without participating[citation needed].
Some distinctive qualitative methods are the use of focus groups and key
informant interviews. The focus group technique involves a moderator
facilitating a small group discussion between selected individuals on a
particular topic. This is a particularly popular method in market research and
testing new initiatives with users/workers.
One traditional and specialized form of qualitative research is called cognitive
testing or pilot testing which is used in the development of quantitative survey
items. Survey items are piloted on study participants to test the reliability and
validity of the items.
In the academic social sciences the most frequently used qualitative research
approaches include the following:
1. Ethnographic Research, used for investigating cultures by collecting and
describing data that is intended to help in the development of a theory.
This method is also called ethnomethodology or "methodology of the
people". An example of applied ethnographic research is the study of a
particular culture and their understanding of the role of a particular
disease in their cultural framework.
2. Critical Social Research, used by a researcher to understand how people
communicate and develop symbolic meanings.
3. Ethical Inquiry, an intellectual analysis of ethical problems. It includes
the study of ethics as related to obligation, rights, duty, right and wrong,
choice etc.
4. Foundational Research, examines the foundations for a science, analyses
the beliefs and develops ways to specify how a knowledge base should
change in light of new information.
5. Historical Research, allows one to discuss past and present events in the
context of the present condition, and allows one to reflect on and provide
possible answers to current issues and problems.
Data analysis
Interpretive techniques
The most common analysis of qualitative data is observer impression. That is,
expert or bystander observers examine the data, interpret it via forming an
impression and report their impression in a structured and sometimes
quantitative form.
Coding
Main article: Coding (social sciences)
Coding is an interpretive technique that both organizes the data and provides a
means to introduce the interpretations of it into certain quantitative methods.
Most coding requires the analyst to read the data and demarcate segments
within it. Each segment is labeled with a code, usually a word or short
phrase, that suggests how the associated data segments inform the research
objectives. When coding is complete, the analyst prepares reports via a mix of:
summarizing the prevalence of codes, discussing similarities and differences in
related codes across distinct original sources/contexts, or comparing the
relationship between one or more codes.
Some qualitative data that is highly structured (e.g., open-ended responses from
surveys or tightly defined interview questions) is typically coded without
additional segmenting of the content. In these cases, codes are often applied as a
layer on top of the data. Quantitative analysis of these codes is typically the
capstone analytical step for this type of qualitative data.
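The prevalence-summarizing step of a coding report can be sketched as a simple tally; the segments, sources, and code labels below are hypothetical:

```python
from collections import Counter

# Hypothetical coded segments: (source, segment text, assigned codes)
coded = [
    ("interview_1", "I liked how easy it was", ["ease_of_use", "positive"]),
    ("interview_1", "but setup took forever",  ["setup_time", "negative"]),
    ("interview_2", "setup was quick for me",  ["setup_time", "positive"]),
    ("interview_2", "the menus confused me",   ["ease_of_use", "negative"]),
]

# Summarize code prevalence across all segments.
prevalence = Counter(code for _, _, codes in coded for code in codes)

# Tally codes per source, to compare related codes across contexts.
by_source = {}
for source, _, codes in coded:
    by_source.setdefault(source, Counter()).update(codes)

print(prevalence.most_common())
```

Such tallies are exactly the "layer on top of the data" that makes coded qualitative material amenable to quantitative summary.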
Contemporary qualitative data analyses are sometimes supported by computer
programs, termed Computer Assisted Qualitative Data Analysis Software.
These programs do not supplant the interpretive nature of coding but rather are
aimed at enhancing the analyst's efficiency at data storage/retrieval and at
applying the codes to the data. Many programs offer efficiencies in editing and
revising coding, which allow for work sharing, peer review, and recursive
examination of data.
A frequent criticism of the coding method is that it seeks to transform qualitative
data into quantitative data, thereby draining the data of its variety, richness, and
individual character. Analysts respond to this criticism by thoroughly expositing
their definitions of codes and linking those codes soundly to the underlying
data, therein bringing back some of the richness that might be absent from a
mere list of codes.
Recursive abstraction
Some qualitative datasets are analyzed without coding. A common method here
is recursive abstraction, where datasets are summarized, those summaries are
then further summarized, and so on. The end result is a more compact summary
that would have been difficult to accurately discern without the preceding steps
of distillation.
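Recursive abstraction can be caricatured with a deliberately crude keyword summarizer (a real analysis would use human-written summaries, not word counts); the field notes below are hypothetical:

```python
from collections import Counter

STOP = {"the", "a", "and", "was", "it", "to", "of", "in", "but"}

def summarize(text, k=3):
    """Crude summary: keep the k most frequent non-stopwords."""
    words = [w for w in text.lower().split() if w not in STOP]
    return " ".join(w for w, _ in Counter(words).most_common(k))

notes = [
    "the interface was confusing and the setup was confusing",
    "setup took a long time and the manual was confusing",
    "pricing was fair but setup support was slow",
]

# Pass 1: summarize each note; pass 2: summarize the summaries.
pass1 = [summarize(n) for n in notes]
final = summarize(" ".join(pass1))
print(final)  # the recurring theme ("setup") surfaces in the final pass
```

The point of the sketch is the shape of the method, not the summarizer: each pass compresses the previous one, and documenting what each pass kept and dropped is how analysts answer the "several times removed" criticism.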
A frequent criticism of recursive abstraction is that the final conclusions are
several times removed from the underlying data. While it is true that poor initial
summaries will certainly yield an inaccurate final report, qualitative analysts
can respond to this criticism. They do so, like those using the coding method, by
documenting the reasoning behind each summary step, citing examples from
the data where statements were included and where statements were excluded
from the intermediate summary.
Mechanical techniques
Some techniques rely on leveraging computers to scan and sort large sets of
qualitative data. At their most basic level, mechanical techniques rely on
counting words, phrases, or coincidences of tokens within the data. Often
referred to as content analysis, the output from these techniques is amenable to
many advanced statistical analyses.
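The basic counting behind content analysis, together with a simple "red flag" scan, might look like this; the documents and watch-list terms are hypothetical:

```python
import re
from collections import Counter

# Hypothetical review corpus (illustrative only).
docs = [
    "Battery life is great but the battery drains fast in games",
    "Screen is sharp; battery average",
    "Great screen, great speakers",
]

# Token counts across the corpus: the basic unit of content analysis.
tokens = Counter(
    w for doc in docs for w in re.findall(r"[a-z']+", doc.lower())
)

# A simple "red flag" scan: documents mentioning any watch-listed term.
flags = {"drains", "crash", "overheat"}
flagged = [doc for doc in docs
           if flags & set(re.findall(r"[a-z']+", doc.lower()))]

print(tokens.most_common(3), len(flagged))
```

Token counts like these feed directly into the statistical analyses mentioned above, while the flag scan illustrates the red-flag scenario discussed below.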
Mechanical techniques are particularly well-suited for a few scenarios. One
such scenario is for datasets that are simply too large for a human to effectively
analyze, or where analysis of them would be cost prohibitive relative to the
value of information they contain. Another scenario is when the chief value of a
dataset is the extent to which it contains red flags (e.g., searching for reports
of certain adverse events within a lengthy journal dataset from patients in a
clinical trial) or green flags (e.g., searching for mentions of your brand in
positive reviews of marketplace products).
A frequent criticism of mechanical techniques is the absence of a human
interpreter. And while masters of these methods are able to write sophisticated
software to mimic some human decisions, the bulk of the analysis is
nonhuman. Analysts respond by proving the value of their methods relative to
either a) hiring and training a human team to analyze the data or b) letting the
data go untouched, leaving any actionable nuggets undiscovered.
Paradigmatic differences
By the end of the 1970s many leading journals began to publish qualitative
research articles[9] and several new journals emerged which published only
qualitative research studies and articles about qualitative research methods.[10]
In the 1980s and 1990s, the new qualitative research journals became more
multidisciplinary in focus, moving beyond qualitative research's traditional
disciplinary roots of anthropology, sociology, and philosophy.[10]
The new millennium saw a dramatic increase in the number of journals
specializing in qualitative research with at least one new qualitative research
journal being launched each year.
Notes
1.
2.
3.
4.
5.
6.
7.
Lincoln (Eds.), The Sage Handbook of Qualitative Research (3rd ed.), pp. 191-215.
Thousand Oaks, CA: Sage. ISBN 0-7619-2757-3
8.
9.
10.