
Marketing research methods

Conjoint analysis
Conjoint analysis is a statistical technique used in market research to
determine how people value different features that make up an individual
product or service.
The objective of conjoint analysis is to determine what combination of a limited
number of attributes is most influential on respondent choice or decision
making. A controlled set of potential products or services is shown to
respondents and by analyzing how they make preferences between these
products, the implicit valuation of the individual elements making up the
product or service can be determined. These implicit valuations (utilities or
part-worths) can be used to create market models that estimate market share,
revenue and even profitability of new designs.
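To make the part-worth idea concrete, the sketch below (with entirely hypothetical attribute levels and utility values) shows the simplest such market model: a first-choice simulator that assigns a respondent to whichever product design has the highest summed part-worths.

```python
# A minimal sketch of a first-choice market simulator built on part-worths.
# All attribute levels and utility values below are hypothetical.

# Part-worth utilities for one (average) respondent, by attribute level.
part_worths = {
    "brand":  {"BrandA": 0.6, "BrandB": 0.2, "BrandC": -0.8},
    "screen": {"32in": -0.5, "42in": 0.1, "55in": 0.4},
    "price":  {"$300": 0.7, "$500": 0.0, "$700": -0.7},
}

# Two candidate product designs described as level combinations.
products = {
    "Design 1": {"brand": "BrandA", "screen": "42in", "price": "$500"},
    "Design 2": {"brand": "BrandB", "screen": "55in", "price": "$300"},
}

def total_utility(design):
    """Sum the part-worths of the levels making up a design."""
    return sum(part_worths[attr][level] for attr, level in design.items())

utilities = {name: round(total_utility(d), 2) for name, d in products.items()}
print(utilities)  # {'Design 1': 0.7, 'Design 2': 1.3}

# Under a first-choice rule, the design with the highest total utility
# captures this respondent's choice; aggregating over many respondents,
# each with their own part-worths, yields a market-share estimate.
print("Predicted choice:", max(utilities, key=utilities.get))
```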
Conjoint originated in mathematical psychology and was developed by
marketing professor Paul Green at the University of Pennsylvania and Data
Chan. Other prominent conjoint analysis pioneers include professor V. Seenu
Srinivasan of Stanford University who developed a linear programming
(LINMAP) procedure for rank ordered data as well as a self-explicated
approach, Richard Johnson (founder of Sawtooth Software) who developed the
Adaptive Conjoint Analysis technique in the 1980s and Jordan Louviere
(University of Iowa) who invented and developed Choice-based approaches to
conjoint analysis and related techniques such as MaxDiff.
Today it is used in many of the social sciences and applied sciences including
marketing, product management, and operations research. It is used frequently
in testing customer acceptance of new product designs, in assessing the appeal
of advertisements and in service design. It has been used in product positioning,
but there are some who raise problems with this application of conjoint analysis
(see disadvantages).
Conjoint analysis techniques may also be referred to as multiattribute
compositional modelling, discrete choice modelling, or stated preference
research, and are part of a broader set of trade-off analysis tools used for
systematic analysis of decisions. These tools include Brand-Price Trade-Off,
Simalto, and mathematical approaches such as evolutionary algorithms or Rule
Developing Experimentation.

Conjoint Design
A product or service area is described in terms of a number of attributes. For
example, a television may have attributes of screen size, screen format, brand,
price and so on. Each attribute can then be broken down into a number of
levels. For instance, levels for screen format may be LED, LCD, or Plasma.
Respondents would be shown a set of products, prototypes, mock-ups, or
pictures created from a combination of levels from all or some of the
constituent attributes and asked to choose from, rank or rate the products they
are shown. Each example is similar enough that consumers will see them as
close substitutes, but dissimilar enough that respondents can clearly determine a
preference. Each example is composed of a unique combination of product
features. The data may consist of individual ratings, rank orders, or preferences
among alternative combinations.
As the number of combinations of attributes and levels increases the number of
potential profiles increases exponentially. Consequently, fractional factorial
design is commonly used to reduce the number of profiles that have to be
evaluated, while ensuring enough data is available for statistical analysis,
resulting in a carefully controlled set of "profiles" for the respondent to consider.
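The sketch below illustrates this combinatorial growth and the idea of keeping only a fraction of the profiles. The attributes and levels are hypothetical, and the random subset stands in for a statistically constructed fractional factorial; real studies use designs, such as orthogonal arrays, chosen to keep levels balanced and uncorrelated.

```python
import itertools
import random

# Hypothetical television attributes and levels.
attributes = {
    "brand":  ["BrandA", "BrandB", "BrandC"],
    "screen": ["32in", "42in", "55in"],
    "format": ["LED", "LCD", "Plasma"],
    "price":  ["$300", "$500", "$700"],
}

# Full factorial: every combination of one level per attribute.
full_factorial = list(itertools.product(*attributes.values()))
print(len(full_factorial))  # 3 * 3 * 3 * 3 = 81 profiles

# A fractional design keeps only a subset of profiles. A random draw is
# shown purely for illustration; production designs are constructed so
# that attribute levels stay balanced and uncorrelated.
random.seed(1)
for profile in random.sample(full_factorial, 9):
    print(dict(zip(attributes.keys(), profile)))
```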

Types of conjoint analysis


The earliest forms of conjoint analysis were what are known as Full Profile
studies, in which a small set of attributes (typically 4 to 5) are used to create
profiles that are shown to respondents, often on individual cards. Respondents
then rank or rate these profiles. Using relatively simple dummy variable
regression analysis the implicit utilities for the levels can be calculated.
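A minimal sketch of that estimation step, assuming made-up ratings from a single respondent: each non-reference level becomes a dummy variable, and the fitted regression coefficients are the implicit part-worth utilities.

```python
import numpy as np

# Hypothetical full-profile study: 2 attributes (brand: A/B, price: low/high),
# one rating per profile from a single respondent.
profiles = [
    {"brand": "A", "price": "low",  "rating": 9},
    {"brand": "A", "price": "high", "rating": 6},
    {"brand": "B", "price": "low",  "rating": 7},
    {"brand": "B", "price": "high", "rating": 3},
]

# Dummy coding: brand A and price "low" serve as the reference levels.
X = np.array([[1,
               1 if p["brand"] == "B" else 0,
               1 if p["price"] == "high" else 0] for p in profiles])
y = np.array([p["rating"] for p in profiles])

# Ordinary least squares: the coefficients are the intercept and the
# part-worths of brand B and high price relative to the reference levels.
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print(dict(zip(["intercept", "brand_B", "price_high"], coef.round(2))))
# {'intercept': 9.25, 'brand_B': -2.5, 'price_high': -3.5}
# Negative coefficients indicate utility lost relative to the reference level.
```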
Two drawbacks were seen in these early designs. Firstly, the number of
attributes in use was heavily restricted. With large numbers of attributes, the
consideration task for respondents becomes too large and even with fractional
factorial designs the number of profiles for evaluation can increase rapidly.
In order to use more attributes (up to 30), hybrid conjoint techniques were
developed. The main alternative was to do some form of self-explication before
the conjoint tasks and some form of adaptive computer-aided choice over the
profiles to be shown.
The second drawback was that the task itself was unrealistic and did not link
directly to behavioural theory. In real-life situations, the task would be some
form of actual choice between alternatives rather than the more artificial
ranking and rating originally used. Jordan Louviere pioneered an approach that
used only a choice task, which became the basis of choice-based conjoint and
discrete choice analysis. This stated preference research is linked to
econometric modeling and can be linked to revealed preference, where choice
models are calibrated on the basis of real rather than survey data. Originally,
choice-based conjoint analysis was unable to provide individual-level utilities, as
it aggregated choices across a market. This made it unsuitable for market
segmentation studies. With newer hierarchical Bayesian analysis techniques,
individual-level utilities can be imputed back to provide individual-level data.

Information collection
Data for conjoint analysis is most commonly gathered through a market
research survey, although conjoint analysis can also be applied to a carefully
designed configurator or data from an appropriately designed test market
experiment. Market research rules of thumb apply with regard to statistical
sample size and accuracy when designing conjoint analysis interviews.
The length of the research questionnaire depends on the number of attributes to
be assessed and the method of conjoint analysis in use. A typical Adaptive
Conjoint questionnaire with 20-25 attributes may take more than 30 minutes to
complete. Choice-based conjoint, by using a smaller profile set distributed
across the sample as a whole, may be completed in less than 15 minutes. Choice
exercises may be displayed in a storefront-type layout or in some other
simulated shopping environment.

Analysis
Any number of algorithms may be used to estimate utility functions. These
utility functions indicate the perceived value of the feature and how sensitive
consumer perceptions and preferences are to changes in product features. The
actual mode of analysis will depend on the design of the task and profiles for
respondents. For full-profile tasks, linear regression may be appropriate; for
choice-based tasks, maximum likelihood estimation, usually with logistic
regression, is typically used. The original methods were monotonic analysis of
variance or linear programming techniques, but these are largely obsolete in
contemporary marketing research practice.
In addition, hierarchical Bayesian procedures that operate on choice data may
be used to estimate individual level utilities from more limited choice-based
designs.
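As a sketch of the choice-based case, the example below fits a conditional (multinomial) logit model to synthetic choice tasks by maximum likelihood, using scipy's general-purpose optimizer; dedicated conjoint software uses more specialized routines, and hierarchical Bayes when individual-level utilities are required.

```python
import numpy as np
from scipy.optimize import minimize

# Synthetic choice-based conjoint data: each task shows 3 alternatives
# described by 2 dummy-coded features, and records which one was chosen.
# X has shape (tasks, alternatives, features); all values are made up.
X = np.array([
    [[1, 0], [0, 1], [0, 0]],
    [[0, 1], [1, 0], [0, 0]],
    [[1, 1], [0, 0], [1, 0]],
    [[0, 1], [1, 1], [0, 0]],
    [[1, 0], [0, 0], [0, 1]],
    [[1, 0], [0, 1], [0, 0]],
])
chosen = np.array([0, 1, 0, 1, 0, 2])  # index of the alternative picked per task

def neg_log_likelihood(beta):
    """Conditional logit: P(choice) is the softmax of alternatives' utilities."""
    v = X @ beta                          # utility of each alternative
    v -= v.max(axis=1, keepdims=True)     # for numerical stability
    p = np.exp(v) / np.exp(v).sum(axis=1, keepdims=True)
    return -np.log(p[np.arange(len(chosen)), chosen]).sum()

result = minimize(neg_log_likelihood, x0=np.zeros(2))
print("estimated part-worths:", result.x.round(2))
```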

Advantages

- estimates psychological tradeoffs that consumers make when evaluating
several attributes together
- measures preferences at the individual level
- uncovers real or hidden drivers which may not be apparent to respondents
themselves
- presents a realistic choice or shopping task
- is able to use physical objects
- if appropriately designed, the ability to model interactions between
attributes can be used to develop needs-based segmentation

Disadvantages

- designing conjoint studies can be complex
- with too many options, respondents resort to simplification strategies
- difficult to use for product positioning research because there is no
procedure for converting perceptions about actual features to perceptions
about a reduced set of underlying features
- respondents are unable to articulate attitudes toward new categories, or
may feel forced to think about issues they would otherwise not give much
thought to
- poorly designed studies may over-value emotional/preference variables
and undervalue concrete variables
- does not take into account the number of items per purchase, so it can give
a poor reading of market share

Quantitative marketing research


Quantitative marketing research is the application of quantitative research
techniques to the field of marketing. It has roots in both the positivist view of
the world, and the modern marketing viewpoint that marketing is an interactive
process in which both the buyer and seller reach a satisfying agreement on the
"four Ps" of marketing: Product, Price, Place (location) and Promotion.
As a social research method, it typically involves the construction of
questionnaires and scales. People who respond (respondents) are asked to
complete the survey. Marketers use the information so obtained to understand
the needs of individuals in the marketplace, and to create strategies and
marketing plans.

Typical general procedure


In brief, there are five major steps involved in the research
process:
1. Defining the Problem.
2. Research Design.
3. Data Collection.
4. Analysis.
5. Report Writing & presentation.

A brief discussion of these steps follows:

1. Problem audit and problem definition - What is the problem? What are the
various aspects of the problem? What information is needed?
2. Conceptualization and operationalization - How exactly do we define the
concepts involved? How do we translate these concepts into observable and
measurable behaviours?
3. Hypothesis specification - What claim(s) do we want to test?
4. Research design specification - What type of methodology to use? - examples:
questionnaire, survey
5. Question specification - What questions to ask? In what order?
6. Scale specification - How will preferences be rated?
7. Sampling design specification - What is the total population? What sample size
is necessary for this population? What sampling method to use? - examples:
Probability Sampling (cluster sampling, stratified sampling, simple random
sampling, multistage sampling, systematic sampling) & Nonprobability
Sampling (Convenience Sampling, Judgement Sampling, Purposive Sampling,
Quota Sampling, Snowball Sampling, etc.); a sample-size sketch follows this list
8. Data collection - Use mail, telephone, internet, mall intercepts
9. Codification and re-specification - Make adjustments to the raw data so it is
compatible with statistical techniques and with the objectives of the research -
examples: assigning numbers, consistency checks, substitutions, deletions,
weighting, dummy variables, scale transformations, scale standardization
10. Statistical analysis - Perform various descriptive and inferential techniques
(see below) on the raw data. Make inferences from the sample to the whole
population. Test the results for statistical significance.
11. Interpret and integrate findings - What do the results mean? What conclusions
can be drawn? How do these findings relate to similar research?
12. Write the research report - Report usually has headings such as: 1) executive
summary; 2) objectives; 3) methodology; 4) main findings; 5) detailed charts
and diagrams. Present the report to the client in a 10-minute presentation. Be
prepared for questions.
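As a worked illustration of the sample-size question in step 7, the following sketch applies the standard formula for estimating a proportion, n = z²·p(1−p)/e²; the 95% confidence z-value and the worst-case p = 0.5 are conventional defaults, not figures taken from this text.

```python
import math

def sample_size_for_proportion(margin_of_error, confidence_z=1.96, p=0.5):
    """Minimum n to estimate a population proportion within +/- margin_of_error.

    Uses n = z^2 * p * (1 - p) / e^2, with p = 0.5 as the most
    conservative (largest-n) assumption about the true proportion.
    """
    n = (confidence_z ** 2) * p * (1 - p) / (margin_of_error ** 2)
    return math.ceil(n)

# A +/-5% margin at 95% confidence needs about 385 respondents.
print(sample_size_for_proportion(0.05))   # 385
# Tightening the margin to +/-3% roughly triples the required sample.
print(sample_size_for_proportion(0.03))   # 1068
```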

The design step may involve a pilot study in order to discover any hidden
issues. The codification and analysis steps are typically performed by computer,
using statistical software. The data collection steps can in some instances be
automated, but often require significant manpower to undertake. Interpretation
is a skill mastered only by experience.
Statistical analysis

The data acquired for quantitative marketing research can be analysed by
almost any of the range of techniques of statistical analysis, which can be
broadly divided into descriptive statistics and statistical inference. An important
set of techniques is that related to statistical surveys. In any instance, an
appropriate type of statistical analysis should take account of the various types
of error that may arise, as outlined below.
Reliability and Validity

Research should be tested for reliability, generalizability, and validity.
Generalizability is the ability to make inferences from a sample to the
population.
Reliability is the extent to which a measure will produce consistent results.

Test-retest reliability checks how similar the results are if the research is
repeated under similar circumstances. Stability over repeated measures is assessed
with the Pearson coefficient.

Alternative forms reliability checks how similar the results are if the research is
repeated using different forms.

Internal consistency reliability checks how well the individual measures
included in the research are converted into a composite measure. Internal
consistency may be assessed by correlating performance on two halves of a test
(split-half reliability). The value of the Pearson product-moment correlation
coefficient is adjusted with the Spearman-Brown prediction formula to correspond
to the correlation between two full-length tests. A commonly used measure is
Cronbach's α, which is equivalent to the mean of all possible split-half coefficients.
Reliability may be improved by increasing the sample size.
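A short sketch of these reliability calculations on invented item scores: the split-half correlation, its Spearman-Brown correction r_full = 2r/(1+r), and Cronbach's α computed as (k/(k−1))·(1 − Σ item variances / variance of total scores).

```python
import numpy as np

# Hypothetical data: 6 respondents answering a 4-item scale.
items = np.array([
    [4, 5, 4, 5],
    [2, 3, 2, 2],
    [5, 5, 4, 4],
    [3, 3, 3, 4],
    [1, 2, 2, 1],
    [4, 4, 5, 5],
])

# Split-half reliability: correlate scores on the two halves of the test.
half1, half2 = items[:, :2].sum(axis=1), items[:, 2:].sum(axis=1)
r_half = np.corrcoef(half1, half2)[0, 1]

# Spearman-Brown correction: estimated reliability of the full-length test.
r_full = 2 * r_half / (1 + r_half)

# Cronbach's alpha: k/(k-1) * (1 - sum(item variances) / variance(totals)).
k = items.shape[1]
alpha = k / (k - 1) * (1 - items.var(axis=0, ddof=1).sum()
                       / items.sum(axis=1).var(ddof=1))

print(f"split-half r = {r_half:.2f}, Spearman-Brown = {r_full:.2f}, "
      f"alpha = {alpha:.2f}")
```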

Validity asks whether the research measured what it intended to.

Content validation (also called face validity) checks how well the content of
the research is related to the variables to be studied; it seeks to answer whether
the research questions are representative of the variables being researched. It is a
demonstration that the items of a test are drawn from the domain being measured.

Criterion validation checks how meaningful the research criteria are relative to
other possible criteria. When the criterion is collected later the goal is to establish
predictive validity.

Construct validation checks what underlying construct is being measured.
There are three variants of construct validity: convergent validity (how well the
research relates to other measures of the same construct), discriminant validity
(how poorly the research relates to measures of opposing constructs), and
nomological validity (how well the research relates to other variables as required
by theory).

Internal validation, used primarily in experimental research designs, checks
the relation between the dependent and independent variables (i.e. did the
experimental manipulation of the independent variable actually cause the observed
results?).

External validation checks whether the experimental results can be
generalized.

Validity implies reliability: a valid measure must be reliable. Reliability does
not necessarily imply validity, however: a reliable measure need not be valid.
Types of errors

Random sampling errors:
- sample too small
- sample not representative
- inappropriate sampling method used
- random errors

Research design errors:
- bias introduced
- measurement error
- data analysis error
- sampling frame error
- population definition error
- scaling error
- question construction error

Interviewer errors:
- recording errors
- cheating errors
- questioning errors
- respondent selection error

Respondent errors:
- non-response error
- inability error
- falsification error

Hypothesis errors:
- type I error (also called alpha error): the study results lead to the rejection
of the null hypothesis even though it is actually true
- type II error (also called beta error): the study results lead to the acceptance
(non-rejection) of the null hypothesis even though it is actually false
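The simulation below illustrates the type I error rate: when the null hypothesis is true by construction, a test run at α = 0.05 should wrongly reject roughly 5% of the time (synthetic data; the two-sample t-test is just a convenient stand-in).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha, rejections, trials = 0.05, 0, 2000

# Both groups are drawn from the same distribution, so the null
# hypothesis (equal means) is true by construction.
for _ in range(trials):
    a = rng.normal(loc=0, scale=1, size=30)
    b = rng.normal(loc=0, scale=1, size=30)
    _, p_value = stats.ttest_ind(a, b)
    if p_value < alpha:        # a rejection here is a type I error
        rejections += 1

print(f"type I error rate ~ {rejections / trials:.3f}")  # close to 0.05
```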

Qualitative research
Qualitative research is a method of inquiry employed in many different
academic disciplines, traditionally in the social sciences, but also in market
research and further contexts.[1] Qualitative researchers aim to gather an
in-depth understanding of human behavior and the reasons that govern such
behavior. The qualitative method investigates the why and how of decision
making, not just what, where, and when. Hence, smaller but focused samples are
more often needed, rather than large samples.
In the conventional view, qualitative methods produce information only on the
particular cases studied, and any more general conclusions are only
propositions (informed assertions). Quantitative methods can then be used to
seek empirical support for such research hypotheses. This view has been
disputed by Oxford University professor Bent Flyvbjerg, who argues that
qualitative methods and case study research may be used both for hypothesis
testing and for generalizing beyond the particular cases studied.[2]

Data collection
Qualitative researchers may use different approaches in collecting data, such as
the grounded theory practice, narratology, storytelling, classical ethnography, or
shadowing. Qualitative methods are also loosely present in other
methodological approaches, such as action research or actor-network theory.
Forms of the data collected can include interviews and group discussions,
observation and reflection field notes, various texts, pictures, and other
materials.
Qualitative research often categorizes data into patterns as the primary basis for
organizing and reporting results. Qualitative researchers typically rely
on the following methods for gathering information: Participant Observation,
Non-participant Observation, Field Notes, Reflexive Journals, Structured
Interview, Semi-structured Interview, Unstructured Interview, and Analysis of
documents and materials.[3]

The ways of participating and observing can vary widely from setting to setting.
Participant observation is a strategy of reflexive learning, not a single method
of observing.[4] In participant observation,[5] researchers typically become
members of a culture, group, or setting, and adopt roles to conform to that
setting. In doing so, the aim is for the researcher to gain a closer insight into the
culture's practices, motivations and emotions. It is argued that the researchers'
ability to understand the experiences of the culture may be inhibited if they
observe without participating.
Some distinctive qualitative methods are the use of focus groups and key
informant interviews. The focus group technique involves a moderator
facilitating a small group discussion between selected individuals on a
particular topic. This is a particularly popular method in market research and
testing new initiatives with users/workers.
One traditional and specialized form of qualitative research is called cognitive
testing or pilot testing, which is used in the development of quantitative survey
items. Survey items are piloted on study participants to test the reliability and
validity of the items.
In the academic social sciences the most frequently used qualitative research
approaches include the following:
1. Ethnographic Research, used for investigating cultures by collecting and
describing data that is intended to help in the development of a theory.
This method is also called ethnomethodology or "methodology of the
people". An example of applied ethnographic research, is the study of a
particular culture and their understanding of the role of a particular
disease in their cultural framework.
2. Critical Social Research, used by a researcher to understand how people
communicate and develop symbolic meanings.
3. Ethical Inquiry, an intellectual analysis of ethical problems. It includes
the study of ethics as related to obligation, rights, duty, right and wrong,
choice etc.
4. Foundational Research, examines the foundations for a science, analyses
the beliefs and develops ways to specify how a knowledge base should
change in light of new information.
5. Historical Research, allows one to discuss past and present events in the
context of the present condition, and allows one to reflect and provide
possible answers to current issues and problems. Historical research
helps us in answering questions such as: where have we come from,
where are we, who are we now and where are we going?
6. Grounded Theory, is an inductive type of research, based or "grounded"
in the observations or data from which it was developed; it uses a
variety of data sources, including quantitative data, review of records,
interviews, observation and surveys.
7. Phenomenology, describes the subjective reality of an event, as
perceived by the study population; it is the study of a phenomenon.
8. Philosophical Research, is conducted by field experts within the
boundaries of a specific field of study or profession, the best qualified
individuals in any field of study to use intellectual analysis in order
to clarify definitions, identify ethics, or make a value judgment
concerning an issue in their field of study.

Data analysis
Interpretive techniques

The most common analysis of qualitative data is observer impression. That is,
expert or bystander observers examine the data, interpret it via forming an
impression and report their impression in a structured and sometimes
quantitative form.
Coding

Coding is an interpretive technique that both organizes the data and provides a
means to introduce the interpretations of it into certain quantitative methods.
Most coding requires the analyst to read the data and demarcate segments
within it. Each segment is labeled with a code, usually a word or short
phrase, that suggests how the associated data segments inform the research
objectives. When coding is complete, the analyst prepares reports via a mix of:
summarizing the prevalence of codes, discussing similarities and differences in
related codes across distinct original sources/contexts, or comparing the
relationship between one or more codes.
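As a toy illustration of that reporting step, the sketch below (with invented codes and segments) tallies how often each code was applied and which codes co-occur within the same segment.

```python
from collections import Counter
from itertools import combinations

# Hypothetical coded interview segments: each segment carries the set of
# codes an analyst applied to it.
coded_segments = [
    {"price_concern", "trust"},
    {"price_concern"},
    {"trust", "convenience"},
    {"price_concern", "convenience"},
    {"price_concern", "trust"},
]

# Prevalence: how many segments each code labels.
prevalence = Counter(code for seg in coded_segments for code in seg)
print(prevalence.most_common())

# Co-occurrence: which pairs of codes appear in the same segment.
pairs = Counter(pair for seg in coded_segments
                for pair in combinations(sorted(seg), 2))
print(pairs.most_common())
```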
Some qualitative data that is highly structured (e.g., open-ended responses from
surveys or tightly defined interview questions) is typically coded without
additional segmenting of the content. In these cases, codes are often applied as a
layer on top of the data. Quantitative analysis of these codes is typically the
capstone analytical step for this type of qualitative data.
Contemporary qualitative data analyses are sometimes supported by computer
programs, termed Computer Assisted Qualitative Data Analysis Software.
These programs do not supplant the interpretive nature of coding but rather are
aimed at enhancing the analyst's efficiency at data storage/retrieval and at
applying the codes to the data. Many programs offer efficiencies in editing and
revising coding, which allow for work sharing, peer review, and recursive
examination of data.
A frequent criticism of the coding method is that it seeks to transform qualitative
data into quantitative data, thereby draining the data of its variety, richness, and
individual character. Analysts respond to this criticism by thoroughly expositing
their definitions of codes and linking those codes soundly to the underlying
data, therein bringing back some of the richness that might be absent from a
mere list of codes.
Recursive abstraction

Some qualitative datasets are analyzed without coding. A common method here
is recursive abstraction, where datasets are summarized, those summaries are
then further summarized, and so on. The end result is a more compact summary
that would have been difficult to accurately discern without the preceding steps
of distillation.
A frequent criticism of recursive abstraction is that the final conclusions are
several times removed from the underlying data. While it is true that poor initial
summaries will certainly yield an inaccurate final report, qualitative analysts
can respond to this criticism. They do so, like those using the coding method, by
documenting the reasoning behind each summary step, citing examples from
the data where statements were included and where statements were excluded
from the intermediate summary.
Mechanical techniques

Some techniques rely on leveraging computers to scan and sort large sets of
qualitative data. At their most basic level, mechanical techniques rely on
counting words, phrases, or coincidences of tokens within the data. Often
referred to as content analysis, the output from these techniques is amenable to
many advanced statistical analyses.
Mechanical techniques are particularly well-suited for a few scenarios. One
such scenario is for datasets that are simply too large for a human to effectively
analyze, or where analysis of them would be cost-prohibitive relative to the
value of information they contain. Another scenario is when the chief value of a
dataset is the extent to which it contains "red flags" (e.g., searching for reports
of certain adverse events within a lengthy journal dataset from patients in a
clinical trial) or "green flags" (e.g., searching for mentions of your brand in
positive reviews of marketplace products).
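A minimal content-analysis sketch along these lines, using invented survey responses: tokenize the text, count term frequencies, and scan for a hypothetical set of red-flag terms.

```python
import re
from collections import Counter

# Invented open-ended survey responses.
responses = [
    "The delivery was fast but the packaging was damaged.",
    "Fast delivery, great price.",
    "Damaged box, slow refund process.",
]

# Tokenize: lowercase words only, ignoring punctuation.
tokens = [t for text in responses
          for t in re.findall(r"[a-z']+", text.lower())]
print(Counter(tokens).most_common(5))

# Flag scanning: count responses mentioning any "red flag" term.
red_flags = {"damaged", "slow", "refund"}
flagged = sum(1 for text in responses
              if red_flags & set(re.findall(r"[a-z']+", text.lower())))
print(f"{flagged} of {len(responses)} responses contain a red-flag term")
```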
A frequent criticism of mechanical techniques is the absence of a human
interpreter. And while masters of these methods are able to write sophisticated
software to mimic some human decisions, the bulk of the analysis is
nonhuman. Analysts respond by proving the value of their methods relative to
either a) hiring and training a human team to analyze the data or b) letting the
data go untouched, leaving any actionable nuggets undiscovered.
Paradigmatic differences

Contemporary qualitative research has been conducted from a large number of
paradigms that influence conceptual and metatheoretical concerns of
legitimacy, control, data analysis, ontology, and epistemology, among others.
Research conducted in the last 10 years has been characterized by a distinct turn
toward more interpretive, postmodern, and critical practices.[6] Guba and
Lincoln (2005) identify five main paradigms of contemporary qualitative
research: positivism, postpositivism, critical theories, constructivism, and
participatory/cooperative paradigms.[6] Each of the paradigms listed by Guba
and Lincoln are characterized by axiomatic differences in axiology, intended
action of research, control of research process/outcomes, relationship to
foundations of truth and knowledge, validity (see below), textual representation
and voice of the researcher/participants, and commensurability with other
paradigms. In particular, commensurability involves the extent to which
paradigmatic concerns can be retrofitted to each other in ways that make the
simultaneous practice of both possible.[7] Positivist and postpositivist
paradigms share commensurable assumptions but are largely incommensurable
with critical, constructivist, and participatory paradigms. Likewise, critical,
constructivist, and participatory paradigms are commensurable on certain issues
(e.g., intended action and textual representation).
Validation

A central issue in qualitative research is validity (also known as credibility
and/or dependability). There are many different ways of establishing validity,
including: member check, interviewer corroboration, peer debriefing, prolonged
engagement, negative case analysis, auditability, confirmability, bracketing, and
balance. Most of these methods were coined, or at least extensively described,
by Lincoln and Guba (1985).[8]
Academic research

By the end of the 1970s many leading journals began to publish qualitative
research articles[9] and several new journals emerged which published only
qualitative research studies and articles about qualitative research methods.[10]
In the 1980s and 1990s, the new qualitative research journals became more
multidisciplinary in focus, moving beyond qualitative research's traditional
disciplinary roots of anthropology, sociology, and philosophy.[10]
The new millennium saw a dramatic increase in the number of journals
specializing in qualitative research with at least one new qualitative research
journal being launched each year.

Notes
1. Denzin, Norman K. & Lincoln, Yvonna S. (Eds.). (2005). The Sage Handbook
of Qualitative Research (3rd ed.). Thousand Oaks, CA: Sage. ISBN 0-7619-2757-3.
2. Flyvbjerg, Bent (2006). "Five Misunderstandings About Case Study Research".
Qualitative Inquiry, vol. 12, no. 2, April, pp. 219-245; Flyvbjerg, Bent (2011).
"Case Study". In Norman K. Denzin and Yvonna S. Lincoln (Eds.), The Sage
Handbook of Qualitative Research (4th ed.), pp. 301-316. Thousand Oaks, CA: Sage.
3. Marshall, Catherine & Rossman, Gretchen B. (1998). Designing Qualitative
Research. Thousand Oaks, CA: Sage. ISBN 0-7619-1340-8.
4. Lindlof, T. R., & Taylor, B. C. (2002). Qualitative Communication Research
Methods (2nd ed.). Thousand Oaks, CA: Sage Publications. ISBN 0-7619-2493-0.
5. "Qualitative Research Methods: A Data Collector's Field Guide". techsociety.com.
http://www.techsociety.com/cal/soc190/fssba2009/ParticipantObservation.pdf.
Retrieved 7 October 2010.
6. Guba, E. G., & Lincoln, Y. S. (2005). "Paradigmatic controversies,
contradictions, and emerging influences". In N. K. Denzin & Y. S. Lincoln (Eds.),
The Sage Handbook of Qualitative Research (3rd ed.), pp. 191-215. Thousand Oaks,
CA: Sage. ISBN 0-7619-2757-3.
7. Guba, E. G., & Lincoln, Y. S. (2005). "Paradigmatic controversies,
contradictions, and emerging influences" (p. 200). In N. K. Denzin & Y. S. Lincoln
(Eds.), The Sage Handbook of Qualitative Research (3rd ed.), pp. 191-215.
Thousand Oaks, CA: Sage. ISBN 0-7619-2757-3.
8. Lincoln, Y. S. & Guba, E. G. (1985). Naturalistic Inquiry. Newbury Park, CA:
Sage Publications.
9. Loseke, Donileen R. & Cahill, Spencer E. (2007). "Publishing qualitative
manuscripts: Lessons learned". In C. Seale, G. Gobo, J. F. Gubrium, & D.
Silverman (Eds.), Qualitative Research Practice: Concise Paperback Edition,
pp. 491-506. London: Sage. ISBN 978-1-7619-4776-9.
10. Denzin, Norman K. & Lincoln, Yvonna S. (2005). "Introduction: The discipline
and practice of qualitative research". In N. K. Denzin & Y. S. Lincoln (Eds.),
The Sage Handbook of Qualitative Research (3rd ed.), pp. 1-33. Thousand Oaks,
CA: Sage. ISBN 0-7619-2757-3.
