
Motivation

Motivation is literally the desire to do things. It's the difference between waking
up before dawn to pound the pavement and lazing around the house all day. It's
the crucial element in setting and attaining goals, and research shows you can
influence your own levels of motivation and self-control. So figure out what you
want, power through the pain period, and start being who you want to be.
What is the Difference between Creativity and Innovation?

Creativity is the capability or act of conceiving something original or unusual.
Innovation is the implementation of something new.
Invention is the creation of something that has never been made before and is recognized as
the product of some unique insight.
If you have a brainstorm meeting and dream up dozens of new ideas then you have displayed
creativity but there is no innovation until something gets implemented. Somebody has to take
a risk and deliver something for a creative idea to be turned into an innovation. An invention
might be a product, device, or method that has never existed before. So every invention is
an innovation, but not every innovation is an invention. When your company first published
its website that was a major innovation for the company even though many other websites
already existed.
We tend to think of an innovation as a new product but you can innovate with a new process,
method, business model, partnership, route to market or marketing method. Indeed every
aspect of your business operation is a candidate for innovation. Peter Drucker said, "Every
organisation must prepare for the abandonment of everything it does." So do not restrict your
vision of innovation to products. Some of the most powerful innovations you can make are in
business methods and customer services. If we look at companies like Dell, eBay and
Amazon we see that their great innovations were with their business models rather than in
new products.
Innovations can be incremental or radical. Every improvement that you make in products or
services can be seen as an incremental innovation. Most businesses and most managers are
good at incremental innovation. They see problems in the current set-up and they fix them.
Radical innovations involve finding an entirely new way to do things. As such they are often
risky and difficult to implement. Most larger organisations and most managers are poor at
radical innovation. If you had been making LP records then you could have introduced
incremental innovations in your design and marketing. However if this was your strategy then
a radical innovation, the CD, would eventually have killed you. A CD manufacturer could
similarly introduce various incremental improvements, but once again a radical innovation,
music downloads over the internet, would make its offering obsolete. So we need to constantly
look for incremental innovations and radical innovations. We need to develop creativity and
turn it quickly into innovation.

Marketing Research
Managers need information in order to introduce products and services that create
value in the mind of the customer. But the perception of value is a subjective one,
and what customers value this year may be quite different from what they value next
year. As such, the attributes that create value cannot simply be deduced from
common knowledge. Rather, data must be collected and analyzed. The goal of
marketing research is to provide the facts and direction that managers need to make
their most important marketing decisions.
To maximize the benefit of marketing research, those who use it need to understand
the research process and its limitations.
Marketing Research vs. Market Research
These terms often are used interchangeably, but technically there is a difference.
Market research deals specifically with the gathering of information about a market's
size and trends. Marketing research covers a wider range of activities. While it may
involve market research, marketing research is a more general systematic process
that can be applied to a variety of marketing problems.
The Value of Information
Information can be useful, but what determines its real value to the organization? In
general, the value of information is determined by:

- The ability and willingness to act on the information.
- The accuracy of the information.
- The level of indecisiveness that would exist without the information.
- The amount of variation in the possible results.
- The level of risk aversion.
- The reaction of competitors to any decision improved by the information.
- The cost of the information in terms of time and money.

The Marketing Research Process


Once the need for marketing research has been established, most marketing
research projects involve these steps:
1. Define the problem
2. Determine research design
3. Identify data types and sources
4. Design data collection forms and questionnaires
5. Determine sample plan and size
6. Collect the data
7. Analyze and interpret the data
8. Prepare the research report
Problem Definition
The decision problem faced by management must be translated into a market
research problem in the form of questions that define the information that is required
to make the decision and how this information can be obtained. Thus, the decision
problem is translated into a research problem. For example, a decision problem may
be whether to launch a new product. The corresponding research problem might be
to assess whether the market would accept the new product.
The objective of the research should be defined clearly. To ensure that the true
decision problem is addressed, it is useful for the researcher to outline possible
scenarios of the research results and then for the decision maker to formulate plans
of action under each scenario. The use of such scenarios can ensure that the
purpose of the research is agreed upon before it commences.

Research Design
Marketing research can be classified into one of three categories:

- Exploratory research
- Descriptive research
- Causal research

These classifications are made according to the objective of the research. In some
cases the research will fall into one of these categories, but in other cases different
phases of the same research project will fall into different categories.

- Exploratory research has the goal of formulating problems more precisely,
  clarifying concepts, gathering explanations, gaining insight, eliminating
  impractical ideas, and forming hypotheses. Exploratory research can be
  performed using a literature search, surveying certain people about their
  experiences, focus groups, and case studies. When surveying people,
  exploratory research studies would not try to acquire a representative sample,
  but rather seek to interview those who are knowledgeable and who might be
  able to provide insight concerning the relationship among variables. Case
  studies can include contrasting situations or benchmarking against an
  organization known for its excellence. Exploratory research may develop
  hypotheses, but it does not seek to test them. Exploratory research is
  characterized by its flexibility.

- Descriptive research is more rigid than exploratory research and seeks to
  describe users of a product, determine the proportion of the population that
  uses a product, or predict future demand for a product. As opposed to
  exploratory research, descriptive research should define the questions, the
  people surveyed, and the method of analysis prior to beginning data collection.
  In other words, the who, what, where, when, why, and how aspects of the
  research should be defined. Such preparation allows one the opportunity to
  make any required changes before the costly process of data collection has
  begun.

  There are two basic types of descriptive research: longitudinal studies and
  cross-sectional studies. Longitudinal studies are time series analyses that
  make repeated measurements of the same individuals, thus allowing one to
  monitor behavior such as brand-switching. However, longitudinal studies are
  not necessarily representative since many people may refuse to participate
  because of the commitment required. Cross-sectional studies sample the
  population to make measurements at a specific point in time. A special type of
  cross-sectional analysis is a cohort analysis, which tracks an aggregate of
  individuals who experience the same event within the same time interval over
  time. Cohort analyses are useful for long-term forecasting of product demand.

- Causal research seeks to find cause-and-effect relationships between
  variables. It accomplishes this goal through laboratory and field experiments.

Data Types and Sources


Secondary Data
Before going through the time and expense of collecting primary data, one should
check for secondary data that previously may have been collected for other
purposes but that can be used in the immediate study. Secondary data may be
internal to the firm, such as sales invoices and warranty cards, or may be external to
the firm such as published data or commercially available data. The government
census is a valuable source of secondary data.
Secondary data has the advantage of saving time and reducing data gathering costs.
The disadvantages are that the data may not fit the problem perfectly and that the
accuracy may be more difficult to verify for secondary data than for primary data.
Some secondary data is republished by organizations other than the original source.
Because errors can occur and important explanations may be missing in republished
data, one should obtain secondary data directly from its source. One also should
consider who the source is and whether the results may be biased.
There are several criteria that one should use to evaluate secondary data.

- Whether the data is useful in the research study.
- How current the data is and whether it applies to the time period of interest.
- Errors and accuracy - whether the data is dependable and can be verified.
- Presence of bias in the data.
- Specifications and methodologies used, including data collection method,
  response rate, quality and analysis of the data, sample size and sampling
  technique, and questionnaire design.
- Objective of the original data collection.
- Nature of the data, including definition of variables, units of measure,
  categories used, and relationships examined.

Primary Data
Often, secondary data must be supplemented by primary data originated specifically
for the study at hand. Some common types of primary data are:

- demographic and socioeconomic characteristics
- psychological and lifestyle characteristics
- attitudes and opinions
- awareness and knowledge - for example, brand awareness
- intentions - for example, purchase intentions. While useful, intentions are not
  a reliable indication of actual future behavior.
- motivation - a person's motives are more stable than his/her behavior, so
  motive is a better predictor of future behavior than is past behavior.
- behavior

Primary data can be obtained by communication or by observation. Communication
involves questioning respondents either verbally or in writing. This method is
versatile, since one needs only to ask for the information; however, the response
may not be accurate. Communication usually is quicker and cheaper than
observation. Observation involves the recording of actions and is performed by either
a person or some mechanical or electronic device. Observation is less versatile than
communication since some attributes of a person may not be readily observable,
such as attitudes, awareness, knowledge, intentions, and motivation. Observation
also might take longer since observers may have to wait for appropriate events to
occur, though observation using scanner data might be quicker and more cost
effective. Observation typically is more accurate than communication.
Personal interviews have an interviewer bias that mail-in questionnaires do not have.
For example, in a personal interview the respondent's perception of the interviewer
may affect the responses.
Questionnaire Design
The questionnaire is an important tool for gathering primary data. Poorly constructed
questions can result in large errors and invalidate the research data, so significant
effort should be put into the questionnaire design. The questionnaire should be
tested thoroughly prior to conducting the survey.

Measurement Scales
Attributes can be measured on nominal, ordinal, interval, and ratio scales:

- Nominal numbers are simply identifiers, with the only permissible
  mathematical use being for counting. Example: social security numbers.
- Ordinal scales are used for ranking. The interval between the numbers
  conveys no meaning. Median and mode calculations can be performed on
  ordinal numbers. Example: class ranking.
- Interval scales maintain an equal interval between numbers. These scales
  can be used for ranking and for measuring the interval between two numbers.
  Since the zero point is arbitrary, ratios cannot be taken between numbers on
  an interval scale; however, mean, median, and mode are all valid. Example:
  temperature scale.
- Ratio scales are referenced to an absolute zero value, so ratios between
  numbers on the scale are meaningful. In addition to mean, median, and
  mode, geometric averages also are valid. Example: weight.
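As a quick sketch, the permissible summary statistics for each scale type can be illustrated in Python; the data values below are hypothetical:

```python
import statistics

# Hypothetical survey data, one list per scale type.
nominal = ["A", "B", "A", "C", "A"]       # brand codes: counting/mode only
ordinal = [1, 2, 2, 3, 5]                 # class ranks: median and mode are valid
interval = [20.0, 22.5, 25.0, 27.5]       # temperatures in Celsius: mean is valid, ratios are not
ratio = [2.0, 4.0, 8.0]                   # weights in kg: ratios and geometric means are valid

print(statistics.mode(nominal))           # most frequent category: "A"
print(statistics.median(ordinal))         # middle rank: 2
print(statistics.mean(interval))          # arithmetic mean: 23.75
print(statistics.geometric_mean(ratio))   # geometric mean (approximately 4.0)
```

Taking a mean of the nominal codes or a ratio of the two temperatures would be meaningless, which is exactly what the scale hierarchy above rules out.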

Validity and Reliability


The validity of a test is the extent to which differences in scores reflect differences in
the measured characteristic. Predictive validity is a measure of the usefulness of a
measuring instrument as a predictor. Proof of predictive validity is determined by the
correlation between results and actual behavior. Construct validity is the extent to
which a measuring instrument measures what it intends to measure.
Reliability is the extent to which a measurement is repeatable with the same results.
A measurement may be reliable and not valid. However, if a measurement is valid,
then it also is reliable and if it is not reliable, then it cannot be valid. One way to show
reliability is to show stability by repeating the test with the same results.

Attitude Measurement
Many of the questions in a marketing research survey are designed to measure
attitudes. Attitudes are a person's general evaluation of something. Customer
attitude is an important factor for the following reasons:

- Attitude helps to explain how ready one is to do something.
- Attitudes do not change much over time.
- Attitudes produce consistency in behavior.
- Attitudes can be related to preferences.

Attitudes can be measured using the following procedures:

- Self-reporting - subjects are asked directly about their attitudes. Self-reporting
  is the most common technique used to measure attitude.
- Observation of behavior - assuming that one's behavior is a result of one's
  attitudes, attitudes can be inferred by observing behavior. For example, one's
  attitude about an issue can be inferred from whether he/she signs a petition
  related to it.
- Indirect techniques - use unstructured stimuli such as word association tests.
- Performance of objective tasks - assumes that one's performance depends on
  attitude. For example, the subject can be asked to memorize the arguments of
  both sides of an issue. He/she is more likely to do a better job on the
  arguments that favor his/her stance.
- Physiological reactions - the subject's response to a stimulus is measured
  using electronic or mechanical means. While the intensity can be measured, it
  is difficult to know if the attitude is positive or negative.
- Multiple measures - a mixture of techniques can be used to validate the
  findings, especially worthwhile when self-reporting is used.

There are several types of attitude rating scales:

- Equal-appearing interval scaling - a set of statements is assembled. These
  statements are selected according to their position on an interval scale of
  favorableness, and statements with a small degree of dispersion are chosen.
  Respondents then are asked to indicate with which statements they agree.
- Likert method of summated ratings - a statement is made and the
  respondents indicate their degree of agreement or disagreement on a
  five-point scale (Strongly Disagree, Disagree, Neither Agree Nor Disagree,
  Agree, Strongly Agree).
- Semantic differential scale - a scale is constructed using phrases describing
  attributes of the product to anchor each end. For example, the left end may
  state, "Hours are inconvenient" and the right end may state, "Hours are
  convenient". The respondent then marks one of the seven blanks between the
  statements to indicate his/her opinion about the attribute.
- Stapel scale - similar to the semantic differential scale except that 1) points
  on the scale are identified by numbers, 2) only one statement is used and if
  the respondent disagrees a negative number should be marked, and 3) there
  are 10 positions instead of seven. This scale does not require that bipolar
  adjectives be developed and it can be administered by telephone.
- Q-sort technique - the respondent is forced to construct a normal distribution
  by placing a specified number of cards in one of 11 stacks according to how
  desirable he/she finds the characteristics written on the cards.

Sampling Plan
The sampling frame is the pool from which the interviewees are chosen. The
telephone book often is used as a sampling frame, but it has some shortcomings.
Telephone books exclude those households that do not have telephones and those
households with unlisted numbers. And since a certain percentage of the numbers
listed in a phone book are out of service, many people who have just moved are not
sampled. Such sampling biases can be overcome by using random digit dialing. Mall
intercepts represent another sampling frame, though there are many people who do
not shop at malls, and those who shop more often will be overrepresented unless
their answers are weighted in inverse proportion to their frequency of mall shopping.
In designing the research study, one should consider the potential errors. Two
sources of errors are random sampling error and non-sampling error. Sampling
errors arise because the sample size is less than the size of the population being
studied, so there is a non-zero confidence interval around the results.
Non-sampling errors are those caused by faulty coding, untruthful responses,
respondent fatigue, etc.
There is a tradeoff between sample size and cost. The larger the sample size, the
smaller the sampling error but the higher the cost. After a certain point the smaller
sampling error cannot be justified by the additional cost.
While a larger sample size may reduce sampling error, it actually may increase the
total error. There are two reasons for this effect. First, a larger sample size may
reduce the ability to follow up on non-responses. Second, even if there is a sufficient
number of interviewers for follow-ups, a larger number of interviewers may result in a
less uniform interview process.
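The diminishing returns of a larger sample can be sketched with the standard margin-of-error formula for a sample proportion; the proportion and sample sizes below are illustrative:

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for a sample proportion p with sample size n."""
    return z * math.sqrt(p * (1 - p) / n)

# Quadrupling the sample size only halves the sampling error, while data
# collection cost typically grows roughly linearly with n.
for n in (100, 400, 1600):
    print(n, round(margin_of_error(0.5, n), 4))
```

At some point the extra precision no longer justifies the extra cost, which is the tradeoff described above.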
Data Collection
In addition to the intrinsic sampling error, the actual data collection process will
introduce additional errors. These errors are called non-sampling errors. Some
non-sampling errors may be intentional on the part of the interviewer, who may introduce
a bias by leading the respondent to provide a certain response. The interviewer also
may introduce unintentional errors, for example, due to not having a clear
understanding of the interview process or due to fatigue.
Respondents also may introduce errors. A respondent may introduce intentional
errors by lying or simply by not responding to a question. A respondent may
introduce unintentional errors by not understanding the question, guessing, not
paying close attention, and being fatigued or distracted.
Such non-sampling errors can be reduced through quality control techniques.
Data Analysis - Preliminary Steps

Before analysis can be performed, raw data must be transformed into the right
format. First, it must be edited so that errors can be corrected or omitted. The data
must then be coded; this procedure converts the edited raw data into numbers or
symbols. A codebook is created to document how the data was coded. Finally, the
data is tabulated to count the number of samples falling into various categories.
Simple tabulations count the occurrences of each variable independently of the other
variables. Cross tabulations, also known as contingency tables or cross tabs, treat
two or more variables simultaneously. However, since the variables are laid out in a
two-dimensional table, cross tabbing more than two variables is difficult to visualize
since more than two dimensions would be required. Cross tabulation can be
performed for nominal and ordinal variables.
Cross tabulation is the most commonly utilized data analysis method in marketing
research. Many studies take the analysis no further than cross tabulation. This
technique divides the sample into sub-groups to show how the dependent variable
varies from one subgroup to another. A third variable can be introduced to uncover a
relationship that initially was not evident.
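The difference between simple tabulation and cross tabulation can be sketched in a few lines of Python; the survey records below are hypothetical:

```python
from collections import Counter

# Hypothetical survey records: (age_group, uses_product).
responses = [
    ("18-34", "yes"), ("18-34", "yes"), ("18-34", "no"),
    ("35-54", "yes"), ("35-54", "no"), ("35-54", "no"),
    ("55+",   "no"),  ("55+",   "no"),
]

# Simple tabulation: each variable counted independently of the other.
age_counts = Counter(age for age, _ in responses)
use_counts = Counter(use for _, use in responses)

# Cross tabulation: the two variables counted jointly, showing how product
# use varies from one age subgroup to another.
cross = Counter(responses)

print(age_counts)                 # counts per age group
print(use_counts)                 # counts per usage answer
print(cross[("18-34", "yes")])    # joint count for one cell of the table
```

The joint counts are the cells of the contingency table; the simple tabulations are its row and column totals.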
Conjoint Analysis
Conjoint analysis is a powerful technique for determining consumer preferences
for product attributes.
Hypothesis Testing
A basic fact about testing hypotheses is that a hypothesis may be rejected but that
the hypothesis never can be unconditionally accepted until all possible evidence is
evaluated. In the case of sampled data, the information set cannot be complete. So if
a test using such data does not reject a hypothesis, the conclusion is not necessarily
that the hypothesis should be accepted.
The null hypothesis in an experiment is the hypothesis that the independent variable
has no effect on the dependent variable. The null hypothesis is expressed as H0.
This hypothesis is assumed to be true unless proven otherwise. The alternative to
the null hypothesis is the hypothesis that the independent variable does have an
effect on the dependent variable. This hypothesis is known as the alternative,
research, or experimental hypothesis and is expressed as H1. This alternative
hypothesis states that the relationship observed between the variables cannot be
explained by chance alone.
There are two types of errors in evaluating a hypothesis:

- Type I error: occurs when one rejects the null hypothesis and accepts the
  alternative, when in fact the null hypothesis is true.
- Type II error: occurs when one accepts the null hypothesis when in fact the
  null hypothesis is false.

Because their names are not very descriptive, these types of errors sometimes are
confused. Some people jokingly define a Type III error to occur when one confuses
Type I and Type II. To illustrate the difference, it is useful to consider a trial by jury in
which the null hypothesis is that the defendant is innocent. If the jury convicts a truly
innocent defendant, a Type I error has occurred. If, on the other hand, the jury
declares a truly guilty defendant to be innocent, a Type II error has occurred.
Hypothesis testing involves the following steps:

1. Formulate the null and alternative hypotheses.
2. Choose the appropriate test.
3. Choose a level of significance (alpha) and determine the rejection region.
4. Gather the data and calculate the test statistic.
5. Determine the probability of the observed value of the test statistic under the
   null hypothesis, given the sampling distribution that applies to the chosen test.
6. Compare the value of the test statistic to the rejection threshold.
7. Based on the comparison, reject or do not reject the null hypothesis.
8. Make the marketing research conclusion.
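The steps above can be sketched for a one-sample t-test; the satisfaction scores, the hypothesized mean, and the critical value (taken from a standard t table) are all illustrative:

```python
import statistics

# Hypothetical data: satisfaction scores from n = 10 customers.
# H0: the true mean score is 5.0; H1: it differs from 5.0.
scores = [5.8, 6.1, 5.5, 6.4, 5.9, 6.2, 5.7, 6.0, 5.6, 6.3]
mu0 = 5.0
n = len(scores)

# Calculate the test statistic for a one-sample t-test.
mean = statistics.mean(scores)
sd = statistics.stdev(scores)            # sample standard deviation
t = (mean - mu0) / (sd / n ** 0.5)

# Two-sided critical value for alpha = 0.05 with df = 9, from a t table.
t_crit = 2.262
reject_h0 = abs(t) > t_crit
print(round(t, 2), reject_h0)
```

Note that failing to reject H0 here would not prove the mean is 5.0; as stated above, a hypothesis can be rejected but never unconditionally accepted from sampled data.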

In order to analyze whether research results are statistically significant or simply by
chance, a test of statistical significance can be run.
Tests of Statistical Significance
The chi-square (χ²) goodness-of-fit test is used to determine whether a set of
proportions has specified numerical values. It often is used to analyze bivariate
cross-tabulated data. Some examples of situations that are well-suited for this test
are:

- A manufacturer of packaged products test markets a new product and wants
  to know if sales of the new product will be in the same relative proportion of
  package sizes as sales of existing products.
- A company's sales revenue comes from Product A (50%), Product B (30%),
  and Product C (20%). The firm wants to know whether recent fluctuations in
  these proportions are random or whether they represent a real shift in sales.

The chi-square test is performed by defining k categories and observing the number
of cases falling into each category. Knowing the expected number of cases falling in
each category, one can define chi-squared as:
χ² = Σ ( Oi - Ei )² / Ei
where
Oi = the number of observed cases in category i,
Ei = the number of expected cases in category i,
k = the number of categories,
and the summation runs from i = 1 to i = k.
Before calculating the chi-square value, one needs to determine the expected
frequency for each cell. When all categories are expected to be equally likely, this is
done by dividing the number of samples by the number of cells; for cross-tabulated
data, the expected frequency of a cell is its row total multiplied by its column total,
divided by the total sample size.
To use the output of the chi-square function, one uses a chi-square table. To do so,
one needs to know the number of degrees of freedom (df). For a goodness-of-fit test
this is equal to the number of categories minus one; for chi-square applied to
cross-tabulated data, the number of degrees of freedom is equal to
( number of columns - 1 ) ( number of rows - 1 ).
The conventional critical level of 0.05 normally is used. If the calculated output value
from the function is greater than the chi-square look-up table value, the null
hypothesis is rejected.
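A minimal sketch of the formula, using hypothetical counts for the Product A/B/C example (100 recent sales expected to split 50/30/20 under H0):

```python
# Observed sales counts versus the counts expected under H0 (no real shift).
observed = [42, 33, 25]
expected = [50, 30, 20]

# Chi-square statistic: sum over categories of (Oi - Ei)^2 / Ei.
chi_sq = sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Critical value for alpha = 0.05 with df = k - 1 = 2, from a chi-square table.
chi_crit = 5.991
print(round(chi_sq, 2), chi_sq > chi_crit)
```

Here the statistic falls below the critical value, so the fluctuation in sales proportions would be treated as random rather than a real shift.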
ANOVA
Another test of significance is the Analysis of Variance (ANOVA) test. The primary
purpose of ANOVA is to test for differences between multiple means. Whereas the
t-test can be used to compare two means, ANOVA is needed to compare three or
more means. If multiple t-tests were applied, the probability of a Type I error
(rejecting a true null hypothesis) increases as the number of comparisons increases.
One-way ANOVA examines whether multiple means differ. The test is called an
F-test. ANOVA calculates the ratio of the variation between groups to the variation
within groups (the F ratio). While ANOVA was designed for comparing several
means, it also can be used to compare two means. Two-way ANOVA allows for a
second independent variable and addresses interaction.
To run a one-way ANOVA, use the following steps:
1. Identify the independent and dependent variables.
2. Describe the variation by breaking it into three parts - the total variation, the
portion that is within groups, and the portion that is between groups (or
among groups for more than two groups). The total variation (SStotal) is the
sum of the squares of the differences between each value and the grand
mean of all the values in all the groups. The in-group variation (SSwithin) is the
sum of the squares of the differences between each element's value and its
group mean. The variation between group means (SSbetween) is the total
variation minus the in-group variation (SStotal - SSwithin).
3. Measure the difference between each group's mean and the grand mean.
4. Perform a significance test on the differences.
5. Interpret the results.
This F-test assumes that the group variances are approximately equal and that the
observations are independent. It also assumes normally distributed data; however,
since this is a test on means the Central Limit Theorem holds as long as the sample
size is not too small.
ANOVA is efficient for analyzing data using relatively few observations and can be
used with categorical variables. Note that regression can perform a similar analysis
to that of ANOVA.
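The SS decomposition and F ratio described above can be computed directly; the three groups of ratings below are hypothetical:

```python
import statistics

# Hypothetical ratings of a product collected at three store locations.
groups = [
    [7, 8, 6, 7],    # store A
    [5, 6, 5, 4],    # store B
    [8, 9, 7, 8],    # store C
]

all_values = [v for g in groups for v in g]
grand_mean = statistics.mean(all_values)

# SS_between: variation of the group means around the grand mean.
ss_between = sum(len(g) * (statistics.mean(g) - grand_mean) ** 2 for g in groups)
# SS_within: variation of each value around its own group mean.
ss_within = sum((v - statistics.mean(g)) ** 2 for g in groups for v in g)

df_between = len(groups) - 1                 # k - 1
df_within = len(all_values) - len(groups)    # N - k
f_ratio = (ss_between / df_between) / (ss_within / df_within)

# Critical F for alpha = 0.05 with (2, 9) degrees of freedom, from an F table.
print(round(f_ratio, 2), f_ratio > 4.26)
```

A large F ratio means the group means vary far more than the values within each group do, so the null hypothesis of equal means is rejected.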
Discriminant Analysis
While analysis of the difference in means between groups provides information about
individual variables, it is not useful for determining their individual impacts when the
variables are used in combination. Since some variables will not be independent
from one another, one needs a test that can consider them simultaneously in order to
take into account their interrelationship. One such test is to construct a linear
combination, essentially a weighted sum of the variables. To determine which
variables discriminate between two or more naturally occurring groups, discriminant
analysis is used. Discriminant analysis can determine which variables are the best
predictors of group membership. It determines which groups differ with respect to the
mean of a variable, and then uses that variable to predict new cases of group
membership. Essentially, the discriminant function problem is a one-way ANOVA
problem in that one can determine whether multiple groups are significantly different
from one another with respect to the mean of a particular variable.
A discriminant analysis consists of the following steps:
1. Formulate the problem.
2. Determine the discriminant function coefficients that result in the highest ratio
of between-group variation to within-group variation.
3. Test the significance of the discriminant function.
4. Interpret the results.
5. Determine the validity of the analysis.
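For two groups and two variables, the linear combination described above can be sketched in pure Python as Fisher's discriminant; the data (two attitude scores per respondent) and group labels are hypothetical:

```python
# Hypothetical two-variable data for two naturally occurring groups.
buyers     = [(2, 3), (3, 3), (3, 4), (2, 4)]
non_buyers = [(6, 7), (7, 7), (7, 8), (6, 8)]

def mean_vec(points):
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

def scatter(points, m):
    # Within-group scatter entries (sxx, sxy, syy) around the group mean m.
    sxx = sum((p[0] - m[0]) ** 2 for p in points)
    syy = sum((p[1] - m[1]) ** 2 for p in points)
    sxy = sum((p[0] - m[0]) * (p[1] - m[1]) for p in points)
    return sxx, sxy, syy

m1, m2 = mean_vec(buyers), mean_vec(non_buyers)
s1, s2 = scatter(buyers, m1), scatter(non_buyers, m2)
sxx, sxy, syy = s1[0] + s2[0], s1[1] + s2[1], s1[2] + s2[2]

# Discriminant weights w = Sw^-1 (m1 - m2), with the 2x2 inverse written out.
det = sxx * syy - sxy * sxy
d = (m1[0] - m2[0], m1[1] - m2[1])
w = ((syy * d[0] - sxy * d[1]) / det, (sxx * d[1] - sxy * d[0]) / det)

def score(p):
    # The linear combination (weighted sum) of the variables.
    return w[0] * p[0] + w[1] * p[1]

# Classify a new case against the midpoint of the projected group means;
# in this data set m1 projects above the threshold.
threshold = (score(m1) + score(m2)) / 2
print(score((3, 4)) > threshold)     # True: classified with the buyers
```

The weights maximize the ratio of between-group to within-group variation along the projection, which is step 2 of the procedure above.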

Discriminant analysis analyzes the dependency relationship, whereas factor analysis
and cluster analysis address the interdependency among variables.
Factor Analysis
Factor analysis is a very popular technique to analyze interdependence. Factor
analysis studies the entire set of interrelationships without defining variables to be
dependent or independent. Factor analysis combines variables to create a smaller
set of factors. Mathematically, a factor is a linear combination of variables. A factor is
not directly observable; it is inferred from the variables. The technique identifies
underlying structure among the variables, reducing the number of variables to a
more manageable set. Factor analysis groups variables according to their
correlation.
The factor loading can be defined as the correlations between the factors and their
underlying variables. A factor loading matrix is a key output of the factor analysis. An
example matrix is shown below.
                          Factor 1    Factor 2    Factor 3
Variable 1
Variable 2
Variable 3
Column's Sum of Squares:
Each cell in the matrix represents correlation between the variable and the factor
associated with that cell. The square of this correlation represents the proportion of
the variation in the variable explained by the factor. The sum of the squares of the
factor loadings in each column is called an eigenvalue. An eigenvalue represents the
amount of variance in the original variables that is associated with that factor. The
communality is the amount of the variable variance explained by common factors.
A rule of thumb for deciding on the number of factors is that each included factor
must explain at least as much variance as does an average variable. In other words,
only factors for which the eigenvalue is greater than one are used. Other criteria for
determining the number of factors include the Scree plot criteria and the percentage
of variance criteria.
To facilitate interpretation, the axis can be rotated. Rotation of the axis is equivalent
to forming linear combinations of the factors. A commonly used rotation strategy is
the varimax rotation. Varimax attempts to force the loadings in each column to be
either close to zero or large in magnitude.
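A toy two-variable case makes the eigenvalue-greater-than-one rule concrete; the correlation value here is hypothetical:

```python
# For two standardized variables with correlation r, the correlation matrix
# [[1, r], [r, 1]] has eigenvalues 1 + r and 1 - r. Each variable contributes
# one unit of variance, so an "average variable" corresponds to an eigenvalue of 1.
r = 0.8
eigenvalues = [1 + r, 1 - r]

# Keep only factors explaining more variance than an average variable does.
retained = [ev for ev in eigenvalues if ev > 1]
share = retained[0] / sum(eigenvalues)
print(len(retained), round(share, 2))   # one factor, explaining 90% of the variance
```

With r = 0.8 a single factor carries 90% of the total variance, so the two correlated variables reduce to one factor.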
Cluster Analysis
Market segmentation usually is based not on one factor but on multiple factors.

Initially, each object represents its own cluster. The challenge is to find a way to
combine objects so that relatively homogenous clusters can be formed. Such
clusters should be internally homogenous and externally heterogeneous. Cluster
analysis is one way to accomplish this goal. Rather than being a statistical test, it is
more of a collection of algorithms for grouping objects, or in the case of marketing
research, grouping people. Cluster analysis is useful in the exploratory phase of
research when there are no a priori hypotheses.
Cluster analysis steps:
1. Formulate the problem, collecting data and choosing the variables to analyze.
2. Choose a distance measure. The most common is the Euclidean distance.
Other possibilities include the squared Euclidean distance, city-block
(Manhattan) distance, Chebychev distance, power distance, and percent
disagreement.
3. Choose a clustering procedure (linkage, nodal, or factor procedures).
4. Determine the number of clusters. They should be well separated and ideally
they should be distinct enough to give them descriptive names such as
professionals, buffs, etc.
5. Profile the clusters.
6. Assess the validity of the clustering.
Marketing Research Report
The format of the marketing research report varies with the needs of the
organization. The report often contains the following sections:

Authorization letter for the research

Table of Contents

List of illustrations

Executive summary

Research objectives

Methodology

Results

Limitations

Conclusions and recommendations

Appendices containing copies of the questionnaires, etc.

Concluding Thoughts
Marketing research by itself does not arrive at marketing decisions, nor does it
guarantee that the organization will be successful in marketing its products.
However, when conducted in a systematic, analytical, and objective manner,
marketing research can reduce the uncertainty in the decision-making process and
increase the probability and magnitude of success.

Marketing Research
Managers need information in order to introduce products and services that create
value in the mind of the customer. But the perception of value is a subjective one,
and what customers value this year may be quite different from what they value next
year. As such, the attributes that create value cannot simply be deduced from
common knowledge. Rather, data must be collected and analyzed. The goal of
marketing research is to provide the facts and direction that managers need to make
their more important marketing decisions.
To maximize the benefit of marketing research, those who use it need to understand
the research process and its limitations.
Marketing Research vs. Market Research
These terms often are used interchangeably, but technically there is a difference.
Market research deals specifically with the gathering of information about a market's
size and trends. Marketing research covers a wider range of activities. While it may
involve market research, marketing research is a more general systematic process
that can be applied to a variety of marketing problems.
The Value of Information
Information can be useful, but what determines its real value to the organization? In
general, the value of information is determined by:

The ability and willingness to act on the information.

The accuracy of the information.

The level of indecisiveness that would exist without the information.

The amount of variation in the possible results.

The level of risk aversion.

The reaction of competitors to any decision improved by the information.

The cost of the information in terms of time and money.

The Marketing Research Process


Once the need for marketing research has been established, most marketing
research projects involve these steps:
1. Define the problem
2. Determine research design
3. Identify data types and sources
4. Design data collection forms and questionnaires
5. Determine sample plan and size
6. Collect the data
7. Analyze and interpret the data
8. Prepare the research report
Problem Definition
The decision problem faced by management must be translated into a market
research problem in the form of questions that define the information that is required
to make the decision and how this information can be obtained. Thus, the decision
problem is translated into a research problem. For example, a decision problem may
be whether to launch a new product. The corresponding research problem might be
to assess whether the market would accept the new product.
The objective of the research should be defined clearly. To ensure that the true
decision problem is addressed, it is useful for the researcher to outline possible
scenarios of the research results and then for the decision maker to formulate plans
of action under each scenario. The use of such scenarios can ensure that the
purpose of the research is agreed upon before it commences.
Research Design

Marketing research can be classified into one of three categories:

Exploratory research

Descriptive research

Causal research

These classifications are made according to the objective of the research. In some
cases the research will fall into one of these categories, but in other cases different
phases of the same research project will fall into different categories.

Exploratory research has the goal of formulating problems more precisely,
clarifying concepts, gathering explanations, gaining insight, eliminating
impractical ideas, and forming hypotheses. Exploratory research can be
performed using a literature search, surveying certain people about their
experiences, focus groups, and case studies. When surveying people,
exploratory research studies would not try to acquire a representative sample,
but rather, seek to interview those who are knowledgeable and who might be
able to provide insight concerning the relationship among variables. Case
studies can include contrasting situations or benchmarking against an
organization known for its excellence. Exploratory research may develop
hypotheses, but it does not seek to test them. Exploratory research is
characterized by its flexibility.

Descriptive research is more rigid than exploratory research and seeks to
describe users of a product, determine the proportion of the population that
uses a product, or predict future demand for a product. As opposed to
exploratory research, descriptive research should define questions, people
surveyed, and the method of analysis prior to beginning data collection. In
other words, the who, what, where, when, why, and how aspects of the
research should be defined. Such preparation allows one the opportunity to
make any required changes before the costly process of data collection has
begun.
There are two basic types of descriptive research: longitudinal studies and
cross-sectional studies. Longitudinal studies are time series analyses that
make repeated measurements of the same individuals, thus allowing one to
monitor behavior such as brand-switching. However, longitudinal studies are
not necessarily representative since many people may refuse to participate
because of the commitment required. Cross-sectional studies sample the
population to make measurements at a specific point in time. A special type of
cross-sectional analysis is a cohort analysis, which tracks an aggregate of
individuals who experience the same event within the same time interval over
time. Cohort analyses are useful for long-term forecasting of product demand.

Causal research seeks to find cause and effect relationships between
variables. It accomplishes this goal through laboratory and field experiments.

Data Types and Sources


Secondary Data
Before going through the time and expense of collecting primary data, one should
check for secondary data that previously may have been collected for other
purposes but that can be used in the immediate study. Secondary data may be
internal to the firm, such as sales invoices and warranty cards, or may be external to
the firm such as published data or commercially available data. The government
census is a valuable source of secondary data.
Secondary data has the advantage of saving time and reducing data gathering costs.
The disadvantages are that the data may not fit the problem perfectly and that the
accuracy may be more difficult to verify for secondary data than for primary data.
Some secondary data is republished by organizations other than the original source.
Because errors can occur and important explanations may be missing in republished
data, one should obtain secondary data directly from its source. One also should
consider who the source is and whether the results may be biased.
There are several criteria that one should use to evaluate secondary data.

Whether the data is useful in the research study.

How current the data is and whether it applies to time period of interest.

Errors and accuracy - whether the data is dependable and can be verified.

Presence of bias in the data.

Specifications and methodologies used, including data collection method,
response rate, quality and analysis of the data, sample size and sampling
technique, and questionnaire design.

Objective of the original data collection.

Nature of the data, including definition of variables, units of measure,
categories used, and relationships examined.

Primary Data
Often, secondary data must be supplemented by primary data originated specifically
for the study at hand. Some common types of primary data are:

demographic and socioeconomic characteristics

psychological and lifestyle characteristics

attitudes and opinions

awareness and knowledge - for example, brand awareness

intentions - for example, purchase intentions. While useful, intentions are not
a reliable indication of actual future behavior.

motivation - a person's motives are more stable than his/her behavior, so
motive is a better predictor of future behavior than is past behavior.

behavior

Primary data can be obtained by communication or by observation. Communication
involves questioning respondents either verbally or in writing. This method is
versatile, since one needs only to ask for the information; however, the response
may not be accurate. Communication usually is quicker and cheaper than
observation. Observation involves the recording of actions and is performed by either
a person or some mechanical or electronic device. Observation is less versatile than
communication since some attributes of a person may not be readily observable,
such as attitudes, awareness, knowledge, intentions, and motivation. Observation
also might take longer since observers may have to wait for appropriate events to
occur, though observation using scanner data might be quicker and more cost
effective. Observation typically is more accurate than communication.
Personal interviews have an interviewer bias that mail-in questionnaires do not have.
For example, in a personal interview the respondent's perception of the interviewer
may affect the responses.
Questionnaire Design
The questionnaire is an important tool for gathering primary data. Poorly constructed
questions can result in large errors and invalidate the research data, so significant
effort should be put into the questionnaire design. The questionnaire should be
tested thoroughly prior to conducting the survey.

Measurement Scales
Attributes can be measured on nominal, ordinal, interval, and ratio scales:

Nominal numbers are simply identifiers, with the only permissible
mathematical use being for counting. Example: social security numbers.

Ordinal scales are used for ranking. The interval between the numbers
conveys no meaning. Median and mode calculations can be performed on
ordinal numbers. Example: class ranking

Interval scales maintain an equal interval between numbers. These scales
can be used for ranking and for measuring the interval between two numbers.
Since the zero point is arbitrary, ratios cannot be taken between numbers on
an interval scale; however, mean, median, and mode are all valid. Example:
temperature scale

Ratio scales are referenced to an absolute zero value, so ratios between
numbers on the scale are meaningful. In addition to mean, median, and
mode, geometric averages also are valid. Example: weight
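These permissible operations can be demonstrated with Python's standard statistics module; the data values below are invented for illustration:

```python
import statistics

class_ranks = [1, 2, 2, 3, 5]      # ordinal: median and mode are valid
temps_c = [20.0, 22.0, 25.0]       # interval: mean also becomes valid
weights_kg = [50.0, 60.0, 75.0]    # ratio: ratios and geometric means are valid

print(statistics.median(class_ranks))          # 2
print(statistics.mode(class_ranks))            # 2
print(statistics.mean(temps_c))                # valid on an interval scale
print(statistics.geometric_mean(weights_kg))   # valid only on a ratio scale
print(weights_kg[1] / weights_kg[0])           # 1.2 -- a meaningful ratio
```

A ratio such as temps_c[1] / temps_c[0] would be meaningless, since the Celsius zero point is arbitrary.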

Validity and Reliability


The validity of a test is the extent to which differences in scores reflect differences in
the measured characteristic. Predictive validity is a measure of the usefulness of a
measuring instrument as a predictor. Proof of predictive validity is determined by the
correlation between results and actual behavior. Construct validity is the extent to
which a measuring instrument measures what it intends to measure.
Reliability is the extent to which a measurement is repeatable with the same results.
A measurement may be reliable and not valid. However, if a measurement is valid,
then it also is reliable and if it is not reliable, then it cannot be valid. One way to show
reliability is to show stability by repeating the test with the same results.

Attitude Measurement
Many of the questions in a marketing research survey are designed to measure
attitudes. Attitudes are a person's general evaluation of something. Customer
attitude is an important factor for the following reasons:

Attitude helps to explain how ready one is to do something.

Attitudes do not change much over time.

Attitudes produce consistency in behavior.

Attitudes can be related to preferences.

Attitudes can be measured using the following procedures:

Self-reporting - subjects are asked directly about their attitudes. Self-reporting
is the most common technique used to measure attitude.

Observation of behavior - assuming that one's behavior is a result of one's
attitudes, attitudes can be inferred by observing behavior. For example, one's
attitude about an issue can be inferred by whether he/she signs a petition
related to it.

Indirect techniques - use unstructured stimuli such as word association tests.

Performance of objective tasks - assumes that one's performance depends on
attitude. For example, the subject can be asked to memorize the arguments of
both sides of an issue. He/she is more likely to do a better job on the
arguments that favor his/her stance.

Physiological reactions - the subject's response to a stimulus is measured using
electronic or mechanical means. While the intensity can be measured, it is
difficult to know if the attitude is positive or negative.

Multiple measures - a mixture of techniques can be used to validate the
findings, especially worthwhile when self-reporting is used.

There are several types of attitude rating scales:

Equal-appearing interval scaling - a set of statements is assembled. These
statements are selected according to their position on an interval scale of
favorableness. Statements are chosen that have a small degree of dispersion.
Respondents then are asked to indicate with which statements they agree.

Likert method of summated ratings - a statement is made and the
respondents indicate their degree of agreement or disagreement on a five-point
scale (Strongly Disagree, Disagree, Neither Agree Nor Disagree, Agree,
Strongly Agree).

Semantic differential scale - a scale is constructed using phrases describing
attributes of the product to anchor each end. For example, the left end may
state, "Hours are inconvenient" and the right end may state, "Hours are
convenient". The respondent then marks one of the seven blanks between the
statements to indicate his/her opinion about the attribute.

Stapel Scale - similar to the semantic differential scale except that 1) points
on the scale are identified by numbers, 2) only one statement is used and if
the respondent disagrees a negative number should be marked, and 3) there are
10 positions instead of seven. This scale does not require that bipolar
adjectives be developed and it can be administered by telephone.

Q-sort technique - the respondent is forced to construct a normal distribution
by placing a specified number of cards in one of 11 stacks according to how
desirable he/she finds the characteristics written on the cards.

Sampling Plan

The sampling frame is the pool from which the interviewees are chosen. The
telephone book often is used as a sampling frame, but it has some shortcomings.
Telephone books exclude those households that do not have telephones and those
households with unlisted numbers. Since a certain percentage of the numbers listed
in a phone book are out of service, there are many people who have just moved who
are not sampled. Such sampling biases can be overcome by using random digit
dialing. Mall intercepts represent another sampling frame, though there are many
people who do not shop at malls and those who shop more often will be overrepresented unless their answers are weighted in inverse proportion to their
frequency of mall shopping.
In designing the research study, one should consider the potential errors. Two
sources of errors are random sampling error and non-sampling error. Sampling
errors are those due to the fact that there is a non-zero confidence interval of the
results because of the sample size being less than the population being studied.
Non-sampling errors are those caused by faulty coding, untruthful responses,
respondent fatigue, etc.
There is a tradeoff between sample size and cost. The larger the sample size, the
smaller the sampling error but the higher the cost. After a certain point the smaller
sampling error cannot be justified by the additional cost.
While a larger sample size may reduce sampling error, it actually may increase the
total error. There are two reasons for this effect. First, a larger sample size may
reduce the ability to follow up on non-responses. Second, even if there is a sufficient
number of interviewers for follow-ups, a larger number of interviewers may result in a
less uniform interview process.
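The tradeoff described above follows from the standard error of the sample mean, which shrinks only with the square root of the sample size. A minimal sketch (the standard deviation of 10.0 is a hypothetical value):

```python
import math

def standard_error(sample_std: float, n: int) -> float:
    """Standard error of the sample mean: s / sqrt(n)."""
    return sample_std / math.sqrt(n)

# Quadrupling the sample size only halves the sampling error,
# while roughly quadrupling the data collection cost.
se_100 = standard_error(10.0, 100)   # 1.0
se_400 = standard_error(10.0, 400)   # 0.5
print(se_100, se_400)
```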
Data Collection
In addition to the intrinsic sampling error, the actual data collection process will
introduce additional errors. These errors are called non-sampling errors. Some
non-sampling errors may be intentional on the part of the interviewer, who may introduce
a bias by leading the respondent to provide a certain response. The interviewer also
may introduce unintentional errors, for example, due to not having a clear
understanding of the interview process or due to fatigue.
Respondents also may introduce errors. A respondent may introduce intentional
errors by lying or simply by not responding to a question. A respondent may
introduce unintentional errors by not understanding the question, guessing, not
paying close attention, and being fatigued or distracted.
Such non-sampling errors can be reduced through quality control techniques.
Data Analysis - Preliminary Steps
Before analysis can be performed, raw data must be transformed into the right
format. First, it must be edited so that errors can be corrected or omitted. The data

must then be coded; this procedure converts the edited raw data into numbers or
symbols. A codebook is created to document how the data was coded. Finally, the
data is tabulated to count the number of samples falling into various categories.
Simple tabulations count the occurrences of each variable independently of the other
variables. Cross tabulations, also known as contingency tables or cross tabs, treat
two or more variables simultaneously. However, since the variables are in a
two-dimensional table, cross tabbing more than two variables is difficult to visualize since
more than two dimensions would be required. Cross tabulation can be performed for
nominal and ordinal variables.
Cross tabulation is the most commonly utilized data analysis method in marketing
research. Many studies take the analysis no further than cross tabulation. This
technique divides the sample into sub-groups to show how the dependent variable
varies from one subgroup to another. A third variable can be introduced to uncover a
relationship that initially was not evident.
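As a sketch of the two tabulation styles just described, the following fragment counts hypothetical survey responses with the standard library's collections.Counter (the data and variable names are invented for illustration):

```python
from collections import Counter

# Hypothetical survey responses: (gender, preferred_brand) pairs.
responses = [
    ("male", "A"), ("male", "B"), ("female", "A"),
    ("female", "A"), ("female", "B"), ("male", "A"),
]

# Simple tabulation: count one variable independently of the others.
brand_counts = Counter(brand for _, brand in responses)

# Cross tabulation: count joint occurrences of two variables, which
# shows how brand preference varies from one subgroup to another.
cross_tab = Counter(responses)

print(brand_counts["A"], brand_counts["B"])   # 4 2
print(cross_tab[("female", "A")])             # 2
```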
Conjoint Analysis
Conjoint analysis is a powerful technique for determining consumer preferences
for product attributes.
Hypothesis Testing
A basic fact about testing hypotheses is that a hypothesis may be rejected but that
the hypothesis never can be unconditionally accepted until all possible evidence is
evaluated. In the case of sampled data, the information set cannot be complete. So if
a test using such data does not reject a hypothesis, the conclusion is not necessarily
that the hypothesis should be accepted.
The null hypothesis in an experiment is the hypothesis that the independent variable
has no effect on the dependent variable. The null hypothesis is expressed as H0.
This hypothesis is assumed to be true unless proven otherwise. The alternative to
the null hypothesis is the hypothesis that the independent variable does have an
effect on the dependent variable. This hypothesis is known as the alternative,
research, or experimental hypothesis and is expressed as H1. This alternative
hypothesis states that the relationship observed between the variables cannot be
explained by chance alone.
There are two types of errors in evaluating a hypothesis:

Type I error: occurs when one rejects the null hypothesis and accepts the
alternative, when in fact the null hypothesis is true.

Type II error: occurs when one accepts the null hypothesis when in fact the
null hypothesis is false.

Because their names are not very descriptive, these types of errors sometimes are
confused. Some people jokingly define a Type III error to occur when one confuses
Type I and Type II. To illustrate the difference, it is useful to consider a trial by jury in
which the null hypothesis is that the defendant is innocent. If the jury convicts a truly
innocent defendant, a Type I error has occurred. If, on the other hand, the jury
declares a truly guilty defendant to be innocent, a Type II error has occurred.
Hypothesis testing involves the following steps:

Formulate the null and alternative hypotheses.

Choose the appropriate test.

Choose a level of significance (alpha) - determine the rejection region.

Gather the data and calculate the test statistic.

Determine the probability of the observed value of the test statistic under the
null hypothesis given the sampling distribution that applies to the chosen test.

Compare the value of the test statistic to the rejection threshold.

Based on the comparison, reject or do not reject the null hypothesis.

Make the marketing research conclusion.

In order to analyze whether research results are statistically significant or simply by
chance, a test of statistical significance can be run.
Tests of Statistical Significance
The chi-square (χ²) goodness-of-fit test is used to determine whether a set of
proportions have specified numerical values. It often is used to analyze bivariate
cross-tabulated data. Some examples of situations that are well-suited for this test
are:

A manufacturer of packaged products test markets a new product and wants
to know if sales of the new product will be in the same relative proportion of
package sizes as sales of existing products.

A company's sales revenue comes from Product A (50%), Product B (30%),
and Product C (20%). The firm wants to know whether recent fluctuations in
these proportions are random or whether they represent a real shift in sales.

The chi-square test is performed by defining k categories and observing the number
of cases falling into each category. Knowing the expected number of cases falling in
each category, one can define chi-square as:
χ² = Σ ( Oi - Ei )² / Ei
where
Oi = the number of observed cases in category i,
Ei = the number of expected cases in category i,
k = the number of categories,
the summation runs from i = 1 to i = k.
Before calculating the chi-square value, one needs to determine the expected
frequency for each cell. Under a null hypothesis of equal proportions, this is done
by dividing the number of samples by the number of cells in the table.
To use the output of the chi-square function, one uses a chi-square table. To do so,
one needs to know the number of degrees of freedom (df). For chi-square applied to
cross-tabulated data, the number of degrees of freedom is equal to
( number of columns - 1 ) ( number of rows - 1 )
For a one-way classification, df is the number of categories minus one. The conventional critical level
of 0.05 normally is used. If the calculated output value from the function is greater
than the chi-square look-up table value, the null hypothesis is rejected.
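Applying the formula above to the Product A/B/C revenue example, a minimal Python sketch (the observed counts are hypothetical):

```python
# Hypothetical observed counts against the historical 50/30/20 mix.
observed = [60, 25, 15]
total = sum(observed)
expected = [0.50 * total, 0.30 * total, 0.20 * total]   # [50.0, 30.0, 20.0]

# chi-square = sum over categories of (Oi - Ei)^2 / Ei
chi_square = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
df = len(observed) - 1   # one-way classification: k - 1 = 2

print(round(chi_square, 3))   # 4.083
```

Since 4.083 is below the 0.05 critical value of 5.991 for df = 2, the fluctuation would not be judged a real shift in sales.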
ANOVA
Another test of significance is the Analysis of Variance (ANOVA) test. The primary
purpose of ANOVA is to test for differences between multiple means. Whereas the
t-test can be used to compare two means, ANOVA is needed to compare three or
more means. If multiple t-tests were applied, the probability of a Type I error
(rejecting a true null hypothesis) increases as the number of comparisons increases.
One-way ANOVA examines whether multiple means differ. The test is called an
F-test. ANOVA calculates the ratio of the variation between groups to the variation
within groups (the F ratio). While ANOVA was designed for comparing several
means, it also can be used to compare two means. Two-way ANOVA allows for a
second independent variable and addresses interaction.
To run a one-way ANOVA, use the following steps:
1. Identify the independent and dependent variables.
2. Describe the variation by breaking it into three parts - the total variation, the
portion that is within groups, and the portion that is between groups (or
among groups for more than two groups). The total variation (SS_total) is the
sum of the squares of the differences between each value and the grand
mean of all the values in all the groups. The in-group variation (SS_within) is
the sum of the squares of the differences between each element's value and
the group mean. The variation between group means (SS_between) is the total
variation minus the in-group variation (SS_total - SS_within).
3. Measure the difference between each group's mean and the grand mean.
4. Perform a significance test on the differences.
5. Interpret the results.
This F-test assumes that the group variances are approximately equal and that the
observations are independent. It also assumes normally distributed data; however,
since this is a test on means the Central Limit Theorem holds as long as the sample
size is not too small.
ANOVA is efficient for analyzing data using relatively few observations and can be
used with categorical variables. Note that regression can perform a similar analysis
to that of ANOVA.
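The decomposition in step 2 and the F ratio can be sketched directly in Python; the three groups of customer ratings below are hypothetical:

```python
# Hypothetical product ratings from three customer groups.
groups = [
    [4.0, 5.0, 6.0],
    [6.0, 7.0, 8.0],
    [8.0, 9.0, 10.0],
]

all_values = [v for g in groups for v in g]
grand_mean = sum(all_values) / len(all_values)

# SS_total: squared differences between each value and the grand mean.
ss_total = sum((v - grand_mean) ** 2 for v in all_values)

# SS_within: squared differences between each value and its group mean.
ss_within = sum(
    sum((v - sum(g) / len(g)) ** 2 for v in g) for g in groups
)

# SS_between: the remainder of the total variation.
ss_between = ss_total - ss_within

# F ratio: between-group variance over within-group variance.
df_between = len(groups) - 1                 # k - 1
df_within = len(all_values) - len(groups)    # N - k
f_ratio = (ss_between / df_between) / (ss_within / df_within)

print(ss_total, ss_within, ss_between, f_ratio)   # 30.0 6.0 24.0 12.0
```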
Discriminant Analysis
While analysis of the difference in means between groups provides information about
individual variables, it is not useful for determining their individual impacts when the
variables are used in combination. Since some variables will not be independent
from one another, one needs a test that can consider them simultaneously in order to
take into account their interrelationship. One such test is to construct a linear
combination, essentially a weighted sum of the variables. To determine which
variables discriminate between two or more naturally occurring groups, discriminant
analysis is used. Discriminant analysis can determine which variables are the best
predictors of group membership. It determines which groups differ with respect to the
mean of a variable, and then uses that variable to predict new cases of group
membership. Essentially, the discriminant function problem is a one-way ANOVA
problem in that one can determine whether multiple groups are significantly different
from one another with respect to the mean of a particular variable.
A discriminant analysis consists of the following steps:
1. Formulate the problem.
2. Determine the discriminant function coefficients that result in the highest ratio
of between-group variation to within-group variation.
3. Test the significance of the discriminant function.
4. Interpret the results.
5. Determine the validity of the analysis.

Discriminant analysis analyzes the dependency relationship, whereas factor analysis
and cluster analysis address the interdependency among variables.
Factor Analysis
Factor analysis is a very popular technique to analyze interdependence. Factor
analysis studies the entire set of interrelationships without defining variables to be
dependent or independent. Factor analysis combines variables to create a smaller
set of factors. Mathematically, a factor is a linear combination of variables. A factor is
not directly observable; it is inferred from the variables. The technique identifies
underlying structure among the variables, reducing the number of variables to a
more manageable set. Factor analysis groups variables according to their
correlation.
Factor loadings can be defined as the correlations between the factors and their
underlying variables. A factor loading matrix is a key output of the factor analysis. An
example matrix is shown below.
             Factor 1    Factor 2    Factor 3
Variable 1
Variable 2
Variable 3
Column's Sum of Squares:
Each cell in the matrix represents correlation between the variable and the factor
associated with that cell. The square of this correlation represents the proportion of
the variation in the variable explained by the factor. The sum of the squares of the
factor loadings in each column is called an eigenvalue. An eigenvalue represents the
amount of variance in the original variables that is associated with that factor. The
communality is the amount of the variable variance explained by common factors.
A rule of thumb for deciding on the number of factors is that each included factor
must explain at least as much variance as does an average variable. In other words,
only factors for which the eigenvalue is greater than one are used. Other criteria for
determining the number of factors include the Scree plot criteria and the percentage
of variance criteria.
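Following the definitions above, eigenvalues are column sums and communalities are row sums of the squared loadings (this holds for unrotated loadings). A sketch with an invented loading matrix:

```python
# Hypothetical factor loading matrix: rows are variables, columns are factors.
loadings = [
    [0.9, 0.1],   # Variable 1
    [0.8, 0.2],   # Variable 2
    [0.1, 0.7],   # Variable 3
]

# Eigenvalue of each factor: sum of squared loadings down its column.
eigenvalues = [
    sum(row[j] ** 2 for row in loadings) for j in range(len(loadings[0]))
]

# Communality of each variable: sum of squared loadings across its row.
communalities = [sum(x ** 2 for x in row) for row in loadings]

# Rule of thumb: retain only factors whose eigenvalue exceeds one.
retained = [j for j, ev in enumerate(eigenvalues) if ev > 1.0]

print(retained)   # [0] -- only the first factor is kept
```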
To facilitate interpretation, the axis can be rotated. Rotation of the axis is equivalent
to forming linear combinations of the factors. A commonly used rotation strategy is
the varimax rotation. Varimax attempts to force the column entries to be either close
to zero or one.
Cluster Analysis
Market segmentation usually is based not on one factor but on multiple factors.

Initially, each object represents its own cluster. The challenge is to find a way to
combine objects so that relatively homogeneous clusters can be formed. Such
clusters should be internally homogeneous and externally heterogeneous. Cluster
analysis is one way to accomplish this goal. Rather than being a statistical test, it is
more of a collection of algorithms for grouping objects, or in the case of marketing
research, grouping people. Cluster analysis is useful in the exploratory phase of
research when there are no a priori hypotheses.
Cluster analysis steps:
1. Formulate the problem, collecting data and choosing the variables to analyze.
2. Choose a distance measure. The most common is the Euclidean distance.
Other possibilities include the squared Euclidean distance, city-block
(Manhattan) distance, Chebychev distance, power distance, and percent
disagreement.
3. Choose a clustering procedure (linkage, nodal, or factor procedures).
4. Determine the number of clusters. They should be well separated and ideally
they should be distinct enough to give them descriptive names such as
professionals, buffs, etc.
5. Profile the clusters.
6. Assess the validity of the clustering.
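The distance measures listed in step 2 are simple to compute. A sketch for two hypothetical respondents, each described by two standardized segmentation variables:

```python
import math

def euclidean(a, b):
    """Straight-line distance: sqrt of the sum of squared differences."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def manhattan(a, b):
    """City-block distance: sum of absolute differences."""
    return sum(abs(x - y) for x, y in zip(a, b))

def chebyshev(a, b):
    """Chebychev distance: largest absolute difference on any variable."""
    return max(abs(x - y) for x, y in zip(a, b))

p, q = (1.0, 2.0), (4.0, 6.0)   # differences of 3 and 4
print(euclidean(p, q))   # 5.0
print(manhattan(p, q))   # 7.0
print(chebyshev(p, q))   # 4.0
```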
Marketing Research Report
The format of the marketing research report varies with the needs of the
organization. The report often contains the following sections:

Authorization letter for the research

Table of Contents

List of illustrations

Executive summary

Research objectives

Methodology

Results

Limitations

Conclusions and recommendations

Appendices containing copies of the questionnaires, etc.

Concluding Thoughts
Marketing research by itself does not arrive at marketing decisions, nor does it
guarantee that the organization will be successful in marketing its products.
However, when conducted in a systematic, analytical, and objective manner,
marketing research can reduce the uncertainty in the decision-making process and
increase the probability and magnitude of success.

PRODUCTION MANAGEMENT

DEFINITION
The job of coordinating and controlling the activities required to make a product, typically
involving effective control of scheduling, cost, performance, quality, and waste requirements.
Materials management is a scientific technique concerned with the planning,
organizing, and control of the flow of materials from their initial purchase to
their destination. The aim of materials management is to get: 1. The right quality.
Production planning is the planning of production and manufacturing modules in a company or industry. It allocates resources such as employees, materials, and production capacity in order to serve different customers.
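The allocation idea in the definition above can be sketched in code. The fragment below schedules hypothetical customer orders against a fixed machine capacity; the order data, the field names, and the greedy highest-priority-first rule are all assumptions for illustration, not a standard production-planning algorithm.

```python
# Minimal sketch of production planning as resource allocation:
# given customer orders and limited capacity, schedule what fits.
# All figures below are hypothetical.

def plan_production(orders, capacity_hours):
    """Greedily schedule orders (lowest priority number first) within capacity."""
    scheduled, deferred = [], []
    remaining = capacity_hours
    for order in sorted(orders, key=lambda o: o["priority"]):
        if order["hours"] <= remaining:
            scheduled.append(order["id"])
            remaining -= order["hours"]
        else:
            deferred.append(order["id"])
    return scheduled, deferred, remaining

orders = [
    {"id": "A", "hours": 30, "priority": 1},
    {"id": "B", "hours": 50, "priority": 2},
    {"id": "C", "hours": 40, "priority": 3},
]
scheduled, deferred, spare = plan_production(orders, capacity_hours=90)
print(scheduled, deferred, spare)  # A and B fit within 90 hours; C is deferred
```

A real plan would also weigh due dates, setup times, and material availability, but the core trade-off between demand and capacity is the same.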
What is a batch record?
A batch record is a document that accompanies a pharmaceutical product as it is made. The batch record directs the operators in exactly how to make the product, and the operators must follow the batch record instructions exactly as written.
Original Article
Ethical Considerations Of Pharmaceutical Sales In The Primary Care Arena
R Handley
Citation
R Handley. Ethical Considerations Of Pharmaceutical Sales In The Primary Care Arena. The
Internet Journal of Academic Physician Assistants. 2008 Volume 7 Number 1.
Abstract
The conflict of interest between pharmaceutical sales and the ethical practice of medicine has
long existed. While pharmaceutical companies have lost much of their ability to financially
incentivize clinicians, they continue to have undue influence over prescribing practice by
means of detailing. In an attempt to gain large market shares, these companies exploit this
mode of medical information dissemination, creating enormous endorsement bias, and
ultimately placing patients at the mercy of these manipulation tactics. Jonsen et al. have
established the four topic method of ethical discourse which includes medical indications,
patient preferences, quality of life, and contextual features. These topics allow for an
intelligent discussion concerning the ethics of pharmaceutical business practices. The
integrity of evidence-based medicine as well as the safety and welfare of patients are at risk.
How clinicians respond to these pressures will have positive or negative impact on this
medical ethics dilemma.
Introduction
The conflict of interest between pharmaceutical sales and the ethical practice of medicine has
long existed. Since the 1950s, educators, administrators, researchers, and clinicians have
fought for corrective action (Podolsky & Greene, 2008). Pharmaceutical companies have
spent approximately $21 billion on marketing, while 90% of this budget has gone
toward directly influencing physician practice (Brennan et al., 2006). Physicians and other
health care practitioners have a fiduciary duty to act in the best interest of their patients.
Jonsen, Siegler, and Winslade (2006) define this duty as "[owing] undivided loyalty to clients and [working] for their benefit" (p. 163). Furthermore, according to these authors, it is especially important that fiduciaries avoid "financial conflicts of interest that prejudice their clients' interest" (p. 163).
Now that pharmaceutical companies have lost much of their ability to financially persuade
practitioners with large monetary gifts, the focus on ethics behind their influence has shifted
more toward the issue of "detailing" (Huddle, 2008). Detailing is loosely defined as the process of face-to-face contact between a medical provider and a drug company representative ("drug rep") for the purpose of educating the medical provider on all of the details of a particular drug, service, or device. Many argue that a conflict of interest still exists for
patients, despite a cap on monetary incentives (Brennan et al., 2006; Higgins, 2007; Spurling
& Mansfield, 2007). They contend that any gift, large or small, may still influence the
decision making process of a medical provider, and thus conflict with the better interest of the
patient. They often cite social science research which has demonstrated the powerful impulse
to reciprocate for even small gifts (Brennan et al., 2006). Indeed, from their standpoint, pens
and lunches strongly count toward that influence. For those who seem to believe that little
influence can be generated from such small gifts, the persuasion is thought to stem from the
dissemination of biased information (Huddle, 2008). These conflicting viewpoints have
stirred an ongoing battle between the unbiased better interest of patient care and the financial
self-interest of multibillion dollar pharmaceutical companies. The integrity of evidence-based
medicine and ultimately the safety and welfare of patients hang in the balance as the battle
rages on.
Through the lens of Jonsen et al.'s four-topic method, the far-reaching implications of this subject will be explored.
Medical Indications
Medical indications are defined as "the facts, opinions, and interpretations about the patient's physical and/or psychological condition that provide a reasonable justification for diagnostic and therapeutic interventions" (Jonsen et al., 2006, p. 14). The concepts of beneficence and
nonmaleficence must be considered regarding what is medically indicated for patients. These
concepts define what is necessary to create maximum benefit and avoid harm to the patient.
The conflict of interest caused within the drug rep/provider relationship, through the use of
detailing and gifting providers, sheds suspicion on what is truly medically indicated for
patients. Clinicians often have an enormous range of choices when it comes to what medicine or device they may choose for a patient. Unfortunately, many providers succumb to the pressures placed upon them by drug or device representatives when it comes to patient care. This may not be in the patient's best interest.
Evidence-based practice has emerged as the new standard of patient care. As such, many
higher educational institutions have begun to shun pharmaceutical influence in the way they
provide care for their patients. For example, Yale, Stanford, and the University of
Pennsylvania have limited relationships between member healthcare providers and
pharmaceutical companies (Higgins, 2007). Yet other institutions embrace these types of
symbiotic relationships. At the forefront of this practice is the Weill Medical College of Cornell University, which recently opened the Clinique Skin Wellness Center (Higgins, 2007).
Created mostly from a $4.75 million grant from Clinique, the center justifies its
existence through the education of skin cancer prevention. While this is a noble effort,
physicians in this setting will find it difficult to convince patients the care they are receiving
is unbiased (Higgins, 2007). In other words, providers in this setting might find it difficult to
provide medically indicated and evidence-based care when that evidence points away from
the use of Clinique products. What are the plans in case of therapeutic failure should Clinique
products and educational materials bring detriment to patients? The probabilities of success
of Clinique products should have been weighed against the larger body of evidence based
medicine before this relationship of biased synergy was struck.
Patient Preferences
The central theme of patient preference is the constant tension between the ideas of autonomy
and paternalism. Autonomy has been defined as "the moral right to choose and follow one's own plan of life and action" (Jonsen et al., 2006, p. 52). The idea of paternalism rests in assuming a position of authority over what is thought to be in the patient's best interest and thus ignoring that patient's preferences (Jonsen et al., 2006, p. 54). Unscrupulous clinicians, armed with biased information given to them by drug companies, might assume a paternalistic stance with their patients if those patients aren't convinced that the drugs or products their provider is promoting are on par with alternative, less expensive treatments. If
the relationship between the drug rep and clinician is severely biased, either through close
friendship or substantial financial incentive, the clinician may feel compelled to use his or her
position of authority to override patient preference.
For example, a patient may come to the clinician wanting to control her hypothyroidism with a cost-effective, generic brand of medication, but the clinician may be incentivized to recommend the more accurately dosed brand name. Despite evidence that there is, in this case, no significant difference between the generic and brand-name medication, the clinician will stand on the authority of his drug-rep-biased influence and convince the impressionable patient that there is a significant and clinically felt difference. The patient may not have the money to afford the brand name, but she is now convinced that she must have the superior brand name if she is to feel better. Under this undue influence, the patient is now ready to over-extend her budget to secure the medication. Of course, the clinician will attempt to ease the initial cost of the medicine with samples or coupons, but the patient will ultimately be made responsible for its cost when the samples and coupon offers run out or expire.
Furthermore, because the clinician is in a position of authority, he or she will substantially
contribute toward achieving a placebo effect in patients through the mere suggestion that a
particular drug is better than another, thus strengthening the patient's belief in the fallacy. The
patient may feel better not because of an actual drug effect, but by the mere suggestion of its
superiority. Patients will thus continue to spend their money needlessly because they have
been misled by their medical providers.
While it seems obvious to most that this kind of practice constitutes a breach of medical ethics, it occurs all the time in clinical practice. It's this author's humble opinion that this kind of ethic continues to exist because it has been tolerated for so long and has been comfortably categorized as a grey area in medical ethics. Even while the patient has the full capacity to decide what he himself prefers, his preferences are ignored and overridden, and he is duped into believing the all too expensive or sometimes deadly (Vioxx) lie. All the while, through continued promotional campaigns, trumped-up clinical trials, and paid physician endorsements, pharmaceutical companies and clinicians share in the financial fanfare.
This breach of ethics brings to mind the ethical concepts of scope of disclosure,
comprehension, and informed consent. It is important for patients to fully comprehend the
decisions they are making. They can make these decisions only if all the pertinent facts are fully disclosed to them. If they are being intentionally or unintentionally misinformed, how then are they to fully comprehend the impact of their decisions? How are they to give informed consent? The sad fact is this: they are giving misinformed consent.
Quality Of Life
The concept of quality of life is difficult to define. Quality is an elusive term, which means
many different things to many different people. In other words, a judgment of quality must be
made. This judgment will differ considerably based on the one making it. Some will judge quality based on the "sanctity of life," which implies that "physical life be sustained under any condition for as long as possible" (Jonsen et al., 2006, p. 111). Quality of life is understood from both personal and observer evaluation. For example, the life of a one-legged man may be judged not worth living by the athletic observer, who measures the quality of such a life by the ability or inability to run, while the one-legged man may have come to fully enjoy such a life now that he has discovered his intellectual prowess (Jonsen et al.,
2006).
The medicine of enhancement is ethically suspicious. This area of medicine is full of
judgment calls as to what should be considered improved quality of life. Whether it's the ability to enhance sexual or athletic performance or diminish the effects of aging, pharmaceutical companies are there to remind us all what the standard of quality truly means. We see the ads on television as constant reminders that we really shouldn't disappoint our sexual partners or begin to show the ugly signs of aging. Who hasn't seen a commercial describing the signs and symptoms of depression and thought, "Hey, that's me"? It's no coincidence that the commercials always end with their projected emotional quality-of-life standard, in which the now-medicated patient is running through a field of daisies. The pharmaceutical industry is a master of thrusting these judgment calls upon us to the point where we can't help but feel completely inept and unsatisfied with our lives.
This thrusting of values upon us by the pharmaceutical industry has its origins in the
current sad state of pharmaceutical big business. A recent LA Times article by Melody Petersen (2008) regarding Big Pharma reports:
Only now is it becoming clear that this business model couldn't work forever. The strategy
had a flaw that executives have long ignored: It required extraordinary amounts of promotion
at the expense of scientific creativity. To make the strategy work, the drug industry put its
marketers in charge; scientists were given a back seat. Is it any wonder that executives at
many companies have watched their pipelines of new drugs slow to a trickle? (p. 1)
While Big Pharma has run out of brilliant new drugs, slated to replace the old, they have
stepped up their efforts to remarket their existing drugs under new indications or new
formulations. Indeed, when new drugs are scarce, simply lower the bar and serve them to a
larger population or reformulate to make them better, faster, stronger. Instead of beefing up
efforts to push the envelope of scientific discovery, drug makers have replaced the scientist
with the Harvard MBA (Petersen, 2008). As a result, they have been left scrambling to
manage the financial fallout of such business practice. Indeed, according to Petersen, the
strategy that has made the pharmaceutical industry one of the wealthiest and most powerful
on Earth is finally starting to betray it (Petersen, 2008).
Nothing could be more ethically suspect than the promotion of a drug on the basis of sheer
profit, but this is exactly what Big Pharma is doing when they seek and find FDA approval
for new indications. Such an example is the new indication for Cialis (a sexual performance
drug akin to Viagra) to be taken every day rather than "as needed" for sexual intercourse.
If quality of life were truly important to pharmaceutical companies, they would be interested in creating drugs for all socioeconomic groups, but this is not the case. They typically target those markets with disposable income. The "blockbuster" sales tactic wins out when choosing
which drug to place their bets on. Categories such as cholesterol management, depression,
and constipation are the mainstays of choice (Petersen, 2008). For example, malaria is killing
a child every 30 seconds in developing countries, yet the poor cannot support the high prices
a blockbuster drug demands, hence very little money is earmarked for development of such
drugs (Petersen, 2008). Moreover, medicines that treat diseases that afflict only a small
number of patients are often left on the back burner because they cannot achieve the numbers
necessary to sustain market share and momentum (Petersen, 2008).
This became clear when Bristol-Myers Squibb executives announced at a news conference in
2000 that they were embarking on what they called the "MegaDouble" business plan. To enhance the company's profits, executives ordered its scientists to work only on mega-blockbusters, such as Lipitor. Scientists with blueprints for drugs promising a mere $100
million in annual sales had little choice but to box up their work and send it to the warehouse.
(And executives are now perplexed about why they don't have enough new drugs) (p. 1).
There is no improvement in patient quality of life when drugs become more expensive and
more people are dying from understudied medications. As companies struggle to meet the
bottom line, they may lay off employees and place more of the financial burden onto the customer in the form of higher medication costs. There is enormous pressure placed on the pharmaceutical industry by stockholders to compete fiercely in the marketplace. This fierce
competition often leads to premature launching of medicines. Indeed, in much the same way
new movies are released, new "blockbuster" drugs must garner a large share of the market
upon initial release or they could be doomed to a lesser market share (Petersen, 2008). But
unlike the movies, some of these drugs are killing people by the thousands. Some drugs have
been studied with only a few thousand subjects before they are launched. In many of these
studies the data is barely sufficient for FDA approval. Indeed, FDA fast-tracking of
certain products is not uncommon practice. Vioxx is perhaps the most notable and recent
example of this unfortunate recipe for disaster. It was prescribed to over 20 million patients
before Merck pulled it from the market in 2004 after reports that it doubled the risk of heart
attack (Petersen, 2008). One FDA scientist estimated that Vioxx might have caused heart
attacks or strokes in roughly 139,000 Americans and that 30% to 40% of them died (Petersen,
2008).
What's interesting here is that this author prescribed hundreds of tablets of Vioxx during its glory days. Every one of his patients enjoyed an "improved quality of life" through significant reduction of musculoskeletal pain, and luckily no one was harmed in this small subset of patients. However, 139,000 patients in the general population suffered an adverse
event or died from using the drug. While there was no indication in earlier studies that the
drug would cause harm to such a large number of patients, perhaps a larger original study
sample would have accounted for this deadly propensity and prevented this pharmaceutical
catastrophe from occurring in the first place.
Contextual Features
Clinicians face many contextual features such as social, political, economic, religious,
institutional, and family considerations. At the same time, the physician has the fiduciary duty
to provide undivided loyalty to her patients and must work for their benefit (Jonsen et al.,
2006, p. 163). The concept of conflict of interest has already been discussed above, but
needs to resurface here again to underscore the importance of avoiding such conflicts when
caring for patients within the above contexts. Under the rubric of other interested parties, the
patient and physician must navigate together the waters of best interest for the patient.
Federal and local authorities, managed care organizations, public health authorities, third-party payers, and pharmaceutical companies all claim an interest in the care of patients (Jonsen et al., 2006, p. 166), and all of them may even attempt to dictate how care is rendered. This places the clinician in the middle, between the patient's interest and outside influences.
Thus the clinician must serve as intermediary between the two, serving both interests
simultaneously, keeping the betterment of the individual and the whole in mind. This, of
course, is no easy task.
In an effort to minimize the effect of undue influence by the pharmaceutical industry, grassroots organizations have emerged to empower the clinician to stand against such influence.
One such organization is the No Free Lunch organization (No Free Lunch (NFL), n.d.). They
have launched a website known as www.nofreelunch.org which educates and assists
physicians in standing against this conflict of interest. This organization encourages medical
providers to pledge to eliminate this influence in their workplace by boycotting
pharmaceutical industry promotional activity on their turf. Indeed, all paraphernalia, such as
pens, pamphlets, note paper, posters, and other educational material provided by the drug
industry are to be confiscated and discarded and then replaced with No Free Lunch materials.
Becoming a member of this organization proves to patients that your organization takes this
conflict of interest very seriously and is doing something about it.
This organization is a viable solution to the current crisis. However, it could be construed to
be too representative of the "unrestricted advocacy" approach to patient care. In this approach, whatever is not required by medical indications and personal preference should be avoided (Jonsen et al., 2006, p. 178). Perhaps the patient has no problem with, or even prefers, a physician who has access to free samples. Many patients within this author's
practice have stated such and enjoy the ability to obtain free samples, many of which are on a
continuous basis, so long as the drug is still under patent and available to the clinic. Many
drug reps understand that there exists a small subset of underserved patients in every practice
and will allocate to the medical provider a certain amount of medications to serve this portion
of the population. Of course, this activity will never make it to the books as such, but it does
occur. They do this knowing that this will help the physician maintain some balance in access
to care. This is also a tactic they employ to further entwine the provider into writing more of
the product under other, more profitable circumstances. In essence, the clinician feels less
conflict of interest by helping disenfranchised patients, but must then reciprocate even further
by writing the drug for patients who can afford it. Either way, the conflict still exists and is
unavoidable. It's essentially an all-or-nothing decision to engage in the activities or not. Sadly,
because this kind of practice is so ubiquitous, most primary care medical centers have come
to accept it as the norm. This ambivalence spurs the conflict of interest forward with no end
in sight.
Conclusion
Many argue that a conflict of interest still exists for patients, despite a cap on monetary
incentives provided by pharmaceutical companies. Citing social science research, they
contend that any gift, large or small, may still influence the decision making process of a
medical provider, and thus conflict with the better interest of the patient. Jonsen et al. have
provided an excellent framework upon which ethical arguments can be constructed for or
against this industry-wide dilemma. The principles of medical indications, patient
preferences, quality of life, and contextual features have proven to be the perfect lens through which tough questions may be examined and hence emerge far better understood and appreciated.
As the art and practice of medicine forge on, it is imperative that financial conflicts of interest
stay far removed from the scientific process. Market share concerns have too often tainted the
purity of scientific inquiry. This has had a profound effect on patient safety. Moreover,
corporate sales tactics have replaced the better judgment and unbiased concerns physicians
once had for their patients. There are enormous obstacles in the way of changing the current
state of affairs. Perhaps there will be power found in the increasing number of grassroots physicians who are choosing to stand against this type of influence. But until then, the current landscape, overshadowed by pharmaceutical big business, is at best ethically suspect.