
MKTG202

NOTES

PART ONE: THE ROLE OF MARKETING RESEARCH AND THE RESEARCH PROCESS

o Market Research:
- The systematic and objective process of gathering and analysing information to aid in
marketing decisions
- One of the principal tools used to specify and supply accurate information to reduce
uncertainty when developing and implementing marketing plans and strategies
- Helps decision-makers shift from intuitive information gathering to systematic and
objective investigation
o Basic or Pure Research:
- Refers to attempts to verify the acceptability of, or to learn more about, a given
theory or concept; it is not aimed at solving a particular pragmatic problem
o Applied Research:
- Refers to research conducted when a decision must be made about a real-life problem

Developing and implementing a marketing strategy involves four stages:

1. Identifying and evaluating opportunities


- Estimates of market potential or predictions about future environmental conditions allow
managers to evaluate opportunities
- Complete accuracy in a forecast is not possible as change is constant
2. Analysing market segments and selecting target markets
- Refers to determining which characteristics of market segments distinguish them from
the overall market
- Can be segmented based on Geographical, Behavioural or Psychographic features
3. Planning and implementing a marketing mix that will satisfy customer needs and meet the
objectives of the organisation
The following are selected types of research that might be conducted for each element of
the marketing mix:

o Product Research: Includes many types of studies designed to evaluate and develop product
attributes that will add value
- Product Testing: Reveals a product prototype's strengths and weaknesses or determines
whether a finished product performs better than competing brands/expectations
- Brand Name Evaluation: How appropriate a name is for a product
- Package Testing: Assesses size, colour, shape, ease of use and other attributes of a
package

o Pricing Research: Research designed to learn the ideal price for a product or whether costs
will be covered
o Promotion Research: Research that investigates the effectiveness of particular
promotions (coupons, sampling deals, premiums etc.), buyer motivation levels and
media research/studies of advertising effectiveness/awareness
o Distribution Research: Often needed to gain knowledge about retailers' and wholesalers'
operations and reactions to manufacturers' marketing policies

4. Analysing marketing performance


- Refers to market research conducted to obtain feedback for successful total quality
management
- Performance-monitoring Research: Provides regular feedback for evaluation and
control of marketing activities; includes feedback surveys, market-share analysis and
sales analysis
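
The market-share analysis mentioned above is simple arithmetic: a brand's share is its sales divided by total category sales. A minimal Python sketch (brand names and unit sales are invented for illustration):

```python
# Hypothetical unit sales for one period; names and figures are invented
unit_sales = {"Brand A": 1200, "Brand B": 800, "Brand C": 2000}

total = sum(unit_sales.values())  # total category volume

# Market share = brand sales / total category sales
market_share = {brand: sales / total for brand, sales in unit_sales.items()}

for brand, share in sorted(market_share.items()):
    print(f"{brand}: {share:.1%}")  # e.g. Brand C: 50.0%
```

Tracking these shares over successive periods is exactly the kind of regular feedback performance-monitoring research provides.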

The determination of the need for market research centres on:

1. Time constraints
- Judgements on whether the urgency of the situation precludes the use of research or not
2. Availability of data
- Judgements as to whether potential sources of data exist or whether there is already an
adequate amount of information
3. Nature of the decision
- Judgements on how strategically/tactically important the business decision is
4. The value of the research information in relation to costs
- Judgements on the value of each alternative course of action against its cost; analysing
the costs of the information relative to the potential benefits of the information

Currently, increased globalisation and the rapid growth of the internet are two major trends in the
business environment that are strongly influencing all business activity, including marketing
research:

1. Global Marketing Research


- Businesses must judge whether they require customised marketing strategies in foreign
countries
2. Growth of the Internet and Social Media
- The impact of the growth of the internet and social media is increasing the pace at which
marketing research is done

Marketing research often follows a general pattern; these stages of the research process are:

1. Defining the problem


- Includes clarifying a problem, defining an opportunity and monitoring and evaluating
current operations; establishes a sense of direction

The major aspects of defining a problem in market research are the degrees of certainty,
uncertainty and ambiguity:

- Certainty: The decision-maker knows the exact nature of the marketing problem or
opportunity as all the required information is available
- Uncertainty: The manager grasps the general nature of the desired objectives, but
information about alternatives is inadequate
- Ambiguity: The nature of the problem or opportunity is unclear; objectives are vague
and decision alternatives are difficult to define

2. Planning a research design

The objectives of the study, the available data sources, the urgency of the decision and the costs
will determine which basic research method should be chosen.

o Research Design: refers to a master plan that specifies/determines the sources of
information, the design technique, the sampling methodology and the schedule and cost of
research
- Exploratory Research: refers to initial research conducted to clarify and define a
problem, investigating any existing studies on the issue, interviewing knowledgeable
individuals and conducting other informal investigations; usually qualitative
Pilot Studies: a collective term for any small-scale exploratory research
techniques that use sampling but do not apply rigorous standards (focus group
interviews, open-ended surveys etc.)

- Descriptive Research: refers to research designed to describe the characteristics of a
population or phenomenon; attempts to determine the extent of differences in needs,
attitudes and opinions among target markets
Surveys: a research technique in which information is gathered from a sample of
people using a questionnaire; most common method of descriptive research
Observation: a research technique that records behaviour without relying on
reports from respondents

- Causal Research: research conducted to identify cause-and-effect relationships among
variables
Experiments: experimentation allows investigation of changes in one variable
while manipulating one or two other variables under controlled conditions;
ideally without outside influences

3. Planning a sample
- The stage in which the researcher determines who is to be sampled, how large a sample
is needed and how sampling units will be selected (target population, sample size,
selection of sampling units)
Probability Sampling: a sample in which every member of the population has a
known, nonzero chance of selection
Nonprobability Sampling: a sample in which units are selected on the basis of
personal judgement
Also: simple random samples, stratified samples, quota samples, cluster
samples and judgemental samples
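
The distinction between these sampling methods can be sketched with Python's standard library; the population, strata and sample sizes below are invented for illustration:

```python
import random

random.seed(42)  # fixed seed so the illustration is reproducible

population = list(range(1, 101))  # hypothetical sampling frame of 100 units

# Simple random sample: every member has an equal, known, nonzero
# chance of selection (probability sampling)
simple_sample = random.sample(population, 10)

# Stratified sample: divide the frame into strata and draw from each
# in proportion to its size (two invented strata of 60 and 40 units)
strata = {"metro": population[:60], "regional": population[60:]}
stratified_sample = []
for members in strata.values():
    n = round(10 * len(members) / len(population))  # proportional allocation
    stratified_sample.extend(random.sample(members, n))

print(len(simple_sample), len(stratified_sample))  # 10 10
```

A judgemental (nonprobability) sample, by contrast, would be hand-picked, so selection probabilities are unknown.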

4. Collecting the data

There are often two phases to the process of gathering data:

- Pretesting: refers to using a small subsample to determine whether the data-gathering
plan for the main study is an appropriate procedure, check data collection forms,
improve certain areas and minimise the errors
- Main Study: the actual study

5. Analysing the data


o Data Processing and Analysis Stage: the stage in which the researcher performs several
interrelated procedures to convert data into a format that will answer the research
questions, problems or opportunities
- Editing: involves checking the data collection forms for omissions, legibility and
consistency in classification; corrects any problems before the data is transferred to the
computer
- Coding: refers to the rules for interpreting, categorising, recording and transferring the
data to the data storage media
- Analysis: refers to the application of reasoning to understand/interpret the data that
has been gathered; may involve determining patterns and summarising relevant details
as well as statistical analysis
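
The editing and coding steps can be sketched as follows; the answers and coding rules are invented for illustration:

```python
# Hypothetical raw answers to a yes/no survey question
raw_responses = ["Yes", "no", "", "YES", "No"]

# Coding rules: transfer text answers to numeric codes for analysis
codes = {"yes": 1, "no": 0}

edited = []
for answer in raw_responses:
    cleaned = answer.strip().lower()   # editing: normalise for consistency
    if cleaned in codes:
        edited.append(codes[cleaned])  # coding: map to the numeric code
    else:
        edited.append(None)            # editing: flag omissions for follow-up

print(edited)  # [1, 0, None, 1, 0]
```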

6. Formulating the conclusion and preparing the report


o Conclusions and report preparation stage: the stage in which the researcher interprets
information and draws conclusions to be communicated to decision-makers
- Written report summarises findings for a managerial audience

PART TWO: PROBLEM DEFINITION AND THE RESEARCH PROCESS

The process of defining the problem:

o Problem Definition: The crucial first stage in the research process: determining the
problem to be solved and the objectives of the research

1. Ascertain the decision-maker's objectives


- Researchers and decision-makers should attempt to gain a clear understanding of the
purpose for undertaking the research
- By illuminating the nature of the marketing problem or opportunity through exploratory
research, the objectives of the research are also clarified
o Iceberg Principle: The idea that the core marketing management problem is neither visible
nor understood by marketing managers

2. Understand the background of the problem


o Situation Analysis: A preliminary investigation or informal gathering of background
information to familiarise researchers or managers with the decision area
- When information is inadequate or there is trouble identifying the problem, a situation
analysis is the logical first step

3. Isolate and identify the problem, not the symptoms


- Management's job is to isolate and identify the most likely causes
- Executive judgement and creativity should be exercised to ensure the fundamental problem
has been identified

4. Determine the unit of analysis


- Researcher must specify whether the investigation will collect data about individuals,
households, organisations, departments, geographical areas or objects

5. Determine the relevant variables


o Variable: Anything that may assume different numerical or categorical values (represents a
quality that can exhibit differences in value, usually magnitude or strength)
o Categorical (classificatory) variable: A variable that has a limited number of distinct values
o Continuous variable: A variable that has an infinite number of possible values
o Dependent variable: A criterion or variable to be predicted or explained
o Independent variable: A variable that is expected to influence a dependent variable

- Managers and researchers must be careful to identify the relevant variables necessary to
define the managerial problem
- Typically, each research objective will mention one or more variables to be measured or analysed
6. State the research questions (hypotheses) and research objectives
- At the end of the problem definition stage, the researcher should prepare a written
statement that clarifies any ambiguity about what the research hopes to accomplish
o Research Objective: The researcher's version of the marketing problem; it explains the
purpose of the research in measurable terms and defines standards for what the research
should accomplish
o Hypothesis: An unproven proposition or supposition that tentatively explains certain facts or
phenomena; a probable/educated answer to a research question
o Managerial Action Standard: A performance criterion or objective that expresses specific
actions that will be taken if the criterion is achieved

The research proposal and anticipated outcomes:

o Research Proposal: A written statement of the research design that includes a statement
explaining the purpose of the study and a detailed, systematic outline of the procedures
associated with a particular research methodology
o Dummy Tables: Representations of the actual tables that will be in the findings section of
the final report; used to provide a better understanding of what the actual outcomes of the
research will be

PART THREE: PLANNING THE RESEARCH DESIGN

o Qualitative Research: initial/preliminary research methodology that attempts to provide
elaborate interpretations of phenomena of interest without depending on numerical
measurement/analysis; focused on in-depth understanding and insight

The uses of qualitative research:

Researchers conduct exploratory research for three interrelated purposes:

1. Diagnosing a situation
- Qualitative research helps to diagnose the dimensions of problems so that successive
research projects will be on target

2. Screening alternatives
- When several options/opportunities arise, exploratory & qualitative research may be
used to determine the best alternatives
o Concept Testing: Any exploratory research procedure that tests some sort of stimulus as a
proxy for an idea about a new, revised or repositioned product, service or strategy
Consumers are typically presented with a written statement or filmed
representation of an idea and asked to provide their opinions

3. Discovering new ideas


- Uncovering consumer needs is a great potential source of product ideas
- One goal of exploratory research is to first determine what problems consumers have
with a product category

Qualitative versus Quantitative Research

- Qualitative research provides greater understanding of a concept or crystallises a
problem; focus is on words and observations, stories, visual portrayals, meaningful
characterisations, interpretations and other expressive descriptions
- Quantitative research provides precise measurements or quantification; focus is on the
quantity or extent of some phenomenon in the form of numbers

Qualitative Research Orientations

There are multiple ways qualitative research can be conducted; four major schools of thought
currently influence the choice of technique:

1. Phenomenology
o Phenomenology: a philosophical approach to studying human experiences based on the
idea that human experience is inherently subjective and determined by the context in which
people live
- Researcher focuses on how a consumer's behaviour is shaped by the relationships he or
she has with the physical environment, objects, people and situations
- Inquiry seeks to describe, reflect upon and interpret experiences
- Reliance on conversational interview tools (no direct questions based on mutual trust)
- Best suited to make sense of complex or ambiguous situations
o Hermeneutics: an approach to understanding phenomenology that relies on the analysis of
texts in respondents' stories
o Hermeneutic Unit: refers to key themes, patterns or archetypes in a respondent's story
- Due to the subjective nature of this approach, the information should be analysed and
interpreted by a number of means and groups; triangulation

2. Ethnography
o Ethnography: a research approach from anthropology that studies cultures by participant
observation
o Participant Observation: the researcher becomes immersed in the culture they are studying
- Ethnographic research relies on observation of natural behaviour rather than direct
questioning; particularly useful when a certain culture cannot verbalise their thoughts
and feelings, for example children
- Respondent observed/conversed to in real-life situations (builders in pubs, mother and
children shopping) as opposed to in a contrived setting (focus groups)
- Findings should be triangulated as the data collated relies on a researcher's
interpretation
o Netnography: a type of ethnography which is the study of online behaviour of individuals in
discussion groups and communities; also known as social listening
- Successfully used to examine the value of user support forums for organisational
customers

3. Grounded Theory
o Grounded Theory: the researcher poses questions about information gathered by
respondents or historical records and repeatedly questions himself/herself and the
responses to derive deeper interpretations
- Very flexible approach; particularly useful for highly dynamic situations
- Defining characteristic: does not begin with a theory but rather extracts one from
whatever emerges from an area of inquiry

4. Case Studies
o Case Study Method: obtains information from intensive investigations on one or a few
situations similar to the researchers problem situation
- Primary advantages include that an entire organisation or entity can be investigated in
depth with meticulous attention to detail
- A number of insights and hypotheses can be gained/suggested for future research
Common Techniques used in Qualitative Research

o Focus Group Interviews


- An unstructured, free-flowing interview with a small group of people
- Typically consists of a moderator/interviewer with 6-10 participants
- Advantages include synergy, stimulation, security, specialisation, flexibility, speed and
low cost
- Disadvantages include that the small group cannot be a representative sample and the
requirement of a sensitive and effective moderator for accurate results
- Often used for concept screening and concept refinement; as a means of exploratory
research

o Depth Interview
- A relatively unstructured, extensive interview with many questions and probes for
additional elaboration
- Advantages include that it sometimes works far better than focus groups
- Disadvantages include that the success of the research depends on the interviewer's
skill, it is an expensive method, and the data is highly subjective and difficult to accurately interpret
o Projective Techniques
- An indirect means of questioning seeking to discover an individual's true attitudes,
motivations and beliefs

The most common projective techniques in marketing research are:

1. Word Association Tests


- Subject is presented with a list of words, one at a time, and asked to respond with the
first word that comes to mind
- Frequently used to test potential brand names
- Can also be used to pre-test words or ideas to know beforehand to what degree the
meaning of a word is understood in the context of a survey/questionnaire
2. Sentence Completion Method
- Based on the principle of free association; respondents are required to complete a
number of partial sentences with the first word or phrase that comes to mind
3. Third-person Technique and Role Playing
- Third-person is a technique where respondents are asked hypothetically why a third
person does what he/she does or thinks about a product; the respondent is expected to
transfer his/her attitudes to the third person
- Role playing is a dynamic re-enactment of the third-person technique in a given situation
4. Thematic Apperception Test (TAT)
- A test that presents a series of pictures to research subjects and asks them to provide a
description of or a story about the image, then what might happen next
- Themes are elicited on the basis of the perceptual-interpretive (apperception) use of the
pictures; the researcher then analyses the contents of the stories subjects relate
- The images used are sufficiently interesting, yet ambiguous enough not to disclose the
nature of the research
5. Cartoon Tests
- Picture Frustration: a version of TAT that uses a cartoon drawing for which the
respondent suggests the dialogue the character might engage in

Modern Technology and Qualitative Research

1. Videoconferencing and streaming media


- Focus groups, or particular individuals, may be interviewed via videoconferencing
- Streaming media consists of multimedia content made available in real time over the
internet; the effect is similar to videoconferencing
2. Interactive media and online focus groups
- Online focus groups are quicker, more secure and efficient, but have less group synergy
and lack visible emotional cues
- Offline and online focus groups available
3. Social networking
- Utilising social media websites to collate a wealth of qualitative data; however, serious
ethical issues must be considered when collecting information from respondents
without their knowledge or consent
4. Software development
- Computerised qualitative analysis is now commonly used; programs include NUD*IST,
ATLAS.ti and NVivo
- Improved efficiencies in identifying themes and connections within the data, as well as
assisting in interpreting voice, photographs and videos for meaning
- Modern predictive analysis software allows text data to be mined from various sources
including social network sites, recorded call conversations, email contacts etc.
- Expensive but highly capable

Qualitative and exploratory studies should never be used as a basis for final conclusions as
they may be subject to considerable interpreter bias

PART FOUR: DIGITAL RESEARCH USING SECONDARY DATA

Secondary Data Research

o Secondary data: data that has been previously gathered and recorded by someone else;
usually historical
- Advantages include: availability (instantaneous electronic retrieval), less expensive than
primary data, time saved
- Disadvantages include: secondary data is not designed specifically to meet the researcher's
needs, possibly outdated, variation in definition of terms, different units of measurement,
difficulties in verifying the data's accuracy
- When secondary data are reported in a format that does not exactly meet the researcher's
needs, data conversion/transformation may be necessary
- To verify the accuracy of data, cross checks of data from various sources should be
attempted

Typical Objectives for Secondary Data Research Designs

Common research objectives for secondary data studies include:

1. Fact-Finding
Identification of consumer behaviour for a product category
- Uncovering all available information about consumption patterns or identifying
demographic trends that affect the industry
Trend Analysis
o Market Tracking: the observation and analysis of trends in industry volume and brand share
over time
- Typically involves comparisons with competitors' or one's own sales in comparable time periods
Environmental Scanning
- Information gathering designed to detect indications of environmental changes in their
initial stages of development
o Push Technology: internet information technology that automatically delivers content to the
researcher's or manager's desktop

2. Model Building
- Involves specifying relationships between two or more variables; can involve the
development of descriptive or predictive equations
- Represents a mathematical model of a basic relationship
- Three common objectives include: Estimating market potential for geographic areas,
forecasting sales, selecting sites
- Marketers often combine secondary data with survey results in order to determine
important market parameters (potential, size of existing market etc.)
o Data Mining: the use of powerful computers to dig through volumes of data to discover
patterns about an organisation's customers and products; a broad term that applies to many
different forms of analysis
- Requires sophisticated computer resources and significant monetary investment
o Neural Network: a form of artificial intelligence in which a computer is programmed to
mimic the way that the human brain processes information
o Market-Basket Analysis: a form of data mining that analyses anonymous point-of-sale
transactions to identify the coinciding purchases or relationships between products
purchased and other retail shopping information
o Customer Discovery: involves data mining to look for patterns identifying who is likely to be
a valuable customer
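
At its core, market-basket analysis counts how often products coincide in the same transaction. A minimal sketch (transactions and product names are invented for illustration; real data mining runs over millions of point-of-sale records):

```python
from collections import Counter
from itertools import combinations

# Hypothetical anonymous point-of-sale transactions
transactions = [
    {"bread", "milk"},
    {"bread", "nappies", "beer"},
    {"milk", "nappies", "beer"},
    {"bread", "milk", "nappies", "beer"},
]

# Count how often each pair of products is purchased together
pair_counts = Counter()
for basket in transactions:
    for pair in combinations(sorted(basket), 2):
        pair_counts[pair] += 1

# The most frequently coinciding purchase
top_pair, count = pair_counts.most_common(1)[0]
print(top_pair, count)  # ('beer', 'nappies') 3
```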

3. Database Marketing
o Customer Relationship Management (CRM) System: a decision support system that
manages the interactions between an organisation and its customers
- Maintains customer databases containing sensitive customer information, responses to
previous promotional offers, demographic and financial data
o Database Marketing: the use of CRM databases to promote one-to-one relationships with
customers and create precisely targeted promotions

Sources of Secondary Data

Internal and Proprietary Data Sources


o Internal and Proprietary Data: Secondary data that originates from inside the organisation
- The routine gathering, recording and storing of internal data may help organisations solve
future problems
- If the data is properly coded, more detailed analysis can be conducted

External Data: The Distribution System


o External Data: data created, recorded or gathered by an entity other than the researcher's
organisation
- Typically from the government, newspapers and journals or trade associations; available in
published form or via computerised data archives

o Single-Source Data: diverse types of data offered by a single company; the data is usually
integrated on the basis of a common variable such as geographic area or store

PART FIVE: SURVEY RESEARCH

The Nature of Surveys

o Sample Survey (formal term): a method of collecting primary data with the intention of
obtaining a representative sample of the target population
- Advantages include: quick, inexpensive, efficient, flexible and accurate
- Often, however, surveys are poorly designed, samples are biased, instructions are poor
and results are misinterpreted

Error in Survey Research

Various forms of survey error include:

1. Random Sampling Error


o Random Sampling Error: A statistical fluctuation that occurs because of chance variation in
the elements selected for a sample
- The smaller the sample size, the greater the random sampling error
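
This chance variation can be demonstrated by simulation; the population and figures below are invented for illustration:

```python
import random
import statistics

random.seed(1)  # fixed seed so the illustration is reproducible

# Hypothetical population of 10,000 purchase amounts
population = [random.gauss(50, 10) for _ in range(10_000)]

def spread_of_sample_means(sample_size, trials=200):
    """How much sample means fluctuate across repeated samples."""
    means = [statistics.mean(random.sample(population, sample_size))
             for _ in range(trials)]
    return statistics.stdev(means)

small_n = spread_of_sample_means(30)
large_n = spread_of_sample_means(500)
print(small_n > large_n)  # True: larger samples fluctuate less
```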

2. Systematic Error
o Systematic Error: Errors that result from some imperfect aspect of the research design or
mistake in the execution of the research
- Also called non-sampling error
o Sample Bias: Exists when the results of a sample show a persistent tendency to deviate in
one direction from the true value of the population parameter

3. Respondent Error
o Respondent Error: A category of sample bias resulting from some respondent action or
inaction such as non-response and response bias
Non-Response Error
- To use the results of a survey with low response rates, the researcher must be sure that
those who did respond are representative of those who did not
- Forms of non-response include: refusals to participate, no contact and indifference
o Self-Selection Bias: A bias that occurs when people who feel strongly about a subject are
more likely to respond than people who feel indifferent, resulting in overrepresentation of
the extreme and underrepresentation of the indifferent
Response Bias
- Occurs when respondents tend to answer questions with a certain slant, such as
deliberate falsification and unconscious misrepresentation
Specific types of response bias include:

Acquiescence Bias
- The tendency of respondents to agree (or disagree) with all the statements they are asked
about
Extremity Bias
- When some individuals use extremes when responding, while others tend to respond
more neutrally
Interviewer Bias
- When the presence of the interviewer influences the respondents answers
- Includes responses that are believed to please the interviewer rather than be truthful,
respondents who wish to appear intelligent/wealthy/gifted etc., and the
rephrasing/modification of questions
Auspices Bias
- Bias in the responses of subjects caused, deliberately or unconsciously, by their being
influenced by the organisation conducting the study
Social Desirability Bias
- Bias in responses caused by the respondent's desire, whether conscious or unconscious, to
gain prestige or appear in a different social role
- Appears to be influenced by the degree of national development and culture

4. Administrative Error

The results of improper administration or execution of the research tasks are administrative
errors. The four types of administrative error include:

Data Processing Error
- Occurs due to incorrect data entry, incorrect computer programming or other
procedural errors during data analysis
Sample-Selection Error
- A systematic error that results in an unrepresentative sample due to improper sample
design or sampling procedure execution
Interviewer Error
- Mistakes made by interviewers who fail to record survey responses correctly
Interviewer Cheating
- The practice of filling in fake answers or falsifying questionnaires while working as an
interviewer

Rule of Thumb Estimates for Systematic Error

Many researchers have established conservative rules of thumb and techniques, based on
experience, to estimate systematic error. It has been found useful to have some benchmark figures
or standards of comparison to understand how much error can be expected.

Classifying Survey Research Methods

Surveys can be classified according to structure, disguise and time frame; these include:

1. Structured and Disguised Questions


o Structured Question: A question that imposes a limit on the number of allowable responses
o Unstructured Question: A question that imposes no restrictions on the respondent's answer
o Disguised Question: An indirect question that assumes the purpose of the study must be
hidden from the respondent
o Undisguised Question: A straightforward question that assumes the respondent is willing to
answer
- Other classifications include structured-undisguised, unstructured-undisguised and
structured-disguised; these classifications have two limitations: the degrees of structure
and disguise vary (they are not clear-cut categories) and most surveys are hybrids

2. Temporal Classification

Projects that require multiple surveys over a long period of time can be classified on a temporal
basis.

Cross-Sectional Studies
o Cross-Sectional Study: A study in which various segments of a population are sampled and
data are collected at a single moment in time
- Most marketing research surveys are cross-sectional studies
- Aims to find out descriptive information about a market or relevant stakeholder group at
a point in time
- Typical method of analysing a cross-sectional study is to divide the sample into
appropriate subgroups

Longitudinal Studies
o Longitudinal Study: A survey of respondents at different times, allowing analysis of response
continuity and changes over time
- Sometimes called cohort studies
- Changes in the variable being measured are often difficult to identify
o Tracking Study: A type of longitudinal study that uses successive samples to compare trends
and identify changes in variables
- Useful for assessing aggregate trends, but do not allow for tracking changes in
individuals over time

Consumer Panels
o Consumer Panel: A longitudinal survey of the same sample of individuals or households to
record their attitudes, behaviour or purchasing habits over time
- Enable the researcher to track repeat-purchase behaviour and changes in purchasing
habits that occur in response to changes in price, promotions or other aspects of
marketing strategies
- Typically contacted via telephone, personal interviews, mail questionnaire or email
Different Methods by which Marketing Researchers Conduct Surveys

1. Personal Interviews
o Personal Interview: Face-to-face communication in which an interviewer asks a respondent
to answer questions
- Advantages include: opportunities for clarification, feedback and reassurance, follow-ups
by probing, less likelihood of item nonresponse, props and visual aids
- Disadvantages include: interviews can take 1½ hours as opposed to 10 minutes,
respondents are not anonymous and thus may be reluctant to provide sensitive information,
differential interviewer techniques may be a source of bias, generally more expensive
o Probing: A method used in personal interviews in which the interviewer asks the respondent
for clarification of answers to standardised questions
o Item Nonresponse: The failure of the respondent to provide an answer to a survey question;
the technical term for an unanswered question on an otherwise complete questionnaire

2. Door-to-Door Interviews
o Door-to-Door Interview: A personal interview conducted at the respondent's doorstep in
an effort to increase the participation rate in the survey
- Advantages include: increased participation rates
- Disadvantages include: possible underrepresentation of certain groups (excludes units
with security systems, high rise apartments etc.), participants are more likely to be
elderly, homemakers or retired people
o Callback: An attempt to recontact an individual selected for a sample who was not available
initially
- Global considerations: willingness to participate in a personal interview varies
dramatically across cultures (for example, Middle Eastern or Islamic women interviewed
by a male interviewer); certain questions may be more offensive in some cultures

3. Telephone Interviews
- Typically the main medium for commercial survey research
- Advantages include: speed of data collection, relative inexpensiveness, easy to make
call-backs
- Disadvantages include: declining telephone response/participation rates, difficult to
ensure a representative sample, limited duration and a lack of visual medium
o Random Digit Dialling: The use of telephone exchanges and a table of random numbers to
contact respondents with unlisted phone numbers
- Partially resolves the problem of unlisted phone numbers
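The mechanics can be sketched as follows; the exchange prefixes are invented for illustration, and a real implementation would draw on actual telephone exchange tables:

```python
import random

# Sketch: random digit dialling. A known exchange prefix is combined with a
# random 4-digit suffix, so unlisted numbers can be reached as well as listed ones.
def random_digit_dial(exchanges, rng):
    exchange = rng.choice(exchanges)        # a known local exchange, e.g. "9850"
    suffix = f"{rng.randrange(10000):04d}"  # random line number, zero-padded
    return f"{exchange}-{suffix}"

rng = random.Random(42)  # seeded so the demo is reproducible
numbers = [random_digit_dial(["9850", "9812", "9744"], rng) for _ in range(3)]
print(numbers)
```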
o Central Location Interviewing:
- Telephone interviews conducted from a central location using lines at fixed charges
o Computer-assisted Telephone Interview (CATI)
- Technology that allows answers to telephone interviews to be entered directly into a
computer for processing
4. Mail Questionnaires
o Self-Administered Questionnaire: a survey in which the respondent takes the responsibility
for reading and answering the questions
- Can reach a geographically dispersed sample simultaneously
- Advantages include: relative inexpensiveness (although follow-up mailings add postage
and printing costs), respondent convenience, anonymity of the respondent, and the
absence of an interviewer, which encourages revealing sensitive information
- Disadvantages include: highly standardised questions, the lengthy process of sending and
receiving results, questionnaires that are often long and boring, poor response rates, and
possible auspices bias
o Cover Letter: a letter that accompanies a questionnaire to induce the reader to complete
and return the questionnaire

5. Email Surveys
- Advantages include: speed of distribution, lower processing costs, faster turnaround,
more flexibility
- Disadvantages include: low response rates, difficulty maintaining anonymity, blocking by
anti-virus and firewall software, difficulty recruiting respondents

6. Internet Surveys
- Advantages include: speed and cost effectiveness, visual appeal and interactivity,
relatively high respondent participation and cooperation, accurate real-time data
capture, easy call-backs, personalised and flexible questioning, respondent anonymity
- Disadvantages include: difficult to gain a representative sample, security concerns

PART SIX: OBSERVATION

The Nature of Observation

o Observation: The systematic process of recording the behavioural patterns of people,
objects and occurrences as they are witnessed, or compiling evidence from records of past
events
o Visible Observation: the observer's presence is known to the subject
o Hidden Observation: the subject is unaware that observation is taking place
- Becomes a tool for scientific inquiry when it serves a formulated research purpose, is
planned systematically, is recorded systematically and related to general propositions, and is
subject to checks and controls on validity and reliability
- Major advantage: the data do not have distortions, inaccuracies or other response biases
due to memory error, social desirability bias etc.
- Physical action, Verbal behaviour, Expressive behaviour, Spatial relations and locations,
Temporal patterns, Physical objects, Verbal and pictorial records can be observed
- Complementary evidence: video recordings enable researchers to perform
detailed analysis of physical actions

o Contrived Observation: observation in which the investigator creates an artificial
environment in order to test a hypothesis
- Examples include mystery shoppers or intentional complaints to test reactions
o Direct Observation: a straightforward attempt to observe and record what naturally occurs;
no attempts to control or manipulate a situation
- Advantages include: most economical technique, data is obtained rather easily and quickly
o Response Latency: the amount of time it takes to make a choice between two alternatives;
used as a measure of the strength of preference
- Becoming highly popular as computers are able to record decision times easily
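A minimal sketch of latency capture; the lambda stands in for a real respondent interface and is purely illustrative:

```python
import time

# Sketch: recording response latency -- the time taken to choose between
# two alternatives. choose() stands in for a real respondent interaction.
def timed_choice(choose, option_a, option_b):
    start = time.perf_counter()
    choice = choose(option_a, option_b)
    latency = time.perf_counter() - start
    return choice, latency

choice, latency = timed_choice(lambda a, b: a, "Brand A", "Brand B")
print(choice, latency)  # a shorter latency suggests a stronger preference
```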
o Observer Bias: a distortion of measurement resulting from the cognitive behaviour or
actions of a witnessing observer
- Every detail about the people, objects and events in a given situation must be recorded or
accuracy may suffer
- Pace is limited to the observers memory, writing speed and other factors

Observation of Physical Objects

- Physical trace evidence is a visible mark of some past event or occurrence that indicates
something about past behaviour
- An observer can record physical trace data to discover things that a respondent could not
recall accurately or things respondents may be lying about
Content Analysis

o Content Analysis: The systematic observation and quantitative description of the content
of communication messages
- involves systematic analysis and observation to identify the specific information content and
other characteristics of the messages
- focused on studying the message itself; more sophisticated than simply counting items, it
requires a system of analysis to secure relevant data
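A toy illustration of such a system of analysis: counting how often predefined category terms appear in a set of messages (the categories, terms and messages are all hypothetical):

```python
import re
from collections import Counter

# Sketch: simple quantitative content analysis -- tally how often terms from
# predefined categories occur across a set of made-up advertising messages.
messages = [
    "Fresh taste, low price, great value",
    "Premium quality at a fair price",
    "Great value and premium taste",
]
categories = {"price": {"price", "value"}, "quality": {"premium", "quality"}}

counts = Counter()
for message in messages:
    for word in re.findall(r"[a-z]+", message.lower()):
        for category, terms in categories.items():
            if word in terms:
                counts[category] += 1

print(dict(counts))  # {'price': 4, 'quality': 3}
```

Real content analysis also needs coding rules and checks on coder reliability; this only shows the counting step.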

Mechanical Observation

Television Monitoring
o Television Monitoring: computerised mechanical observation used to estimate television
ratings
- uses a consumer panel and sophisticated monitoring device (PeopleMeter)
- similar technology used in other countries; electronic devices are hooked up to television
sets to capture information on program choices, the length of viewing time and viewer
identities

Monitoring Website Traffic


- a hit occurs when a user clicks on a single page of the website
- a variety of information technologies are used to measure web traffic and to maintain access
logs
- Internet monitoring enables companies to identify the popularity of a website, measure the
effectiveness of advertising banners and provide other audience information
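A minimal sketch of hit counting from an access log; the log lines are fabricated, and production systems use dedicated analytics tools rather than code like this:

```python
from collections import Counter

# Sketch: counting page hits and unique visitors from a made-up access log.
# Each line is "<client-ip> <method> <page>".
log_lines = [
    "10.0.0.1 GET /home",
    "10.0.0.2 GET /products",
    "10.0.0.1 GET /home",
    "10.0.0.3 GET /products",
    "10.0.0.1 GET /checkout",
]

hits = Counter(line.split()[-1] for line in log_lines)          # hits per page
unique_visitors = len({line.split()[0] for line in log_lines})  # distinct IPs

print(dict(hits), unique_visitors)
```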

Scanner-based Research
- Utilises lasers that perform optical character recognition and bar code technology (such as
the Universal Product Code, UPC)
- Scanner research has investigated the different ways consumers respond to price promotions
and how these differences affect a promotion's profitability

o Scanner-based Consumer Panel: A type of consumer panel in which participants' purchasing
habits are recorded with a laser scanner rather than a purchase diary
- one of the primary means of implementing such research
- typically each household is assigned a bar-coded card that records the actual purchase
information
- Advantages include: records actual rather than reported behaviour, mechanical record-
keeping improves accuracy, eliminates interviewer bias, captures thorough amounts of information
o At-home scanning system: A system that allows consumer panellists to perform their own
scanning after taking home products, using hand-held wands that read UPC symbols
Measuring Physiological Reactions
- All mechanical devices assume that physiological reactions are associated with
persuasiveness or predict some cognitive response
- No strong theoretical evidence for support
- Measures are dependent on the calibration or sensitivity of mechanical devices
- Artificial/contrived settings are typically used for controls

There are four major categories of mechanical devices used to measure physiological reactions:

1. Eye-Tracking Monitors
o Eye-tracking Monitor: A mechanical device used to measure unconscious eye movements;
typically uses infrared light beams that reflect off the eye
- Records how the subject reads a print advertisement or views a television advertisement and
how much time is spent looking at various parts of the stimulus

2. Pupilometers
o Pupilometer: A mechanical device used to observe and record changes in the diameter of a
subject's pupils
- records and interprets changes in cognitive activity that result from a stimulus
- research based on the assumption that increased pupil size reflects positive attitudes
towards and interest in advertisements

3. Psychogalvanometers
o Psychogalvanometer: A device that measures galvanic skin response (GSR), a measure of
involuntary change in the electrical resistance of the skin
- Device based on the assumption that physiological changes (increased perspiration)
accompany emotional reactions to advertisements, packages and slogans
- Excitement increases the body's perspiration rate, which changes the electrical resistance of
the skin
- Indicator of emotional arousal or tension

4. Voice Pitch Analysers
o Voice Pitch Analysis: A physiological measurement technique that records abnormal
frequencies in the voice that are supposed to reflect emotional reactions to various stimuli
- Computerised analysis compares the respondents voice pitch to the normal range
o Functional Magnetic Resonance Imaging (fMRI): A magnetic scan that reveals in different
colours which parts of the brain are active in real time
- Can signify emotional and cognitive reactions

PART SEVEN: EXPERIMENTAL RESEARCH AND TEST MARKETING

o Experiment: A research method in which conditions are controlled so that independent
variables can be manipulated to test a hypothesis about a dependent variable; allows
evaluation of causal relationships among variables

In order to conduct experimental research, the following steps should be followed:

1. Decide on a field or laboratory experimental design


o Laboratory Experiment: An experiment conducted in a laboratory or other artificial setting
to obtain almost complete control over the research
o Tachistoscope: A device that controls the amount of time a subject is exposed to a visual
image
o Field Experiment: An experiment conducted in a natural setting, where complete control of
extraneous variables is not possible

2. Decide on the choice of independent and dependent variable(s)


Manipulation of the independent variable
- The independent variable is theorised to be the causal influence
o Experimental Treatments: Alternative manipulations of the independent variable being
investigated
o Experimental Group: A group of subjects exposed to the experimental treatment
o Control Group: A group of subjects exposed to the control condition in an experiment; that
is, not exposed to the experimental treatment
Selection and Measurement of the Dependent Variable
- Changes in the dependent variable are presumed to be consequences of changes in
the independent variable
- Selection of the dependent variable is a crucial decision

3. Select and design the test units


o Test Units: Subjects or entities whose responses to experimental treatments are observed or
measured
Sample Selection and Random Sampling Errors
- Sample selection error may occur because of the procedure used to assign subjects
or test units to either the experimental or control group
o Random Sampling Error: A statistical fluctuation that occurs because of chance variation in
the elements selected for a sample
o Randomisation: A procedure in which the assignment of subjects and treatments to groups
is based on chance
- Most common technique used
- Allows the researcher to assume the groups are identical with respect to all variables
except the experimental treatment
o Matching: A procedure for the assignment of subjects to groups that ensures each group of
respondents is matched on the basis of pertinent characteristics, such as age, income, etc.
- Researcher cannot be certain that subjects have been matched on all characteristics
important to the experiment
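Randomisation is easy to sketch in code; the subject IDs below are arbitrary:

```python
import random

# Sketch: random assignment of test units to experimental and control groups.
# On average, randomisation makes the groups equivalent on all other variables.
def randomise(subjects, n_groups, rng):
    shuffled = list(subjects)
    rng.shuffle(shuffled)
    return [shuffled[i::n_groups] for i in range(n_groups)]

rng = random.Random(7)  # seeded so the demo is reproducible
experimental, control = randomise(range(1, 11), 2, rng)
print(experimental, control)
```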

4. Address issues of validity in experiments

When extraneous variables cannot be eliminated, experimenters may strive for constancy of
conditions; that is, they expose all subjects in each experimental group to situations that are
exactly alike, except for the differing conditions of the independent variable

o Blinding: A technique used to control subjects' knowledge of whether or not they have been
given a particular experimental treatment
- Frequently used in medical research
o Double-Blind Design: When neither the subject nor the experimenter knows which are the
experimental and which are the control conditions
o Constant Experimental Error: occurs when the extraneous variables or the conditions of
administering the experiment are allowed to influence the dependent variables every time
the experiment is repeated; a systematic bias

When choosing experimental research designs, researchers must determine whether they have
internal validity and external validity.

Internal Validity
o Internal Validity: Validity determined by whether an experimental treatment was the sole
cause of changes in a dependent variable or whether the experimental manipulation did
what it was supposed to do
- If the observed results were influenced or confounded by extraneous factors, the
researcher will have problems making valid conclusions about the relationship
between the experimental treatment and dependent variable
- If an experiment lacks internal validity, projecting results is not possible
o Demand Characteristics: Experimental design procedures that unintentionally hint to
subjects about the experimenter's hypothesis
o Guinea Pig Effect: An effect on the results of an experiment caused by subjects changing their
normal behaviour/attitudes in order to cooperate with an experimenter
o Hawthorne Effect: An unintended effect on the results of a research experiment caused by
the subjects knowing they are participants

The six major extraneous variables that may jeopardise internal validity are:

1. History
o History Effect: The loss of internal validity caused by specific events in the external
environment, occurring between the first and second measurements that are beyond the
control of the experimenter
o Cohort Effect: A special case of the history effect; a change in the dependent variable
because members of one experimental group experience different historical situations
2. Maturation
o Maturation Effect: An effect on the results of the experiment caused by experimental
subjects maturing or changing over time, through weariness, boredom, age etc.
3. Testing
o Testing Effects/Pretesting Effects: The effect of pretesting may sensitise subjects when
taking a test for the second time
4. Instrumentation
o Instrumentation Effect: An effect on the results of an experiment caused by a change in
the wording of questions, a change in interviewers or other procedures used to measure
the dependent variable
5. Selection
o Selection Effect: A sampling bias that results from differential selection of respondents
for the comparison groups
6. Mortality
o Mortality Effect: Sample attrition (sample bias) that occurs when some subjects withdraw
from the experiment before it is completed

External Validity
o External Validity: The ability of the researcher to generalise the results of the
experiment to other subjects or groups in the population under study and beyond
Student Surrogates
- Concerns the use of university students as experimental subjects
- Time, money and a host of other practical considerations often necessitate the use
of student surrogates as research subjects
- Rather widespread practice in academic studies
- Issue of external validity should be considered as the student population is likely to
be atypical and often not representative of the total population
Extraneous Variables
- A number of extraneous variables may affect the dependent variable, thereby
distorting the experiment
- Not always possible to control all extraneous variables to have a perfect experiment

5. Select and implement an experimental design


There are two broad choices for types of experimental design:
Basic versus Factorial Experimental Designs
- Basic experimental designs focus on a single independent variable
- Factorial experimental designs are more sophisticated; allow for investigation of the
interaction of two or more independent variables
Repeated Measures or Not
o Repeated Measures: An experimental technique in which the same subjects are
exposed to all experimental treatments to eliminate any problems due to subject
differences
- Solves problems caused by subject differences, but causes others, such as demand
characteristics
- Cheaper and more efficient than other experimental designs
Symbolism for Diagramming Experimental Designs

The following symbols will be used in describing the various experimental designs:

X = exposure of a group to an experimental treatment

O = observation or measurement of the dependent variable; if more than one observation or
measurement is taken, subscripts (O1, O2 etc.) indicate temporal order

R = random assignment of test units; R symbolises that individuals selected as subjects for the
experiment are randomly assigned to the experimental groups

The diagrams of experimental designs assume a time flow from left to right.

Quasi-Experimental Designs

o Quasi-Experimental Design: A research design that cannot be classified as a true
experiment because it lacks adequate control for problems associated with the loss of
external or internal validity

The following are three examples of quasi-experimental design:

One-shot design
o One-shot/After-only design: A single measure is recorded after the treatment is
administered: X (experimental treatment) O1 (measurement of sales after the
treatment is administered)

- Lacks any kind of comparison or any means of controlling extraneous variables; there
should be a measure of what happens when the test units have not been exposed
to X, to compare with the measure taken when subjects have been exposed to X
- Under certain circumstances, even though this design lacks internal validity, it is the
only viable choice

One-group pretest-posttest design


o One-group pretest-posttest design: Subjects in the experimental group are measured
before and after the treatment is administered, but there is no control group:
O1 (first observation) X (experimental treatment) O2 (second observation)

- Conclusion based on the difference between O1 & O2 (O2 - O1)


- Offers a comparison of the same individuals before and after training
- Disadvantages include: the length of the time lapse (maturation effect), people may
drop out during the experiment (mortality effect), O2 is not an observation of an
identical test (instrumentation effect)
- Frequently used in marketing research
Static group design
o Static group design: Each subject is identified as a member of either an experimental
group or a control group (exposed/not exposed); no premeasure is taken:
X O1 (Experimental group)
O2 (Control group)

- Results are computed by subtracting the observed results in the control group from
those in the experimental (O1 - O2)
- Disadvantages include: lack of assurance that groups were equal on variables of
interest (systematic differences)
- This option is particularly used when conducting use tests for new products or
brands

The following are examples of better experimental designs:

In the following discussion, the symbol R to the left of the diagram indicates that the first step in a
true experimental design is the randomisation of subject assignment.

Pretest-posttest control group design (before-after with control)


o Pretest-posttest control group design: a true experimental design in which the
experimental group is tested before and after exposure to the treatment and the control
group is tested at the same two times without being exposed to the experimental
treatment: R O1 X O2 (Experimental group)
R O3 O4 (Control group)

- The effect of the experimental treatment equals: (O2 - O1) - (O4 - O3 )


- The effect of all extraneous variables is assumed to be the same on both the
experimental and the control groups (since both groups received the pre-test, no
difference between them is expected)
- A testing effect (disadvantage) is possible when subjects are sensitised to the subject
of the research
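The effect calculation can be checked with hypothetical observations:

```python
# Worked example of the pretest-posttest control group design, using
# hypothetical sales observations (units per week).
o1, o2 = 100, 130  # experimental group: before and after the treatment
o3, o4 = 100, 110  # control group: before and after (no treatment)

# The control group's change estimates the effect of extraneous variables;
# subtracting it isolates the treatment effect.
treatment_effect = (o2 - o1) - (o4 - o3)
print(treatment_effect)  # 30 - 10 = 20
```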

Posttest-only control group design (after-only with control)


o Posttest-only control group: An after-only design in which the experimental group is
tested after exposure to the treatment and the control group is tested at the same time
without having been exposed to the treatment; no premeasure is taken; random
assignment of subjects and treatments occurs:
R X O1 (Experimental group)
R O2 (Control group)

- The effect of the experimental treatment is equal to (O1 - O2)


- The design is to randomly select subjects and randomly assign them to the
experimental or control group
- With only the posttest measurement, the effects of testing and instrument variation
are eliminated
Solomon four-group design
o Solomon four-group design: A true experimental design that combines the pretest-
posttest with the control group design and the posttest-only with the control group
design, thereby providing a means for controlling the interactive testing effect and other
sources of extraneous variation:
R O1 X O2 (Experimental group 1)
R O3 O4 (Control group 1)
R X O5 (Experimental group 2)
R O6 (Control group 2)

- Possible to isolate the effects of the experimental treatment and interactive testing
- Rarely used in marketing research due to the effort, time and costs of implementing
it

In many instances, true experimentation is not possible; the best the researcher can do is
approximate an experimental design.

o Compromise Design: An approximation of an experimental design, which may fall short
of the requirements of random assignment of subjects or treatments to groups
Time Series Designs
o Time series design: An experimental design used when experiments are conducted over long
periods of time; allows researchers to distinguish between temporary and permanent
changes in dependent variables; quasi-experimental, as it does not allow the
researcher full control over the treatment exposure or influence of extraneous variables:
A simple time series design: O1 O2 O3 X O4 O5 O6

- Several observations are taken to identify trends before and after the treatment X is
administered
- This design cannot give the researcher complete assurance that the treatment
caused the trend
- Problems of internal validity are greater than in more tightly controlled before and
after designs for experiments of shorter duration
- Main advantage: its ability to distinguish temporary from permanent changes
(permanent change, temporary change, no change, continuation of trend)

Complex Experimental Designs

o Complex experimental designs: statistical designs that isolate the effect of confounding
extraneous variables or allow for the manipulation of more than one independent variable
in the experiment

The following are examples of complex experimental designs:


Completely Randomised Design
o Completely randomised design: An experimental design that uses a random process to
assign subjects (test units) to treatments in order to investigate the effects of only one
independent variable
- Examples of completely randomised designs include: posttest-only with control
group, pretest-posttest design (before-after) with control group(s) that replicates or
repeats the same treatment on different experimental units

Randomised Block Design


o Randomised block design: An extension of the completely randomised design in which a
single extraneous variable that might affect test units' responses to the treatment is identified
and the effects of this variable are isolated by being blocked out
- by isolating the block effects, one type of extraneous variation is partitioned out and
a more efficient experimental design therefore results

Factorial Designs
o Factorial design: An experiment that investigates the interaction of two or more
independent variables on a single dependent variable
o Main effect: The influence of a single independent variable on a dependent variable
o Interaction effect: The influence on a dependent variable of combinations of two or
more independent variables
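A worked 2x2 example with hypothetical cell means makes the distinction concrete; the factor names and values are invented, and effects are computed here as simple differences of averaged cell means:

```python
# Sketch: main effects and interaction in a hypothetical 2x2 factorial design.
# Cell values are mean sales for each price level / ad type combination.
means = {
    ("low_price", "humorous_ad"): 50,
    ("low_price", "serious_ad"): 40,
    ("high_price", "humorous_ad"): 30,
    ("high_price", "serious_ad"): 36,
}

def row_mean(price):
    return (means[(price, "humorous_ad")] + means[(price, "serious_ad")]) / 2

def col_mean(ad):
    return (means[("low_price", ad)] + means[("high_price", ad)]) / 2

price_effect = row_mean("low_price") - row_mean("high_price")  # main effect of price
ad_effect = col_mean("humorous_ad") - col_mean("serious_ad")   # main effect of ad type
# Interaction: half the difference between the ad effect at low price
# and the ad effect at high price -- nonzero means the effects combine.
interaction = (
    (means[("low_price", "humorous_ad")] - means[("low_price", "serious_ad")])
    - (means[("high_price", "humorous_ad")] - means[("high_price", "serious_ad")])
) / 2

print(price_effect, ad_effect, interaction)  # 12.0 2.0 8.0
```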

6. Address issues of ethics in experimentation


o Debriefing: The process of providing subjects with all pertinent facts about the nature
and purpose of an experiment after its completion
- Expected to counteract negative effects of deception, relieve stress and provide an
educational experience for the subject

Test Marketing: An Application of Field Experiments

o Test marketing: A scientific testing and controlled experimental procedure that provides
an opportunity to measure sales or profit potential for a new product or to test a new
marketing plan under realistic marketing conditions
- Advantages include: no other form of research can beat the real world when it comes to
testing actual purchasing behaviour and consumer acceptance of a product; offers
the opportunity to estimate the outcomes of alternative courses of action

The following steps should be followed in test marketing:

1. Decide whether to test market or not


- Test marketing is an expensive research procedure
- Consider the value of information and time (the average test market runs approximately 9 to 12 months)
- Proceed when exploratory and other research suggest the product will have an acceptable sales volume;
the test market is then used to refine the marketing mix and evaluate the proposed marketing plan
- Main disadvantage: a loss of secrecy to competitors
2. Work out the functions of the test market
- Test marketing performs two useful functions for management: offers the
opportunity to estimate the outcomes of alternative courses of action and allows
management to identify and correct any weaknesses in either the
product/marketing plan
- If a product turns out to be a marketing failure, it does not mean the test market is a
failure, but rather a research success

3. Decide on the type of test market


o Control method of test marketing: A minimarket test using forced distribution in a
small city; retailers are paid for shelf space so that the test marketer can be guaranteed
distribution
o Electronic test markets: A system of test marketing that measures results based on
Universal Product Code data; often scanner-based panels are combined with high-
technology television broadcasting systems to allow experimentation with different
advertising messages via split cable broadcasts or other technology
- Measures immediate impact of commercial television viewing of specific programs
on unit sales volume
o Simulated test markets: A research laboratory in which the traditional shopping process is
compressed into a short amount of time
- Almost always use a computer model of sales to produce estimates of sales volume
- Cannot replace full-scale test marketing, but can allow for early predictions about
the likelihood of success
- If the simulation is not executed identically, the results are no longer valid
o Virtual reality simulated test market: An experiment that attempts to reproduce the
atmosphere of an actual retail store with visually compelling images appearing on a
computer screen
o Online test markets: An online panel used to test market new products and advertising
copy; offers global coverage, but respondents are usually paid

4. Decide the length of the test market


- Test marketing for an adequate period of time minimises potential biases due to
abnormal buying patterns
- Average test market length is 9 to 12 months

5. Decide where to conduct the test market


- Differences in demographic factors and other characteristics among the
experimental or control markets affect the test results
- Factors to consider in test market selection include: population size, demographic
composition and lifestyle considerations, competitive situation, media coverage and
efficiency, media isolation, self-contained trading area, overused test markets and
availability of scanner data
6. Estimate and project the results of the test market

A number of methodological factors may cause problems in estimating the results on a national
level; usually a result of mistakes in the design or execution of the test market.

- Overattention: by paying too much attention to testing a new product, the product
may be more successful than it normally would be
- Unrealistic store conditions: the environment or store conditions implemented
may be a result of research design problems or overattention
- Reading the competitive environment incorrectly: a common mistake is the
assumption that the competitive environment will be the same nationally as it was
in the test market
- Incorrect volume forecasts
- Time lapse

Test market results can be projected through:

- Consumer surveys: most test marketers use these to measure changes in consumer awareness
of and attitudes towards the product and rates of purchasing and repeat purchasing
- Straight trend projections: the simplest method; sales and market
share for the test area are calculated and projected directly
- Ratio of test product sales to total company sales: a ratio of test
product sales to total company sales in the area provides an approximate benchmark for
modifying projections into other markets
- Market penetration x repeat-purchase rate
o Repeat-purchase rate: The percentage of purchasers who make a second or repeat
purchase
o Market penetration: The percentage of potential customers who make at least one trial
purchase
- The formula to calculate market share for products that are subject to repeat
purchases is:
Market share = Market penetration (trial buyers) x Repeat-purchase rate
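A quick worked example of this formula, with hypothetical test market figures:

```python
# Worked example of: market share = market penetration x repeat-purchase rate.
# The figures are hypothetical.
market_penetration = 0.40    # 40% of potential customers made a trial purchase
repeat_purchase_rate = 0.30  # 30% of trial buyers purchased again

market_share = market_penetration * repeat_purchase_rate
print(market_share)  # a 12% projected market share
```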

PART EIGHT: MEASUREMENT

The measurement process:

1. Determine what is to be measured


o Concept/Construct: A generalised idea about a class of objects, attributes, occurrences or
processes, such as age, gender, brand loyalty, personality, channel power, happiness etc.

- The concepts relevant to the problem must be identified first


- Concepts/constructs must be operational in order to be measured
- The definition of concepts/constructs determines what is to be measured
- True measurement of concepts requires a process of precisely assigning scores or numbers
to the attributes of people or objects

2. Determine how it is to be measured


o Operational definition: An explanation that gives meaning to a concept by specifying the
activities or operations necessary to measure it
o Conceptual definition: A verbal explanation of the meaning of a concept; it defines what the
concept is and what it is not

3. Apply a rule of measurement


o Rule: A guide that tells someone what to do
o Scale: A series of items that are arranged progressively according to value or magnitude; a
series into which an item can be placed according to its quantification

The four types of scales are:


Nominal Scale
o Nominal scale: Numbers or letters assigned to objects serve as labels for identification or
classification
- Simplest type of scale (counting)
- Descriptive Statistics: Frequencies and Mode

Ordinal Scale
o Ordinal scale: Arranges objects or alternatives according to their magnitude in an ordered
relationship
- Indicates rank order, but the degree of distance or the intervals between the ranks are
unknown (counting and rank order)
- Descriptive Statistics: Frequencies, Mode, Median and Range
Interval Scale
o Interval scale: Arranges objects according to their magnitudes and distinguishes this
ordered arrangement in units of equal intervals
- Can comment about the magnitude of differences or compare the average differences on
the attributes that were measured (arithmetic operations that preserve order and relative
magnitudes)
- Cannot comment on the actual strength of the attribute towards an object (just a figure)
- Used to measure psychological attributes
- Descriptive Statistics: Mean, Median, Standard Deviation and Variance

Ratio Scale
o Ratio scale: Has absolute rather than relative quantities, and an absolute zero where there is
an absence of a given attribute
- Possesses absolute zeroes and interval properties (all arithmetic)
- Descriptive Statistics: Mean, Median, Standard Deviation and Variance
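The pairing of scale types with permissible statistics can be illustrated with Python's statistics module; all data values are hypothetical:

```python
import statistics

# Sketch: which descriptive statistics each scale type supports.
brands = ["A", "B", "A", "C", "A"]     # nominal: labels only -> frequencies, mode
ranks = [1, 2, 2, 3, 5]                # ordinal: order known -> median, range
ratings = [3, 4, 5, 4, 4]              # interval: equal units -> mean, std deviation
spend = [10.0, 0.0, 25.5, 40.0, 12.5]  # ratio: true zero -> all arithmetic

mode_brand = statistics.mode(brands)
median_rank = statistics.median(ranks)
mean_rating = statistics.mean(ratings)
sd_rating = statistics.stdev(ratings)
mean_spend = sum(spend) / len(spend)   # meaningful because zero is absolute

print(mode_brand, median_rank, mean_rating, sd_rating, mean_spend)
```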

4. Determine if the measure consists of a number of measures


o Attribute: A single characteristic or fundamental feature of an object, person, situation or
issue
o Index Measures/Composite Measures: Multi-item instruments for measuring a single
concept with several attributes
o Summated Scale: a scale created by simply adding together the response of each item
making up the composite measure; the scores can be averaged by the number of items
making up the composite scale
o Reverse Coding: where the value assigned for a response is treated oppositely from the
other items (e.g. 5 becomes 1, 4 becomes 2, etc.)
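Reverse coding and the summated/averaged composite can be sketched as follows (the 5-point scale and the responses are illustrative):

```python
# Minimal sketch of reverse coding and a summated scale, assuming
# 5-point items: reversed items map 5->1, 4->2, 3->3, 2->4, 1->5.
def reverse_code(response, scale_max=5):
    """Flip a response on a 1..scale_max scale (e.g. 5 -> 1, 4 -> 2)."""
    return scale_max + 1 - response

responses = [4, 5, 2, 4]        # four items measuring one concept
reversed_items = {2}            # item at index 2 is negatively worded

recoded = [reverse_code(r) if i in reversed_items else r
           for i, r in enumerate(responses)]
summated = sum(recoded)                  # summated scale score
averaged = summated / len(recoded)       # averaged composite score
print(recoded, summated, averaged)       # [4, 5, 4, 4] 17 4.25
```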

5. Determine the type of attitude and scale to be used to measure it


o Attitude: An enduring disposition to consistently respond in a given manner to various
aspects of the world; composed of affective, cognitive and behavioural components
- Affective reflects an individual's general feelings or emotions towards an object
- Cognitive represents one's awareness of and knowledge about an object
- Behavioural includes buying intentions and behavioural expectations and reflects a
predisposition to action

To measure an attitude, focus on the way an individual responds (by verbal expression or overt
behaviour) to some stimulus.

o Hypothetical Construct: describes a variable that is not directly observable but is
measurable through indirect indicators, such as verbal expression or overt behaviour

The Attitude Measuring Process

Attitudes can be measured indirectly using qualitative exploratory techniques. To measure attitudes
directly, verbal statements concerning affect, belief or behaviour are obtained.
Obtaining verbal statements from respondents generally requires them to perform the following:
o Ranking: A measurement task that requires the respondent to rank order a small number of
stores, brands or objects on the basis of overall preference or some characteristic of the
stimulus
o Rating: A measurement task that requires respondents to estimate the magnitude of a
characteristic or quality that a brand, store or object possesses
- use of a quantitative score, along a continuum
o Sorting: A measurement task that presents a respondent with several objects or product
concepts and requires the respondent to arrange the objects into piles or to classify the
product concepts
o Choice: A measurement task that identifies preferences by requiring the respondent to choose
between two or more alternatives

The following describes the most popular techniques for measuring attitudes.

1. Attitudes Rating Scales


- Most common practice in marketing research

Simple Attitude Scales


- Most basic form in which the scale requires the respondent to either agree or disagree with
a statement or response to a question
- Only has the properties of a nominal scale
- Limited types of mathematical analysis may be used

Category Scales
o Category scale: A rating scale that consists of several response categories often providing
respondents with alternatives to indicate positions on a continuum
- Question wording is an extremely important factor in the usefulness of this scale

Likert Scale (method of summated ratings)


o Likert scale: A measure of attitudes designed to allow respondents to rate how strongly they
agree/disagree with carefully constructed statements, ranging from very positive to very
negative; several scale items may be used to form a summated index
- Simple to administer
- Difficult to know what a single-summated score means

Semantic Differential
o Semantic differential: A series of 7-point attitudes rating scales that use bipolar adjectives to
anchor the beginning and end of each scale
- For scoring purposes, a weight is assigned to each position on the rating scale

Numerical Scales
o Numerical scale: Similar to the semantic differential except that it uses numbers instead of
verbal descriptions as response options to identify response positions
- Uses bipolar adjectives in the same manner as the semantic differential
Stapel Scale
o Stapel scale: A measure of attitudes that consists of a single adjective in the centre of an
even number of numerical values
- Originally developed to measure simultaneously the direction and intensity of an attitude
- Easy to construct and administer

Constant-Sum Scale
o Constant-Sum scale: Respondents are asked to divide a constant sum to indicate the relative
importance of attributes, such as "divide 100 points among the following"
- Works best with respondents who have high educational levels
- Typically the constant-sum scale is a rating technique, but with minor modifications it can
classify as a sorting technique

Graphic Rating Scales


o Graphic rating scale: Allows respondents to rate an object by choosing any point along a
graphic continuum
- Typically a respondent's score is determined by measuring the length (in mm) from one end
of the graphic continuum to the point marked by the respondent
- Alternatively, the researcher may divide the line into predetermined scoring categories
(lengths) and record responses accordingly

Measuring Behavioural Intention

Behavioural Differential
o Behavioural differential: a rating scale used to measure the behavioural intentions (similar to a
semantic differential) of a subject towards an object or category of objects

Ranking
- The most basic is to develop an ordinal scale that asks respondents to rank order (from most to least
preferred) a set of objects or attributes
o Paired Comparison: A measurement technique that involves presenting the respondent with two
objects and asking the respondent to pick the preferred object; more than two may be presented but
comparisons are made in pairs

Sorting
- Requires respondents to indicate their attitudes or beliefs by arranging items on the basis of perceived
similarity or some other attribute

Randomised Response Questions


o Randomised response question: A research procedure used for dealing with sensitive topics in which
a random procedure determines which of two questions a respondent will be asked to answer
- Although estimates are subject to error, respondents remain anonymous and response bias
is reduced
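The notes do not give a formula, but under the unrelated-question variant (an assumption of this example) the sensitive-trait proportion can be backed out of the observed answers like this:

```python
# Sketch of the randomised-response estimate, assuming the
# unrelated-question variant: with probability p the respondent answers
# the sensitive question, otherwise an innocuous one with known
# "yes" rate q. The observed "yes" proportion is then
#   observed = p * pi + (1 - p) * q,
# so the sensitive-trait estimate is pi = (observed - (1 - p) * q) / p.
def estimate_sensitive_proportion(observed_yes, p, q):
    return (observed_yes - (1 - p) * q) / p

# e.g. 40% said "yes", a coin flip (p = 0.5) decides the question, and the
# innocuous question ("born January to June?") has q = 0.5
print(estimate_sensitive_proportion(0.40, 0.5, 0.5))  # 0.30
```

No individual answer reveals which question was asked, which is why response bias is reduced at the cost of some added sampling error.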
Selecting a Measurement Scale: Some Practical Decisions

The following questions will help focus the choice of measurement scale:

1. Is ranking, sorting, rating or choice technique best?


- the answer to this is determined largely by the problem definition and the type/depth of
statistical analysis desired

2. Should a monadic or comparative scale be used?


- If the scale to be used is not a ratio scale, the researcher must decide whether to include a
monadic or comparative rating scale
o Monadic rating scale: Asks respondents about a single concept in isolation
o Comparative rating scale: Asks respondents to rate a concept in comparison with a
benchmark explicitly used as a frame of reference

3. What type of category labels, if any, will be used for the rating scale?
- Types include verbal labels, numerical labels and unlisted choices
- The maturity and educational levels of the respondent will influence this decision

4. How many scale categories or response positions are needed to accurately measure an
attitude?
- The researcher must determine the number of meaningful positions that is best for the
specific project
- A matter of sensitivity at the operational levels, rather than the conceptual

5. Should a balanced or unbalanced rating scale be chosen?


o Balanced rating scale: A fixed alternative rating scale with an equal number of positive and
negative categories; a neutral point or point of indifference is at the centre of the scale
o Unbalanced rating scale: A fixed alternative rating scale that has more response categories
at one end than the other, i.e. an unequal number of positive and negative categories

- The nature of the concept or the researcher's knowledge about attitudes towards the
stimulus to be measured generally will determine the choice

6. Should a scale that forces a choice among predetermined options be used?


o Forced-choice rating scale: A fixed alternative rating scale that requires respondents to
choose one of the fixed alternatives
o Non-forced-choice rating scale: A fixed-alternative rating scale that provides a no opinion
category or that allows respondents to indicate that they cannot say which alternative is
their choice

- The nature of the concept or the researcher's knowledge about attitudes towards the
stimulus to be measured generally will determine the choice

7. Should a single measure or index measure be used?


- Consider how complex or dimensional the issue/problem definition being investigated is
Attitudes and Intentions

Behaviour is often modelled as a function of intentions, which in turn are considered a function of an
individual's beliefs; this type of research is sometimes called the theory of reasoned action approach
or the multi-attribute model.

o Multi-attribute model: A means of measuring an attitude to an object by asking respondents
to evaluate each part of the object; attitude scores are based on the product of belief
strength and evaluation of the consequences
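The "product of belief strength and evaluation" can be sketched as a Fishbein-style score (the attributes, scale ranges and numbers below are illustrative, not from the notes):

```python
# Hedged sketch of a multi-attribute attitude score:
# attitude = sum over attributes of (belief strength b_i * evaluation e_i).
beliefs = {"price": 6, "quality": 5, "service": 3}      # b_i on a 1..7 scale
evaluations = {"price": 2, "quality": 3, "service": 1}  # e_i on a -3..+3 scale

attitude = sum(beliefs[a] * evaluations[a] for a in beliefs)
print(attitude)  # 6*2 + 5*3 + 3*1 = 30
```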

6. Evaluate the measure

The four major criteria for evaluating measurements are:

Reliability
o Reliability: The degree to which measures are free from random error and therefore yield
consistent results
- Is a necessary condition for validity, but a reliable instrument may not be valid
- Two dimensions underlying the concept of reliability: repeatability and internal consistency

o Test-Retest method: Administering the same scale or measure to the same respondents at
two separate points in time to test for stability
- High stability correlation/consistency → high degree of reliability
- Disadvantages include: the desensitisation of the respondents, the maturation effect
o Split-Half method: A method for assessing internal consistency by checking the results of
one-half of a set of scaled items against the results from the other half
- Most basic method
o Equivalent-Form method: A method that measures the correlation between alternative
instruments, designed to be as equivalent as possible, administered to the same group of
subjects
- High correlation between the two forms → high reliability

Validity
o Validity: The ability of a scale to measure what was intended to be measured
- The question of validity expresses the researcher's concern with accurate measurement

The three basic approaches to establishing validity are:

1. Face/Content Validity
o Face/Content validity: Professional agreement that a scale's content logically appears to
accurately reflect what was intended to be measured
- Clear, understandable questions, adequate coverage of the concept

2. Criterion Validity
o Criterion validity: The ability of a measure to correlate with other standard measures of the
same construct or established criterion
- Provides a more rigorous empirical test
- Attempts to answer the question "Does my measure correlate with other measures of the
same construct?"

3. Construct Validity
o Construct validity: The ability of a measure to provide empirical evidence consistent with a
theory based on the concepts
- Established during the statistical analysis of the data
- Implies that empirical evidence generated by the measure is consistent with the theoretical
logic behind the concepts
- Is a complex method of establishing validity

Sensitivity
o Sensitivity: A measurement instrument's ability to accurately measure variability in stimuli
or responses
- The more sensitive a measure, the more categories on the scale needed
- Index measures are more sensitive than single-item scales

Practicality
- Practical measures are shorter (fewer items), while still being sensitive, easy to administer,
timely and simple enough to understand
- Results from practical measures should be easy to interpret

PART NINE: QUESTIONNAIRE DESIGN

The following are steps to take when designing a questionnaire:

1. Specify what information will be sought


- Before construction, the researcher should first list, in order of importance, the specific
research objectives and the information required to meet those objectives
- Sometimes research objectives can be expressed as hypotheses

2. Determine the type of questionnaire and survey research method


- Depends on the type of respondents and the nature of the information needed

3. Determine the content of individual questions


- Consider whether the question is necessary, whether respondents will have the
necessary information to respond, or whether several questions are needed instead of one

4. Determine the form of response to each question

Two types of questions can be identified:

Open-Ended Response Questions


o Open-ended response question: poses some question and asks the respondent to answer in
his/her own words
- Free answer questions
- More beneficial for exploratory research
- Cost of administration is substantially higher
- Difficult to categorise and analyse

Fixed-Alternative Questions
o Fixed-alternative question: respondents are given specific, limited-alternative responses and
asked to choose the one closest to their own viewpoint
- Facilitates coding, tabulating and ultimately interpreting the data

o Simple-dichotomy (dichotomous-alternative) question: A fixed alternative question that
requires the respondent to choose one of two alternatives
o Determinant-choice question: A fixed alternative question that requires the respondent to
choose one response from multiple alternatives
o Frequency-determination question: A fixed alternative question that asks for an answer
about general frequency of occurrence
o Checklist question: A fixed alternative question that allows the respondent to provide
multiple answers to a single question by checking off items
5. Determine the wording of each question

The following are guidelines to help prevent the most common mistakes:

Avoid complexity: use simple, conversational language


- Should be readily understandable to all respondents
- Note that some words may have multiple meanings

Avoid leading or loaded questions


o Leading question: A question that suggests or implies certain answers
o Loaded question: A question that suggests a socially desirable answer or is emotionally charged
- Major sources of bias in question wording
o Counterbiasing statement: An introductory statement or preamble to a potentially
embarrassing question that reduces the respondent's reluctance to answer by suggesting
that certain behaviour is not unusual
o Split-ballot technique: Using two alternative phrasings of the same questions for respective
halves of a sample to elicit a more accurate total response than would a single phrasing

Avoid ambiguity: be as specific as possible


- Many words have different meanings to different people

Avoid double-barrelled items


o Double-barrelled question: A question that may induce bias because it covers two issues at
once
- Results may be exceedingly difficult to interpret

Avoid making assumptions


- No inclusions of any implicit assumptions in the question

Avoid burdensome questions that may tax the respondents memory


- Unaided recall versus aided recall: one gives the respondent no clue as to the brand of
interest, while the other provides a clue to help jog the memory

6. Determine question sequence


- not ideal to ask demographic or classificatory questions at the beginning; only at the end
after rapport has been established
o Order bias: bias caused by the influence of earlier questions in a questionnaire or by an
answer's position in a set of answers
- Can distort survey results
o Funnel technique: Asking general questions before specific ones to obtain unbiased
responses
- Providing good survey flow makes a questionnaire easy to follow → improves response quality
o Filter question: A question that screens respondents who are not qualified to answer the
following question
o Branch question: A filter question used to determine which version of a second question will
be asked
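The filter-and-branch logic can be sketched in a few lines (the question wording and numbering here are hypothetical, not from the notes):

```python
# Minimal sketch of a filter question with a branch: the filter screens out
# respondents who are not qualified for the follow-up question.
def next_question(owns_car: bool) -> str:
    if owns_car:
        # branch: version of the follow-up for qualified respondents
        return "Q5a: Which brand of car do you own?"
    # unqualified respondents skip straight to the next section
    return "Q6: How do you usually travel to work?"

print(next_question(True))
print(next_question(False))
```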
7. Determine physical characteristics of the questionnaire
- Good layout and physical attractiveness are crucial
Traditional Questionnaires
o Multiple-grid question: Several similar questions arranged in a grid format

Internet Questionnaires
- Ensure the questionnaire is compatible with respondents' computers

o Push button: A small outlined area in a dialogue box (rectangle/arrow etc.) that
respondents click on to select an option or perform a function (submit, next etc.)
o Status bar: a visual indicator that tells the respondent what portion of the survey he or she
has completed
o Radio button: a circular icon resembling a button that activates one response choice and
deactivates others when a respondent clicks on it
o Drop-down box: a space-saving device that reveals responses when they are needed but
otherwise hides them from view
o Check box: a small graphic box, next to an answer that a respondent clicks on to choose an
answer (check mark or X)
o Open-ended box: a box where respondents can type in their own answers to open ended
questions
o Pop-up boxes: boxes that appear at selected points and contain information or instructions
for respondents

The following lists software that makes questionnaires interactive:

o Variable piping software: software that allows variables to be inserted into an internet
questionnaire as a respondent is completing it
o Error trapping: using software to control the flow of an internet questionnaire, for example
to prevent respondents from failing to answer a question
o Forced answering software: software that prevents respondents from continuing with an
internet questionnaire if they fail to answer a question
o Interactive help desk: a live, real-time support feature that solves problems or answers
questions respondents may encounter in completing the questionnaire

8. Re-examine and revise steps 1-7 if necessary


o Preliminary tabulation: a tabulation of the results of a pretest to help determine whether
the questionnaire will meet the objectives of the research

9. Pretest the questionnaire


- Provides clarifications for any problems as well as estimates for potential response rates

PART TEN: SAMPLING: SAMPLE DESIGN AND SAMPLE SIZE

Sampling Terminology

o Sample: A subset or some part of a larger population


o Population: Any complete group of entities that share some common set of characteristics
o Population element: An individual member of a population
o Census: An investigation of all the individual elements that make up a population

Why Sample?

The following are reasons why a sample should be taken rather than a complete census:

Pragmatic Reasons
- Sampling cuts costs and gathers vital information quickly
- May not want to expose too many people in the population to the research

Accurate and Reliable Results


- Most properly selected samples give sufficiently accurate results

Destruction of Test Units


- Eliminating the alternatives

Practical Sampling Concepts

The following are the stages in the selection of a sample:

1. Defining the target population


- Vital to carefully define the target population so that a manageable basis for the sample
design can be formed
- Answering questions about the crucial characteristics of the population (a useful technique for
defining the target population)

2. Selecting a sampling frame


o Sampling frame: A list of elements from which a sample may be drawn (also called a
working population); when a list is unfeasible, the sampling frame is a highly detailed
explanation of how a representative subset of the target population will be selected
- A practical operationalization of the target population
o Sampling frame error: An error that occurs when certain sample elements are not listed or
are not accurately represented in a sampling frame
- Includes underrepresentation and overrepresentation
o Sampling units: A single element or group of elements subject to selection in the sample
- Types include: primary sampling units (PSUs), secondary sampling units (if two stages of
sampling are necessary) and tertiary sampling units (if three stages are necessary)
o Random sampling error: The difference between the sample results and the result of a
census conducted using identical procedures; a statistical fluctuation that occurs because of
chance variations in the element selected for a sample
- The difference between the true value and the value obtained from a sample (different
values despite the actual known true value)
- Increasing sample size may reduce random sampling error
o Systematic (nonsampling) error: Error resulting from some imperfect aspect of the research
design, such as mistakes in sample selection, sampling frame error or nonresponses from
individuals
- Errors resulting from the researcher, not chance or random influence or variation
o Nonresponse error: occurs when certain individuals fail or refuse to respond resulting in a
skew in the statistical result of the sample as well as the sample not being representative of
the population

3. Determine if a probability or nonprobability sampling method will be chosen

The main alternative sampling plans may be grouped into two categories:

Probability Sampling
o Probability sampling: A sampling technique in which every member of the population has a
known, non-zero probability of selection

o Simple random sampling: A sampling procedure that assures each element in the
population an equal chance of being included in the sample

o Systematic sampling: A sampling procedure in which a starting point is selected by a
random process and then every nth name on the list is selected
- the value of n is determined by dividing the sampling frame size by the required sample size
- not a truly random sample
- possibility of periodicity bias if a list has a systematic pattern (rarely occurs)
- almost guarantees a representative sample
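The procedure above can be sketched directly (the frame of 100 elements and the required sample size are illustrative):

```python
# Minimal sketch of systematic sampling: pick a random starting point,
# then take every nth element, where n = frame size // required sample size.
import random

frame = list(range(1, 101))          # sampling frame of 100 elements
sample_size = 10
n = len(frame) // sample_size        # skip interval: 10

random.seed(42)                      # seeded only to make the example reproducible
start = random.randrange(n)          # random starting point in [0, n)
sample = frame[start::n]             # every nth element from the start
print(len(sample), sample[:3])
```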

o Stratified sampling: A probability sampling procedure in which simple random subsamples
that are more or less equal on some characteristic are drawn from within
each stratum of the population
- Dividing the sampling frame into strata (layers) and then randomly sampling within each
stratum
- The stratification variable is usually a categorical variable or one that can easily be converted
into categories (subgroups)
Proportional versus Disproportional Sampling

o Proportional stratified sampling: A stratified sample in which the number of sampling units
drawn from each stratum is in proportion to the population size of that stratum
o Disproportional stratified sampling: A stratified sample in which the sample size for each
stratum is allocated according to analytical considerations
- Underlying logic: as variability increases, sample size must increase to provide accurate
estimates
- The strata that exhibit the greatest variability are sampled more heavily to increase
sample efficiency (produce smaller random sampling error)
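Proportional allocation can be sketched as follows (the strata names and sizes are invented for illustration):

```python
# Sketch of proportional stratified allocation: each stratum's sample size
# is in proportion to its share of the population.
population = {"metro": 6000, "regional": 3000, "rural": 1000}
total_n = 200  # required overall sample size

N = sum(population.values())
allocation = {stratum: round(total_n * size / N)
              for stratum, size in population.items()}
print(allocation)  # {'metro': 120, 'regional': 60, 'rural': 20}
```

Disproportional allocation would instead weight each stratum's share by its variability, sampling the most variable strata more heavily.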

Cluster Sampling

o Cluster sampling: An economically efficient sampling technique in which the primary
sampling unit is not the individual element in the population but a large cluster of elements;
clusters are selected randomly
- Random selection of places (clusters)
- Frequently used when lists of the sample population are not available
- Ideally a cluster should be as heterogeneous as the population itself (a mirror image of the
population); however, a problem may arise if the characteristics within a cluster are too similar

Nonprobability Sampling
o Nonprobability sampling: A sampling technique in which units of the sample are selected on
the basis of personal judgement or convenience; the probability of any particular member of
the population being chosen is unknown
- No appropriate statistical techniques for measuring random sampling error from a
nonprobability sample
- Used less often than probability sampling, but is often more practical when an
accurate list for a sampling frame does not exist

o Convenience sampling: The sampling procedure of obtaining those people or units that are most
conveniently available (also called haphazard or accidental sampling)
- The potential for bias should be acknowledged in the research report

o Judgement/Purposive sampling: A nonprobability sampling technique in which an
experienced individual selects the sample based on personal judgement about some
appropriate characteristic of the sample members
- Samples are selected to satisfy a specific purpose
o Expert interview: A special form of judgement sampling where the researcher decides that
a certain group of people have special knowledge or expertise that may substitute for a
much larger sample of non-expert individuals
o Quota sampling: A nonprobability sampling procedure that ensures that various subgroups
of a population will be represented on pertinent characteristics to the exact extent that the
investigator desires
- Not the same as stratified sampling as it is still selected largely through a convenience or
judgemental procedure (random selection procedure does not exist)

o Snowball sampling: A sampling procedure in which initial respondents are selected by
probability methods and additional respondents are obtained from information provided by
the initial respondents
- Used to locate members of rare populations by referrals

4. Plan procedure for selecting sampling units

The following are key points to consider when forming an appropriate sample design:

Degree of Accuracy
- The researcher must be willing to spend time and money needed to achieve accuracy

Resources
- The costs associated with different sampling techniques vary
- Consider the cost of research versus the value of the information

Time
- How much time is available

Advance knowledge of the population


- The availability or lack of lists of population characteristics/members
- May dictate a preliminary study (short telephone survey etc.) to generate information to
build a sampling frame for the primary study OR a lack may rule out systematic sampling,
stratified sampling or others

National versus local project


- Geographic proximity of population elements will influence sample design

5. Determine sample size

There are various formulae to decide on a sample, but these are only a very crude guide:

Random Error and Sample Size


- In theory, sample size can be calculated using the formula for a confidence interval
- The return from a reduction in sampling error is quickly outweighed by the cost of asking more
people
- A larger sample may just give the researcher more confidence in a wrong answer
Systematic Error and Sample Size
- Sample size has no effect on systematic error (bias in the sampling frame or poorly worded
questions)
- In much social research, systematic error > random error
- Difficult to calculate systematic error

Factors in determining sample size for questions involving means


These following three factors affect how large our sample should be:
1. The variance or heterogeneity of the population
- In statistical terms, it refers to the standard deviation of the population
- As heterogeneity increases, so must the sample size

2. The magnitude of acceptable error


- Indicates how precise the estimate must be according to the researcher's specification of
the range of error

3. The confidence interval


- Typical use of the 95 percent confidence interval; however, there is nothing sacred about
the 0.05 chance level
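The three factors above combine in the usual confidence-interval-based sample-size formula for a mean, n = (Z s / E)^2; the numbers below are illustrative:

```python
# Sketch of the sample-size formula for a mean: n = (Z * s / E)^2, where
# Z reflects the confidence level, s the estimated population standard
# deviation (heterogeneity) and E the magnitude of acceptable error.
import math

def sample_size_for_mean(z: float, s: float, e: float) -> int:
    # round up: a fractional respondent cannot be surveyed
    return math.ceil((z * s / e) ** 2)

# 95% confidence (Z of about 1.96), s = 29, acceptable error E = 2
print(sample_size_for_mean(1.96, 29, 2))  # 808
```

Note that doubling the acceptable error E cuts the required n to a quarter, which is why the return from extra respondents diminishes so quickly.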

The influence of population size on sample size


- The size of the population DOES NOT have a major effect on the sample size, but rather the
variance of the population has the largest effect

Determining sample size on the basis of judgement


- Estimating sample size using formulae is often impractical, as the inputs are difficult to
calculate precisely for many studies
- Researchers may instead rely on their experience and use a sample size similar to those used in
previous studies

6. Select actual sampling units

7. Conduct fieldwork

PART ELEVEN: EDITING AND CODING: TRANSFORMING RAW DATA INTO INFORMATION

Stages of Data Analysis

Before the Survey

PRETEST, PRETEST, PRETEST.

- Non sampling error > sampling error in questionnaire surveys


- Fix any problems with interpretation of questions and flow

During the Survey

o Editing: The process of checking and adjusting data for omissions, legibility and consistency,
and readying them for coding and storage
- Careful editing makes the coding job easier
- Purpose is to ensure completeness, consistency and readability of the data to be transferred
to storage
o Coding: The process of assigning a numerical score or other character symbol to previously
edited data
After the Survey Responses

o Field editing: Preliminary editing on the same day as the interview to catch technical
omissions, check legibility and handwriting and clarify responses that are logically or
conceptually inconsistent
- Purpose is to check that the questionnaire has been answered as well as possible

After the Survey

o Data cleaning: A process used to detect inaccurate, incomplete or unreasonable data and
then improve data quality through correction of detected errors and omissions
- final clean-up to prepare for analysis
- ensure all codes are legitimate; create summary tabulation for all variables to ensure that
only the acceptable values are present

Coding

o Codes: Rules for interpreting, classifying and recording data in the coding process; also the
actual numerical or other character symbols assigned to the raw data
- Researchers organise coded data into field, records and files
o Field: A collection of characters that represents a single type of data, such as the answer to
a question
o Record: A collection of related fields: the answers to all questions by one respondent
o File: A collection of related records, with accompanying information about the nature of the
data

Code Construction

There are two basic rules for code construction:

1. The coding categories should be exhaustive


- Coding categories should be provided for all subjects, objects or responses
- Depending on the class (typical or not), trouble with code construction may arise

2. The coding categories should be mutually exclusive and independent


- There should be no overlap among the categories; this ensures that a subject or response can
only be placed in one category
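Both rules can be checked mechanically; the age brackets below are hypothetical coding categories used only for illustration:

```python
# Sketch: assigning a numeric code while enforcing the two rules of code
# construction (exhaustive, mutually exclusive) for illustrative age brackets.
age_categories = {1: range(18, 25), 2: range(25, 35), 3: range(35, 120)}

def assign_code(age: int) -> int:
    matches = [code for code, bracket in age_categories.items() if age in bracket]
    # exactly one match: exhaustive (at least one) and exclusive (at most one)
    assert len(matches) == 1, "categories must be exhaustive and mutually exclusive"
    return matches[0]

print(assign_code(30))  # falls in the 25-34 bracket: code 2
```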

Coding Open-Ended Questions

- Purpose of coding such questions is to reduce the large number of individual responses to a few
general categories of answers that can be assigned numerical codes
- Code construction in these situations necessarily must reflect the judgement of the researcher
- Major objective is to accurately transfer the meanings from the written responses to numeric
codes
- Recognise that the key idea in this process is that code building is based on thoughts, not just
words
- The end result should be a list, in an abbreviated and orderly form, of all the comments and
thoughts given in answers to the question
o Test tabulation: Tallying a small sample of the total number of replies to a particular question in
order to construct coding categories

o Code book: A book that identifies each variable in a study and gives the variable's description,
code name and position in the data matrix

o Recode: To use a computer to convert original codes used for raw data into codes that are
more suitable for analysis
- Often used when a researcher measures attitudes with a series of positive and negative
statements (so all items reflect the same order of magnitude)

PART TWELVE: UNIVARIATE STATISTICAL ANALYSIS: A RECAP OF INFERENTIAL STATISTICS

o Descriptive Statistics: descriptions of the characteristics of a sample/population


o Inferential Statistics: used to make conclusions about a population from a sample of that
population based on evidence and reasoning
- Prime purpose is to make a judgement about the population or the collection of all
elements about which one seeks information

o Sample Statistics: Variables in a sample or measures computed from sample data


o Population Parameters: Variables in a population or measured characteristics of the
population
- Generally the population parameters are not known → samples are used to make
inferences about population parameters

Making Data Usable

To make data usable, the information collated must be organised and summarised

o Frequency Distribution/ Frequency Table: the process begins with recording the number of
times a particular value of a variable occurs
- One of the most common means of summarising a set of data
o Percentage Distribution/ Distribution of Relative Frequency: constructed by dividing
the frequency of each value by the total number of observations and multiplying by 100
o Probability: is the long-run relative frequency with which an event will occur
- Conceptually the same as a percentage distribution except the data is converted into
probabilities (0 to 1)
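The three summaries above can be sketched in a few lines of plain Python; the ratings below are made-up survey data for illustration:

```python
from collections import Counter

# Hypothetical survey data: satisfaction ratings on a 1-5 scale
ratings = [4, 5, 3, 4, 4, 2, 5, 4, 3, 4]
n = len(ratings)

# Frequency distribution: how often each value occurs
freq = Counter(ratings)

# Percentage distribution: frequency / total observations * 100
pct = {value: count / n * 100 for value, count in freq.items()}

# Probability (relative frequency): percentages rescaled to the 0-1 range;
# across all values the probabilities sum to 1
prob = {value: count / n for value, count in freq.items()}

print(freq[4], pct[4])  # 5 50.0
```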

Measures of Central Tendency

Central Tendency (the middle area of the frequency distribution) can be measured in three ways:

1. The Mean
o Mean: The arithmetic average

- REFER TO TEXTBOOK -

2. The Median
o Median: The midpoint; the value below which half the values in a distribution fall

- REFER TO TEXTBOOK -
3. The Mode
o Mode: The value that occurs the most often
- Most often used when the data is ordinal or nominal scaled
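All three measures are available in Python's standard library; the data below is hypothetical:

```python
import statistics

# Hypothetical interval-scaled responses
data = [2, 3, 3, 5, 7, 10]

mean = statistics.mean(data)      # arithmetic average
median = statistics.median(data)  # midpoint of the ordered values
mode = statistics.mode(data)      # most frequently occurring value

print(mean, median, mode)
```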

Measures of Dispersion

1. The Range
o Range: The distance between the smallest and largest values of a frequency distribution
o Interquartile Range: The range between the lower quartile (lowest 25%) and the upper
quartile (highest 25%)

2. Variance
o Variance: A measure of variability or dispersion; its square root is the standard deviation

- REFER TO TEXTBOOK -

3. Standard Deviation
o Standard Deviation: A quantitative index of a distribution's spread or variability; the
square root of the variance of a distribution
- measures the dispersion in a sample
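The three dispersion measures, computed on a small made-up sample (note that `statistics.variance` uses the sample formula with an n - 1 denominator):

```python
import statistics

# Hypothetical data, e.g. number of store visits per month
data = [2, 4, 4, 4, 5, 5, 7, 9]

value_range = max(data) - min(data)     # largest minus smallest value
sample_var = statistics.variance(data)  # sample variance (n - 1 denominator)
sample_sd = statistics.stdev(data)      # standard deviation = sqrt(variance)

print(value_range)  # 7
```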

The Normal Distribution

o Normal Distribution: A symmetrical, bell-shaped distribution that describes the expected
probability distribution of many chance occurrences

o Standardised Normal Distribution: A purely theoretical probability distribution that reflects
a specific normal curve for the standardised value, Z
- Symmetrical about its mean
- The mean identifies the normal curve's highest point (the mode)
- Has an infinite number of cases (continuous distribution)
- Area under the curve has a probability density = 1.0
- Mean = 0, standard deviation = 1

- REFER TO TEXTBOOK -
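Standardising a value and reading off an area under the curve needs only the standard library; the scores below (mean 100, SD 15) are invented for illustration:

```python
import math

# Hypothetical raw score from a distribution with mean 100 and SD 15
x, mu, sigma = 115, 100, 15
z = (x - mu) / sigma  # Z = (X - mean) / SD

# Cumulative probability below z on the standardised normal curve,
# computed from the error function (no external libraries needed)
p = 0.5 * (1 + math.erf(z / math.sqrt(2)))
print(z, round(p, 4))  # 1.0 0.8413
```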

Population Distribution, Sample Distribution and Sampling Distribution

o Population Distribution: A frequency distribution of the elements of a population


o Sample Distribution: A frequency distribution of a sample
o Sampling Distribution: A theoretical probability distribution of sample means for all possible
samples of a certain size drawn from a particular population
o Standard Error of the Mean: The standard deviation of the sampling distribution
Central Limit Theorem

o Central-Limit Theorem: The theory that, as the sample size increases, the distribution of the
sample means of size n, randomly selected, approaches a normal distribution
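The theorem can be seen in a quick simulation: draw repeated samples from a decidedly non-normal (uniform) population and look at the sampling distribution of the means. The numbers here are simulated, not real survey data:

```python
import random
import statistics

random.seed(1)  # fixed seed so the sketch is reproducible

# Many samples of size n from a uniform(0, 1) population
n, draws = 30, 2000
sample_means = [statistics.mean(random.uniform(0, 1) for _ in range(n))
                for _ in range(draws)]

# The sampling distribution centres on the population mean (0.5), and its
# spread (the standard error) equals sigma / sqrt(n)
print(round(statistics.mean(sample_means), 2))
```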

Estimation of Parameters

o Point Estimate: An estimate of the population mean in the form of a single value, usually the
sample mean

o Confidence Interval Estimate: A specified range of numbers within which a population mean
is expected to lie; an estimate of the population mean based on the knowledge that it will be
equal to the sample mean plus or minus a small sampling error

o Confidence Level: A percentage or decimal value that tells how confident a researcher
can be about being correct; it states the long-run percentage of confidence intervals that
will include the true population mean
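As a worked example, a 95% confidence interval for a hypothetical sample of 25 observations with sample mean 50 and standard deviation 10 (2.064 is the critical t value for 24 degrees of freedom):

```python
import math

# Hypothetical sample summary: n = 25, mean = 50, sd = 10
n, sample_mean, s = 25, 50.0, 10.0
t_critical = 2.064  # 95% confidence, n - 1 = 24 degrees of freedom

# Interval = sample mean +/- t * (s / sqrt(n)); the second term is the
# small sampling error mentioned above
margin = t_critical * s / math.sqrt(n)
lower, upper = sample_mean - margin, sample_mean + margin
print(round(lower, 3), round(upper, 3))  # 45.872 54.128
```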

Stating a Hypothesis

The hypothesis-testing procedure:

1. Determine a statistical hypothesis


2. Take an actual sample and calculate the sample mean
3. Determine if the deviation between the obtained value of the sample mean and its expected
value (statistical hypothesis) could have occurred by chance alone
4. Reject the null hypothesis, or fail to reject it, accordingly

o Significance Level: The critical probability in choosing between the null and alternative
hypotheses; the probability level that is too low to warrant full support of the null
hypothesis
o Critical Values: The values that lie exactly on the boundary of the region of rejection

o Type I Error: An error caused by rejecting the null hypothesis when it is true
o Type II Error: An error caused by failing to reject the null hypothesis when the alternative
hypothesis is true

Choosing the Appropriate Statistical Technique

The choice of method of statistical analysis depends on:

1. The type of question to be answered


- The researcher should consider the method of statistical analysis before choosing the
research design and before determining the type of data to collect
2. The number of variables
- The number of variables that will be simultaneously investigated is a primary
consideration in the choice of statistical technique
o Univariate Statistical Analysis: A type of analysis that assesses the statistical significance of a
hypothesis about a single variable

3. The scale of measurement


- The type of measurement reflected in the data determines the permissible statistical
techniques and appropriate empirical operations

4. Parametric versus Nonparametric hypothesis tests


Parametric Statistics
- When the data are interval/ratio scaled and the sample size is large, parametric
statistical procedures are appropriate
- Such procedures are based on the assumption that data in the study are drawn from a
population with a normal distribution
- Often the data does not satisfy these assumptions
Nonparametric Statistics
- Also referred to as distribution free
- When the data is either ordinal or nominal, the assumptions parametric procedures are
based on are inappropriate, so nonparametric statistical tests are used

The t-distribution
o T-distribution: A symmetrical, bell-shaped distribution whose shape depends on sample size;
it has a mean of zero and approaches the standardised normal distribution as sample size grows
- The shape of the t-distribution is influenced by degrees of freedom
o Degrees of Freedom: The number of observations minus the number of constraints or
assumptions needed to calculate a statistical term

- REFER TO TEXTBOOK -
Confidence interval estimate using the t-distribution
o One-Sample t-test: A hypothesis test that uses the t-distribution rather than the Z-
distribution; it is used when testing a hypothesis with a small sample size and an unknown
population standard deviation
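The statistic can be computed by hand in a few lines; the data here is invented, and the result would be compared against a t table with n - 1 degrees of freedom:

```python
import math
import statistics

# Hypothetical data: minutes spent on a website; H0: population mean = 10
data = [12.1, 9.5, 11.3, 10.8, 12.9, 9.9, 11.5, 10.2]
mu0 = 10.0

n = len(data)
mean = statistics.mean(data)
sd = statistics.stdev(data)

# t = (sample mean - hypothesised mean) / (s / sqrt(n)), df = n - 1
t_stat = (mean - mu0) / (sd / math.sqrt(n))
df = n - 1
print(round(t_stat, 2), df)  # 2.52 7
```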

The Chi-square test for goodness of fit


o Chi-square (χ²) test: A hypothesis test that allows for investigation of statistical significance
in the analysis of a frequency distribution

- REFER TO TEXTBOOK -
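A minimal sketch with hypothetical data: 200 respondents choose among four brands, and the null hypothesis says each brand is equally preferred (expected frequency 50 each):

```python
# Chi-square goodness of fit: do observed frequencies match expectations?
observed = [60, 45, 55, 40]
expected = [50, 50, 50, 50]

# chi-square = sum over categories of (observed - expected)^2 / expected
chi_sq = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
df = len(observed) - 1  # degrees of freedom = categories - 1
print(chi_sq, df)  # 5.0 3
```

The computed value is then compared with the critical chi-square value for 3 degrees of freedom at the chosen significance level.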

PART THIRTEEN: BIVARIATE STATISTICAL ANALYSIS: TESTS OF DIFFERENCES

Test of Differences

o Test of Difference: An investigation of a hypothesis stating that two (or more) groups differ
with respect to measures on a variable

1. The independent samples t-test for differences of means


o Independent samples t-test for differences of means: A technique used to test the
hypothesis that the mean scores of some interval or ratio scaled variables are significantly
different for two independent samples or groups
- To use this test, researchers assume that the two samples are drawn from normal distributions
and that the variances of the two populations are equal

- In theory
when the sample size of one or both groups is <30 and the population standard
deviation is unknown, a t-test is used
when the sample size of both groups is >30 and the population standard
deviation is known, a z-test is used

- In practice
the population standard deviation is rarely known, and with sample sizes >30 the
t-test is a close approximation of the z-test, so researchers use the t-test to compare
differences in means between two groups

- REFER TO TEXTBOOK -
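A pooled-variance version of the statistic, computed on made-up ratings from two independent groups (say, respondents shown two different ad versions):

```python
import math
import statistics

# Hypothetical ratings from two independent groups
group_a = [7, 8, 6, 9, 7, 8]
group_b = [5, 6, 7, 5, 6, 5]

n1, n2 = len(group_a), len(group_b)
m1, m2 = statistics.mean(group_a), statistics.mean(group_b)
v1, v2 = statistics.variance(group_a), statistics.variance(group_b)

# Pooled variance reflects the equal-variances assumption noted above
pooled_var = ((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)
t_stat = (m1 - m2) / math.sqrt(pooled_var * (1 / n1 + 1 / n2))
df = n1 + n2 - 2
print(round(t_stat, 2), df)  # 3.38 10
```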

2. The paired samples t-test


o Paired samples t-test: A technique used to test the hypothesis that mean scores differ on
some interval or ratio scaled variables between related or paired samples
- Null and alternative hypothesis can be stated in the same way as an independent t-test

- REFER TO TEXTBOOK -
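The paired version works on each respondent's difference score; the before/after scores below are invented:

```python
import math
import statistics

# Hypothetical before/after scores for the same six respondents
before = [6, 7, 5, 8, 6, 7]
after = [7, 9, 6, 9, 8, 8]

# Compute the per-respondent differences, then a one-sample t on them
diffs = [a - b for a, b in zip(after, before)]
n = len(diffs)

# t = mean difference / (sd of differences / sqrt(n)), df = n - 1
t_stat = statistics.mean(diffs) / (statistics.stdev(diffs) / math.sqrt(n))
print(round(t_stat, 2))  # 6.32
```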

3. Analysis of variance (ANOVA)


o Analysis of variance (ANOVA): Analysis involving the investigation of the effects of one
treatment variable on an interval scaled dependent variable; a hypothesis testing technique
to determine whether statistically significant differences in means occur between three or
more groups

- REFER TO TEXTBOOK -
o The F-test: a procedure used to determine whether there is more variability in the scores of
one sample than the scores of another sample
- the key question is whether the two sample variances are different or whether they are
from the same population

- REFER TO TEXTBOOK
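The core ANOVA computation, partitioning variation into between-group and within-group parts, can be sketched on fabricated data (spend per visit across three store formats):

```python
import statistics

# Hypothetical one-way ANOVA data: three groups of four observations
groups = [[10, 12, 11, 13], [14, 15, 13, 16], [9, 8, 10, 9]]

k = len(groups)                  # number of groups
n = sum(len(g) for g in groups)  # total observations
grand_mean = statistics.mean(x for g in groups for x in g)

# Between-group and within-group sums of squares
ss_between = sum(len(g) * (statistics.mean(g) - grand_mean) ** 2
                 for g in groups)
ss_within = sum((x - statistics.mean(g)) ** 2 for g in groups for x in g)

# F = mean square between / mean square within
f_stat = (ss_between / (k - 1)) / (ss_within / (n - k))
print(round(f_stat, 2))  # 22.75
```

A large F relative to the critical value for (k - 1, n - k) degrees of freedom indicates the group means are not all equal.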

PART FOURTEEN: BIVARIATE STATISTICAL ANALYSIS: TESTS OF ASSOCIATION

Correlation Coefficient

- Standardises the covariance, effectively removing the units of measurement


- Indicates both the magnitude of the linear relationship and the direction of that relationship
- No correlation is indicated if r = 0

- Researchers find the correlation coefficient useful as the comparison of two correlations can
be made without regard for the amount of variance
- Correlation does not mean causation

o Pearson's correlation coefficient: A statistical measure of the covariation, or association,
between two variables
- Is actually a standardised measure of covariance

- REFER TO TEXTBOOK -

o Coefficient of determination (r²): A measure obtained by squaring the correlation
coefficient r; the proportion of the total variance of a variable that is accounted for by
knowing the value of another variable
- Indicates how much of an influence one variable might have upon another

- REFER TO TEXTBOOK -
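Both r and r² can be computed directly from the definition (covariation standardised by the spread of each variable); the advertising/sales figures below are made up:

```python
import math
import statistics

# Hypothetical data: advertising spend (x) and sales (y) over five periods
x = [1, 2, 3, 4, 5]
y = [2, 4, 5, 4, 6]

mx, my = statistics.mean(x), statistics.mean(y)

# Sums of cross-products and squared deviations
sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
sxx = sum((a - mx) ** 2 for a in x)
syy = sum((b - my) ** 2 for b in y)

r = sxy / math.sqrt(sxx * syy)  # Pearson's correlation coefficient
r_squared = r ** 2              # coefficient of determination
print(round(r, 3), round(r_squared, 3))  # 0.853 0.727
```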

o Correlation Matrix: The standard form reporting correlational results

- REFER TO TEXTBOOK

o Nonparametric correlation
- In situations in which data is ordinal, a nonparametric correlation technique may be
substituted for the Pearson correlation technique, e.g. Spearman's rank-order correlation
coefficient (one of the most common)

- REFER TO TEXTBOOK
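With tie-free ranked data, Spearman's coefficient reduces to a shortcut formula on the rank differences. The rankings below (five products ranked by two judges) are hypothetical:

```python
# Hypothetical rankings of five products by two judges (no ties)
ranks_a = [1, 2, 3, 4, 5]
ranks_b = [1, 3, 2, 5, 4]
n = len(ranks_a)

# With no ties: rho = 1 - 6 * sum(d^2) / (n * (n^2 - 1)),
# where d is the difference between an object's two ranks
d_squared = sum((a - b) ** 2 for a, b in zip(ranks_a, ranks_b))
rho = 1 - 6 * d_squared / (n * (n ** 2 - 1))
print(rho)  # 0.8
```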
Regression Analysis

- Another technique for measuring the linear association between a dependent and an
independent variable
- Assumes the dependent variable Y is predictively or causally linked to the independent
variable X
- Provides the same information as a correlation coefficient but also allows researchers to
predict values of one variable based upon values of another variable;
predictions are based on the line of best fit

o Bivariate Linear Regression: A measure of linear association that investigates straight-line
relationships of the type Y = α + βX,
where Y is the dependent variable, X is the independent variable, and α and β are the two
constants to be estimated
- Concept is the same as y = mx + b

- REFER TO TEXTBOOK -
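The least-squares estimates of the slope and intercept, and a prediction from the fitted line, on the same made-up advertising/sales data used above for correlation:

```python
import statistics

# Hypothetical data: sales (y) regressed on advertising spend (x)
x = [1, 2, 3, 4, 5]
y = [2, 4, 5, 4, 6]

mx, my = statistics.mean(x), statistics.mean(y)

# Least-squares estimates of the slope (beta) and intercept (alpha)
beta = (sum((a - mx) * (b - my) for a, b in zip(x, y))
        / sum((a - mx) ** 2 for a in x))
alpha = my - beta * mx

# Predict y for a new x value using the line of best fit
predicted = alpha + beta * 6
print(round(beta, 2), round(alpha, 2), round(predicted, 2))  # 0.8 1.8 6.6
```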

o F-test (regression): A procedure to determine whether more variability is explained by the
regression or unexplained by the regression

- REFER TO TEXTBOOK -

Cross Tabulations: The Chi-square Test for Independence

o Cross Tabulation/ Contingency Table: a joint frequency of observations on two or more sets
of variables; one of the simplest techniques for describing sets of relationships
o Chi-square test for independence: A test that statistically analyses significance in a joint
frequency distribution

- REFER TO TEXTBOOK -
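The independence test compares each observed cell count with the frequency expected if the two variables were unrelated. A sketch on a hypothetical 2x2 cross tabulation (rows = gender, columns = brand preference):

```python
# Hypothetical 2x2 contingency table
observed = [[30, 20],
            [20, 30]]

row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
total = sum(row_totals)

# Expected cell frequency = (row total * column total) / grand total
chi_sq = sum((obs - row_totals[i] * col_totals[j] / total) ** 2
             / (row_totals[i] * col_totals[j] / total)
             for i, row in enumerate(observed)
             for j, obs in enumerate(row))

df = (len(observed) - 1) * (len(observed[0]) - 1)  # (rows-1)*(cols-1)
print(chi_sq, df)  # 4.0 1
```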

Statistical and Practical Significance for Tests of Association

When interpreting statistical significance, do not simply rely on the statistical test to tell you whether
or not there is an association between two variables.

Instead, you must understand the other information being shown by the statistic to determine
whether or not the association is managerially meaningful.

PART FIFTEEN: MULTIVARIATE STATISTICAL ANALYSIS

o Dependence Methods: multivariate statistical techniques that explain or predict one or
more dependent variables on the basis of two or more independent variables
Multiple regression analysis, multiple discriminant analysis, logistic regression,
multivariate analysis of variance, n-way cross tabulation

o Interdependence Methods: multivariate statistical techniques that give meaning to a set of
variables or seek to group things together
Exploratory factor analysis, cluster analysis and multidimensional scaling

ANALYSIS OF DEPENDENCE

o N-way cross tabulation: where two nonmetric scaled variables are compared after
accounting for the effects of a third (or more) nonmetric variable
- A lurking variable: unequal group sizes in the presence of such a variable can weight
the results incorrectly, leading to flawed conclusions (Simpson's Paradox)
- To avoid, a clear model/theory of what the interrelationships among all the constructs
are must be established

o Partial Correlation Analysis: An analysis of the linear association between two variables after
controlling for the effects of other variables

o N-way univariate analysis of variance (ANOVA): A technique that simultaneously tests for
the differences in the mean of a metric dependent variable among two or more nonmetric
independent variables
- The simultaneous examination of effects reduces the likelihood of finding a statistically
significant relationship when there really isn't one (Type I error)

o Multiple Regression Analysis (linear): Where the effects of two or more metric scaled
independent variables on a single, metric scaled dependent variable are investigated
simultaneously
- Extension of bivariate regression analysis
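A minimal sketch of the idea using the normal equations, in plain Python. The data here is fabricated so that sales follow y = 5 + 3*ad + 2*price exactly, which the estimates should recover:

```python
# Fabricated data: sales generated exactly from y = 5 + 3*ad + 2*price
ad = [1.0, 2.0, 3.0, 4.0, 5.0]
price = [10.0, 9.0, 9.5, 8.0, 8.5]
y = [5 + 3 * a + 2 * p for a, p in zip(ad, price)]

# Design matrix with an intercept column of ones
X = [[1.0, a, p] for a, p in zip(ad, price)]
k = 3

# Normal equations: (X'X) b = X'y
xtx = [[sum(row[i] * row[j] for row in X) for j in range(k)]
       for i in range(k)]
xty = [sum(row[i] * yi for row, yi in zip(X, y)) for i in range(k)]

# Solve the 3x3 system by Gauss-Jordan elimination
for i in range(k):
    pivot = xtx[i][i]
    xtx[i] = [v / pivot for v in xtx[i]]
    xty[i] /= pivot
    for r in range(k):
        if r != i:
            factor = xtx[r][i]
            xtx[r] = [u - factor * v for u, v in zip(xtx[r], xtx[i])]
            xty[r] -= factor * xty[i]

intercept, b_ad, b_price = xty
print(round(intercept, 2), round(b_ad, 2), round(b_price, 2))  # 5.0 3.0 2.0
```

In practice a statistics package would also report standard errors and fit statistics; this sketch only shows how the coefficient estimates arise.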

o Multicollinearity: A problem in multiple regression when the independent variables are


correlated with each other, causing the parameter estimates to be unreliable

o Multiple Discriminant Analysis: establishes a rule for predicting the probability that an
object will belong to one of two or more mutually exclusive categories (the levels of a
nonmetric dependent variable) based on a combination of two or more independent
variables
- The prediction of a categorical variable (purpose)

o Binary Logistic Regression: establishes a rule for forecasting the value of a binary dependent
variable from a combination of two or more metric independent variables
- Output is interpreted in the same way as for multiple linear regression

ANALYSIS OF INTERDEPENDENCE

o Exploratory Factor Analysis: its general purpose is to summarise the information contained
in a large number of variables into a small number of factors
- derives factors that are orthogonal; factors have zero correlation with each other
(independent and unrelated to other factors)
Factor Loadings: provides a means for interpreting and labelling the factors
Factor Scores: represent each observation's calculated value, or score, on each factor
Eigenvalues: the sums of the squared factor loadings for each factor; can be interpreted as
the number of variables-worth of information contained in each factor
Communalities: the sum of the squared factor loadings for each variable; a measure of the
% of variance in each variable explained by all factors
- a relatively high communality indicates that a variable is well represented by the results

o Cluster Analysis: techniques for classifying individuals or objects into a small number of
mutually exclusive groups, ensuring as much likeness within groups and as much
difference among groups as possible
- Designed to discover the natural groups of cases
- Purpose is to determine how many groups really exist and define their composition
(describes but does not predict relationships)
- A cluster should have high internal homogeneity and external heterogeneity
- Typically used to facilitate market segmentation

There are two major types of cluster analysis:

Hierarchical cluster analysis: all objects start as separate clusters, then the two clusters
that are closest or most similar join together, and so on until all clusters are
joined together (progressive agglomeration, often displayed as a tree diagram called a dendrogram)
- Works well when dealing with a relatively small number of cases
K-means cluster analysis: the researcher specifies the number of clusters (k) in
advance; each case is assigned to the cluster with the nearest centre, and the
centres are recalculated until the assignments stabilise
- Works well with larger numbers of cases
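A toy k-means pass (k = 2) on a single made-up numeric feature shows the assign-then-recompute loop at the heart of the method:

```python
import statistics

# Hypothetical annual spend figures that form two obvious groups
points = [1.0, 1.5, 2.0, 8.0, 8.5, 9.0]
centroids = [points[0], points[-1]]  # naive initial cluster centres

for _ in range(10):  # assign each point, then recompute the centres
    clusters = [[], []]
    for p in points:
        nearest = min(range(2), key=lambda i: abs(p - centroids[i]))
        clusters[nearest].append(p)
    centroids = [statistics.mean(c) for c in clusters]

print(centroids)  # [1.5, 8.5]
```

Real applications cluster on several variables at once (using multidimensional distances) and try multiple starting points, but the iteration is the same.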

o Multidimensional Scaling: locates objects in multidimensional space on the basis of
measures of the similarity of objects
- The perceptual difference among objects is reflected in the relative distance among
objects in a multidimensional space
o Perceptual Map: An application of multidimensional scaling to show graphically how objects
are perceived by consumers
