
Australasian Marketing Journal 21 (2013) 147–154
http://dx.doi.org/10.1016/j.ausmj.2013.02.006


Validating the Customer Satisfaction Survey (CSS) Scale in the Australian fast food industry

Ian Phau *, Graham Ferguson
Curtin University, Australia

Article info

Article history:
Received 9 May 2012
Revised 11 November 2012
Accepted 18 February 2013
Available online 30 April 2013

Abstract

The aim of this study is twofold and is addressed in two separate studies. The first study validated the Customer Satisfaction Survey (CSS) Scale in the fast food industry with an Australian sample. Nicholls et al. (1998) developed the CSS as a way to assess key service elements. The authors tested it in multiple service industries and Gilbert et al. (2004) tested it across cultures. Both sets of authors argued that the tool was universal irrespective of service industry or culture. Similar to previous findings, this current study found that personal service and service setting are key dimensions of satisfaction. However, Australian consumers also assess whether service providers are delivering on their promises as part of assessing satisfaction. The second study compared CSS responses collected immediately following the service encounter to those collected after a temporal delay. After a delay customers used more items to assess each dimension but assessed satisfaction on similar dimensions. Managerial implications are discussed together with future directions.

© 2013 Australian and New Zealand Marketing Academy. Published by Elsevier Ltd. All rights reserved.

1. Introduction

In a review of marketing metrics and their impact on financial performance, Gupta and Zeithaml (2006) claim that customer satisfaction is the most commonly used measure of service performance. They attribute this popularity to the simple collection of responses that can be used to improve service performance and to enable performance to be compared between locations and over time (Gupta and Zeithaml, 2006). Other authors have highlighted the link between customer satisfaction and brand loyalty (Morgeson et al., 2011), which has positive implications for purchase intention, customer retention, positive word of mouth and financial performance (Bolton and Drew, 1991; Bolton, 1998; Mittal and Kamakura, 2001; Kumar, 2002; Yeung et al., 2002; Anderson et al., 2004; Gilbert et al., 2004; Fornell et al., 2006; Oliver, 2010). Recent studies have continued to find that customer satisfaction is related to the likelihood of CEOs receiving bonuses (O'Connell and O'Sullivan, 2011), advertising effectiveness and brand equity (Samaraweera and Gelb, 2011), and market performance (Aksoy et al., 2008; Williams and Naumann, 2011).

Measuring satisfaction becomes more important and more difficult as brands locate and market themselves in international markets (Morgeson et al., 2011). Measures must be able to accommodate differences in culture and still reliably measure satisfaction across service environments and across countries. The underlying need to measure and compare customer satisfaction in different contexts makes it important to have an accurate, reliable and comparable measurement instrument.

* Corresponding author. Address: Curtin University, The School of Marketing, GPO Box U1987, Perth, Western Australia 6845, Australia. Tel.: +61 8 92664014; fax: +61 8 92663937. E-mail address: ian.phau@cbs.curtin.edu.au (I. Phau).
Consumers consider and trade off many service attributes in order to determine whether they are satisfied with a service experience. Nicholls et al. (1998) developed the Customer Satisfaction Survey (CSS) as a universal list of attributes that could be used to measure satisfaction across different service industries. Gilbert et al. (2004) tested the measure across four countries to discover whether it could be used to measure satisfaction amongst consumers from different cultures. They found that the measure could be a universal measure of service satisfaction but advocated additional testing in other cultures.
1.1. The fast food industry

The global fast food industry generated sales of US$675bn in 2011 (IBISWorld, 2011) but grew at only 2.9% pa from 2007 to 2011. Developed nations accounted for 83% of the revenue but the fast food industry in these countries was stagnant and competitive. Therefore global fast food chains continued to expand into growth markets in Asia, Eastern Europe and South America. These markets offered growing affluence, large populations, large young populations (43.1% of China's population is under 29 years) and a taste for Western food (IBISWorld, 2011). The largest US brands, McDonald's, KFC/Pizza Hut/Taco Bell, Subway and Burger King, controlled 23.3% of the global market in 2011 and operated in more than 100 countries (IBISWorld, 2011). In 2010–11 the Australian


fast food industry generated A$14.698bn in a competitive, mature


market dominated by US brands (MacGowan, 2012). Demand is
driven by taste, low prices and convenience although there has
been a recent trend toward healthy options (MacGowan, 2012).
1.2. Justification of the research

Global fast food providers attempt to deliver similar service experiences globally. For example, McDonald's promises to deliver "quality food and superior service in a clean, welcoming environment, at great value" (McDonald's, 2012) irrespective of country. These organisations need to measure customer satisfaction to compare performance across locations and to improve the service experience. However, comparing across cultures has made measuring customer satisfaction more difficult. Therefore the scale used to measure satisfaction must perform equally well in all cultures (Witkowski et al., 2003). This is the motivation of the first study, which tests how well the CSS measures the satisfaction of Australian customers with global fast food service encounters.
The second study sought to compare the collection of CSS responses immediately following the service encounter to responses collected after a delay. Previous studies largely ignored the potential for differences between how customers assess satisfaction at the time of the service encounter versus after a temporal delay. For example, Nicholls et al. (1998) tested the CSS using mail surveys that were collected long after the service experience while Gilbert et al. (2004) collected satisfaction responses immediately after a service encounter. Yet several researchers have argued that customers may assess satisfaction more in line with a cumulative view of satisfaction as the delay increases; that other interactions with the brand could influence perceived satisfaction with the service encounter; and that customers will rely on memorable parts of the service experience (Mazursky and Geva, 1989; McQuitty et al., 2000; Koenig-Lewis and Palmer, 2008). Therefore, it is unclear whether the timing of the data collection influences how customers assess satisfaction.

The paper discusses the two studies separately. Each study begins with a discussion of the extant CSS literature leading to proposition development. This is followed by a description of the research method, analysis and the findings. Finally the managerial implications and limitations of each study are highlighted and some concluding comments for both studies are provided.
2. Study one

2.1. Customer satisfaction in Australia

The measurement of customer satisfaction is an important indicator of service performance that gives marketers the opportunity to improve service experiences (Gupta and Zeithaml, 2006). Satisfaction also links to important business outcomes such as positive word of mouth and repeat purchase (Bolton and Drew, 1991; Yeung et al., 2002; Gilbert et al., 2004; Oliver, 2010). Measures of satisfaction within a country are reasonably well established but the ability of current tools to measure satisfaction across cultures is less clear. As organisations enter global markets, assessing satisfaction across cultures is becoming more important; therefore the first study responds to the call from Gilbert et al. (2004) to test the CSS in other countries. By testing the scale in Australia, the findings will help clarify the ability of the CSS to measure satisfaction across cultures.
3. Relevant literature and theoretical underpinning

There are many definitions of satisfaction. Gupta and Zeithaml (2006, p. 720) defined it as the customer's judgement that a product or service meets or falls short of expectations. Oliver (2010) described it as the level of pleasure resulting from the fulfilment of a service, and the level of fulfilment as a post-purchase comparison between the quality that the customer perceives from their experience and the quality that they expected to receive (Churchill and Suprenant, 1982; Zeithaml et al., 1993; Anderson and Fornell, 1994; Sivadas and Baker-Prewitt, 2000; Wilson, 2002; Grigoroudis and Siskos, 2004; Oliver, 2010). The definitions describe a process where a service provider fulfils the service, the consumer assesses that fulfilment, and satisfaction or dissatisfaction results depending on the outcome of the assessment. Satisfaction is uniquely derived by each individual due to unique expectations of the service, unique roles in the service experience and subjective assessment of performance (Mittal and Kamakura, 2001; Anderson et al., 2008).
Customer satisfaction is intrinsically related to perceived service quality (Parasuraman et al., 1988; Zeithaml et al., 1996; McDougall and Terrence, 2000) and according to Huber et al. (2007) the two constructs are often confused. Early services marketing research discussed satisfaction as a response to each service encounter (Parasuraman et al., 1988; Spreng et al., 1996) while perceived service quality was conceived of as a longer-term view or attitude toward the brand (Bolton and Drew, 1991; Huber et al., 2007). However, recent research has not retained this distinction (Huber et al., 2007). Oliver (2010) describes satisfaction as a comparison between expectations and performance while quality is a comparison between excellence and performance. Using this definition both perceived quality and satisfaction can be applied to specific service encounters or cumulatively. Oliver (2010) argued that assessment of quality during an encounter is a key determinant of customer satisfaction with that encounter, but that cumulative quality is influenced by cumulative satisfaction and is a more enduring attitude. Oliver (2010) does retain the differentiation that perceived satisfaction is derived from actual service encounters whilst quality perceptions can be formed without experiencing the service.
Satisfaction has often been conceptualised as a cumulative assessment of overall experience to date (Johnson et al., 2001). Satisfaction indexes such as the American Customer Satisfaction Index (ACSI) (Fornell et al., 1996) measure long-term satisfaction based upon a consumer's consumption experience to date. The indexes are built by surveying a random sample of existing customers from major brands to determine cumulative satisfaction with each of those brands (Fornell et al., 1996). The intention of these measures is to provide generic satisfaction barometers (Grigoroudis and Siskos, 2004) that result in indices that can be compared across countries, industries and the selected organisations. Lockyer (2007), for example, uses ACSI results for the fast food industry to indicate that satisfaction remained relatively stable across the major US brands from 2006 to 2007. However, the results are restricted to only those brands that are included, are not useful for assessing performance during individual service encounters, and are not useful for comparing results between locations. Nevertheless, our focus on encounter-specific satisfaction does not preclude the influence of cumulative satisfaction, as satisfaction in one period is likely to be influenced by satisfaction in prior periods (Mittal et al., 1999). Oliver (2010) argued that customers have a satisfaction response to each service encounter and satisfaction in each of these encounters accumulates to form cumulative satisfaction.
Many authors argue that consumer satisfaction is determined by comparing perceived performance to customer expectations (Churchill and Suprenant, 1982; Parasuraman et al., 1988; Oliver, 2010). This disconfirmation of expectations (DoE) perspective suggests that if service performance surpasses expectations then the customer is very satisfied, and if not then dissatisfied. However this approach can produce the confusing situation where a customer expects and receives inferior service. Is this customer satisfied or dissatisfied? In addition, measuring expectations can be difficult. Carman (1990) suggested that expectations should be measured prior to the service encounter, but accessing customers prior to the experience can be difficult; it is likely that new customers cannot form detailed expectations, that expectations will change as the service experience progresses, and that overall customers will struggle to enunciate expectations prior to the experience (Danaher and Mattsson, 1994; Yuksel and Rimmington, 1998; Oliver, 2010). Therefore, despite the potential contamination of expectations, researchers often collect both expectations and satisfaction assessments after the service encounter (1998). Oliver (2010) argued that customers will modify their expectations to reduce cognitive dissonance and conceptualise expectations more vividly if the experience was extremely satisfying or dissatisfying. Therefore comparisons between expectations and performance measured in this way lack credibility.
Some research has discounted the ability of disconfirmation of expectations to explain customer satisfaction. Churchill and Suprenant's (1982) research found that satisfaction was not influenced by disconfirmation of expectations but rather by performance of the product. Lee et al. (2000) found that perceived performance scores explained a larger amount of the variation in service quality than scores of the difference between expectations and performance. Therefore some academics advocate the measurement of satisfaction without measuring expectations (Johnson et al., 2001).
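The two scoring approaches contrasted above differ only in whether expectations enter the score. A minimal sketch with invented 5-point ratings (the data are hypothetical, purely for illustration):

```python
import numpy as np

# Hypothetical 5-point ratings for one service attribute, five customers
expectations = np.array([4, 5, 3, 2, 4])
performance = np.array([5, 3, 3, 4, 4])

# Disconfirmation-of-expectations score: positive = performance exceeded expectations
doe_scores = performance - expectations
print(doe_scores.tolist())  # -> [1, -2, 0, 2, 0]

# Performance-only scoring uses the performance ratings directly, the approach
# that Churchill and Suprenant (1982) and Lee et al. (2000) found more informative
perf_scores = performance.astype(float)
print(perf_scores.mean())  # -> 3.8
```

Note how the DoE score rewards the customer who expected little (row 4) identically to one who received genuinely good service, which is exactly the interpretive ambiguity discussed above.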
Customer satisfaction is determined by all of the key elements of the service (Huber et al., 2007; Anderson et al., 2008). Therefore researchers have variously attempted to measure satisfaction by asking customers about those important attributes or by asking consumers to indicate an overall level of satisfaction (Oliver, 2010). Mittal et al. (1999) argued that customers are more likely to assess satisfaction against multiple attributes rather than assess the service overall (i.e. a global view of satisfaction) and therefore assessing important attributes is a better way to measure satisfaction. They also argue that assessing multiple attributes allows managers to identify specific opportunities to improve customer satisfaction. Despite several attempts, researchers have not managed to identify a universal list of dimensions that apply to all contexts (Oliver, 2010).
Some researchers use a dual-step process to rate the importance and the performance of a set of attributes (Gilbert et al., 2004; Oliver, 2010). The idea is to identify which attributes are salient to satisfaction and combine this insight with a rating of performance. However, the reason for an attribute's importance is unclear. According to Oliver (2010) the resultant combined scores do not differentiate between an attribute that is important but not delivered and one that is not important but delivered. As well, the score may be measuring something other than satisfaction because of the inclusion of the importance measure. Yuksel and Rimmington (1998) found that including importance weightings on service attributes did not significantly improve the power of customer satisfaction measures.
Overall, it seems that assessing the perceived performance of important service attributes is the most useful way to measure customer satisfaction with service encounters. Functional attributes of a service affect customer satisfaction (Gronroos, 1984; Oberoi and Hales, 1990; Yüksel and Yüksel, 2003) and are an important consideration. Assessing the importance of each attribute seems unimportant (Yuksel and Rimmington, 1998). The expectancy–disconfirmation approach is attractive conceptually but weakened by difficulties in measuring expectations and by non-logical conclusions.
One such instrument based on the performance-only approach is the CSS developed by Nicholls et al. (1998). The scale was developed to be a parsimonious scale to measure customer satisfaction across all service situations. The intention was to develop a universal list of attributes that determine service satisfaction. Nicholls et al. (1998, p. 241) based the attributes on personal reaction "to the service delivery and to the [service] environment". The authors developed a scale of 29 statements from pilot studies, refined the scale to 17 items based upon a sample of 1199 customers from various organisations and then tested those items with another sample of 2992 customers from 13 different organisations. Two factors were identified from the scale. Satisfaction with Personal Service (SatPers) included courtesy, timely service, competence of employees, easy to get help, and treatment received (a = .89 for five items). Satisfaction with Service Setting (SatSett) included cleanliness, convenient operating hours, and security inside and outside the organisation (a = .74 for four items). Nicholls et al. (1998) advocated use of the scale by service managers to evaluate customer satisfaction.
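The reliability figures reported for SatPers (a = .89) and SatSett (a = .74) are Cronbach's alpha coefficients. A self-contained sketch of the computation follows; the simulated responses are an illustrative assumption, not the CSS data:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) response matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Simulate 200 respondents answering five correlated 5-point Likert items
rng = np.random.default_rng(0)
true_satisfaction = rng.integers(1, 6, size=(200, 1))
noise = rng.integers(-1, 2, size=(200, 5))
responses = np.clip(true_satisfaction + noise, 1, 5)
print(round(cronbach_alpha(responses), 2))  # high, as all items share one driver
```

Alpha rises toward 1 as the items co-vary more strongly, which is why the five tightly related SatPers items reach a higher value than the four more heterogeneous SatSett items.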
Customer satisfaction measurement becomes more difficult when comparing across different cultures (Gilbert et al., 2004; Morgeson et al., 2011). Lee and Ulgado (1997) compared how South Korean and US consumers evaluated McDonald's. US consumers placed more importance on low prices and assurance whereas South Korean consumers rated attributes such as empathy and reliability more highly. Morgeson et al. (2011) agree that customer satisfaction varies by culture. They found that consumers in traditional and self-expressive societies report higher levels of satisfaction than consumers in secular-rational and survival societies. Therefore some cross-national variance in satisfaction measures is to be expected. Despite this it is important to have measures that work within each culture.

Gilbert et al. (2004) set out to confirm the CSS as a universal measure of satisfaction by validating the survey in multiple countries. Global fast food brands offered an appropriate comparison industry because of the standardised service offered in all countries. Data was collected from customers of five fast food establishments in four countries (Jamaica, the US, Wales and Scotland). The total sample was over five thousand. The results indicated that SatPers (a = .91 for seven items) and SatSett (a = .64 for three items) continued to be suitable for identifying customer satisfaction across all four countries.
4. Research propositions and methodology

This study proposes that testing the CSS among international fast food service brands in Australia will identify factors that contain the same items as those found by Nicholls et al. (1998) in the USA, and those identified by Gilbert et al. (2004) across Wales, Jamaica, Scotland and the USA.

4.1. Data collection instrument

After determining whether the respondent had purchased dine-in or take-away food, respondents answered items from the CSS. The CSS was pilot tested for comprehension by 25 university students (who were also fast food consumers), which resulted in no changes to the CSS scale. Respondents rated 17 items related to service features (as shown in Table 1). Each respondent was instructed to circle the appropriate score according to how much they agreed or disagreed with the specific statement. The final section of the survey contained demographic questions.
4.2. Data collection procedure

Gilbert et al. (2004) collected data from customers of Burger King, Checkers, Kentucky Fried Chicken (KFC), McDonald's, Taco Bell and Wendy's. The current study replicated this as closely as possible using McDonald's, KFC and Burger King (franchised as

Hungry Jack's in Australia). Subway was also added because of its growing importance in the fast food category worldwide. Taco Bell, Checkers and the Wendy's burger chain were not readily available in Australia.

Consistent with Gilbert et al. (2004), data was collected from two separate fast food outlets for each brand, located within easy reach of the university. The demographic profile of the suburbs was similar to the average for the Perth metropolitan area, with slightly below average income but a similar spread of ages, qualifications and nationalities (ABS, 2012). The suburbs were selected because each contained all four of the franchises within one shopping precinct. Responses were collected during the afternoon on weekdays from all locations. As per Gilbert et al. (2004), the study employed a skip interval approach, with every fifth customer asked to participate in the study immediately after leaving the outlet. Administrators of the surveys were instructed to discard incomplete questionnaires and to collect exactly 100 completed responses from each fast food brand (a total of 400 surveys). This method of data collection and sample size were consistent with Gilbert et al. (2004) and therefore appropriate to assess the CSS. The data collection process took approximately four weeks to complete.

Table 1
The CSS measurement instrument.

Please use the scale to rate how strongly you agree or disagree with the statement regarding this organization's performance (circle one score from 1 to 5 for each statement).

1. I am pleased with the courtesy I received
2. I received timely service
3. The employees are competent
4. It is easy for me to get the help I need
5. The operating hours are convenient for me
6. The place is neat and clean
7. The employees treat me as a valued customer
8. There is easy access to the organization's services
9. They take time to understand my needs
10. I feel physically secure within the organisation
11. The security outside the organisation is good
12. I received help promptly
13. The cost for their service or product is very reasonable
14. I can count on them to treat me fairly
15. This organisation delivers what it promises
16. The people who work here are very helpful
17. This organisation backs up its promises
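The skip-interval collection procedure described in Section 4.2 (approach every fifth exiting customer, discard incomplete questionnaires, stop at 100 completed responses per brand) can be sketched as follows. The customer records and the `completed` flag are hypothetical stand-ins for the field procedure:

```python
def skip_interval_sample(customers, interval=5, target=100):
    """Approach every `interval`-th exiting customer; keep completed surveys
    until `target` responses are collected (incomplete surveys are discarded)."""
    collected = []
    for position, customer in enumerate(customers, start=1):
        if position % interval == 0 and customer["completed"]:
            collected.append(customer)
        if len(collected) == target:
            break
    return collected

# Hypothetical stream of exiting customers for one fast food brand;
# roughly one in seven surveys is assumed incomplete and discarded
stream = [{"id": i, "completed": i % 7 != 0} for i in range(1, 2000)]
sample = skip_interval_sample(stream)
print(len(sample))  # -> 100
```

Systematic (skip-interval) selection of this kind approximates a random sample when customer arrivals have no cycle that coincides with the interval.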
5. Findings
Of the 400 responses collected (refer to Table 2), 55.8% were female. A majority of respondents belonged to age groups under 40 years (83.1%) and respondents had a range of education levels, with 27.1% having completed a university qualification. 49.3% of the respondents were Australian, 26.8% Asian and 20.0% European.
Exploratory factor analysis with varimax rotation identified three factors that explained 48.83% of the variance in the CSS scale. This was consistent with Gilbert et al. (2004), who explained 49.36% of the variance using the CSS. As this was a replication of the prior research, the researchers decided not to remove items that may be considered line-ball, with loadings between 0.55 and 0.60. The first factor included all of the items identified by Nicholls et al. (1998) (items 1, 2, 3, 4, 7), plus items 12 and 16 added by Gilbert et al. (2004), and item 9 ("employees listen"). This additional item was consistent with other items in the existing factor and therefore SatPers was retained as the name for the factor. The factor had a high scale reliability (a = .85) and accounted for 32.85% of the variance (refer to Table 3).
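An exploratory factor analysis with varimax rotation of the kind reported here can be reproduced in principle with a principal-component extraction followed by the standard varimax algorithm. The sketch below runs on simulated data; it is a generic illustration, not the authors' analysis code or data:

```python
import numpy as np

def varimax(loadings, max_iter=100, tol=1e-6):
    """Varimax rotation of a (p, k) loading matrix (standard SVD-based algorithm)."""
    L = np.asarray(loadings, dtype=float)
    p, k = L.shape
    R = np.eye(k)
    objective = 0.0
    for _ in range(max_iter):
        Lr = L @ R
        u, s, vt = np.linalg.svd(
            L.T @ (Lr ** 3 - Lr @ np.diag((Lr ** 2).sum(axis=0)) / p))
        R = u @ vt
        if s.sum() - objective < tol:
            break
        objective = s.sum()
    return L @ R

def pca_loadings(X, n_factors):
    """Unrotated principal-component loadings from the item correlation matrix."""
    corr = np.corrcoef(X, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(corr)
    order = np.argsort(eigvals)[::-1][:n_factors]
    return eigvecs[:, order] * np.sqrt(eigvals[order])

# Simulated responses: two latent drivers, six observed items
rng = np.random.default_rng(1)
f = rng.normal(size=(300, 2))
X = np.hstack([f[:, :1] + 0.4 * rng.normal(size=(300, 3)),
               f[:, 1:] + 0.4 * rng.normal(size=(300, 3))])
rotated = varimax(pca_loadings(X, 2))
```

Varimax is an orthogonal rotation, so each item's communality (its squared loadings summed across factors) is unchanged; the rotation only redistributes loadings so that each item loads strongly on one factor, which is what makes the factor tables in this study interpretable.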
The second factor consisted of items 15 and 17: "the organisation delivers what it promises" and "the organisation backs up its promises". This factor had not been identified in either previous study but in this study was reliable (r = .46) and accounted for 9.11% of variance. The factor was named satisfaction with assurance (SatAssu).

The third factor contained items 10 and 11, associated with the security within and outside the organisation. Nicholls et al. (1998) included these items plus items 5 and 6 (convenient hours and cleanliness) in SatSett. Gilbert et al. (2004) found the same factor but without item 6. Despite not including convenient operating hours or cleanliness, the factor identified in the current study did include the items related to security. The factor name SatSett was retained as the security items were the two highest loading items in the SatSett factor identified by both Nicholls et al. (1998) and Gilbert et al. (2004). This factor had a lower scale reliability (r = .19) but was still deemed viable (as per Nunnally and Bernstein, 1994) and accounted for 6.87% of variance.

Table 2
Sample characteristics – study one.

Age
  20 or less                                  30.3%
  21–24                                       15.5%
  25–29                                       16.3%
  30–39                                       22.0%
  40–49                                       10.0%
  50–59                                        4.5%
  60 and over                                  2.5%
Gender
  Male                                        44.3%
  Female                                      55.8%
Education
  Not a high school graduate                  11.8%
  High school graduate                        16.8%
  TAFE or similar qualifications              24.5%
  University student                          20.0%
  University graduate                         22.3%
  Postgraduate                                 4.8%
Ethnicity/race
  Asian                                       26.8%
  Australian (including Aboriginal descent)   49.3%
  European                                    20.0%
  Other                                        4.0%

Table 3
Varimax rotated factor analysis of Customer Satisfaction Survey – study one.

Item                                        SatPers  SatAssu  SatSett
1. Provider courtesy                        .709
2. Timely service                           .589
3. Competent employees                      .746
4. Easy to get help                         .553
7. Treatment received                       .724
9. Employees listen                         .554
10. Security within the organisation                          .748
11. Security outside the organisation                         .827
12. Prompt help                             .557
15. Organization delivers what it promises           .876
16. Helpful personnel                       .756
17. Organization backs up its promises               .612
% of variance                               32.85    9.11     6.87
Eigenvalues                                 5.585    1.549    1.169
Cronbach alpha (correlation)                .85      .70      .58
Pearson correlation                         .64      .46      .19
KMO                                         .855
Bartlett's test of sphericity               .000

Blank cells have loadings less than 0.30.
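For the two-item factors, the study reports inter-item Pearson correlations (r = .46 for SatAssu, r = .19 for SatSett) as reliability evidence. A sketch of that computation, together with the Spearman-Brown stepped-up reliability sometimes reported for two-item scales, using invented ratings:

```python
import numpy as np

def two_item_reliability(item_a, item_b):
    """Pearson inter-item correlation and the Spearman-Brown stepped-up value."""
    r = np.corrcoef(item_a, item_b)[0, 1]
    return r, 2 * r / (1 + r)

# Hypothetical 5-point ratings for items 15 and 17 from eight respondents
item15 = np.array([4, 5, 3, 4, 2, 5, 4, 3])
item17 = np.array([4, 4, 3, 5, 2, 5, 3, 3])
r, stepped_up = two_item_reliability(item15, item17)
```

Cronbach's alpha is unstable for two-item scales, which is why the inter-item correlation is the more informative statistic here; the Spearman-Brown value estimates what reliability the pair would show if treated as a full scale.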


6. Discussion and implications

The way that customers used the CSS to assess satisfaction in the current study was consistent with the results of Nicholls et al. (1998) and Gilbert et al. (2004). Despite adding employee listening as an item to SatPers, the items included were consistent and seem to represent the same construct. Similarly, while the items convenient operating hours and neatness were not retained in the SatSett construct, the core items identified by both prior studies remained (see Table 4) and therefore the factor was validated as a dimension of customer satisfaction. An additional factor, SatAssu, was identified in the current study. This factor was comprised of two items: the organisation delivered as promised, and supported those promises. These items were originally retained in the CSS throughout the scale development process as important measures of customer satisfaction but did not contribute to the factors identified in the empirical testing of Nicholls et al. (1998) or Gilbert et al. (2004). Therefore it is not surprising that these items would show up as important measures of satisfaction in circumstances where these elements were considered important by customers.

The results confirm the usefulness of SatPers and SatSett as dimensions of service performance but introduce a new factor, SatAssu, that extends the dimensions used by fast food consumers to assess satisfaction. The amount of variation found in this study and between countries in Gilbert et al.'s (2004) study (see Table 4) suggests that these dimensions are not completely stable cross-nationally. Despite this, the scale offers organisations a parsimonious, seventeen-item instrument that effectively measures customer satisfaction with service encounters. Future research could continue to extend testing of the CSS scale to other cultures less similar to the US than Australia.
7. Study two
7.1. Temporal effects on customer satisfaction measurement
Some studies that explore encounter-specific customer satisfaction tend to collect the responses immediately after the service
encounter (e.g. Davis and Heineke, 1998; van Dolen et al., 2004) while others collect it after a delay (e.g. Oberoi and Hales, 1990; Fornell et al., 1996; Knutson, 2000; Gilbert et al., 2004). Immediate collection inherently makes sense as consumers are better able to assess encounter-specific attributes when they are in the moment, and several studies argue that customer assessments of the encounter will change over time (Mazursky and Geva, 1989; McQuitty et al., 2000; Koenig-Lewis and Palmer, 2008). Koenig-Lewis and Palmer (2008), for example, argue that over time individual service encounters are interpreted in the context of cumulative satisfaction, in line with subsequent exposure to the brand (not necessarily direct experience), and are more likely to reflect the affective components of the experience better than the cognitive components. Therefore study two sought to explore whether a temporal delay of data collection would change the attributes used to assess satisfaction.

Table 4
Factor loadings from study one and prior studies.

                                        Study one              Nicholls et al. (1998)  Gilbert et al. (2004)
Item                                    SPers  SAssu  SSett    SPers  SSett            SPers  SSett
Provider courtesy                       .709                   .762                    .78
Timely service                          .589                   .682                    .72
Competent employees                     .746                   .781                    .75
Easy to get help                        .553                   .737                    .74
Convenient operating hours                                            .547                    .51
Neat and clean place                                                  .605
Treatment received                      .724                   .770                    .72
Easy access to service
Employees listen                        .554
Security within the organisation                      .748            .751                    .81
Security outside the organisation                     .827            .819                    .80
Prompt help                             .557                                           .68
Cost
Fair treatment
Organization delivers what it promises         .876
Helpful personnel                       .756                                           .769
Organization backs up its promises             .612                                    .79
If the construct is assessed using different dimensions then delayed measurement of satisfaction may be measuring a different construct (e.g. cumulative satisfaction). No studies were found, especially in the context of customer satisfaction in the fast food industry, that have investigated whether temporal delays affect the dimensions used to assess encounter-specific satisfaction. The findings will help practitioners and academics to determine when to collect satisfaction responses, how to measure satisfaction and how to interpret the results.

8. Relevant literature and theoretical underpinning


According to Koenig-Lewis and Palmer (2008) customers will
seek to reduce cognitive dissonance by interpreting a service experience in line with existing attitudes to the brand. Over time this
may entail the customer selectively forgetting part of the encounter or adjusting their assessment (Koenig-Lewis and Palmer, 2008).
If the measurement of satisfaction is delayed after the consumption encounter then the data may be more inuenced by cumulative satisfaction assessments rather than satisfaction with a
specic service encounter. The shorter the time period between
the service encounter and the data collection, the more specic
the consumers assessment is likely to be to the encounter rather
than cumulative assessment. In related research, Mazursky and
Geva (1989) measured customer satisfaction and purchase intention immediately after the service encounter and then measured
purchase intention again after a two week delay. They found that
purchase intention diminished after the temporal delay and that
the relationship between past knowledge and purchase intention
increased in strength while the link with encounter-specific performance diminished. McQuitty et al. (2000) argue that the temporal
delay also works in reverse as customers learn from the encounter
specific satisfaction to update their long-term attitude to the
brand.
Memory of the service encounter will also diminish over time.
Koenig-Lewis and Palmer (2008) argue that perceptions of a service encounter will naturally distort over time as cognitive assessments of satisfaction are more difficult to recall than emotional
responses, therefore satisfaction after a temporal delay is more
likely to be based upon affective memories than cognitive
assessments.
The temporal delay also introduces the opportunity for customers to update their attitude toward the service based upon subsequent interactions with the brand (not necessarily encounters)
(Koenig-Lewis and Palmer, 2008). Interpretation of what is important may change as a customer becomes less proximal to the service
environment. These changes can alter the consumer's interpretation of
past consumption encounters.



Table 7
Varimax rotated factor analysis of Customer Satisfaction Survey study two.

9. Research propositions and methodology


The current study explores the differences between immediate
(van Dolen et al., 2004; Davis and Heineke, 1998), and delayed
(Oberoi and Hales, 1990; Fornell et al., 1996; Knutson, 2000; Gilbert et al., 2004) assessment of encounter-specific satisfaction. It is possible that customers will use different attributes to assess service performance after a temporal delay because they reinterpret their satisfaction in line with cumulative satisfaction, their recall of the service encounter may be reduced by the delay, and their
expectations may have changed as a result of other interactions
with the brand.
The questionnaire used in study one was adapted to include a
cover letter that explained the intent of the research, and respondents were asked two additional questions: to nominate their most
recent fast food outlet encounter (which fast food outlet did you
last purchase from), and the temporal delay since that encounter
(how many days since you last purchased from this outlet). The
adapted questionnaire was pilot tested for comprehension which
resulted in no changes. The nal questionnaire was mailed out to
every sixth household in the street listings directory for Victoria
Park. Victoria Park is a suburb of Perth with a demographic prole
similar to the average for the Perth metropolitan area with slightly
below average income, but similar spread of ages, qualications
and nationalities (ABS, 2012). 234 of the 1200 surveys sent out
were returned and usable (19.5% response rate) (see Table 5 for
sample distribution).
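The sampling and response-rate logic described above can be sketched as follows. The household list here is hypothetical (the directory itself is not reproduced); only the step size, mail-out count and usable-return count come from the text.

```python
def systematic_sample(frame, step):
    """Select every step-th unit from an ordered sampling frame."""
    return frame[::step]

def response_rate(usable, mailed):
    """Usable returns as a proportion of surveys mailed out."""
    return usable / mailed

# Hypothetical directory of 7200 listed households in Victoria Park
households = [f"household_{i}" for i in range(7200)]

mailed = systematic_sample(households, 6)
rate = response_rate(234, len(mailed))
print(len(mailed))            # 1200 surveys mailed
print(round(rate * 100, 1))   # 19.5 (% response rate, as reported)
```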

Item                                          SatPers   SatComm   SatSett   SatAssu
1. Provider courtesy                           .780
2. Timely service                              .774
3. Competent employees                         .765
4. Easy to get help                            .619
5. Convenient operating hours                                      .727
6. Neat and clean place                                            .717
7. Treatment received                          .756
8. Easy access to service                                          .584
9. Employees listen                                      .578
10. Security within the organisation                               .649
11. Security outside the organisation                              .580
12. Prompt help                                .547
14. Fair treatment                                                           .732
15. Organization delivers what it promises                                   .585
16. Helpful personnel                                    .602
17. Organization backs up its promises                                       .598
% of var.                                     43.96      8.46      7.31      6.13
Eigenvalues                                    7.473     1.438     1.243     1.041
Cronbach alpha                                 .88       .74       .76       .82
Pearson correlation                            .72       .66       .59       .70
KMO                                            .922
Bartlett's test of sphericity                  .000

Blank cells have loadings less than 0.30.
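The Bartlett's test value reported in Table 7 (.000) is the significance of a standard chi-square statistic testing whether the item correlation matrix differs from the identity matrix, a precondition for factor analysis. A minimal numpy sketch of that statistic follows; the correlation matrices here are illustrative, not the study's data.

```python
import numpy as np

def bartlett_sphericity(R, n):
    """Bartlett's test of sphericity: chi-square statistic and degrees
    of freedom for whether the p x p correlation matrix R, estimated
    from n observations, differs from the identity matrix."""
    p = R.shape[0]
    chi2 = -(n - 1 - (2 * p + 5) / 6.0) * np.log(np.linalg.det(R))
    df = p * (p - 1) // 2
    return chi2, df

# Uncorrelated items (identity matrix): the statistic is 0, so factor
# analysis would be unwarranted.
stat0, df = bartlett_sphericity(np.eye(4), n=234)

# Moderately correlated items: a large statistic, which is why Table 7
# reports a significance of .000.
R = np.full((4, 4), 0.5)
np.fill_diagonal(R, 1.0)
stat1, _ = bartlett_sphericity(R, n=234)
print(df, stat0 == 0, stat1 > 100)   # 6 True True
```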

Table 5
Complete survey counts by fast food establishment for each study.

Fast food establishment    Study one    Study two    Total
McDonald's                 100           68          168
Hungry Jack's              100           39          139
KFC                        100           30          130
Subway                     100           32          132
Other                      -             65           65
Total                      400          234          634

10. Findings

Table 6
Sample characteristics for studies one and two.

Items                                        Face to face survey (%)    Mail survey (%)
Age
  20 or less                                 30.3                        2.2
  21-24                                      15.5                        2.6
  25-29                                      16.3                        9.4
  30-39                                      22.0                       28.6
  40-49                                      10.0                       24.8
  50-59                                       4.5                       20.9
  60 and over                                 2.5                       11.5
Gender
  Male                                       44.3                       43.6
  Female                                     55.8                       56.4
Education
  Not a high school graduate                 11.8                        5.6
  High school graduate                       16.8                       23.1
  TAFE or similar qualifications             24.5                       26.5
  University student                         20.0                        3.4
  University graduate                        22.3                       29.5
  Postgraduate                                4.8                       12.0
Ethnicity/race
  Asian                                      26.8                        9.4
  Australian (including Aboriginal descent)  49.3                       70.6
  European                                   20.0                       17.5
  Other                                       4.0                        2.6
Of the 234 usable responses obtained from the mail survey, there were slightly more females (56.4%) than males, most respondents were aged 30 years or over (85.8%) and they were more likely to have completed a university qualification (41.5%). The respondents were more likely to be Australian (70.6%) and less likely to be Asian (9.4%) (see Table 6).
Exploratory factor analysis with varimax rotation was used to identify four factors (see Table 7). The first factor identified in study two, SatPers, included provider courtesy, timely service, competent employees, easy to get help, treatment received and prompt help. This factor had a high scale reliability (α = .88) and accounted for 43.96% of the variance. The second factor comprised helpful personnel and employees listen and was titled Satisfaction with Communication (SatComm). This factor had good scale reliability (r = .66) and accounted for 8.46% of the variance. The third factor, SatSett, contained five items: security within, security outside, convenient hours, neat and clean, and easy access. This factor also had good scale reliability (α = .76) and accounted for 7.31% of the variance. In study two the SatAssu factor contained organisation delivers promises, backs up promises and fair treatment. The Cronbach alpha of this factor was .82 and it accounted for 6.13% of the variance.
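The two statistics underpinning these results, Cronbach's alpha for scale reliability and the eigenvalue-greater-than-one rule for factor retention, can be sketched with plain numpy. The data below are synthetic and illustrative only; the study's raw responses are not reproduced here.

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

def kaiser_retained(R):
    """Number of factors retained under the eigenvalue > 1 rule
    applied to a correlation matrix R."""
    return int((np.linalg.eigvalsh(R) > 1).sum())

rng = np.random.default_rng(0)
# 234 simulated respondents; three items driven by one common latent
# factor plus item-level noise, so the scale is internally consistent.
latent = rng.normal(size=(234, 1))
items = latent + rng.normal(scale=0.5, size=(234, 3))

print(cronbach_alpha(items) > 0.8)                        # True
print(kaiser_retained(np.corrcoef(items, rowvar=False)))  # 1
```

In Table 7 the same retention rule keeps four factors, since all four listed eigenvalues (7.473 to 1.041) exceed 1.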
11. Discussion and implications
The results indicate only small differences between study one
and study two in the factors used by customers to assess satisfaction with fast food encounters. Three factors were identified in study one whereas four factors were identified in study two. However, the additional factor in study two, SatComm, was conceptually similar to SatPers (and in study one its items were part of SatPers). Across both studies it was apparent that customers assess satisfaction with personal service as the dominant factor (including SatComm in study two), satisfaction with setting as the next most
dominant, and satisfaction with assurance as the least explanatory factor.
The items that comprise the factors are also relatively consistent across the two studies. In study two, SatPers had two fewer items, which formed the second factor, SatComm. Whilst the SatSett factor in study two comprised three more items (neatness, convenience and easy access), these items are consistent with satisfaction with setting. Perhaps this reflects that after a temporal delay respondents broaden their assessment of setting by including additional items. Similarly, the SatAssu factor identified in study two included fair treatment. The concept of fairness is consistent with the overall intent of service assurance and therefore is
similar to the SatAssu factor found in study one.
Overall, the results indicate that after a time delay, customers
use the same dimensions to assess satisfaction but with more
items. The results have practical implications for fast food managers. By recognising that customers still assess satisfaction on
the same dimensions, even after a temporal delay, fast food
managers can still use delayed responses to assess and maximise
service performance. However, temporal delays may increase the
breadth of items used to assess satisfaction. This consistency in
assessment may support the delayed collection of data for products that are purchased less frequently. Koenig-Lewis and Palmer
(2008) for example, argue that satisfaction should be measured
close to the next service encounter rather than just after the
last service encounter as this is more indicative of repurchase
intent.
The results found after a temporal delay could have been influenced by several factors. Data collected via mail may exhibit different characteristics than data collected face to face because
respondents complete the mail survey in their own time and can
consider the implications of their answers more fully. Also people
who responded to the mail survey were older, more educated and
less likely to be from Asia than those surveyed immediately after
purchase. This could indicate that particular types of people respond to mail surveys and that these respondents may be atypical fast food consumers. Where the responses were collected may also affect customers' assessment of satisfaction, particularly with the service setting. For example, customers at an outlet
may recall safety as more important than respondents who are
completing the questionnaire away from the service setting.

12. Concluding comments


Based upon responses to the CSS, customers in Australia assess
satisfaction in very similar ways to customers in other countries
and other service industries. This is irrespective of whether the
customers complete the assessment straight away or after a temporal delay. The SatPers dimension was reasonably consistent in
each study and with past research (Nicholls et al., 1998; Gilbert
et al., 2004) and was the most explanatory factor in the CSS. SatSett
was also validated in the current study as the second most important factor in the CSS. SatAssu was identified as an additional factor
in both studies indicating that assurance of promises and fairness
are important in the context of this study. However Gilbert et al.
(2004) also found some variation in factors and the items that
comprise those factors across the four countries that they explored; therefore, the findings could be caused by instability in
the CSS.
Measuring customer satisfaction is important to enable organisations to manage the service experience with the aim of increased
customer acquisition and retention (Bolton and Drew, 1991; Gilbert et al., 2004; Oliver, 2010; Yeung et al., 2002). The CSS offers
organisations a parsimonious tool to measure customer satisfaction across service situations and countries.


The focus of the current study on the fast food industry limits the findings somewhat due to its specific characteristics, including the need for convenience, speed of service, changing consumer preference for fast food, frequency of purchase, etc. As well,
despite the focus on global brands, national fast food consumption still comprises many smaller providers, which could be explored in future research. Lastly, it is possible that some groups
of customers (e.g. regular patrons versus special occasion patrons)
assess satisfaction differently. Future research could target measurement of satisfaction amongst specic target customer groups
to explain more of the differences in satisfaction assessment.
References
ABS, 2012. Australian Bureau of Statistics National Regional Profile 2006–2010. http://www.ausstats.abs.gov.au/ausstats/nrpmaps.nsf/NEW+GmapPages/national+regional+profile.
Aksoy, L., Cooil, B., Groening, C., Keiningham, T.L., Yalçın, A., 2008. The long-term stock market valuation of customer satisfaction. Journal of Marketing 72 (4), 105–122.
Anderson, E.W., Fornell, C., 1994. A customer satisfaction research prospectus. In: Rust, R.T., Oliver, R.L. (Eds.), Service Quality: New Directions in Theory and Practice. Sage Publications, London, pp. 241–268.
Anderson, E.W., Fornell, C., Mazvancheryl, S.K., 2004. Customer satisfaction and shareholder value. Journal of Marketing 68 (4), 172–185.
Anderson, S., Pearo, L.K., Widener, S.K., 2008. Drivers of service satisfaction: linking customer satisfaction to the service concept and customer characteristics. Journal of Service Research 10 (4), 365–381.
Bolton, R.N., 1998. A dynamic model of the duration of the customer's relationship with a continuous service provider: the role of satisfaction. Marketing Science 17 (1), 45.
Bolton, R.N., Drew, J.H., 1991. A multistage model of customers' assessments of service quality and value. Journal of Consumer Research 17 (4), 375–384.
Carman, J.M., 1990. Consumer perceptions of service quality: an assessment of the SERVQUAL dimensions. Journal of Retailing 66 (1), 33.
Churchill Jr., G.A., Surprenant, C., 1982. An investigation into the determinants of customer satisfaction. Journal of Marketing Research 19 (4), 491–504.
Danaher, P.J., Mattsson, J., 1994. Customer satisfaction during the service delivery process. European Journal of Marketing 28 (5), 5–16.
Davis, M.M., Heineke, J., 1998. How disconfirmation, perception and actual waiting times impact customer satisfaction. International Journal of Service Industry Management 9 (1), 64.
Fornell, C., Johnson, M.D., Anderson, E.W., Jaesung, C., Bryant, B.E., 1996. The American customer satisfaction index: nature, purpose, and findings. Journal of Marketing 60 (4), 7–18.
Fornell, C., Mithas, S., Morgeson III, F.V., Krishnan, M.S., 2006. Customer satisfaction and stock prices: high returns, low risk. Journal of Marketing 70 (1), 3–14.
Gilbert, G.R., Veloutsou, C., Goode, M.M.H., Moutinho, L., 2004. Measuring customer satisfaction in the fast food industry: a cross-national approach. The Journal of Services Marketing 18 (5), 371.
Grigoroudis, E., Siskos, Y., 2004. A survey of customer satisfaction barometers: some results from the transportation-communications sector. European Journal of Operational Research 152 (2), 334–353.
Gronroos, C., 1984. A service quality model and its marketing implications. European Journal of Marketing 18 (4), 36–44.
Gupta, S., Zeithaml, V., 2006. Customer metrics and their impact on financial performance. Marketing Science 25 (6), 718–739.
Huber, F., Herrmann, A., Henneberg, S.C., 2007. Measuring customer value and satisfaction in services transactions: scale development, validation and cross-cultural comparison. International Journal of Consumer Studies 31 (6), 554–564.
IBISWorld, 2011. IBISWorld Industry Report G4621-GL: Global Fast Food Restaurants. IBISWorld Industry Reports. IBISWorld.
Johnson, M.D., Gustafsson, A., Andreassen, T.W., Lervik, L., Cha, J., 2001. The evolution and future of national customer satisfaction index models. Journal of Economic Psychology 22 (2), 217–245.
Knutson, B.J., 2000. College students and fast food: how students perceive restaurant brands. The Cornell Hotel and Restaurant Administration Quarterly 41 (3), 68–74.
Koenig-Lewis, N., Palmer, A., 2008. Experiential values over time: a comparison of measures of satisfaction and emotion. Journal of Marketing Management 24 (1–2), 69–85.
Kumar, P., 2002. The impact of performance, cost, and competitive considerations on the relationship between satisfaction and repurchase intent in business markets. Journal of Service Research 5 (1), 55–68.
Lee, H., Lee, Y., Yoo, D., 2000. The determinants of perceived service quality and its relationship with satisfaction. The Journal of Services Marketing 14 (3), 217–231.
Lee, M., Ulgado, F.M., 1997. Consumer evaluations of fast-food services: a cross-national comparison. The Journal of Services Marketing 11 (1), 39–52.
Lockyer, S., 2007. Study: mixed results in customer satisfaction. Nation's Restaurant News, 24.
MacGowan, I., 2012. IBISWorld Industry Report G5125a: Fast Food in Australia. IBISWorld Industry Report. IBISWorld.
Mazursky, D., Geva, A., 1989. Temporal decay in satisfaction–purchase intention relationship. Psychology and Marketing 6 (3), 211–227.
McDonald's, 2012. Our values: what we believe in. Retrieved 7 March 2012, from http://mcdonalds.com.au/careers/about-us/our-values.
McDougall, G.H.G., Terrence, L., 2000. Customer satisfaction with services: putting perceived value into the equation. The Journal of Services Marketing 14 (5), 392–410.
McQuitty, S., Finn, A., Wiley, J.B., 2000. Systematically varying consumer satisfaction and its implications for product choice. Academy of Marketing Science Review 2000 (10), 1–16.
Mittal, V., Kamakura, W.A., 2001. Satisfaction, repurchase intent, and repurchase behavior: investigating the moderating effect of customer characteristics. Journal of Marketing Research (JMR) 38 (1), 131–142.
Mittal, V., Kumar, P., Tsiros, M., 1999. Attribute-level performance, satisfaction, and behavioral intentions over time: a consumption-system approach. Journal of Marketing 63 (2), 88–101.
Morgeson, F., Mithas, S., Keiningham, T., Aksoy, L., 2011. An investigation of the cross-national determinants of customer satisfaction. Journal of the Academy of Marketing Science 39 (2), 198–215.
Nicholls, J.A.F., Gilbert, R.G., Roslow, S., 1998. Parsimonious measurement of customer satisfaction with personal service and the service setting. The Journal of Consumer Marketing 15 (3), 239–253.
Nunnally, J.C., Bernstein, I.H., 1994. Psychometric Theory. McGraw-Hill, New York.
O'Connell, V., O'Sullivan, D., 2011. The impact of customer satisfaction on CEO bonuses. Journal of the Academy of Marketing Science 39 (6), 828–845.
Oberoi, U., Hales, C., 1990. Assessing the quality of the conference hotel service product: towards an empirically based model. Service Industries Journal 10 (4), 700–721.
Oliver, R., 2010. Satisfaction: A Behavioral Perspective on the Consumer. M.E. Sharpe, Armonk, NY.
Parasuraman, A., Zeithaml, V.A., Berry, L.L., 1988. SERVQUAL: a multiple-item scale for measuring consumer perceptions of service quality. Journal of Retailing 64 (1), 12–40.
Samaraweera, M., Gelb, B.D., 2011. Wringing more value from advertising dollars: the customer satisfaction boost. Journal of Business Strategy 32 (6), 24–29.
Sivadas, E., Baker-Prewitt, J.L., 2000. An examination of the relationship between service quality, customer satisfaction, and store loyalty. International Journal of Retail & Distribution Management 28 (2), 73–82.
Spreng, R.A., MacKenzie, S.B., Olshavsky, R.W., 1996. A reexamination of the determinants of consumer satisfaction. Journal of Marketing 60 (3), 15.
van Dolen, W., de Ruyter, K., Lemmink, J., 2004. An empirical assessment of the influence of customer emotions and contact employee performance on encounter and relationship satisfaction. Journal of Business Research 57 (4), 437–444.
Williams, P., Naumann, E., 2011. Customer satisfaction and business performance: a firm-level analysis. Journal of Services Marketing 25 (1), 20–32.
Wilson, A., 2002. Attitudes towards customer satisfaction measurement in the retail sector. International Journal of Market Research 44 (2), 213–222.
Witkowski, T.H., Ma, Y., Zheng, D., 2003. Cross-cultural influences on brand identity impressions: KFC in China and the United States. Asia Pacific Journal of Marketing and Logistics 15 (1/2), 74–88.
Yeung, M.C.H., Lee Chew, G., Ennew, C.T., 2002. Customer satisfaction and profitability: a reappraisal of the nature of the relationship. Journal of Targeting, Measurement & Analysis for Marketing 11 (1), 24.
Yuksel, A., Rimmington, M., 1998. Customer-satisfaction measurement: performance counts. The Cornell Hotel and Restaurant Administration Quarterly 39 (6), 60–70.
Yüksel, A., Yüksel, F., 2003. Measurement of tourist satisfaction with restaurant services: a segment-based approach. Journal of Vacation Marketing 9 (1), 52–68.
Zeithaml, V., Berry, L., Parasuraman, A., 1993. The nature and determinants of customer expectations of service. Journal of the Academy of Marketing Science 21 (1), 1–12.
Zeithaml, V.A., Berry, L.L., Parasuraman, A., 1996. The behavioural consequences of service quality. Journal of Marketing 60 (2), 31–46.
