
An Exploratory Study of the Impact of e-Service Process

on Online Customer Satisfaction

Sulin Ba
Department of Operations and Information Management
School of Business
University of Connecticut
Storrs, CT 06269
Tel: 860-486-6311
Fax: 860-486-4839
Sulin.ba@uconn.edu
Wayne C. Johansson
U.S. Department of Homeland Security
Los Angeles International Airport
Los Angeles, CA 90045
Tel: 323-436-2772
wcjla@sbcglobal.net

Forthcoming in Production and Operations Management

An Exploratory Study of the Impact of e-Service Process


on Online Customer Satisfaction
Abstract
Although extensive academic research has examined the dynamics of interpersonal
interactions between service providers and customers, much less research has investigated
customer service encounters through technological interfaces such as the Web in electronic
commerce transactions. Corporate websites have become an important point of contact with
customers for many companies. Service has been described as one of the most important
attributes for online business to influence traffic and sales. However, more research is needed to
understand how web-based technological capabilities of the services affect customer evaluations
of service value and how to manage the technological capabilities embedded in the e-service
for customer satisfaction. In this paper, we propose viewing the interface between online buyers
and sellers through the lens of service management in order to identify and explain possible
determinants of online customer satisfaction. A company's website is considered its electronic
Service Delivery System (eSDS). We look at this eSDS from its process point of view and
examine how an eSDS affects customer satisfaction. Our findings indicate that as the eSDS
process improves, a customer's perception of the website's ease of use increases, leading to
increased service value and perceived control over the process, which increases customer
satisfaction. The research provides evidence that the technological capabilities embedded in the
website processes are an important factor in determining service quality and ultimately online
customer satisfaction.

Keywords: Web-based Technological Capabilities, Technology Design of e-Service Process, Online Customer Satisfaction, Electronic Service Delivery System (eSDS)

1. Introduction
Technology advancement is revolutionizing the way business is conducted and reshaping

how companies interact with their customers. This phenomenon is particularly evident in the
domain of electronic commerce (EC). Companies have realized that electronic commerce is not only
a way of reducing costs through automation and increased efficiency but also, more importantly,
a means to expand revenues through enhanced customer service. Corporate websites provide
an important interface through which customers and firms interact with each other. This
interface has several characteristics that are uncommon to the traditional forms of buyer/seller
interaction (e.g., face-to-face or telephone). With little or no human intervention, the capabilities
embedded in the website process technology enable a consumer to locate a product or service,
assess its utility, and purchase it practically whenever and wherever it is convenient. Indeed, the
Internet technology has dramatically impacted the service creation process.
Yet, surveys of online customers consistently indicate that a large percentage are not
satisfied with the interaction (ICSA 2001, Bednarz 2003). As pointed out by Meuter et al.
(2000), although extensive academic research has examined the dynamics of interpersonal
interactions between service providers and customers, much less research has investigated
customer service encounters through technological interfaces. There has been research on
website design to improve customer satisfaction. Most studies, however, focus on website
navigation, information content, download speed, information presentation, etc. (e.g., McKinney
et al. 2002, Palmer 2002). More research is needed to better understand how services delivered
through technological interfaces such as the Web affect customer evaluations of service value
and how to manage the technology-based service process for customer satisfaction and,
ultimately, for creating strategic advantage.

Service has been described as one of the most important attributes for online business to
influence traffic and sales (Lohse and Spiller 1998). Provision of service over electronic
networks is referred to as e-service (Rust and Kannan 2003). Scholars have argued that e-service,
compared with offline service, has the ability to serve consumers more efficiently, at a lower
marginal cost, while simultaneously offering real-time product and/or service-specific
information (Shapiro and Varian 1999).
However, although Internet-based service tools and technologies offer many benefits,
there are inevitably costs associated with developing and delivering e-services. A recent IDC
survey found that in 2002, e-service (such as online order taking and order tracking, payment,
and after-sales support) provision absorbed about 50% of the total investment in new information
technologies at a typical company (Tsikriktsis et al. 2004). E-service entails many different
dimensions and attributes, such as responsiveness of answering customer inquiries, website
security, customization, interactivity, service delivery processes, etc. The rapid increase in e-service activity creates a challenge for firms: what combination of features should be embedded
in the service technology to satisfy consumers while realistically considering operational and
financial constraints?
The ideal action for Internet companies is to improve and maintain all service quality
attributes that satisfy their customers' needs and wants. However, given that firms, even large
ones, have limited resources, priorities must be set among alternative technological capabilities
embedded in e-service when making investment decisions based on a company's business strategies.
Not all technological capabilities have the same effect on customer satisfaction. The key is to
find, among various capabilities, which ones are more crucial to enhancing the level of service
quality. In other words, to be successful, e-services need to identify and focus on developing

technology-based features that enhance consumer value. In this manner, firms can understand
what service areas should be emphasized to most effectively improve quality while avoiding
investing valuable resources in technology features that may not pay off.
The justification and deployment of new technology (in the case of this study, the e-service process) to a new market warrants special attention (Betz 2001). We build on theories
from service management and propose viewing the interface between buyers and sellers (i.e., the
website embedding e-service technology capabilities) through the lens of service management in
order to identify and explain possible determinants of online customer satisfaction. In particular,
the service management literature has identified service process as a critical factor for
influencing service quality. We evaluate the significance of the e-service process in terms of
causality of the level of customer satisfaction: How does a customer's perception of a website's
e-service process affect customer satisfaction?
This research strives to make an important contribution to the management of technology
domain by examining the impact of e-service capabilities on customer satisfaction in the Internet
market. Our insights will help firms to understand whether investing in and managing the e-service is justified and what factors lead to customer acceptance of the e-service process.

2. Background and Theory


The Service-Profit Chain model (Heskett et al. 1994) hypothesizes relationships between

the Service Delivery System (SDS) (internal service quality, employee satisfaction, retention,
and productivity), customer satisfaction, and profitability. In summary, profit and growth result
from customer loyalty, which develops from customer satisfaction. Customer satisfaction, in
turn, is influenced by the service value a customer receives from the service delivery
system. This model provides an integrative framework for understanding how a firm's
investments in service operations are related to customer perceptions of the value they receive
from the firm. The framework has been widely used by practitioners. Academic researchers
have also empirically tested the various links suggested by the model and found support for the
positive effects of the SDS performance perceptions on service quality perceptions and
customer behaviors (Kamakura et al. 2002).
With recent technological advances and the explosion of Internet usage, many services
are delivered through a company's website and customers no longer interact face-to-face with
the service provider. A company's website thus becomes the service delivery system, which is
critical for a company's value creation strategy. The focus of this study is to examine how this
new, electronic service delivery system (eSDS) affects customers' perceived service value and
customer satisfaction.
Roth and Jackson (1995) evaluated the service delivery system in the operations
capabilities-service quality-performance (C-SQ-P) model based on the process and people
capabilities of the SDS: what the system can do and what the outcomes of the service interaction
are, because these two factors are what customers tend to use to make their judgment of the
service system. In their survey of the retail banking industry, Roth and Jackson provided
evidence that the processes of a service delivery system had a greater impact upon service
quality than people capabilities (the knowledge and skills possessed by employees interacting
with customers). This finding leads us to think that it is possible that technology and the business
processes embedded in the technology have a greater significance than human interaction on
customers' perceived service value, especially in the online environment. A study by Meuter et
al. (2000) indicates that process failure and poor process design are among the major factors
leading to a customer's unfavorable evaluation of service in a technology-based service
encounter. In the Internet environment, the technology typically is the website, which is also the
service delivery system. Therefore we focus on the eSDS from the service process point of view.
Specifically, service process is conceptualized as a configuration of technological capabilities
through which service providers respond to customer needs; and perceived eSDS process refers
to the customer's view of how service processes are delivered by the website's technological
capabilities. Combining the C-SQ-P model with the Service-Profit Chain model and adapting
them to the EC service context, we conjecture the following:
H1a: Service value provided to customers is positively correlated to customers' perceived eSDS process.

H1b: Customer satisfaction is positively correlated to the service value provided to customers.

H2: Customer satisfaction is positively correlated to perceived eSDS process.

One important factor that has been identified by marketing researchers to have a
considerable impact in a service process is perceived control (Hui and Bateson 1991), which is
described as the amount of control that a customer feels he has over the process or outcome. Hui
and Bateson (1991) concluded that perceptions of control affect customer satisfaction ratings in a
variety of service situations. The sense of control is especially important to customers in a self-service setting and could increase the evaluation of the experience (Langeard et al. 1981).
In an EC context, when a customer searches through a company's website for a particular
product or checks inventory availability, the customer is in fact performing a self-service. More
recent studies on Internet retailing suggest that perceived control encourages Internet usage and
loyalty, resulting in more satisfied customers (Lee and Allaway 2002). Indeed, new advances in

technology have made it possible for customers to choose the channel through which to acquire
the product, the channel through which the product will be delivered, and the extent to which
they would like to be involved in the development or delivery of the product. Therefore, their
expectations of perceived control have escalated. Rust and Kannan (2003) believe that
appropriately designed and implemented service technologies can provide customers with more
control in their process of conducting transactions which can increase customer satisfaction. In
this research, we define perceived control as the amount of control the customer has in the
technology-based e-service encounter, such as navigating through the website and determining
the e-service outcome. We hypothesize that:
H3a: The customer's perceived control in the online service process is positively correlated to the perceived eSDS process.

H3b: Customer satisfaction is positively correlated to the customer's perceived control in the online service process.

Although it has been argued that the capabilities embedded in an e-service technology
provide many potential benefits for customers, if customers think the technology is too difficult
to use, customers may not use the e-service technology at all. Therefore, customer acceptance of
this new technology is critical for firms trying to push more service to the customer side to lower
their service cost and improve their service efficiency. The Technology Acceptance Model
(TAM), which has been widely used to study user acceptance of new technology, argues that
perceived ease of use is one of the key predictors of user acceptance of new technology (Davis
1989). Perceived ease of use is directly related to computer-mediated services and refers to the
extent to which a person believes using the technology will be free of effort. In an e-commerce

setting, ease of use has been confirmed as a key factor leading to channel satisfaction (Devaraj et
al. 2002).
Ease of use, however, is dictated by what the system can do and what it allows its
customers to do, i.e., the capabilities embedded in the e-service technology. Usability studies on
online stores have looked at website architecture, design, and various navigation processes to
predict how easy it is for users to achieve what they want to do (Lohse and Spiller 1998, Palmer
2002). A recent study by Chen et al. (2004) indicates that poorly designed website processes
have an adverse influence on the website's perceived ease of use. Therefore, we hypothesize the
following:
H4a: The customer's perceived ease of use of a website is positively correlated to the perceived eSDS process.

H4b: Customer satisfaction is positively correlated to the customer's perceived ease of use of the website.

Other than the process capabilities of the SDS, the service literature also looks at how a
service is delivered. Bitner et al. (1994) and Mittal and Lassar (1996) both point out that in
service settings, customer satisfaction and evaluation of service are often influenced by the
quality of the interpersonal interaction between the customer and the service provider. In the EC
context, services are provided through the technology medium and therefore are mostly
impersonal. The level of interaction between the service provider and the customer or among the
customers is normally determined by the degree of interaction embedded in the service
provider's website, a key technological capability. For example, does the website enable two-way synchronous or asynchronous exchanges between the customer and the service provider?
Does the website provide a telephone number in case the customer needs to contact the service

provider? This kind of reciprocal communication-based interaction is termed interactivity in the
literature and considered an important influence in building up online relationships and total
shopping experiences (Ha and James 1998, Merrilees 2002).
In addition to the social aspect of customer interaction with the service provider, many
websites also offer technological tools that allow customers to receive information tailored to
their specific needs. For example, a customer shopping for a sports utility vehicle can choose to
only receive information comparing different SUV models. This type of interactivity (the
technological capability to create a customized product) is mainly process or system oriented
(see, e.g., McKinney et al. 2002, Palmer 2002), instead of social content oriented, and therefore
is captured in our eSDS process construct. Interactivity in our research focuses on the
interpersonal communication aspect of the construct. Based on the above discussion, we
hypothesize the following:


H5a: Interactivity moderates the relationship between the perceived eSDS process and service value.

H5b: Interactivity moderates the relationship between service value and customer satisfaction.

H5c: Interactivity moderates the relationship between the perceived eSDS process and the perceived ease of use.

H5d: Interactivity moderates the relationship between the customer's perceived ease of use and customer satisfaction.

H5e: Interactivity moderates the relationship between the perceived eSDS process and the customer's perceived control.

H5f: Interactivity moderates the relationship between the customer's perceived control and customer satisfaction.

Figure 1 summarizes our research model.


Figure 1. Research Model and Hypotheses

Three demographic variables, namely gender, Internet experience, and online shopping
experience, are important for control purposes. Prior research has indicated that both gender and
relevant prior experience play a role in how users perceive a technology (e.g., Venkatesh and
Morris 2000). To the extent that perceived ease of use and perceived control may be related to
these demographics, it is necessary to control for them in assessing the true relationship between
these constructs and customer satisfaction. Therefore, these three demographic variables were
included as control variables.


3. Methodology

3.1. Research Design
The empirical study was conducted in two phases: a pilot study (n=100, with a response
rate of 79.9%) and the final study. Relying upon extant service instruments and concepts, we
first developed an instrument to measure our constructs. Through pilot testing, we substantiated
the reliability of the construct indices and eliminated redundant questions. The final study
instrument is a much shorter version.
Common method variance can be a potential source of bias in survey research. Therefore,
following suggestions by Podsakoff and Organ (1986), a procedural remedy was used to reduce
method bias by guaranteeing response anonymity. Another procedural remedy was the
psychological separation of predictor and criterion variables: the items measuring different
constructs were mixed throughout the questionnaire. In addition, almost half of the items on the
instrument were reverse-worded.
Data for the study were collected through self-administered questionnaires over a period
of three weeks. Subjects for the study were students enrolled in four operations and information
management courses at a major private university in the U.S. Two of the courses were
undergraduate courses and two were MBA electives. All four courses had roughly the same
enrollment. Each subject was asked to visit a specific website from six retailing
websites, with the purpose of selecting and possibly purchasing one of three products (a pair of
shoes, a briefcase, and a gift item of their choice). The surveys handed out to each course
consisted of an equal number of the six websites. The surveys were handed out randomly;
therefore neither the subject nor the researchers could know in advance who would be asked to
visit which website, thus creating randomization. Those enrolling in more than one of the four
courses were identified beforehand and received the questionnaire only once. The incentive for
participating was extra course credit (2% of course grade).
Heim and Sinha (2002) classify an electronic retailers service process into four
categories: service kiosk, service mart, mass service customization, and joint alliance service
customization. According to Heim and Sinha, a service kiosk, which is basically an electronic
brochure, has little or no ability to sell online and extremely limited service process, making a
service kiosk unsuitable for our study. Consequently, we selected six websites, each belonging
to one of the other three categories, hence maximizing the variation in the technological
capabilities embedded in the service processes offered by these e-tailers. Three of the six sites
were general broad-category shopping sites (storerunner.com, nbci.com [1], netmarket.com) that
can be considered joint alliance service customization: service products were designed and
delivered via interlinking systems between several companies, and service processes represented
operations oriented toward multiple-company delivery of service products. Two other websites,
one luggage retailer (luggageonline.com) and one shoe retailer (shopping.zappos.com), were
chosen as mass service customization sites that were single-company implementation of service
process delivery. Finally, a gift item retailer (123-gifts.com 2 ) was chosen as a service mart that
provided basic technological service capabilities such as searching and ordering products but
lacked more complicated service features such as order tracking.
After the subjects completed an encounter with the website, a survey instrument was self-administered. The subjects were not required to complete a purchasing transaction. Our final
sample consisted of 149 complete and valid responses out of 170 questionnaires that were
handed out, for a response rate of 87.6%. To determine whether nonresponse bias was an issue,
we used the procedure outlined by Armstrong and Overton (1977) to compare early (surveys
returned within the first week) with late responders. No significant differences in any of our
measures were noted.

[1] This site has evolved into a general portal instead of a shopping site since the study.
[2] No longer in operation as of November 2004.
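For readers who want to reproduce this kind of early-versus-late comparison on their own data, a minimal sketch in Python is shown below. The data file and column names (return_week and the six construct indices) are hypothetical, since the study's own analysis was carried out in SPSS.

```python
# Sketch of an Armstrong and Overton (1977) style nonresponse-bias check:
# compare early responders (first week) with late responders on each index.
# File and column names are hypothetical placeholders.
import pandas as pd
from scipy import stats

responses = pd.read_csv("survey_responses.csv")           # hypothetical file
early = responses[responses["return_week"] == 1]
late = responses[responses["return_week"] > 1]

for index in ["esds", "value", "ease_of_use", "control", "interactivity", "satisfaction"]:
    t, p = stats.ttest_ind(early[index], late[index], equal_var=False)
    print(f"{index}: t = {t:.2f}, p = {p:.3f}")  # p > .05 suggests no early/late difference
```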
The age of respondents varied from 20 to 39, with an average age of 24. Of these, 45%
were female and 55% male. 91% of the respondents had prior online shopping experience, with
an average of 8 online purchases. 95% of the respondents used the Internet on a daily basis. In
addition, 60% of them typically used high-speed Internet access (DSL, cable modem, or T1
connection).
3.2. Scale Development
In an effort to provide reliability, the study instrument relied upon existing measures

whenever possible, reverse-worded questions, single-barreled questions, the test-retest method, and pre-testing of the questionnaire. The final survey instrument contained one binary question that
asked about the user's success in finding and/or purchasing the target product, 36 questions (7-point
Likert scale) to evaluate the constructs, and 11 demographic questions.
Perceived e-Service Process. Since there was no existing scale explicitly measuring the
eSDS process from the customer's point of view, new items were created according to Heim and
Sinha's taxonomy of e-service processes (2002). Heim and Sinha suggest that website navigation,
product information and representation, and order processing and fulfillment are major e-service
process dimensions. Since our study did not require the subjects to actually purchase a product,
order fulfillment was not measured. Five items were created to measure the subjects' perception
of various technological capabilities embedded in a website, such as website navigation,
information searching, and product ordering. In addition, Roth and Jackson (1995) and Meuter
et al. (2000) both identify process error as a major source of dissatisfaction in a technology-based
self-service encounter. One additional item was thus adapted from Roth and Jackson (1995) to
measure process error.
Service Value. Service value in recent years has been considered a key strategic variable
to help explain consumer purchase behavior and relationship commitment (Patterson and Spreng
1997). In service management and marketing, value is typically defined from the consumer's
perspective. Heskett et al. (1994) define value as the results customers receive in relation to the
total costs (both the price and other costs customers incur in acquiring the service).
Perceived value is often viewed as the customer's overall assessment of the utility of a product
based on perceptions of what is received and what is given (Zeithaml 1988).
Perceptions of value, however, are not limited to the functional aspects but may include
social, emotional and even epistemic value components. Prior research indicates that three
elements contribute to a consumer's perception of value: product price, product quality, and
shopping experience. Kerin et al. (1992) investigated the effect price, product quality and
shopping experience had on value perceptions of a retail store (rather than a product), concluding
that the shopping experience had a greater effect on store value than did price or product quality.
Existing scales measuring value, however, overwhelmingly focus on price (see, e.g., Patterson
and Spreng 1997 and Sweeney et al. 1999). In our research, since we were mainly interested in
service value relative to shopping experience (we did not ask our subjects to actually purchase a
product), new items were developed to focus on measuring the "give" and "receive" trade-off in
terms of effort.
Perceived Ease of Use. Ease of use items were adapted from existing scales. The
perceived ease of use measures first developed by Davis (1989) in the technology acceptance
model and later modified by Devaraj et al. (2002) and Koufaris (2002) for online transactions,
formed the basis for our scale.
Perceived Control. In developing the scale for perceived control, our key focus was that
the items should refer to a technology-based (i.e., the Internet) service encounter. Bowen and
Johnston (1999) suggest that perceived control emphasizes the importance of the individual's
subjective assessment of whether they can exercise discretion and influence. Negative
consequences, such as alienation and frustration, result when this basic need is not satisfied.
Therefore, incorporating the scales developed by Koufaris (2002) for online retailing, and Hui
and Bateson (1991) for service encounters, we created a six-item scale for perceived control
which tried to capture the positive as well as negative feelings customers might experience in
online service encounters.
Interactivity. As previously mentioned, most existing scales for interactivity in the
information systems literature mainly refer to websites' technological capabilities. Our research
focuses on the communication and social aspects of the online interaction, following Ha and
James' (1998) analysis. The only available scale for communication-based interactivity is by
Merrilees (2002), which has a reported reliability measure of 0.85. After a careful examination of
the scale, we felt that some items (e.g., "the overall shopping experience is very pleasant and
enjoyable") were more about general satisfaction with the website than the interactivity aspect.
Therefore, we adapted two out of the seven items from that scale.
In their conceptualization of interactivity, Ha and James (1998) list connectedness and
information collection as important dimensions of interactivity. Connectedness refers to the
feeling of being able to link to the outside world and to broaden one's experience whereas
information collection refers to a website's ability to provide and collect necessary information
to and from consumers in a transaction. Based on this conceptualization, we developed four new
items for the interactivity scale.
Customer Satisfaction. Five of the items used to operationalize customer satisfaction
came from Oliver and Swan (1989), which had a reported scale reliability of over 0.95.
Although their scale was developed to measure respondents' satisfaction in the context of
new car purchases, the wording of the scale is very general and the items have been used by
others in various online research contexts (e.g., Devaraj et al. 2002, Janda et al. 2002). Therefore
we adapted the scale and made some wording adjustments to reflect the web-technology-based
service experience. In addition, we added one more item from McKinney et al. (2002) as a
general measure of overall satisfaction towards a website.

4. Data Analysis
All measurements of the constructs are based upon the respondents' opinions. Unknown
covariates (e.g., traffic volume on the Internet, Internet service providers' technologies), which
neither the website nor the customer can control, may have an influence. We rely on a large
sample size to mitigate these unknowns. SPSS 13 for Windows was employed for exploratory
factor analysis. Amos 5 was the structural equation modeling (SEM) package utilized for the
confirmatory factor analysis and for determining relationships among the constructs.
Heteroscedasticity was observed in the scatterplots containing the ease of use index and
unfortunately could not be corrected by various transformations. While not desirable, the
existence of heteroscedasticity does not, however, invalidate the index. More than likely, any
hypothesis testing involving this index will be either too conservative or too sensitive (Hair et al.
1998). Across the three products, none of the indices were found to significantly differ.


As mentioned before, common method variance can be a potential source of bias in
survey research. Negative affectivity, in particular, can be a problem for this research since
subjects may react negatively to a website, which could affect all of their responses. One of the
procedures commonly used to test for the presence of common method bias in a data set is
Harman's one-factor test (Podsakoff et al. 2003): if a single factor is obtained from an
exploratory factor analysis or if one factor accounts for a majority of the covariance in the
independent and dependent variables, then the threat of common method bias is high. Our factor
analysis did not indicate a single-factor structure that explained significant covariance,
suggesting that common method bias is not a cause for concern in our sample.
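The Harman's one-factor test described above can be reproduced along the following lines; the sketch assumes the 36 Likert items sit in a hypothetical CSV file and uses the factor_analyzer package rather than the SPSS routine used in the study.

```python
# Sketch of Harman's one-factor test: run an unrotated factor extraction and see
# whether a single factor accounts for the majority of the variance.
# File and item column names are hypothetical.
import pandas as pd
from factor_analyzer import FactorAnalyzer

items = pd.read_csv("survey_items.csv")                   # hypothetical file, one column per item

fa = FactorAnalyzer(n_factors=len(items.columns), rotation=None, method="principal")
fa.fit(items)
eigenvalues, _ = fa.get_eigenvalues()
first_factor_share = eigenvalues[0] / eigenvalues.sum()
print(f"Variance attributable to the first factor: {first_factor_share:.1%}")
# A single dominant factor (a majority of the variance) would signal common method bias.
```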
4.1. Exploratory Factor Analysis


Responses to the questionnaire were subjected to an exploratory factor analysis. Recent

research has demonstrated the benefits of using exploratory factor analysis as a complement to
theory in specifying the appropriate factor loadings in the measurement model (Gerbing and
Hamilton 1996). The Kaiser-Meyer-Olkin Measure of Sampling Adequacy (MSA) for our data
is .882, well above the .80 level deemed as meritorious for factor analysis (Kaiser 1970).
The principal factor method was used to extract the factors, with a promax (oblique)
rotation. The eigenvalue-one criterion suggested eight factors. However, factors 7 and 8 each
only accounted for less than 3% of the common variance. A scree test suggested six meaningful
factors. Therefore, we decided to retain only six factors, which together explained 60% of the
total variance.
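A sketch of this extraction and rotation sequence, again assuming a hypothetical item file and the factor_analyzer package (the study itself used SPSS 13), is given below.

```python
# Sketch of the exploratory factor analysis described above: KMO sampling adequacy,
# principal-factor extraction, promax (oblique) rotation, retention of six factors.
# File and item column names are hypothetical.
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import calculate_kmo

items = pd.read_csv("survey_items.csv")                    # hypothetical file

_, kmo_overall = calculate_kmo(items)
print(f"Kaiser-Meyer-Olkin MSA: {kmo_overall:.3f}")        # the paper reports .882

fa = FactorAnalyzer(n_factors=6, rotation="promax", method="principal")
fa.fit(items)

loadings = pd.DataFrame(fa.loadings_, index=items.columns)
print(loadings[loadings.abs() >= 0.45].round(2))           # show only salient loadings (>= .45)
print("Cumulative variance explained:", fa.get_factor_variance()[2][-1])  # ~60% in the paper
```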
In interpreting the rotated factor pattern, according to Hair et al. (1998), our sample size
requires a factor loading of .45 at the minimum to be significant. A factor loading of .50 or
greater would be considered more ideal. Using this criterion, table 1 presents the questionnaire
items and their corresponding factor loadings that are considered significant (.45). Each factor is
labeled in the table for easy interpretation.
(Table 1 - EFA factor loading)
Several items turned out problematic (SAT4, VAL1, VAL2, VAL5, EOU4, PC4, PC6,
INT1, INT2, INT3, and eSDS4). Given the exploratory nature of our study, we decided to first
retain all the items with a .45 or greater factor loading that loaded on their intended constructs.
Problem items were dropped from further analysis. The fact that SAT4 did not load significantly
on customer satisfaction was a surprise, given that the item had been used in prior research. In
addition, although PC4 is typically used as an indicator for perceived control in the literature, the
fact that it loaded on customer satisfaction is consistent with the study by McKinney et al. (2002)
who used the item as an indicator of overall customer satisfaction towards a website and reported
a reliability index of .98 for the scale.
Items VAL1, VAL2, and VAL5 were created in an attempt to capture the "give" and
"receive" trade-off in terms of effort, rather than price. However, these items seemed ambiguous
upon closer examination (for example, the trade-off in VAL2, "the service provided through the
website was very efficient," was not obvious), justifying their exclusion from further data
analysis. INT1, INT2, INT3 also seemed to be more about preparation than interactivity.
Items VAL6, EOU6 and INT6 all had marginal loadings. However, reliability analyses
indicated that including VAL6 in the value scale would improve the alpha from .482 to .629.
EOU6 and INT6, on the other hand, would reduce the alpha level for their corresponding scale.
Therefore, VAL6 was retained whereas EOU6 and INT6 were dropped.
Next, internal consistency reliability analyses were conducted with the remaining items.
Table 2 reports the results and the summary statistics of all scales.


(Table 2 - Construct reliability)


Although Nunnally (1978) recommends .70 as the threshold for an acceptable alpha,
Bagozzi and Yi (1988) and Hair et al. (1998) have both endorsed reliabilities as low as 0.60 for
exploratory research when structural equation modeling is used, as is the case with our research.
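The reliability analyses referred to above rest on Cronbach's alpha. A small self-contained sketch of the computation is shown below; the item groupings in the usage comment are hypothetical, since the exact retained item sets are not listed here.

```python
# Sketch of the internal-consistency check (Cronbach's alpha) used to screen items
# such as VAL6, EOU6, and INT6. Works on any DataFrame of scale items.
import pandas as pd

def cronbach_alpha(scale_items: pd.DataFrame) -> float:
    """Cronbach's alpha: (k / (k - 1)) * (1 - sum of item variances / variance of total score)."""
    k = scale_items.shape[1]
    item_variances = scale_items.var(axis=0, ddof=1)
    total_variance = scale_items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical usage: compare the value scale with and without VAL6
# (the paper reports an improvement from .482 to .629 when VAL6 is retained).
# items = pd.read_csv("survey_items.csv")
# print(cronbach_alpha(items[["VAL3", "VAL4", "VAL6"]]))   # item names are hypothetical
# print(cronbach_alpha(items[["VAL3", "VAL4"]]))
```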
4.2. Testing the Measurement Model


The model derived from EFA was then submitted to confirmatory factor analysis to

determine model fit. The measurement model was estimated using the maximum likelihood
method, and the chi-square value for the model was statistically significant (chi-square(215,
n=149) = 396.048, p<.001). Technically, this chi-square statistic may be used to test the null
hypothesis that the model fits the data. In practice, however, the statistic is very sensitive to
sample size and departures from multivariate normality, and will often result in the rejection of a
well-fitting model. For this reason, it has become common practice to seek a model with a
relatively small chi-square value, rather than necessarily seek a model with a non-significant chi-square. Many use the informal criterion that the model may be acceptable if the chi-square/df
ratio is less than 2 (Hatcher 1994), a criterion our model met (chi-square/df=1.842).
Another result, however, indicated that there was in fact a problem with the model's fit:
the factor loading for item PC3 failed to reach the minimum of 0.30 (Hatcher 1994). Therefore, this
item was dropped and the resulting model was tested again.
The overall model fit statistics for the revised model reflected reasonable fit. The chi-square/df
ratio is less than 2 (chi-square=372.377, df=194). CFI, GFI, AGFI and RMSEA are all
within reasonable range, although less than ideal (CFI=.879, GFI=.815, AGFI=.760,
RMSEA=.078). Therefore, the revised model was tentatively accepted as the study's final
measurement model, and a number of tests were conducted to assess its reliability and validity.
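The fit statistics quoted above (normed chi-square, RMSEA, CFI) can be computed directly from the model and baseline chi-square values. The sketch below uses the revised model's reported values; the baseline (independence) model chi-square and its degrees of freedom are purely illustrative inputs, since they are not reported in the text.

```python
# Sketch of the fit indices discussed above, computed from chi-square statistics.
# The original values came from Amos 5; the baseline-model inputs here are illustrative.
import math

def fit_indices(chi2: float, df: int, chi2_null: float, df_null: int, n: int) -> dict:
    """Normed chi-square, RMSEA, and CFI from model and baseline chi-square values."""
    normed = chi2 / df
    rmsea = math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))
    cfi = 1.0 - max(chi2 - df, 0.0) / max(chi2_null - df_null, chi2 - df, 0.0)
    return {"chi2/df": normed, "RMSEA": rmsea, "CFI": cfi}

# Revised measurement model reported in the text (n = 149, chi-square = 372.377, df = 194);
# chi2_null = 1700 and df_null = 231 are illustrative placeholders.
print(fit_indices(chi2=372.377, df=194, chi2_null=1700.0, df_null=231, n=149))
```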


Construct reliability for five of the six constructs remained the same as listed in table 2.
The only difference is the construct perceived control, which now has three indicators instead of
four (PC3 was deleted from the revised model). The reliability for the scale increased from .686
to .770, suggesting that dropping PC3 was the right choice. To summarize, all six scales now
demonstrated acceptable levels of reliability for exploratory research.
Table 3 reports the standardized factor loadings for the revised model. All factor
loadings were significant (p<.001). This finding provides evidence supporting convergent
validity of the indicators (Anderson and Gerbing 1988).


(Table 3 - CFA factor loading)
Discriminant validity was assessed using two different criteria: the variance extracted test
(Fornell and Larcker 1981) and the chi-square difference test (Anderson and Gerbing 1988). The
variance extracted estimate is a measure of the amount of variance captured by a construct,
relative to the variance due to random measurement error. A construct demonstrates
discriminant validity if its variance extracted estimate is .50 or greater (Fornell and Larcker
1981). Two of our six constructs failed this test: the variance extracted estimate was only .374
for service value and .463 for perceived ease of use. However, Hatcher (1994) cautions that this
test is quite conservative. Therefore, the chi-square difference test was used to further assess the
discriminant validity of these two constructs. We ran multiple models, constraining the
correlation between the construct in question and one other construct to 1 in each model, and
compared each of these models to the original model. The chi-square difference was significant
at the .05 level for all pairs of models, providing evidence of discriminant validity.
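Both discriminant-validity checks lend themselves to simple hand computation. The sketch below illustrates the variance extracted estimate and the chi-square difference test with illustrative numbers rather than the study's actual estimates.

```python
# Sketch of the two discriminant-validity checks described above: the variance
# extracted (AVE) estimate from standardized loadings, and the chi-square
# difference test between a constrained and an unconstrained model.
import numpy as np
from scipy.stats import chi2 as chi2_dist

def average_variance_extracted(std_loadings: list[float]) -> float:
    """Fornell-Larcker variance extracted: mean squared standardized loading of a construct."""
    loadings = np.asarray(std_loadings)
    return float((loadings ** 2).mean())

# Hypothetical standardized loadings for a three-indicator construct.
print(average_variance_extracted([0.72, 0.65, 0.58]))      # >= .50 passes the Fornell-Larcker test

def chi_square_difference_p(chi2_constrained: float, df_constrained: int,
                            chi2_free: float, df_free: int) -> float:
    """p-value for the chi-square difference between nested models (correlation fixed at 1 vs. free)."""
    delta_chi2 = chi2_constrained - chi2_free
    delta_df = df_constrained - df_free
    return float(chi2_dist.sf(delta_chi2, delta_df))

# Illustrative values only; p < .05 indicates the two constructs are empirically distinct.
print(chi_square_difference_p(chi2_constrained=380.1, df_constrained=195,
                              chi2_free=372.377, df_free=194))
```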


Combined, these findings generally support the reliability and validity of the constructs
and their indicators. The revised model was therefore retained as the study's final measurement
model.
4.3. The Structural Model


Jöreskog and Sörbom (1996) state that the minimum sample size is a function of the number
of variables, k: k(k-1)/2. Our sample size (n = 149) would permit 17 variables (17 x 16 / 2 = 136 <= 149,
whereas 18 x 17 / 2 = 153 > 149), fewer than the number required to avoid identification issues.
Therefore, the summated score for each index was used
in the SEM analysis. Baumgartner and Homburg (1996) cite numerous articles employing this
approach as evidence of its acceptance in a variety of academic disciplines. Furthermore,
Netemeyer et al. (1990) report that this approach provides the same results as models with
multiple indicators.
Although the data set under consideration cannot be assumed to have a multivariate
normal distribution, Cortina et al. (2001) state that there is considerable evidence that the
Maximum Likelihood Estimator (MLE) is robust with respect to many types of violations of the
multivariate normality assumption. Therefore, since there is no indication of extreme departure
from normality, the MLE was used in the SEM analysis without transforming any of the data
distributions. It is generally accepted that the minimum sample size to ensure appropriate use of
MLE is 100 to 150 (Ding et al. 1995), a requirement that our data set met.
The structural model was developed with the error variance of each measurement
variable set equal to the product of its variance and one minus its reliability coefficient. The path
from the latent variable to the composite indicator is fixed at the square root of the reliability
coefficient (Jöreskog and Sörbom 1996, Baumgartner and Homburg 1996).
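A small sketch of this single-indicator setup is shown below; it follows the fixing rule stated above, with the scale variance chosen purely for illustration.

```python
# Sketch of the single-indicator (summated scale) setup described above: each
# composite's error variance is fixed at variance * (1 - reliability) and its
# loading at the square root of the reliability. Numbers are illustrative.
import math

def single_indicator_parameters(scale_variance: float, reliability: float) -> tuple[float, float]:
    """Return (fixed loading, fixed error variance) for a summated-scale indicator."""
    loading = math.sqrt(reliability)
    error_variance = scale_variance * (1.0 - reliability)
    return loading, error_variance

# Example with a hypothetical composite variance of 1.20 and the reported alpha of .770
# for the revised perceived control scale.
loading, error_variance = single_indicator_parameters(scale_variance=1.20, reliability=0.770)
print(f"loading = {loading:.3f}, error variance = {error_variance:.3f}")
```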


Modeling of the interaction effects followed the procedures developed by Mathieu et al.
(1992). Selection of the Mathieu et al. method was based upon Cortina et al.'s (2001) comment
that this method is especially useful when testing complicated theoretical models that include
both mediated and moderated relationships, as is the case with our proposed model.
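As a rough illustration of a single product-indicator approach to a latent interaction, the sketch below forms a mean-centered product term and approximates its reliability from the component reliabilities and their correlation. This is an assumption-laden stand-in for, not a reproduction of, the Mathieu et al. (1992) procedure, and the reliability values in the usage comment are hypothetical.

```python
# Sketch of a single product-indicator setup for a latent interaction, in the spirit
# of the product-term procedures referenced above. The product-term reliability is
# approximated from the component reliabilities and their correlation; treat this as
# an illustration rather than the exact specification used in the study.
import pandas as pd

def product_indicator(df: pd.DataFrame, x: str, z: str,
                      rel_x: float, rel_z: float) -> tuple[pd.Series, float]:
    """Return the mean-centered product term and its approximate reliability."""
    x_c = df[x] - df[x].mean()
    z_c = df[z] - df[z].mean()
    r_xz = x_c.corr(z_c)
    rel_product = (rel_x * rel_z + r_xz ** 2) / (1.0 + r_xz ** 2)  # approximation, see lead-in
    return x_c * z_c, rel_product

# Hypothetical usage with summated eSDS-process and interactivity scores:
# responses["esds_x_int"], rel = product_indicator(responses, "esds", "interactivity", 0.78, 0.62)
# The product indicator's loading and error variance are then fixed from `rel`
# exactly as for the other composites (sqrt(rel) and variance * (1 - rel)).
```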
All interaction effects involving the interactivity construct were found to be insignificant.
Thus support for our hypotheses H5a through H5f is not provided by the data. Another causal
path also proved to be non-significant: from ease of use to customer satisfaction (H4b). In
addition, none of the control variables (i.e., gender, prior Internet experience, and prior online
shopping experience) was significantly related to perceived ease of use and perceived control.
Goodness of fit indices for the model appear in table 4, in the column headed "Mt: Theoretical
Model." Values on the CFI, NFI, GFI, and AGFI were acceptable. However, a review of the
models residuals revealed that one of the standardized residuals was relatively large (in excess
of 2.0). These results showed that the initial theoretical model was problematic.
(Table 4 - Fit Indices for the Structural Model)
However, the model modification indices produced by Amos indicated that additional
relationships may exist, namely that service value and perceived control might both be
influenced by perceived ease of use. Furthermore, interactivity, instead of being a moderating
variable, might have a direct impact on customer satisfaction. Adding a path from perceived
ease of use to service value seems consistent with the definition of service value in terms of the
"give" versus "receive" tradeoff: when a website is difficult to use, customers might have to
give more effort, decreasing the perceived value the customer receives. In addition, when a
customer thinks a website is easy to use, conceptually, it makes sense that the customer might
also think he has more control. Interactivity, on the other hand, according to a recent study by
Lii et al. (2004), is a direct driver of repeat visits. Because the addition of these suggested
relationships could be justified on theoretical grounds, corresponding paths were added to the
theoretical model Mt. The resulting model, revised model Mr, was then estimated.
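For readers who prefer an open-source tool, the revised structural model can be approximated with composite scores along the lines sketched below. The semopy package, the column names, and the exact set of paths are assumptions on our part (the study estimated the model in Amos 5, and control variables are omitted here).

```python
# Sketch of the revised structural model using observed composite scores, written
# for the semopy package. Paths follow an approximate reading of the revised model:
# eSDS -> ease of use, control, value, interactivity, and satisfaction; ease of use
# -> value and control; interactivity -> satisfaction. Column names are hypothetical.
import pandas as pd
import semopy

model_desc = """
ease_of_use ~ esds
control ~ esds + ease_of_use
value ~ esds + ease_of_use
interactivity ~ esds
satisfaction ~ esds + value + control + interactivity
"""

responses = pd.read_csv("composite_scores.csv")    # hypothetical file of summated scales
model = semopy.Model(model_desc)
model.fit(responses)
print(model.inspect())                              # path estimates and p-values
print(semopy.calc_stats(model))                     # chi-square, CFI, RMSEA, etc.
```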
Fit indices for the revised model are presented in table 4. It can be seen that the fit
indices (i.e., CFI, NFI, GFI, and AGFI) were not only above .9 but also higher than those
displayed by the initial theoretical model. In addition, the revised model produced a non-significant p value, thus justifying the addition of the new paths. The R2 value showed that
80.2% of the variance in customer satisfaction was accounted for by the relationships in the
model.
The revised model is presented in Figure 2 along with the path coefficients.
Figure 2. Revised Model
Note: ** p<0.05. * p<0.1. All other path coefficients are significant at p<0.001.

4.4. Discussion
The revised model is the best model that fits the data collected with the survey instrument.

This new model suggests that interactivity, instead of being a moderator, actually acts as a
mediator between the eSDS process and customer satisfaction. However, there is an unusual
element in this relationship. Specifically, as the eSDS process improves, allowing the customer
to make more decisions and choices in the service process, the interactivity allowed by the
website increases. Interactivity, surprisingly, doesn't necessarily lead to higher customer
satisfaction. On the contrary, if a customer feels an increasing need to interact with the service
provider, his/her satisfaction with the website decreases. This finding is consistent with that by
Zeithaml et al. (2002) in their focus group discussions regarding important service requirements
in the e-commerce context. Only when customers need special assistance, e.g., there is a process
error, do they feel the need to initiate an interaction with a customer service representative.
Many focus group participants were otherwise only interested in having efficient transactions.
The data also reveals that perceived ease of use has a mediated impact, rather than a
direct impact, on customer satisfaction through both service value and perceived control. That is,
as the user's perception of a website's ease of use decreases, the service value users feel they
receive from using the website decreases, and so does their perceived ability to control the
process. But perceived ease of use has no direct impact on customer satisfaction. We
think the probable reason for this is related to the demographics of our study sample. The
sample population reported homogeneous and high values for their self-assessment of their
technical competence.
It is worth noting that our result does not necessarily contradict the TAM model, which
theorizes that perceived ease of use directly influences a user's attitude toward a technology,
which would be customer satisfaction in this study. The TAM model suggests that perceived
ease of use influences perceived usefulness because, other things being equal, the easier the
technology is to use, the more useful it can be. Perceived usefulness is conceptually closely
related to the service value construct in our model. Our result that perceived ease of use
influences customer satisfaction through service value is in fact consistent with TAM.
Perceived ease of use is a construct related to customer-specific characteristics. For
example, when a customer is technologically sophisticated and has extensive experience
shopping online, she may feel that she has a great degree of control over how she conducts the
transaction. On the other hand, if a customer is not experienced, she may feel at a loss and not
able to engage in the process. Intuitively, this makes sense. The question that must be answered
is: How sound is the theoretical basis of the relationship? Information systems literature has
examined the relationship between computer users' self-efficacy (the judgment of one's ability
to perform a specific task using a computer) and their perceived ease of use of computer
systems, and found significant correlations (e.g., Hong et al. 2002). Computer self-efficacy, on
the other hand, is considered the conceptualization of perceived control (Venkatesh 2000). The
question of whether there is a causal relationship between perceived control and ease of use, and
if so, in which direction, remains unanswered. Further research is certainly needed to examine
the precise relationship between the two constructs.
Since the six websites we used belonged to three types of service processes with regard to
their corresponding technological capabilities, namely service mart, mass service customization,
and joint alliance service customization, a post-hoc analysis was done to determine whether there
were any differences in customer satisfaction by type of website. Our analysis did not yield any
significant result. Given our finding that the eSDS process has a positive influence on customer
satisfaction, conceptually, one could speculate that the more comprehensive technological
capabilities embedded in a mass service customization site might lead to happier customers than
the basic website capabilities from a service mart. Although this conjecture is not supported by
our data, we believe this issue is an interesting one and should be explored in future research.

5. Implications and Conclusion


In this paper we have argued that service is critical to the success of electronic commerce.

Building on theories from service management, we have examined what e-service technological
capabilities should be embedded in a firm's website and what technology features should take
priority. We contribute to the management of technology domain by proposing a theoretical
model that helps firms understand the impact of e-service technological capabilities on online
customer satisfaction. In addition, our model also helps firms to justify their investment in e-service technology.
5.1. Research Implications
It is important to note that online customer satisfaction has been evaluated along other

dimensions. For example, SERVQUAL is often referred to as an important factor leading to
customer satisfaction (Devaraj et al. 2002). Information quality and availability is another factor
examined by the literature (e.g., McKinney et al. 2002). A key contribution of our research is that we
demonstrate that the technological capabilities embedded in the e-service process through which
services are delivered and information is presented to the customer are critical to customer
satisfaction. For example, does the website have the technological capability that allows the
customer the flexibility of customizing the information content? If a website offers an abundant
amount of information but doesn't allow much user freedom in terms of choosing where to go at
the website and what information to see, the customer is unlikely to be satisfied. The
technological capabilities of the eSDS processes, therefore, really are the foundation of other
service measurement parameters.
Although the service management literature has established that the SDS has a direct
impact on service value, traditional service delivery systems are made of very different
components from an electronic SDS. Traditionally it is the service employees that determine the
service value the customers perceive they receive. By examining the linkage between the
electronic SDS and service value, we demonstrate that even without face-to-face interactions
with service employees, the eSDS plays a vital role in customer satisfaction. Furthermore, we
extended the current literature that looks at online customer satisfaction: extant research mainly
focuses on the relationship between service quality and customer satisfaction or system quality
and customer satisfaction. We have demonstrated that the technological capabilities embedded
in e-service processes are really the key factor determining service quality and ultimately online
customer satisfaction.
The capabilities embedded in an e-service technology have different dimensions. For
example, Levitt (1976) draws upon manufacturing sources in using the words "standardized" and
"customized" to define the poles of a service process continuum, whereas Shostack (1987) uses
"complexity" and "divergence." In this research, we did not drill down inside the eSDS to these
different capability dimensions to analyze which ones are the most important in determining
online customer satisfaction. However, conceptually it is possible that some dimensions play a
more significant role than others. Indeed, customization has been identified as an important e-service
technology feature preferred by many online customers (Nunes and Kambil 2001).
Therefore, future theoretical investigations are warranted to understand what dimensions of
service processes are important in delivering quality online services.
From a practical point of view, our research provides investment guidance to firms in
their creation of and upgrades for e-service technologies. An e-service website can offer
different technological capabilities. Many companies, however, are financially constrained in
practice in terms of what e-service technology features to focus on. It is therefore important to
identify those features that are critical to customer satisfaction. In addition to interface design
factors identified by prior research, such as site aesthetics, graphics presentation, and visual effects,
our research results bring to the foreground the importance of procedural and process design
capabilities embedded into an e-service technology site. Companies deploying e-service really
need to understand that their website is not only an interface with their customers, but also an
information system that embeds their business processes. Having smooth and flexible website
processes means seamless system integration. For example, the website needs to be integrated
with the company's inventory system so customers can check the availability of products; with
the order tracking system so customers can check their order status, etc. Therefore, presenting a
pretty face is only a small part of the whole website design effort. How the whole system is
designed, what technological capabilities to offer, and what service processes are enabled
ultimately determines what service value a company delivers to its customers and how satisfied
the customers are.
Although previous research has argued that website interactivity is an important
technology feature for customer satisfaction, our research only provides limited support for this
argument. Offering real-time interactivity between the service provider and the customer can be
expensive, as it involves more human intervention. Based on our results, we believe that at this
point there is not enough justification for companies to spend a significant amount of money on
this aspect of e-service. More research on the role of interactivity in e-service delivery and how
interactivity affects customer satisfaction is clearly needed.
5.2. Limitations and Suggestions for Future Research


There are several ways in which future research could strengthen the results of this study.

First, our survey did not require the subjects to actually carry out the purchase. Therefore the
technology-based service capabilities we examined in this paper do not cover any post-purchase
services such as fulfillment and returns. But practitioners as well as researchers (e.g., Zeithaml
et al. 2002) have voiced that post-purchase service is also critical to customer satisfaction and
may explain why some customers never came back to a certain website. This aspect of e-service
capabilities needs to be captured as well in order to evaluate the overall service value an eSDS
can deliver.
Although prior research indicates that the user's perception of a system's ease of use is a
significant direct driver of satisfaction, our empirical testing failed to confirm this hypothesis.
As pointed out by Zeithaml et al. (2002), customer-specific characteristics such as demographics
and psychographics could have a strong impact on perceived service value. Perceived ease of
use is directly influenced by the customer's technology proficiency. The inability to demonstrate
the significance of this construct, we believe, may be an artifact of the sample population.
Therefore, further research is warranted: the size and heterogeneity of the sample should be
increased by the inclusion of individuals outside of the setting of a higher education institution.
The research design of this study has several limitations. First, the measurement
instrument needs to be fine-tuned. As discussed in section 4, quite a few items had high
cross-loadings in the exploratory factor analysis. Although the constructs, e.g., service value,
perceived ease of use, and perceived control, are theoretically different, they are also correlated.
Developing scale items that clearly distinguish these constructs from one another is an important
task for future research.
As mentioned in section 4, common method variance can be a potential problem for
survey research. Given that our constructs were all measured by the same method from the same
subjects, this study potentially faces the same bias, although various procedural remedies were
employed to reduce the bias. One way of addressing this problem in future studies is to use an
objective measure of the technological capabilities of an e-service website, instead of examining
them from the customer's perspective. A more objective measure not only helps to reduce method
variance but might also yield more insight into how customers view a website's service value and
how that view ultimately translates into satisfaction, providing the critical link between an
organization's e-service process design decisions and customer response.
As electronic commerce continues to grow, e-service is going to play an even bigger role
in customer satisfaction. Managing e-service technology will become more critical for firms
intending to compete online. Firms need to carefully evaluate their technology-based service
offerings and understand how to design web-based technological capabilities to deliver the type
of services customers demand. Moreover, as technology is constantly evolving, so is the
technology-based e-service process and its impact on business strategy. The effective
management of the integration of business and technology is becoming an indispensable part of
many organizations' value creation strategies. This work is only a first step in trying to
understand e-service technology and the impact of the technology on customer satisfaction.
We believe this is a promising research area for researchers in the management of technology
domain.


References:
Anderson, J.C. and D. W. Gerbing. 1988. Structural equation modeling in practice: a review and
recommended two-step approach. Psychological Bulletin, 103, 411-423.
Armstrong, J. S. and T. S. Overton. 1977. Estimating nonresponse bias in mail surveys. Journal
of Marketing Research, 14(3), 396-402.
Bagozzi, R. P., and Y. Yi. 1988. On the evaluation of structural equation models. Journal of the
Academy of Marketing Science, (16), 74-79.
Baumgartner, H. and C. Homburg. 1996. Applications of structural equation modeling in
marketing and consumer research: a review. International Journal of Research in
Marketing, 13(2), 139-161.
Bednarz, A. 2003. Retailers shore up Web sites for holidays. Network World, December
15, 20(50), 10.
Betz, F. 2001. Executive strategy: strategic management and information technology. New
York: John Wiley & Sons, Inc.
Bitner, M.J., B. H. Booms, and L. A. Mohr. 1994. Critical service encounters: the employee's
viewpoint. Journal of Marketing, 58, 95-106.
Bowen, D. E. and R. Johnston. 1999. Internal service recovery: developing a new construct.
International Journal of Service Industry Management, 10(2), 118.
Brady, M. and J. J. Cronin. 2001. Customer orientation: effects of customer service perceptions
and outcome behaviors. Journal of Service Research, 3(3), 241-251.
Byrne, B.M. 1998. Structural Equation Modeling with LISREL, PRELIS and SIMPLIS: Basic
Concepts, Applications, and Programming, Lawrence Erlbaum Associates, Inc., Mahwah,
NJ.


Chen, L., M. Gillenson, and D. Sherrell. 2004. Consumer acceptance of virtual stores: a
theoretical model and critical success factors for virtual stores. Database for Advances in
Information Systems, 35(2), 8-31.
Cortina, J.M., G. Chen, and W.P. Dunlap. 2001. Testing interaction effects in LISREL:
Examination and illustration of available procedures. Organizational Research Methods,
4(4), 324-360.
Davis, F. 1989. Perceived usefulness, perceived ease of use, and user acceptance of information
technology. MIS Quarterly, 13(3), 319-340.
Devaraj, S., M. Fan, and R. Kohli. 2002. Antecedents of B2C channel satisfaction and preference:
validating e-commerce metrics. Information Systems Research, 13(3), 316-333.
Ding, L., W. F. Velicer, and L. L. Harlow. 1995. Effects of estimation methods, number of
indicators per factor and improper solutions on structural equation modeling fit indices.
Structural Equation Modeling, 2, 119-143.
Fornell, C. and D.F. Larcker. 1981. Evaluating structural equation models with unobservable
variables and measurement errors. Journal of Marketing Research, 18, 39-50.
Gerbing, D.W. and J. G. Hamilton. 1996. Viability of exploratory factor analysis as a precursor
to confirmatory factor analysis. Structural Equation Modeling, 3(1), 62-72.
Ha, L. and L. James. 1998. Interactivity reexamined: a baseline analysis of early business web
sites. Journal of Broadcasting & Electronic Media, 42(4).
Hair, Jr., J. F., R. E. Anderson, R. L. Tatham, and W.C. Black. 1998. Multivariate Data Analysis,
5th edition, Prentice Hall, Upper Saddle River, NJ.
Hatcher, L. 1994. A Step-by-Step Approach to Using SAS for Factor Analysis and Structural
Equation Modeling. Cary, NC: SAS Publishing.


Heim, G. and K. Sinha. 2002. Service process configurations in electronic retailing: a taxonomic
analysis of electronic food retailers. Production and Operations Management, 11(1), 54-74.
Heskett, J.L., T.O. Jones, G.W. Loveman, W.E. Sasser, Jr., and L.A. Schlesinger. 1994. Putting
the service-profit chain to work. Harvard Business Review, March-April, 164-174.
Hong, W., J. Thong, W. Wong, and K. Y. Tam. 2002. Determinants of user acceptance of digital
libraries: an empirical examination of individual differences and system characteristics.
Journal of Management Information Systems, 18(3), 97-124.
Hui, M.K. and J.E.G. Bateson. 1991. Perceived control and the effects of crowding and
consumer choice on the service experience. Journal of Consumer Research, 18, 174-184.
ICSA. 2001. 2001 ICSA/E-Satisfy.com Study of Electronic Customer Service.
http://www.icsa.com/resources/pubs/eSatisfy_toc.cfm.
Janda, S., P. J Trocchia, and K. P Gwinner. 2002. Consumer perceptions of Internet retail service
quality. International Journal of Service Industry Management, 13(5), 412-431.
Jöreskog, K. and D. Sörbom. 1996. LISREL 8: User's Reference Guide, Scientific Software
International, Inc., Chicago.
Kaiser, H. F. 1970. A second-generation little jiffy. Psychometrika, 35, 401-415.
Kamakura, W., V. Mittal, F. de Rosa, and J. Mazzon. 2002. Assessing the service-profit chain.
Marketing Science, 21(3), 294-317.
Kerin, R. A., A. Jain, and D. J. Howard. 1992. Store shopping experience and consumer
price-quality-value perceptions. Journal of Retailing, 68, 376-397.
Koufaris, M. 2002. Applying the technology acceptance model and flow theory to online
consumer behavior. Information Systems Research, 13(2), 205-223.


Langeard, E., J. Bateson, C. Lovelock, and P. Eiglier. 1981. Marketing of services: new insights
from consumers and managers. Report No. 81-104. Marketing Science Institute,
Cambridge, MA.
Lee, J. and A. Allaway. 2002. Effects of personal control on adoption of self-service technology
innovations. The Journal of Services Marketing, 16(6), 553-572.
Levitt, T. 1976. The industrialization of service. Harvard Business Review, 54(5), 63-71.
Lii, Y., H. Lim, and L. Tseng. 2004. The effects of web operational factors on marketing
performance. Journal of American Academy of Business, 5(1), 486-494.
Lohse, G. and P. Spiller. 1998. Electronic shopping. Communications of the ACM, 41(7), 81-87.
Mathieu, J.E., S.I. Tannenbaum, and E. Salas. 1992. Influences of individual and situational
characteristics on measures of training effectiveness, Academy of Management Journal,
35, 828-847.
McKinney, V., K. Yoon, and F. Zahedi. 2002. The measurement of Web-customer satisfaction:
An expectation and disconfirmation approach. Information Systems
Research, 13(3), 296-315.
Merrilees, B. 2002. Interactivity design as the key to managing customer relations in
e-commerce. Journal of Relationship Marketing, 1(3-4), 111-126.
Meuter, M., A. Ostrom, R. I. Roundtree, and M. J. Bitner. 2000. Self-service technologies:
understanding customer satisfaction with technology-based service encounters. Journal
of Marketing, 64(3), 50-64.
Mittal, B. and W. M. Lassar. 1996. The role of personalization in service encounters. Journal of
Retailing, 72 (1), 95-109.


Netemeyer, R. G., M. Johnston, and S. Burton. 1990. Analysis of role conflict and role
ambiguity in a structural equations framework. Journal of Applied Psychology, 75(2),
148-157.
Nunes, P. and A. Kambil. 2001. Personalization? No thanks. Harvard Business Review, April,
2-3.
Nunnally, J. C. 1978. Psychometric Theory, New York: McGraw-Hill.
Oliver, R. and J. Swan. 1989. Consumer perceptions of interpersonal equity and satisfaction in
transactions: a field survey approach. Journal of Marketing, 53 (April), 21-35.
Palmer, J. 2002. Web site usability, design, and performance metrics. Information Systems
Research, 13(2), 151-167.
Patterson, P. and R. Spreng. 1997. Modeling the relationship between perceived value,
satisfaction, and repurchase intentions in a business-to-business, service context: an
empirical examination. International Journal of Service Industry Management, 8(5).
Podsakoff, P.M. and D.W. Organ. 1986. Self-reports in organizational research: problems and
prospects. Journal of Management, 12(4), 531-544.
Podsakoff, P.M., S. MacKenzie, J. Y. Lee, and N. Podsakoff. 2003. Common method biases in
behavioral research: a critical review of the literature and recommended remedies.
Journal of Applied Psychology, 88(5), 879-903.
Roth, A.V. and W.E. Jackson III. 1995. Strategic determinants of service quality and performance:
Evidence from the banking industry. Management Science, 41(11), 1720-1733.
Rust, R. and P.K. Kannan. 2003. E-service: a new paradigm for business in the electronic
environment. Communications of the ACM, 46(6).


Seyal, A., M. Rahim, and M. Rahman. 2002. A study of computer attitudes of non-computing
students of technical colleges in Brunei Darussalam. Journal of End User Computing,
14(2), 40-47.
Shapiro, C. and H. R. Varian. 1999. Information Rules: A Strategic Guide to the Network
Economy. Boston: Harvard Business School Press.
Shostack, G.L. 1987. Service positioning through structural change. Journal of Marketing, 51
(1), 34-43.
Sweeney, J., G. Soutar, and L. Johnson. 1999. The role of perceived risk in the quality-value
relationship: a study in a retail environment. Journal of Retailing, 75(1), 77-105.
Tsikriktsis, N., G. Lanzolla, and M. Frohlich. 2004. Adoption of e-processes by service firms:
an empirical study of antecedents. Production and Operations Management, 13(3), 216-229.
Venkatesh, V. 2000. Determinants of perceived ease of use: integrating control, intrinsic
motivation, and emotion into the technology acceptance model. Information Systems
Research, 11(4), 342-365.
Venkatesh, V. and M. G. Morris. 2000. Why don't men ever stop to ask for directions? Gender,
social influence, and their role in technology acceptance and usage behavior. MIS
Quarterly, 24(1), 115-139.
Zeithaml, V.A. 1988. Consumer perceptions of price, quality and value: a means-end model and
synthesis of evidence. Journal of Marketing, 52(3), 2-22.
Zeithaml, V.A., A. Parasuraman, and A. Malhotra. 2002. Service quality delivery through web
sites: a critical review of extant knowledge. Journal of the Academy of Marketing
Science, 30(4), 362-375.


Appendix: Measurement Scales

Constructs and Scale Items (Source)

Perceived eSDS Process
eSDS1. The website was difficult to navigate through. (New item)
eSDS2. The number of choices at each step of the process doesn't need to be changed. (New item)
eSDS3. The website ordering process wasn't complicated. (New item)
eSDS4. I did not experience any errors (e.g., web pages that did not load the first time). (Adapted from Roth and Jackson 1995)
eSDS5. I had trouble finding what I was looking for on the website. (New item)
eSDS6. The entire process of searching and buying took a reasonable amount of time. (New item)

Service Value
VAL1. Using the website was a waste of my time. (New item)
VAL2. The service provided through the website was very efficient. (New item)
VAL3. The website required a lot of effort to use. (New item)
VAL4. I was treated fairly. (New item)
VAL5. Very little thought was required to use this website. (New item)
VAL6. The website doesn't provide value. (Brady and Cronin 2001)

Perceived Ease of Use
EOU1. The user of the website has to be skillful to use the website. (Davis 1989)
EOU2. The user does not have to be knowledgeable in order to use the site. (New item)
EOU3. Using this website was easy. (Davis 1989)
EOU4. The user needs to be a frequent web user. (New item)
EOU5. My interaction with the website was clear and understandable. (Davis 1989)
EOU6. A user does not need specific knowledge about the company in order to use the website. (New item)

Perceived Control
PC1. The website limited what I could do. (Adapted from Seyal et al. 2002)
PC2. I felt in control at each step and could determine the outcome of the online process. (Koufaris 2002)
PC3. To use the website, I had to input unnecessary information, which was confusing. (Koufaris 2002)
PC4. I felt frustrated at the process of searching and buying. (Koufaris 2002)
PC5. At the website, I could do what I wanted to when I wanted to. (Adapted from Seyal et al. 2002)
PC6. The website wasn't complicated to use. (New item)

Interactivity
INT1. Sufficient guidelines were provided. (New item)
INT2. Careful instructions were provided. (New item)
INT3. I always knew what information I needed to provide. (New item)
INT4. The website allows good two-way communication. (Merrilees 2002)
INT5. Interaction with customer service rep through email or phone is necessary so my question can be answered quickly. (Merrilees 2002)
INT6. Interaction with other customers through chat rooms is beneficial. (New item)

Customer Satisfaction
SAT1. Using the website pleased me. (Oliver and Swan 1989)
SAT2. I was content with the procedures for using the website. (Oliver and Swan 1989)
SAT3. I was very unhappy with the online experience. (Oliver and Swan 1989)
SAT4. The website did an excellent job for me. (Oliver and Swan 1989)
SAT5. It is a poor choice to use this website. (Oliver and Swan 1989)
SAT6. I would never use this website again. (McKinney et al. 2002)

Table 1. Questionnaire Items and Corresponding Factor Loadings
from the Rotated Factor Pattern Matrix

Factor 1: Customer Satisfaction
  SAT1    .714
  SAT2    .563
  SAT3    .926
  SAT4    --
  SAT5    .548
  SAT6    .750
Factor 3: Service Value
  VAL1    .886
  VAL2    .549
  VAL3    .681
  VAL4    .590
  VAL5    --
  VAL6    .469
Factor 4: Ease of Use
  EOU1    .713
  EOU2    .829
  EOU3    .721
  EOU4    --
  EOU5    .586
  EOU6    .474
Factor 5: Perceived Control
  PC1     .546
  PC2     .721
  PC3     .603
  PC4     .650
  PC5     .820
  PC6     --
Factor 6: Interactivity
  INT1    --
  INT2    --
  INT3    .508
  INT4    .711
  INT5    .904
  INT6    .450
Factor 2: Perceived eSDS Process
  eSDS1   .785
  eSDS2   .679
  eSDS3   .655
  eSDS4   .729
  eSDS5   .500
  eSDS6   .600

Note: N=149. "--" indicates no loading reported for that item.

Table 2. Summary Statistics and Cronbach's Alpha for All Scales

Construct (scale items)                                       Mean   S.D.   Cronbach's Alpha
Customer Satisfaction (SAT1, SAT2, SAT3, SAT5, SAT6)          4.89   1.20   0.869
Perceived Control (PC1, PC2, PC3, PC5)                        4.66   1.07   0.686 (0.770 without PC3)
Ease of Use (EOU1, EOU2, EOU3, EOU5)                          6.47   1.49   0.766
Service Value (VAL3, VAL4, VAL6)                              5.39   1.10   0.629
Interactivity (INT4, INT5)                                    6.89   2.07   0.739
Perceived eSDS Process (eSDS1, eSDS2, eSDS3, eSDS5, eSDS6)    4.54   1.35   0.824

Table 3. Standardized Factor Loadings for the Measurement Model

Item                          Factor Loading for Revised Model
F1: Customer Satisfaction
  SAT1                        .747
  SAT2                        .729
  SAT3                        .640
  SAT5                        .854
  SAT6                        .803
F2: Perceived eSDS Process
  ESDS1                       .578
  ESDS2                       .521
  ESDS3                       .663
  ESDS5                       .838
  ESDS6                       .862
F3: Service Value
  VAL3                        .462
  VAL4                        .550
  VAL6                        .775
F4: Ease of Use
  EOU1                        .735
  EOU2                        .533
  EOU3                        .831
  EOU5                        .580
F5: Perceived Control
  PC1                         .822
  PC2                         .780
  PC5                         .589
F6: Interactivity
  INT4                        .899
  INT5                        .651

Note. All loadings are significant at the .001 level.


Table 4. Fit Indices for the Structural Model

Criteria    Guidelines (Byrne 1998)   Mt: Theoretical Model   Mr: Revised Model
χ2 (df)     Small                     20.034 (6)              7.516 (5)
p-value     Large                     0.003                   0.185
CFI         > 0.90                    0.947                   0.993
RMSEA       < 0.08                    0.126                   0.058
NFI         > 0.90                    0.929                   0.979
GFI         > 0.90                    0.954                   0.984
AGFI        > 0.80                    0.840                   0.932
