
International Journal of Information Management 32 (2012) 560–573

Contents lists available at SciVerse ScienceDirect

International Journal of Information Management


journal homepage: www.elsevier.com/locate/ijinfomgt

Examining the effect of user satisfaction on system usage and individual performance with business intelligence systems: An empirical study of Taiwan's electronics industry
Chung-Kuang Hou
Chia-Nan University of Pharmacy and Science, Department of Information Management, 60 Erh-Jen RD., Sec. 1, 717 Jen-Te, Tainan 711, Taiwan, ROC

a r t i c l e   i n f o

Article history:
Available online 4 April 2012
Keywords:
End-user computing satisfaction
Business intelligence
User satisfaction
System usage
Individual performance

a b s t r a c t
The advent of new information technology has radically changed the end-user computing environment over the past decade. To enhance their management decision-making capability, many organizations have made significant investments in business intelligence (BI) systems. The realization of business benefits from BI investments depends on supporting effective use of BI systems and satisfying their end-user requirements. Even though a lot of attention has been paid to the decision-making benefits of BI systems in practice, there is still a limited amount of empirical research that explores the nature of end-user satisfaction with BI systems. End-user satisfaction and system usage have been recognized by many researchers as critical determinants of the success of information systems (IS). As an increasing number of companies have adopted BI systems, there is a need to understand their impact on an individual end-user's performance. In recent years, researchers have considered assessing individual performance effects from IS use as a key area of concern. Therefore, this study aims to empirically test a framework identifying the relationships between end-user computing satisfaction (EUCS), system usage, and individual performance. Data gathered from 330 end users of BI systems in the Taiwanese electronics industry were used to test the relationships proposed in the framework using the structural equation modeling approach. The results provide strong support for our model. Our results indicate that higher levels of EUCS can lead to increased BI system usage and improved individual performance, and that higher levels of BI system usage will lead to higher levels of individual performance. In addition, this study's findings, consistent with DeLone and McLean's IS success model, confirm that there exists a significant positive relationship between EUCS and system usage. Theoretical and practical implications of the findings are discussed.
© 2012 Elsevier Ltd. All rights reserved.

1. Introduction
Today, many organizations continue to increase their investment in implementing various types of information systems (IS),
such as enterprise resource planning (ERP) and customer relationship management (CRM), primarily because of the belief that these
investments will lead to increased productivity for employees (Jain
& Kanungo, 2005). Evaluating individual employee performance
from IS use has been an ongoing concern in IS research (Goodhue
& Thompson, 1995). However, previous studies that examined the
relationship between IS usage and individual performance effects
have reported contradictory results that range from positive, to non-significant, to even a negative relationship. For instance, Goodhue and Thompson (1995) explored the role of task-technology fit on individual performance effects and indicated a positive relationship between IS use and individual performance. Conversely, Pentland (1989) found a negative relationship between IS use and individual performance. Lucas and Spitler (1999) found that IS use has no impact on individual performance.

Correspondence address: Department of Information Management, Chia-Nan University of Pharmacy and Science, 60 Erh-Jen RD., Sec. 1, 717 Jen-Te, Tainan, Taiwan, ROC. Tel.: +886 6 2664911x5311; fax: +886 6 3660607. E-mail addresses: ckhou1@yahoo.com, ckhou1225@mail.chna.edu.tw
0268-4012/$ – see front matter © 2012 Elsevier Ltd. All rights reserved. doi:10.1016/j.ijinfomgt.2012.03.001
Many researchers have recognized user satisfaction as a critical determinant of the success of IS (Bailey & Pearson, 1983;
DeLone & McLean, 1992; Doll & Torkzadeh, 1988; Igbaria & Tan,
1997). As data computing in organizations has transformed from transactional data processing into end-user computing (EUC), Doll and Torkzadeh (1988) developed a 12-item, five-factor instrument for measuring end-user computing satisfaction (EUCS) in the EUC environment. Even though the EUCS instrument has already been widely applied and validated for various IS applications (e.g., decision support systems (McHaney, Hightower, & White, 1999; Wang, Xi, & Huang, 2007), ERP systems (Somers, Nelson, & Karimi, 2003), and online banking systems (Pikkarainen, Pikkarainen, Karjaluoto, & Pahnila, 2006)), it has not been validated with users of business intelligence (BI) systems. BI systems were


designed to provide decision-makers with actionable information
delivered at the right time, at the right place, and in the correct form
to make the right decisions (Negash & Gray, 2004). Given these
goals, attributes measured by EUCS such as timeliness, accuracy,
content, etc., are relevant to an evaluation of BI systems. Since an
increasing number of companies have adopted BI systems, there
is a need to understand the impact of EUCS on individual job performance. DeLone and McLean (2003) propose that higher levels
of individual satisfaction with using an IS will lead to higher levels
of intention to use, which will subsequently affect the use of the
system. Most studies investigating system usage at the individual
level terminate at the user acceptance of the computer technology rather than at the performance outcome (Dasgupta, Granger, &
McGarry, 2002). The main reason could be attributed to the conventional wisdom that more use leads to better performance. However,
empirical studies that examined the relationship between IS usage
and individual performance effects have reported contradictory
results ranging from positive, to non-significant, to even a negative
relationship. Therefore, the purpose of this study is to investigate whether it is appropriate to adopt the EUCS instrument to measure user satisfaction with BI systems. Furthermore, this study also examines the following research question: How does EUCS influence system usage and individual job performance? In this paper, we present a model that identifies the relationships between EUCS, system usage, and individual performance. Drawing on Igbaria and Tan's (1997) nomological net model, we propose that EUCS has a positive impact on individual performance both directly and indirectly through system use. Operational measures for the constructs are developed and tested empirically, using survey data collected from 330 respondents in the Taiwanese electronics industry. Structural equation modeling is used to test the hypothesized relationships. The remainder of this paper is organized as follows. In Section 2, we review the related literature on BI systems, EUCS, and performance measures to provide the necessary background for the study. Section 3 presents the research framework and develops the hypothesized relationships, while Section 4 describes the research methodology. Section 5 presents the data analysis and results, which are discussed in Section 6. Section 7 presents implications for practice and research, and the final section describes the limitations of the study.

2. Literature review
2.1. Business intelligence (BI) system
Today, many organizations have already implemented ERP systems, considered to be one of the most significant and necessary business software investments for firms. ERP systems offer organizations the advantage of providing a single, integrated software
system that links their core business activities such as operations,
manufacturing, sales, accounting, human resources, and inventory
control (Lee, 2000; Newell, Huang, Galliers, & Pan, 2003; Parr &
Shanks, 2000). As more companies implement ERP systems, they
have accumulated massive amounts of data in their databases.
Although ERP systems are good at capturing and storing data, they
offer very limited planning and decision-making support capabilities (Chen, 2001). It is widely accepted that ERP should provide
better analytical and reporting functions to aid decision-makers
(Chou, Tripuramallu, & Chou, 2005). According to Aberdeen's survey report, business intelligence (BI) applications have the highest
percentage of planned implementations by companies using ERP
systems (AberdeenGroup, 2006).
As Mikroyannidis and Theodoulidis (2010) explain, a BI system is "a collection of techniques and tools, aimed at providing businesses with the necessary support for decision making" (p. 559). Moss and Atre (2003) also define BI as "a collection of integrated operational as well as decision support applications and databases that provide the business community with easy access to business data" (p. 4). As such, BI systems can be regarded as the next generation of decision support systems (Arnott & Pervan, 2005).
generation of decision support systems (Arnott & Pervan, 2005).
Therefore, BI systems can provide real-time information, create
rich and precisely targeted analytics, monitor and manage business
processes via dashboards that display key performance indicators,
and display current or historical data relative to organizational or
individual targets on scorecards.
In recent years, several major ERP software vendors such as
SAP and Oracle have started to offer extended products, such
as BI applications, because they have realized the shortcomings
of their systems in providing decision-making support. According to results from the 2009 IT spending survey from Gartner, BI continues to be the top spending priority for chief information officers (CIOs) in order to raise enterprise visibility and transparency, particularly in sales and operational performance (Gartner, 2008). Furthermore, more than half of the respondents in another survey by InformationAge (2006) stated that improving decision-making and better corporate performance management are the two main drivers of BI investment. Companies that adopt BI systems can empower their employees' decision-making capabilities in a faster and more reliable way. Therefore, BI can deliver better
business information by offering a powerful grip on organizational
data. Since a BI system includes technology for reporting, analysis, and sharing information, it can be integrated into ERP systems
to maximize the return-on-investment (ROI) of ERP (Chou et al.,
2005).
2.2. End-user computing satisfaction
Cotterman and Kumar (1989) defined an end user as any person who has an interaction with computer-based IS as a consumer of information. Turban et al. (2007) briefly discuss how the end-user
can be at any level in an organization or in any functional area. Many
researchers have emphasized user satisfaction as a measure of IS
success in organizations (Bailey & Pearson, 1983; DeLone & McLean,
1992; Doll & Torkzadeh, 1988; Ives, Olson, & Baroudi, 1983). Of
course, the definition of user satisfaction has evolved with the changes in the IS environment (Simmers & Andandarajan, 2001). Early research on user satisfaction was conducted in the transactional data processing environment (e.g., Bailey and Pearson, 1983; Ives et al., 1983), in which users interact with the computer indirectly with the assistance of an analyst or a programmer (Doll & Torkzadeh, 1988). User satisfaction was defined as "the extent to which users believe that the information system available to them meets their information requirement" (Ives et al., 1983, p. 785). Subsequent research on user satisfaction has been conducted in the end-user computing (EUC) environment, in which users interact directly with the application software to enter information or prepare output reports (Doll & Torkzadeh, 1988). In this EUC context, user satisfaction has been defined as "an affective attitude towards a specific computer application by someone who interacts with the application directly" (Doll & Torkzadeh, 1988, p. 261).
The literature has developed several instruments to measure
user satisfaction, including a 13-item user information satisfaction (UIS) instrument from Ives et al. (1983), Bailey and Pearson's (1983) 39-item computer user satisfaction instrument, and Doll and Torkzadeh's (1988) 12-item EUCS instrument. Based on the UIS instrument of Ives et al., Doll and Torkzadeh (1988) developed a 12-item, five-factor instrument for measuring EUCS, which has
been widely applied and validated, and found to be generalizable
across several IS applications (Gelderman, 1998; Igbaria, 1990). For
example, the instrument has been tested in an ERP environment

Table 1
Overview of EUCS model research in IS.

Author | IS applications | Sample source/size | Findings
McHaney et al. (1999) | Representational decision support systems (DSS) | 123 respondents from 105 different individuals/companies in the United States | They conducted a test-retest study and found that the EUCS instrument remained internally consistent and stable.
Xiao and Dasgupta (2002) | Web-based portals | 332 full-time and part-time students at a large mid-Atlantic university in the United States | They found that, with the exception of one item that measured sufficiency of information, the rest of the items (11 items) in the EUCS model were valid.
McHaney et al. (2002) | Typical business software applications | 342 knowledge workers in Taiwan | They found that the EUCS instrument was a valid and reliable measure in Taiwanese applications.
Somers et al. (2003) | Enterprise resource planning (ERP) systems | 407 end users from 214 organizations in the United States | They confirmed that the EUCS instrument maintains its psychometric stability when applied to users of ERP applications.
Abdinnour-Helm et al. (2005) | A web site | 176 students in a lab simulation | They found that the EUCS was a valid and reliable measure in the Web environment, but that one of the sub-factors, timeliness, will need further refinement.
Pikkarainen et al. (2006) | Online banking systems | 268 system users in Finland | They tested and then modified the EUCS model. Their findings indicated that the modified EUCS model can be utilized in analyzing user satisfaction with online banking.
Wang et al. (2007) | Group decision support systems | 156 undergraduate students in China in an experiment | They found that the EUCS model exhibits acceptable fit to the sample data; the reliability and validity of the model were also validated.
Deng et al. (2008) | Enterprise-wide applications | 2,648 respondents from five world regions including the United States, Western Europe, Saudi Arabia, India, and Taiwan | They used multi-group invariance analysis to test the model across five national samples and found that the EUCS instrument provided equivalent measurement across national cultures.

and was found to be reliable (Law & Ngai, 2007; Somers et al., 2003). Table 1 provides a summary of the EUCS research in IS. EUCS is a multi-faceted construct that consists of five subscales: content, accuracy, format, ease of use, and timeliness (see Fig. 1). Although past research has demonstrated the validity and reliability of the EUCS instrument (Doll & Xia, 1997; Doll, Xia, & Torkzadeh, 1994; Hendrickson, Glorfeld, & Cronan, 1994; McHaney & Cronan, 1998; McHaney et al., 1999), some studies involved only student groups, or groups of users within a single organization. The responses from students may not reflect the real-world situation. Moreover, the results from a single organization may not be generalizable (O'Reilly, 1982).
2.3. System usage
Over the past decade, the system usage (synonymous with
use) construct has played a critical role in IS research (Barkin &
Dickson, 1977; Bokhari, 2005; Schwarz & Chin, 2007). Burton-Jones

and Straub (2006) state that system usage has been employed
in scholarly studies across four domains, including IS success
(DeLone & McLean, 1992; Goodhue, 1995), IS acceptance (Davis,
1989; Venkatesh, Morris, Davis, & Davis, 2003), IS implementation
(Hartwick & Barki, 1994; Lucas, 1978), and IS for decision-making
(Barkin & Dickson, 1977; Yuthas & Young, 1998). Ives et al. (1983) argued that system usage can be used as a surrogate indicator of IS success. Goodhue and Thompson (1995) defined system usage as "the behavior of employing the technology in completing tasks" (p. 218) and conceptualized it as the extent to which "the information system has been integrated into each individual's work routine" (p. 223). In a review of the technology acceptance model literature, Lee, Kozar, and Larsen (2003) found that the frequency of use, amount of time using, actual number of usages, and diversity of usage were the measures most commonly used for system
usage. Similarly, Burton-Jones and Straub (2006) report that the
most common measures of system usage include the extent of use,
frequency of use, duration of use, decision to use (use or not use),
voluntariness of use (voluntary or mandatory), features used, and
task supported.

Fig. 1. End-user computing satisfaction model by Doll and Torkzadeh (1988). [Figure: the EUCS construct with its five subscales: content, accuracy, format, ease of use, and timeliness.]

2.4. Individual performance impact of IS

Individual performance impact of IS refers to the actual performance of an individual using an IS. DeLone and McLean (1992) note
that an individual performance impact could also be an indication
that an IS has given the user a better understanding of the decision
context, has improved his or her decision-making productivity, or
has changed the user's perception of the importance or usefulness
of the IS. A number of prior studies have measured individual performance impact of IS, including improved individual productivity,
increased job performance, enhanced decision-making effectiveness, and strengthened problem identication capabilities (DeLone
& McLean, 1992). For example, Gattiker and Goodhue (2005) conducted an empirical investigation of the impact of ERP systems on
business processes and found that the adoption of ERP systems was

positively associated with improved business processes and might include higher quality data for decision making, efficiency gains in business processes, and better coordination among different units within an organization. In their study of executive information systems (EIS), Leidner and Elam (1993) found that the frequency and duration of EIS use increased decision-making impacts at the individual level, such as decision-making speed, problem identification speed, and the extent of analysis in decision-making. Furthermore, Igbaria and Tan (1997) proposed that system usage has a direct positive effect on individual perceived performance impacts (i.e., the perceived impact of computer systems on decision-making quality, performance, productivity, and effectiveness of the job).

Fig. 2. The research model. [Figure: end-user computing satisfaction (content, accuracy, format, ease of use, timeliness) is linked to system usage (frequency of system usage, duration of use) by H1a and H1b; system usage is linked to individual performance by H2; EUCS is linked to individual performance (decision-making quality, job performance, individual productivity, job effectiveness, problem identification speed, decision-making speed, the extent of analysis in decision-making) by H3; control variables: firm size and elapsed time since adoption of the BI system.]

3. The research model and hypotheses

3.1. The research model

Fig. 2 presents the research model developed in this study. The research model proposes that end-user computing satisfaction (EUCS) will have a positive impact on individual performance both directly and indirectly through BI system usage. EUCS is conceptualized as a higher-order construct composed of five first-order factors: content, accuracy, format, ease of use, and timeliness. Individual performance is a concept that has been operationalized in the existing literature (Igbaria & Tan, 1997; Leidner & Elam, 1993). Drawing on support from the literature, we develop and test three hypotheses representing (a) the relationship between EUCS and system usage, (b) the relationship between system usage and individual performance, and (c) the relationship between EUCS and individual performance.

3.2. Hypotheses

3.2.1. EUCS and system usage
Previous studies examining the relationship between user satisfaction and system usage have found moderate support for this relationship at the individual level (Iivari, 2005). User satisfaction is strongly associated with system usage, as measured by system dependence (Kulkarni, Ravindran, & Freeze, 2007), the frequency and duration of use of the system (Guimaraes & Igbaria, 1997; Yuthas & Young, 1998), the number of different software applications and business tasks for which IS is used (Igbaria & Tan, 1997), and the intention to use (Chiu, Chiu, & Chang, 2007; Halawi, McCarthy, & Aronson, 2007). Parikh and Fazlollahi (2002) proposed that higher levels of user satisfaction lead to positive attitudes toward using the system and, in turn, increase the actual use of the system in voluntary situations. DeLone and McLean (2003) argue that increased user satisfaction will lead to a higher intention to use, which will subsequently affect the use of the system. In other words, dissatisfied users might opt to discontinue using the system and seek other alternatives (Szajna & Scamell, 1993). This study expected that EUCS would have a significant influence on BI system usage. In addition, some researchers have suggested that increased system usage leads to a higher level of user satisfaction (Baroudi, Olson, & Ives, 1986; Lee, Kim, & Lee, 1995; Torkzadeh & Dwyer, 1994). Therefore, we propose the following hypotheses:

Hypothesis 1a. Higher levels of EUCS will lead to higher levels of BI system usage.

Hypothesis 1b. Higher levels of BI system usage will lead to higher levels of EUCS.

3.2.2. System usage and individual performance


The relationship between the use of an IS and individual performance effects is complex (Jain & Kanungo, 2005). Prior studies have produced mixed findings in terms of the impact of IS usage on individual performance. Some studies have found that IS usage is positively associated with individual performance (Goodhue & Thompson, 1995; Igbaria & Tan, 1997; Leidner & Elam, 1993; Torkzadeh & Doll, 1999), while others find that IS usage has no impact (Lucas & Spitler, 1999), or even a negative impact, on individual performance (Pentland, 1989; Szajna, 1993). While the evidence about the relationship between IS usage and individual performance effects is mixed, it is logical to expect that an IS will not contribute to any performance effects unless it is used. In other words, an IS must be utilized before it can deliver performance effects (Goodhue & Thompson, 1995). It is therefore expected that as IS usage increases, individual performance will improve, implying a positive relationship between system usage and the individual performance effects of IS. Accordingly, we propose:
Hypothesis 2. Higher levels of BI system usage will lead to higher
levels of individual performance.
3.2.3. EUCS and individual performance
Prior research has supported the positive impact of user satisfaction on individual performance (Gatian, 1994; Guimaraes &
Igbaria, 1997; Igbaria & Tan, 1997). For example, in their study
of client/server systems, Guimaraes and Igbaria (1997) found that
end-user satisfaction has a positive effect on end-user jobs (i.e.,
accuracy demanded by the job, feedback on job performance,

skill needed on the job, etc.). Igbaria and Tan (1997) found that user satisfaction has the strongest direct effect on individual perceived performance effects, but identified a significant role for system usage in mediating the relationship between user satisfaction and individual impact. Additionally, DeLone and McLean (1992) proposed that user satisfaction will affect individual impact (performance) (i.e., decision effectiveness, problem identification, information understanding, individual productivity, etc.). Therefore, the study expected that EUCS would have a significant positive influence on individual performance, leading to the following hypothesis:
Hypothesis 3. Higher levels of EUCS will lead to higher levels of
individual performance.
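The paper does not publish estimation code, but the three hypothesized paths and the measurement model sketched above can be written down in the lavaan-style syntax accepted by SEM tools such as R's lavaan or Python's semopy. The sketch below is illustrative only: the observed-variable names are assumed placeholders rather than the author's item labels, and H1b (the reverse path from usage to EUCS) is omitted because it would make the model nonrecursive.

```python
# Illustrative lavaan-style specification of the hypothesized model.
# All observed-variable names are invented placeholders.
MODEL_DESC = """
# Measurement model: EUCS as a construct with five first-order factors
EUCS =~ content + accuracy + format + ease_of_use + timeliness

# System usage measured by two items
usage =~ frequency_of_use + duration_of_use

# Individual performance (a subset of its indicators, for brevity)
performance =~ decision_quality + job_performance + productivity

# Structural paths
usage ~ EUCS              # H1a: EUCS -> BI system usage
performance ~ usage       # H2:  usage -> individual performance
performance ~ EUCS        # H3:  EUCS -> individual performance
"""

# With semopy, estimation would then look roughly like (not run here):
#   from semopy import Model
#   model = Model(MODEL_DESC)
#   model.fit(survey_df)        # survey_df: the coded survey responses
#   print(model.inspect())
print("usage ~ EUCS" in MODEL_DESC)  # True
```

The same description string can be handed to lavaan in R with only cosmetic changes, since both tools share the `=~` (measurement) and `~` (regression) operators.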
3.2.4. Control variables and individual performance
To have a clear understanding of the effects of user satisfaction on individual performance, we control for other variables that have the potential to influence individual performance. Two control variables, firm size and elapsed time since adoption of the BI system, are included in the research framework. Firm size was used as a control variable, since it has been used in prior IS literature as a proxy for the size of the organizational resource base, which can influence IT performance (Hunton, Lippincott, & Reck, 2003; Ravichandran & Lertwongsatien, 2005; Tippins & Sohi, 2003). Firm size was measured by the number of employees working in a firm (Elbashir, Collier, & Davern, 2008; Zhu & Kraemer, 2002). Large firms with more capital resources can invest in different activities which support IT, such as employee training (Subramani, 2004). Therefore, users in large firms are generally more satisfied with their systems than those in smaller firms (Lees, 1987).
Elapsed time since adoption of the BI system was used as a control variable, as suggested by Bradford and Florin (2003). This variable is calculated as the length of time since the implementation of the BI system. The earlier firms implement an IS, the more employee learning takes place and the greater the chance of realizing business benefits from IS investments (Purvis, Sambamurthy, & Zmud, 2001). Additionally, the longer the time elapsed, the more
Zmud, 2001). Additionally, the longer the time elapsed, the more
comfortable employees are with the system and the greater the
employee satisfaction with the system (Bradford & Florin, 2003).
4. Research methodology
4.1. Subjects
The study focuses on the electronics industry, which has been
the most dynamic sector in East Asia since the 1980s and has
been widely recognized as a key driver of economic growth in its
role as a technology enabler for the whole electronics value chain
(Tung, 2001). Taiwan's electronics industry is divided into seven sectors (semiconductor, photoelectricity, computer and peripheral equipment, electronics, software and Internet, IC design,
and other electronics industries). Electronics products form an
increasingly vital part of a whole range of products, ranging from
electronic devices and systems (e.g., personal computers [PCs],
mobile phones) to solutions and services (e.g., Internet providers,
broadcasting services) (Lee, 2001; Lin, Chen, Lin, & Wu, 2006; Tung,
2001). Furthermore, because of the trend of globalization and the
impact of cost reduction, electronics companies in Taiwan need to
improve the speed of their business process, shorten their time of
delivery, and reduce their manufacturing costs to satisfy their customers' requirements. Therefore, Taiwanese manufacturers have
adopted many IT applications (e.g., ERP systems) and electronic
commerce systems to enhance their competitiveness in the global
market (Chen, Wang, & Chiou, 2009). For example, ERP systems
have become critical to enhancing the competitive advantage of

Taiwan's semiconductor industry by integrating internal information, increasing the speed of business processes, and reducing costs in manufacturing, human resources, and management (Lin et al., 2006). As more companies implement ERP systems, they have accumulated massive amounts of data in their databases. Although ERP systems are good at capturing and storing data, they offer very limited planning and decision-making support capabilities (Chen, 2001). According to Aberdeen's survey report, BI applications have
the highest percentage of planned implementations by companies
using ERP systems (AberdeenGroup, 2006). The main purpose of
BI implementation is to enhance the analysis capabilities of business information stored in ERP systems to support and improve
management decision-making (Elbashir et al., 2008). Therefore, the
electronics industry is likely to be fruitful ground to address our
research objectives.
4.2. Construct measurement
The items used to operationalize the constructs were adapted from relevant previous studies. All scale items were rephrased to relate specifically to the context of BI systems and were measured using a seven-point Likert-type scale (from 1 = strongly disagree to 7 = strongly agree). To ensure the content validity of the scales, a pre-test was conducted with five industry experts and ten experienced BI users in Taiwan. They were asked to evaluate the clarity of wording and the appropriateness of the items in each scale. Based on the feedback received, we modified the wording of some questions and instructions. In addition, we adapted a 12-item scale specifically for measuring end-user computing satisfaction from Doll and Torkzadeh's (1988) EUCS instrument, which consists of five components: content (four items), accuracy (two items), format (two items), ease of use (two items), and timeliness (two items). The EUCS instrument by Doll and Torkzadeh has been widely used and empirically validated through confirmatory factor analysis (e.g., Abdinnour-Helm et al., 2005; Doll et al., 1994; Hendrickson et al., 1994; McHaney et al., 2002).
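The internal-consistency side of such validation can be illustrated with Cronbach's alpha, a standard reliability statistic for multi-item scales. The sketch below computes alpha for a hypothetical "content" subscale; the six response rows are invented for illustration and are not data from this study.

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a set of questionnaire items.

    items: list of equal-length lists, one list of scores per item.
    alpha = k/(k-1) * (1 - sum(item variances) / variance of total score)
    """
    k = len(items)          # number of items in the subscale
    n = len(items[0])       # number of respondents

    def var(xs):            # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    totals = [sum(col[i] for col in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(var(col) for col in items) / var(totals))

# Hypothetical 7-point responses to the four "content" items (invented data)
content_items = [
    [7, 6, 5, 6, 7, 4],
    [6, 6, 5, 7, 7, 4],
    [7, 5, 4, 6, 6, 5],
    [6, 6, 5, 6, 7, 5],
]
print(round(cronbach_alpha(content_items), 3))  # prints 0.914
```

Values above roughly 0.7 are conventionally read as acceptable internal consistency, which is the kind of threshold the reliability studies cited above report against.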
The measures of system usage used widely in the literature
include frequency of use, duration of use, and extent of use
by the individual (Davis, 1989; Hartwick & Barki, 1994; Igbaria,
Guimaraes, & Davis, 1995; Leidner & Elam, 1993; Mathieson,
Peacock, & Chin, 2001; Venkatesh & Davis, 2000). In this study, BI
system usage was measured by (1) frequency of use, which was
measured on a seven-point scale ranging from 1 (less than once a
week) to 7 (more than 4 times a day); and (2) the duration of use
by the individual, which asked individuals to indicate how much
time was spent on the system per week using a seven-point scale
ranging from 1 (less than 10 min) to 7 (more than 2 h).
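As a concrete illustration of how the duration item can be coded, the sketch below maps weekly minutes of BI use onto the seven-point scale. Only the end anchors (1 = less than 10 min, 7 = more than 2 h) are stated in the text, so the intermediate cut-points and the function name are assumptions for illustration.

```python
def code_duration(minutes_per_week):
    """Return the 7-point ordinal code for weekly BI-use time.

    Anchors from the text: 1 = less than 10 min, 7 = more than 2 h.
    The intermediate boundaries below are illustrative assumptions.
    """
    cuts = [10, 20, 40, 60, 90, 120]  # assumed upper bounds (min) for codes 1-6
    for score, upper in enumerate(cuts, start=1):
        if minutes_per_week < upper:
            return score
    return 7  # more than 2 h per week

print(code_duration(5), code_duration(45), code_duration(150))  # prints: 1 4 7
```

Coding both usage items as ordinals in this way is what allows them to enter the structural model as observed indicators of the system-usage construct.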
Individual performance was assessed using 14 items. Four of
these items measuring the perceived impact of BI systems on
job performance, individual productivity, job effectiveness, and
decision-making quality were adapted from Igbaria and Tan (1997).
Ten additional items measuring the perceived impact of BI systems on the speed of problem identification and decision-making, and the extent of analysis in decision-making, were adapted from Leidner and Elam (1993). Each item was measured on a seven-point Likert-type scale ranging from 1 (strongly disagree) to 7 (strongly agree). The speed of problem identification refers to the length of time between when a problem first arises and when it is first noticed (Leidner & Elam, 1993). The respondents were asked to assess the extent to which BI systems helped them identify potential problems faster, notice potential problems before they become serious crises, and sense key factors affecting their area of responsibility. The speed of decision-making refers to "the time between when a decision-maker recognizes the need to make a decision to the time when he or she renders judgment" (Leidner & Elam, 1993, p. 142). The respondents were asked to indicate the


Table 2
Profile of the respondents and their organizations (n = 330).

Categories | Frequency | Percentage

Gender
  Male | 170 | 51.5
  Female | 160 | 48.5

Age (year)
  Under 30 | 60 | 18.2
  30–39 | 196 | 59.4
  Over 40 | 74 | 22.4

Educational level
  Bachelor's degree | 193 | 58.5
  Master's degree | 98 | 29.7
  Vocational/technical school | 35 | 10.6
  Senior high school | 2 | 0.6
  Doctoral degree | 2 | 0.6

Industry segment (a)
  Photoelectric and optical related industry | 95 | 28.8
  Semiconductor industry | 92 | 27.9
  Electronics-related industry | 50 | 15.2
  Computer and consumer electronics manufacturing industry | 39 | 11.8
  IC design house | 34 | 10.3
  Software and Internet-related industry | 27 | 8.2
  Others (e.g., equipment firms, material firms) | | 2.4

Number of employees
  100 or less | 13 | 3.9
  101–499 | 66 | 20.0
  500–999 | 42 | 12.7
  1000–4999 | 154 | 46.6
  5000–9999 | 12 | 3.6
  Over 10,000 | 43 | 13.0

Annual revenue (NT$ millions)
  More than 2000 | 189 | 57.3
  500 to below 2000 | 62 | 18.8
  100 to below 500 | 41 | 12.4
  Below 50 | 27 | 8.1
  50 to below 100 | 11 | 3.3

Work position
  Non-management/professional staff | 211 | 63.9
  Middle-level management | 64 | 19.4
  First level supervisor | 47 | 14.2
  Top-level management/executives | 8 | 2.4

Organization's BI software (b)
  Oracle | 102 | 30.9
  SAP | 90 | 27.3
  Microsoft | 68 | 20.6
  Data systems (Taiwan) | 59 | 17.9
  In-house development | 53 | 16.1
  Other suppliers | 24 | 7.3
  Business Objects | 14 | 4.2
  Cognos | 14 | 4.2
  Hyperion | 12 | 3.6
  SAS | 1 | 0.3

Elapsed time since adoption of the BI system (year)
  Less than 1 | 29 | 8.8
  1–3 | 48 | 14.5
  3–5 | 37 | 11.2
  5–10 | 120 | 36.4
  Over 10 | 96 | 29.1

BI experience (year)
  Less than 1 | 65 | 19.7
  1–4 | 128 | 38.8
  Over 5 | 137 | 41.5

Duration of BI use each week (min)
  Less than 20 | 78 | 23.6
  20–40 | 31 | 9.4
  40–90 | 37 | 11.2
  90–120 | 20 | 6.1
  Over 120 | 164 | 49.7

Frequency of system usage
  Less than once a week | 45 | 13.6
  About once a week | 10 | 3.0
  2–4 times a week | 55 | 16.7
  About once a day | 64 | 19.4
  2 or 3 times a day | 30 | 9.1
  More than 4 times a day | 126 | 38.2

Note: US$1 ≈ NT$32.84.
(a) Some organizations belong to more than one industry segment.
(b) Some organizations had one or more BI systems implemented.

degree to which BI systems had helped them make a decision more


quickly, shorten the time frame for making decisions, and spend
less time in meetings. The extent of analysis in decision-making
refers to the extent of analysis in situation diagnosis, alternative generation, alternative evaluation, and decision integration (Leidner & Elam, 1993). To measure the extent of analysis in decision-making, the respondents were asked to assess the degree to which BI systems had helped them spend significantly more time analyzing data before making a decision, examine more alternatives in decision-making, use more sources of information in decision-making, and engage in more in-depth analysis. Perceptual measures of performance were chosen because most measurements of individual performance are intangible or qualitative and, hence, it is difficult to be precise about their actual value as objective measures. Tallon et al. (2000) note that perceptual measures have been widely adopted in IS research (e.g., DeLone & McLean, 1992; Mahmood & Soon, 1991; Sethi & King, 1991). To ensure data reliability, we conducted a pilot study with 30 executives from four Taiwanese electronics companies. Each participant was asked to complete the questionnaire, evaluate the instrument, and comment on its clarity and understandability (Moore & Benbasat, 1991). Cronbach's alpha coefficient was used to measure the internal consistency of the multi-item scales used in the study. The value of Cronbach's alpha for each construct was greater than 0.7, well above the recommended value of 0.6 (Nunnally, 1978), indicating a satisfactory level of reliability. Based on the feedback received, we modified the wording of some questions and instructions and incorporated the changes into the final version of the questionnaire. Appendix A lists the final scale items used to measure each construct and their reference sources. The 30 executives who assisted in the pilot study were not included in the sampling frame of the subsequent study.
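Cronbach's alpha, used above to screen the pilot data, can be computed directly from a respondent-by-item score matrix. The sketch below is illustrative only, not the authors' code; the response matrix is hypothetical pilot data on a seven-point scale.

```python
from statistics import variance  # sample variance (n - 1 denominator)

def cronbach_alpha(items):
    """Cronbach's alpha for a list of respondent rows, each with k item scores.

    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)
    """
    k = len(items[0])
    columns = list(zip(*items))                         # per-item score columns
    item_var_sum = sum(variance(col) for col in columns)
    total_var = variance([sum(row) for row in items])   # variance of row totals
    return k / (k - 1) * (1 - item_var_sum / total_var)

# Hypothetical pilot responses: 5 respondents x 3 items of one construct
scores = [
    [6, 7, 6],
    [5, 5, 6],
    [3, 4, 3],
    [7, 6, 7],
    [4, 4, 5],
]
alpha = cronbach_alpha(scores)
```

A construct would pass the study's screening rule if the resulting alpha exceeds 0.7.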
4.3. Data collection

Due to the limited time the managerial respondents could offer, a mail survey approach was adopted to allow respondents to complete the surveys at their convenience. The sample was drawn from a report published by CommonWealth Magazine in 2010, which lists Taiwan's top 1000 manufacturers, including electronics companies ranked by annual revenue. Initial telephone screening interviews were conducted with IS executives or senior managers from Taiwan's electronics companies to confirm that the selected companies were using BI systems. Of these, 552 companies qualified and agreed to participate in the mail survey. A contact person was identified at each company and asked to distribute

the questionnaire to a key end user with extensive experience and knowledge of BI systems, at any level in the organization. This was done to reduce concern about common respondent bias in survey research. A total of 552 survey packages were sent out. Each package contained a cover letter, the questionnaire, and a stamped return envelope. The questionnaire consisted of three parts. The first part involved demographic questions about the respondents, their organizations, and the extent to which they used BI systems. The second part included 12 questions adopted from Doll and Torkzadeh's EUCS instrument, designed to measure the respondents' satisfaction with BI systems. The third part included 14 items measuring individual performance, adapted from Igbaria and Tan (1997) and Leidner and Elam (1993). Seven-point Likert-type scales were used to score the responses, with 7 standing for "strongly agree" and 1 for "strongly disagree."

Table 3
Fit indices for the measurement model.

Fit indices                                        Recommended   First-order    Second-order
                                                   value (a)     factor model   factor model
Chi-square/degrees of freedom                      ≤3.0          2.315          2.706
Normed Fit Index (NFI)                             ≥0.90         0.950          0.977
Goodness-of-Fit Index (GFI)                        ≥0.90         0.905          0.951
Adjusted Goodness-of-Fit Index (AGFI)              ≥0.80         0.862          0.900
Non-normed Fit Index (NNFI)                        ≥0.90         0.961          0.974
Root mean square error of approximation (RMSEA)    ≤0.08         0.063          0.072

(a) Recommended values for concluding good fit of model to data (Hair et al., 1998; Segars & Grover, 1993).
A total of 335 completed questionnaires were returned. However, five responses had to be discarded due to incomplete data, leaving 330 valid responses for a response rate of 54.3%. The demographics of the respondents surveyed are shown in Table 2. The respondents included 170 males (51.5%) and 160 females (48.5%); 59.4% were between 30 and 39 years old, and 88.7% had at least a bachelor's degree. The distribution of industry segments in our sample included 28.8% in the photoelectric and optical industry, 27.9% in the semiconductor industry, 15.2% in the electronics-related industry, 11.8% in the computer and consumer electronics manufacturing industry, and 10.3% in IC design houses. As for firm size in terms of the number of employees, 63.3% of the responses can be classified as large firms (more than 1000 employees), 12.7% as medium firms (500–999 employees), 20.0% as small firms (101–499 employees), and the remaining 3.9% had 100 or fewer employees. The job positions of the respondents included top-level managers (2.4%), middle managers (19.4%), supervisors (14.2%), and professional staff (63.9%). Most of the participants worked in the IT department (20.6%), followed by those in the R&D department (15.5%), the sales department (9.7%), and the purchasing department (8.8%). Concerning BI usage experience, those who had accumulated more than 5 years formed the majority, at approximately 41.5%. Nearly half (49.7%) of the respondents used BI systems more than 120 min per week. More than 38% reported using BI systems an average of more than four times per day.
To examine the possible presence of non-response bias, we tested for statistically significant differences in the responses of

Table 4
Measurement model: factor loadings, reliability and validity.

Construct                      Indicator  Factor loading  t-value   Cronbach's  CR (b)  AVE (c)
                                                                    alpha
Contents (C)                   C1         0.857 (a)       –
                               C2         0.876***        30.272
                               C3         0.865***        20.354
                               C4         0.917***        22.321    0.937       0.931   0.772
Accuracy (A)                   A1         0.921 (a)       –
                               A2         0.924***        26.700    0.919       0.919   0.851
Format (F)                     F1         0.869 (a)       –
                               F2         0.918***        22.320    0.892       0.888   0.798
Ease of use (EU)               EU1        0.987 (a)       –
                               EU2        0.925***        32.858    0.955       0.955   0.914
Timeliness (T)                 T1         0.950 (a)       –
                               T2         0.927***        27.977    0.938       0.936   0.880
Individual performance (INDP)  INDP1      0.886***        14.418
                               INDP2      0.924***        14.783
                               INDP3      0.912***        14.626
                               INDP4      0.673 (a)       –
                               INDP5      0.645***        15.212
                               INDP6      0.661***        15.175
                               INDP7      0.670***        13.945    0.954       0.912   0.603
System usage (SU)              SU1        0.895 (a)       –
                               SU2        0.976***        14.651    0.909       0.934   0.876
(a) Loadings are specified as fixed to make the model identified.
(b) CR = (Σλi)² / [(Σλi)² + Σ Var(εi)], where λi is the standardized factor loading of indicator i and Var(εi) is the indicator error variance.
(c) AVE = Σλi² / [Σλi² + Σ Var(εi)] (Fornell & Larcker, 1981).
*** Significance level: p < 0.001.

Table 5
Means, standard deviations, factor correlations and discriminant validity for the measurement model.

Constructs                         Mean   S.D.   1        2        3        4        5        6        7        8        9
1. Content                         5.373  0.908  (0.878)
2. Accuracy                        5.227  1.033  0.776**  (0.922)
3. Format                          5.198  1.059  0.717**  0.694**  (0.893)
4. Ease of use                     5.045  1.158  0.629**  0.622**  0.740**  (0.956)
5. Timeliness                      5.219  1.109  0.688**  0.725**  0.607**  0.637**  (0.938)
6. Individual performance          5.345  0.928  0.597**  0.516**  0.535**  0.440**  0.500**  (0.776)
7. System usage                    4.578  2.015  0.316**  0.280**  0.166**  0.229**  0.282**  0.283**  (0.935)
8. Firm size                       4.970  1.836  0.056    0.032    0.073    0.035    0.026    0.057    0.026    na
9. Elapsed time since BI adoption  3.620  1.280  0.341**  0.229**  0.202**  0.185**  0.213**  0.198**  0.341**  0.193**  na

Note: Diagonal elements in parentheses are the square roots of the average variance extracted (AVE) from the observed items; off-diagonal elements are correlations between constructs. na: AVE is not applicable to single-item constructs.
** p < 0.01.
early (184 users) versus late respondents (146 users) (Armstrong & Overton, 1977; Lambert & Harrington, 1990) using gender, work position, industry type, and annual revenue. The chi-square (χ²) tests comparing the categories across the two groups revealed no significant differences for gender (χ² = 3.798, p = 0.051), work position (χ² = 1.766, p = 0.779), industry type (χ² = 0.247, p = 0.619), or annual revenue (χ² = 6.865, p = 0.443). Therefore, non-response bias does not appear to be a significant concern in this study.
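The non-response check above is a Pearson chi-square test on early-versus-late contingency tables. A minimal sketch follows; the counts are hypothetical illustrations, not the paper's raw data.

```python
def chi_square(table):
    """Pearson chi-square statistic for an r x c contingency table,
    comparing observed counts against expected counts under independence."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    n = sum(row_totals)
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / n
            chi2 += (observed - expected) ** 2 / expected
    return chi2

# Hypothetical gender split: rows = early (n = 184) / late (n = 146)
# respondents, columns = male / female (column totals 170 and 160)
table = [[98, 86],
         [72, 74]]
chi2 = chi_square(table)
```

With one degree of freedom, a statistic below the 3.84 critical value (p = 0.05) would indicate no significant early/late difference, mirroring the tests reported above.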
5. Data analysis and results
5.1. Confirmatory factor analysis

This study is confirmatory in nature, and the proposed research model is built on the findings of previous empirical research. The structural equation modeling (SEM) method was used to test the research model presented in Fig. 2.
The two-step approach suggested by Anderson and Gerbing (1988) was used. First, the measurement model was estimated using confirmatory factor analysis (CFA) to test its reliability and validity. Second, the structural model was analyzed to examine the overall model fit.
5.1.1. The measurement model
The measurement model was first assessed through CFA. Since the chi-square (χ²) statistic is sensitive to sample size, we also assessed additional fit indices: the normed fit index (NFI), goodness-of-fit index (GFI), adjusted goodness-of-fit index (AGFI), non-normed fit index (NNFI), and root mean square error of approximation (RMSEA). The goodness-of-fit criteria suggested by Hair et al. (1998) and Segars and Grover (1993) were used: χ²/degrees of freedom (d.f.) ≤ 3.0, GFI ≥ 0.90, AGFI ≥ 0.80, RMSEA ≤ 0.08, and NFI and NNFI ≥ 0.90. We found that all the items are proper measures of their corresponding constructs. As shown in Table 3, the measurement model displayed a reasonable fit to the data (χ²/d.f. = 2.315, NFI = 0.950, GFI = 0.905, AGFI = 0.862, NNFI = 0.961, and RMSEA = 0.063).
To ensure data validity and reliability, the composite reliability, convergent validity, discriminant validity, and validity of the second-order construct were examined. As illustrated in Table 4, the composite reliabilities ranged from 0.888 to 0.955. Furthermore, all the Cronbach's alpha values exceeded the 0.70 cutoff level (Nunnally, 1978), demonstrating adequate internal consistency. Both the composite reliability estimates and the Cronbach's alpha estimates clearly indicate reliability. Following Fornell and Larcker (1981), the convergent validity of the measurement model was evaluated based on the average variance extracted (AVE). As shown in Table 4, the AVE estimates for all the dimensions were above 0.50, as suggested by Hair et al. (1998). These results indicate that the measurement model exhibited adequate convergent validity. Finally, the discriminant validity of the measurement model was examined. To evaluate discriminant validity, Fornell and Larcker (1981) suggested comparing the square root of the AVE for each construct with the correlations between constructs in the model. In Table 5, the diagonal elements are the square roots of the AVEs and the off-diagonal elements are the correlations among constructs. All diagonal elements are greater than the corresponding off-diagonal elements, indicating satisfactory discriminant validity for all the constructs.
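The Fornell–Larcker quantities reported in Tables 4 and 5 can be recomputed from the standardized loadings. The sketch below assumes an indicator error variance of 1 − λ² (standard for fully standardized solutions) and uses the two Timeliness loadings from Table 4; small differences from the published 0.936 and 0.880 reflect rounding of the loadings.

```python
def composite_reliability(loadings):
    """CR = (sum of loadings)^2 / [(sum of loadings)^2 + sum of error variances],
    with error variance 1 - loading^2 for standardized loadings."""
    total = sum(loadings)
    error = sum(1 - l ** 2 for l in loadings)
    return total ** 2 / (total ** 2 + error)

def average_variance_extracted(loadings):
    """AVE = sum of squared loadings / [sum of squared loadings + error variances]."""
    squared = sum(l ** 2 for l in loadings)
    error = sum(1 - l ** 2 for l in loadings)
    return squared / (squared + error)

# Standardized loadings reported for the Timeliness (T) construct in Table 4
t_loadings = [0.950, 0.927]
cr = composite_reliability(t_loadings)
ave = average_variance_extracted(t_loadings)
```

The Fornell–Larcker discriminant check then compares the square root of this AVE against the construct's largest correlation in Table 5 (0.725 for Timeliness).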
Table 6 shows the estimation of the second-order construct, EUCS. The paths from the second-order construct to the five first-order factors are significant and of high magnitude, greater than the suggested cutoff of 0.70 (Chin, 1998). As indicated in Table 3, the second-order measurement model showed an excellent fit to the data (χ²/d.f. = 2.706, NFI = 0.977, GFI = 0.951, AGFI = 0.900, NNFI = 0.974, and RMSEA = 0.072). Marsh and Hocevar (1985) suggest that the efficacy of the second-order model can be assessed by examining the target coefficient (T-ratio), defined as the ratio of the chi-square value of the first-order model to the chi-square value of the higher-order model. The T-ratio has an upper limit of 1.0; higher values imply that the relationships among the first-order factors are sufficiently captured by the higher-order factor (Stewart & Segars, 2002). In this study, the chi-square of the first-order factor model was 101.366 and that of the second-order factor model was 102.839. The T-ratio was 0.99, indicating that the second-order construct accounted for a sufficient portion of the covariance among the first-order factors, providing reasonable evidence of the existence of a second-order EUCS construct (Doll & Torkzadeh, 1988).
In summary, the combination of these results suggests that the measurement model adequately fits the data.

Fig. 3. Results of the structural model (standardized path coefficients and R² values; *** p < 0.001, ** p < 0.01, * p < 0.05).

Table 6
Measurement model: second-order construct.

Second-order construct: End-user computing satisfaction (EUCS)

First-order construct   Factor loading   t-value
Contents (C)            0.886 (a)        –
Accuracy (A)            0.907***         17.037
Format (F)              0.886***         14.498
Ease of use (EU)        0.797***         15.115
Timeliness (T)          0.832***         15.946

Composite reliability (CR) = 0.935; average variance extracted (AVE) = 0.744; target coefficient (T-ratio) = 0.985.
(a) Loading is specified as fixed to make the model identified.
*** Significance level: p < 0.001.
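The target coefficient can be reproduced directly from the two reported chi-square values:

```python
# Reported chi-square values for the two measurement models
chi2_first_order = 101.366
chi2_second_order = 102.839

# Target coefficient (Marsh & Hocevar, 1985): first-order chi-square divided
# by higher-order chi-square; values near the 1.0 upper bound indicate the
# second-order factor captures the covariance among first-order factors
t_ratio = chi2_first_order / chi2_second_order
```

This yields approximately 0.986, which the text rounds to 0.99.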
5.1.2. The structural model
Given an adequate measurement model, the hypotheses can be tested by examining the structural model. Since hypotheses H1a and H1b posit a mutual influence between EUCS and system usage that could not be tested simultaneously, we tested two different models. Model 1 assumes the influence runs from EUCS to system usage (H1a), and model 2 from system usage to EUCS (H1b). The results of the tests performed on the two structural models are shown in Fig. 3. The results indicate that all the measurements have significant loadings on their corresponding second-order construct. Overall, the model has a satisfactory fit with χ²/d.f. = 2.187, NFI = 0.948, GFI = 0.900, AGFI = 0.863, NNFI = 0.959, and RMSEA = 0.060. In addition, the coefficient of determination (R²) of the research model shown in Fig. 3 indicates how well the antecedents explain an endogenous construct. The overall R² for the structural model was 0.373 in both models, indicating that 37.3% of the variance in individual performance is explained by end-user computing satisfaction and BI system usage. The results also show that EUCS explains 11.2% of the variance in system usage in model 1, and system usage explains 11.2% of the variance in EUCS in model 2. The five first-order constructs also have high R² values (81.4% for content, 84.0% for accuracy, 74.2% for format, 69.8% for timeliness, and 63.0% for ease of use).

Table 7
Results for the structural model.

Hypothesis   Path                                    Direct effect      Indirect effect   Total effect       Result
H1a          EUCS → system usage                     0.334*** (5.530)   –                 0.334*** (5.530)   Supported
H1b          System usage → EUCS                     0.334*** (5.669)   –                 0.334*** (5.669)   Supported
H2           System usage → individual performance   0.219*** (3.972)   –                 0.219*** (3.972)   Supported
H3           EUCS → individual performance           0.490*** (7.589)   0.073* (2.315)    0.563*** (9.904)   Supported
             Firm size → individual performance      0.022 (0.457)      –                 0.022 (0.457)      Not supported
             Elapsed time since BI adoption →
             individual performance                  0.107* (2.064)     –                 0.107* (2.064)     Supported

Note: t-values are in parentheses.
* Significance level: p < 0.05. ** Significance level: p < 0.01. *** Significance level: p < 0.001.

Table 8
Results of the moderating effect of voluntariness of use.

                                 Voluntariness of use           Standard errors (SE)
Path                             Low (n = 129)  High (n = 201)  Low     High    Sp      |β(mandatory) − β(voluntary)|  t-value
EUCS → system usage              0.333***       0.306***        0.242   0.240   0.241   0.027                          1.043 ns
System usage → EUCS              0.333***       0.306***        0.041   0.024   0.032   0.027                          7.913***
EUCS → individual performance    0.436***       0.503***        0.116   0.076   0.094   0.067                          6.654***
SU → individual performance      0.243***       0.206***        0.042   0.021   0.031   0.037                          11.124***

n = 330; ns: not significant. The t-test follows a t-distribution with ne + nc − 2 degrees of freedom. Pooled estimator for the variance: Sp = √[((ne − 1)·SEe² + (nc − 1)·SEc²) / (ne + nc − 2)]; t = (βe − βc) / (Sp·√(1/ne + 1/nc)), where ne and nc are the sample sizes, SE is the standard error of the path in the structural model, and β is the path coefficient (Keil et al., 2000).
* p < 0.05. ** p < 0.01. *** p < 0.001.
The results of the proposed structural equation model analysis, presented in Table 7, indicate support for all the hypotheses. The results support Hypothesis 1a, which states that higher levels of EUCS lead to higher levels of BI system usage. The path coefficient from EUCS to system usage in model 1 is 0.334, which is statistically significant at p < 0.001 (t = 5.530). Hypothesis 1b is also supported, indicating that system usage has a significantly positive effect on EUCS in the context of a BI system; the path coefficient in model 2 from system usage to EUCS is 0.334, statistically significant at p < 0.001 (t = 5.669).
Hypothesis 2 is supported as well, indicating that BI system usage has a direct impact on individual performance. The path coefficient in both models from system usage to individual performance is 0.219, which is statistically significant at p < 0.001 (t = 3.972). Consistent with our expectations, EUCS and system usage are both positively related to individual performance in both models. The results reported in Table 7 also show that individual performance is influenced more strongly by EUCS than by system usage.
The results also indicate that higher levels of EUCS lead to improved individual performance, thus confirming Hypothesis 3. The path coefficient in both models from EUCS to individual performance is 0.490, which is statistically significant at p < 0.001 (t = 7.589). An indirect effect of EUCS on individual performance was also found in model 1: the path coefficient of the indirect effect is 0.073, significant at the p < 0.05 level (t = 2.315). This shows that EUCS has both direct and indirect positive effects on individual performance, the latter through system usage. Compared with system usage, EUCS has a higher total effect on individual performance. Concerning the control variables, firm size and elapsed time since BI adoption, we found that firm size had no significant influence on individual performance (β = 0.022; t = 0.457), whereas elapsed time since BI adoption had the expected positive influence on individual performance (β = 0.107; t = 2.064). Therefore, all hypotheses are supported.
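The indirect and total effects in Table 7 follow from simple path arithmetic; using the model 1 coefficients:

```python
# Standardized path coefficients reported for model 1 (Table 7)
eucs_to_usage = 0.334        # EUCS -> system usage
usage_to_perf = 0.219        # system usage -> individual performance
eucs_to_perf_direct = 0.490  # EUCS -> individual performance (direct)

# The indirect effect of EUCS on performance is the product of the two
# paths along the mediated route; the total effect adds the direct path
indirect = eucs_to_usage * usage_to_perf
total = eucs_to_perf_direct + indirect
```

Rounded to three decimals, these reproduce the 0.073 indirect effect and 0.563 total effect reported in Table 7.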
5.1.3. Voluntariness of system use and individual performance
Performance improvements from the use of IS might also be affected by whether system usage is mandatory or voluntary (Devaraj & Kohli, 2003). To distinguish between mandatory and voluntary settings, our model posits voluntariness as a moderating variable. Voluntariness of use is defined as the extent to which users perceive the technology adoption or use decision as non-mandatory; a mandated use environment is one in which users perceive use to be organizationally compulsory (Agarwal & Prasad, 1997; Hartwick & Barki, 1994). To investigate the moderating effect of voluntariness of use in the model, the split-sample approach was used to divide the total sample into two sub-samples: low voluntariness (i.e., mandatory) and high voluntariness (i.e., voluntary) groups (Ha, Yoon, & Choi, 2007; Serenko, Turel, & Yol, 2006; Shin, 2009). Four items used to measure voluntariness of use were adapted from Moore and Benbasat (1991) (see Appendix A). The mean scores of the low voluntariness group (n = 129) ranged from 1 to 4, and the mean scores of the high voluntariness group (n = 201) ranged from 4 to 7. The study conducted a t-test for group comparisons as described by Keil et al. (2000); this group comparison approach was suggested by Chin (2000). The results are presented in Table 8.
The results indicate that the influence of system usage on user satisfaction with BI systems was stronger in mandatory contexts than in voluntary contexts (beta difference = 0.027, t-value = 7.913, p < 0.001), that user satisfaction affected individual performance more strongly in voluntary contexts than in mandatory contexts (beta difference = 0.067, t-value = 6.654, p < 0.001), and that system usage affected individual performance more strongly in mandatory contexts than in voluntary contexts (beta difference = 0.037, t-value = 11.124, p < 0.001).
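The group comparisons in Table 8 use the pooled-variance t-test of Keil et al. (2000). A sketch applying it to the system usage → EUCS path follows; the recomputed t differs somewhat from the published 7.913 because the reported coefficients and standard errors are rounded.

```python
from math import sqrt

def path_comparison_t(beta_e, beta_c, se_e, se_c, n_e, n_c):
    """Pooled-variance t-test for comparing a structural path coefficient
    across two sub-samples (Keil et al., 2000).

    Returns (t statistic, pooled standard error Sp); the t statistic has
    n_e + n_c - 2 degrees of freedom.
    """
    sp = sqrt(((n_e - 1) * se_e ** 2 + (n_c - 1) * se_c ** 2)
              / (n_e + n_c - 2))
    t = (beta_e - beta_c) / (sp * sqrt(1 / n_e + 1 / n_c))
    return t, sp

# System usage -> EUCS path from Table 8: mandatory (low voluntariness)
# versus voluntary (high voluntariness) groups
t, sp = path_comparison_t(0.333, 0.306, 0.041, 0.024, 129, 201)
```

The pooled Sp matches the 0.032 in Table 8, and the t statistic lands well beyond the conventional significance thresholds, consistent with the reported moderation effect.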
6. Discussion and conclusion

Empirical studies investigating the relationship between IS usage and individual performance have reported contradictory results. The primary purpose of this study was to empirically examine a research framework identifying the relationships between EUCS, system usage, and individual performance in the context of a BI system. We examined three research questions: (1) is there a significant positive relationship between EUCS and BI system usage; (2) does an individual with higher levels of BI system usage have higher levels of individual performance; and (3) does an individual with higher levels of EUCS have higher levels of individual performance? Based on survey data from 330 respondents in the Taiwanese electronics industry,

the research framework was examined using structural equation modeling. Overall, the results provide strong empirical evidence that higher levels of EUCS lead to increased BI system usage and improved individual performance. This finding also confirms the argument of DeLone and McLean (2003), who suggest a significant bidirectional positive relationship between system use and user satisfaction: the greater the use of the BI system, the more satisfied the user, and the more satisfied the user, the greater the use of the BI system. Consistent with prior studies (Gelderman, 1998; Igbaria & Tan, 1997), our results indicate that higher levels of EUCS lead to improved individual performance when using BI systems. The strong and statistically significant impact of EUCS on individual performance supports the suggestion that user satisfaction may serve as a valid surrogate for individual performance (Iivari, 2005; Ives et al., 1983). BI adoption in organizations helped individuals accomplish their tasks more effectively, increased their productivity, and improved their decision-making quality. Therefore, organizations can improve employee performance by raising users' satisfaction with BI systems. In particular, the results demonstrate the importance of examining end-user computing satisfaction in explaining user performance.
As expected, and consistent with prior research (Goodhue & Thompson, 1995; Igbaria & Tan, 1997; Leidner & Elam, 1993; Torkzadeh & Doll, 1999), the results show that higher levels of BI system usage lead to higher levels of individual performance. The results also indicate that EUCS has a stronger effect on individual performance than system usage, supporting the findings of Gelderman (1998), Igbaria and Tan (1997), and McGill et al. (2003). When examining the direct and indirect effects of EUCS on individual performance, the results show that BI system usage mediates part of the effect of EUCS on individual performance; that is, EUCS has a significant indirect effect on individual performance through BI system usage. These findings are consistent with the research of Igbaria and Tan (1997). In addition, we investigated the effect of two control variables. One control variable, elapsed time since BI adoption, shows a positive and significant impact on individual performance. This finding is consistent with the study by Purvis et al. (2001), which suggested that the longer the elapsed time, the more employee learning takes place with the system and the greater the chance of realizing business benefits. However, the other control variable, firm size, did not affect individual performance. This result may appear surprising. One plausible explanation is that our sample is already biased toward large firms, because BI systems are typically used by large firms.

7. Implications
As discussed earlier, while the role of BI systems as a source of improved decision-making capabilities has received a great deal of interest from researchers and practitioners (Chou et al., 2005; Friedman & Hostmann, 2004; Hou & Papamichail, 2010; InformationAge, 2006), few empirical studies have investigated the impact of BI system usage on user performance at the individual level or examined the relationships between user satisfaction, system usage, and individual performance.
This study presents and empirically tests a research framework and makes the following theoretical and practical contributions. For researchers, this study makes a major contribution by validating the EUCS instrument of Doll and Torkzadeh (1988) in the area of BI. Our study provides strong evidence that EUCS is a multifaceted construct consisting of five subscales: content, accuracy, format, ease of use, and timeliness. The rigorous validation process in this study therefore enables researchers to use the instrument with increased confidence. In the BI context, this study empirically

validates the significant bidirectional positive relationship between system usage and EUCS proposed by DeLone and McLean (2003) in their IS success model. Furthermore, our results indicate that system usage and EUCS both influence individual performance when using a BI system. In addition, this study examined the moderating effect of voluntariness of use based on the statistical analysis suggested by Keil et al. (2000).
From a practical standpoint, the study should enable managers to gain a better understanding of the relationships between end-user satisfaction, system usage, and individual performance in order to assess the benefits of BI system implementation. In addition, the EUCS instrument used in this study can serve as a diagnostic tool to evaluate the degree of user satisfaction while a company is using BI systems. One way to use the instrument is to test it at different levels of assessment: the EUCS instrument can be assessed at the overall level and at the subscale level. Analyzing data at these different levels would allow managers to measure both the overall and subscale levels of EUCS and identify problem areas within their organizations. Similarly, the average score of the items measuring individual performance effects should enable managers to evaluate the degree to which BI systems succeed in improving the work environment.

8. Limitations and future research


This study has its limitations. First, we measured user satisfaction using an established measurement instrument. Future research should attempt to identify additional measures of user satisfaction that are specific to a BI environment; such measures could cover issues related to data security and privacy, and integration with legacy systems. Second, this study was conducted in a single industry and, therefore, its generalizability to other industries may be questionable. Further research is needed to determine the applicability of these results to other industries. Third, our empirical study was carried out in Taiwan, and the results might not be directly applicable to other countries due to cultural differences. Consequently, this study should be replicated in different countries. Fourth, this study focuses on users' perceptual measures of system usage and performance rather than on objective measures, because most of the data required to measure performance are intangible or qualitative.
Although all responses were anonymous, it is still possible that respondents misrepresented their past performance in the survey. Moreover, the use of a single respondent from each organization may introduce some measurement inaccuracy and lead to common method bias (Bhatt & Grover, 2005; Podsakoff, MacKenzie, Lee, & Podsakoff, 2003). To address this concern, we conducted Harman's one-factor test for common method bias (Podsakoff et al., 2003). This test evaluates whether a significant amount of common method bias exists in the data by examining whether most of the observed variance is explained by a single factor (Podsakoff & Organ, 1986). In this study, an exploratory factor analysis conducted on all measured items revealed no single factor explaining the majority of the variance, indicating that common method bias is unlikely to be a significant issue. Future research should seek to utilize multiple respondents from each participating organization to enhance the validity of the research (Teo & Pian, 2003).
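The single-factor check described above can be sketched in a few lines. This is an illustrative approximation only, not the paper's actual analysis: it proxies the first unrotated factor with the first principal component of the item correlation matrix, and the simulated two-factor survey data are entirely hypothetical.

```python
import numpy as np

def harman_single_factor_share(items: np.ndarray) -> float:
    """Proportion of total variance captured by the first unrotated factor,
    approximated by the leading eigenvalue of the item correlation matrix."""
    corr = np.corrcoef(items, rowvar=False)          # items: (n_respondents, n_items)
    eigvals = np.linalg.eigvalsh(corr)[::-1]         # eigenvalues, descending
    return float(eigvals[0] / eigvals.sum())

# Hypothetical data: 330 respondents, 8 items driven by two independent traits.
rng = np.random.default_rng(0)
n = 330
f1, f2 = rng.normal(size=n), rng.normal(size=n)
items = np.column_stack(
    [f1 + rng.normal(scale=0.5, size=n) for _ in range(4)]
    + [f2 + rng.normal(scale=0.5, size=n) for _ in range(4)]
)
share = harman_single_factor_share(items)
print(f"first-factor share: {share:.2f}")  # below the common 0.5 rule of thumb here
```

If a single factor accounted for the majority of the variance (a common rule of thumb is more than 50%), common method bias would be a concern; here the two-trait structure keeps the first factor's share well below that threshold.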
Finally, this study employed a cross-sectional design that measures users' perceptions at a single point in time. It is reasonable to assume that users' perceptions may change over time as they gain more experience with BI systems (Mathieson et al., 2001; Venkatesh & Davis, 1996). Hence, a longitudinal approach should be considered in future research.

C.-K. Hou / International Journal of Information Management 32 (2012) 560–573, p. 571

Appendix A. Questionnaire items used in the study

End-user computing satisfaction (EUCS) (Source: Doll and Torkzadeh, 1988)

Content (C)
C1: The BI system provides the precise information I need.
C2: The information content provided by the BI system meets my needs.
C3: The BI system provides reports that seem to be just about exactly what I need.
C4: The BI system provides sufficient information.

Accuracy (A)
A1: The BI system is accurate.
A2: I am satisfied with the accuracy of the BI system.

Format (F)
F1: I think the output provided by the BI system is presented in a useful format.
F2: The information provided by the BI system is clear.

Ease of use (EU)
EU1: The BI system is user friendly.
EU2: The BI system is easy to use.

Timeliness (T)
T1: I get the information provided by the BI system in time.
T2: The BI system provides up-to-date information.

Individual performance (INDP)

INDP1-INDP4 (Source: Igbaria and Tan, 1997)
INDP1 (Job performance): Using the BI system improves my job performance.
INDP2 (Individual productivity): Using the BI system in my job increases my productivity.
INDP3 (Job effectiveness): Using the BI system enhances my effectiveness in my job.
INDP4 (Decision-making quality): Using the BI system improves my decision-making quality.

INDP5 (Problem identification speed (a); consists of INDP5a, INDP5b, and INDP5c) (Source: Leidner and Elam, 1993)
INDP5a: Using the BI system helps me to identify potential problems faster.
INDP5b: Using the BI system alerts me to potential problems before they become serious crises.
INDP5c: Using the BI system helps me to sense key factors impacting my area of responsibility.

INDP6 (Decision-making speed (a); consists of INDP6a, INDP6b, and INDP6c) (Source: Leidner and Elam, 1993)
INDP6a: Using the BI system helps me to make decisions quicker.
INDP6b: Using the BI system helps me to shorten the time frame for making decisions.
INDP6c: Using the BI system helps me to spend less time in meetings.

INDP7 (Extent of analysis in decision-making (a); consists of INDP7a, INDP7b, INDP7c, and INDP7d) (Source: Leidner and Elam, 1993)
INDP7a: Using the BI system helps me to spend significantly more time analyzing data before making a decision.
INDP7b: Using the BI system helps me to examine more alternatives in decision-making.
INDP7c: Using the BI system helps me to use more sources of information in decision-making.
INDP7d: Using the BI system helps me to engage in more in-depth analysis.

System usage (SU)
SU1 (Duration of use): How much time do you spend each week using the BI system?
SU2 (Frequency of system usage): At present, how often do you use the BI system?

Voluntariness of use (VOL) (Source: Moore and Benbasat, 1991)
VOL1 (b): My superiors expect me to use the BI system.
VOL2: My use of the BI system is voluntary.
VOL3: My boss does not require me to use the BI system.
VOL4: Although it might be helpful, using the BI system is certainly not compulsory in my job.

(a) Composite variable.
(b) Reverse coded.
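The scoring conventions flagged by the footnotes (composite variables and reverse coding) can be sketched as follows. This is an illustrative sketch, not the paper's code; it assumes a seven-point Likert scale, which this excerpt does not state, and the sample responses are hypothetical.

```python
def reverse_code(x: int, scale_max: int = 7) -> int:
    """Reverse-code a Likert response (e.g. VOL1): on a 7-point scale,
    1 <-> 7, 2 <-> 6, and so on."""
    return scale_max + 1 - x

def composite(scores: list[float]) -> float:
    """Average sub-items into a composite indicator, as for INDP5
    (mean of INDP5a, INDP5b, INDP5c)."""
    return sum(scores) / len(scores)

# Hypothetical respondent: sub-item scores for INDP5 and a raw VOL1 answer.
indp5 = composite([6, 5, 7])   # problem identification speed -> 6.0
vol1 = reverse_code(2)         # raw 2 becomes 6 after reverse coding
print(indp5, vol1)
```

Averaging sub-items and reverse-coding before analysis ensures that all indicators point in the same conceptual direction, so that higher scores consistently mean higher voluntariness or better performance.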


References

Abdinnour-Helm, S. F., Chaparro, B. S., & Farmer, S. M. (2005). Using the end-user computing satisfaction (EUCS) instrument to measure satisfaction with a web site. Decision Sciences, 36(2), 341–364.
Aberdeen Group. (2006). Best practices in extending ERP: A buyer's guide to ERP versus best-of-breed decisions. Boston, MA: Aberdeen Group, Inc.
Agarwal, R., & Prasad, J. (1997). The role of innovation characteristics and perceived voluntariness in the acceptance of information technologies. Decision Sciences, 28(3), 557–582.
Anderson, J. C., & Gerbing, D. W. (1988). Structural equation modeling in practice: A review and recommended two-step approach. Psychological Bulletin, 103(3), 411–423.
Armstrong, J. S., & Overton, T. S. (1977). Estimating nonresponse bias in mail surveys. Journal of Marketing Research, 14(3), 396–402.
Arnott, D., & Pervan, G. (2005). A critical analysis of decision support systems research. Journal of Information Technology, 20(2), 67–87.
Bailey, J. E., & Pearson, S. W. (1983). Development of a tool for measuring and analyzing computer user satisfaction. Management Science, 29(5), 530–545.
Barkin, S. R., & Dickson, G. W. (1977). An investigation of information system utilization. Information & Management, 1(1), 35–45.
Baroudi, J. J., Olson, M. H., & Ives, B. (1986). An empirical study of the impact of user involvement on system usage and information satisfaction. Communications of the ACM, 29(3), 232–238.
Bhatt, G. D., & Grover, V. (2005). Types of information technology capabilities and their role in competitive advantage: An empirical study. Journal of Management Information Systems, 22(2), 253–277.
Bokhari, R. H. (2005). The relationship between system usage and user satisfaction: A meta-analysis. Journal of Enterprise Information Management, 18(1), 211–234.
Bradford, M., & Florin, J. (2003). Examining the role of innovation diffusion factors on the implementation success of enterprise resource planning systems. International Journal of Accounting Information Systems, 4(3), 205–225.
Burton-Jones, A., & Straub, D. W. (2006). Reconceptualizing system usage: An approach and empirical test. Information Systems Research, 17(3), 228–246.
Chen, I. J. (2001). Planning for ERP systems: Analysis and future trend. Business Process Management Journal, 7(5), 374–386.
Chen, M. K., Wang, S. C., & Chiou, C. H. (2009). The e-business policy of global logistics management for manufacturing. International Journal of Electronic Business, 7(2), 86–97.
Chin, W. W. (1998). Issues and opinion on structural equation modeling. MIS Quarterly, 22(1), vii–xvi.
Chin, W. W. (2000). Frequently asked questions: Partial least squares & PLS-Graph. Home page, available at: http://disc-nt.cba.uh.edu/chin/plsfaq.htm
Chiu, C. M., Chiu, C. S., & Chang, H. C. (2007). Examining the integrated influence of fairness and quality on learners' satisfaction and Web-based learning continuance intention. Information Systems Journal, 17(3), 271–287.
Chou, D. C., Tripuramallu, H. B., & Chou, A. Y. (2005). BI and ERP integration. Information Management & Computer Security, 13(5), 340–349.
Cotterman, W. W., & Kumar, K. (1989). User cube: A taxonomy of end users. Communications of the ACM, 32(11), 1313–1320.
Dasgupta, S., Granger, M., & McGarry, N. (2002). User acceptance of e-collaboration technology: An extension of the technology acceptance model. Group Decision and Negotiation, 11(2), 87–100.
Davis, F. D. (1989). Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quarterly, 13(3), 319–339.
DeLone, W. H., & McLean, E. R. (1992). Information systems success: The quest for the dependent variable. Information Systems Research, 3(1), 60–95.
DeLone, W. H., & McLean, E. R. (2003). The DeLone and McLean model of information systems success: A ten-year update. Journal of Management Information Systems, 19(4), 9–30.
Deng, X., Doll, W. J., Al-Gahtani, S. S., Larsen, T. J., Pearson, J. M., & Raghunathan, T. S. (2008). A cross-cultural analysis of the end-user computing satisfaction instrument: A multi-group invariance analysis. Information & Management, 45(4), 211–220.
Devaraj, S., & Kohli, R. (2003). Performance impacts of information technology: Is actual usage the missing link? Management Science, 49(3), 273–289.
Doll, W. J., Xia, W., & Torkzadeh, G. (1994). A confirmatory factor analysis of the end-user computing satisfaction instrument. MIS Quarterly, 18(4), 357–369.
Doll, W. J., & Torkzadeh, G. (1988). The measurement of end-user computing satisfaction. MIS Quarterly, 12(2), 259–274.
Doll, W. J., & Xia, W. (1997). A confirmatory factor analysis of the end-user computing satisfaction instrument: A replication. Journal of End User Computing, 9(2), 24–31.
Elbashir, M. Z., Collier, P. A., & Davern, M. J. (2008). Measuring the effects of business intelligence systems: The relationship between business process and organizational performance. International Journal of Accounting Information Systems, 9(3), 135–153.
Fornell, C., & Larcker, D. F. (1981). Evaluating structural equation models with unobservable variables and measurement error. Journal of Marketing Research, 18(1), 39–50.
Friedman, T., & Hostmann, B. (2004). Management update: The cornerstones of business intelligence excellence. Gartner Research, pp. 1–7.
Gartner. (2008). 2009 CIO survey, meeting the challenge: The 2009 CIO agenda. Gartner, Inc.
Gatian, A. W. (1994). Is user satisfaction a valid measure of system effectiveness? Information & Management, 26(3), 119–131.
Gattiker, T. F., & Goodhue, D. L. (2005). What happens after ERP implementation: Understanding the impact of interdependence and differentiation on plant-level outcomes. MIS Quarterly, 29(3), 559–585.
Gelderman, M. (1998). The relation between user satisfaction, usage of information systems and performance. Information & Management, 34(1), 11–18.
Goodhue, D. L. (1995). Understanding user evaluations of information systems. Management Science, 41(12), 1827–1844.
Goodhue, D. L., & Thompson, R. L. (1995). Task-technology fit and individual performance. MIS Quarterly, 19(2), 213–236.
Guimaraes, T., & Igbaria, M. (1997). Client/server system success: Exploring the human side. Decision Sciences, 28(4), 851–876.
Ha, I., Yoon, Y., & Choi, M. (2007). Determinants of adoption of mobile games under mobile broadband wireless access environment. Information & Management, 44(3), 276–286.
Hair, J. F., Anderson, R. E., Tatham, R. L., & Black, W. C. (1998). Multivariate data analysis (5th ed.). Upper Saddle River, NJ: Prentice Hall.
Halawi, L. A., McCarthy, R. V., & Aronson, J. E. (2007). An empirical investigation of knowledge-management systems success. Journal of Computer Information Systems, 48(2), 121–135.
Hartwick, J., & Barki, H. (1994). Explaining the role of user participation in information system use. Management Science, 40(4), 440–465.
Hendrickson, A. R., Glorfeld, K., & Cronan, T. P. (1994). On the repeated test–retest reliability of the end-user computing satisfaction instrument: A comment. Decision Sciences, 25(4), 655–667.
Hou, C. K., & Papamichail, K. N. (2010). The impact of integrating enterprise resource planning systems with business intelligence systems on decision-making performance: An empirical study of the semiconductor industry. International Journal of Technology, Policy and Management, 10(3), 201–226.
Hunton, J. E., Lippincott, B., & Reck, J. L. (2003). Enterprise resource planning systems: Comparing firm performance of adopters and nonadopters. International Journal of Accounting Information Systems, 4(3), 165–184.
Igbaria, M. (1990). End-user computing effectiveness: A structural equation model. Omega, 18(6), 637–652.
Igbaria, M., Guimaraes, T., & Davis, G. B. (1995). Testing the determinants of microcomputer usage via a structural equation model. Journal of Management Information Systems, 11(4), 87–114.
Igbaria, M., & Tan, M. (1997). The consequences of information technology acceptance on subsequent individual performance. Information & Management, 32(3), 113–121.
Iivari, J. (2005). An empirical test of the DeLone–McLean model of information system success. The DATA BASE for Advances in Information Systems, 36(2), 8–27.
Information Age. (2006). Business briefing: The intelligent enterprise, transforming corporate performance through business intelligence. Information Age, pp. 1–13.
Ives, B., Olson, M. H., & Baroudi, J. J. (1983). The measurement of user information satisfaction. Communications of the ACM, 26(10), 785–793.
Jain, V., & Kanungo, S. (2005). Beyond perceptions and usage: Impact of nature of information systems use on information system-enabled productivity. International Journal of Human–Computer Interaction, 19(1), 113–136.
Keil, M., Tan, B. C. Y., Wei, K. K., Saarinen, T., Tuunainen, V., & Wassenaar, A. (2000). A cross-cultural study on escalation of commitment behavior in software projects. MIS Quarterly, 24(2), 299–325.
Kulkarni, U. R., Ravindran, S., & Freeze, R. (2007). A knowledge management success model: Theoretical development and empirical validation. Journal of Management Information Systems, 23(3), 309–347.
Lambert, D. M., & Harrington, T. C. (1990). Measuring nonresponse bias in customer service mail surveys. Journal of Business Logistics, 11(2), 5–25.
Law, C. C. H., & Ngai, E. W. T. (2007). ERP systems adoption: An exploratory study of the organizational factors and impacts of ERP success. Information & Management, 44(4), 418–432.
Lee, K. K. F. (2000). The five disciplines of ERP software implementation. In Proceedings of the 2000 IEEE International Conference on Management of Innovation and Technology (ICMIT 2000), Vol. 2.
Lee, S. M., Kim, Y. R., & Lee, J. (1995). An empirical study of the relationships among end-user information systems: Acceptance, training, and effectiveness. Journal of Management Information Systems, 12(2), 189–202.
Lee, Y., Kozar, K. A., & Larsen, K. R. T. (2003). The technology acceptance model: Past, present, and future. Communications of the Association for Information Systems, 12(50), 752–780.
Lee, Y. H. (2001). Supply chain model for the semiconductor industry of global market. Journal of Systems Integration, 10, 189–206.
Lees, J. D. (1987). Successful development of small business information systems. Journal of Systems Management, 38(9), 32–39.
Leidner, D. E., & Elam, J. J. (1993). Executive information systems: Their impact on executive decision making. Journal of Management Information Systems, 10(3), 139–155.
Lin, W. T., Chen, S. C., Lin, M. Y., & Wu, H. H. (2006). A study on performance of introducing ERP to semiconductor related industries in Taiwan. The International Journal of Advanced Manufacturing Technology, 29(1), 89–98.
Lucas, H. C. (1978). Empirical evidence for a descriptive model of implementation. MIS Quarterly, 2(2), 27–41.
Lucas, H. C., & Spitler, V. K. (1999). Technology use and performance: A field study of broker workstations. Decision Sciences, 30(2), 291–311.
Mahmood, M. A., & Soon, S. K. (1991). A comprehensive model for measuring the potential impact of information technology on organizational strategic variables. Decision Sciences, 22(4), 869–897.
Marsh, H. W., & Hocevar, D. (1985). Application of confirmatory factor analysis to the study of self-concept: First- and higher order factor models and their invariance across groups. Psychological Bulletin, 97(3), 562–582.
Mathieson, K., Peacock, E., & Chin, W. W. (2001). Extending the technology acceptance model: The influence of perceived user resources. Data Base for Advances in Information Systems, 32(3), 86–112.
McGill, T. J., Hobbs, V. J., & Klobas, J. E. (2003). User developed applications and information systems success: A test of DeLone and McLean's model. Information Resources Management Journal, 16(1), 24–45.
McHaney, R., & Cronan, T. P. (1998). Computer simulation success: On the use of the end user computing satisfaction instrument: A comment. Decision Sciences, 29(2), 525–535.
McHaney, R., Hightower, R., & Pearson, J. (2002). A validation of the end-user computing satisfaction instrument in Taiwan. Information & Management, 39(6), 503–511.
McHaney, R., Hightower, R., & White, D. (1999). EUCS test–retest reliability in representational model decision support systems. Information & Management, 36(2), 109–119.
Mikroyannidis, A., & Theodoulidis, B. (2010). Ontology management and evolution for business intelligence. International Journal of Information Management, 30(6), 559–566.
Moore, G. C., & Benbasat, I. (1991). Development of an instrument to measure the perceptions of adopting an information technology innovation. Information Systems Research, 2(3), 192–222.
Moss, L. T., & Atre, S. (2003). Business intelligence roadmap: The complete project lifecycle for decision-support applications. Boston, MA: Addison-Wesley.
Negash, S., & Gray, P. (2004). Business intelligence. Communications of the Association for Information Systems, 13, 177–195.
Newell, S., Huang, J. C., Galliers, R. D., & Pan, S. L. (2003). Implementing enterprise resource planning and knowledge management systems in tandem: Fostering efficiency and innovation complementarity. Information and Organization, 13(1), 25–52.
Nunnally, J. C. (1978). Psychometric theory. New York, NY: McGraw-Hill.
O'Reilly, C. A. (1982). Variations in decision makers' use of information sources: The impact of quality and accessibility of information. The Academy of Management Journal, 25(4), 756–771.
Parikh, M., & Fazlollahi, B. (2002). Analyzing user satisfaction with decisional guidance. In Proceedings of the 33rd Annual Meeting of the Decision Sciences Institute, San Diego, CA (pp. 128–133).
Parr, A., & Shanks, G. (2000). A model of ERP project implementation. Journal of Information Technology, 15(4), 289–303.
Pentland, B. T. (1989). Use and productivity in personal computing: An empirical test. In Proceedings of the Tenth International Conference on Information Systems, Boston, MA (pp. 211–222).
Pikkarainen, K., Pikkarainen, T., Karjaluoto, H., & Pahnila, S. (2006). The measurement of end-user computing satisfaction of online banking services: Empirical evidence from Finland. International Journal of Bank Marketing, 24(3), 158–172.
Podsakoff, P. M., MacKenzie, S. B., Lee, J. Y., & Podsakoff, N. P. (2003). Common method biases in behavioral research: A critical review of the literature and recommended remedies. Journal of Applied Psychology, 88(5), 879–903.
Podsakoff, P. M., & Organ, D. W. (1986). Self-reports in organizational research: Problems and prospects. Journal of Management, 12(4), 531–544.
Purvis, R. L., Sambamurthy, V., & Zmud, R. W. (2001). The assimilation of knowledge platforms in organizations: An empirical investigation. Organization Science, 12(2), 117–135.
Ravichandran, T., & Lertwongsatien, C. (2005). Effect of information systems resources and capabilities on firm performance: A resource-based perspective. Journal of Management Information Systems, 21(4), 237–276.
Schwarz, A., & Chin, W. (2007). Looking forward: Toward an understanding of the nature and definition of IT acceptance. Journal of the Association for Information Systems, 8(4), 230–243.
Segars, A. H., & Grover, V. (1993). Re-examining perceived ease of use and usefulness: A confirmatory factor analysis. MIS Quarterly, 17(4), 517–525.
Serenko, A., Turel, O., & Yol, S. (2006). Moderating roles of user demographics in the American customer satisfaction model within the context of mobile services. Journal of Information Technology Management, 17(4), 20–32.
Sethi, V., & King, W. R. (1991). Construct measurement in information systems research: An illustration in strategic systems. Decision Sciences, 22(3), 455–472.
Shin, D. H. (2009). Towards an understanding of the consumer acceptance of mobile wallet. Computers in Human Behavior, 25(6), 1343–1354.
Simmers, C. A., & Anandarajan, M. (2001). User satisfaction in the internet-anchored workplace: An exploratory study. Journal of Information Technology Theory and Application, 3(5), 39–53.
Somers, T. M., Nelson, K., & Karimi, J. (2003). Confirmatory factor analysis of the end user computing satisfaction instrument: Replication within an ERP domain. Decision Sciences, 34(3), 595–621.
Stewart, K. A., & Segars, A. H. (2002). An empirical examination of the concern for information privacy instrument. Information Systems Research, 13(1), 36–49.
Subramani, M. (2004). How do suppliers benefit from IT use in supply chain relationships? MIS Quarterly, 28(1), 45–74.
Szajna, B. (1993). Determining information systems usage: Some issues and examples. Information & Management, 25(3), 147–154.
Szajna, B., & Scamell, R. W. (1993). The effects of information system user expectations on their performance and perceptions. MIS Quarterly, 17(4), 493–516.
Tallon, P. P., Kraemer, K. L., & Gurbaxani, V. (2000). Executives' perceptions of the business value of information technology: A process-oriented approach. Journal of Management Information Systems, 16(4), 145–173.
Teo, T. S. H., & Pian, Y. (2003). A contingency perspective on internet adoption and competitive advantage. European Journal of Information Systems, 12(2), 78–92.
Tippins, M. J., & Sohi, R. S. (2003). IT competency and firm performance: Is organizational learning a missing link? Strategic Management Journal, 24(8), 745–761.
Torkzadeh, G., & Doll, W. J. (1999). The development of a tool for measuring the perceived impact of information technology on work. Omega: The International Journal of Management Science, 27(3), 327–339.
Torkzadeh, G., & Dwyer, D. J. (1994). A path analytic study of determinants of information system usage. Omega, 22(4), 339–348.
Tung, A. C. (2001). Taiwan's semiconductor industry: What the state did and did not. Review of Development Economics, 5(2), 266–288.
Turban, E., Aronson, J., Liang, T. P., & Sharda, R. (2007). Decision support systems and intelligent systems (8th ed.). Upper Saddle River, NJ: Prentice Hall PTR.
Venkatesh, V., & Davis, F. D. (1996). A model of the antecedents of perceived ease of use: Development and test. Decision Sciences, 27(3), 451–481.
Venkatesh, V., & Davis, F. D. (2000). A theoretical extension of the technology acceptance model: Four longitudinal field studies. Management Science, 46(2), 186–204.
Venkatesh, V., Morris, M. G., Davis, G. B., & Davis, F. D. (2003). User acceptance of information technology: Toward a unified view. MIS Quarterly, 27(3), 425–478.
Wang, L., Xi, Y., & Huang, W. W. (2007). A validation of end-user computing satisfaction instrument in group decision support systems. In Proceedings of the International Conference on Wireless Communications, Networking and Mobile Computing (pp. 6025–6028). Shanghai, China: IEEE.
Xiao, L., & Dasgupta, S. (2002). Measurement of user satisfaction with web-based information systems: An empirical study. In Proceedings of the 8th Americas Conference on Information Systems, Dallas, TX (pp. 1149–1155).
Yuthas, K., & Young, S. T. (1998). Material matters: Assessing the effectiveness of materials management IS. Information & Management, 33(3), 115–124.
Zhu, K., & Kraemer, K. L. (2002). E-commerce metrics for net-enhanced organizations: Assessing the value of e-commerce to firm performance in the manufacturing sector. Information Systems Research, 13(3), 275–295.
Chung-Kuang Hou is an assistant professor in the Department of Information Management at Chia-Nan University of Pharmacy and Science, Taiwan. He received a PhD in management information systems from the University of Manchester, UK, in 2009. His primary research interests include the impacts of IT/IS on organizations and individuals, business intelligence systems, electronic commerce, and e-learning. Before his academic career, Dr. Hou had six years of experience in various business and management positions.