Task-Technology Fit and Individual Performance

By: Dale L. Goodhue
Information and Decision Sciences
University of Minnesota
271 19th Ave. South
Minneapolis, MN 55455
U.S.A.
dgoodhue@csom.umn.edu

Ronald L. Thompson
School of Business Administration
University of Vermont
Burlington, VT 05405
U.S.A.
thompson@emba.uvm.edu

Abstract

A key concern in Information Systems (IS) research has been to better understand the linkage between information systems and individual performance. The research reported in this study has two primary objectives: (1) to propose a comprehensive theoretical model that incorporates valuable insights from two complementary streams of research, and (2) to empirically test the core of the model. At the heart of the new model is the assertion that for an information technology to have a positive impact on individual performance, the technology: (1) must be utilized and (2) must be a good fit with the tasks it supports. This new model is moderately supported by an analysis of data from over 600 individuals in two companies. This research highlights the importance of the fit between technologies and users' tasks in achieving individual performance impacts from information technology. It also suggests that task-technology fit, when decomposed into its more detailed components, could be the basis for a strong diagnostic tool to evaluate whether information systems and services in a given organization are meeting user needs.

Introduction

The linkage between information technology and individual performance has been an ongoing concern in IS research. This article presents and tests a new, comprehensive model of this linkage by drawing on insights from two complementary streams of research (user attitudes as predictors of utilization and task-technology fit as a predictor of performance). The essence of this new model, called the Technology-to-Performance Chain (TPC), is the assertion that for an information technology to have a positive impact on individual performance, the technology must be utilized, and the technology must be a good fit with the tasks it supports.

This new model is consistent with one proposed by DeLone and McLean (1992) in that both utilization and user attitudes about the technology lead to individual performance impacts. It goes beyond the DeLone and McLean model in two important ways. First, it highlights the importance of task-technology fit (TTF) in explaining how technology leads to performance impacts. Task-technology fit is a critical construct that was missing or only implicit in many previous models. Second, it is more explicit concerning the links between the constructs, providing a stronger theoretical basis for thinking about a number of issues relating to the impact of IT on performance. These include: making choices for surrogate measures of MIS success,1 understanding the impact of user involvement on performance, and developing better diagnostics for IS problems.

1 "MIS success" is variously described as improved productivity (Bailey and Pearson, 1983), changes in organizational effectiveness, utility in decision making (Ives, et al., 1983), higher relative value or net utility of a means of inquiry (Swanson, 1974; 1982), etc. Thus, MIS success ultimately corresponds to what DeLone and McLean (1992) label individual impact or organizational impact. For our purposes, the paper focuses on individual performance impacts as the dependent variable of interest.

MIS Quarterly/June 1995 213
Task-Technology Fit
This paper describes the technology-to-performance chain model, and its major relationships are tested empirically using data from over 600 individuals using 25 different information technologies and working in 26 different departments in two companies.

Models Linking Technology and Performance

Described below are the two research streams mentioned earlier and the limitations of relying completely on either one alone.

Utilization focus research

The first (and most common) of the two complementary research streams on which the TPC is based is the "utilization focus" stream. This stream employs user attitudes and beliefs to predict the utilization of information systems (e.g., Cheney, et al., 1986; Davis, 1989; Davis, et al., 1989; Doll and Torkzadeh, 1991; Lucas, 1975; 1981; Robey, 1979; Swanson, 1987; Thompson, et al., 1991). The top model in Figure 1 shows a rough model of the way in which technology is said to affect performance in this research.

Most of the utilization research is based on theories of attitudes and behavior (Bagozzi, 1982; Fishbein and Ajzen, 1975; Triandis, 1980). Aspects of the technology (for example, high quality systems (Lucas, 1975) or chargeback policies (Olson and Ives, 1982)) lead to user attitudes (beliefs, affect) about systems (for example, usefulness (Davis, 1989) or user information satisfaction (Baroudi, et al., 1986)). User attitudes, along with social norms (Hartwick and Barki, 1994; Moore and Benbasat, 1992) and other situational factors, lead to intentions to utilize systems and ultimately to increased utilization. Stated or unstated, the implication is that increased utilization will lead to positive performance impacts.

Task-technology fit focus research

A smaller number of researchers have focused on situations where utilization can often be assumed, and have argued that performance impacts will result from task-technology fit; that is, when a technology provides features and support that "fit" the requirements of a task. This view is shown by the middle model in Figure 1, in which fit determines performance (and sometimes utilization) but without the richer model of utilization from above as a critical predictor of performance.

The "fit" focus has been most evident in research on the impact of graphs versus tables on individual decision-making performance. Two studies report that over a series of laboratory experiments, the impact of data representation on performance seemed to depend on fit with the task (Benbasat, et al., 1986; Dickson, et al., 1986). Another study proposes that mismatches between data representations (a technology characteristic) and tasks would slow decision-making performance by requiring additional translations between data representations or decision processes (Vessey, 1991). Still others found strong support for this linkage between "cognitive fit" and performance in laboratory experiments (Jarvenpaa, 1989; Vessey, 1991).

The case has been made for a more general theory of tasks, systems, individual characteristics, and performance (Goodhue, 1988). This study proposes that information systems (systems, policies, IS staff, etc.) have a positive impact on performance only when there is correspondence between their functionality and the task requirements of users.

There have also been links suggested between fit and utilization (shown by the dotted arrow in the middle model of Figure 1). At the organizational level, "fit" and utilization or adoption have been linked (Cooper and Zmud, 1990; Tornatzky and Klein, 1982). At the individual level, a "system/work fit" construct has been found to be a strong predictor of managerial electronic workstation use (Floyd, 1986; 1988).

Limitations of the utilization focus model

While each of these perspectives gives insight into the impact of information technology on performance, each alone has some important limitations. First, utilization is not always voluntary.
[Figure 1. Models of the link between technology and performance: a utilization focus model (top), a task-technology fit focus model (middle), and the combined Technology-to-Performance Chain (bottom).2,3]

2 An earlier version of this model was first presented by Goodhue (1992).

3 There is potential for some confusion in terminology here. Organizational researchers sometimes define technology quite broadly as actions used to transform inputs into outputs (e.g., Perrow, 1967; Fry and Slocum, 1984). That is, technologies are the tasks of individuals producing outputs. This paper differentiates technologies from tasks.
[Figure 2. The Technology-to-Performance Chain (TPC): task, technology, and individual characteristics jointly determine task-technology fit, which affects performance impacts both directly and through utilization (via beliefs about the consequences of use), with feedback from experienced performance.]
Specifically, TTF is the correspondence between task requirements, individual abilities, and the functionality of the technology.4

The antecedents of TTF are the interactions between task, technology, and individual. Certain kinds of tasks (for example, interdependent tasks requiring information from many organizational units) require certain kinds of technological functionality (for example, integrated databases with all corporate data accessible to all). As the gap between the requirements of a task and the functionalities of a technology widens, TTF is reduced. Starting with the assumption that no system provides perfect data to meet complex task needs without any expenditure of effort (i.e., there is usually some non-zero gap), we believe that as tasks become more demanding or technologies offer less functionality, TTF will decrease (Goodhue, forthcoming).

Utilization is the behavior of employing the technology in completing tasks. Measures such as the frequency of use or the diversity of applications employed (Davis, et al., 1989; Thompson, et al., 1991; 1994) have been used. However, the construct is arguably not yet well understood, and efforts to refine the conceptualization should be grounded in an appropriate reference discipline (Trice and Treacy, 1988).

Since the lower portion of the TPC model in Figure 2 is derived from other theories about attitudes (beliefs or affect) and behavior (Bagozzi, 1982; Fishbein and Ajzen, 1975; Triandis, 1980), it would seem an appropriate reference discipline. Consider the utilization of a specific system for a single, defined task in light of those theories. Beliefs about the consequences of use, affect toward use, social norms, etc. would lead to the individual's decision to use or not use the system. In this case, utilization should be conceptualized as the binary condition of use or no-use. We would not be interested in how long the individual used the system at this single, defined task, since length of use would be a consequence of the size of the task and/or the TTF of the system, not the choice to use the system.

If the focus is expanded to include a portfolio of some number of tasks (such as in a field study of information systems use), then the appropriate conceptualization would be the proportion of times the individual decided to use the system (the sum of the decisions to use, divided by the number of tasks). Note that this is quite different from conceptualizing utilization as the length of time or the frequency with which a system was used. Knowing that an individual decided to use a system three times means one thing if there were only four tasks, but something else if there were 20 tasks.

The antecedents of utilization can be suggested by theories about attitudes and behavior, as described above. Note that both voluntary and mandatory utilization are reflected in the model. Mandatory use can be thought of as a situation where social norms to use a system are very strong and overpower other considerations such as beliefs about expected consequences and affect.

The impact of TTF on utilization is shown via a link between task-technology fit and beliefs about the consequences of using a system. This is because TTF should be one important determinant of whether systems are believed to be more useful, more important, or give more relative advantage. All of these related constructs have been shown to predict utilization of systems (Davis, 1989; Hartwick and Barki, 1994; Moore and Benbasat, 1992), though they are not the only determinant, as the model shows.

Performance impact in this context relates to the accomplishment of a portfolio of tasks by an individual. Higher performance implies some mix of improved efficiency, improved effectiveness, and/or higher quality. As shown in Figure 2, not only does high TTF increase the likelihood of utilization, but it also increases the performance impact of the system regardless of why it is utilized. At any given level of utilization, a system with higher TTF will lead to better performance since it more closely meets the task needs of the individual.

Feedback is an important aspect of the model. Once a technology has been utilized and performance effects have been experienced, there

4 Perhaps a more accurate label for the construct would be task-individual-technology fit, but the simpler TTF label is easier to use.
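The portfolio conceptualization of utilization above (the sum of decisions to use, divided by the number of tasks) can be sketched in a few lines. This is an illustrative sketch, not the paper's instrument; the function name and the example data are ours.

```python
# Illustrative sketch: utilization over a task portfolio as the
# proportion of tasks for which the individual chose to use the
# system, rather than duration or frequency of use.

def utilization_proportion(use_decisions):
    """use_decisions: one boolean per task in the portfolio
    (True = the individual decided to use the system for that task)."""
    if not use_decisions:
        raise ValueError("portfolio must contain at least one task")
    return sum(use_decisions) / len(use_decisions)

# Three decisions to use mean different things for different portfolios:
small = utilization_proportion([True, True, True, False])          # 3 of 4 tasks
large = utilization_proportion([True, True, True] + [False] * 17)  # 3 of 20 tasks
```

With a four-task portfolio the same three use decisions yield a proportion of .75; with a twenty-task portfolio they yield .15, which is the distinction the text draws.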
[Figure 3. The portion of the TPC model tested in this study: task and technology characteristics are proposed to determine task-technology fit (Proposition 1), which in turn is proposed to predict utilization (Proposition 2) and performance impacts.]
response rate of 93 percent. The total usable respondents from both companies was 662.5

Measures, measurement validity, reliability

Where possible, measures were adapted from previous research. Because of a lack of adequate measurement scales, however, it was necessary to develop and refine some measures specifically for this study.

Task-technology fit has been measured by Goodhue (1993; forthcoming) within the user task domain of IT-supported decision making. From Goodhue's instrument we borrowed multiple questions on each of 14 dimensions of TTF addressing the extent to which existing information systems support the identification, access, and interpretation of data for decision making. To expand the task domain somewhat, two additional IT-supported user tasks were added: (1) responding to changed business requirements with new and modified systems, and (2) executing day-to-day business transactions. For these two new tasks multiple questions were developed on each of seven new dimensions addressing the extent to which IS meets user task needs: having sufficient understanding of the business, having sufficient interest and dedication, providing effective technical and business planning assistance, delivering agreed-upon solutions on time, responsiveness on requests for services, production timeliness, and impact of IS policies and standards on ability to do the job. Altogether this resulted in 48 questions measuring 21 dimensions of TTF.

Based on an assessment of the reliability and discriminant validity of the questions, 14 questions (and 5 dimensions) were dropped as being unsuccessfully measured.6 Using a principal components factor analysis with promax rotation, the remaining 34 questions (including 16 of the 21 original dimensions) were collapsed into eight clearly distinct factors of TTF. For all questions, factor loadings were at least .50 on the primary factor, and no more than .45 on any secondary factor. For only one question was the difference between the primary and the secondary loading less than .20, and in this one case the difference was .10.

Table 1 shows the mapping from the 16 remaining dimensions of TTF to the eight final TTF factors, as well as the Cronbach's alpha reliabilities for the eight factors, ranging from .60 to .88. This grouping of dimensions seemed quite reasonable, since similar dimensions were collected into more aggregate but still coherent TTF factors. The final eight components of TTF that were successfully measured included (1) data quality; (2) locatability of data; (3) authorization to access data; (4) data compatibility (between systems); (5) training and ease of use; (6) production timeliness (IS meeting scheduled operations); (7) systems reliability; and (8) IS relationship with users. The first five factors focused on meeting task needs for using data in decision making. The next two focused on meeting day-to-day operational needs, and the last focused on responding to changed business needs. The successful TTF questions are listed in the Appendix, Part A.

Task characteristics and their impact on information use have been studied by a great many researchers (e.g., Guinan, 1983; Daft and Macintosh, 1981; O'Reilly, 1982). Following Fry and Slocum's (1984) suggestion of a general characterization of tasks, Goodhue (forthcoming) combined Perrow's (1967) and Thompson's (1967) dimensions and successfully measured a two-dimensional construct of task characteristics: non-routineness (lack of analyzable search behavior) and interdependence (with other organizational units).

5 A concern with the two-company sample described above is that the model may apply so differently in the two companies that it is inappropriate to pool the data for a single analysis. We used Neter and Wasserman's (1974, pp. 160-161; see also Pedhazur, 1982, pp. 436-450) test for the equivalence of two regression lines to test whether it is appropriate to pool the data from the two companies. This involves testing a full model giving each company its own intercept and beta values, and comparing that to a restricted model with a single intercept and a single set of shared beta values. This test was performed for the regressions predicting utilization and performance impacts. In neither case was the improvement in fit for the full model significant at the .05 level, supporting our approach of pooling the data.

6 Details of the analysis of the measurement validity of all measures, as well as a correlation matrix, are available from the authors upon request.
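The pooling test described in footnote 5 (a full model with company-specific intercepts and slopes against a restricted pooled model) is a standard incremental F test and can be sketched as below. The data here are synthetic and the helper names are ours, not the study's; a real analysis would compare the resulting F statistic to the F distribution with (p, n_a + n_b - 2p) degrees of freedom.

```python
# Hedged sketch of the equivalence-of-regressions (Chow-type) test from
# footnote 5. Synthetic data; variable and function names are illustrative.
import numpy as np

def sse(y, X):
    """Residual sum of squares from an OLS fit of y on X."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return float(resid @ resid)

def pooling_f_stat(X_a, y_a, X_b, y_b):
    """F statistic comparing a restricted pooled model (one intercept and
    one slope vector) to a full model with company-specific intercepts
    and slopes. X_a and X_b exclude the intercept column."""
    n_a, k = X_a.shape
    n_b = X_b.shape[0]
    add1 = lambda X: np.column_stack([np.ones(len(X)), X])
    # Restricted model: a single set of coefficients for the pooled sample.
    sse_r = sse(np.concatenate([y_a, y_b]), add1(np.vstack([X_a, X_b])))
    # Full model: fit each company separately (equivalent to interacting
    # every term with a company dummy).
    sse_f = sse(y_a, add1(X_a)) + sse(y_b, add1(X_b))
    p = k + 1                      # parameters per company
    df_full = n_a + n_b - 2 * p
    return ((sse_r - sse_f) / p) / (sse_f / df_full)

rng = np.random.default_rng(0)
X_a, X_b = rng.normal(size=(40, 2)), rng.normal(size=(50, 2))
y_a = 1.0 + X_a @ np.array([0.5, -0.3]) + rng.normal(size=40)
y_b = 1.0 + X_b @ np.array([0.5, -0.3]) + rng.normal(size=50)
f = pooling_f_stat(X_a, y_a, X_b, y_b)  # small when pooling is appropriate
```

A non-significant F (as the authors report for both the utilization and the performance impact regressions) indicates that the company-specific coefficients do not improve fit enough to justify separate models, supporting pooling.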
Five measures of task characteristics (three questions on non-routineness and two on interdependence) were adopted from Goodhue's (forthcoming) study, as shown in the Appendix, Part B. A factor analysis separated the questions into two factors, with all questions loading at least .51 on their primary factor, and no more than .36 on their secondary factor. Cronbach's alpha reliabilities were .73 and .76 for non-routineness and interdependence respectively.

In addition to these general characteristics of tasks, several researchers have suggested managerial level as a determinant of user evaluations of IS (e.g., Cronan and Douglas, 1990; Franz and Robey, 1986). It is certainly true that the kinds of tasks users engage in (and the demands they make on their information systems and service providers) should vary considerably from clerical staff to low-level managers to higher-level managers. As a pragmatic proxy to capture these kinds of task differences, dummy variables were used for each of eight groupings of job title. Job titles in the two companies are shown in Table 2, matched where possible across the two companies. Though no specific hypotheses were made, we expected that differences in job title would affect user evaluations of TTF.

Technology characteristics facing users could be measured along a number of dimensions. With this measure we focused on two proxies for the underlying characteristics of the technology of information systems: first, the information systems used by each respondent, and second, the department of the respondent. The two organizations provided a large range of information systems for their employees. As part of the customization of the questionnaire, about 20 major systems in each company were identified. Each respondent identified up to five of these that they actually used. Twenty-five major systems (13 in Company A and 12 in Company B) were used by a minimum of five employees. Rather than try to define each system in terms of its characteristics, we made the simplifying assumption that the characteristics of any given system are the same for all who use that system. For respondents who used only a single system, the characteristics were captured by a dummy variable (1 indicates use of this system; 0 indicates no use). Where respondents used more than one system, the dummy variables were weighted. The weighting was accomplished by simply dividing 1 by the number of major systems used. For example, a respondent who used three major systems would receive a weighting of .33 for each of these three and a weighting of zero for all other systems. This approach allowed us to capture inherent differences between technologies without having to explicitly define those differences. In effect, the collection of dummy variables for system was used as a proxy for different, unspecified system characteristics.

The department of the respondent was also used as a second proxy measure for the characteristics of information systems. IS departments may have differentiated between user departments in terms of attention, emphasis, priority, and relationship management, perhaps because of the organization's strategic direction or historic inertia. These differences could have affected the level of service experienced by respondents in the different departments. A set of departmental dummy variables was used to capture the potentially different levels of attention paid by IS departments to each of 26 distinct user departments.

These were somewhat crude measures of characteristics of information systems and services, involving a great many dummy variables. Given our relatively large sample size, however, these measures did allow us to test the assertion that user evaluations were a function of the underlying systems used and the departments users were in.

Utilization should ideally be measured as the proportion of times users choose to utilize systems. Unfortunately, this proportion was extremely difficult to ascertain in a field study. In addition, there was also the problem of mandatory use. In many field situations, use of a system may be mandated as part of a job description. For example, a claims processor with the insurance company (Company B) had no choice but to use the system provided by his or her IS department. Regardless of the claims processor's evaluation of the system, it was not possible to process claims without using it.

Our solution was to conceptualize utilization as the extent to which the information systems have been integrated into each individual's work routine, whether by individual choice or by organizational mandate. This reflected the individual (or organizational) choice to accept the systems, or the institutionalization of those systems.

We operationalized this by asking users to rate how dependent they were on a list of systems available in their organizations. Respondents selected up to five systems that were major sources of information for them personally and self-reported on system-specific dependence. Dependence on each system was rated on a three-point scale (0 = not very dependent; 1 = somewhat dependent; 2 = very dependent). Overall dependence on systems (our measure of utilization) was calculated as the total dependence reported on up to five systems (the sum of the dependence responses).

Performance impact was measured by perceived performance impacts since objective measures of performance were unavailable in this field context, and at any rate would not have been compatible across individuals with different task portfolios. Three questions were used that asked individuals to self-report on the perceived impact of computer systems and services on their effectiveness, productivity, and performance in their job. Low correlations between one question and the others (.23 and .21) suggested that it was measuring something quite different from the other two (in this case, problems with the IS department as opposed to impact of systems on performance). This third question was dropped.
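The construction of the weighted system dummies and the dependence-based utilization score described above can be sketched as follows. The system names, function names, and ratings here are hypothetical illustrations, not the study's data.

```python
# Illustrative sketch of two measures described in the text: weighted
# system dummy variables and the dependence-based utilization score.

ALL_SYSTEMS = ["SYS_A", "SYS_B", "SYS_C", "SYS_D"]  # hypothetical system list

def system_dummies(systems_used):
    """Each system a respondent uses gets weight 1 / (number of major
    systems used); all other systems get 0."""
    w = 1.0 / len(systems_used) if systems_used else 0.0
    return {s: (w if s in systems_used else 0.0) for s in ALL_SYSTEMS}

def utilization(dependence_ratings):
    """Sum of dependence ratings (0 = not very, 1 = somewhat,
    2 = very dependent) over up to five selected systems."""
    if len(dependence_ratings) > 5:
        raise ValueError("respondents rated at most five systems")
    return sum(dependence_ratings)

# A respondent using three major systems gets weight 1/3 on each:
dummies = system_dummies(["SYS_A", "SYS_B", "SYS_C"])
# Ratings of very (2), somewhat (1), very (2) give a total dependence of 5:
score = utilization([2, 1, 2])
```

With five systems at most, the dependence score ranges from 0 to 10, and the dummy weights for each respondent sum to 1 across systems, so heavier multi-system users are not overcounted.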
[Table 3. Regressions predicting the eight TTF factors from task characteristics (non-routineness, interdependence), job level (job title dummies), and technology characteristics (systems used and department).]
of the regressions (the one predicting the quality of the relationship with IS) was significant at greater than the .001 level. At least one task characteristic was significant in six out of eight regressions, and at least one technology characteristic was significant in four out of eight. This is moderate support for Proposition 1. Below is a more detailed examination of the findings.

Effect of Task Characteristics on TTF. The strongest effect of task characteristics on TTF was from non-routine tasks. We found that individuals engaged in more non-routine tasks rated their information systems lower on data quality, data compatibility, data locatability, training/ease of use, and ease of getting authorization to access data (note the significance and the negative coefficients in column 1 of Table 3). This is consistent with the idea that because of the non-routine nature of their jobs, these people are constantly forced to use information systems to address new problems, such as seeking out new data and combining it in unfamiliar ways. Thus, they make more demands on systems and are more acutely aware of shortcomings. Interdependence of job tasks (column 2 of Table 3) was observed to influence perceptions of the compatibility and reliability of systems.

Finally, two factors of TTF are clearly affected by job level (column 3 of Table 3): compatibility and ease of getting authorization for access. Tables 4 and 5 show a more detailed analysis of the specific impact of the various job titles on these two factors. Lower and middle-level staff and managers found the data least compatible, while upper-level management found it most compatible. This is consistent with the proposition that upper-level management is often shielded from the hands-on difficulties of bringing together data from multiple sources and sees it only after the difficulties have been ironed out. It is lower and mid-level individuals who must pay with effort and frustration for data incompatibilities.

Similarly, Table 5 shows that upper-level management found it much easier obtaining authorization for access to data. On the other hand, administrative and clerical staff, with less organizational clout, faced red tape in getting permission to access the data they need.

Effect of Technology Characteristics on TTF. The two proxies for characteristics of the technology were "systems used" and "department." Together these were significant predictors for four of the eight factors of TTF. The specific findings (see columns 4 and 5 of Table 3) have good face validity, although not all anticipated influences were observed.

For example, department is a significant predictor of user evaluations of production timeliness and of training/ease of use. If IS groups focus special emphasis on strategically important or powerful departments, we might expect that different levels of training and easier-to-use, more up-to-date systems would be provided to those departments. To the extent that IS groups have consistent standards for production turnaround, interface design, training policies, and so on, there are likely some departments for whom these standards are more appropriate than for others. A third area where we expected to see differences between departments, but did not, was the relationship with IS. (But see footnote 7 below.)

Systems used is a significant predictor of locatability and systems reliability. This too conforms to our expectations. We might expect that some systems are better than others for locatability of data or for system reliability, and users reflect that in their ratings. Another area where we expected to see differences between systems, but did not, was in the quality of the data. It is possible that our proxy measures of technology characteristics were too crude to pick up any but the strongest influences within this study.7

7 The absence of an effect of department on relationship with IS and of system on quality of the data is sufficiently perplexing to suggest doing some secondary exploratory analysis. Since some systems may be department-specific, there is the possibility that including dummy variables for both department and system in the same regression (47 dummy variables in all) dilutes the impact that either group alone would have. For this reason the data were reanalyzed, dropping system from the analysis of relationship with IS and dropping department from the analysis of quality. Under these circumstances we found both of the expected relationships. Without system in the analysis, department is a significant (.05) predictor of relationship with IS. Without department in the analysis, system is a significant (.05) predictor of quality. This suggests that with stronger measures of technology characteristics, this aspect of the model might have stronger empirical support.
Table 4. Effect of Job Titles on User Evaluations of Data Compatibility*

Administrative/Clerical Staff        -.33
Manager/Assistant Director           -.27
Director/Assistant Superintendent    -.23
Supervisor/Assistant Manager         -.10
Analyst/Technical                    -.08
Trainmaster/Roadmaster                .00
Professional                          .26
Superintendent/VP and up              .29

* Job titles are ordered by impact from more negative to more positive. The numbers shown are the regression beta coefficients for the dummy variables reflecting membership in each job category. (Overall effect is significant at .001.)

In hindsight, it seems reasonable that characteristics of the technology would influence some but not all TTF components. For example, it is unlikely that differences between systems will have any influence on whether a user has the authority to access data; it is much more likely that job level will influence authority. Overall, these results suggest that task and technology characteristics do influence user ratings of task-technology fit, giving moderate support for Proposition P1.

P2: Does TTF Predict Utilization?

The arrow from Task-Technology Fit to Utilization in Figure 3 shows Proposition 2. Strong support would require a significant regression and significant positive links between at least some of the eight TTF factors and utilization. The results (shown in Table 6) provide little support for the hypothesized relation. Although the regression as a whole and three of the path coefficients were statistically significant, the adjusted R2 was only .02.

In addition, two of the three significant path coefficients (reliability of systems and relationship with IS) had negative path coefficients. Interpreted at face value, these would suggest that individuals who rated their systems as less reliable, or their relationship with IS as poorer, would be more likely to use the systems. This contrary behavior seems implausible.

A more compelling interpretation is that in this case the causal effect works in the other direction (through the feedback mechanism shown in Figure 2). For example, perhaps individuals who use the systems a great deal and are very dependent on them will be more frustrated by system downtime and the performance impacts it engenders. These highly dependent users are more likely to be stymied in their work by downed systems and more likely to rate those systems as unreliable. Similarly, people who are more dependent on systems might be more frustrated with poor relationships with the IS department and might give poorer evaluations of that relationship. This is quite different from numerous findings showing the link from user attitudes (beliefs, affect) to utilization (e.g., Davis, 1989; Hartwick and Barki, 1994; Moore and Benbasat, 1992; Thompson, et al., 1991), but is consistent with arguments made by Melone (1990) that under certain circumstances utilization will influence attitudes.

Several possible explanations for lack of support for Proposition 2 should be noted. First, this paper has conceptualized utilization as dependence on information systems, rather than on the more common concept of duration or frequency of use. Though we have raised some questions about the applicability of these other conceptualizations in a field study with portfolios of tasks,

Table 5. Effect of Job Titles on User Evaluations of Ease of Authorization*

Administrative/Clerical Staff  -.32
Analyst/Technical              -.11
Manager/Assistant Director      .00
Trainmaster/Roadmaster          .00
preted within a theoretical framework in which
Supervisor/Assistant Manager .06
attitudes (beliefs, affect) determine behavior, the
two negative links suggest that users who be• Director/Assistant Superintendent .06
lieve that systems are less reliable and who are SuperintendenWP and up .21
less positive about the relationship with IS, will Professional .31
• Job titles are ordered by impact from more negative
to more positive. The numbers shown are the
regression beta coefficients for the dummy
variables reflecting membership in each Job
category {Overall effect is significant at 001.)
it might be that this shift in conceptualization is responsible for the weak link between TTF beliefs and behavior. Testing this was possible in a secondary analysis, since for Company B we had gathered additional utilization measures for duration and frequency of use. Two additional regressions were run, one with TTF predicting duration and the other with TTF predicting frequency. Though the R2 increased to .10 for both new regressions, in each case the strongest link by far was between negative beliefs about "systems reliability" or "relationship with IS" and greater utilization. Thus, it appeared that our conceptualization of utilization is not responsible for the lack of support for Proposition 2.

A more promising explanation is that the direct link between TTF and utilization proposed in Figure 3 may not be justified in general. That is, TTF may not dominate the decision to utilize technology. Rather, other influences from attitude and behavior theory, such as habit (Ronis et al., 1989), social norms (and mandated use), etc., may dominate, at least in these organizations. This would suggest that testing the link between TTF and utilization requires much more detailed attention to other variables from attitude and behavior research.

A third possibility is that none of the current conceptualizations of utilization are well suited for field settings where many technologies are available and individuals face a portfolio of tasks. The resolution to this dilemma will have to await further research.

P3: Does TTF Predict Performance Impact Better Than Utilization Alone?

Finally, the arrows from Task-Technology Fit and Utilization to Performance Impacts show Proposition 3. Strong support would require that both TTF and utilization be significant predictors of performance impacts. Again the test suggested by Neter and Wasserman (1974, p. 274) was used to explicitly test the importance of adding the eight TTF factors as a group to a regression predicting performance using utilization.

To get a complete picture, we ran three regressions predicting performance impact, using three different sets of independent variables: (1) only utilization, (2) only the eight TTF factors, and (3) both the eight TTF factors and utilization. The results are shown in Table 7. Utilization alone explained 4 percent (adjusted R2) of the variance in performance, while TTF alone explained 14 percent. Together, TTF and utilization explained 16 percent of the variance.8 The F-test of the improvement in fit from adding the eight TTF factors as a group was significant at the .001 level.

Table 7 (the full Model 3) shows that quality of the data, production timeliness, and relationship with IS all predict higher perceived impact of information systems, beyond what could be predicted by utilization alone.9 Though we need to be careful about generalizing too freely about the impact of specific factors of TTF from a sample including only two companies (including more companies in our sample might bring other factors into sharper focus), the results do strongly support Proposition 3. It appears that performance impacts are a function of both task-technology fit and utilization, not utilization alone.

8 Although an adjusted R2 of .16 is not high, it is in line with results from other field research predicting user perceptions of performance impacts (for example, Franz and Robey, 1986).

9 One perplexing finding from Model 2 in Table 7 is the significant negative relationship between compatibility and performance impacts. However, this relationship drops to insignificance in Model 3 (including utilization), which we believe to be the correctly specified model. This suggests that the negative Model 2 relationship is spurious.

Conclusion

Even with some caveats, the TPC model represents an important evolution in our thinking from the earlier models in Figure 1, which show how technologies add value to individual performance. We found moderately supportive evidence that user evaluations of TTF are a function of both systems characteristics and task characteristics, and strong evidence that to predict performance both TTF and utilization must be included. Evidence of the causal link between TTF and utilization was more ambiguous, with the suggestion that, at least in these companies, utilization could cause beliefs about TTF through feedback from performance outcomes.
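The test for Proposition 3, adding the TTF factors as a group to a regression that already contains utilization, is an incremental (partial) F-test of the kind described by Neter and Wasserman (1974). The sketch below illustrates the mechanics with simulated data and, for brevity, three hypothetical TTF factors rather than the study's eight; it is not the authors' code or data.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Simulated data: one utilization measure and three hypothetical TTF
# factors, two of which genuinely drive perceived performance impact.
util = rng.normal(size=n)
ttf = rng.normal(size=(n, 3))
perf = 0.2 * util + ttf @ np.array([0.5, 0.3, 0.0]) + rng.normal(size=n)

def sse(X, y):
    """Residual sum of squares and parameter count for an OLS fit with intercept."""
    X1 = np.hstack([np.ones((len(y), 1)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return float(resid @ resid), X1.shape[1]

# Reduced model: utilization only.  Full model: utilization + TTF factors.
sse_red, p_red = sse(util[:, None], perf)
sse_full, p_full = sse(np.hstack([util[:, None], ttf]), perf)

# Partial F-test for the improvement in fit from adding the TTF
# factors as a group to the utilization-only regression.
q = p_full - p_red                                   # number of added predictors
F = ((sse_red - sse_full) / q) / (sse_full / (n - p_full))
print(round(F, 2))
```

A large F relative to the F(q, n - p_full) critical value indicates that the added group of predictors improves fit beyond what the reduced model achieves, which is the logic used above to conclude that TTF adds explanatory power beyond utilization alone.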
…dimensions for use in comparing the information technology base across companies. Similarly, it would be important to continue work on the issue of defining and measuring utilization to obtain a better understanding of the role of this construct. It is also important to go beyond perceived performance impacts, perhaps by constructing a laboratory environment in which the model can be tested with objective measures of performance.

A second avenue for future research is to expand the scope of testing across more diverse settings. Testing across a wider scope of companies would give a better sense of the relative importance of various components of TTF. Clearly there is a dilemma here, since using more diverse settings would tend to dilute the impact of particular effects, but give greater clarity to effects that are more generally present. An additional opportunity is to explicitly examine feedback in the model. For example, an interesting area for investigation would be the effect of performance impacts on utilization, either directly or indirectly through changes in user ratings of TTF and perceived consequences of use.

Models are ways to structure what we know about reality, to clarify understandings, and to communicate those understandings to others. Once articulated and shared, a model can guide thinking in productive ways, but it can also constrain our thinking into channels consistent with the model, blocking us from seeing part of what is happening in the domain we have modeled. We believe the TPC model is a useful evolution of the models in which IT leads to performance impacts. It should provide a better basis for understanding these critical constructs and for understanding how they link to other related IS research issues.

References

Adams, D.A., Nelson, R.R., and Todd, P.A. "Perceived Usefulness, Ease of Use, and Usage of Information Technology: A Replication," MIS Quarterly (16:2), June 1992, pp. 227-248.
Bagozzi, R.P. "A Field Investigation of Causal Relations Among Cognitions, Affect, Intentions and Behavior," Journal of Marketing Research (19), November 1982, pp. 562-584.
Bailey, J.E. and Pearson, S.W. "Development of a Tool for Measuring and Analyzing Computer User Satisfaction," Management Science (29:5), May 1983, pp. 530-544.
Barclay, D., Higgins, C.A., and Thompson, R.L. "The Partial Least Squares (PLS) Approach to Causal Modeling: Personal Computer Use as an Illustration," Technology Studies: Special Issue on Research Methodology, forthcoming, 1995.
Baroudi, J.J., Olson, M.H., and Ives, B. "An Empirical Study of the Impact of User Involvement on System Usage and Information Satisfaction," Communications of the ACM (29:3), March 1986.
Benbasat, I., Dexter, A.S., and Todd, P. "An Experimental Program Investigating Color-Enhanced and Graphical Information Presentation: An Integration of the Findings," Communications of the ACM (29:11), November 1986, pp. 1094-1105.
Cheney, P.H., Mann, R.I., and Amoroso, D.L. "Organizational Factors Affecting the Success of End-User Computing," Journal of Management Information Systems (3:1), 1986, pp. 65-80.
Cooper, R. and Zmud, R. "Information Technology Implementation Research: A Technological Diffusion Approach," Management Science (36:2), February 1990, pp. 123-139.
Cronan, T.P. and Douglas, D.E. "End-User Training and Computing Effectiveness in Public Agencies," Journal of Management Information Systems (6:4), Spring 1990.
Culnan, M.J. "Environmental Scanning: The Effects of Task Complexity and Source Accessibility on Information Gathering Behavior," Decision Sciences (14:2), April 1983, pp. 194-206.
Daft, R.L. and Macintosh, N.B. "A Tentative Exploration into the Amount and Equivocality of Information Processing in Organizational Work Units," Administrative Science Quarterly (26), 1981, pp. 207-224.
Davis, F.D. "Perceived Usefulness, Perceived Ease of Use, and User Acceptance of Information Technology," MIS Quarterly (13:3), September 1989, pp. 319-342.
Davis, F.D., Bagozzi, R.P., and Warshaw, P.R. "User Acceptance of Computer Technology: A Comparison of Two Theoretical Models," Management Science (35:8), August 1989, pp. 983-1003.
DeLone, W.H. and McLean, E.R. "Information Systems Success: The Quest for the Dependent Variable," Information Systems Research (3:1), March 1992, pp. 60-95.
Dickson, G.W., DeSanctis, G., and McBride, D.J. "Understanding the Effectiveness of Computer Graphics for Decision Support: A Cumulative Experimental Approach," Communications of the ACM (29:1), January 1986, pp. 40-47.
Doll, W.J. and Torkzadeh, G. "The Measurement of End-User Computing Satisfaction: Theoretical and Methodological Issues," MIS Quarterly (15:1), March 1991, pp. 5-12.
Fishbein, M. and Ajzen, I. Belief, Attitude, Intention and Behavior: An Introduction to Theory and Research, Addison-Wesley, Boston, 1975.
Floyd, S.W. "A Causal Model of Managerial Electronic Workstation Use," unpublished doctoral dissertation, University of Colorado at Boulder, Boulder, CO, 1986.
Floyd, S.W. "A Micro Level Model of Information Technology Use by Managers," in Studies in Technological Innovation and Human Resources (Vol. 1): Managing Technological Development, U.E. Gattiker (ed.), Walter de Gruyter, Berlin & New York, 1988, pp. 123-142.
Fornell, C. (ed.) A Second Generation of Multivariate Analysis, Vol. 1: Methods, Praeger, New York, 1982.
Fornell, C. "A Second Generation of Multivariate Analysis: Classification of Methods and Implications for Marketing Research," unpublished working paper, Graduate School of Business Administration, The University of Michigan, Ann Arbor, MI, 1984.
Franz, C.R. and Robey, D. "Organizational Context, User Involvement, and the Usefulness of Information Systems," Decision Sciences (17:3), Summer 1986, pp. 329-356.
Fry, L.W. and Slocum, J.W. "Technology, Structure, and Workgroup Effectiveness: A Test of a Contingency Model," Academy of Management Journal (27:2), 1984, pp. 221-246.
Goodhue, D.L. "IS Attitudes: Toward Theoretical and Definition Clarity," Data Base (19:3/4), Fall/Winter 1988, pp. 6-15.
Goodhue, D.L. "User Evaluations of MIS Success: What Are We Really Measuring?" Proceedings of the Hawaii International Conference on Systems Sciences, Vol. 4, Kauai, Hawaii, January 1992, pp. 303-314.
Goodhue, D.L. "Understanding the Linkage Between User Evaluations of Systems and the Underlying Systems," working paper, MIS Research Center, University of Minnesota, Minneapolis, MN, 1993.
Goodhue, D.L. "Understanding User Evaluations of Information Systems," Management Science, forthcoming.
Hartwick, J. and Barki, H. "Explaining the Role of User Participation in Information System Use," Management Science (40:4), April 1994, pp. 440-465.
Ives, B., Olson, M.H., and Baroudi, J.J. "The Measurement of User Information Satisfaction," Communications of the ACM (26:10), October 1983, pp. 785-793.
Jarvenpaa, S.L. "The Effect of Task Demands and Graphical Format on Information Processing Strategies," Management Science (35:3), March 1989, pp. 285-303.
Lucas, H. "Performance and the Use of an Information System," Management Science (21:8), April 1975, pp. 908-919.
Lucas, H. The Analysis, Design, and Implementation of Information Systems, McGraw-Hill, New York, 1981.
Mathieson, K. "Predicting User Intentions: Comparing the Technology Acceptance Model with the Theory of Planned Behavior," Information Systems Research (2:3), September 1991, pp. 173-191.
Melone, N.P. "A Theoretical Assessment of the User-Satisfaction Construct in Information System Research," Management Science (36:1), January 1990, pp. 76-91.
Moore, G.C. and Benbasat, I. "An Empirical Examination of a Model of the Factors Affecting Utilization of Information Technology by End Users," working paper, University of British Columbia, Vancouver, B.C., 1992.
Neter, J. and Wasserman, W. Applied Linear Statistical Models, Richard D. Irwin, Homewood, IL, 1974.
Olson, M.H. and Ives, B. "Chargeback Systems and User Involvement in Systems - An Empirical Investigation," MIS Quarterly (6:2), 1982, pp. 47-60.
O'Reilly, C.A. "Variations in Decision Makers' Use of Information Sources: The Impact of Quality and Accessibility of Information," Academy of Management Journal (25:4), 1982, pp. 756-771.
Pedhazur, E. Multiple Regression in Behavioral Research (2nd ed.), Holt, Rinehart and Winston, New York, 1982.
Pentland, B.T. "Use and Productivity in Personal Computers: An Empirical Test," Proceedings of the Tenth International Conference on Information Systems, Boston, MA, December 1989, pp. 211-222.
Perrow, C. "A Framework for the Comparative Analysis of Organizations," American Sociological Review (32:2), 1967, pp. 194-208.
Pettingell, K., Marshall, T., and Remington, W. "A Review of the Influence of User Involvement on System Success," Proceedings of the Ninth International Conference on Information Systems, Minneapolis, MN, December 1988, pp. 227-236.
Robey, D. "User Attitudes and Management Information System Use," Academy of Management Journal (22:3), 1979, pp. 527-538.
Ronis, D.L., Yates, J.F., and Kirscht, J.P. "Attitudes, Decisions, and Habits as Determinants of Repeated Behavior," in Attitude Structure and Function, A.R. Pratkanis, S. Breckler, and A.G. Greenwald (eds.), Lawrence Erlbaum Associates, Hillsdale, NJ, 1989.
Straub, D.W. and Trower, J.K. "The Importance of User Involvement in Successful Systems: A Meta-Analytical Reappraisal," MISRC-WP-89-01, Management Information Systems Research Center, University of Minnesota, Minneapolis, MN, 1989.
Swanson, E.B. "Management Information Systems: Appreciation and Involvement," Management Science (21:2), October 1974, pp. 178-188.
Swanson, E.B. "Measuring User Attitudes in MIS Research: A Review," Omega (10:2), 1982, pp. 157-165.
Swanson, E.B. "Information Channel Disposition and Use," Decision Sciences (18:1), Winter 1987, pp. 131-145.
Thompson, J.D. Organizations in Action, McGraw-Hill, New York, 1967.
Thompson, R.L., Higgins, C.A., and Howell, J.M. "Towards a Conceptual Model of Utilization," MIS Quarterly (15:1), March 1991, pp. 125-143.
Thompson, R.L., Higgins, C.A., and Howell, J.M. "Influence of Experience on Personal Computer Utilization: Testing a Conceptual Model," Journal of Management Information Systems (11:1), 1994, pp. 167-187.
Tornatzky, L.G. and Klein, K.J. "Innovation Characteristics and Innovation Adoption-Implementation: A Meta-Analysis of Findings," IEEE Transactions on Engineering Management (29:1), February 1982, pp. 28-45.
Triandis, H.C. "Values, Attitudes and Interpersonal Behavior," in Nebraska Symposium on Motivation, 1979: Beliefs, Attitudes and Values, H.E. Howe (ed.), University of Nebraska Press, Lincoln, NE, 1980, pp. 195-259.
Trice, A.W. and Treacy, M.E. "Utilization as a Dependent Variable in MIS Research," Data Base (19:3/4), Fall/Winter 1988.
Vessey, I. "Cognitive Fit: A Theory-Based Analysis of the Graphs Vs. Tables Literature," Decision Sciences (22:2), Spring 1991, pp. 219-240.

About the Authors

Dale L. Goodhue is an assistant professor of MIS at the University of Minnesota's Carlson School of Management. He received his Ph.D. in MIS from MIT, and has published in MIS Quarterly, Data Base, Information & Management, and (soon) Management Science. His research interests include measuring the impact of information systems, the impact of task-technology fit on performance, and the management of data and other IS infrastructures/resources.

Ronald L. Thompson is an associate professor with the School of Business Administration, University of Vermont. He holds a Ph.D. from the University of Western Ontario (Canada), and gained experience in ranching and banking prior to entering academe. His articles have appeared in journals such as MIS Quarterly, Journal of Management Information Systems, Information & Management, and the Journal of Creative Behaviour. Ron's current research interests focus on factors influencing the adoption and use of information technology by individuals, as well as the relation between IT use and individual performance. His book, Information Technology and Management (co-authored with W. Cats-Baril), is scheduled for release by Irwin Publishing in 1996.
Appendix: Construct Measures and Descriptive Statistics

In each company, the basic research questionnaire was customized by inserting precise acronyms and terms so that names of systems and departments would be readily identifiable by the respondents.

COMP3 - When it's necessary to compare or consolidate data from different sources, I find that there may be unexpected or difficult inconsistencies.

Production Timeliness
TIMELINESS: (IS meets pre-defined production turnaround schedules.)
PROD1 - IS, to my knowledge, meets its production schedules such as report delivery and running scheduled jobs.
PROD2 - Regular IS activities (such as printed report delivery or running scheduled jobs) are completed on time.

Systems Reliability
SYSTEMS RELIABILITY: (Dependability and consistency of access and uptime of systems.)
RELY1 - I can count on the system to be "up" and available when I need it.
RELY2 - The computer systems I use are subject to unexpected or inconvenient down times, which makes it harder to do my work.
RELY3 - The computer systems I use are subject to frequent problems and crashes.

Ease of Use / Training
EASE OF USE OF HARDWARE & SOFTWARE: (Ease of doing what I want to do using the system hardware and software for submitting, accessing, and analyzing data.)
EASE1 - It is easy to learn how to use the computer systems I need.
EASE2 - The computer systems I use are convenient and easy to use.
TRAINING: (Can I get the kind of quality computer-related training when I need it?)
TRNG1 - There is not enough training for me or my staff on how to find, understand, access, or use the company computer systems.
TRNG2 - I am getting the training I need to be able to use company computer systems, languages, procedures, and data effectively.

Relationship with Users
IS UNDERSTANDING OF BUSINESS: (How well does IS understand my unit's business mission and its relation to corporate objectives?)
UNBS1 - The IS people we deal with understand the day-to-day objectives of my work group and its mission within our company.
UNBS2 - My work group feels that IS personnel can communicate with us in familiar business terms that are consistent.
IS INTEREST AND DEDICATION: (to supporting customer business needs.)
INDN1 - IS takes my business group's business problems seriously.
INDN2 - IS takes a real interest in helping me solve my business problems.
RESPONSIVENESS: (Turnaround time for a request submitted for IS service.)
RESP1 - It often takes too long for IS to communicate with me on my requests.
RESP2 - I generally know what happens to my request for IS services or assistance, or whether it is being acted upon.
RESP3 - When I make a request for service or assistance, IS normally responds to my request in a timely manner.
CONSULTING: (Availability and quality of technical and business planning assistance for systems.)
CONS1 - Based on my previous experience, I would use IS technical and business planning consulting services in the future if I had a need.
CONS2 - I am satisfied with the level of technical and business planning consulting expertise I receive from IS.
IS PERFORMANCE: (How well does IS keep its agreements?)
PERF2 - IS delivers agreed-upon solutions to support my business needs.

PART 8. TASK/JOB CHARACTERISTICS MEASURES
TASK EQUIVOCALITY
ADHC1 - I frequently deal with ill-defined business problems.
ADHC2 - I frequently deal with ad-hoc, non-routine business problems.
ADHC3 - Frequently the business problems I work on involve answering questions that have never been asked in quite that form before.
TASK INTERDEPENDENCE
INTR1 - The business problems I deal with frequently involve more than one business function.
INTR2 - The problems I deal with frequently involve more than one business function.

MIS Quarterly/June 1995 235
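Several of the items above (e.g., RELY2, RELY3, TRNG1, RESP1) are negatively worded. A common scoring step for such instruments, sketched here with invented responses and an assumed 1-7 agreement scale (the excerpt does not specify the scoring procedure), is to reverse-score the negatively worded items before averaging them into a construct score:

```python
# Hypothetical responses (1-7 agreement scale) to the three systems
# reliability items listed above.  RELY2 and RELY3 are negatively worded,
# so they are reverse-scored before averaging into a construct score.
responses = {"RELY1": 6, "RELY2": 2, "RELY3": 1}
negatively_worded = {"RELY2", "RELY3"}

def reverse(score, low=1, high=7):
    """Map a response on a low..high scale onto its mirror image."""
    return low + high - score

scored = [
    reverse(v) if item in negatively_worded else v
    for item, v in responses.items()
]
construct_score = sum(scored) / len(scored)
print(construct_score)  # (6 + 6 + 7) / 3
```

With this convention, a respondent who strongly agrees that systems are "up" when needed and strongly disagrees that they crash frequently receives a uniformly high reliability score, so all items contribute in the same direction.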