
Computers & Education 50 (2008) 894-905
www.elsevier.com/locate/compedu

Multi-criteria evaluation of the web-based e-learning system: A methodology based on learner satisfaction and its applications
Daniel Y. Shee *, Yi-Shun Wang
Department of Information Management, National Changhua University of Education, Changhua 500, Taiwan
Received 30 May 2006; received in revised form 19 August 2006; accepted 11 September 2006

Abstract

The web-based e-learning system (WELS) has emerged as a new means of skill training and knowledge acquisition, encouraging both academia and industry to invest resources in the adoption of this system. Traditionally, most pre- and post-adoption tasks related to evaluation are carried out from the viewpoints of technology. Since users have been widely recognized as being a key group of stakeholders in influencing the adoption of information systems, their attitudes toward this system are pivotal. Therefore, based on the theory of multi-criteria decision making and the research products of user satisfaction from the fields of human-computer interaction and information systems, this study proposed a multi-criteria methodology from the perspective of learner satisfaction to support those evaluation-based activities taking place at the pre- and post-adoption phases of the WELS life cycle. In addition, by following this methodology, this study empirically investigated learners' perceptions of the relative importance of decision criteria. This investigation carried out a survey of college students, and the data thus obtained were then analyzed by the analytic hierarchy process in order to derive an integrated preference structure of learners as a ground for evaluation. We found that learners regarded the learner interface as being the most important dimension of decision criteria. Future applications of these results are recommended and the implications are discussed.
© 2006 Elsevier Ltd. All rights reserved.
Keywords: Web-based e-learning system; Multi-criteria methodology; Learner satisfaction

1. Introduction

The capability and flexibility of the web-based e-learning system (WELS) have been demonstrated in both training and education, resulting in its adoption by academia as well as industry. Since the commercial application package (or commercial off-the-shelf) strategy of system development is so widespread (Whitten, Bentley, & Dittman, 2004), the proliferation of WELS applications has created confusion for potential adopters who must select from among candidate products or solutions. At the same time, those organizations which have already adopted a system are faced with issues that arise in the post-adoption phase. For example, what are the improvements or enhancements that must be carried out?
* Corresponding author. Tel.: +886 4 7232105x7614; fax: +886 4 7211162. E-mail addresses: dyshee@cc.ncue.edu.tw (D.Y. Shee), yswang@cc.ncue.edu.tw (Y.-S. Wang).

0360-1315/$ - see front matter © 2006 Elsevier Ltd. All rights reserved. doi:10.1016/j.compedu.2006.09.005


What is the priority maintenance item that deserves the limited resources of the organization? A prior evaluation is required to answer these questions, and a sound methodology is the key to effective evaluation. Conventional approaches for evaluating an information system (IS) have leaned towards the standpoints of technical personnel (Kao, 1998; Karat, 1988; Smith & Williams, 1999). In contrast, the WELS places particular stress on certain areas, such as the content and the ways in which it is presented, demonstrating that it is a highly user-oriented system. Since users are widely recognized as the key stakeholders in any IS or IS service (Jiang, Klein, Roan, & Lin, 2001), their attitudes toward the system are pivotal and should be valued. This is evidenced by the fact that user satisfaction is often seen as a key antecedent in predicting the success of a particular IS (DeLone & McLean, 2003; Melone, 1990; Raymond, 1987), or in anticipating a user's reuse behavior (Gelderman, 1998; Lin & Wang, 2006; Lin, Wu, & Tsai, 2005). Hence, this study will apply the construct of user satisfaction to evaluate WELS. In the context of WELS, however, there is a special group of users, the learners, who hold a unique view regarding satisfaction (Wang, 2003); in effect, they are e-learners. This means that traditional measures for assessing user satisfaction with IS, and for assessing learner satisfaction in the context of classroom teaching, are not suitable for web-based e-learning. The purpose of this study is twofold. First, it will propose a step-based, multi-criteria methodology from the perspective of e-learner satisfaction in order to support important evaluation-related tasks (e.g., selection from candidate products or solutions, and maintenance) which are carried out at the pre- or post-adoption phase of the WELS life cycle. This includes defining the constituent steps and recommending tools and techniques which can be used in each step. Second, by following the proposed methodology, this study will investigate how e-learners perceive the relative importance of the decision criteria and the dimensions of these criteria in order to construct a preference structure, which is key in this decision-making process. The remainder of this paper is organized as follows: Section 2 discusses the WELS, the development of user satisfaction scales, and the theory of multi-criteria decision making. Section 3 proposes and explains the methodology. Section 4 describes how the empirical investigation was carried out to derive the preference structure, and presents the results of the investigation. Section 5 provides a case study as an illustrative example. Finally, we present the discussion in Section 6 and draw our conclusions in Section 7.

2. Theoretical background

2.1. Web-based e-learning system

E-learning refers to the use of electronic devices for learning, including the delivery of content via electronic media such as the Internet/Intranet/Extranet, audio or video tape, satellite broadcast, interactive TV, CD-ROM, and so on (Kaplan-Leiserson, 2000). This type of learning moves the traditional instruction paradigm to a learning paradigm (Jönsson, 2005), thereby relinquishing much control over planning and selection to the learners. In addition, it is capable of bringing the following advantages to learners: cost-effectiveness, timely content, and access flexibility (Hong, Lai, & Holton, 2003; Lorenzetti, 2005; Rosenberg, 2001).
E-learning applications may appear under different designations such as web-based learning, virtual classrooms, and digital collaboration (Kaplan-Leiserson, 2000; Khalifa & Kwok, 1999). This study is focused on web-based e-learning, which is conducted using the Internet (or Intranet/Extranet) and web technologies. This type of e-learning places a greater emphasis on the enabling or facilitating role technology plays in data search and transmission, interactivity, and personalization (Piccoli, Ahmad, & Ives, 2001). Regarding the design and construction of a WELS, the trend is toward incorporating two different technologies into an integrated environment: webpage-based computer-assisted instruction (Fletcher-Flinn & Gravatt, 1995), which is basically the tutorial, drill and practice, and learning networks (Hiltz & Turoff, 2002), which include extensive learner-learner and instructor-learner communication and interaction. Learners in this environment can thus access remote resources, as well as interact with instructors and other learners to satisfy their requirements.

2.2. From user satisfaction to e-learner satisfaction

Satisfaction is the pleasure or contentment that one person feels when she/he does something or gets something that she/he wanted or needed to do or get (Collins Cobuild English Dictionary, 1999). When used in


research, satisfaction is usually conceptualized as the aggregate of a person's feelings or attitudes toward the many factors that affect a certain situation (Bailey & Pearson, 1983). In the field of human-computer interaction, user satisfaction is usually visualized as the expression of affections gained from an interaction (Mahmood, Burn, Gemoets, & Jacquez, 2000). This means that user satisfaction is the subjective sum of interactive experiences influenced by many affective components in the interaction (Lindgaard & Dudek, 2003). In the field of IS, the concept of user satisfaction is usually used to represent the degree to which users believe the IS they are using conforms to their requirements (Cyert & March, 1963). In the past, many scholars have attempted to measure user satisfaction. The results of their efforts revealed that user satisfaction is a complex construct and its substance varies with the nature of the experience or case. In the field of human-computer interaction, user satisfaction is traditionally measured in terms of visual appeal, productivity, and usability (Hassenzahl, Beau, & Burmester, 2001; Lindgaard & Dudek, 2003). Since the early 1980s, many scholars in the field of IS began to conduct systematic studies to develop a comprehensive and reasonable set of factors to measure user satisfaction. For example, Bailey and Pearson (1983) developed an instrument with 39 items to measure the level of perceived user satisfaction with IS. Ives, Olson, and Baroudi (1983) proposed the scale of User Information Satisfaction, which consists of three parts: electronic data processing staff and services, information product, and knowledge or involvement. Doll and Torkzadeh (1988) developed a questionnaire with 18 items, which can be classified into five dimensions (system content, system accuracy, report format, ease of use, and system timeliness), to assess End-user Computing Satisfaction. New scales developed in the past decade are largely based on the aforementioned products. The prevalence of web-based e-learning applications has stimulated the development of e-learner satisfaction scales adapted directly from teaching quality scales in the field of educational psychology (e.g., Cashin & Downey, 1992; Cohen, 1981; Marsh, 1991), or from user satisfaction scales in the fields of human-computer interaction and IS. However, applying the achievements of any single field is deemed insufficient, because doing so can omit some critical aspects of learner satisfaction with the WELS. Based on the scales of students' evaluation of teaching effectiveness and user satisfaction, Wang (2003) conducted an exploratory study directed at e-learners. The results of his work showed that a total of 17 items applicable to measuring e-Learner Satisfaction could be classified into the following dimensions: content, personalization, learning community, and learner interface.

2.3. Multi-criteria decision making

Multi-criteria decision making (MCDM), which deals mainly with problems of evaluation or selection (Keeney & Raiffa, 1976; Teng, 2002), is a rapidly developing area in operational research and management science. The complete MCDM process involves the following basic elements: criterion set, preference structure, alternative set, and performance values (Yu, 1985). While the final decision will be made based on the performance of the alternatives, a well-defined criterion set and preference structure are key influential factors and should be prepared in advance.
In order to obtain the criterion set and preference structure, a hierarchical analysis must be carried out. Such an analysis helps decision makers to preliminarily derive an objective hierarchy structure that demonstrates the relationship between the goal and the decision criteria (MacCrimmon, 1969). The goal of the hierarchy may be a perceived better direction of a decision organization (Teng, 2002). The criteria, on the other hand, represent the standards for judging (Hwang & Masud, 1979), which should be complete, operational, decomposable, non-redundant, and minimal in size (Keeney & Raiffa, 1976; Teng, 2002). Based on this hierarchy structure, decision makers can set about deriving the relative importance of the criteria and then assessing alternatives against each criterion. By integrating the assessments of the alternatives with the relative importance of the criteria, an organization can select the alternative which best meets its requirements to accomplish its goal.

3. The methodology

Fig. 1a shows the system life cycle under a commercial application package implementation strategy. The major characteristic of this strategy, compared with an in-house development one, is that the organization communicates its system requirements in the form of either a request for proposal or a request for quotation to candidate WELS vendors.


[Fig. 1. The methodology. (a) System life cycle under the commercial application package implementation strategy (except the shadowed area) (adapted from Whitten et al., 2004), with phases 1. scope definition, 2. requirement analysis (2.1 logical design), 3. decision analysis, 4. adoption (installation and customization), and 5. operation and maintenance; the user community issues a problem statement, a business requirement statement, and a request for proposal or request for quotation, WELS vendors supply alternative products/solutions, and system change requests (improvements or enhancements) feed back into the cycle. (b) Multi-criteria methodology for evaluating WELS from the perspective of e-learner satisfaction, comprising (3 or 5).1 problem and goal definition, (3 or 5).2 hierarchy structure development (hierarchical analysis from an e-learner satisfaction perspective; empirical investigation or posterior examination), and (3 or 5).3 deriving the preference structure (survey of e-learners; AHP with pool first and pool last methods), followed either by 3.4 gathering the alternatives, 3.5 alternatives evaluation (rating-based and ranking-based methods), and 3.6 alternative selection in the decision analysis phase, or by 5.4 system evaluation (rating-based method) and 5.5 improvements and enhancements identification (predefined threshold; weighted distance from perfection) in the operation and maintenance phase.]

Afterwards, those vendors submit their products or solutions as alternatives for evaluation (Whitten et al., 2004). The methodology proposed in this study aims at supporting the tasks that take place in the phase of decision analysis or the phase of operation and maintenance. As shown in Fig. 1b, this methodology is logically divided into steps. The explanations, including the tasks and the tools or techniques applicable in each step, are as follows. The start-up step defines the problem and the goal. The problems may be those, as mentioned in the first section, which are experienced by an organization in the phase of decision analysis or operation and maintenance when the commercial application package implementation strategy is used. To solve them, evaluation is necessary. Therefore, the goal is defined as the evaluation of WELS alternatives. After defining the problems and the goal, the next step involves the development of the hierarchy structure. In this step, a hierarchical analysis based on e-learner satisfaction is to be carried out. Literature review, systematic analysis, empirical investigation, brainstorming, and interpretive structural modeling are feasible methods (MacCrimmon, 1969). The completion of this step will answer the following questions: What are the criteria that can be applied to this context? How are these criteria classified into dimensions? How can these criteria and dimensions be arranged into a hierarchy? The hierarchy structure used in this study for evaluating WELS alternatives was adapted from Wang's (2003) empirical work, because what we refer to as learner satisfaction in the WELS context is conceptually close to his e-Learner Satisfaction. However, the considerable number of Wang's measurement items, which would violate the principle of minimal size in the criterion set and complicate the MCDM process, prompted us to examine those items further. This examination was carried out through discussions with three professors of MIS and five experienced WELS learners in order to rate the relevance of each item to WELS evaluation and to check whether there were any conceptually or connotatively redundant items. As a result, a total of four items were eliminated, with the remainder being transformed into the form of decision criteria. As shown in Fig. 2, we ended up with four dimensions, comprising a total of 13 criteria, in the hierarchy structure.


Goal: The Evaluation of WELS Alternatives

Dimensions and criteria:
D1 Learner Interface: C01 Ease of use; C02 User-friendliness; C03 Ease of understanding; C04 Operational stability.
D2 Learning Community: C05 Ease of discussion with other learners; C06 Ease of discussion with teachers; C07 Ease of accessing shared data; C08 Ease of exchanging learning with the others.
D3 System Content: C09 Up-to-date content; C10 Sufficient content; C11 Useful content.
D4 Personalization: C12 Capability of controlling learning progress; C13 Capability of recording learning performance.

Fig. 2. The hierarchy structure for evaluating WELS.
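For readers who wish to script the computations that follow, the Fig. 2 hierarchy can be captured in a small data structure. The Python sketch below is illustrative only; the variable name is ours, not the paper's.

# The Fig. 2 hierarchy as a plain mapping from dimensions to their criteria.
WELS_HIERARCHY = {
    "D1 Learner interface": ["C01 Ease of use", "C02 User-friendliness",
                             "C03 Ease of understanding", "C04 Operational stability"],
    "D2 Learning community": ["C05 Ease of discussion with other learners",
                              "C06 Ease of discussion with teachers",
                              "C07 Ease of accessing shared data",
                              "C08 Ease of exchanging learning with the others"],
    "D3 System content": ["C09 Up-to-date content", "C10 Sufficient content",
                          "C11 Useful content"],
    "D4 Personalization": ["C12 Capability of controlling learning progress",
                           "C13 Capability of recording learning performance"],
}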

Based on the hierarchy structure, the third step, deriving the preference structure, explores learners' perceptions of the relative importance of the criteria and the dimensions of these criteria. This may help answer what it is that users regard highly in terms of learner satisfaction in the context of the WELS. Among the several procedures that have been proposed, the analytic hierarchy process (AHP), developed by Saaty (1980), is recommended because of its superiority in judgment and measuring scales over the others (Forman, 1989; Lane & Verdini, 1989). Saaty also proposed a consistency ratio to examine the rationality of the judgment of the decision maker; a consistency ratio of less than 0.1 is deemed sufficiently consistent. After the preference structure is obtained, the center of MCDM activities shifts to the evaluation of WELS alternatives. In the phase of decision analysis, organizations can gather the alternatives and then evaluate them against the criteria. In this evaluation, two methods, rating-based and ranking-based, are recommended. The rating-based method involves assessing a particular alternative by rating it under each criterion or dimension. The overall performance of this alternative can be acquired by summing its weighted performance under each criterion or dimension, and the decision organization can thus select from the alternatives according to the overall performance of each. The simplest form of this method, as shown in the right half of Fig. 3, is the simple additive weight method. On the other hand, as shown in the left half of Fig. 3, the ranking-based method involves ranking the alternatives by their key attributes. Traditional ranking procedures used in the field of social science include the method of rank order, the method of paired comparisons, the method of constant stimuli, and the method of successive categories (Yang, Wen, Wu, & Li, 2001). In the present paper, we recommend the method of paired comparisons because it can be integrated with the preference structure (the weights of criteria and dimensions) to facilitate an overall assessment of the alternatives. Moreover, the use of the method of paired comparisons means that another AHP must be carried out in this step. This makes the evaluation a straightforward task for organizations, because they have already gained considerable experience with AHP in the previous step (deriving the preference structure). Under this method, the alternatives are pairwise-compared with respect to each criterion or dimension to derive normalized relative priorities for each alternative. The overall priority of each alternative and the resulting rankings can then be used as the basis for selection. Finally, in the phase of operation and maintenance, the existing system is assessed against the criteria or dimensions. Under these circumstances, only the rating-based method is applicable. Under a particular criterion or dimension, a performance value below the predefined threshold indicates an area which needs improvement or enhancement. If there are many such areas, and if organizational resources are so limited that maintenance efforts can only be devoted to one area, then the area with the greatest weighted distance from perfection, defined as weighted distance from perfection = weight of a particular criterion or dimension × (perfect score − performance score), will be the priority.¹

¹ Under the rating-based method, for example, if a 1-to-10 (lowest to highest) scale is used, the perfect score is 10.
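To make the AHP computation concrete, the sketch below derives a weight vector and a consistency ratio from one reciprocal pairwise comparison matrix. It is an illustration rather than the paper's implementation; it assumes Saaty's principal-eigenvector method and his standard random-index values, and the example judgments are hypothetical.

import numpy as np

# Saaty's random consistency index for matrices of order 1..10 (standard values).
RANDOM_INDEX = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12,
                6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45, 10: 1.49}

def ahp_weights(matrix):
    # Derive a normalized priority vector and the consistency ratio from a
    # reciprocal pairwise comparison matrix (principal eigenvector method).
    a = np.asarray(matrix, dtype=float)
    n = a.shape[0]
    eigenvalues, eigenvectors = np.linalg.eig(a)
    k = int(np.argmax(eigenvalues.real))
    weights = np.abs(eigenvectors[:, k].real)
    weights = weights / weights.sum()
    lambda_max = eigenvalues.real[k]
    ci = (lambda_max - n) / (n - 1)     # consistency index
    ri = RANDOM_INDEX[n]                # random index for a matrix of order n
    cr = ci / ri if ri > 0 else 0.0     # consistency ratio
    return weights, cr

# Hypothetical judgments over the four dimensions D1..D4 (not survey data).
judgments = [[1,   2,   1,   2],
             [1/2, 1,   1/2, 1],
             [1,   2,   1,   1],
             [1/2, 1,   1,   1]]
weights, cr = ahp_weights(judgments)
print(weights, cr)   # a consistency ratio below 0.1 is deemed sufficiently consistent

The same routine can be reused for the ranking-based evaluation, in which the alternatives themselves are pairwise-compared under each criterion.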


[Fig. 3. The principle of rating-based and ranking-based evaluations. Left half (ranking-based evaluation): the alternatives A_1, A_2, ..., A_m are pairwise-compared under each criterion or dimension. Right half (rating-based evaluation): the performance score of WELS i under dimension k is d_ik = Σ_{j in dimension k} A_ij · w_j, and the overall performance of WELS i is A_i = Σ_k d_ik = Σ_j A_ij · w_j. Note: c_j denotes criterion j and w_j denotes the weight of criterion j, j = 1, 2, ..., n; A_i denotes WELS i, i = 1, 2, ..., m; A_ij denotes the performance score of WELS i under criterion j; d_ik denotes the performance score of WELS i under dimension k, k = 1, 2, ..., p.]
4. Deriving the preference structure: an empirical study by AHP

In order to derive the preference structure, a survey was carried out. Data were gathered from students enrolled in courses taught by means of a WELS at a large university in northern Taiwan. Six classes were selected and an AHP questionnaire was distributed to each student (see the Appendix for an example of the AHP questionnaire). A total of 276 valid samples were returned. Among the 276 respondents, 86 were freshmen (31.2%), 69 were sophomores (25.0%), 46 were juniors (16.7%), 72 were seniors (26.1%), and three were classified as others (1.1%). As for the colleges to which they were affiliated, 23 were from the College of Science (8.3%), 80 were from the College of Social Sciences (29.0%), 114 were from the College of Commerce (41.3%), 6 were from the College of Law (2.2%), 24 were from the College of Liberal Arts or Foreign Languages (8.7%), 13 were from the College of Communication (4.7%), and 16 were from others (5.8%). The frequency distribution of their experience with computer usage was 4 with less than one year (1.4%), 115 with one to five years (41.7%), 139 with five to ten years (50.4%), and 18 with more than ten years (6.5%). More than half (149 out of 276, 54.0%) indicated that they had experience in using a WELS. In this study, the integration of the preferences of each respondent was carried out by the pool first and pool last methods (Buckley, 1985), respectively. The pool last method also provides the coefficient of variation to represent the level of variation in the respondents' perceptions, since this method produces a preference structure for each respondent. It must be noted that the weights of each dimension were calculated on the basis of pairwise comparisons between dimensions, and the local weights of each criterion were calculated on the basis of pairwise comparisons between criteria within the same dimension. A criterion's overall weight can then be obtained by multiplying its local weight by the weight of the dimension to which it belongs. Moreover, not every set of responses with respect to the dimensions or the goal from each respondent passed the consistency test (a consistency ratio of less than 0.1). Consequently, for both the pool first and pool last methods, we provide results for the entire sample and for the sample after removing those responses which did not pass the test, in order to show the difference between before and after the adjustment. The results are shown in Tables 1 and 2. It is found that the preference structures produced by these two methods are identical, showing little difference between the results before and after the adjustment.
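The aggregation step can be sketched in a few lines of Python. The interpretation below, geometric-mean pooling of the judgment matrices for the pool first method and averaging of the individual weight vectors for the pool last method, is our reading of Buckley (1985) rather than a procedure stated in the paper; derive_weights stands for any routine that maps a reciprocal pairwise matrix to a normalized weight vector (for instance, the weight-vector part of the ahp_weights sketch above).

import numpy as np

def pool_first(matrices, derive_weights):
    # Assumed interpretation: combine all respondents' judgment matrices
    # element-wise with a geometric mean, then derive a single weight vector.
    stacked = np.stack([np.asarray(m, dtype=float) for m in matrices])
    pooled = np.exp(np.log(stacked).mean(axis=0))
    return derive_weights(pooled)

def pool_last(matrices, derive_weights):
    # Assumed interpretation: derive one weight vector per respondent, then
    # average them; the coefficient of variation reports their dispersion.
    all_w = np.array([derive_weights(m) for m in matrices])
    mean_w = all_w.mean(axis=0)
    coeff_var = all_w.std(axis=0) / mean_w
    return mean_w / mean_w.sum(), coeff_var

# A criterion's overall weight is its local weight times its dimension's weight,
# e.g. C12 after adjustment in Table 1: 0.576 * 0.222 is approximately 0.128.
overall_weight_c12 = 0.576 * 0.222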



Table 1. Weights of dimensions and criteria (pool first). Entries are given as before adjustment / after adjustment; a criterion's overall weight is its local weight multiplied by the weight of its dimension.

Dimensions (weight; before / after):
D1 Learner interface: 0.297 / 0.319
D2 Learning community: 0.195 / 0.197
D3 System content: 0.272 / 0.262
D4 Personalization: 0.236 / 0.222

Criteria (local weight, overall weight; before / after):
C01 Ease of use: 0.230, 0.068 / 0.248, 0.079
C02 User-friendliness: 0.212, 0.063 / 0.213, 0.068
C03 Ease of understanding: 0.261, 0.078 / 0.260, 0.083
C04 Operational stability: 0.297, 0.088 / 0.279, 0.089
C05 Ease of discussion with other learners: 0.228, 0.044 / 0.230, 0.045
C06 Ease of discussion with teachers: 0.269, 0.052 / 0.254, 0.050
C07 Ease of accessing shared data: 0.257, 0.050 / 0.280, 0.055
C08 Ease of exchanging learning with the others: 0.246, 0.048 / 0.236, 0.046
C09 Up-to-date content: 0.217, 0.059 / 0.221, 0.058
C10 Sufficient content: 0.353, 0.096 / 0.361, 0.095
C11 Useful content: 0.430, 0.117 / 0.418, 0.110
C12 Capability of controlling learning progress: 0.576, 0.136 / 0.576, 0.128
C13 Capability of recording learning performance: 0.424, 0.100 / 0.424, 0.094

All consistency ratios reported for these comparisons lie between 0.000 and 0.004. The before adjustment figures are for the entire sample; the after adjustment figures are for the sample after removing those responses which did not pass the consistency test.

Most of the consistency ratio values after adjustment, especially in Table 2, are lower than those before the adjustment, as are the values of the coefficient of variation in Table 2. Further discussion of the results is provided in Section 6.

5. Illustration: a case study

In order to demonstrate the applicability of the proposed methodology, three real WELS products, all involving English learning, were employed as illustrative alternatives. Pseudonyms were used to protect their anonymity. Ten experienced WELS learners were invited to act as evaluators, in order to demonstrate how a decision among WELS alternatives can be reached on the basis of the obtained preference structure together with the rating-based and ranking-based methods. The evaluators were first asked to assess the WELS alternatives using the rating-based method. Under each criterion, the alternatives were rated on a 1-to-10 scale (lowest to highest) expressing the evaluators' judgment of the extent to which each WELS alternative met that criterion. The evaluators were then asked to compare the alternatives with respect to each criterion using the ranking-based method. With three alternatives and thirteen criteria, this produced a total of 39 pairwise comparisons. Finally, each WELS alternative's overall performance or priority was acquired by summing its weighted performance scores or weighted priority scores under each criterion. The results are shown in Table 3. In Table 3, we find that both methods produce the same pattern of priority of alternatives, except for the one under C07. Let us assume that the evaluators must decide among the three WELS alternatives A, B, and C. In this case, WELS B appears to be preferable. If only a few criteria or dimensions are emphasized, for example D1 (learner interface), WELS A performs better. In addition to supporting the selection decision, these results can also be applied to a single-system evaluation in order to find out what improvements or enhancements are required. For example, regarding WELS B, if a threshold value of 7 is set for each dimension under the rating-based method, further system maintenance efforts should be concentrated on the learning community (D2) and personalization (D4). However, if maintenance is subject to limited organizational resources, personalization (D4) is the priority because it has a greater weighted distance from perfection.
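The maintenance-priority logic just described can be scripted directly. The sketch below reuses the after-adjustment dimension weights from Table 1 and WELS B's dimension-level scores from Table 3; the layout and names are ours, not the paper's.

# Maintenance priorities for a single WELS under the rating-based method.
# Dimension weights: after-adjustment values from Table 1.
# Scores: WELS B's dimension-level performance scores from Table 3 (1-to-10 scale).
DIMENSION_WEIGHTS = {"D1": 0.319, "D2": 0.197, "D3": 0.262, "D4": 0.222}
WELS_B_SCORES = {"D1": 7.52, "D2": 6.43, "D3": 7.80, "D4": 6.55}

THRESHOLD = 7.0       # dimensions scoring below this need improvement
PERFECT_SCORE = 10.0  # top of the rating scale (see footnote 1)

# Simple additive weighting: the sum of weighted dimension scores.
overall = sum(w * WELS_B_SCORES[d] for d, w in DIMENSION_WEIGHTS.items())

# Weighted distance from perfection for every dimension below the threshold.
candidates = {d: DIMENSION_WEIGHTS[d] * (PERFECT_SCORE - s)
              for d, s in WELS_B_SCORES.items() if s < THRESHOLD}
priority = max(candidates, key=candidates.get)

print(round(overall, 2))  # about 7.16 (Table 3 reports 7.17, computed from criterion-level weights)
print(candidates)         # D2: about 0.70, D4: about 0.77
print(priority)           # 'D4' -> personalization is the maintenance priority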

Table 2. Weights of dimensions and criteria (pool last). Entries are given as before adjustment / after adjustment; CoV denotes the coefficient of variation across respondents.

Dimensions (weight, CoV; before / after):
D1 Learner interface: 0.287, 0.467 / 0.307, 0.414
D2 Learning community: 0.204, 0.527 / 0.202, 0.523
D3 System content: 0.268, 0.481 / 0.264, 0.463
D4 Personalization: 0.241, 0.544 / 0.227, 0.552

Criteria (local weight, overall weight, CoV; before / after):
C01 Ease of use: 0.235, 0.067, 0.519 / 0.246, 0.076, 0.385
C02 User-friendliness: 0.218, 0.063, 0.491 / 0.214, 0.066, 0.384
C03 Ease of understanding: 0.255, 0.073, 0.399 / 0.259, 0.080, 0.356
C04 Operational stability: 0.292, 0.084, 0.479 / 0.281, 0.086, 0.459
C05 Ease of discussion with other learners: 0.230, 0.047, 0.457 / 0.228, 0.046, 0.386
C06 Ease of discussion with teachers: 0.277, 0.057, 0.507 / 0.267, 0.054, 0.485
C07 Ease of accessing shared data: 0.253, 0.052, 0.427 / 0.271, 0.055, 0.341
C08 Ease of exchanging learning with the others: 0.240, 0.049, 0.442 / 0.234, 0.047, 0.382
C09 Up-to-date content: 0.233, 0.062, 0.699 / 0.237, 0.063, 0.643
C10 Sufficient content: 0.350, 0.094, 0.316 / 0.356, 0.094, 0.262
C11 Useful content: 0.417, 0.112, 0.326 / 0.407, 0.107, 0.303
C12 Capability of controlling learning progress: 0.571, 0.138, 0.409 / 0.571, 0.130, 0.409
C13 Capability of recording learning performance: 0.429, 0.103, 0.144 / 0.429, 0.097, 0.041

The before adjustment figures are for the entire sample; the after adjustment figures are for the sample after removing those responses which did not pass the consistency test.

6. Discussion

Based on the adjusted AHP results, it is found that WELS learners regard the learner interface as being the most important dimension. Since many IS-related studies have pointed out that the user interface is an area where a high level of interaction takes place (Dam, 2001; Kumar, Smith, & Bannerjee, 2004), a well-designed, user-friendly learner interface therefore becomes one of the critical factors in determining whether learners will enjoy using the WELS. This should remind those responsible for maintaining a WELS to ensure that the present learner interface conforms to learners' requirements. In addition to a user-friendly interface, learners also place great value on system content. As a result, emphasizing the non-technical aspect of the system content is critical. This also indicates that, in addition to technical engineers, a sound WELS needs a high level of participation from non-technical experts, such as teachers, teaching material editors, and pedagogy professionals, in the construction phase as well as in subsequent operation and maintenance. Regarding the criteria within each dimension, we discover that respondents place the greatest emphasis on the stability of the learner interface. As to the learning community, the key issue for the learners is being able to easily access shared data. When it comes to system content, learners care most about whether they find it useful. As to personalization, the results reflect that the learners' most important requirement is being able to control their learning progress. When we look at all the criteria with respect to the overall goal of the hierarchy structure, those criteria which belong to personalization and system content are given more weight. This


Table 3. Evaluation of three WELS alternatives. For each criterion and dimension, the first triple gives the rating-based performance scores of WELS A, B, and C, and the second triple gives the ranking-based priority scores; POA denotes the priority of alternatives, and ≻ denotes "superior to" (A ≻ B indicates that WELS A is superior to WELS B).

C01 Ease of use: rating A 7.0, B 7.8, C 5.8 (POA: B ≻ A ≻ C); ranking A 0.392, B 0.449, C 0.159 (POA: B ≻ A ≻ C)
C02 User-friendliness: rating A 7.1, B 7.6, C 6.2 (POA: B ≻ A ≻ C); ranking A 0.395, B 0.410, C 0.195 (POA: B ≻ A ≻ C)
C03 Ease of understanding: rating A 8.1, B 7.1, C 6.0 (POA: A ≻ B ≻ C); ranking A 0.433, B 0.384, C 0.183 (POA: A ≻ B ≻ C)
C04 Operational stability: rating A 8.2, B 7.6, C 3.4 (POA: A ≻ B ≻ C); ranking A 0.499, B 0.389, C 0.112 (POA: A ≻ B ≻ C)
D1 Learner interface: rating A 7.64, B 7.52, C 5.27 (POA: A ≻ B ≻ C); ranking A 0.433, B 0.407, C 0.160 (POA: A ≻ B ≻ C)
C05 Ease of discussion with other learners: rating A 6.9, B 6.1, C 4.6 (POA: A ≻ B ≻ C); ranking A 0.480, B 0.342, C 0.178 (POA: A ≻ B ≻ C)
C06 Ease of discussion with teachers: rating A 6.7, B 7.3, C 4.7 (POA: B ≻ A ≻ C); ranking A 0.382, B 0.439, C 0.179 (POA: B ≻ A ≻ C)
C07 Ease of accessing shared data: rating A 6.1, B 6.1, C 4.7 (POA: A = B ≻ C); ranking A 0.424, B 0.419, C 0.157 (POA: A ≻ B ≻ C)
C08 Ease of exchanging learning with the others: rating A 7.1, B 6.2, C 4.8 (POA: A ≻ B ≻ C); ranking A 0.447, B 0.386, C 0.167 (POA: A ≻ B ≻ C)
D2 Learning community: rating A 6.67, B 6.43, C 4.70 (POA: A ≻ B ≻ C); ranking A 0.432, B 0.399, C 0.170 (POA: A ≻ B ≻ C)
C09 Up-to-date content: rating A 7.2, B 7.8, C 5.3 (POA: B ≻ A ≻ C); ranking A 0.347, B 0.478, C 0.175 (POA: B ≻ A ≻ C)
C10 Sufficient content: rating A 6.9, B 7.8, C 5.8 (POA: B ≻ A ≻ C); ranking A 0.353, B 0.449, C 0.198 (POA: B ≻ A ≻ C)
C11 Useful content: rating A 6.8, B 7.8, C 5.5 (POA: B ≻ A ≻ C); ranking A 0.351, B 0.483, C 0.165 (POA: B ≻ A ≻ C)
D3 System content: rating A 6.93, B 7.80, C 5.56 (POA: B ≻ A ≻ C); ranking A 0.351, B 0.470, C 0.179 (POA: B ≻ A ≻ C)
C12 Capability of controlling learning progress: rating A 5.9, B 6.8, C 4.7 (POA: B ≻ A ≻ C); ranking A 0.386, B 0.440, C 0.174 (POA: B ≻ A ≻ C)
C13 Capability of recording learning performance: rating A 6.1, B 6.2, C 4.4 (POA: B ≻ A ≻ C); ranking A 0.367, B 0.454, C 0.179 (POA: B ≻ A ≻ C)
D4 Personalization: rating A 5.98, B 6.55, C 4.57 (POA: B ≻ A ≻ C); ranking A 0.378, B 0.446, C 0.176 (POA: B ≻ A ≻ C)
Overall: rating A 6.90, B 7.17, C 5.08 (POA: B ≻ A ≻ C); ranking A 0.399, B 0.430, C 0.170 (POA: B ≻ A ≻ C)

All results obtained by the ranking-based method pass the consistency test (a consistency ratio of less than 0.1). The figures in the criterion rows (C01-C13) are original performance or priority scores; the figures in the dimension rows (D1-D4), which are calculated from the local weights of the criteria, and in the Overall row, which are calculated from the overall weights of the criteria, are weighted performance or priority scores. All the weights used in this evaluation are the after adjustment values in Table 1.

may be because these dimensions possess fewer criteria. However, it is worth noting that the learning community is regarded by the learners as having the least relative importance. We also find that the criterion about which learners care the least (ease of discussion with other learners) belongs to this dimension. The reason for this result might be that the course design did not incorporate interaction with members of the learning community as a key component, or perhaps the functions related to the learning community of the WELS we investigated were not fully perceived by the learners. However, since interactions with the learning community can enhance learning achievement, this study recommends the establishment of certain mechanisms within a WELS to promote a higher level of interactive learning behavior. In this study, we have proposed a multi-criteria methodology for evaluating the WELS from the perspective of learner satisfaction. In contrast to many MCDM-related studies, which simply relied on expert opinions to establish their hierarchy structures and preference structures, this study has adopted an empirically derived e-learner satisfaction scale as the foundation for building the hierarchy structure. This approach, compared with the


traditional expert-based approach, contributes significantly to the identification of several sets of decision criteria which have a higher level of reliability and validity in terms of e-learner satisfaction. As for the preference structure, a large-sample survey of WELS learners was conducted to explore the relative importance of the criteria. This user-based empirical effort promises a more valid result and alleviates concerns regarding the potential risks of relying too heavily on experts in MCDM practice. In addition, the results produced by the pool first and pool last methods reveal an identical pattern of learners' preferences. This convergence indicates the stability of the derived preference structure across the methods. Finally, this methodology provides decision makers with a scheme, including rating-based and ranking-based methods, for the evaluation of alternatives. Our proposed methodology can be visualized as a procedural synthesis, with its main advantage being its flexibility. Those who are responsible for making the decisions can either follow the entire process or customize part of it. For example, they can either use the AHP results exactly as obtained in this study, or carry out a distinct AHP within their organization so as to derive their own preference structure. When evaluating and selecting, they can choose the rating-based method, the ranking-based method, or both. Such flexibility also means that future research efforts can modify or consolidate the procedures in order to acquire variants, so that decision makers can be supplied with a variety of tools to comprehensively evaluate their WELS alternatives and to make the right decision.

7. Conclusions

The use of the e-learner-satisfaction perspective and a large-sample, learner-based AHP contributes to adapting the conventional MCDM paradigm to problems that are highly user-oriented. Our methodology supplies management in both education and industry with not only a less complex but also a more appropriate and flexible way to effectively analyze their currently deployed WELS. It can also support the selection of an appropriate WELS product, solution, or module by assessing the available alternatives when an organization wishes to adopt such a technological innovation. At the same time, it allows the technical personnel of WELS vendors (e.g., the analysts or designers) to gain a better understanding of learners' preferences regarding system features before a WELS is implemented, and it can also pinpoint any necessary improvements or enhancements. This allows a higher level of e-learner satisfaction to be achieved, and in the process increases the level of system acceptance and continued use.

Acknowledgement

This study was supported by the National Science Council of Taiwan under contract number NSC 94-2416-H-018-019.

Appendix. An example of question items in the AHP questionnaire

When assessing the WELS, the following dimensions (or criteria) will be used. Considering the dimensions (or criteria) listed on the right-hand and the left-hand sides, please indicate the relative importance between them.
Dimension (or criterion) A: Learner interface
Scale: 9 8 7 6 5 4 3 2 1 2 3 4 5 6 7 8 9 (values to the left of 1 indicate that A is more important than B; values to the right of 1 indicate that B is more important than A)
Dimension (or criterion) B: Learning community

The descriptions of each scale: 1: equal importance; 3: weak importance; 5: essential importance; 7: very strong importance; 9: absolute importance; 2, 4, 6, and 8: intermediate values.
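One plausible way to turn such questionnaire answers into the reciprocal comparison matrix that AHP expects is sketched below; the coding (a positive judgment means the row item was marked as more important, a negative one means the column item was) is our assumption, not a procedure specified in the paper.

import numpy as np

def judgments_to_matrix(n, judgments):
    # Build an n-by-n reciprocal AHP matrix from upper-triangle judgments.
    # judgments maps (i, j) with i < j to a Saaty intensity in 1..9; a positive
    # value means item i was judged more important than item j, a negative
    # value means the reverse (an assumed coding of the questionnaire scale).
    a = np.ones((n, n))
    for (i, j), v in judgments.items():
        intensity = abs(v) if v != 0 else 1
        if v >= 0:
            a[i, j], a[j, i] = intensity, 1.0 / intensity
        else:
            a[i, j], a[j, i] = 1.0 / intensity, intensity
    return a

# Hypothetical example: one respondent marks 5 toward D1 against D2 (learner
# interface judged of essential importance relative to learning community), etc.
answers = {(0, 1): 5, (0, 2): 1, (0, 3): 3, (1, 2): -3, (1, 3): -1, (2, 3): 3}
matrix = judgments_to_matrix(4, answers)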


References
Bailey, J. E., & Pearson, S. W. (1983). Development of a tool for measuring and analyzing computer user satisfaction. Management Science, 29(5), 530-545.
Buckley, J. J. (1985). Fuzzy hierarchical analysis. Fuzzy Sets and Systems, 17(3), 233-247.
Cashin, W. E., & Downey, R. G. (1992). Using global student rating items for summative evaluation. Journal of Educational Psychology, 84(4), 563-572.
Cohen, P. A. (1981). Student ratings of instruction and student achievement. Review of Educational Research, 51(3), 281-309.
Collins Cobuild English Dictionary (1999). London, UK: HarperCollins Publishers.
Cyert, R. M., & March, J. G. (1963). A behavior theory of the firm. Englewood Cliffs, NJ: Prentice-Hall.
Dam, A. V. (2001). User interfaces: disappearing, dissolving, and evolving. Communications of the ACM, 44(3), 50-52.
DeLone, W. H., & McLean, E. R. (2003). The DeLone and McLean model of information systems success: a ten-year update. Journal of Management Information Systems, 19(4), 9-30.
Doll, W. J., & Torkzadeh, G. (1988). The measure of end-user computing satisfaction. MIS Quarterly, 12(2), 259-274.
Fletcher-Flinn, C. M., & Gravatt, B. (1995). The efficacy of computer assisted instruction (CAI): a meta-analysis. Journal of Educational Computing Research, 12(3), 219-241.
Forman, E. H. (1989). AHP is intended for more than expected value calculation. Decision Science, 21(3), 670-672.
Gelderman, M. (1998). The relation between user satisfaction, usage of information systems and performance. Information and Management, 34(1), 11-18.
Hassenzahl, M., Beau, A., & Burmester, M. (2001). Engineering joy. IEEE Software, 18(1), 70-76.
Hiltz, S. R., & Turoff, M. (2002). What makes learning networks effective? Communications of the ACM, 45(4), 56-59.
Hong, K. S., Lai, K. W., & Holton, D. (2003). Students' satisfaction and perceived learning with web-based course. Educational Technology and Society, 6(1), 116-124.
Hwang, C. L., & Masud, A. S. M. (1979). Multiple objective decision making: methods and applications. New York: Springer-Verlag.
Ives, B., Olson, M. H., & Baroudi, J. J. (1983). The measurement of user information satisfaction. Communications of the ACM, 26(10), 785-793.
Jiang, J. J., Klein, G., Roan, J., & Lin, J. T. M. (2001). IS service performance: self-perceptions and user perceptions. Information & Management, 38(8), 499-506.
Jönsson, B.-A. (2005). A case study of successful e-learning: a web-based distance course in medical physics held for school teachers of the upper secondary level. Medical Engineering & Physics, 27(7), 571-581.
Kao, C. (1998). Performance of several nonlinear programming software packages on microcomputers. Computers & Operations Research, 25(10), 807-816.
Kaplan-Leiserson, E. (2000). e-Learning glossary. Available from http://www.learningcircuits.org/glossary.html.
Karat, J. (1988). Software evaluation methodologies. In M. Helander (Ed.), Handbook of human-computer interaction (pp. 891-903). Amsterdam, Netherlands: Elsevier Science.
Keeney, R., & Raiffa, H. (1976). Decision with multiple objectives: preference and value tradeoffs. New York: Wiley and Sons.
Khalifa, M., & Kwok, R. C.-W. (1999). Remote learning technologies: effectiveness of hypertext and GSS. Decision Support Systems, 26(3), 195-207.
Kumar, R. L., Smith, M. A., & Bannerjee, S. (2004). User interface features influencing overall ease of use and personalization. Information & Management, 41(3), 289-302.
Lane, E. F., & Verdini, W. A. (1989). A consistency test for AHP decision makers. Decision Science, 20(3), 575-590.
Lindgaard, G., & Dudek, C. (2003). What is this evasive beast we call user satisfaction? Interacting with Computers, 15(3), 429-452.
Lin, H.-H., & Wang, Y.-S. (2006). An examination of the determinants of customer loyalty in mobile commerce contexts. Information & Management, 43(3), 271-282.
Lin, C. S., Wu, S., & Tsai, R. J. (2005). Integrating perceived playfulness into expectation-confirmation model for web portal context. Information & Management, 42(5), 683-693.
Lorenzetti, J. P. (2005). How e-learning is changing higher education: a new look. Distance Education Report, 22(July), 4-7.
MacCrimmon, K. R. (1969). Improving the system design and evaluation process by the use of trade-off information: an application to northeast corridor transportation planning. RM-5877-DOT. Santa Monica, CA: The Rand Corporation.
Mahmood, M. A., Burn, J. M., Gemoets, L. A., & Jacquez, C. (2000). Variables affecting information technology end-user satisfaction: a meta-analysis of the empirical literature. International Journal of Human-Computer Studies, 52(4), 751-771.
Marsh, H. W. (1991). Multidimensional students' evaluations of teaching effectiveness: a test of alternative higher-order structures. Journal of Educational Psychology, 83(2), 285-296.
Melone, N. P. (1990). A theoretical assessment of the user-satisfaction construct in information systems research. Management Science, 36(1), 76-91.
Piccoli, G., Ahmad, R., & Ives, B. (2001). Web-based virtual learning environments: a research framework and a preliminary assessment of effectiveness in basic IT skills training. MIS Quarterly, 25(4), 401-426.
Raymond, L. (1987). Validating and applying user satisfaction as a measure of MIS success in small organization. Information and Management, 12(4), 173-179.
Rosenberg, M. J. (2001). E-learning: Strategies for delivering knowledge in the digital age. New York: McGraw-Hill.
Saaty, T. L. (1980). The analytic hierarchy process. New York: McGraw-Hill.
Smith, C. U., & Williams, L. G. (1999). A performance model interchange format. Journal of Systems and Software, 49(1), 63-80.


Teng, J.-Y. (2002). Project evaluation: method and applications. Keelung, Taiwan: National Taiwan Ocean University.
Wang, Y.-S. (2003). Assessment of learner satisfaction with asynchronous electronic learning systems. Information & Management, 41(1), 75-86.
Whitten, J. L., Bentley, L. D., & Dittman, K. C. (2004). System analysis & design methods. New York: McGraw-Hill.
Yang, G. S., Wen, C. Y., Wu, C. S., & Li, Y. Y. (2001). Social and behavioral science research methods. Taipei, Taiwan: Bookcake Publishing.
Yu, P.-L. (1985). Behavior and decision. Taipei, Taiwan: Sinica.
