A conceptual model of performance measurement for supply chains: alternative considerations

Adisak Theeranuphattana and John C.S. Tang
School of Management, Asian Institute of Technology, Pathumthani, Thailand

Received May 2006; Revised February 2007; Accepted April 2007

Journal of Manufacturing Technology Management, Vol. 19 No. 1, 2008, pp. 125-148. Emerald Group Publishing Limited, 1741-038X. DOI 10.1108/17410380810843480
Abstract
Purpose – This paper revisits the recent work of Chan and Qi, which proposed an innovative performance measurement method for supply chain management. While the measurement method has many advantages, it can be unwieldy in practice. This paper aims to address these limitations and to propose a more user-friendly alternative performance measurement model.
Design/methodology/approach – The performance measurement model described in this paper is
a combination of two existing methods: Chan and Qi’s model and the supply chain operations reference
(SCOR) model. To demonstrate the applicability of the combined approach, actual SCOR level 1
performance data and the measurement information from a case supply chain (SC) are collected and
processed by Chan and Qi’s measurement algorithm.
Findings – These two methods complement each other when measuring SC performance.
Originality/value – This paper develops a practical and efficient measurement model that can
resolve SC performance problems by incorporating the strengths of two different measurement models
to create a synergistic new model.
Keywords Supply chain management, Performance measurement (quality), Fuzzy logic
Paper type Research paper
1. Introduction
Performance measurement is critical to the success of almost any organization because it
creates understanding, molds behavior and improves competitiveness (Fawcett and
Cooper, 1998). As companies move towards supply chain management (SCM), it becomes necessary to measure the performance of supply chains (SCs). Traditional measurement approaches, however, have become less relevant to SCM because they are too narrow in scope to address the broad range of measurement activities. SCM has evolved over the last decade, as reflected in a dramatic increase in the publication of SCM
theories and practices. However, the topic of SC performance measurement has not
received adequate attention from researchers or practitioners (Beamon, 1999; Holmberg,
2000; Gunasekaran et al., 2001; Chan and Qi, 2003a; Chan et al., 2003; Gunasekaran et al.,
2004; Schmitz and Platts, 2004; Folan and Browne, 2005; Park et al., 2005).
Chan and Qi (2002, 2003a, b) and Chan et al. (2003) proposed an innovative performance measurement system (PMS) for SCs (hereafter “Chan and Qi’s model”), which includes a conceptual performance model, a performance measurement and aggregation method, and example performance measures. Chan and Qi’s model is regarded as a significant development towards measuring SC performance because the model provides managers with many useful techniques for analyzing and assessing SC performance.
The model can quantify the relative importance of both SC processes and measures with respect to SC strategies. This technique enables managers to make connections between strategies and measurements and to concentrate on key processes and measures that have a significant impact on the overall performance of a SC. The PMS also offers managers a way to aggregate performance results into a composite index that depicts the overall performance of a SC. This index offers managers an efficient means of analyzing and benchmarking SC performance (Chan and Qi, 2003a).
Practitioners might find it difficult to apply the model, however, because it measures local performance and therefore involves too many performance measures and too much data. Furthermore, the lack of clarity and the inability to reach a consensus on how to define SC metrics are two barriers that can prevent the successful implementation of the model.
To solve these problems, new developments such as the supply chain operations
reference (SCOR) model can be used. The SCOR model is a well recognized SC model
used in various industries around the world. The model allows SC partners to “speak a
common language” because it provides standardized definitions for processes, process
elements, and metrics. Since the SCOR model offers standardized definitions of
performance metrics for the SC, it is easier for managers to identify relevant measures
and use them. More and more companies have adopted SCOR performance metrics as
standard criteria for evaluating their SC performance. Though widely used in practice,
the SCOR model is largely ignored by academia (Gammelgaard and Vesth, 2004; Kasi,
2005). In addition, some authors (Gammelgaard and Vesth, 2004; Angerhofer and
Angelides, 2006) have noted that although oriented towards process and efficiency, the
SCOR model is not oriented towards strategy.
Given the contributions of Chan and Qi’s model and the SCOR model’s pragmatic advantages, this paper aims to develop a performance measurement model for SCs that incorporates the advantages of the two models while mitigating their disadvantages, resulting in a more practical and efficient model. Chan and Qi’s performance measurement method can be divided into two separate models:
(1) The structural performance measurement framework that links the SC hierarchical structure with performance measures.
(2) The measurement and aggregation algorithm that converts performance data at the operational level into a meaningful composite index.
2. SCOR model
The SCOR model was introduced in 1996 and has been endorsed by the Supply-Chain
Council (SCC), a global organization of firms interested in SCM. The SCOR model is
a business process reference model, which provides a framework (toolkit) that includes SC business processes, metrics, best practices, and technology features. The SCOR model attempts to integrate the concepts of business process reengineering, benchmarking, process measurement, and best practice analysis and apply them to SCs. The SCOR model offers users the following benefits:
. standard descriptions of management processes that make up the SC;
. a framework of relationships among the standard processes;
. standard metrics to measure process performance;
. management practices that produce best-in-class performance; and
. standard alignment to software features and functionality that enable best practices.
To represent the SC, the SCOR model uses a “building block” approach based on five
core processes – plan, source, make, deliver, and return – collectively called level 1
processes. The “plan process” balances the demand and supply to best meet the
sourcing, manufacturing, and delivery requirements. The “source process” procures
goods and services to meet planned or actual demand. The “make process” transforms
a product to a finished state to meet planned or actual demand. The “deliver process”
provides finished goods and services to meet planned or actual demand, typically
including order management, transportation management, and distribution
management. The “return process” is associated with returning or receiving
returned products for any reason.
The SCOR model is divided into three standardized levels of process detail. The top level (level 1) defines the scope and content of the SC by using the five core processes. The configuration level (level 2) specifies the configuration of the SC at the process level by using a toolkit of process categories. At level 2, processes are configured in line with operations strategies. For example, “make” can be configured as make-to-stock (M1), make-to-order (M2), or engineer-to-order (M3). The process element level (level 3) defines a process flow diagram with process elements or specific tasks for each process category in level 2. For example, M2 comprises schedule production activities (M2.1), issue sourced/in-process product (M2.2), produce and test (M2.3), etc.
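To make this nesting of process types, process categories, and process elements concrete, it can be written down as a simple data structure. The following Python sketch is purely illustrative; the dictionary layout and the subset of elements shown are our own shorthand rather than part of the SCOR specification:

# Illustrative sketch of the three SCOR levels for the "make" process.
# Level 1: process type; level 2: process categories; level 3: process elements.
scor_make = {
    "level_1": "Make (M)",
    "level_2": {
        "M1": "Make-to-Stock",
        "M2": "Make-to-Order",
        "M3": "Engineer-to-Order",
    },
    "level_3": {
        "M2": {
            "M2.1": "Schedule Production Activities",
            "M2.2": "Issue Sourced/In-Process Product",
            "M2.3": "Produce and Test",
            # further elements (package, stage, release) follow the same pattern
        }
    },
}

# Example: list the level 3 elements of the make-to-order (M2) category.
for code, name in scor_make["level_3"]["M2"].items():
    print(code, name)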
The SCOR model advocates hundreds of performance metrics used in conjunction with five performance attributes: reliability, responsiveness, flexibility, costs, and assets. Note that quality is excluded here. Hausman (2004) explained that in modern SCM, quality is taken as a given and that factors in quality management and improvement are somewhat separate from those in SCM development. According to the Supply-Chain Council (2006), the five attributes of SC performance are defined as follows:
(1) SC reliability. The performance of the SC in delivering the correct product, to the
correct place, at the correct time, in the correct condition and packaging, in the
correct quantity, with the correct documentation, to the correct customer.
(2) SC responsiveness. The speed at which a SC provides products to the customer.
(3) SC flexibility. The agility of a SC in responding to marketplace changes to gain
or maintain competitive advantage.
(4) SC costs. The costs associated with operating the SC.
(5) SC asset management. The effectiveness of an organization in managing assets to support demand satisfaction. This includes the management of all assets: fixed and working capital.
SCOR performance metrics are classified not only by the five performance attributes but also by the processes at the three levels. The SCOR model Version 8.0 endorses ten performance metrics for its level 1, as shown in Table I. These metrics are designed to provide a view of overall SC performance. Level 2 metrics are the measures of process categories, and level 3 metrics are those of process elements. The SCOR model’s level 1 and level 2 metrics keep management focused, while level 3 metrics are used to diagnose variations in performance against plan.
5. Proposed model
5.1 General structure of SCOR PMH
The above discussion provides the rationale for the proposed model. In this section, we show how the SCOR model and Chan and Qi’s methodology can be combined, and we shed some light on measuring SC performance at the top level. This paper uses the SCOR model to represent the SC system and to identify performance measures. Managers need to select a pre-specified set of processes, metrics, and their relationships from the SCOR model to map the SC measurement system into the performance measurement hierarchy (PMH).
Performance can be measured at several levels. To build a comprehensive view of
SCOR performance measurement, Figure 1 shows the general structure of SCOR PMH,
comprising three hierarchical levels: the top level, the configuration level, and the process element level. Each level could be analyzed to reflect the performance of different
operation and management levels. For example, managers may assess overall SC
health at the top level, diagnose problems at the configuration level, and identify
corrective actions at the process element level. Managers may also construct the PMH
for any particular level.
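For the top level, the PMH used later in the case study can be written compactly as a mapping from the five performance attributes to the SCOR level 1 metric abbreviations that appear in Tables V-VIII. The grouping below follows those tables; the spelled-out names in the comments are our own glosses of the abbreviations and should be checked against the SCOR 8.0 documentation:

# Top-level PMH: performance attributes mapped to SCOR level 1 metric abbreviations
# (grouping taken from Tables V-VIII; full names in comments are our glosses).
top_level_pmh = {
    "Reliability":    ["POF"],                    # perfect order fulfillment
    "Responsiveness": ["OFCT"],                   # order fulfillment cycle time
    "Flexibility":    ["USCF", "USCA", "DSCA"],   # upside flexibility, upside/downside adaptability
    "Costs":          ["SCMC", "COGS"],           # SC management cost, cost of goods sold
    "Assets":         ["C2C", "ROSCFA", "ROWC"],  # cash-to-cash, return on SC fixed assets / working capital
}

# Example: count the measures attached to each attribute at the top level.
for attribute, metrics in top_level_pmh.items():
    print(attribute, len(metrics), metrics)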
Figure 1. The general structure of applying the SCOR model: SC performance is measured against the reliability, responsiveness, flexibility, costs, and assets attributes at the top level (level 1: process types plan, source, make, deliver, return), the configuration level (level 2: process categories), and the process element level (level 3: decomposed processes). Source: Chan and Qi (2003b) and SCOR Model 8.0, Supply-Chain Council (June 2006)
Before illustrating the SCOR PMH for each hierarchy, we would like to confine the
scope of the proposed model. Even though the SCOR model endorses five processes,
including the return process, the proposed model will not include this process. Min and
Zhou (2002) suggested that the scope of a SC model should strike a compromise between model complexity and reality. The return process in the SCOR model demands an
array of process categories, yet this process has little to do with high-level SCOR
performance measures (level 1 metrics). Considering that the return process will
significantly increase the complexity of the proposed model and that the process makes
little contribution to the overall SC performance, we deliberately exclude this process
from the proposed model.
To construct an example of a SCOR process categories and measures hierarchy for a SC, we assume that the “source”, “make”, and “deliver” processes are all configured as “make-to-order”. The example of the PMH is shown in Figure 2. Although for practical reasons not every metric identified by the SCOR model needs to be included in the PMH, all the metrics are presented to make it clear how the SCOR metrics fit into the hierarchy.
Figure 2. An example of SCOR process categories and measures hierarchy (supply chain level; top-level process types plan (P), source (S), make (M), and deliver (D); configuration-level categories P1-P4, S2, M2, and D2 with their associated reliability, responsiveness, and flexibility metrics)

Figure 3. [Caption not fully recoverable. The figure shows the process element level of the make-to-order configuration: cycle-time and cost metrics for schedule production activities, issue sourced/in-process product, produce and test, package, stage finished product, and release finished product to deliver, together with warranty costs.]

[A further figure fragment groups the SC performance attributes into customer-facing (reliability, responsiveness, flexibility) and internal-facing (costs, assets) attributes.]
Table III. Aggregation of weights of performance attributes

Attribute       Evaluator I            Evaluator II           Evaluator III          AGG                    Defuzzified weight  Normalized weight
Reliability     t(0.154, 0.239, 0.365) t(0.043, 0.089, 0.183) t(0.109, 0.193, 0.340) t(0.102, 0.174, 0.296) 0.190               0.181
Responsiveness  t(0.031, 0.048, 0.074) t(0.029, 0.059, 0.123) t(0.033, 0.058, 0.102) t(0.031, 0.055, 0.100) 0.062               0.059
Flexibility     t(0.009, 0.015, 0.022) t(0.005, 0.012, 0.027) t(0.098, 0.193, 0.376) t(0.038, 0.073, 0.142) 0.084               0.080
Costs           t(0.419, 0.650, 0.991) t(0.425, 0.800, 1.000) t(0.327, 0.524, 0.836) t(0.390, 0.658, 0.942) 0.664               0.631
Assets          t(0.017, 0.048, 0.134) t(0.016, 0.040, 0.101) t(0.016, 0.032, 0.062) t(0.016, 0.040, 0.099) 0.052               0.049

Note: t(l, m, u) denotes a triangular membership function with parameters l, m, u
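Table III does not spell out the aggregation arithmetic, so the following worked equations give one plausible reading that reproduces the Reliability row: a component-wise arithmetic mean of the evaluators’ triangular weights, centroid defuzzification, and normalization over the five attributes. This is our interpretation, not a formula stated by the authors:

\tilde{w}_{\text{Rel}} = \tfrac{1}{3}\bigl[(0.154, 0.239, 0.365) + (0.043, 0.089, 0.183) + (0.109, 0.193, 0.340)\bigr] \approx (0.102, 0.174, 0.296)

w_{\text{Rel}} = \frac{0.102 + 0.174 + 0.296}{3} \approx 0.190, \qquad \bar{w}_{\text{Rel}} = \frac{0.190}{0.190 + 0.062 + 0.084 + 0.664 + 0.052} \approx 0.181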
performance grades from different evaluators were aggregated, and the aggregated results are presented at the end of the table:
. Step 3. Aggregate and defuzzify the measurement results. After the performance
grade sets and the relative weights of all the performance measures had been
calculated, the measurement results of all attributes could be aggregated through
the weighted averaging aggregation method. For example, the aggregated
measurement result of the assets attribute is shown in Table VII. In Table VIII,
the fuzzy performance grades of all attributes were aggregated into those of the
SC. The SC performance index, which is a crisp number, was determined by
multiplying the fuzzy performance grades by their defined numerical meanings,
adding the resultant values, and then dividing by the sum of the fuzzy
performance grades. Based on the above calculation, the performance index of
the case study’s SC for December 2006 was 4.050. This number reveals that the
overall SC performance was not very satisfactory with respect to the ten-point
scale.
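As a concrete illustration of Step 3, the short Python sketch below reproduces the aggregation of the assets attribute (Table VII) and the crisp performance index. It reflects our reading of the method rather than the authors’ code: we assume the fuzzy metric weights are defuzzified by their centroid and normalized before the weighted averaging, and that the six performance grades A to F carry numerical meanings of 10, 8, 6, 4, 2, and 0 on the ten-point scale (values that closely reproduce the indices reported in Tables VII and VIII but are not stated explicitly in this excerpt):

# Sketch of Step 3 for the assets attribute (Table VII). Assumptions (ours, not the
# authors'): centroid defuzzification of triangular weights, weight normalization,
# and grade meanings A..F = 10, 8, 6, 4, 2, 0 on the ten-point scale.

GRADE_MEANINGS = [10, 8, 6, 4, 2, 0]  # assumed numerical meanings of grades A..F

def defuzzify(tfn):
    """Centroid of a triangular fuzzy number t(l, m, u)."""
    l, m, u = tfn
    return (l + m + u) / 3

def aggregate(weights, grade_vectors):
    """Weighted-average aggregation of fuzzy performance grade vectors."""
    crisp = [defuzzify(w) for w in weights]
    total = sum(crisp)
    norm = [c / total for c in crisp]
    return [sum(n * g[i] for n, g in zip(norm, grade_vectors))
            for i in range(len(GRADE_MEANINGS))]

def performance_index(grades):
    """Crisp index: grade memberships weighted by their numerical meanings."""
    return sum(m * g for m, g in zip(GRADE_MEANINGS, grades)) / sum(grades)

# Assets attribute: metric weights and December 2006 grade vectors (C2C, ROSCFA, ROWC).
weights = [(0.169, 0.447, 0.781), (0.052, 0.174, 0.602), (0.109, 0.379, 0.813)]
grades = [(0, 0, 0.077, 0.923, 0.001, 0),
          (0, 0, 0, 0.484, 0.516, 0),
          (0, 0.421, 0.346, 0.234, 0, 0)]

assets = aggregate(weights, grades)
print([round(x, 3) for x in assets])         # close to Table VII's (0, 0.155, 0.158, 0.565, 0.122, 0)
print(round(performance_index(assets), 3))   # about 4.69 (Table VII reports 4.693)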
The defuzzified measurement results for all performance attributes and
metrics are shown in Figure 5. With this information, it is easy for SC managers
to understand and benchmark the whole picture of SC performance. Problematic aspects that weaken SC performance can be identified by tracking the smallest index numbers. The monthly historical
performance indices were calculated and plotted with the recently calculated
index as shown in Figure 6 so that the progress of the SC could be monitored.
7. Conclusions
Chan and Qi’s PMS offers a performance measurement method that aggregates
upwards the performance of activities and processes into the composite performance
index for a SC. The measurement model is comprehensive, systemic, process-oriented,
flexible, dynamic, participative, and capable of reducing the complexity of the PMS.
The model however, requires a tremendous number of performance metrics, which
may hamper its practicality. Since the SCOR model offers a framework of processes
and metrics as well as the standardized definitions, it is easier for practitioners to
identify and select pertinent measures. Our proposed method combines the distinct
advantages of Chan and Qi’s model with the pragmatism of the SCOR model to develop
an alternative approach that is more practical and efficient than using either one in
isolation. While SC performance can be measured at different process levels, it is more
Table IV. Pair-wise comparison matrices and calculated weights of flexibility measures

                   Evaluator I                     Evaluator II                    Evaluator III
                   USCF     USCA     DSCA          USCF     USCA     DSCA          USCF     USCA     DSCA
USCF               (0, 0)   (4, 1)   (6, 0)        (0, 0)   (2, 1)   (4, 1)        (0, 0)   (2, 1)   (2, 1)
USCA               (-4, 1)  (0, 0)   (4, 0)        (-2, 1)  (0, 0)   (2, 2)        (-2, 1)  (0, 0)   (2, 1)
DSCA               (-6, 0)  (-4, 0)  (0, 0)        (-4, 1)  (-2, 2)  (0, 0)        (-2, 1)  (-2, 1)  (0, 0)
Calculated weight
  l                0.588    0.111    0.025         0.322    0.100    0.037         0.289    0.148    0.076
  m                0.817    0.154    0.029         0.665    0.245    0.090         0.563    0.289    0.148
  u                1.000    0.214    0.034         1.000    0.594    0.218         1.000    0.563    0.289
Table V. Aggregation of local weights of performance metrics

Attribute    Metric   Evaluator I            Evaluator II           Evaluator III          AGG local weights
Flexibility  USCF     t(0.588, 0.817, 1.000) t(0.322, 0.665, 1.000) t(0.289, 0.563, 1.000) t(0.400, 0.682, 1.000)
             USCA     t(0.111, 0.154, 0.214) t(0.100, 0.245, 0.594) t(0.148, 0.289, 0.563) t(0.120, 0.229, 0.457)
             DSCA     t(0.025, 0.029, 0.034) t(0.037, 0.090, 0.218) t(0.076, 0.148, 0.289) t(0.046, 0.089, 0.181)
Costs        SCMC     t(0.534, 0.881, 1.000) t(0.534, 0.881, 1.000) t(0.534, 0.881, 1.000) t(0.534, 0.881, 1.000)
             COGS     t(0.072, 0.119, 0.197) t(0.072, 0.119, 0.197) t(0.072, 0.119, 0.197) t(0.072, 0.119, 0.197)
Assets       C2C      t(0.148, 0.563, 1.000) t(0.024, 0.090, 0.342) t(0.335, 0.688, 1.000) t(0.169, 0.447, 0.781)
             ROSCFA   t(0.039, 0.148, 0.563) t(0.065, 0.245, 0.298) t(0.053, 0.130, 0.314) t(0.052, 0.174, 0.602)
             ROWC     t(0.076, 0.289, 1.000) t(0.175, 0.665, 1.000) t(0.075, 0.181, 0.439) t(0.109, 0.379, 0.813)
Table VI. December 2006 performance grades

Metric                        POF     OFCT   USCF   USCA    DSCA    SCMC          COGS          C2C    ROSCFA  ROWC
Unit of measurement           Percent Days   Days   Percent Percent Percent of rev Percent of rev Days   Percent Percent
Performance data (Dec. 2006)  90.1    2.7    15.0   10      25      19.3          65.7          54     4.0     7.7

Evaluator I
Measurement scale
  Perfect                     100     1.0    7.0    50      100     15            40            30     10.0    12.0
  Bottom                      70      5.0    21.0   -30     0       20            80            70     2.0     4.0
Performance score             6.700   5.813  4.283  5.005   2.529   1.323         3.563         3.995  2.559   4.598
Performance grade
  A                           0       0      0      0       0       0             0             0      0       0
  B                           0.350   0      0      0       0       0             0             0      0       0
  C                           0.650   0.906  0.141  0.503   0       0             0             0      0       0.299
  D                           0       0.094  0.859  0.497   0.264   0             0.781         0.998  0.279   0.701
  E                           0       0      0      0       0.736   0.662         0.219         0.002  0.721   0
  F                           0       0      0      0       0       0.338         0             0      0       0

Evaluator II
Measurement scale
  Perfect                     97      1.0    12.0   40      100     19.0          59.5          40     5.5     8.5
  Bottom                      60      7.0    18.0   -30     0       19.5          68.0          65     3.5     5.5
Performance score             8.135   7.209  4.993  5.721   2.529   3.232         2.649         4.392  2.735   7.262
Performance grade
  A                           0.068   0      0      0       0       0             0             0      0       0
  B                           0.932   0.604  0      0       0       0             0             0      0       0.631
  C                           0       0.396  0.497  0.860   0       0             0             0.196  0       0.369
  D                           0       0      0.503  0.140   0.264   0.616         0.325         0.804  0.367   0
  E                           0       0      0      0       0.736   0.384         0.675         0      0.633   0
  F                           0       0      0      0       0       0             0             0      0       0

Evaluator III
Measurement scale
  Perfect                     98      1.3    12.0   21      100     18            58            38     5.9     8.5
  Bottom                      70      6.0    18.0   -19     0       20            70            65     3.0     5.5
Performance score             7.179   7.075  4.993  7.261   2.529   3.308         3.543         4.067  3.610   7.262
Performance grade
  A                           0       0      0      0       0       0             0             0      0       0
  B                           0.589   0.537  0      0.630   0       0             0             0      0       0.631
  C                           0.411   0.463  0.497  0.370   0       0             0             0.034  0       0.369
  D                           0       0      0.503  0       0.264   0.654         0.772         0.966  0.805   0
  E                           0       0      0      0       0.736   0.346         0.228         0      0.195   0
  F                           0       0      0      0       0       0             0             0      0       0

AGG
Performance grade
  A                           0.023   0      0      0       0       0             0             0      0       0
  B                           0.624   0.381  0      0.210   0       0             0             0      0       0.421
  C                           0.354   0.588  0.378  0.578   0       0             0             0.077  0       0.346
  D                           0       0.031  0.622  0.212   0.264   0.423         0.626         0.923  0.484   0.234
  E                           0       0      0      0       0.736   0.464         0.374         0.001  0.516   0
  F                           0       0      0      0       0       0.113         0             0      0       0
efficient for managers to measure performance and compute the composite performance index at the system-wide level. The proposed performance model employs SCOR level 1 metrics because they form a smaller set of more relevant, integrated, balanced, and strategic performance measures. One distinction between this paper and Chan and Qi (2003b) is that while Chan and Qi’s measurement activities span the SC processes, the performance measures in the proposed model span multiple processes. By doing so, a more practical way of measuring SC performance can be achieved without compromising theoretical rigor. SC managers who use the proposed model can quickly monitor the progress of the SC and systematically align the metrics with strategies. One case study is presented to demonstrate the measurement and the application of the performance measurement method.
Table VII. Aggregation of measurement result of the assets attribute

                     C2C                             ROSCFA                        ROWC
Metric weight        (0.169, 0.447, 0.781)           (0.052, 0.174, 0.602)         (0.109, 0.379, 0.813)
Measurement result   (0, 0, 0.077, 0.923, 0.001, 0)  (0, 0, 0, 0.484, 0.516, 0)    (0, 0.421, 0.346, 0.234, 0, 0)
Aggregated result    (0, 0.155, 0.158, 0.565, 0.122, 0)
Performance index    4.693
Table VIII. Aggregation of measurement result of the case supply chain

                     Reliability                     Responsiveness                  Flexibility                        Costs                           Assets
Attribute weight     (0.102, 0.174, 0.296)           (0.031, 0.055, 0.100)           (0.038, 0.073, 0.142)              (0.390, 0.658, 0.942)           (0.016, 0.040, 0.099)
Measurement result   (0.023, 0.624, 0.354, 0, 0, 0)  (0, 0.381, 0.588, 0.031, 0, 0)  (0, 0.053, 0.391, 0.484, 0.072, 0) (0, 0, 0, 0.451, 0.451, 0.097)  (0, 0.155, 0.158, 0.565, 0.122, 0)
Aggregated result    (0.004, 0.147, 0.138, 0.353, 0.297, 0.061)
Performance index    4.050
Figure 5. Measurement results of SC performance, December 2006 (defuzzified indices): Supply Chain 4.050; POF 7.338; OFCT 6.699; USCF 4.756; USCA 5.996; DSCA 2.529; SCMC 2.621; COGS 3.252; C2C 4.152; ROSCFA 2.968; ROWC 6.374
Figure 6. Monthly PI trend (performance index plotted by month, January to December, on the ten-point scale)
References
Angerhofer, B. and Angelides, M.C. (2006), “A model and a performance measurement system for
collaborative supply chains”, Decision Support Systems, Vol. 42 No. 1, pp. 283-301.
Baddeley, A. (1994), “The magical number seven: still magic after all these years?”, Psychological
Review, Vol. 101 No. 2, pp. 353-6.
Basu, R. (2001), “New criteria of performance management: a transition from enterprise to
collaborative supply chain”, Measuring Business Excellence, Vol. 5 No. 4, pp. 7-12.
Beamon, B.M. (1999), “Measuring supply chain performance”, International Journal of
Operations & Production Management, Vol. 19 No. 3, pp. 275-92.
Bechtel, C. and Jayaram, J. (1997), “Supply chain management: a strategic perspective”,
The International Journal of Logistics Management, Vol. 8 No. 1, pp. 15-34.
Boender, C.G.E., de Graan, J.G. and Lootsma, F.A. (1989), “Multi-criteria decision analysis with
fuzzy pairwise comparisons”, Fuzzy Sets and Systems, Vol. 29 No. 2, pp. 133-43.
Brewer, P.C. and Speh, T.W. (2000), “Using the balanced scorecard to measure supply chain
performance”, Journal of Business Logistics, Vol. 21 No. 1, pp. 75-93.
Chan, F.T.S. and Qi, H.J. (2002), “A fuzzy basis channel-spanning performance
measurement method for supply chain management”, Proceedings of the Institution of
Mechanical Engineers Part B: Journal of Engineering Manufacture, Vol. 216 No. 8,
pp. 1155-67.
Chan, F.T.S. and Qi, H.J. (2003a), “An innovative performance measurement method for supply
chain management”, Supply Chain Management: An International Journal, Vol. 8 No. 3,
pp. 209-23.
Chan, F.T.S. and Qi, H.J. (2003b), “Feasibility of performance measurement system for supply
chain: a process-based approach and measures”, Integrated Manufacturing Systems,
Vol. 14 No. 3, pp. 179-90.
Chan, F.T.S., Qi, H.J., Chan, H.K., Lau, H.C.W. and Ip, R.W.L. (2003), “A conceptual model of
performance measurement for supply chains”, Management Decision, Vol. 41 No. 7,
pp. 635-42.
Dasgupta, T. (2003), “Using the six-sigma metric to measure and improve the performance of a supply chain”, Total Quality Management, Vol. 14 No. 3, pp. 355-66.
Davenport, T.H. (1993), Process Innovation: Reengineering Work through Information
Technology, Harvard Business School Press, Boston, MA.
Farris, M.T. II and Hutchison, P.D. (2002), “Cash-to-cash: the new supply chain management
metric”, International Journal of Physical Distribution & Logistics Management, Vol. 32
No. 4, pp. 288-98.
Fawcett, S.E. and Cooper, M.B. (1998), “Logistics performance measurement and customer
success”, Industrial Marketing Management, Vol. 27 No. 4, pp. 341-57.
Folan, P. and Browne, J. (2005), “A review of performance measurement: towards performance
management”, Computers in Industry, Vol. 56 No. 7, pp. 663-80.
Gammelgaard, B. and Vesth, H. (2004), “The SCOR model – a critical review”, Proceedings of
Operations Management as a Change Agent Conferences, INSEAD, pp. 233-41.
Griffis, S.E., Cooper, M., Goldsby, T.J. and Closs, D.J. (2004), “Performance measurement:
measure selection based upon firm goals and information reporting needs”, Journal of
Business Logistics, Vol. 25 No. 2, pp. 95-118.
Gunasekaran, A., Patel, C. and McGaughey, R.E. (2004), “A framework for supply chain
performance measurement”, International Journal of Production Economics, Vol. 87 No. 3,
pp. 333-47.
Gunasekaran, A., Patel, C. and Tirtiroglu, E. (2001), “Performance measures and metrics in a
supply chain environment”, International Journal of Operations & Production
Management, Vol. 21 Nos 1/2, pp. 71-87.
Hausman, W.H. (2004), “Supply chain performance metrics”, in Harrison, T.P., Lee, H.L. and
Neale, J.J. (Eds), The Practice of Supply Chain Management: Where Theory and Application
Converge, Springer Science & Business Media, New York, NY, pp. 61-73.
Hofman, D. (2004), “The hierarchy of supply chain metrics”, Supply Chain Management Review,
Vol. 8 No. 6, pp. 28-37.
Holmberg, S. (2000), “A system perspective on supply chain measurements”, International
Journal of Physical Distribution & Logistics Management, Vol. 30 No. 10, pp. 847-68.
Huang, S.H., Sheoran, S.K. and Keskar, H. (2005), “Computer-assisted supply chain configuration
based on supply chain operations reference (SCOR) model”, Computers & Industrial
Engineering, Vol. 48 No. 2, pp. 377-94.
Kasi, V. (2005), “Systemic assessment of SCOR for modeling supply chains”, Proceedings of the
38th Annual Hawaii International Conference on System Sciences, p. 87.
Lai, K., Ngai, E.W.T. and Cheng, T.C.E. (2002), “Measures for evaluating supply chain
performance in transport logistics”, Transportation Research Part E: Logistics and Transportation Review, Vol. 38 No. 6, pp. 439-56.
Lambert, D.M. and Pohlen, T.L. (2001), “Supply chain metrics”, International Journal of Logistics
Management, Vol. 12 No. 1, pp. 1-19.
Lohman, C., Fortuin, L. and Wouters, M. (2004), “Designing a performance measurement system:
a case study”, European Journal of Operational Research, Vol. 156 No. 2, pp. 267-86.
Min, H. and Zhou, G. (2002), “Supply chain modeling: past, present, and future”, Computers &
Industrial Engineering, Vol. 43 Nos 1/2, pp. 231-49.
Morgan, C. (2004), “Structure, speed and salience: performance measurement in the supply
chain”, Business Process Management Journal, Vol. 10 No. 5, pp. 522-36.
Neely, A., Mills, J., Platts, K., Gregory, M. and Richards, H. (1996), “Performance measurement system design: should process based approaches be adopted?”, International Journal of Production Economics, Vol. 46/47, pp. 423-31.
Novack, R.A. and Thomas, D.J. (2004), “The challenges of implementing the perfect order
concept”, Transportation Journal, Vol. 43 No. 1, pp. 5-16.
Park, J.H., Lee, J.K. and Yoo, J.S. (2005), “A framework for designing the balanced supply chain
scorecard”, European Journal of Information Systems, Vol. 14 No. 4, pp. 335-46.
Reisinger, H., Cravens, K.S. and Tell, N. (2003), “Prioritizing performance measures within the
balanced scorecard framework”, Management International Review, Vol. 43 No. 4,
pp. 429-38.
Robson, I. (2004), “From process measurement to performance improvement”, Business Process
Management Journal, Vol. 10 No. 5, pp. 510-21.
Saaty, T.L. (1990), Multicriteria Decision Making: The Analytic Hierarchy Process, RWS
Publications, Pittsburgh, PA.
Schmitz, J. and Platts, K.W. (2004), “Supplier logistics performance measurement: indications
from a study in the automotive industry”, International Journal of Production Economics,
Vol. 89 No. 2, pp. 231-43.
Schroeder, R.G., Anderson, J.C. and Scudder, G.D. (1986), “White collar productivity
measurement”, Management Decision, Vol. 24 No. 5, pp. 3-7.
Supply-Chain Council (2006), “Supply-chain operations reference-model version 8.0”, available at:
www.supply-chain.org (accessed 16 August 2006).
van Hoek, R.I. (1998), “‘Measuring the unmeasurable’ - measuring and improving performance in the supply chain”, Supply Chain Management: An International Journal, Vol. 3 No. 4, pp. 187-92.
Corresponding author
John C.S. Tang can be contacted at: tang@ait.ac.th