

A conceptual model of performance measurement for supply chains: alternative considerations

Adisak Theeranuphattana and John C.S. Tang
School of Management, Asian Institute of Technology, Pathumthani, Thailand

Received May 2006; revised February 2007; accepted April 2007

Abstract
Purpose – This paper revisits the recent work of Chan and Qi which proposed an innovative
performance measurement method for supply chain management. While the measurement method has
many advantages, it can be unwieldy in practice. This paper aims to address these limitations and to
propose a more user-friendly alternative performance measurement model.
Design/methodology/approach – The performance measurement model described in this paper is
a combination of two existing methods: Chan and Qi’s model and the supply chain operations reference
(SCOR) model. To demonstrate the applicability of the combined approach, actual SCOR level 1
performance data and the measurement information from a case supply chain (SC) are collected and
processed by Chan and Qi’s measurement algorithm.
Findings – These two methods complement each other when measuring SC performance.
Originality/value – This paper develops a practical and efficient measurement model that can
resolve SC performance problems by incorporating the strengths of two different measurement models
to create a synergistic new model.
Keywords Supply chain management, Performance measurement (quality), Fuzzy logic
Paper type Research paper

1. Introduction
Performance measurement is critical to the success of almost any organization because it
creates understanding, molds behavior and improves competitiveness (Fawcett and
Cooper, 1998). As companies move towards supply chain management (SCM), it
becomes necessary to measure the performance of supply chains (SCs). Traditional
measurement approaches, however, have become less relevant to SCM because they are
too narrow in scope to address the broad range of measurement activities. SCM has
evolved over the last decade, as reflected in a dramatic increase in the publication of SCM
theories and practices. However, the topic of SC performance measurement has not
received adequate attention from researchers or practitioners (Beamon, 1999; Holmberg,
2000; Gunasekaran et al., 2001; Chan and Qi, 2003a; Chan et al., 2003; Gunasekaran et al.,
2004; Schmitz and Platts, 2004; Folan and Browne, 2005; Park et al., 2005).
Chan and Qi (2002, 2003a, b) and Chan et al. (2003) proposed an innovative performance
measurement system (PMS) for SCs (hereafter "Chan and Qi's model"), which includes a
conceptual performance model, a performance measurement and aggregation method,
and example performance measures. Chan and Qi's model is regarded as a significant
development towards measuring SC performance because the model provides
managers with many useful techniques for analyzing and assessing SC performance.
The model can quantify the relative importance of both SC processes and measures
with respect to SC strategies. This technique enables managers to make connections
between strategies and measurements and to concentrate on key processes and
measures that have a significant impact on the overall performance of a SC. The PMS
also offers managers a way to aggregate performance results into a composite index
that depicts the overall performance of a SC. This index offers managers an efficient
means of analyzing and benchmarking SC performance (Chan and Qi, 2003a).
Practitioners might find it difficult to apply the model, however, because it measures
local performance and therefore involves too many performance measures and too
much data. Furthermore, the lack of clarity and the inability to reach a consensus on
how to define SC metrics are two barriers that can prevent the successful
implementation of the model.
To solve these problems, new developments such as the supply chain operations
reference (SCOR) model can be used. The SCOR model is a well recognized SC model
used in various industries around the world. The model allows SC partners to “speak a
common language” because it provides standardized definitions for processes, process
elements, and metrics. Since the SCOR model offers standardized definitions of
performance metrics for the SC, it is easier for managers to identify relevant measures
and use them. More and more companies have adopted SCOR performance metrics as
standard criteria for evaluating their SC performance. Though widely used in practice,
the SCOR model is largely ignored by academia (Gammelgaard and Vesth, 2004; Kasi,
2005). In addition, some authors (Gammelgaard and Vesth, 2004; Angerhofer and
Angelides, 2006) have noted that although oriented towards process and efficiency, the
SCOR model is not oriented towards strategy.
Given the contributions of Chan and Qi’s model and the SCOR model’s pragmatic
advantages, this paper aims to develop a performance measurement model for SCs that
manages to both incorporate the advantages of the two models and mitigate their
disadvantages by creating a more practical and efficient model. Chan and Qi’s
performance measurement method can be divided into two separate models:
(1) The structural performance measurement framework that links the SC
hierarchical structure with performance measures.
(2) The measurement and aggregation algorithm that converts the performance
data at the operational level into the meaningful composite index.

This paper focuses primarily on developing an alternative framework to the first model.
The rest of this paper is organized as follows. The next section introduces the SCOR
model. The third section reviews Chan and Qi’s measurement method and its
advantages. The fourth section provides the rationale for the proposed performance
model. The fifth section highlights this paper’s major contributions and presents the
proposed performance model for SCs. The sixth section presents a case study to
demonstrate the use of the proposed performance model in combination with Chan and
Qi’s measurement method. The paper ends with a short summary.

2. SCOR model
The SCOR model was introduced in 1996 and has been endorsed by the Supply-Chain
Council (SCC), a global organization of firms interested in SCM. The SCOR model is
a business process reference model, which provides a framework (toolkit) that includes
SC business processes, metrics, best practices, and technology features. The SCOR
model attempts to integrate the concepts of business process reengineering,
benchmarking, process measurement, and best practice analysis and apply them to
SCs. The SCOR model offers users the following benefits:
. standard descriptions of management processes that make up the SC;
. a framework of relationships among the standard processes;
. standard metrics to measure process performance;
. management practices that produce best-in-class performance; and
. standard alignment to software features and functionality that enable best
practices.

To represent the SC, the SCOR model uses a “building block” approach based on five
core processes – plan, source, make, deliver, and return – altogether called level 1
processes. The “plan process” balances the demand and supply to best meet the
sourcing, manufacturing, and delivery requirements. The “source process” procures
goods and services to meet planned or actual demand. The “make process” transforms
a product to a finished state to meet planned or actual demand. The “deliver process”
provides finished goods and services to meet planned or actual demand, typically
including order management, transportation management, and distribution
management. The “return process” is associated with returning or receiving
returned products for any reason.
The SCOR model is divided into three standardized levels of process detail. The top
level (level 1) defines the scope and content of the SC by using the five core processes.
The configuration level (level 2) specifies configuration of the SC at the process level by
using a tool kit of process categories. At level 2, processes are configured in line with
operations strategies. For example, “make” can be configured into make-to-stock (M1),
make-to-order (M2), or engineer-to-order (M3). The process element level (level 3)
defines a process flow diagram with process elements or specific tasks for each process
category in level 2. For example, M2 embraces schedule production activities (M2.1),
issue sourced/in-process product (M2.2), produce and test (M2.3), etc.
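
To make this three-level decomposition concrete, it can be pictured as a nested structure. The sketch below is purely illustrative: the process codes and labels are quoted from the text above, but the dictionary representation is our own device, not part of the SCOR specification.

# Illustrative sketch (not part of SCOR itself): the decomposition of the
# "make" process type, written as a nested Python dictionary.
# Level 1 = process type, level 2 = process category, level 3 = elements.
scor_make = {
    "Make": {                                          # level 1: process type
        "M1: Make-to-Stock": {},                       # other configurations
        "M2: Make-to-Order": {                         # level 2: category
            "M2.1": "Schedule production activities",  # level 3: elements
            "M2.2": "Issue sourced/in-process product",
            "M2.3": "Produce and test",
        },
        "M3: Engineer-to-Order": {},
    },
}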
The SCOR model advocates hundreds of performance metrics used in conjunction
with five performance attributes: reliability, responsiveness, flexibility, costs, and
assets. Note that quality is excluded here. Hausman (2004) explained that in modern
SCM, quality is taken as a given and that factors in quality management and
improvement are somewhat separate from those in SCM development. According to
the Supply-Chain Council (2006), five attributes of SC performance are defined as
follows:
(1) SC reliability. The performance of the SC in delivering the correct product, to the
correct place, at the correct time, in the correct condition and packaging, in the
correct quantity, with the correct documentation, to the correct customer.
(2) SC responsiveness. The speed at which a SC provides products to the customer.
(3) SC flexibility. The agility of a SC in responding to marketplace changes to gain
or maintain competitive advantage.
(4) SC costs. The costs associated with operating the SC.
(5) SC asset management. The effectiveness of an organization in managing assets
to support demand satisfaction. This includes the management of all assets:
fixed and working capital.

SCOR performance metrics are classified not only by five performance attributes but
also by all processes at the three levels. The SCOR model Version 8.0 endorses ten
performance metrics for its level 1, as shown in Table I. These metrics are designed to
provide a view of overall SC performance. Level 2 metrics are the measures of process
categories, and level 3 metrics are those of process elements. The SCOR model levels 1
and 2 metrics keep management focused, while level 3 metrics are used to diagnose
variations in performance against plan.

Table I. SCOR performance attributes and associated level 1 metrics (Source: Supply-Chain Council, 2006)

Performance attribute  Level 1 metric                Metric definition
Reliability            Perfect order fulfillment     The percentage of orders meeting delivery performance with complete and accurate documentation and no delivery damage
Responsiveness         Order fulfillment cycle time  The average actual cycle time consistently achieved to fulfill customer orders
Flexibility            Upside SC flexibility         The number of days to achieve an unplanned sustainable 20 percent increase in quantities delivered
                       Upside SC adaptability        The maximum sustainable percentage increase in quantity delivered that can be achieved in 30 days
                       Downside SC adaptability      The reduction in quantities ordered sustainable at 30 days prior to delivery with no inventory or cost penalties
Costs                  SCM cost                      The sum of the costs associated with the SCOR level 2 processes to Plan, Source, Deliver, and Return
                       Cost of goods sold            The cost associated with buying raw materials and producing finished goods, including direct costs (labor, materials) and indirect costs (overhead)
Assets                 Cash-to-cash cycle time       The time it takes for an investment made to flow back into a company after it has been spent for raw materials
                       Return on SC fixed assets     The return an organization receives on its invested capital in SC fixed assets, including the fixed assets used in Plan, Source, Make, Deliver, and Return
                       Return on working capital     A measurement which assesses the magnitude of investment relative to a company's working capital position versus the revenue generated from a SC
3. Chan and Qi's measurement method
In this section, we review Chan and Qi's performance measurement method and its
advantages. As mentioned earlier, Chan and Qi's measurement approach can be
separated into two parts: the framework of performance measures and the
measurement and aggregation algorithm.

3.1 Framework of performance measures
Chan and Qi’s model was developed through the concept of performance of activity
(POA) – an approach to identifying and employing performance measures. The POA
concept is based on a SC process-based model, by which the SC is viewed as a set of
integrated business processes across organizational boundaries. A process is a
structured set of activities that perform specific functions and produce specific output
(Davenport, 1993). In the model, the SC is represented by six core business processes:
supplying, inbound logistics, manufacturing, outbound logistics, marketing and sales,
and end customer processes. These processes are then divided into sub-processes or
further into activities to identify their detailed performance measures. Chan and Qi
(2003b) maintained that the aggregated results of the performance of activities and
sub-processes could depict the performance of their corresponding process. Thus, the
measurement of the higher process performance is converted into that of the activities
and lower processes performance.
Chan and Qi explained that any process or activity would consume different
resources, perform a specific set of tasks, and add value to materials and products. The
used resources, specific operations, and expected outcomes will indicate the
performance of a process or activity. Typically, these can be measured according to
several aspects of performance. Managers, nevertheless, always find it difficult to select
the appropriate measures because there are so many performance measures for SCs
(Beamon, 1999; Hofman, 2004), and the failure to use the right measures may jeopardize
the competitiveness of firms and that of their entire SC (Griffis et al., 2004). To assist in
identifying and selecting measures for processes or activities, therefore, the POA
concept employs a board of performance metrics – called the metrics board – which
functions as the referential categories of performance measures. Chan and Qi (2003b)
suggested the following performance categories: cost, time, capacity, capability
(including effectiveness, reliability, availability, and flexibility), productivity,
utilization, and outcome. Therefore, managers should be able to identify and to select
performance measures systematically by referring to the balanced dimensions of the
metrics board. After corresponding measures are identified, processes and measures
will be grouped into the processes and measures hierarchy (PMH) – the hierarchical
framework composed of SC processes, activities, and measures linked with activities.

3.2 Measurement and aggregation algorithm
Since the performance that counts is that of the entire SC – neither that of single activities
nor that of particular processes (Holmberg, 2000) – there is the need to develop a
measurement and aggregation algorithm to convert the detailed performance of activities
and processes into the performance of the SC. Chan and Qi (2003a, b) proposed an
innovative measurement method that converts performance data from various
measures into a meaningful composite index for a SC. The methodology developed is
based on the fuzzy set theory to address the imprecision in human judgments. A geometric
scale of triangular fuzzy numbers by Boender et al. (1989) is employed to quantify the
relative weights of performance measures. Performance data are transformed into fuzzy
measurement results by two subsequent mappings. First, the performance data are
converted into performance scores by means of the proportional scoring technique, by
which the two end points of the measurement scale for each measure are defined and
scaled so that the score ranges from 0 to 10. Second, the performance score is translated
into a fuzzy performance grade set, defined by triangular fuzzy numbers. The weighted
average method is used to aggregate the fuzzy measurement results and to defuzzify the
fuzzy performance grades into a crisp (exact) number ranging from zero to ten, called the
performance index (PI) (for detailed discussions on the measurement and aggregation
algorithm, see Chan and Qi, 2003a; Chan et al., 2003). The performance index can be
used for benchmarking the performance of the SC system and for motivational purposes.
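
The mechanics of the two mappings and the final defuzzification can be summarized in a short sketch. The code below is our own illustration, not Chan and Qi's implementation; the grade centres (10, 8, 6, 4, 2, 0 for grades A-F) and the half-width of 2 are inferred from the worked numbers in the case study of Section 6.

# Sketch of the two mappings and the defuzzification (illustrative;
# grade centres and half-width inferred from the case-study tables).
GRADE_CENTRES = [10, 8, 6, 4, 2, 0]  # A ("about 10") ... F ("about 0")

def performance_score(data, bottom, perfect):
    # Proportional scoring: map raw data onto a 0-10 scale. The (bottom,
    # perfect) pair carries the direction, so "lower is better" measures
    # simply have bottom > perfect.
    score = 10.0 * (data - bottom) / (perfect - bottom)
    return max(0.0, min(10.0, score))

def fuzzy_grade_set(score):
    # Membership of the crisp score in each triangular grade A..F.
    return [max(0.0, 1.0 - abs(score - c) / 2.0) for c in GRADE_CENTRES]

def defuzzify(grades):
    # Weighted-average defuzzification back to a crisp 0-10 index.
    return sum(g * c for g, c in zip(grades, GRADE_CENTRES)) / sum(grades)

# Example: 90.1 percent perfect order fulfillment on a (bottom = 70,
# perfect = 100) scale scores 6.7 and grades (0, 0.35, 0.65, 0, 0, 0),
# matching Evaluator I in Table VI of Section 6.
print(fuzzy_grade_set(performance_score(90.1, bottom=70, perfect=100)))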

3.3 Strengths of Chan and Qi's model
Chan and Qi's innovative measurement model has several advantages for measuring SC
performance. First, it is a comprehensive PMS for SCs that offers both structural and
procedural performance measurement frameworks. POA provides a structural
framework, which helps managers to identify and select individual measures, whereas
the measurement and aggregation method provides a procedural framework, which
describes the process of developing a performance measure (Folan and Browne, 2005).
Within the SCM literature, the structural and procedural performance measurement
frameworks for SCs are usually developed in isolation with an unbalanced focus on the
structural framework.
Second, Chan and Qi’s model illustrates a high degree of systems thinking. For
example, the SC is viewed as an integrated entity; the scope of measurement activities
is wide enough and beyond the organizational boundaries; local measurement
activities are considered part of a greater whole so that the properties on the whole are
recognized; and the measurement method makes it possible to make the trade-offs
between components (Holmberg, 2000). The benefits of adopting the systems thinking
approach to SCM are numerous. For example, it facilitates SC process integration and
global optimization, induces desired behavior and helps to improve SC performance
(Holmberg, 2000; Chan and Qi, 2003a, b; Chan et al., 2003).
Third, the model adopts a process-based approach to analyze the SC structure and
to build a PMS for SCs. The process-based approach enables managers to see where SC
performance could be improved so that they could focus attention on achieving a
higher level of performance. According to Fawcett and Cooper (1998), most of the
executives interviewed indicated that it is impossible to obtain the required level
of integration without effective, process-oriented measures.
Fourth, Chan and Qi used many techniques to reduce PMS complexity, such as:
hierarchical structuring of processes and measures, clustering of metrics into various
perspectives, and the aggregation of various performance measures into one number
(Lohman et al., 2004). Indeed, hierarchical structuring into homogeneous clusters of
factors is a common way for humans to handle complexity
(Saaty, 1990). The clustering of metrics helps to create a clearer connection between
metrics and strategies (Lohman et al., 2004) and the performance index makes it easier
for senior managers to comprehend the complex performance of the SC.
Fifth, the model is flexible and concerned with the dynamics of SCs. On the one
hand, the model can be applied to virtually any SC. On the other hand, it can be
adjusted or may be supplanted by other SC process-based models. The assessment of
performance is also flexible because it can be undertaken at different hierarchical levels
(Chan and Qi, 2003b). Measures and processes are given weights or priorities that
correspond to SC objectives and strategies. As change occurs, measures can be revised,
and the weights can be reevaluated. The use of weights enables managers to focus on
strategic initiatives that greatly influence performance improvement.
Finally, the model supports participative decision making. Because of the extensive
scope and span of the SC, the SC development and measurement is usually a group
effort. In Chan and Qi’s model, a performance measurement team (PMT) is suggested
to enhance the quality of the performance evaluation process through the coordination
of multiple members and the synthesis of their judgments.

4. Rationale for the proposed model
It is suggested that SC managers take advantage of Chan and Qi’s model to measure SC
performance. Those who use POA, however, may have difficulty constructing the PMH
because it usually involves a huge number of measures and because the metrics board
provides only the categories of performance measures, not the measures themselves.
Alternatively, the SCOR model offers users an array of standardized metrics as well as
performance attributes. Since the SCOR model serves much the same function as – and
has several things in common with – POA, it is possible to combine the SCOR model with Chan
and Qi’s model to overcome the above obstacle.
The rationale for combining the two models is that first, the SCOR model is
compatible with the POA method. Both are developed from the process-based view of
the SC. Chan and Qi employ a holistic system-thinking perspective to develop the PMS
for SCs. Like Chan and Qi’s model, the SCOR model indicates a high degree of systems
thinking (Holmberg, 2000). Both methods help managers to identify and select
performance measures systematically to analyze SC performances. SCOR performance
attributes are comparable to the POA metrics board in that they are frameworks for the
selection of performance measures for SC systems.
Second, the SCOR model, equipped with the framework of SC processes and metrics
and their definitions, makes it easier for practitioners to structure the PMH. Even
though Chan and Qi (2003b) provided guidance for types of metrics and some examples
of performance measures, it would be helpful for practitioners to have a framework and
a comprehensive list of metrics for SCs so that they could readily identify the pertinent
metrics and would not miss important ones. Time constraints can make it particularly
difficult for a single research study to come up with a complete list. Since its
introduction in 1996, the SCOR model has undergone several major revisions in
response to changing needs, and its standardized metrics for SCs have become more
comprehensive. Therefore, to better identify and select measures, practitioners may
use the SCOR model in addition to the POA method.
Third, measures should be clearly defined (Neely et al., 1996). The lack of both
clarity and a common definition for SC metrics are obstacles to developing SC
measurement systems. The absence of standardized metrics and terminology not only
prevents organizations from comparing themselves to others but also prevents
companies from adopting the best practices of other successful companies (Kasi, 2005).
Measurement should be comprehensible to all SC members and should offer very little
chance of manipulation (Schroeder et al., 1986; Gunasekaran et al., 2004). Nevertheless,
the problems of ambiguous and inconsistent definitions of SC metrics are ubiquitous.
For example, people in different areas and cultures may understand the same measure
differently (Basu, 2001); the definition of a perfect order varies even within the same
firm (Novack and Thomas, 2004); there is no consensus as to precisely what total
logistics cost is (Fawcett and Cooper, 1998); and the definitions of cash-to-cash
time are inconsistent in the academic literature (Farris II and Hutchison, 2002). A
possible way to avoid – or to at least alleviate – these problems is to use the SCOR
model for modeling the SC and its measurement system because the model provides
the glossary of metrics that exist in a SC.
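
To illustrate the kind of ambiguity at stake, consider cash-to-cash cycle time. One common formulation, along the lines of Farris II and Hutchison (2002), is:

cash-to-cash cycle time = inventory days of supply + days of sales outstanding - days of payables outstanding

but other definitions shift which terms are included or how they are computed, which is precisely the inconsistency noted above; a standardized glossary removes this ambiguity.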
Finally, SC performance measures should be linked with strategies (Holmberg,
2000; Lambert and Pohlen, 2001; Morgan, 2004). Nevertheless, as mentioned in the
introduction, a weakness of the SCOR model is arguably that it mainly focuses on
processes and efficiency but not on strategy. Scholars have often criticized the concept
of process management for failing to come up with strategies. Based on the process
management concept, the SCOR model has been criticized for not explicitly discussing
strategies in relation to the SC (Gammelgaard and Vesth, 2004; Angerhofer and
Angelides, 2006). For that reason, firms that invest in implementing the SCOR model
may not use it effectively, mainly because managers have difficulty relating it to SC
strategies. Therefore, they may need a quantitative tool to link SCOR metrics to SC
strategies. By using Boender et al.’s (1989) measurement methodology suggested by
Chan and Qi (2003a), managers can quantify – from their judgments – the weights of
influence of SC strategy on individual performance measures. After applying this
method to the SCOR model, managers can determine the degree to which performance
metrics contribute towards the success of a particular strategy.
In sum, since the two methods share similar characteristics and complement
each other in terms of measuring SC performance, a combination of both methods
offers an ideal alternative.

5. Proposed model
5.1 General structure of SCOR PMH
The above discussion provides the rationale for the proposed model. In this section, we
show how the SCOR model and Chan and Qi’s methodology can be combined and shed
some light on measuring SC performance at the top level. This paper uses the SCOR
model to represent the SC system and to identify performance measures. Managers
need to select a pre-specified set of processes, metrics, and their relationships from the
SCOR model to map the SC measurement system into the PMH.
Performance can be measured at several levels. To build a comprehensive view of
SCOR performance measurement, Figure 1 shows the general structure of SCOR PMH,
comprising three hierarchical levels: the top level, a configuration level, and a process
element level. Each level could be analyzed to reflect the performance of different
operational and management levels. For example, managers may assess overall SC
health at the top level, diagnose problems at the configuration level, and identify
corrective actions at the process element level. Managers may also construct the PMH
for any particular level.
[Figure 1. The general structure of applying the SCOR model: at each of the three levels – level 1 (top level, process types: plan, source, make, deliver, return), level 2 (configuration level, process categories), and level 3 (process element level, decomposed processes) – SCOR metrics are grouped by the five performance attributes (reliability, responsiveness, flexibility, costs, assets). Source: Chan and Qi (2003b) and SCOR Model 8.0, Supply-Chain Council (June 2006).]

Before illustrating the SCOR PMH for each hierarchy, we would like to confine the
scope of the proposed model. Even though the SCOR model endorses five processes,
including the return process, the proposed model will not include this process. Min and
Zhou (2002) suggested that the scope of a SC model should be a compromise between
model complexity and reality. The return process in the SCOR model demands an
array of process categories, yet this process has little to do with high-level SCOR
performance measures (level 1 metrics). Considering that the return process will
significantly increase the complexity of the proposed model and that the process makes
little contribution to the overall SC performance, we deliberately exclude this process
from the proposed model.
To construct an example of a SCOR process categories and measures hierarchy for a
SC, we assume that the "source", "make", and "deliver" processes are all
configured to "make-to-order". The example of the PMH is shown in Figure 2.
Although for practical reasons not every metric identified by the SCOR model needs to
be included in the PMH, all the metrics are presented to make it clear how the SCOR
model (Version 8.0) proposes to measure performance at the configuration level.

[Figure 2. An example of SCOR process categories and measures hierarchy. The top-level processes Plan, Source, Make, and Deliver are configured into process categories (P1: Plan Supply Chain, P2: Plan Source, P3: Plan Make, P4: Plan Deliver, S2: Source Make-to-Order Product, M2: Make-to-Order, D2: Deliver Make-to-Order Product), and each category is linked to its SCOR level 2 metrics, grouped by the five performance attributes.]

The metrics relevant to a particular SC can be chosen later based on the actual measurement
requirements, possibly through pair-wise comparisons. It is worth mentioning that no
more than seven – plus or minus two – elements in the hierarchy are compared
concurrently because humans have trouble dealing effectively with more than seven to
nine things at a time (Saaty, 1990). If a single cluster contains more than seven
elements, we should break it down into hierarchical clusters of comparable elements –
for example, the cluster of flexibility metrics, that of cost metrics, and that of asset
metrics. Similarly, Figure 3 shows an example of part of the SCOR process elements
and measures hierarchy under the “make-to-order” process category (M2). It is also
worth noting that the SCOR model makes it easier for managers to identify pertinent
metrics.
Both the PMHs in Figures 2 and 3 enable managers to analyze the detailed metrics
at the operation nodes to reveal the root causes of problems such as high costs,
long cycle times, and poor delivery performance. Once the cause is identified, managers
can design and implement the interventions or best practices that will correct it
with the most efficient use of resources. Nevertheless, managers should go at least
one step further by computing the performance index as suggested by Chan and Qi
(2003a).
It is administratively cumbersome and impractical, however, for a PMT
(practitioners in particular) to use these PMHs to construct the performance index
because the hierarchy requires a huge number of performance metrics on account of
measuring local performance. Since, the POA concept as well as the SCOR model
begins with breaking down the SC into detailed elements, there is a tendency to
concentrate on detailed performances of activities to compute the performance index.
Although Chan and Qi’s measurement method provides a brief normalized
performance index and thus makes it easy to interpret the end performance results,
managers have to wade through lots of metrics and hundreds of pair-wise comparisons
at the start. Measuring local performance involves so many measures and data that it
may cause confusion, since it is not easy for a person to process such a large amount of
information (Saaty, 1990; Baddeley, 1994; Reisinger et al., 2003). It may also incur
unnecessary costs unless measures are used effectively (Robson, 2004), and may add
little value had problems already been solved (Holmberg, 2000). Chan and Qi (2003b)
seemed to recognize these limitations, noting that POA does not confine itself
to measuring only local performance.

5.2 Model for measuring SC performance


The previous discussion reveals the need for practitioners to measure SC performance
at the high level by using a limited number of critical measures. Companies that have
many performance measures often fail to realize that they would be better off
measuring performance with a few good metrics that provide the most balanced view
of end-to-end SC performance (Gunasekaran et al., 2001; Hofman, 2004). Hausman
(2004, p. 63) suggests: "it is critical therefore to focus management attention on the
performance of the SC as an integrated whole, rather than as a collection of separate
processes or companies”. Several authors, such as van Hoek (1998), Gunasekaran et al.
(2001), Lai et al. (2002), Dasgupta (2003), and Gunasekaran et al. (2004) seem to agree
with Hausman.
[Figure 3. An example of part of the SCOR process elements and measures hierarchy under the make-to-order process category (M2): the process elements M2.1 (schedule production activities), M2.2 (issue sourced/in-process product), M2.3 (produce and test), M2.4 (package), M2.5 (stage finished product), and M2.6 (release finished product to deliver), each linked to its SCOR level 3 metrics – for example schedule achievement, yield, yield variability, warranty costs, element cycle times, element costs, capacity utilization, and asset turns – grouped by performance attribute.]
To measure the performance of the SC as an integrated whole, we may use integrated
measures – the measures of an entire process or series of processes across functional
areas (Bechtel and Jayaram, 1997). Integrated measures help SC members to avoid local
optimization in the SC (van Hoek, 1998), and provide them an incentive to work with
others to improve performance on these measures (Bechtel and Jayaram, 1997).
Scholars (Bechtel and Jayaram, 1997; van Hoek, 1998; Brewer and Speh, 2000) have
suggested combining integrated and non-integrated measures in the SC: the former
allow managers to assess overall SC performance, while the latter help to detect
problems as they occur.
SCOR level 1 performance metrics are considered integrated measures. According
to the Supply-Chain Council (2006), "level 1 metrics are primary, high level measures
that may cross multiple SCOR processes". SCOR level 1 metrics assess performance
on a SC basis since they measure performance with inputs from, and outputs to, member
firms in the SC (Lai et al., 2002).
In order to adequately capture the performance of the SC, the proposed model uses
all ten SCOR level 1 metrics, although in practice most companies typically
select four to six of them to focus on (Huang et al., 2005). The ten metrics form a
balanced set: there is at least one performance metric from each of the five performance
attributes. The five SCOR performance attributes consider both internal and external
viewpoints. Reliability, flexibility, and responsiveness are customer-facing performance
measures, whereas costs and assets are a firm's internal-facing performance measures
(Lai et al., 2002).
As shown in Figure 4, the proposed model, by incorporating SCOR performance
attributes and level 1 metrics into Chan and Qi’s model, will shift the emphasis from
operational to strategic considerations. The model provides a more efficient way for
policy makers to construct the performance index since the model requires only ten
integrated measures identified by the SCOR model. Although the proposed model does
not focus on detailed measurement activities, it is developed in accordance with the
fundamentals of systems thinking. To illustrate, the SC is viewed as a whole entity,
and the measurement system spans across the whole. The proposed model addresses
multi-dimensional aspects of SC performance, thus providing balanced, comprehensive
performance information. Performance attributes are regarded as part of a greater
whole in order for all properties along the chain to be realized. It is also possible to
make the high-level trade-offs among performance attributes and among measures.
[Figure 4. The proposed model for measuring SC performance at the top level. SC performance is decomposed into the customer-facing attributes (reliability, responsiveness, flexibility) and the internal-facing attributes (costs, assets), measured by the ten SCOR level 1 metrics: perfect order fulfillment (POF), order fulfillment cycle time (OFCT), upside SC flexibility (USCF), upside SC adaptability (USCA), downside SC adaptability (DSCA), SCM cost (SCMC), cost of goods sold (COGS), cash-to-cash cycle time (C2C), return on SC fixed assets (ROSCFA), and return on working capital (ROWC).]

The model would work in the following way. First, one must adopt a clearly defined
SC strategy – efficient or responsive – since the adopted strategy affects the importance
of performance attributes and measures. Next, one needs to prioritize performance
attributes and measures to align with one's adopted strategy because it is not possible
for a SC to achieve excellent performance in all aspects. An efficient SC should not
excessively emphasize flexibility and responsiveness metrics because doing so could
deviate from its strategy; likewise, a responsive SC should not overly accentuate cost
factors (Hausman, 2004). After performance data are collected, the remaining task
involves using the measurement and aggregation algorithm to convert the performance
data into the composite index that depicts the overall performance of a SC. The next
section demonstrates the use of Chan and Qi's measurement and aggregation algorithm
to process the information from the proposed performance model.
6. Case study
6.1 Background information
The case study selected for this paper was one of Thailand’s largest cement producers,
in terms of total capacity and market share. The proposed approach was applied to its
manufacturing SC to assess its overall performance. The firm had used the SCOR
model to evaluate its SC for many years; however, since the latest version of the model
had never been applied, some of the SCOR level 1 metrics had never been monitored.
Thus, some new measures, particularly those related to responsiveness, flexibility, and
asset management, were calculated for this study.
The PMT for this study comprised three evaluators: one SC analyst, one
logistics planning manager, and one supply-chain optimization manager. The simple
average was used to aggregate individual judgments into the judgment of the PMT.
After obtaining the performance data from the company, one author met the PMT to
explain the significance of the SCOR level 1 metrics and the proposed model, and to collect
the requisite information. The team was then presented with the monthly performance
data for 2006. The data inputs to Chan and Qi's measurement algorithm were
collected as listed below:
. The pair of parameters of the qualitative preference ratios and the degree of
fuzziness (δij, αij) provided by each evaluator to denote the pair-wise comparison
between the ith and jth attributes and metrics.
. The measurement scale for each metric, set by each evaluator in terms of (bottom,
perfect), in which the bottom value represents the intolerable level of
performance and the perfect value indicates totally satisfactory performance.

6.2 Illustrative procedures
The measurement algorithm was carried out by means of a Microsoft Excel
spreadsheet. The brief illustrative procedures for applying Chan and Qi's
measurement algorithm to the SCOR-based performance model were divided into the
following three major steps:
. Step 1. Determine the relative importance of attributes and measures. The
relative weights of particular attributes and measures were calculated from
fuzzy pair-wise comparisons with respect to the changing SC objectives and
strategies. The weights, in terms of the triangular fuzzy number t(l, m, u), were
calculated following the geometric scale of triangular fuzzy numbers by Boender
et al. (1989); a sketch of this computation appears after Step 3. An illustrative
pair-wise comparison matrix and the calculated attribute weights by
Evaluator I are provided in Table II.
The weights of performance attributes supplied by all evaluators are shown
in Table III. These weights were then aggregated through the simple averaging
aggregation method. For easier interpretation, the fuzzy vector can be
standardized in terms of a crisp numeric value: when the fuzzy number is
triangular, the defuzzified value can be calculated by taking the average of l, m,
and u. The resulting values are further normalized to sum to unity to yield the
final weights. The result shows that the PMT considered costs the most
important attribute.
Regarding the weights of measures, the weights of POF and OFCT could be
taken as those of reliability and responsiveness, as each of these attributes has
only one proxy measure. Fuzzy pair-wise comparisons were conducted to
quantify the relative weights of metrics under flexibility, costs, and assets.
Examples of the pair-wise comparison matrices comparing flexibility metrics are
presented in Table IV. The comparison process was also done for metrics under
the cost and asset categories, though it is not shown here. For the sake of brevity,
Table V shows only the local weights of measures supplied by individual
evaluators as well as the aggregated weights.
. Step 2. Determine the performance scores and assess fuzzy performance grades.
This step involves converting the performance data into the fuzzy measurement
result (the fuzzy performance grade set). To do so, two mappings of scales need to
be carried out. First, the diverse performance data need to be converted into a
common unit of performance score. Second, the performance score needs to be
converted into the fuzzy performance grade set. For the first mapping, the
measurement scale for each measure is set independently by each evaluator in the
form of an interval (bottom, perfect). With the measurement scales, the
performance data are mapped into performance scores, crisp numbers ranging
from zero to ten. For the second mapping, the crisp performance score is mapped
into a fuzzy performance grade set, in which the six grades A, B, C, D, E, and F
denote gradational measurement results ranging from the perfect to the worst.
All these grades are defined by triangular fuzzy numbers of performance scores
and represent the approximation "about": A for "about 10", B for "about 8", . . .,
and F for "about 0". Depending on the measurement scale set by individual
evaluators and the performance outcome, the performance score and the fuzzy
performance grade set of each measure were calculated for December 2006 and
are shown in Table VI.

Table II. Pair-wise comparisons of five performance attributes by Evaluator I

(δij, αij)      Reliability  Responsiveness  Flexibility  Costs    Assets
Reliability     (0, 0)       (6, 0)          (6, 0)       (-4, 0)  (2, 2)
Responsiveness  (-6, 0)      (0, 0)          (4, 0)       (-6, 0)  (2, 2)
Flexibility     (-6, 0)      (-4, 0)         (0, 0)       (-6, 0)  (-2, 2)
Costs           (4, 0)       (6, 0)          (6, 0)       (0, 0)   (4, 2)
Assets          (-2, 2)      (-2, 2)         (2, 2)       (-4, 2)  (0, 0)
Calculated weights
l               0.154        0.031           0.009        0.419    0.017
m               0.239        0.048           0.015        0.650    0.048
u               0.365        0.074           0.022        0.991    0.134
Table III. Aggregation of weights of performance attributes

t(l, m, u)      Evaluator I             Evaluator II            Evaluator III           AGG                     Defuzzified weight  Normalized weight
Reliability     t(0.154, 0.239, 0.365)  t(0.043, 0.089, 0.183)  t(0.109, 0.193, 0.340)  t(0.102, 0.174, 0.296)  0.190               0.181
Responsiveness  t(0.031, 0.048, 0.074)  t(0.029, 0.059, 0.123)  t(0.033, 0.058, 0.102)  t(0.031, 0.055, 0.100)  0.062               0.059
Flexibility     t(0.009, 0.015, 0.022)  t(0.005, 0.012, 0.027)  t(0.098, 0.193, 0.376)  t(0.038, 0.073, 0.142)  0.084               0.080
Costs           t(0.419, 0.650, 0.991)  t(0.425, 0.800, 1.000)  t(0.327, 0.524, 0.836)  t(0.390, 0.658, 0.942)  0.664               0.631
Assets          t(0.017, 0.048, 0.134)  t(0.016, 0.040, 0.101)  t(0.016, 0.032, 0.062)  t(0.016, 0.040, 0.099)  0.052               0.049

Note: t(l, m, u): triangular membership function with parameters l, m, u
The performance grades from the different evaluators were aggregated, and the
aggregated results are presented at the end of that table.
. Step 3. Aggregate and defuzzify the measurement results. After the performance
grade sets and the relative weights of all the performance measures had been
calculated, the measurement results of all attributes could be aggregated through
the weighted averaging aggregation method (see the sketch following these
steps). For example, the aggregated measurement result of the assets attribute is
shown in Table VII. In Table VIII, the fuzzy performance grades of all attributes
were aggregated into those of the SC. The SC performance index, which is a crisp
number, was determined by multiplying the fuzzy performance grades by their
defined numerical meanings, adding the resultant values, and then dividing by
the sum of the fuzzy performance grades. Based on this calculation, the
performance index of the case study's SC for December 2006 was 4.050. This
number reveals that the overall SC performance was not very satisfactory with
respect to the ten-point scale.
The defuzzified measurement results for all performance attributes and
metrics are shown in Figure 5. With this information, it is easy for SC managers
to understand and benchmark the whole picture of SC performance. The
problematic aspects that weaken SC performance can be identified by tracking
the smallest index numbers. The monthly historical performance indices were
calculated and plotted together with the recently calculated index, as shown in
Figure 6, so that the progress of the SC could be monitored.
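
For concreteness, the sketch below shows how the modal (m) weights in Table II can be reproduced from Evaluator I's judgments, and how the crisp grade vectors of Table VII aggregate into the assets result. Two points are inferred from the published numbers rather than stated in the sources: the geometric-scale rule (modal weight proportional to the exponential of half the mean preference parameter) and the use of defuzzified, re-normalized weights in the aggregation. The full fuzzy bounds (l, u), which also involve the fuzziness parameters αij, follow Boender et al. (1989) and are omitted here.

import math

# Modal weights from pair-wise comparisons (Evaluator I, Table II).
# Rule inferred from the published figures: m_i is proportional to
# exp(mean_j(delta_ij) / 2), normalized to sum to one.
DELTA = {
    "Reliability":    [0, 6, 6, -4, 2],
    "Responsiveness": [-6, 0, 4, -6, 2],
    "Flexibility":    [-6, -4, 0, -6, -2],
    "Costs":          [4, 6, 6, 0, 4],
    "Assets":         [-2, -2, 2, -4, 0],
}

def modal_weights(delta):
    raw = {k: math.exp(sum(v) / len(v) / 2.0) for k, v in delta.items()}
    total = sum(raw.values())
    return {k: r / total for k, r in raw.items()}

print(modal_weights(DELTA))
# -> about 0.239, 0.048, 0.015, 0.650, 0.048: the m row of Table II.

def aggregate(fuzzy_weights, grade_sets):
    # Weighted-average aggregation of crisp grade vectors. The fuzzy
    # t(l, m, u) weights are defuzzified as (l + m + u) / 3 and then
    # re-normalized, which reproduces Tables VII and VIII.
    crisp = [sum(w) / 3.0 for w in fuzzy_weights]
    total = sum(crisp)
    return [sum((c / total) * g[k] for c, g in zip(crisp, grade_sets))
            for k in range(len(grade_sets[0]))]

# Assets attribute (Table VII): C2C, ROSCFA, and ROWC grade vectors.
weights = [(0.169, 0.447, 0.781), (0.052, 0.174, 0.602), (0.109, 0.379, 0.813)]
grades = [[0, 0, 0.077, 0.923, 0.001, 0],
          [0, 0, 0, 0.484, 0.516, 0],
          [0, 0.421, 0.346, 0.234, 0, 0]]
print(aggregate(weights, grades))
# -> about (0, 0.155, 0.158, 0.565, 0.122, 0), as in Table VII.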

7. Conclusions
Chan and Qi’s PMS offers a performance measurement method that aggregates
upwards the performance of activities and processes into the composite performance
index for a SC. The measurement model is comprehensive, systemic, process-oriented,
flexible, dynamic, participative, and capable of reducing the complexity of the PMS.
The model however, requires a tremendous number of performance metrics, which
may hamper its practicality. Since the SCOR model offers a framework of processes
and metrics as well as the standardized definitions, it is easier for practitioners to
identify and select pertinent measures. Our proposed method combines the distinct
advantages of Chan and Qi’s model with the pragmatism of the SCOR model to develop
an alternative approach that is more practical and efficient than using either one in
isolation. While SC performance can be measured at different process levels, it is more

Table IV. Pair-wise comparison matrices and calculated weights of flexibility measures

            Evaluator I              Evaluator II             Evaluator III
(δij, αij)  USCF     USCA    DSCA    USCF     USCA    DSCA    USCF     USCA    DSCA
USCF        (0, 0)   (4, 1)  (6, 0)  (0, 0)   (2, 1)  (4, 1)  (0, 0)   (2, 1)  (2, 1)
USCA        (-4, 1)  (0, 0)  (4, 0)  (-2, 1)  (0, 0)  (2, 2)  (-2, 1)  (0, 0)  (2, 1)
DSCA        (-6, 0)  (-4, 0) (0, 0)  (-4, 1)  (-2, 2) (0, 0)  (-2, 1)  (-2, 1) (0, 0)
Calculated weights
l           0.588    0.111   0.025   0.322    0.100   0.037   0.289    0.148   0.076
m           0.817    0.154   0.029   0.665    0.245   0.090   0.563    0.289   0.148
u           1.000    0.214   0.034   1.000    0.594   0.218   1.000    0.563   0.289
Table V. Aggregation of local weights of performance metrics

Attribute    Metric   Evaluator I             Evaluator II            Evaluator III           AGG local weights
Flexibility  USCF     t(0.588, 0.817, 1.000)  t(0.322, 0.665, 1.000)  t(0.289, 0.563, 1.000)  t(0.400, 0.682, 1.000)
             USCA     t(0.111, 0.154, 0.214)  t(0.100, 0.245, 0.594)  t(0.148, 0.289, 0.563)  t(0.120, 0.229, 0.457)
             DSCA     t(0.025, 0.029, 0.034)  t(0.037, 0.090, 0.218)  t(0.076, 0.148, 0.289)  t(0.046, 0.089, 0.181)
Costs        SCMC     t(0.534, 0.881, 1.000)  t(0.534, 0.881, 1.000)  t(0.534, 0.881, 1.000)  t(0.534, 0.881, 1.000)
             COGS     t(0.072, 0.119, 0.197)  t(0.072, 0.119, 0.197)  t(0.072, 0.119, 0.197)  t(0.072, 0.119, 0.197)
Assets       C2C      t(0.148, 0.563, 1.000)  t(0.024, 0.090, 0.342)  t(0.335, 0.688, 1.000)  t(0.169, 0.447, 0.781)
             ROSCFA   t(0.039, 0.148, 0.563)  t(0.065, 0.245, 0.298)  t(0.053, 0.130, 0.314)  t(0.052, 0.174, 0.602)
             ROWC     t(0.076, 0.289, 1.000)  t(0.175, 0.665, 1.000)  t(0.075, 0.181, 0.439)  t(0.109, 0.379, 0.813)
Table VI. Measurement scales and performance grades for December 2006

Metric                       POF      OFCT   USCF   USCA   DSCA   SCMC      COGS      C2C    ROSCFA  ROWC
Unit of measurement          Percent  Days   Days   Pct    Pct    Pct of rev Pct of rev Days  Percent Percent
Performance data (Dec 2006)  90.1     2.7    15.0   10     25     19.3      65.7      54     4.0     7.7

Evaluator I
Measurement scale
Perfect                      100      1.0    7.0    50     100    15        40        30     10.0    12.0
Bottom                       70       5.0    21.0   -30    0      20        80        70     2.0     4.0
Performance score            6.700    5.813  4.283  5.005  2.529  1.323     3.563     3.995  2.559   4.598
Performance grade
A                            0        0      0      0      0      0         0         0      0       0
B                            0.350    0      0      0      0      0         0         0      0       0
C                            0.650    0.906  0.141  0.503  0      0         0         0      0       0.299
D                            0        0.094  0.859  0.497  0.264  0         0.781     0.998  0.279   0.701
E                            0        0      0      0      0.736  0.662     0.219     0.002  0.721   0
F                            0        0      0      0      0      0.338     0         0      0       0

Evaluator II
Measurement scale
Perfect                      97       1.0    12.0   40     100    19.0      59.5      40     5.5     8.5
Bottom                       60       7.0    18.0   -30    0      19.5      68.0      65     3.5     5.5
Performance score            8.135    7.209  4.993  5.721  2.529  3.232     2.649     4.392  2.735   7.262
Performance grade
A                            0.068    0      0      0      0      0         0         0      0       0
B                            0.932    0.604  0      0      0      0         0         0      0       0.631
C                            0        0.396  0.497  0.860  0      0         0         0.196  0       0.369
D                            0        0      0.503  0.140  0.264  0.616     0.325     0.804  0.367   0
E                            0        0      0      0      0.736  0.384     0.675     0      0.633   0
F                            0        0      0      0      0      0         0         0      0       0

Evaluator III
Measurement scale
Perfect                      98       1.3    12.0   21     100    18        58        38     5.9     8.5
Bottom                       70       6.0    18.0   -19    0      20        70        65     3.0     5.5
Performance score            7.179    7.075  4.993  7.261  2.529  3.308     3.543     4.067  3.610   7.262
Performance grade
A                            0        0      0      0      0      0         0         0      0       0
B                            0.589    0.537  0      0.630  0      0         0         0      0       0.631
C                            0.411    0.463  0.497  0.370  0      0         0         0.034  0       0.369
D                            0        0      0.503  0      0.264  0.654     0.772     0.966  0.805   0
E                            0        0      0      0      0.736  0.346     0.228     0      0.195   0
F                            0        0      0      0      0      0         0         0      0       0

AGG
Performance grade
A                            0.023    0      0      0      0      0         0         0      0       0
B                            0.624    0.381  0      0.210  0      0         0         0      0       0.421
C                            0.354    0.588  0.378  0.578  0      0         0         0.077  0       0.346
D                            0        0.031  0.622  0.212  0.264  0.423     0.626     0.923  0.484   0.234
E                            0        0      0      0      0.736  0.464     0.374     0.001  0.516   0
F                            0        0      0      0      0      0.113     0         0      0       0
efficient for managers to measure performances and compute the composite
performance index at the system-wide level. The proposed performance model
employs SCOR level 1 metrics because they are a smaller number of more relevant,
integrated, balanced, and strategic performance measures. One distinction between
this paper and Chan and Qi (2003b) is that while Chan and Qi's measurement activities
span the SC processes, the performance measures in the proposed model span multiple
processes. By doing so, a more practical way of measuring SC performance can be
achieved without compromising theoretical rigor. SC managers who use the proposed
model can quickly monitor the progress of the SC and systematically align the metrics
with strategies. A case study is presented to demonstrate the measurement and the
application of the performance measurement method.

Table VII. Aggregation of measurement result of the assets attribute

Metric              C2C                             ROSCFA                      ROWC
Metric weight       (0.169, 0.447, 0.781)           (0.052, 0.174, 0.602)       (0.109, 0.379, 0.813)
Measurement result  (0, 0, 0.077, 0.923, 0.001, 0)  (0, 0, 0, 0.484, 0.516, 0)  (0, 0.421, 0.346, 0.234, 0, 0)
Aggregated result   (0, 0.155, 0.158, 0.565, 0.122, 0)
Performance index   4.693

Table VIII. Aggregation of measurement result of the case supply chain

Attribute           Reliability                     Responsiveness                  Flexibility                         Costs                           Assets
Attribute weight    (0.102, 0.174, 0.296)           (0.031, 0.055, 0.100)           (0.038, 0.073, 0.142)               (0.390, 0.658, 0.942)           (0.016, 0.040, 0.099)
Measurement result  (0.023, 0.624, 0.354, 0, 0, 0)  (0, 0.381, 0.588, 0.031, 0, 0)  (0, 0.053, 0.391, 0.484, 0.072, 0)  (0, 0, 0, 0.451, 0.451, 0.097)  (0, 0.155, 0.158, 0.565, 0.122, 0)
Aggregated result   (0.004, 0.147, 0.138, 0.353, 0.297, 0.061)
Performance index   4.050
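
As a cross-check on the defuzzification step described in Section 6.2, the index in Table VIII follows directly from the grade meanings A = 10, B = 8, C = 6, D = 4, E = 2, F = 0:

PI = (0.004 x 10 + 0.147 x 8 + 0.138 x 6 + 0.353 x 4 + 0.297 x 2 + 0.061 x 0) / (0.004 + 0.147 + 0.138 + 0.353 + 0.297 + 0.061) = 4.050 / 1.000 = 4.050.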

[Figure 5. Measurement results of SC performance, December 2006. Supply chain PI: 4.050. Attribute indices: reliability 7.338, responsiveness 6.699, flexibility 4.849, costs 2.708, assets 4.693. Metric indices: POF 7.338, OFCT 6.699, USCF 4.756, USCA 5.996, DSCA 2.529, SCMC 2.621, COGS 3.252, C2C 4.152, ROSCFA 2.968, ROWC 6.374.]
[Figure 6. Monthly PI trend for 2006: the performance index fluctuates between 3.939 and 5.780 over January-December, with December at 4.050.]
References
Angerhofer, B. and Angelides, M.C. (2006), “A model and a performance measurement system for
collaborative supply chains”, Decision Support Systems, Vol. 42 No. 1, pp. 283-301.
Baddeley, A. (1994), “The magical number seven: still magic after all these years?”, Psychological
Review, Vol. 101 No. 2, pp. 353-6.
Basu, R. (2001), “New criteria of performance management: a transition from enterprise to
collaborative supply chain”, Measuring Business Excellence, Vol. 5 No. 4, pp. 7-12.
Beamon, B.M. (1999), “Measuring supply chain performance”, International Journal of
Operations & Production Management, Vol. 19 No. 3, pp. 275-92.
Bechtel, C. and Jayaram, J. (1997), “Supply chain management: a strategic perspective”,
The International Journal of Logistics Management, Vol. 8 No. 1, pp. 15-34.
Boender, C.G.E., de Graan, J.G. and Lootsma, F.A. (1989), “Multi-criteria decision analysis with
fuzzy pairwise comparisons”, Fuzzy Sets and Systems, Vol. 29 No. 2, pp. 133-43.
Brewer, P.C. and Speh, T.W. (2000), “Using the balanced scorecard to measure supply chain
performance”, Journal of Business Logistics, Vol. 21 No. 1, pp. 75-93.
Chan, F.T.S. and Qi, H.J. (2002), “A fuzzy basis channel-spanning performance
measurement method for supply chain management”, Proceedings of the Institution of
Mechanical Engineers Part B: Journal of Engineering Manufacture, Vol. 216 No. 8,
pp. 1155-67.
Chan, F.T.S. and Qi, H.J. (2003a), “An innovative performance measurement method for supply
chain management”, Supply Chain Management: An International Journal, Vol. 8 No. 3,
pp. 209-23.
Chan, F.T.S. and Qi, H.J. (2003b), “Feasibility of performance measurement system for supply
chain: a process-based approach and measures”, Integrated Manufacturing Systems,
Vol. 14 No. 3, pp. 179-90.
Chan, F.T.S., Qi, H.J., Chan, H.K., Lau, H.C.W. and Ip, R.W.L. (2003), “A conceptual model of
performance measurement for supply chains”, Management Decision, Vol. 41 No. 7,
pp. 635-42.
Dasgupta, T. (2003), "Using the six-sigma metric to measure and improve the performance of a
supply chain", Total Quality Management, Vol. 14 No. 3, pp. 355-66.
Davenport, T.H. (1993), Process Innovation: Reengineering Work through Information
Technology, Harvard Business School Press, Boston, MA.
Farris, M.T. II and Hutchison, P.D. (2002), "Cash-to-cash: the new supply chain management
metric", International Journal of Physical Distribution & Logistics Management, Vol. 32
No. 4, pp. 288-98.
Fawcett, S.E. and Cooper, M.B. (1998), “Logistics performance measurement and customer
success”, Industrial Marketing Management, Vol. 27 No. 4, pp. 341-57.
Folan, P. and Browne, J. (2005), “A review of performance measurement: towards performance
management”, Computers in Industry, Vol. 56 No. 7, pp. 663-80.
Gammelgaard, B. and Vesth, H. (2004), “The SCOR model – a critical review”, Proceedings of
Operations Management as a Change Agent Conferences, INSEAD, pp. 233-41.
Griffis, S.E., Cooper, M., Goldsby, T.J. and Closs, D.J. (2004), “Performance measurement:
measure selection based upon firm goals and information reporting needs”, Journal of
Business Logistics, Vol. 25 No. 2, pp. 95-118.
Gunasekaran, A., Patel, C. and McGaughey, R.E. (2004), “A framework for supply chain
performance measurement”, International Journal of Production Economics, Vol. 87 No. 3,
pp. 333-47.
Gunasekaran, A., Patel, C. and Tirtiroglu, E. (2001), “Performance measures and metrics in a
supply chain environment”, International Journal of Operations & Production
Management, Vol. 21 Nos 1/2, pp. 71-87.
Hausman, W.H. (2004), “Supply chain performance metrics”, in Harrison, T.P., Lee, H.L. and
Neale, J.J. (Eds), The Practice of Supply Chain Management: Where Theory and Application
Converge, Springer Science & Business Media, New York, NY, pp. 61-73.
Hofman, D. (2004), “The hierarchy of supply chain metrics”, Supply Chain Management Review,
Vol. 8 No. 6, pp. 28-37.
Holmberg, S. (2000), “A system perspective on supply chain measurements”, International
Journal of Physical Distribution & Logistics Management, Vol. 30 No. 10, pp. 847-68.
Huang, S.H., Sheoran, S.K. and Keskar, H. (2005), “Computer-assisted supply chain configuration
based on supply chain operations reference (SCOR) model”, Computers & Industrial
Engineering, Vol. 48 No. 2, pp. 377-94.
Kasi, V. (2005), “Systemic assessment of SCOR for modeling supply chains”, Proceedings of the
38th Annual Hawaii International Conference on System Sciences, p. 87.
Lai, K., Ngai, E.W.T. and Cheng, T.C.E. (2002), “Measures for evaluating supply chain
performance in transport logistics”, Transportation Research Part E: Logistics and
Transportation Review, Vol. 38 No. 6, pp. 439-56.
Lambert, D.M. and Pohlen, T.L. (2001), “Supply chain metrics”, International Journal of Logistics
Management, Vol. 12 No. 1, pp. 1-19.
Lohman, C., Fortuin, L. and Wouters, M. (2004), “Designing a performance measurement system:
a case study”, European Journal of Operational Research, Vol. 156 No. 2, pp. 267-86.
Min, H. and Zhou, G. (2002), “Supply chain modeling: past, present, and future”, Computers &
Industrial Engineering, Vol. 43 Nos 1/2, pp. 231-49.
Morgan, C. (2004), “Structure, speed and salience: performance measurement in the supply
chain”, Business Process Management Journal, Vol. 10 No. 5, pp. 522-36.
Neely, A., Mills, J., Platts, K., Gregory, M. and Richards, H. (1996), "Performance measurement
system design: should process based approaches be adopted?", International Journal of
Production Economics, Vol. 46/47, pp. 423-31.
Novack, R.A. and Thomas, D.J. (2004), “The challenges of implementing the perfect order
concept”, Transportation Journal, Vol. 43 No. 1, pp. 5-16.
Park, J.H., Lee, J.K. and Yoo, J.S. (2005), "A framework for designing the balanced supply chain
scorecard", European Journal of Information Systems, Vol. 14 No. 4, pp. 335-46.
Reisinger, H., Cravens, K.S. and Tell, N. (2003), “Prioritizing performance measures within the
balanced scorecard framework”, Management International Review, Vol. 43 No. 4,
pp. 429-38.
Robson, I. (2004), “From process measurement to performance improvement”, Business Process
Management Journal, Vol. 10 No. 5, pp. 510-21.
Saaty, T.L. (1990), Multicriteria Decision Making: The Analytic Hierarchy Process, RWS
Publications, Pittsburgh, PA.
Schmitz, J. and Platts, K.W. (2004), “Supplier logistics performance measurement: indications
from a study in the automotive industry”, International Journal of Production Economics,
Vol. 89 No. 2, pp. 231-43.
Schroeder, R.G., Anderson, J.C. and Scudder, G.D. (1986), “White collar productivity
measurement”, Management Decision, Vol. 24 No. 5, pp. 3-7.
Supply-Chain Council (2006), “Supply-chain operations reference-model version 8.0”, available at:
www.supply-chain.org (accessed 16 August 2006).
van Hoek, R.I. (1998), "'Measuring the unmeasurable' – measuring and improving performance in
the supply chain", Supply Chain Management: An International Journal, Vol. 3 No. 4,
pp. 187-92.

Corresponding author
John C.S. Tang can be contacted at: tang@ait.ac.th

