Introduction
In the current competitive scenario, organisations are required to monitor their performance
on a sustained basis across several dimensions. It is, therefore, understandable that, today
more than ever, the development of an effective performance measurement system (PMS)
represents a complex challenge for managers. Several problems, such as the accuracy of
data, the definition of meaningful metrics to support decision-making processes, and the
measurement of an organisation's intangible aspects, need to be successfully addressed.
To be useful, a PMS should first of all ensure a clear definition of performance and accuracy
of metrics. Closely related to this is the selection of meaningful performance indicators.
The selection of performance indicators is one of the major tasks in designing a PMS.
Performance indicators are at the heart of a PMS and represent indispensable means for
making performance-based management decisions about program strategies and
activities.
PAGE 66 | MEASURING BUSINESS EXCELLENCE | VOL. 14 NO. 2 2010, pp. 66-76, © Emerald Group Publishing Limited, ISSN 1368-3047, DOI 10.1108/13683041011047876
Therefore, it is reasonable that any effective PMS has to include a limited number of
indicators, i.e. key performance indicators (KPIs), capable of providing an integrated and
complete view of a company's performance. This is important in order to prevent information
overload, to avoid confusing potential users, to provide a clear picture of the critical
organisational competitive factors, and to facilitate the overall measurement.
The dilemma is that managers often measure too much, spending a great deal of time and
effort quantifying every aspect of the company. This results in the generation of a great
number of indicators and calls for proper approaches to identify the most meaningful
metrics.
Selecting performance indicators is a complex decision making process, which can be
interpreted as a multi-criteria decision-making (MCDM) problem. Indeed, it regards a finite
set of performance indicators that can be evaluated and selected by means of some criteria
and their relative weight.
This paper proposes a decision model based on the analytic network process (ANP) method
(Saaty, 1996) to drive managers in the selection of KPIs to include within a PMS. The
proposed model takes into account the main characteristics of performance indicators
which guarantee the quality of a company’s information system. These characteristics have
been defined on the basis of a review of the management literature regarding the analysis of
the information quality as well as of the qualitative aspects required by financial and
performance information. They represent the basic criteria for selecting KPIs and make up,
along with performance indicators, the building blocks of the model.
The strength of the model is basically related to the application of the ANP for selecting and
prioritising performance indicators. In particular, applying the ANP makes it possible to
address dependencies between and among indicators and criteria and, as a result, to
improve the quality of the selection process. In fact, by applying the ANP, the relative
importance of the performance indicators to select does not merely result from a top-down
process carried
out by judging how well the performance indicators perform against the criteria, but also
from a judgment process which takes into account the feedback relationships between the
criteria and performance indicators as well as the mutual interactions among indicators.
The paper is organised as follows. Section 1 describes the research background and
explains the criteria for selecting KPIs. Section 2 introduces the model and discusses the
reasons why ANP can be usefully applied to KPI selection. Section 3 presents an application of the
model to a real case. Finally, in the last section, conclusions and suggestions for future
research are provided.
1. Research background
1.1 Selection criteria for performance indicators
Selecting meaningful performance indicators is a critical managerial and organisational
task. As outlined by Neely (1998), by right measurements an organisation can:
■ check its position, i.e. establish where it is and where it is going;
■ communicate its position according to two perspectives: internal, i.e. the organisation
communicates internally in order to thank or spur individuals and teams, and external,
i.e. the organisation communicates externally in order to cope with legal requirements or
market needs;
■ confirm priorities, since by measuring it can identify how far it is from its goal; and
■ compel progress, i.e. it can use measurement as a means of motivating and
communicating priorities, and as a basis for reward.
Selecting indicators can be considered the final stage of the process of definition of
performance indicators. Once an initial list of performance indicators has been created, the
next step is to assess every possible indicator against a set of criteria, which guarantee
quality and meaningfulness of indicators.
A number of studies have addressed the criteria for assessing the quality and utility of
information embedded in a performance indicator.
With specific reference to the accounting information of financial reports, the Accounting
Standards Board (1991) and the Financial Accounting Standards Board (1980) have examined
the characteristics that make accounting information useful. In particular, Financial
Accounting Standards Board (1980) makes a clear distinction between user-specific
qualities, such as understandability, and qualities inherent to accounting information. It then
identifies a number of criteria to assess the quality of the accounting information in terms of
decision usefulness, such as relevance, comparability and reliability. Additionally, for each
one of them, it specifies several detailed qualities and highlights the importance that the
perceived benefits derived from performance indicators disclosure must exceed the
perceived costs associated with it. Moreover Financial Accounting Standards Board (1980)
outlines that, although ideally the choice of an indicator should produce information
satisfying all cited criteria, sometimes it is necessary to sacrifice some of one quality in order
to gain in another.
Further suggestions about the selection criteria for performance indicators have been
provided in the literature and, particularly, in management information system literature.
Holzer (1989) has distinguished data criteria, i.e. availability, accuracy, timeliness, security,
costs of data collection, from measurement criteria, i.e. validity, uniqueness, evaluation.
Niven (2002) has argued that performance indicators have to be: linked to strategy,
quantitative, built on accessible data, easily understood, counterbalanced, relevant, and
commonly defined. According to USAID (1996), good performance indicators are direct,
objective, adequate, quantitative where possible, disaggregated where appropriate,
practical, and reliable.
Ballou et al. (1998), modelling an information manufacturing system, have considered four
criteria of information products: timeliness, data quality, cost and value. Wang and Strong
(1996), analysing data quality for consumers, have maintained that high-quality data should
be intrinsically good, contextually appropriate for the task, clearly represented, and
accessible.
The analysis of the literature highlights that many criteria have been proposed, frequently
interchangeable and overlapping in meaning.
Against the variety of contributions, it seems possible to identify the following criteria for
selecting performance indicators:
■ Relevance. A relevant performance indicator provides information to make a difference in
a decision by helping users to either form predictions about the outcomes of past,
present, and future events or to confirm or correct prior expectations. It deals with the
predictive value and/or feedback value. Feedback value refers to the quality of
information that enables users to confirm or correct prior expectations, while predictive
value stands for the quality of information that helps users to increase the likelihood of
correctly forecasting the outcome of past or present events (Financial Accounting
Standards Board, 1980). A critical feature of relevance is timeliness: the information
provided by the indicator has to be available to decision makers before it loses its capacity
to influence decisions;
■ Reliability. It refers to the quality of a performance indicator that assures that it is
reasonably free from error and bias and faithfully represents what it purports to represent
(Financial Accounting Standards Board, 1980). Reliability is also related to the
directness or adequateness of information, i.e. the capacity of an indicator to measure as
closely as possible the result it intends to measure. High directness means lack of
duplicated information provided by indicators or, in other terms, uniqueness of indicators.
Financial Accounting Standards Board (1980) describes reliability in terms of
Representational Faithfulness, Verifiability, Neutrality. Representational Faithfulness is
the correspondence between a measure and the phenomenon that it purports to
represent. One of the issues, which affect representational faithfulness is the availability of
data used to build indicators. This availability, in turn, affects the costs of data collection.
Verifiability is the ability through consensus among measurers to ensure that
information represents what it purports to represent or that the chosen measurement
method has been used without any error or bias.
Finally, neutrality is the absence in reported information of bias intended to either attain
a predetermined result or to induce a particular mode of behaviour.
■ Comparability and consistency. Comparability refers to the quality of information related
to a performance indicator that enables users to identify similarities and differences
between two sets of economic phenomena, while consistency is the conformity of an
indicator from period to period with unchanging policies and procedures. Financial
Accounting Standards Board (1980) underlines: ‘‘Information about a particular
enterprise gains greatly in usefulness if it can be compared with similar information
about other enterprises and with similar information about the same enterprise for some
other period or some other point in time. Comparability between enterprises and
consistency in the application of methods over time increase the information value of
comparisons of relative economic opportunities or performance’’ (p. 6).
■ Understandability and representational quality. This criterion deals with aspects related to
the meaning and format of data collected to build a performance indicator. Performance
indicators have to be interpretable as well as easy to understand for users. They have to
be easily communicated and understood both internally and externally, or at least
presented in an easily understandable and appealing way to both the target audience
and users. Moreover, indicators have to be concise and unsophisticated.
The cited criteria represent the essential reference for selecting the most appropriate
indicators and building an effective PMS.
circuits, inner dependence is related to the dependence within a cluster combined with
feedback among clusters. Figure 1 shows a network system with feedback and inner and
outer dependency.
The implementation of the ANP involves four main steps:
1. Step 1 – Model construction and problem structuring. In this step the decision problem is
clearly stated and decomposed into a network. The elements of the network can be
elicited from decision makers through brainstorming or other appropriate methods.
2. Step 2 – Building pairwise comparisons matrices of interdependent component levels.
Similar to AHP, the ANP is based on deriving ratio scale measurements founded on
pairwise comparisons. In particular, on the basis of the inner and the outer dependencies,
elements of each cluster and clusters themselves are compared pairwise. Like in AHP, in
pairwise comparison decision makers compare two elements or two clusters at a time in
terms of their relative importance with respect to their particular upper-level element or
cluster, and express their judgments on the basis of Saaty's scale (Saaty, 1980). By
comparing the decision elements of the network pairwise, relative ratings are assigned and
paired comparison matrices are formed as a result. Once the pairwise comparisons
have been completed, the priorities of the elements are obtained by computing
eigenvalues and eigenvectors.
During the assessment process, inconsistencies may occur; it is therefore important to
examine the consistency of judgments. In this regard, Saaty (1980) introduced the
consistency ratio (CR). Decision makers' judgments are considered consistent if CR ≤ 0.1.
In case CR > 0.1, decision makers are asked to revise their judgments in order to obtain a
consistent new comparison matrix.
3. Step 3 – Supermatrix formation. The supermatrix is the tool for determining global
priorities in a network system. It is a partitioned matrix, where each submatrix is composed
of a set of relationships between two levels in the network model.
The ANP involves three kinds of supermatrix, i.e. unweighted supermatrix, weighted
supermatrix and limit supermatrix, which are respectively formed one after the other,
through proper computations (for more details, see Saaty, 1996).
4. Step 4 – Prioritising and selecting alternatives. The limit supermatrix provides priorities of
alternatives. In fact, values of the limit supermatrix stand for the overall priorities, which
embrace the cumulative influence of each element of the network on every other element,
with which it interacts. The priority weights of the alternatives are found in the alternatives'
column of the matrix.
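The steps above can be sketched numerically. The snippet below derives a priority vector from a pairwise comparison matrix, computes Saaty's consistency ratio (Step 2), and raises a column-stochastic weighted supermatrix to successive powers until it converges to the limit supermatrix (Steps 3-4). The matrices `A` and `W` are illustrative placeholders, not data from this paper; this is a sketch under those assumptions, not a definitive implementation.

```python
import numpy as np

# Random-index (RI) values for matrix orders 1..10 (Saaty, 1980)
RI = [0.0, 0.0, 0.58, 0.90, 1.12, 1.24, 1.32, 1.41, 1.45, 1.49]

def priorities_and_cr(A):
    """Principal eigenvector (normalised) and consistency ratio of a
    reciprocal pairwise comparison matrix."""
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)            # index of lambda_max
    w = np.abs(eigvecs[:, k].real)
    w = w / w.sum()                        # local priority vector
    n = A.shape[0]
    ci = (eigvals[k].real - n) / (n - 1)   # consistency index
    cr = ci / RI[n - 1] if RI[n - 1] > 0 else 0.0
    return w, cr

def limit_supermatrix(W, tol=1e-10, max_iter=100):
    """Square the column-stochastic weighted supermatrix until it stabilises;
    any column of the result holds the overall priorities."""
    M = W.copy()
    for _ in range(max_iter):
        M2 = M @ M
        if np.max(np.abs(M2 - M)) < tol:
            return M2
        M = M2
    return M

# Illustrative judgments on Saaty's 1-9 scale (reciprocal matrix)
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])
w, cr = priorities_and_cr(A)   # w sums to 1; CR <= 0.1 is acceptable

# Illustrative 4x4 column-stochastic weighted supermatrix
W = np.array([[0.0, 0.5, 0.3, 0.2],
              [0.4, 0.0, 0.4, 0.3],
              [0.3, 0.3, 0.0, 0.5],
              [0.3, 0.2, 0.3, 0.0]])
L = limit_supermatrix(W)       # all columns of L coincide at convergence
```

In practice these computations are handled by ANP software, but the eigenvector and power-iteration steps are all the mathematics involved.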
Figure 1 Feedback network with clusters having inner and outer dependence among their
elements (diagram: clusters CI.1-CI.4 connected by outer dependence and feedback arcs,
with inner dependence within clusters)
Due to its features, the ANP has been applied to a large variety of decisions (e.g.
Hämäläinen and Seppäläinen, 1986; Meade and Sarkis, 1998; Partovi, 2001).
The use of ANP for addressing topics in the performance measurement and management
area is still limited. Sarkis (2003) proposes the use of ANP for quantifying the combined
effects of several factors, both tangible and intangible, on organisational performance
measures. Isik et al. (2007) propose the use of ANP for performance measurement in
construction. Liao and Chang (2009) use ANP for measuring the performance of hospitals,
while Yüksel and Dağdeviren (2010) apply ANP to the Balanced Scorecard.
While forcing an ANP model does not always produce better results than using AHP, I
believe that ANP addresses the question of the choice of performance indicators better than
AHP. In the following, I argue the reasons behind this assumption and describe the
conceptual decision model for selecting performance
indicators.
Figure 2 Network structure of the ANP-based model for selecting performance indicators
(diagram: a Criteria cluster, Cr. 1, Cr. 2, Cr. 3, ..., linked to a Performance Indicators cluster,
PI. 1, PI. 2, PI. 3, ...)
Figure 3 Supermatrix structure of the ANP-based model for selecting performance
indicators

                         Criteria   Performance indicators
Criteria                    C               B
Performance indicators      A               D
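The partitioned structure in Figure 3 can be sketched by assembling the four blocks into a single matrix. The block sizes and values below are hypothetical placeholders (2 criteria, 3 indicators); the columns are normalised so that the assembled supermatrix is column-stochastic, as the ANP requires.

```python
import numpy as np

n_cr, n_ind = 2, 3                  # hypothetical: 2 criteria, 3 indicators

C = np.zeros((n_cr, n_cr))          # criteria -> criteria (no inner dependence here)
B = np.array([[0.6, 0.3, 0.5],      # impact of indicators on criteria
              [0.4, 0.7, 0.5]])
A = np.array([[0.5, 0.4],           # impact of criteria on indicators
              [0.3, 0.2],
              [0.2, 0.4]])
D = np.zeros((n_ind, n_ind))        # indicator -> indicator dependence (none here)

# Partitioned supermatrix, rows/columns ordered [criteria..., indicators...]
S = np.block([[C, B],
              [A, D]])
```

A zero C or D block simply means no inner dependence within that cluster; a model with indicator-to-indicator feedback would place nonzero entries in D and renormalise the affected columns.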
3. Case example
The application of the model concerns the assessment and selection of manufacturing
process performance indicators at a medium-sized manufacturer operating in the sofa industry.
The model has been applied to weigh the relative importance of existing manufacturing
process performance indicators and, then, to identify a set of indicators capable of
providing suitable information for driving and assessing management decisions and
actions.
The research methodology used for implementing the model included analyses of existing
documents, interviews, and targeted focus groups, involving managers along with
researchers. In particular, the researchers acted as facilitators, where necessary, throughout
the model implementation.
The ANP-based model has been implemented through the steps described in the following.
The computations related to the ANP application have been carried out with the
SuperDecisions software.
Step 2 – Pairwise comparisons matrices of interdependent component levels
Eliciting the preferences of the various elements of the network model has required a
series of pairwise comparisons, based on Saaty's scale (1980), in which managers have
compared two components at a time with respect to a ''control'' criterion. Within this
illustrative example, the relative importance of the indicators with respect to each specific
criterion has been determined first. A pairwise comparison matrix has been built for
each of the four criteria to calculate the impacts of each of the indicators. A sample
question used in this comparison process is: with respect to ''relevance'', which indicator is
preferred, ''Number of supplies' claims'' or ''Working minutes per employee/estimated
minutes''?
Moreover, seven pairwise comparison matrices have been determined for the calculation of
the relative impacts of the criteria on a specific indicator. A sample question used in this
comparison process is: which is a more pronounced or prevalent characteristic of ''Actual
leather consumptions – estimated leather consumptions'', its ''relevance'' or its
''comparability''? Therefore, to fully describe these two-way relationships, 11 pairwise
comparison matrices have been required. In addition, pairwise comparisons have been
performed to calculate the influence of some indicators on other indicators. To capture
these dependencies within the indicators level, two pairwise comparison matrices
have been used. A sample question used in this comparison process is: with respect to
''Number of shifts of the delivery dates of orders/planned orders'', which is the most influential
indicator, ''Number of claims occurred during the process'' or ''Number of supplies' claims''?
Once the pairwise comparisons have been completed, the local priorities have been
computed and the consistency of each pairwise comparison matrix has been checked.
In case of inconsistency, top managers have been invited to revise their judgments. Table II
shows an example of a performance indicators pairwise comparison matrix under the
relevance criterion. The last column of the matrix shows the weighted priorities for this matrix.
Table II Performance indicators pairwise comparison matrix for the relevance criterion and
eigenvector

Relevance [Ind.1] [Ind.2] [Ind.3] [Ind.4] [Ind.5] [Ind.6] [Ind.7] Priorities vector
Table III Limit supermatrix
Cr1 Cr2 Cr3 Cr4 Ind1 Ind2 Ind3 Ind4 Ind5 Ind6 Ind7
Cr1 0.067 0.067 0.067 0.067 0.067 0.067 0.067 0.067 0.067 0.067 0.067
Cr2 0.129 0.129 0.129 0.129 0.129 0.129 0.129 0.129 0.129 0.129 0.129
Cr3 0.159 0.159 0.159 0.159 0.159 0.159 0.159 0.159 0.159 0.159 0.159
Cr4 0.051 0.051 0.051 0.051 0.051 0.051 0.051 0.051 0.051 0.051 0.051
W = Ind1 0.063 0.063 0.063 0.063 0.063 0.063 0.063 0.063 0.063 0.063 0.063
Ind2 0.127 0.127 0.127 0.127 0.127 0.127 0.127 0.127 0.127 0.127 0.127
Ind3 0.101 0.101 0.101 0.101 0.101 0.101 0.101 0.101 0.101 0.101 0.101
Ind4 0.058 0.058 0.058 0.058 0.058 0.058 0.058 0.058 0.058 0.058 0.058
Ind5 0.050 0.050 0.050 0.050 0.050 0.050 0.050 0.050 0.050 0.050 0.050
Ind6 0.132 0.132 0.132 0.132 0.132 0.132 0.132 0.132 0.132 0.132 0.132
Ind7 0.061 0.061 0.061 0.061 0.061 0.061 0.061 0.061 0.061 0.061 0.061
The overall priorities of the indicators can be read from any column of the limit supermatrix.
Table IV shows the ranking of the performance indicators identified to measure the
effectiveness and efficiency of the production process.
On the basis of the priorities, the following key performance indicators have been selected:
‘‘Working minutes per employee/estimated minutes’’, ‘‘Employees’ expenses/turnover’’, and
‘‘Number of claims occurred during the process’’.
The selected indicators represent the most basic and important dimensions that managers
have judged valuable as a basis for tracking future progress and assessing the current
baseline performance of the process. Obviously, the appropriateness of this set of
performance indicators will depend, over time, on how the manufacturing process
evolves and on how internal and external stakeholders' information needs change.
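As an illustration, the ranking reported in Table IV can be reproduced directly from the limit-supermatrix priorities of Table III. This is a minimal sketch using the rounded values from the paper; the `Ind1`..`Ind7` labels follow the tables.

```python
# Indicator priorities, read from any column of the limit supermatrix (Table III)
priorities = {
    "Ind1": 0.063, "Ind2": 0.127, "Ind3": 0.101, "Ind4": 0.058,
    "Ind5": 0.050, "Ind6": 0.132, "Ind7": 0.061,
}

# Sort descending by priority to obtain the ranking of Table IV
ranking = sorted(priorities, key=priorities.get, reverse=True)
print(ranking)  # ['Ind6', 'Ind2', 'Ind3', 'Ind1', 'Ind7', 'Ind4', 'Ind5']
```

The three top-ranked indicators plausibly correspond to the three KPIs retained in the case example, although the paper does not spell out the label-to-name mapping.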
4. Conclusions
This paper proposes a model, based on the ANP method, for driving managers in the
selection of KPIs. The ANP-based model takes into consideration that the priorities of
performance indicators depend both on a set of important criteria and on feedback
relationships between the criteria and performance indicators as well as among
indicators. Decision makers often consider this issue implicitly when choosing
performance indicators, without being able to address it through a rigorous approach.
The criteria introduced in the model focus on the requirements of quality and usefulness of
the information embedded in performance indicators. However, they are not necessarily
exhaustive. Future case studies could consider additional criteria or a variation of the
dependencies contemplated in the proposed network model, in order to better fit the needs
of the organisation applying the model and/or the specific features of the decision problem.
Table IV Ranking of performance indicators

Indicator  Priorities  Rank
[Ind.1]    0.0631      4
[Ind.2]    0.1267      2
[Ind.3]    0.1013      3
[Ind.4]    0.0579      6
[Ind.5]    0.0500      7
[Ind.6]    0.1323      1
[Ind.7]    0.0613      5
In this regard, it is important to underline the need to preserve the parsimony of the decision
model. Even if, from an operative perspective, the use of software and group decision
support systems may lower the barriers to implementing ANP, it is always important to take
into account the time ANP takes to obtain results, the effort involved in making the
judgments, and the relevance as well as the accuracy of the results. In particular, the
application of the model has revealed that it is particularly important to limit the number of
comparison questions in order to preserve decision makers' ability to maintain a holistic
view of the decision problem.
From such a pragmatic perspective, the ANP method may appear complicated and
time-consuming. However, ANP is a valuable management tool, as it allows for participative
inputs from multiple evaluators in setting the priorities for a panel of performance indicators,
bringing diverse belief systems together in a consistent and organised way. This is
particularly valuable when selecting performance indicators for all the performance
dimensions of a company, by comparing indicators from many departments.
Regarding the results of the ANP application, it seems important to stress that, as for any
decision model, the final values should be critically analysed. Obviously, when managers
make decisions based on priorities and importance judgments grounded in their own
experience, the results of ANP are particularly reliable.
Additionally, to support continuous improvement, a periodic reconsideration of the selected
indicators should be performed.
A further suggestion for future studies concerns the fuzziness of decision makers'
judgments. The proposed ANP-based model ignores this fuzziness; a further development
of the research should therefore improve the model by introducing the concept of fuzzy
sets. A fuzzy extension would make it possible to address the issue of subjectivity,
particularly the fuzziness of judgment. Finally, the model is open to future extension and
development, especially on the basis of the results of a more widespread use across
several cases.
In conclusion, this paper suggests a framework that can be used for either the selection or
justification of a set of performance indicators and provides an exploratory evaluation of an
analytical approach for managerial decision-making in relation to performance
measurement.
References
Accounting Standards Board (1991), Qualitative Characteristics of Financial Information, ASB, London.
Ballou, D., Wang, R., Pazer, H. and Tayi, G.K. (1998), ‘‘Modeling information manufacturing systems to
determine information product quality’’, Management Science, Vol. 44 No. 4, pp. 462-84.
Financial Accounting Standards Board (1980), ''Qualitative characteristics of accounting information'',
Statement of Financial Accounting Concepts No. 2.
Hämäläinen, R.P. and Seppäläinen, T.O. (1986), ‘‘The analytic network process in energy policy
planning’’, Socio-Economic Planning Sciences, Vol. 20 No. 6, pp. 399-405.
Holzer, M. (1989), ‘‘Public service: present problems, future prospects’’, International Journal of Public
Administration, Vol. 12 No. 4, pp. 585-93.
Isik, Z., Dikmen, I. and Birgonul, M.T. (2007), Using Analytic Network Process (ANP) for Performance
Measurement in Construction, RICS, London.
Liao, S.K. and Chang, K.L. (2009), ‘‘Measure performance of hospitals using analytic network process
(ANP)’’, International Journal of Business Performance and Supply Chain Modelling, Vol. 1 Nos 2/3,
pp. 129-43.
Meade, L. and Sarkis, J. (1998), ‘‘Strategic analysis of logistics and supply chain management systems
using the analytical network process’’, Transportation Research Part E: Logistics and Transportation
Review, Vol. 34 No. 3, pp. 201-15.
Neely, A.D. (1998), Performance Measurement: Why, What and How, Economist Books, London.
Niven, P.R. (2002), Balanced Scorecard Step-by-Step: Maximizing Performance and Maintaining
Results, John Wiley & Sons, Hoboken, NJ.
Partovi, F.Y. (2001), ‘‘An analytic model to quantify strategic service vision’’, International Journal of
Service Industry Management, Vol. 12 No. 5, pp. 476-99.
Saaty, T.L. (1980), The Analytic Hierarchy Process, McGraw-Hill Company, New York, NY.
Saaty, T.L. (1996), Decision Making with Dependence and Feedback: The Analytic Network Process,
RWS Publications, Pittsburgh, PA.
Wang, R.Y. and Strong, D.M. (1996), ‘‘Beyond accuracy: what data quality means to data consumers’’,
Journal of Management Information Systems, Vol. 12 No. 4, pp. 5-34.
USAID Center for Development Information and Evaluation (1996), ''Selecting performance indicators'',
Performance Monitoring and Evaluation TIPS, No. 6, pp. 1-4.
Yüksel, I. and Dağdeviren, M. (2010), ‘‘Using the fuzzy analytic network process (ANP) for Balanced
Scorecard (BSC): a case study for a manufacturing firm’’, Expert Systems with Applications, Vol. 37
No. 2, pp. 1270-8.