
Evaluating and selecting key performance indicators: an ANP-based model


Daniela Carlucci

Daniela Carlucci is Assistant Professor, based at the Center for Value Management, DAPIT, Università degli Studi della Basilicata, Potenza, Italy.

Summary

Purpose – Selecting the most meaningful performance indicators, i.e. key performance indicators (KPIs), represents one of the major challenges that companies face in developing an effective performance measurement system (PMS). Selecting KPIs can be interpreted as a multiple criteria decision-making (MCDM) problem, involving a number of factors and related interdependencies. The purpose of this paper is to propose a model, based on the analytic network process (ANP), for guiding managers in the selection of KPIs. The model draws on the consideration that KPIs can be evaluated and selected on the basis of a set of theoretically founded criteria, and on the feedback dependencies between the criteria and performance indicators as well as among indicators.
Design/methodology/approach – Based on a review of the management literature regarding the information quality required of performance measures, the paper identifies a set of criteria for selecting KPIs. The criteria form the building blocks of the proposed ANP model. The feasibility of the model is demonstrated through its application to a real case.
Findings – The paper proposes and illustrates the practical application of an ANP-based model for selecting KPIs. The use of the ANP makes it possible to extract weights for setting the priorities among indicators, taking account of mutual dependencies among indicators and criteria. This enhances the quality of the selection process.
Originality/value – Managers often choose KPIs without an accurate approach. The paper offers a novel model for guiding managers towards the choice of KPIs through a rigorous approach, based on the ANP method. The model draws on a solid theoretical foundation and has been proven in practice.
Keywords Selection, Performance measures, Managers
Paper type Research paper

Introduction
In the current competitive scenario, organisations are required to monitor their performance on a sustained basis across several dimensions. It is therefore understandable that, today more than ever, developing an effective performance measurement system (PMS) represents a complex challenge for managers. Several problems need to be successfully addressed, such as the accuracy of data, the definition of meaningful metrics to support decision-making processes, and the measurement of an organisation's intangible aspects.
To be useful, a PMS should first of all ensure a clear definition of performance and accuracy of metrics. Closely related to this is the selection of meaningful performance indicators.
The selection of performance indicators is one of the major tasks in designing a PMS. Performance indicators are at the heart of a PMS and represent indispensable means for making performance-based management decisions about programme strategies and activities.

PAGE 66 | MEASURING BUSINESS EXCELLENCE | VOL. 14 NO. 2 2010, pp. 66-76, © Emerald Group Publishing Limited, ISSN 1368-3047 DOI 10.1108/13683041011047876
Therefore, it is reasonable that any effective PMS has to include a limited number of indicators, i.e. key performance indicators (KPIs), capable of providing an integrated and complete view of a company's performance. This is important in order to prevent information overload, to avoid confusion for their potential users, to provide a clear picture of the critical organisational competitive factors and to facilitate the overall measurement.
The dilemma is that managers often measure too much and spend a great deal of time and effort quantifying every aspect of the company. This results in the generation of a great number of indicators, and calls for the use of proper approaches to identify the most meaningful metrics.
Selecting performance indicators is a complex decision-making process, which can be interpreted as a multi-criteria decision-making (MCDM) problem. Indeed, it regards a finite set of performance indicators that can be evaluated and selected by means of some criteria and their relative weights.
This paper proposes a decision model based on the analytic network process (ANP) method (Saaty, 1996) to guide managers in the selection of the KPIs to include within a PMS. The proposed model takes into account the main characteristics of performance indicators which guarantee the quality of a company's information system. These characteristics have been defined on the basis of a review of the management literature regarding the analysis of information quality as well as of the qualitative aspects required of financial and performance information. They represent the basic criteria for selecting KPIs and make up, along with the performance indicators, the building blocks of the model.
The strength of the model is basically related to the application of the ANP for selecting and prioritising performance indicators. In particular, the application of the ANP makes it possible to handle dependencies between and among indicators and criteria and, as a result, to improve the quality of the selection process. In fact, by applying the ANP, the relative importance of the performance indicators to be selected does not merely result from a top-down process carried out by judging how well the performance indicators perform against the criteria, but also from a judgment process which takes into account the feedback relationships between the criteria and performance indicators as well as the mutual interactions among indicators.
The paper is organised as follows. Section 1 describes the research background and explains the criteria for selecting KPIs. Section 2 introduces the model and discusses the reasons why the ANP can be usefully applied to KPI selection. Section 3 presents an application of the model to a real case. Finally, in the last section, conclusions and suggestions for future research are provided.

1. Research background
1.1 Selection criteria for performance indicators
Selecting meaningful performance indicators is a critical managerial and organisational task. As outlined by Neely (1998), with the right measurements an organisation can:
■ check its position, i.e. know where it is and where it is going;
■ communicate its position from two perspectives: internally, in order to thank or spur individuals and teams, and externally, in order to cope with legal requirements or market needs;
■ confirm priorities, since by measuring it can identify how far it is from its goals; and
■ compel progress, i.e. use measurement as a means of motivating, of communicating priorities, and as a basis for reward.
Selecting indicators can be considered the final stage of the process of definition of
performance indicators. Once an initial list of performance indicators has been created, the
next step is to assess every possible indicator against a set of criteria, which guarantee
quality and meaningfulness of indicators.

A number of studies have addressed the criteria for assessing the quality and utility of
information embedded in a performance indicator.
With specific reference to the accounting information of financial reports, the Accounting Standards Board (1991) and the Financial Accounting Standards Board (1980) have examined the characteristics that make accounting information useful. In particular, the Financial Accounting Standards Board (1980) makes a clear distinction between user-specific qualities, such as understandability, and qualities inherent to accounting information. It then identifies a number of criteria to assess the quality of accounting information in terms of decision usefulness, such as relevance, comparability and reliability. Additionally, for each of them, it specifies several detailed qualities and highlights the importance that the perceived benefits derived from performance indicators disclosure must exceed the perceived costs associated with it. Moreover, the Financial Accounting Standards Board (1980) points out that, although ideally the choice of an indicator should produce information satisfying all the cited criteria, sometimes it is necessary to sacrifice some of one quality in order to gain in another.
Further suggestions about selection criteria for performance indicators have been provided in the literature and, particularly, in the management information systems literature. Holzer (1989) distinguishes data criteria, i.e. availability, accuracy, timeliness, security and costs of data collection, from measurement criteria, i.e. validity, uniqueness and evaluation. Niven (2002) argues that performance indicators have to be: linked to strategy, quantitative, built on accessible data, easily understood, counterbalanced, relevant, and commonly defined. According to USAID (1996), good performance indicators are direct, objective, adequate, quantitative where possible, disaggregated where appropriate, practical and reliable.
Ballou et al. (1998), modelling an information manufacturing system, consider four criteria of information products: timeliness, data quality, cost and value. Wang and Strong (1996), analysing data quality for consumers, argue that high-quality data should be intrinsically good, contextually appropriate for the task, clearly represented, and accessible.
The analysis of the literature highlights that many criteria, frequently interchangeable and overlapping in meaning, have been proposed.
Against this variety of contributions, it seems possible to identify the following criteria for selecting performance indicators:
■ Relevance. A relevant performance indicator provides information that makes a difference in a decision by helping users either to form predictions about the outcomes of past, present, and future events or to confirm or correct prior expectations. It deals with predictive value and/or feedback value. Feedback value refers to the quality of information that enables users to confirm or correct prior expectations, while predictive value stands for the quality of information that helps users to increase the likelihood of correctly forecasting the outcome of past or present events (Financial Accounting Standards Board, 1980). A critical feature of relevance is timeliness. In fact, the information provided by the indicator has to be available to decision makers before it loses its capacity to influence decisions.
■ Reliability. It refers to the quality of a performance indicator that assures that it is reasonably free from error and bias and faithfully represents what it purports to represent (Financial Accounting Standards Board, 1980). Reliability is therefore related to the directness or adequateness of information, i.e. the capacity of an indicator to measure as closely as possible the result it intends to measure. High directness means a lack of duplicated information provided by indicators or, in other terms, uniqueness of indicators. The Financial Accounting Standards Board (1980) describes reliability in terms of representational faithfulness, verifiability and neutrality. Representational faithfulness is the correspondence between a measure and the phenomenon that it purports to represent. One of the issues which affect representational faithfulness is the availability of the data used to build indicators. This availability, in turn, affects the costs of data collection.

Verifiability is the ability, through consensus among measurers, to ensure that information represents what it purports to represent or that the chosen measurement method has been used without error or bias.
Finally, neutrality is the absence, in reported information, of bias intended either to attain a predetermined result or to induce a particular mode of behaviour.
■ Comparability and consistency. Comparability refers to the quality of information related to a performance indicator that enables users to identify similarities and differences between two sets of economic phenomena, while consistency is the conformity of an indicator from period to period with unchanging policies and procedures. The Financial Accounting Standards Board (1980) underlines: "Information about a particular enterprise gains greatly in usefulness if it can be compared with similar information about other enterprises and with similar information about the same enterprise for some other period or some other point in time. Comparability between enterprises and consistency in the application of methods over time increase the information value of comparisons of relative economic opportunities or performance" (p. 6).
■ Understandability and representational quality. This criterion deals with aspects related to the meaning and format of the data collected to build a performance indicator. Performance indicators have to be interpretable as well as easy for users to understand. They have to be easily communicated and understood both internally and externally, or at least presented in an easily understandable and appealing way to both the target audience and users. Moreover, indicators have to be concise and unsophisticated.
The cited criteria represent the essential reference for selecting the most appropriate
indicators and building an effective PMS.

2. A network model to prioritise and select performance indicators


Focusing on the decision problem of choosing performance indicators from those defined in an initial list, this paper suggests a model, based on the ANP method, aimed at supporting managers in the selection process. In the following, the model and the methodology underpinning its construction are described in detail.

2.1 The methodology


The ANP is a generalisation of the analytic hierarchy process (AHP). The AHP is a widely used multi-criteria decision-making method based on the representation of a decision problem by a hierarchical structure in which elements are uncorrelated and uni-directionally affected by the hierarchical relationship. One of the main shortcomings of the AHP is that decision-making processes cannot always be structured as a hierarchy of the elements – generally goal, criteria and alternatives – involved in the decision problem. This can be due to interactions and feedback dependencies between elements that belong to the same and/or different levels of the hierarchy. It has long been observed that decision-making is not strictly a top-down process carried out by judging how well the alternatives of choice perform on the criteria. The criteria themselves are often dependent on the available alternatives. This calls for some kind of iteration or feedback dependency among decision elements. As a result, for some decision-making problems a more holistic approach, capable of capturing all kinds of interactions, is needed. The ANP, developed by Saaty (1996), satisfies such a request. The ANP generalises the AHP by replacing the hierarchy with a network system, which comprises all the possible elements of a problem and their connections. The network structure consists of clusters of elements, rather than elements arranged in levels. The simplest network model has a goal cluster containing the goal element, a criteria cluster containing the criteria elements and an alternatives cluster containing the alternative elements.
The ANP makes it possible to include all the factors and criteria that bear on making the best decision, as well as to capture both interaction and feedback within clusters of decision elements (inner dependence) and between clusters (outer dependence). While outer dependence implies dependence among clusters in a way that allows for feedback circuits, inner dependence is related to dependence within a cluster combined with feedback among clusters. Figure 1 shows a network system with feedback and inner and outer dependence.
The implementation of the ANP involves four main steps:
1. Step 1 – Model construction and problem structuring. In this step the decision problem is clearly stated and decomposed into a network. The elements of the network can be obtained from the opinions of decision makers through brainstorming or other appropriate methods.
2. Step 2 – Building pairwise comparison matrices of interdependent components. Like the AHP, the ANP is based on deriving ratio scale measurements founded on pairwise comparisons. In particular, on the basis of the inner and outer dependencies, the elements of each cluster and the clusters themselves are compared pairwise. As in the AHP, in a pairwise comparison decision makers compare two elements or two clusters at a time in terms of their relative importance with respect to a particular upper-level element or cluster, and express their judgments on the basis of Saaty's scale (Saaty, 1980). By comparing the decision elements of the network pairwise, relative ratings are assigned and paired comparison matrices are formed as a result. Once the pairwise comparisons have been completed, the priorities of the elements are obtained by computing eigenvalues and eigenvectors.
Inconsistencies may occur during the assessment process, so it is important to examine the consistency of judgments. In this regard, Saaty (1980) introduced the consistency ratio (CR). Decision makers' judgments are consistent if CR ≤ 0.1. If CR > 0.1, decision makers are asked to revise their judgments in order to obtain a consistent new comparison matrix.
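The eigenvector and consistency computations behind this step can be sketched in Python. This is a minimal illustration using NumPy; the `priorities_and_cr` helper and the sample matrix are illustrative, not taken from the paper.

```python
import numpy as np

# Random Index values for matrix sizes 1..10 (Saaty, 1980)
RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12,
      6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45, 10: 1.49}

def priorities_and_cr(M):
    """Return the priority vector (principal eigenvector, normalised to
    sum to 1) and the consistency ratio CR of a reciprocal pairwise
    comparison matrix M."""
    M = np.asarray(M, dtype=float)
    n = M.shape[0]
    eigvals, eigvecs = np.linalg.eig(M)
    k = np.argmax(eigvals.real)              # principal eigenvalue
    w = np.abs(eigvecs[:, k].real)
    w = w / w.sum()                          # normalised priorities
    lambda_max = eigvals[k].real
    ci = (lambda_max - n) / (n - 1)          # consistency index
    cr = ci / RI[n] if RI[n] > 0 else 0.0    # consistency ratio
    return w, cr

# Example: a 3x3 comparison matrix on Saaty's 1-9 scale
A = [[1, 3, 5],
     [1/3, 1, 2],
     [1/5, 1/2, 1]]
w, cr = priorities_and_cr(A)
print(w.round(3), round(cr, 3))  # priorities sum to 1; CR <= 0.1 means consistent
```

In practice, software such as Superdecisions performs these computations, but the eigenvector method itself is as compact as shown above.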
3. Step 3 – Supermatrix formation. The supermatrix is the tool by which global priorities are determined in a network system. The supermatrix is a partitioned matrix, where each submatrix is composed of a set of relationships between two levels in the network model.
The ANP involves three kinds of supermatrix, i.e. the unweighted supermatrix, the weighted supermatrix and the limit supermatrix, which are formed one after the other through proper computations (for more details, see Saaty, 1996).
4. Step 4 – Prioritising and selecting alternatives. The limit supermatrix provides the priorities of the alternatives. In fact, the values of the limit supermatrix stand for the overall priorities, which embrace the cumulative influence of each element of the network on every other element with which it interacts. The priority weights of the alternatives lie in the alternatives' rows of the matrix and can be read from any column.
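The passage from the weighted to the limit supermatrix can be illustrated with a small power-iteration sketch. The 4x4 matrix below is an invented toy example (two criteria, two alternatives), not the case data; convergence to identical columns assumes the matrix is irreducible and aperiodic, which the self-dependence of the alternatives guarantees here.

```python
import numpy as np

def limit_supermatrix(W, tol=1e-9, max_iter=10_000):
    """Raise a column-stochastic (weighted) supermatrix W to successive
    powers until its entries stabilise; the stable columns hold the
    global priorities of all network elements."""
    W = np.asarray(W, dtype=float)
    prev = W
    for _ in range(max_iter):
        nxt = prev @ W
        if np.abs(nxt - prev).max() < tol:
            return nxt
        prev = nxt
    return prev

# Toy weighted supermatrix: columns sum to 1
W = np.array([
    [0.0, 0.0, 0.3, 0.2],   # Cr1 influenced by the alternatives (block B)
    [0.0, 0.0, 0.2, 0.3],   # Cr2
    [0.6, 0.4, 0.3, 0.2],   # Alt1 (blocks A and D)
    [0.4, 0.6, 0.2, 0.3],   # Alt2
])
Lim = limit_supermatrix(W)
print(Lim.round(3))          # every column is identical at the limit
```

Each identical column of the limit matrix is the vector of overall priorities; the entries in the alternatives' rows give the alternative weights.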

Figure 1 Feedback network with clusters having inner and outer dependence among their elements

[Figure: four clusters (Cl.1, Cl.2, Cl.3, Cl.4) connected by arrows illustrating outer dependence, feedback circuits and inner dependence loops]

Due to its features, the ANP has been applied to a large variety of decisions (e.g. Hämäläinen and Seppäläinen, 1986; Meade and Sarkis, 1998; Partovi, 2001).
The use of the ANP for addressing topics in the performance measurement and management area is still limited. Sarkis (2003) proposes the use of the ANP for quantifying the combined effects of several factors, both tangible and intangible, on organisational performance measures. Isik et al. (2007) propose the use of the ANP for performance measurement in construction. Liao and Chang (2009) use the ANP for measuring the performance of hospitals, while Yüksel and Dağdeviren (2010) apply the ANP to the balanced scorecard.
While forcing an ANP model does not always produce better results than using the AHP, I believe that the ANP disentangles the question of the choice of performance indicators better than the AHP. In the following, I argue the reasons behind this assumption and describe the conceptual decision model for selecting performance indicators.

2.2 The ANP-based model


As argued, the ANP makes it possible to disentangle a decision problem by taking into account the feedback relationships between decision elements.
This is particularly important when we consider the problem of selecting performance indicators, as there are feedback relationships between criteria and performance indicators, as well as among indicators, to take into account.
Generally, when decision makers select performance indicators, they often do not consider the dependency of the criteria on the available performance indicators and the interdependency among indicators or, at most, consider those dependencies in an implicit way, without the possibility of addressing them through a rigorous approach. This can compromise the quality of the results of the selection.
The proposed decision model provides a more accurate and practicable approach to dealing with the selection of performance indicators.
The model, based on the ANP method, consists of two clusters, named criteria and performance indicators. The criteria included in the model are: relevance, reliability, comparability and understandability.
The network model is characterised by two interdependencies among levels: between criteria and indicators, represented by two-way arrows, and within the indicators level itself, represented by a looped arc. Figure 2 provides a basic picture of the proposed ANP-based decision model.
The network model involves a supermatrix which comprises four matrices: matrices A and B, which represent the interdependencies between the two clusters criteria and indicators; matrix D, which represents the interdependence of the performance indicators on themselves; and, finally, C, which is a zero matrix (see Figure 3).

Figure 2 The network model for selecting performance indicators

[Figure: a Criteria cluster (Cr. 1, Cr. 2, Cr. 3, ...) connected by two-way arrows to a Performance Indicators cluster (PI. 1, PI. 2, PI. 3, ...), with a looped arc on the indicators cluster]

Figure 3 Supermatrix structure of the ANP-based model for selecting performance indicators

                         Criteria   Performance indicators
Criteria                    C              B
Performance indicators      A              D
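The block structure above can be assembled programmatically. In this sketch the A, B and D blocks are filled with random column-stochastic placeholders (in the real model they would come from the pairwise comparisons), and equal cluster weighting renormalises the columns; the dimensions anticipate the four criteria and seven indicators of the case example.

```python
import numpy as np

n_cr, n_ind = 4, 7                      # four criteria, seven indicators

rng = np.random.default_rng(42)

def col_stochastic(rows, cols):
    """Random placeholder block whose columns sum to 1."""
    M = rng.random((rows, cols))
    return M / M.sum(axis=0)

A = col_stochastic(n_ind, n_cr)         # indicators w.r.t. each criterion
B = col_stochastic(n_cr, n_ind)         # criteria w.r.t. each indicator
D = col_stochastic(n_ind, n_ind)        # indicator-on-indicator influence
C = np.zeros((n_cr, n_cr))              # criteria do not influence criteria

# Unweighted supermatrix with the block layout of Figure 3
S = np.block([[C, B],
              [A, D]])

# Cluster weighting: criteria columns receive influence only from the
# indicators cluster (weight 1); indicator columns split influence equally
# between the two clusters (weight 0.5 each), so every column sums to 1.
weights = np.block([
    [np.zeros((n_cr, n_cr)), 0.5 * np.ones((n_cr, n_ind))],
    [np.ones((n_ind, n_cr)), 0.5 * np.ones((n_ind, n_ind))],
])
W = S * weights
print(np.allclose(W.sum(axis=0), 1.0))  # True: weighted supermatrix is column-stochastic
```

Raising `W` to successive powers then yields the limit supermatrix from which the priorities are read.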

In the following, a practical application of the ANP-based model is detailed in a series of steps. It demonstrates how the proposed model can be applied in practice.

3. Case example
The application of the model concerns the assessment and selection of manufacturing process performance indicators at a medium-sized manufacturer operating in the sofa industry. The model has been applied to weigh the relative importance of existing manufacturing process performance indicators and, then, to identify a set of indicators capable of providing suitable information for driving and assessing management decisions and actions.
The research methodology used for implementing the model included analyses of existing documents, interviews, and targeted focus groups. These involved managers along with researchers. In particular, the researchers acted as facilitators, where necessary, throughout the model implementation.

The ANP-based model has been implemented through the steps described in the following. In particular, the computations related to the ANP application have been carried out with the Superdecisions software.

Step 1 – Model construction and problem structuring


The first step has been to structure the problem. To this end, the network model in Figure 2 has been tailored to the practical decision problem, i.e. selecting the "best" set of performance indicators for assessing the manufacturing process of the company.
The group of performance indicators to be evaluated has been indicated by the company's top managers (see Table I).

Table I Performance indicators

Performance indicators (alternatives)   Description
[Ind.1]   Actual leather consumption – estimated leather consumption (daily)
[Ind.2]   Employees' expenses/turnover (monthly)
[Ind.3]   Number of claims occurred during the process (daily)
[Ind.4]   Number of supplies' claims (daily)
[Ind.5]   Number of shifts of the delivery dates of orders/planned orders (daily)
[Ind.6]   Working minutes per employee/estimated minutes (daily)
[Ind.7]   Working minutes per department/estimated minutes (daily)
Step 2 – Pairwise comparison matrices of interdependent components
Eliciting preferences for the various elements of the network model has required a series of pairwise comparisons, based on Saaty's (1980) scale, in which managers compared two components at a time with respect to a "control" criterion. In this illustrative example, the relative importance of the indicators with respect to each specific criterion has first been determined: a pairwise comparison matrix has been built for each of the four criteria in order to calculate the impacts of each of the indicators. A sample question used in this comparison process is: with respect to "relevance", which indicator is preferred, "Number of supplies' claims" or "Working minutes per employee/estimated minutes"?
Moreover, seven pairwise comparison matrices have been determined for the calculation of the relative impacts of the criteria on a specific indicator. A sample question used in this comparison process is: which is a more pronounced or prevalent characteristic of "Actual leather consumption – estimated leather consumption", its "relevance" or its "comparability"? Therefore, to fully describe these two-way relationships, 11 pairwise comparison matrices have been required. In addition, pairwise comparisons have been performed to calculate the influence of some indicators on other indicators. To capture these dependencies within the indicators level, two pairwise comparison matrices have been used. A sample question used in this comparison process is: with respect to "Number of shifts of the delivery dates of orders/planned orders", which is the more influential indicator, "Number of claims occurred during the process" or "Number of supplies' claims"?
Once the pairwise comparisons have been completed, the local priorities have been computed. The consistency of each pairwise comparison matrix has also been checked. In cases of inconsistency, the top managers have been invited to revise their judgments. Table II shows, as an example, the pairwise comparison matrix of the performance indicators under the relevance criterion. The last column of the matrix shows the weighted priorities for this matrix.

Step 3 – Supermatrix formation
The unweighted supermatrix, the weighted supermatrix and the limit supermatrix have been determined. In particular, the unweighted supermatrix has been formed from the weighted priorities of all 13 pairwise comparison matrices: matrices A and B of the network model have been built from the 11 matrices related to the relationships between the two clusters, i.e. criteria and performance indicators, while matrix D has been built from the two matrices related to the interdependence among the indicators. The weighted supermatrix has been built considering the clusters to be equally important. By raising the weighted supermatrix to an arbitrarily large power, the interdependent relationships converge or, in other terms, "long term" stable weighted values are achieved. These values appear in the limit supermatrix, which is column-stochastic and represents the final eigenvector. Table III shows the limit supermatrix.

Step 4 – Prioritising and selecting alternatives
The main results of the ANP application are the overall priorities of the indicators, obtained by synthesising the priorities of the indicators from the entire network. The priorities for all the
Table II Performance indicators pairwise comparison matrix for the relevance criterion and eigenvector

Relevance  [Ind.1]  [Ind.2]  [Ind.3]  [Ind.4]  [Ind.5]  [Ind.6]  [Ind.7]  Priority vector
[Ind.1]    1.000    1.000    3.000    2.000    3.000    0.500    0.500    0.1827
[Ind.2]    1.000    1.000    1.000    1.000    3.000    1.000    1.000    0.1549
[Ind.3]    0.333    1.000    1.000    2.000    1.000    1.000    1.000    0.1321
[Ind.4]    0.500    1.000    0.500    1.000    1.000    1.000    1.000    0.1123
[Ind.5]    0.333    0.333    1.000    1.000    1.000    0.500    0.500    0.0808
[Ind.6]    2.000    1.000    1.000    1.000    2.000    1.000    1.000    0.1685
[Ind.7]    2.000    1.000    1.000    1.000    2.000    1.000    1.000    0.1685
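As a cross-check, the priority vector of Table II can be recomputed from the published comparison matrix with the principal-eigenvector method. This is a sketch; minor rounding differences from the published figures are to be expected, but the ranking of the indicators is reproduced.

```python
import numpy as np

# Pairwise comparison matrix of the seven indicators under "relevance"
# (Table II), entries on Saaty's 1-9 scale
R = np.array([
    [1.0, 1.0, 3.0, 2.0, 3.0, 0.5, 0.5],
    [1.0, 1.0, 1.0, 1.0, 3.0, 1.0, 1.0],
    [1/3, 1.0, 1.0, 2.0, 1.0, 1.0, 1.0],
    [0.5, 1.0, 0.5, 1.0, 1.0, 1.0, 1.0],
    [1/3, 1/3, 1.0, 1.0, 1.0, 0.5, 0.5],
    [2.0, 1.0, 1.0, 1.0, 2.0, 1.0, 1.0],
    [2.0, 1.0, 1.0, 1.0, 2.0, 1.0, 1.0],
])

# Principal eigenvector, normalised to sum to 1
eigvals, eigvecs = np.linalg.eig(R)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w = w / w.sum()
print(w.round(3))   # [Ind.1] ranks first and [Ind.5] last, as in Table II
```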

Table III Limit supermatrix

       Cr1    Cr2    Cr3    Cr4    Ind1   Ind2   Ind3   Ind4   Ind5   Ind6   Ind7
Cr1    0.067  0.067  0.067  0.067  0.067  0.067  0.067  0.067  0.067  0.067  0.067
Cr2    0.129  0.129  0.129  0.129  0.129  0.129  0.129  0.129  0.129  0.129  0.129
Cr3    0.159  0.159  0.159  0.159  0.159  0.159  0.159  0.159  0.159  0.159  0.159
Cr4    0.051  0.051  0.051  0.051  0.051  0.051  0.051  0.051  0.051  0.051  0.051
Ind1   0.063  0.063  0.063  0.063  0.063  0.063  0.063  0.063  0.063  0.063  0.063
Ind2   0.127  0.127  0.127  0.127  0.127  0.127  0.127  0.127  0.127  0.127  0.127
Ind3   0.101  0.101  0.101  0.101  0.101  0.101  0.101  0.101  0.101  0.101  0.101
Ind4   0.058  0.058  0.058  0.058  0.058  0.058  0.058  0.058  0.058  0.058  0.058
Ind5   0.050  0.050  0.050  0.050  0.050  0.050  0.050  0.050  0.050  0.050  0.050
Ind6   0.132  0.132  0.132  0.132  0.132  0.132  0.132  0.132  0.132  0.132  0.132
Ind7   0.061  0.061  0.061  0.061  0.061  0.061  0.061  0.061  0.061  0.061  0.061

indicators can be read from any column of the limit supermatrix. Table IV shows the ranking of the performance indicators identified to measure the effectiveness and efficiency of the production process.
On the basis of the priorities, the following key performance indicators have been selected: "Working minutes per employee/estimated minutes", "Employees' expenses/turnover", and "Number of claims occurred during the process".
The selected indicators represent the most basic and important dimensions that managers have judged valuable as a basis for tracking future progress and assessing the current baseline performance of the process. Obviously, the appropriateness of this set of performance indicators over time will depend upon how the manufacturing process evolves and how internal and external stakeholders' information needs change.

4. Conclusions
This paper proposes a model, based on the ANP method, for guiding managers in the selection of KPIs. The ANP-based model takes into consideration that the priorities of performance indicators depend both on a set of important criteria and on the feedback relationships between the criteria and performance indicators, as well as among indicators. This is an issue that decision makers often consider only implicitly when choosing performance indicators, without the possibility of addressing it through a rigorous approach.

The criteria introduced in the model focus on the quality and usefulness requirements of the information embedded in performance indicators. However, they are not necessarily exhaustive.
In future case studies, additional criteria or a variation of the dependencies contemplated in the proposed network model could be considered, in order to better fit the needs of the organisation that applies the model and/or the specific features of the decision problem.

Table IV Performance indicators ranking

Performance indicators   Priorities   Ranking
[Ind.1]                  0.0631       4
[Ind.2]                  0.1267       2
[Ind.3]                  0.1013       3
[Ind.4]                  0.0579       6
[Ind.5]                  0.0500       7
[Ind.6]                  0.1323       1
[Ind.7]                  0.0613       5

In this regard, it is important to preserve the parsimony of the decision model.
Even if, from an operative perspective, the use of software and group decision support
systems may lower the barriers to implementing ANP, it is always important to take into
account the time ANP requires to obtain results, the effort involved in making the
judgments, and the relevance and accuracy of the results. In particular, the application of
the model has revealed that limiting the number of comparison questions is essential in
order to preserve the decision makers' holistic view of the decision problem.
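The judgmental burden can be estimated in advance: comparing n elements pairwise requires n(n-1)/2 questions per control element. A rough sketch, with hypothetical cluster sizes, shows how quickly the count grows:

```python
def pairwise_questions(n: int) -> int:
    """Number of pairwise comparisons needed among n elements."""
    return n * (n - 1) // 2

# Hypothetical network: 4 criteria compared once with respect to the goal,
# plus 7 indicators compared with respect to each of the 4 criteria.
# Real ANP networks add further comparison sets for every feedback
# dependency, so this is a lower bound on the questioning effort.
criteria, indicators = 4, 7
total = pairwise_questions(criteria) + criteria * pairwise_questions(indicators)
print(total)  # 6 + 4 * 21 = 90 questions
```

Ninety questions is already near the practical limit for a single interview session, which supports the point above about keeping the model parsimonious.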
From such a pragmatic perspective, the ANP method may appear complicated and
time-consuming. However, ANP is a valuable management tool: it allows for participative
inputs from multiple evaluators when setting the priorities of a panel of performance
indicators, bringing diverse belief systems together in a consistent and organized way.
This is particularly valuable when selecting performance indicators across all the
performance dimensions of a company, comparing indicators from many departments.
Regarding the results of the ANP application, it is important to stress that, as with any
decision model, the final values should be critically analysed. When managers base their
judgments on priorities with which they have direct experience, the results of the ANP are
particularly reliable.
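One standard check in this critical analysis, assuming Saaty's consistency procedure is adopted, is the consistency ratio (CR) of each pairwise comparison matrix; the judgment matrix below is illustrative only:

```python
import numpy as np

# Saaty's random consistency indices (RI) for matrices of order 1..9.
RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12,
      6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45}

def consistency_ratio(A: np.ndarray) -> float:
    """CR = CI / RI, where CI = (lambda_max - n) / (n - 1)."""
    n = A.shape[0]
    lambda_max = max(np.linalg.eigvals(A).real)  # Perron root of A
    ci = (lambda_max - n) / (n - 1)
    return ci / RI[n]

# Illustrative reciprocal judgment matrix for three indicators.
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])
cr = consistency_ratio(A)
# Judgments are conventionally accepted when CR < 0.10.
```

Matrices failing the CR < 0.10 threshold signal judgments that should be revisited with the evaluators before the priorities are trusted.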
Additionally, to support continuous improvement, a periodic reconsideration of the selected
indicators should be performed.
A further suggestion for future studies concerns the fuzziness of decision makers'
judgments. The proposed ANP-based model ignores such fuzziness; a further development
of the research should therefore improve the model by introducing fuzzy set concepts.
A fuzzy extension would make it possible to address the subjectivity, and particularly the
fuzziness, of judgments. Finally, the model is open to future extension and development,
especially on the basis of the results of a more widespread application to further cases.
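Such a fuzzy extension might, for instance, replace crisp Saaty-scale judgments with triangular fuzzy numbers; the minimal sketch below assumes one common convention from the fuzzy-AHP/ANP literature (the scale and the centroid defuzzification are choices, not the only options):

```python
# A crisp judgment "about 3" becomes the triangular fuzzy number (2, 3, 4),
# whose spread captures the vagueness of the evaluator's statement.
def defuzzify(l: float, m: float, u: float) -> float:
    """Centroid defuzzification of a triangular fuzzy number (l, m, u)."""
    return (l + m + u) / 3.0

judgment = (2.0, 3.0, 4.0)    # fuzzy counterpart of the crisp judgment 3
crisp = defuzzify(*judgment)  # symmetric spread, so the centroid is 3.0
```

After defuzzification, the resulting crisp matrices can feed the same supermatrix machinery described above, so the fuzzy layer extends rather than replaces the proposed model.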
In conclusion, this paper suggests a framework that can be used for either the selection or
justification of a set of performance indicators and provides an exploratory evaluation of an
analytical approach for managerial decision-making in relation to performance
measurement.


About the author


Daniela Carlucci is a Researcher at the Center for Value Management – LIEG at the
University of Basilicata, Italy. Her research, teaching and consulting focus on knowledge
management, knowledge assets and intellectual capital assessment and management,
innovation, business performance measurement and management, decision making in
organizations, and decision support methods. She is actively involved in research and
consultancy activities and has worked on research projects involving national
organisations and institutions. She is also regularly engaged in teaching activities in
public and private institutions. She has authored and co-authored several publications,
including book chapters, articles, research reports and white papers, on a range of
research topics particularly embracing knowledge assets and intellectual capital
management. Daniela is a regular speaker at national and international conferences.
She can be contacted at: daniela.carlucci@unibas.it
