
Performance management
practices in public sector
organizations
Impact on performance
Frank H.M. Verbeeten
Rotterdam School of Management, Erasmus University,
Rotterdam, The Netherlands

Received 11 January 2006
Revised 7 July 2006, 2 January 2007
Accepted 17 April 2007

Abstract
Purpose – The aim of this study is to investigate whether performance management practices affect performance in public sector organizations.
Design/methodology/approach – Theoretically, the research project is based on economic as well as behavioral theories. The study distinguishes between quantitative performance (efficiency, quantities produced) and qualitative performance (accuracy, quality, innovation and employee morale) and uses survey data from 93 public sector organizations in the Netherlands.
Findings – The research shows that the definition of clear and measurable goals is positively associated with quantity performance as well as quality performance. In addition, the use of incentives is positively associated with quantity performance yet not related to quality performance. Finally, the effects of performance management practices in public sector organizations are affected by institutional factors. The results suggest that the behavioral effects of performance management practices are as important as the economic effects in public sector organizations.
Research limitations/implications – All limitations of survey research apply. The survey is based on public sector organizations in The Netherlands; findings may not be transferable to other countries.
Practical implications – The joint introduction of performance management practices may provide an opportunity to increase quantity performance yet may have no impact on quality performance.
Originality/value – The paper responds to previous calls in the literature to use quantitative research methods to generalize findings from previous case studies. Also, the paper empirically tests the impact of performance management practices on performance, an area that has attracted scarce research attention.
Keywords Performance management, Public sector organizations, Control systems, Behaviour, The Netherlands
Paper type Research paper

Previous versions of this paper have benefited from comments from Arnick Boons, Ken Cavalluzzo, Martine Cools, Henri Dekker, Frank Hartmann, David Naranjo Gil, Joan Luft, Paolo Perego and Jerold Zimmerman as well as two anonymous reviewers. In addition, comments from workshop participants at the Free University Amsterdam and the EAA Annual Congress Workshops are appreciated. Part of this research was executed while the author was working at Nyenrode University. Assistance for this project was provided by Deloitte.

Introduction
Recent efforts to reinvent the government and improve performance in public sector
organizations (also known as New Public Management, NPM) have focused on
performance management practices (Hood, 1995, 1991). Performance management (PM) practices include specifying which goals to achieve, allocating decision rights,
and measuring and evaluating performance (Heinrich, 2002; Ittner and Larcker, 2001;
Otley, 1999; Kravchuk and Schack, 1996; Brickley et al., 1995). A fundamental question
is whether PM is applicable in the public sector, and whether it will actually improve
public sector performance (Heinrich, 2002; Van Thiel and Leeuw, 2002; Ittner and
Larcker, 2001; Hyndman and Eden, 2000; Brignall and Modell, 2000; Hood, 1995).
Theory suggests that clear goals and measurable results are necessary in order to
prevent the diffusion of organizational energy (Rangan, 2004; Kaplan, 2001). By
quantifying goals and measuring whether they are achieved, organizations reduce and
eliminate ambiguity and confusion about objectives, and gain coherence and focus in
pursuit of their mission. In addition, the use of incentives may increase performance
(Bonner and Sprinkle, 2002); however, measuring and rewarding only part of the
performance may have undesirable effects on overall performance (Burgess and Ratto,
2003; De Bruijn, 2002; Van Thiel and Leeuw, 2002; Smith, 1995; Tirole, 1994; Gray and
Jenkins, 1993). Empirical, large-scale evidence on the impact of various PM-practices
on performance in public sector organizations is limited (Van Helden, 2005; Merchant
et al., 2003; Heinrich, 2002).
This paper investigates whether the use of PM-practices in public sector
organizations affects the performance of those organizations. This study makes four
contributions to the PM-literature. First, this is one of a rather limited number of
large-scale, empirical studies[1] that investigates the impact of PM-practices in public
sector organizations (Van Helden, 2005). As such, it extends results from previous
small sample studies, case studies and literature reviews (Newberry and Pallot, 2004;
De Bruijn, 2002; Van Thiel and Leeuw, 2002; Smith, 1993). Second, most of the
literature in this area has relied on the relation between one factor (e.g. defining clear goals, Brignall and Modell, 2000; or introducing incentives, Newberry and Pallot, 2004) and performance. Using Partial Least Squares, the effects of several variables are
investigated in this paper. Third, this study distinguishes between quantitative and
qualitative performance in order to investigate the effects of PM-practices. This
distinction is important since it relates to the trade-off between (short-term) efficiency
and (long-term) effectiveness. Finally, this study responds to previous calls in the
literature to integrate several research disciplines (Van Helden, 2005; Merchant et al.,
2003). The use of economic and behavioural theories provides the opportunity to investigate which PM-elements help explain public sector organizations' performance.
The remainder of this paper is organized in four sections. The next section provides
an overview of literature on management control systems in public sector
organizations. Section three discusses the methodology, including sample
characteristics and variables. The fourth section provides the results of the study,
while the last section offers the discussion, conclusions and limitations.
Literature review and hypotheses
Management control systems in the public sector
Management control systems focus on the implementation of the strategies and the
attainment of the goals of the organization; they attempt to assure that the organization
designs effective programs and implements them efficiently (Anthony and Young, 2003, p. 4; Merchant and Van der Stede, 2003, p. 14; Merchant, 1998, p. 2).
Organizations can use three forms of control: output (results) controls, action
(behavioural) controls, and clan (personnel/cultural) controls (Merchant and Van der
Stede, 2003; Merchant, 1998; Ouchi, 1979). Output controls involve evaluating and
rewarding individuals (and sometimes groups of individuals) for generating good
results, or punishing them for poor results. Action controls try to ascertain that
employees perform (or do not perform) specific actions known to be beneficial (or
harmful) to the organization. Finally, clan controls help to ensure that employees will
control their own behaviours or that the employees will control each other's
behaviours. Amongst others, clan controls clarify expectations; they help ensure that
each employee understands what the organization wants. It should be noted that these
forms of control are not necessarily discrete, and elements of all three forms may be
found in any one organization.
Previous literature (Pollitt, 2006; Johnsen, 2005; Merchant and Van der Stede, 2003;
Modell, 2000; Merchant, 1998; Mol, 1996; Gupta et al., 1994; Hofstede, 1981; Ouchi, 1979,
1980) suggests that output controls are most useful when objectives are unambiguous,
outputs are measurable, activities are repetitive and the effects of management
interventions are known. If these conditions are not met, reliance on other forms of
control is necessary in order to efficiently and effectively achieve the goals of the
organization. In that case, the performance measures may still be useful for
exploratory purposes (expert control, trial-and-error control, etc; see Hofstede, 1981;
Burchell et al., 1980); however, excessive reliance on performance measures for
incentive purposes may result in dysfunctional effects.
From a control point of view, the most difficult case in public sector organizations is
when objectives are (excessively) ambiguous. To some extent, objectives are
ambiguous in most public sector organizations; yet when ambiguity of objectives is
excessive, it is likely that the incidence of political control also increases (i.e. the
organization is more likely to depend on power structures, negotiation processes,
particular interests and conflicting values; cf. Vakkuri and Meklin, 2006; Hofstede,
1981, p. 198). There may be clear political benefits[2] (and therefore incentives) to
formulate ambiguous objectives: ambiguous, unclear objectives provide politicians the
opportunity to react to changes in the political environment (Hofstede, 1981, p. 194).
Ambiguous goals may also prevent budget cuts in pet projects: if the organization
does not invest in efficiency and transparency, it is not clear to other politicians
whether money can be saved (De Bruijn, 2002, p. 583). Finally, ambiguous goals
decrease the extent to which politicians can be held accountable for problems and
disasters: a multitude of goals provides the opportunity to compensate
underperformance in one area (for example, exceeding cost budgets) by referring to
overperformance in another area (for example, reduction in the number of patients
waiting; Bevan and Hood, 2006; Vakkuri and Meklin, 2006; Johnsen, 2005).
Performance management practices in the public sector
Historically, public sector organizations have relied on action controls (rules and
procedures) to control organizations; however, the past decade has witnessed various
changes in management control of public sector organizations, including a shift
towards output controls (Ter Bogt, 2003; Hyndman and Eden, 2001, 2000; Lapsley,
1999; Guthrie et al., 1999; Hood, 1998, 1995; Gray and Jenkins, 1995). Most Western countries have promoted several initiatives to stimulate the use of performance management (PM) practices in public sector organizations (including central
government, local governments and other public sector organizations such as
hospitals, education institutions, police forces, etc; see Van Helden, 2005; Cavalluzzo
and Ittner, 2004; Van Thiel and Leeuw, 2002; Hood, 1995, 1991). PM can be defined as
the process of defining goals, selecting strategies to achieve those goals, allocating
decision rights, and measuring and rewarding performance (Heinrich, 2002; Ittner and
Larcker, 2001; Otley, 1999; Kravchuk and Schack, 1996; Brickley et al., 1995).
PM-practices can serve several political as well as managerial purposes[3] (Propper
and Wilson, 2003; De Bruijn, 2002; Kloot and Martin, 2000). These purposes have an
impact on each other, yet the focus of this study is on the managerial purposes. First of
all, the definition of clear missions, objectives and targets helps each employee
understand what the organization wants and provides focus in operations
(communication purpose; see Rangan, 2004; Merchant and Van der Stede, 2003;
Kaplan, 2001; Hyndman and Eden, 2000). Second, by measuring performance with
regard to the objectives and targets, politicians and public managers should be able to
tell the public for what purposes their money is used (transparency/accountability
purpose). Third, public sector organizations may use performance measurement to
learn and improve performance (learning purpose). The transparency created by
measuring performance may indicate where the organization excels, and where
improvements are necessary. Fourth, performance measurement systems may provide
the basis for compensation of public government officials (appraising purpose). A
careful specification and monitoring of performance, along with a set of incentives and
sanctions, can be used to ensure that the public sector managers continue to act in
society's interest (Newberry and Pallot, 2004).
The characteristics of public sector organizations may also result in unintended
managerial side effects of PM-practices (Vakkuri and Meklin, 2006; Hood and Peters,
2004; De Bruijn, 2002; Van Thiel and Leeuw, 2002; Pallot, 2001, 1999; Smith, 1995).
These unintended side effects may include additional internal bureaucracy, a lack of
innovation, a reduction of system or process responsibility, tunnel vision,
sub-optimization and gaming of performance measures, and measure-fixation. As a
result, the performance of organizations may decrease rather than increase due to the
use of PM-practices.
Public sector performance
For the purposes of this research project, it is useful to distinguish between
quantitative and qualitative performance. Quantitative performance refers to
quantitative aspects of performance such as the use of resources (budget depletion,
or economy), number of outputs produced, and efficiency (Carter et al., 1992, p. 36).
Although the last aspect relates output to input, it can still be considered as a
quantitative performance measure since it usually contains no (or limited) indication of
quality. Qualitative performance refers to both operational quality (for example,
accuracy; Carter et al., 1992) as well as strategic capacity (for example, innovation
and long-term effectiveness; Newberry and Pallot, 2004; Kaplan, 2001; Kloot and
Martin, 2000).
Carter et al. (1992, p. 177) argue that measures for operational quality can be critical by suggesting that, more often, "the real indicators of quality will be that the ordinary, routine things are being done properly and on time". Similarly, Kaplan (2001) as well as Atkinson et al. (1997) argues that measures for strategic capacity are
required in order to maintain (or improve) the long-term effectiveness of the
organization. Employee/user/customer/stakeholder surveys, professional judgment or
peer-group reviews may be needed in order to measure these aspects of quality.
Although these measures introduce an element of subjectivity, performance
measurement systems that ignore it are likely to lack balance (Kaplan, 2001). In
other words, it is important that PM-practices are constructed to identify (long-term)
quality differences in output and that this information becomes an integral part of the
performance measurement system (Henley et al., 1992, p. 287).
A general complaint (Carter et al., 1992, p. 177) as well as an empirical finding
(Pollitt, 2006, 1986; Pollanen, 2005; De Lancer Julnes and Holzer, 2001; Kloot and
Martin, 2000) is that quantitative performance measures tend to ignore the quality
aspect of service delivery since qualitative performance is much more difficult to
measure. In other words, "the easy to measure drives out the more difficult" (Gray and Jenkins, 1995, p. 89). The result of such a focus on quantity performance (measures) may be that the increase in quantitative performance (efficiency, number of units produced) has been achieved at the expense of quality performance (operational quality, Henley et al., 1992, p. 287; Carter et al., 1992, pp. 40-41; or innovation and long-term effectiveness; see Newberry and Pallot, 2006, 2005, 2004; Newberry, 2002).
The results from a meta-review by Jenkins et al. (1998) indicate that, in general, this
may be the case: they find that there is a positive effect of PM-practices on performance
quantity (i.e. the number of units produced or assembled) yet not necessarily on
performance quality (i.e. supervisor rating, accuracy). The previous review of literature
suggests that it is important to distinguish between the effects of PM-initiatives on
quantity and quality performance.
Motivation theories
The focus of this paper is on the managerial purposes for using PM-practices in the
public sector. Two strands of literature (behavioural and economic theory) will be explored in this paper, since reliance on a single research discipline may result in incomplete and, in some cases, incorrect conclusions (Merchant et al., 2003). Goal
setting theory provides a behavioral explanation for the hypothesized relation between
clear and measurable goals and performance (see Locke and Latham, 2002, 1990, for a
review)[4]. The focus of agency theory is on determining the optimal incentive contract;
agency theory may provide an economic explanation for the impact of PM-practices on
performance (see Lambert, 2001; Baiman, 1990; and Eisenhardt, 1989, 1985 for a
review). Table I provides a summary of similarities, as well as differences between goal
setting theory and agency theory.
Goal setting theory
The underlying premise of goal setting theory is that conscious goals affect what is
achieved (Latham, 2004). Goal setting theory asserts that people with specific and challenging goals perform better than those with vague goals (such as "do your best"), specific easy goals or no goals at all. Thus, goal setting theory assumes that there is a
direct relation between the definition of specific and measurable goals and
performance: if managers know what they are aiming for, they are motivated to exert more effort, which increases performance (Locke and Latham, 2002, 1990). Challenging goals are usually implemented in terms of specific levels of output to be attained (Locke and Latham, 1990). Review articles (Locke and Latham, 2002; Rodgers and Hunter, 1991) suggest a positive relation between clear and measurable goals and performance. However, Locke and Latham (1990) acknowledge that task difficulty (which is associated with difficult to measure goals) reduces the impact of clear and measurable goals on performance.

Table I. Main characteristics of goal setting theory and agency theory

Similarities (goal setting theory and agency theory):
• Clear and measurable goals are required
• Incentives are positively related to performance
• Decentralization and performance measurement systems are important for high performance
• Complexity complicates the achievement of high performance

Differences:
• Main driver of performance. Goal setting theory: goals. Agency theory: incentives.
• Goals. Goal setting theory: clear and measurable goals motivate managers to achieve these goals. Agency theory: clear and measurable goals are necessary in order to decentralize decision rights, develop adequate performance measures and provide adequate incentives.
• Decentralization. Goal setting theory: may block the implementation of adequate actions in order to achieve the goals. Agency theory: part of an optimal configuration in order to mitigate control problems.
• Performance measurement system. Goal setting theory: provides feedback to managers in order to improve performance. Agency theory: provides outcome information as the basis for contracts, respectively provides indications of managerial behaviour.
• Incentives. Goal setting theory: may provide meaning to the goals provided. Agency theory: motivate managers.
• Complexity. Goal setting theory: complexity (task complexity) reduces the relation between clear and measurable goals and performance. Agency theory: multiple goals and stakeholders affect the applicability of high-powered incentive systems.
• Important characteristics of public sector employees. Goal setting theory: ability and commitment to goals affect performance. Agency theory: intrinsic motivation, self-selection and professionalism affect the marginal costs of incentives.

Empirical evidence from the public sector provides somewhat mixed results. For
example, Hyndman and Eden (2001) interviewed the chief executives of nine agencies
in Northern Ireland. All their respondents indicated that a focus on mission, objectives, targets and performance measures had improved the performance of the agency for all
stakeholders. Respondents also indicated that the poor implementation of the system
(i.e. systems that value efficiency over quality and/or short-term over long-term results,
p. 593), as well as the tendency to overemphasize numbers at the expense of judgment,
could jeopardize performance.

Cavalluzzo and Ittner (2004) examine some of the factors that influence the
development, use, and perceived benefits of results-oriented performance measures in
US government agencies. Their findings indicate that metric difficulties (due to,
amongst others, ambiguous or difficult to capture goals) are negatively associated with
perceived current and future benefits from the US government's performance
measurement initiatives. This suggests that US agency managers believe that the use
of PM-practices may not improve performance in situations where ambiguity of
objectives is high.
Summarizing, although goal setting theory suggests that clear and measurable
goals should be positively associated with performance, empirical evidence on this
issue in the public sector is inconclusive. The main problem appears to be associated
with the impact of clear and measurable goals on long-term, qualitative performance;
this results in the following hypothesis:
H1. There is (a) a positive relation between clear and measurable goals and
quantity performance, and (b) no relation between clear and measurable goals
and quality performance.
Agency theory
An agency theory relationship exists when one or more individuals (called principals)
hire others (called agents) in order to delegate responsibilities to them (Baiman, 1990).
The rights and responsibilities of the principals and agents are specified in their
mutually agreed-upon employment relationship; agency theory attempts to describe
that relationship using the metaphor of a contract. Agency theory assumes that
individuals are fully rational and have well-defined preferences and beliefs that
conform to the axioms of expected utility theory (Bonner and Sprinkle, 2002).
Furthermore, each individual is presumed to be motivated solely by self-interest
(Baiman, 1990). This self-interest can be described in a utility function that contains
two arguments: wealth (monetary and non-monetary incentives) and leisure.
Incentives can be defined as extrinsic motivators where pay, bonuses or career
perspectives are linked to performance (Bonner et al., 2000). Individuals are presumed
to have preferences for increases in wealth and increases in leisure. Agency theory
therefore posits that individuals will shirk (i.e. exert no effort) on a task unless it
somehow contributes to their own economic wellbeing (Bonner and Sprinkle, 2002).
Incentives that are not contingent on performance generally do not satisfy this
criterion; thus, agency theory suggests that incentives play a fundamental role in
motivation and the control of performance because individuals have utility for
increases in wealth[5].
The public sector has some specific characteristics that make the design of incentive
schemes quite complex (Pollitt, 2006; Anthony and Young, 2003; Burgess and Ratto,
2003; Dixit, 2002, 1997; Dewatripont et al., 1999; Kravchuk and Schack, 1996; Gupta
et al., 1994; Tirole, 1994; Hofstede, 1981). First of all, public sector organizations
generally have multiple stakeholders (principals) with multiple goals. Delivering
incentives is complex in these circumstances; each principal will offer a positive
coefficient on the element(s) (s)he is interested in, and negative coefficients on the other
dimensions (Dixit, 1997). The aggregate marginal incentive coefficient for each
outcome is decreasing with the number of principals (Burgess and Ratto, 2003); as a
result, incentives are weak (Dixit, 1997).


Second, several dimensions of performance are hard to measure. As a result, only those dimensions of performance that are easy to measure may be included in the incentive scheme, which may have undesirable effects on overall
performance (Burgess and Ratto, 2003; Tirole, 1994). Third, agency theory assumes
that an agent gets utility solely from the incentives, and disutility from the effort he
exerts on behalf of the principal. In reality, agents in public sector organizations may
get utility from some aspects of the task itself. Agents in the public sector may be
motivated by the idealistic or ethical purpose served by the agency (intrinsic
motivation), which may result in a match of workers and public sector organizations.
This matching of workers and public sector organizations may also mean that more risk-averse employees have opted for public sector organizations.
Finally, professionalism may motivate agents in public sector organizations. As a
result, organizations can use so-called low-powered incentives (i.e. incentives are not
based on performance) if the goals of the workers are aligned with those of the
organization (Dixit, 2002). On the other hand, organizations will have a higher
marginal cost of effort if these goals diverge. In addition, public sector professionals
may decouple PM-information and their daily jobs (i.e. not use performance
information for managerial and/or evaluation purposes; see Hoque, 2005; Johnsen,
2005; Hoque et al., 2004).
Anecdotal and empirical evidence on the effectiveness of incentives in public sector
organizations provides mixed results (Bevan and Hood, 2006; Newberry and Pallot,
2005, 2004; Newberry, 2002; Van Thiel and Leeuw, 2002; De Bruijn, 2002). Bevan and
Hood (2006) investigate the use of performance management of public services in
England. They indicate that English health care managers were exposed to greater risk
of being sacked when measured indices (including star rating indicators) were used
and individual hospitals were named and shamed. Although there have been
dramatic improvements in reported performance in the English health care sector,
Bevan and Hood (2006) argue that it is impossible to determine whether these
improvements are genuine or whether they are offset by gaming and/or a reduction in
performance dimensions that are not measured.
Similarly, Dranove et al. (2003) investigate the use of health care report cards (public disclosure of patient health outcomes at the level of the individual physician, hospital, or both) on US health care providers. Their results indicate that the use of performance measures and (implicit) incentives (naming and shaming) results in both selection behavior by providers and improved matching of patients with hospitals. However, while overall this led to higher levels of resource use (more surgery, higher efficiency), it also resulted in worse health outcomes, especially for
sicker patients. Dranove et al. (2003) conclude that, at least in the short run, these report
cards decreased patient and social welfare (i.e. quality aspects of performance).
Newberry (2002) and Newberry and Pallot (2004, 2005, 2006) investigate the
consequences of New Zealand's public sector financial management system for the
New Zealand central government departments. Their results indicate that while
accounting-based financial management incentives have resulted in efficiency gains,
they may not have improved effectiveness over the long run. Newberry and Pallot
(2004) indicate that government departments have experienced resource erosion,
leading to a loss of capability to deliver services over the longer-term, which may
ultimately contribute to a loss of morale and difficulties in attracting and retaining

staff. Similarly, Gray and Jenkins (1993, p. 65) note that the introduction of the
Financial Management Initiative in the British public sector has advanced
cost-awareness yet has also resulted in a shift of attention away from the long-term
interests of policy delivery to the meeting of short-term targets.
Finally, Shirley and Xu (2001) investigate the empirical effects of performance
contracts (contracts signed between the government and state enterprise managers) in
China. Performance contracts have been widely used to reform state-owned enterprises
in China. The results by Shirley and Xu (2001) indicate that, on average, performance
contracts in China did not improve performance and may have made it worse.
However, the effects of China's performance contracts were not uniformly bad; they improved productivity (i.e. quantity performance) in slightly more than half of the participants. The average effect was negative because of the large losses associated with poorly designed contracts. Successful contracts featured sensible targets, stronger incentives, longer terms, managerial bonds, and were in more competitive industries (Shirley and Xu, 2001).
Summarizing, although agency theory suggests that incentives should be positively
associated with performance, empirical evidence on this issue in the public sector is
inconclusive. Overall, the use of incentives appears associated with an increase in
quantity performance yet a decrease in quality performance. This results in the
following hypothesis:
H2. There is (a) a positive relation between incentives and quantity performance,
and (b) no relation between incentives and quality performance.
Control variables
A number of control variables that may affect the relation between clear and
measurable goals, incentives and performance are recognized[6].
Decentralization of decision rights. Goal setting theory suggests that goals are less
likely to be achieved if there are situational constraints blocking performance than if
there are no such blocks (Locke and Latham, 1990). One of these situational
constraints may be the lack of decision rights; decision rights refer to the authority
and responsibility for making particular decisions (Kaplan and Atkinson, 1998, p. 288).
Agency theory indicates that organizations should balance the benefits from
decentralizing decision rights to lower levels in the organization against control losses
from increased information asymmetries (i.e. potential for gaming; Bushman et al.,
2000). Based on these assertions, decentralization is included as a control variable.
Performance measurement system. Goal setting theory suggests that feedback (i.e.
information from the performance measurement system) may provide the opportunity
to set more demanding goals in the future, provides information regarding better task
strategies, and is a basis for recognition and reward (Locke and Latham, 2002). Agency
theory recognizes that the performance measurement system provides the input for
decision-making, as well as for incentives (Abernethy et al., 2004). Based on these
claims, the performance measurement system is included as a control variable.
Size. Size may be an important determinant of PM-practices as well as performance
(see Chenhall, 2003, for a review). Economic theory argues that PM-practices will be more
effective in small organizations (Dewatripont et al., 1999); however, an increase in size may positively affect the adoption and use of PM-practices (De Lancer Julnes and Holzer,
2001; Rogers, 1995). Based on these assertions, size is included as a control variable.
Sector. Sector may affect the relation between PM-practices and performance in (at
least) two ways. First of all, there may be (legal) sector characteristics that may influence
the ability to specify clear and measurable goals as well as the design and implementation
of incentive schemes[7]. In addition, a sector proxy may capture task characteristics
(complexity, standardization) that may affect the applicability and effects of PM.
Methodology
Sample
The study is based on survey data collected from managers in public sector
organizations, located in the Netherlands. Surveys allow contact with otherwise
inaccessible respondents at relatively low costs (Cooper and Schindler, 2003). Students
of an executive education program have been used to contact survey participants; this
procedure results in high response rates, yet sample selection is not random (see also
next section). Since the student teams have met with the respondent, potential
problems with respondent identification (i.e. a junior employee rather than a manager
responds to the survey) and respondent understanding of the questions (students have
been instructed to clarify questions if asked, without giving guidance to obtain
desirable results) are mitigated.
The survey was pre-tested by four experts, either (previous) managers of non-profit
organizations or survey experts. A cover letter, explaining the goal, desired respondent
and other issues, accompanied the survey. In addition, respondents could indicate if
they wanted to receive the results from this study to provide an incentive to cooperate.
A total of 93 useable surveys[8] were returned. The sample is biased towards
government organizations (central government, municipalities), while education and
health care organizations are underrepresented in the sample; therefore, the results
cannot be generalized towards all public sector organizations. Similar to Cavalluzzo
and Ittner (2004), the manager of an individual unit, program, project or operation is
the basis of analysis. Respondents include general directors and general managers (48
percent), financial directors and controllers (26 percent), department heads (19 percent)
and others (7 percent, including head of communication, head of education and
learning, and coordinating staff advisor). All respondents had at least some managerial responsibilities. On average, respondents had been working in their organization for nine years (median: six years) and had been employed in their current function for five years (median: three years). These data suggest that respondents are well informed
on PM practices of their organization.
Measurement of variables
The questionnaire elicited information on the clarity and measurability of the goals of
the organization, the incentive system and the performance of the organization as well
as a number of control variables. Each variable has been measured using multiple
items. Existing measures have been used where possible, sometimes with modification
to fit the present research context. The items used to measure each variable, including
the mean and standard deviations of the responses, their factor loadings and
Cronbach's alpha, are listed in the Appendix. All validity and reliability checks
provide satisfactory results[9].

Dependent variable
Performance was measured by a well-established instrument developed by Van de Ven
and Ferry (1980), designed specifically to assess performance in public sector
organizations. This instrument has also been used by Dunk and Lysons (1997) and
Williams et al. (1990). Each of the items in the instrument is measured on a five-point
Likert scale, ranging from 1, far below average, to 5, far above average. The
performance dimensions include:
(1) quantity or amount of work produced;
(2) quality or accuracy of work produced;
(3) number of innovations or new ideas by the unit;
(4) reputation of work excellence;
(5) attainment of unit production or service goals;
(6) efficiency of unit operations; and
(7) morale of unit personnel[10].
Based upon the theoretical distinction between quantity and quality performance as well
as the results of the factor analysis and scale reliability test, the measure for performance
is split in two sub-measures: one for quantitative performance (QUANTPERF, capturing
performance dimensions (1), (5) and (6)), and one for qualitative performance
(QUALPERF, capturing performance dimensions (2), (3), (4) and (7)).
The results for performance indicate that, with the exception of efficiency, respondents tend to overestimate the performance of their organization. That is, the sample performance should be average (i.e. 3.0) if the sample is representative and respondents' perceptions reflect reality. The overestimation of performance is consistent with other studies that have used similar versions of this instrument. For example, Dunk and Lysons (1997) present an average overall performance of 21.577 with six items in their instrument (an average of 3.60 per item); Williams et al. (1990) also present average scores above 3.58. The average score of 3.36 for the sample in this study is consistent with the overestimation of performance in previous studies.
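The construction of these sub-measures is not shown in the paper itself; as a rough illustration under assumed names, the sketch below shows one way the two sub-scales and a standard internal-consistency check (Cronbach's alpha, as reported in the Appendix) could be computed from the seven Likert items. The file name and column names are hypothetical.

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a multi-item scale (rows = respondents, columns = items)."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()   # sum of the item variances
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the summed scale
    return (k / (k - 1)) * (1 - item_var / total_var)

# Hypothetical column names for the seven Van de Ven and Ferry (1980) items,
# each scored 1 ("far below average") to 5 ("far above average").
quant_items = ["quantity_of_work", "goal_attainment", "efficiency"]        # dimensions (1), (5), (6)
qual_items = ["quality_accuracy", "innovations", "reputation", "morale"]   # dimensions (2), (3), (4), (7)

responses = pd.read_csv("survey_responses.csv")  # assumed: one row per responding manager

# Sub-scale scores as the mean of the underlying items (one common way to build such measures).
responses["QUANTPERF"] = responses[quant_items].mean(axis=1)
responses["QUALPERF"] = responses[qual_items].mean(axis=1)

print("alpha QUANTPERF:", round(cronbach_alpha(responses[quant_items]), 2))
print("alpha QUALPERF:", round(cronbach_alpha(responses[qual_items]), 2))
```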
Independent variables
Clear and measurable goals
The instrument for clear and measurable goals (CLRMEASG) is purposely developed
for this research project. The variable CLRMEASG captures the extent to which
respondents agree with the following statements (where 1 = completely disagree and 5 = completely agree):
• the mission of my organization is formulated unambiguously;
• the mission of my organization is written on paper and communicated internally and externally;
• the goals of my organization are unambiguously related to the mission of my organization;
• the goals of my organization have been documented very specifically and in detail;
• the sum of goals to be achieved provides a complete picture of the results that should be achieved by my organization; and
• the performance measures for my organization are unambiguously related to the goals of my organization[11].

Incentives
An adapted version of the instrument by Keating (1997) has been used to characterize the incentive system. Respondents have been asked to indicate the importance of the following for their total compensation (salary, bonus, career perspectives, etc.):
• budget versus actuals;
• quantity measures;
• efficiency measures;
• customer satisfaction measures;
• quality measures; and
• outcome measures.
The variable for incentives has been labelled INCENTVE.
Control variables
Decentralization
The proxy for decentralization is based on the instrument developed by Gordon and
Narayanan (1984) and also used by, among others, Miah and Mia (1996)[12] and
Abernethy et al. (2004). To obtain a valid measure for the subsequent analysis, a
variable based on only the first two items of the instrument (labelled DECACR,
Decentralization of Asset Commitment Rights; see Bouwens and Van Lent, 2003) had
to be created.
Performance measurement system
The performance measurement system instrument (labelled BROADPMS) is based
upon the instrument by Cavalluzzo and Ittner (2004) and captures the extent to which
different types of results-oriented performance measures have been developed for the
activities of the organization. These different types of measures should be regarded as
a supplement (not a substitute) to the financial (budget) measures that are used in most
public sector organizations (Williams et al., 1990).
Size
Respondents have been asked to provide the number of full-time equivalents of their
organization. To correct for skewness, the measure for size (labeled SIZE) is based on
the logarithm for the number of employees in the organization.
Sector
Finally, respondents have been asked to provide the sector in which their organization
operates. The organizations are regrouped to obtain dummy variables for local government organizations (labeled LOCGOVT) and other government organizations (labeled OTHGOVT); the remaining (reference) category consists of central government organizations.
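As a minimal sketch (not taken from the paper), the control variables described above could be constructed from the raw survey fields roughly as follows; the field names, the CSV file and the base of the logarithm are assumptions.

```python
import numpy as np
import pandas as pd

# Assumed raw fields: "fte" (number of full-time equivalents) and "sector"
# (e.g. "central government", "local government", "other government").
df = pd.read_csv("survey_responses.csv")

# SIZE: logarithm of the number of employees, correcting for skewness
# (the paper does not state the base of the logarithm; base 10 is used here).
df["SIZE"] = np.log10(df["fte"])

# Sector dummies; central government is the omitted reference category.
df["LOCGOVT"] = (df["sector"] == "local government").astype(int)
df["OTHGOVT"] = (df["sector"] == "other government").astype(int)
```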

Results
Partial least squares regression
Partial least squares (PLS) regression[13] is used to investigate the relations between
clear and measurable goals, incentives and performance. PLS is a components-based
structural equations modelling technique similar to regression, but simultaneously
models the structural paths (i.e. theoretical relationships among latent variables) and
measurement paths (i.e. relationships between a latent variable and its indicators).
PLS places minimal demands on measurement scales, sample size and residual
distributions; in addition, it is considered better suited for explaining complex
relationships (Chin et al., 2003; Chin, 1998). PLS requires the subsequent evaluation of
the measurement model (where the reliability and validity of the measures is assessed)
and the structural model (where the fit between the theoretical model and the data are
assessed; see Hair et al., 1998, pp. 610-15). Hulland (1999) discusses the use of PLS in
strategic management research; similarly, Smith and Langfield-Smith (2004, p. 75)
review the applicability of structural equation modelling in management accounting
research and indicate that the use of PLS provides opportunities to build theory on
relatively small samples[14].
Measurement model. Table II provides the PLS parameter estimates for the
measurement models. The table suggests that the constructs are measured with
sufficient precision[15], even though they are correlated.
Structural model: test of models. The PLS estimates of the structural model are
reported in Table III. The table provides the standardized βs for each path coefficient,
which are interpreted in the same way as in OLS regression. In addition, the R2 for
each variable is calculated; the PLS R2 can be used to evaluate the model. As PLS
makes no distributional assumptions, bootstrapping (500 samples with replacement) is
used to evaluate the statistical significance of each path coefficient[16] (Chin, 1998).
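The structural model itself is estimated with dedicated PLS software, which is not shown in the paper. Purely to illustrate the bootstrapping logic behind the reported t-values, the sketch below resamples organizations with replacement and recomputes a standardized (OLS-based) path coefficient 500 times; it is a simplified stand-in for the PLS estimator, and the file and construct names are assumptions.

```python
import numpy as np
import pandas as pd

def standardized_beta(data: pd.DataFrame, y: str, x: str, controls: list) -> float:
    """Standardized coefficient of x in an OLS regression of y on x and the controls."""
    cols = [y, x] + controls
    z = (data[cols] - data[cols].mean()) / data[cols].std(ddof=0)   # z-score all variables
    X = np.column_stack([np.ones(len(z))] + [z[c].to_numpy() for c in [x] + controls])
    coefs, *_ = np.linalg.lstsq(X, z[y].to_numpy(), rcond=None)
    return coefs[1]                                                 # coefficient on x

df = pd.read_csv("latent_scores.csv")   # assumed: one column of construct scores per organization

controls = ["INCENTVE", "DECACR", "BROADPMS", "SIZE", "LOCGOVT", "OTHGOVT"]
estimate = standardized_beta(df, "QUANTPERF", "CLRMEASG", controls)

# 500 bootstrap replications: resample rows with replacement and re-estimate the path.
boot = np.array([
    standardized_beta(df.sample(frac=1.0, replace=True), "QUANTPERF", "CLRMEASG", controls)
    for _ in range(500)
])
t_value = estimate / boot.std(ddof=1)   # bootstrap t-statistic, analogous to Table III
print(f"CLRMEASG -> QUANTPERF: path = {estimate:.2f}, t = {t_value:.2f}")
```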
The results from the analysis indicate that clear and measurable goals are positively associated with both quantity and quality performance. This result is in accordance with H1(a), yet conflicts with H1(b). Consistent with goal setting theory, the impact of clear and measurable goals on qualitative aspects of performance is lower than on quantitative aspects of performance (β = 0.25 for quality performance and β = 0.35 for quantity performance). The finding that incentives are positively associated with quantity performance (β = 0.25, p < 0.10), yet not related to quality performance (β = 0.09, p > 0.10), fully confirms the second hypothesis.
The control variables also affect the model. First of all, sector and, to a lesser extent, size appear to affect the ability to define clear and measurable goals. Local government organizations have significantly less clear and measurable goals than central government organizations (path coefficient from local government to clear and measurable goals is -0.30, p < 0.05). By contrast, other government organizations appear to have somewhat clearer and more measurable goals than central government organizations (path coefficient from other government to clear and measurable goals is 0.14, p < 0.15). Local governments also rate their quality performance as significantly lower than other public sector organizations (path coefficient from local government to quality performance is -0.35, p < 0.01); their quantitative performance appears similar to that of other public sector organizations.
Finally, large organizations appear to have somewhat more difficulties with defining clear and measurable goals than smaller organizations (path coefficient from size to clear and measurable goals is -0.18, p < 0.15). In addition, it appears that large organizations rely less on incentive systems (path coefficient from size to incentives is -0.21, p < 0.15) and have lower quality performance (path coefficient from size to quality performance is -0.20, p < 0.10). Other paths from the control variables to the other variables in the model are non-significant. The results for sector and size suggest that the application of PM is affected by institutional characteristics.

Table II. Composite reliability, average variance extracted, Pearson correlations of latent variables and square root of average variance extracted statistics

Variable          Composite reliability   AVE      1        2        3        4        5        6        7        8        9
1. CLRMSG                0.88             0.55     0.74
2. DECACR                0.83             0.72     0.18a    0.85
3. BRDPMS                0.86             0.55     0.49c    0.21b    0.74
4. INCENTVE              0.90             0.61     0.13     0.05     0.22b    0.78
5. SIZE                  1.00             1.00    -0.14    -0.12     0.02    -0.18a   1.00
6. LOCGOVT               1.00             1.00    -0.34c   -0.01    -0.18a    0.01   -0.05     1.00
7. OTHGOVT               1.00             1.00     0.28c    0.12     0.08    -0.14    0.09    -0.52c    1.00
8. QUANTPERF             0.81             0.59     0.48c    0.17     0.40c    0.30c  -0.07    -0.17     0.08     0.77
9. QUALPERF              0.80             0.50     0.40c    0.14     0.24b    0.18a  -0.25b   -0.39c    0.14     0.46c    0.71

Notes: a, b and c denote significance at the 0.10, 0.05 and 0.01 level (two-tailed), respectively. Diagonal elements of the correlation matrix are the square roots of the average variance extracted statistics; off-diagonal elements are the correlations between the latent variables calculated in PLS. CLRMSG = clear and measurable goals; DECACR = decentralization of asset commitment rights; BROADPMS = broad performance measurement system; INCENTVE = incentive system; SIZE = size (log fte); LOCGOVT = local government (dummy; 1 = local government organization, 0 = rest); OTHGOVT = other government (dummy; 1 = other government organization, 0 = rest); QUANTPERF = quantity performance; QUALPERF = quality performance.

Table III. Path coefficients, t-statistics and R2 for PLS structural model

Paths                        Path coefficient   t-value
Main hypotheses:
1.a CLRMSG → QUANTPERF            0.35           2.07**
1.b CLRMSG → QUALPERF             0.25           3.39***
2.a INCENTVE → QUANTPERF          0.25           2.00**
2.b INCENTVE → QUALPERF           0.09           0.79
Control variables:
DECACR → QUANTPERF                0.06           0.54
DECACR → QUALPERF                 0.06           0.61
BROADPMS → QUANTPERF              0.13           1.08
BROADPMS → QUALPERF               0.04           0.24
LOCGOVT → CLRMSG                 -0.30           2.28**
LOCGOVT → INCENTVE               -0.11           0.76
LOCGOVT → DECACR                  0.03           0.36
LOCGOVT → BROADPMS               -0.23           1.36
LOCGOVT → QUANTPERF              -0.03           0.34
LOCGOVT → QUALPERF               -0.35           3.42***
OTHGOVT → CLRMSG                  0.14           1.64
OTHGOVT → INCENTVE               -0.14           1.31
OTHGOVT → DECACR                  0.12           1.00
OTHGOVT → BROADPMS               -0.03           0.10
OTHGOVT → QUANTPERF              -0.04           0.31
OTHGOVT → QUALPERF               -0.09           0.85
SIZE → CLRMSG                    -0.18           1.62
SIZE → INCENTVE                  -0.21           1.60
SIZE → DECACR                    -0.11           1.03
SIZE → BROADPMS                   0.02           0.12
SIZE → QUANTPERF                  0.05           0.28
SIZE → QUALPERF                  -0.20           1.75*

Multiple R2 of the endogenous variables: QUANTPERF = 0.31; QUALPERF = 0.32; CLRMSG = 0.16; INCENTVE = 0.05; DECACR = 0.04; BROADPMS = 0.03.

Notes: *p < 0.10; **p < 0.05; ***p < 0.01 (two-tailed). N = 93. Cells report the means of the bootstrap sub-samples (500 generated samples) for path coefficients and t-values. CLRMSG = clear and measurable goals; DECACR = decentralization of asset commitment rights; BROADPMS = broad performance measurement system; INCENTVE = incentive system; SIZE = size (log fte); LOCGOVT = local government (dummy; 1 = local government organization, 0 = rest); OTHGOVT = other government (dummy; 1 = other government organization, 0 = rest); QUANTPERF = quantity performance; QUALPERF = quality performance.

Discussion and conclusions
This study draws upon economic and behavioral theories to empirically investigate whether performance management practices affect performance in public sector organizations in the Netherlands. The findings of this study suggest that the definition
of clear and measurable goals is positively associated with both quantity performance
(efficiency, production targets) as well as quality performance (accuracy, innovation,
employee morale). This finding is consistent with goal setting theory (Locke and
Latham, 2002, 1990); the specification of clear and measurable goals appears to provide
focus in operations and improves performance. The use of incentives is positively
associated with quantity performance, yet not related to quality performance. This
finding is consistent with the notion that incentives may not be helpful to stimulate
effort when goals are ambiguous or performance is difficult to measure.
Finally, institutional factors (sector, and, to a lesser extent, size) appear to affect the
use and effectiveness of PM-practices. Local government organizations appear to have
more ambiguous and difficult to measure goals as well as lower performance than
other public sector organizations. Large organizations appear to have more difficulty defining clear and measurable goals, are less likely to use incentives and have lower
quality performance, which is mostly consistent with the notions of agency theory
(Dewatripont et al., 1999). Overall, it appears that the behavioural consequences of
PM-practices on public sector managers are as important as the economic
consequences.
The results from this study suggest that public sector organizations face a trade-off
between achieving quantitative goals (i.e. short-term performance goals such as
efficiency and quantity produced) and quality goals (i.e. long-term or strategic
performance goals such as quality/accuracy, innovation, and employee morale).
Quality goals are unlikely to be attained by introducing performance measurement and
evaluation systems, yet appear to be achieved by providing inspiring missions and/or
goals. This suggests that PM-practices are useful in order to increase and/or maintain
efficiency in so-called production agencies (Wilson, 1989). However, the
overextension of PM-practices to public service activities where underperformance
on quality dimensions has serious (possibly life-threatening) consequences (for
example, health care) should be avoided.
The literature suggests a number of strategies that organizations can pursue in order
to achieve a balance between quantitative and qualitative performance (De Bruijn, 2002;
Van Thiel and Leeuw, 2002; Simons, 2000; Likierman, 1993). First, organizations can use
a variety of (competing) performance measures for all tasks that have to be carried out.
Few indicators for a limited part of total performance may result in gaming; this effect is
reinforced when indicators do not change over time. In addition, adequate safeguards
against quantity indicators need to be developed; that is, soft aspects of performance
should also be included in the performance measurement system. Second, external
sources (such as important stakeholders, external experts and/or client panels) can be
used to provide information on adequate performance measures that should be
included in the performance measurement system, as well as provide information on
how they view the soft aspects of performance.
Third, performance measurement systems should be devised together with the people that work with them in order to create ownership. Such an interaction reduces the chances that performance measures are not understood, that they are inconsistent or unfair, or that targets are set at unattainable levels. Fourth, performance measure results should be interpreted as guidance, not answers; the appropriate initial managerial response to performance measures may firstly be a discussion as to suitable actions. The use of PM-practices should also be accompanied by conduct guidelines (such as ethical codes and codes of behaviour) that diminish the pressure of quantity measures. Finally, the
use of targets and comparisons over time, between organizations, and/or between
different units within the same organization is recommended. These strategies may
provide the opportunity to exploit the advantages of PM-practices, while limiting the
downside risks of PM.
The findings of this research project may also provide an explanation for the mixed
evidence on the relation between PM-practices and performance. For example,
Newberry and Pallot (2005) indicate that "one does not have to search far for efficiency gains", yet also that there is "mounting concern that an emphasis on outputs ... is achieved ... at the expense of longer term capability ... and the ownership interests of government". In addition, Newberry and Pallot (2006) as well as
Ezzamel et al. (2004) indicate that PM-practices may have unintended consequences for
democracy. Hood and Peters (2004) and Hood (1998) indicate that PM-practices may
have such paradoxical effects; the degree of effectiveness of PM-practices depends to
a large extent on the perspective or world-view that is taken (Hood, 1998, p. 6 and 13).
In other words, PM-practices can be considered successful from an economic-rational
perspective (i.e. that shirking and distortion by public managers and workers should
be reduced). However, PM-practices will be considered less successful if another
perspective (democratic, long-term public interest) is adopted. As Hood and Peters
(2004, p. 275) state:
[...] any dominant approach to institutional design (for example, an over-reliance on PM-practices that solely focus on short-term, quantity performance) might be expected to encounter unexpected effects of overextension if it turns into a monoculture.

This research project is amongst the first large-scale empirical analyses that investigate whether the use of PM practices is associated with the performance of
public sector organizations (see also Van Helden, 2005). The findings from this study
are not without limitations. First of all, the results presented here are based on
correlations, not necessarily causal relations. Even though the research model is based
on (normative) PM-literature (Ittner and Larcker, 2001; Otley, 1999; Kravchuk and
Schack, 1996), it may be argued that the implementation of PM-practices provides the
opportunity to define clear and measurable goals or that a high level of perceived
performance makes it appear as if the organization has clear and measurable goals.
In addition, it may be argued that performance is better when there are clearer goals
because, when goals are clear, it is easier to measure performance. In that case, the
results for the relation between PM-practices and performance would suffer from
endogeneity due to a misspecification of the model (Ittner and Larcker, 2001, p. 397).
However, the previous argument would suggest that, after recognizing the impact of
task uncertainty and all other objective factors that may explain the use of
PM-practices, all organizations in one sector would use similar PM-practices and have
similar performance. In other words, this would exclude the impact of political decisions as well as the possibility that organizations learn dynamically (Ittner and Larcker, 2001, p. 399).
Another limitation is that the results are based on perceptions rather than hard
measures. These perceptions may be inadequate due to inappropriate measures or
inadequate interpretation of the survey instruments (see also Ittner and Larcker, 2001, and Jacobs, 1997, for comments on this issue). While the use of validated instruments
and the pre-tests on survey experts and public managers should prevent such errors,
additional research may be necessary to (further) validate the results from this study.
Third, the study is based on a cross-sectional survey of public sector organizations
rather than one specific type of public sector organization. Institutional differences in
these specific types of public sector organizations (such as the mandatory initiation of
PM, the collective labor agreements that hardly allow the use of bonus payments or the
legally required diversity of tasks) may explain some of the results in this study.
Another limitation is that factors such as mutual trust amongst stakeholders and
managers or financial stress that may affect PM-practices and/or performance have not
explicitly been considered in the survey.
Finally, the survey instrument that measures performance focuses on managerial
tasks. The survey instrument for performance asks respondents to compare the
performance of their organization to other comparable organizations with regard to
quantities produced, quality produced, the number of innovations and morale of unit
personnel. The survey instrument does not ask respondents whether these tasks are
politically sensitive, or whether political executives are happy with the performance
that is achieved. As a result, the survey does not capture political performance, while
in effect that may be the criterion that is used to judge the performance of the
organization.
In addition to the previous limitations, subsequent research may address the
reasons why some public sector organizations have vague and hard to measure goals
while others have clear and measurable goals. Part of this may be explained by the
processes of the organization (Chenhall, 2003; Ouchi, 1980; Burchell et al., 1980), yet
others may be related to political choices in society or in the organization (Hood and
Peters, 2004; Dranove et al., 2003; Hood, 1998; Hofstede, 1981). The consequences of
these choices (in terms of financial and non-financial performance, yet also in terms of
political consequences and societal effects) may be interesting to address.
The interaction of the performance measurement and reward system with other
aspects of the control system (behavioral controls, social controls) in public sector
organizations appears a fruitful avenue for research. It is unclear at this point whether
the introduction of (or focus on) result controls are used as substitutes for, or
complements to other forms of control. In other words, does the introduction of
PM-practices diminish behavioral and/or cultural controls or can we use behavioral
and/or cultural controls to increase the effectiveness of PM-practices?
Notes
1. See Cavalluzzo and Ittner (2004) and Ter Bogt (2003) for other large scale empirical studies
examining the factors and effects of PM in public sector organizations.
2. There may also be economic benefits associated with ambiguous goals: for example, Hart
et al. (1997) indicate that renegotiation of goals may result in excessive costs. Williamson
(1999) suggests that the ongoing relations between politicians and government agencies, in
which information input, decision making and implementation are interrelated and
interdependent, may (economically) prevent the provision of clear and measurable goals.
3. Politically, PM-practices may be used as tools to appease politicians, media and other stakeholder groups; in other words, the introduction of PM is driven by institutional factors (legitimization motive; see Hoque, 2005; Hoque et al., 2004; Geiger and Ittner, 1996; Meyer and Rowan, 1977). PM-practices may provide information that can be used by public sector managers, politicians and other stakeholders alike to profit from or prevent crises, scandals and catastrophes (Johnsen, 2005; Ezzamel et al., 2004; Hood, 1998). In other words, performance measures may act as prices in the political market: they may be used strategically by individuals for their unique individual, organizational and political purposes (for example, to challenge the legitimacy of specific political decisions; see Vakkuri and Meklin, 2006).
4. It may be argued that goal setting theory is associated with individual task performance rather than organizational performance. However, the effects of goal setting have been shown to be applicable to individuals as well as to organizational units (Maiga and Jacobs, 2005; Rodgers and Hunter, 1991) and entire organizations (Locke and Latham, 2002).
5. Goal setting theory also suggests that incentives can be used to enhance performance. Incentives can be defined as extrinsic motivators where pay, bonuses or career perspectives are linked to performance (Bonner et al., 2000; Locke and Latham, 1990). However, Locke (2004) notes that effective incentive systems are extraordinarily difficult to set up and maintain.
6. Additional factors that are recognized by goal setting theory include ability, goal commitment and task complexity (Locke and Latham, 1990); these factors are not explicitly considered in this study.
7. For example, the central government in the Netherlands has created agencies that execute (mostly) operational tasks. Municipalities, on the other hand, are legally obliged to perform a wide variety of tasks; they may not have the opportunity to specialize in certain tasks. In addition, the Dutch central government requires central government organizations to specify clear and measurable goals and measure performance, while Dutch municipalities are encouraged yet not required to introduce PM-practices. Finally, public sector organizations in the Netherlands have collective labour agreements; only a few of them allow the use of pay-for-performance incentive schemes.
8. A total of 106 surveys has been returned; small organizations (i.e. organizations with less than 30 employees) and incomplete surveys are excluded from the analysis.
9. All Cronbach's alphas are above 0.60, the lower limit in exploratory research (see Hair et al., 1998, p. 118).
10. The Dutch public sector has several initiatives through which public sector organizations get a feeling for their relative performance. For example, the Dutch Ministry of Internal Affairs provides information on how to set up benchmarking practices; this project is shared with Dutch municipalities and provinces, amongst others. Second, there are several initiatives for sharing information amongst different functions in the public sector. For example, mayors as well as CFOs in Dutch municipalities have an annual congress where they discuss best management practices, amongst others. Therefore, it is assumed that the respondents can adequately judge their performance relative to others in their sector.
11. The original survey included a number of additional statements, including: the goals of my organization are completely based on quantitative issues (budgets, productivity, quantities); a number of external factors determines whether I realize my goals; and the results that my organization achieves are only measurable in the long run. These items were included to measure aspects such as objectivity and responsiveness. However, they did not load on one factor and/or factor loadings were (far) below 0.45; as a result, they were excluded from the analysis to obtain a more unidimensional scale. The convergent validity of the CLRMEASG-measure has been investigated by computing a Pearson correlation with another survey variable. Based on Cavalluzzo and Ittner (2004), the survey also included a number of questions on the difficulty of developing performance measures, including: the development of long-term, strategic goals for my organization; integrating different perspectives in the mission, goals and strategy of my organization; integrating stakeholder perspectives in the mission, goals and strategy of my organization; and developing measures that capture the goals of my organization. CFA indicates that all items load on one measure; all responses are summarized to obtain an alternative measure for the difficulty to develop a mission and measure the achievement of goals. The correlation between this alternative measure and the summation of the scores on the questions in the CLRMEASG-variable is negative and significant (p < 0.01), providing additional support for the validity of the CLRMEASG-measure.
12. It should be noted that Jacobs (1997) indicates that this instrument may not be useful to describe decentralization in public sector settings. However, tests of this instrument indicated that the first two items adequately described the asset commitment decision rights of public sector managers in the Netherlands.
13. PLS Graph Version 3.0 is used to analyze the data.
14. Additional management accounting research that has used PLS includes Abernethy and Bouwens (2005), Hartmann (2005), and Chenhall (2005), amongst others.
15. Internal consistency is measured by the construct reliability (using Fornell and Larcker's (1981) measure of composite reliability) and by the average variance extracted (AVE; Hulland, 1999; Hair et al., 1998). The composite reliability score for each variable is above 0.81, which demonstrates acceptable reliability (Hair et al., 1998). Table II also indicates that, with the exception of the QUALPERF-variable, the AVE for all variables is above the generally accepted level of 0.54 (Hair et al., 1998). Chin (1998) indicates that the square roots of AVE should be larger than the correlations among the constructs. The relatively low correlations among constructs show that the model also satisfies this condition for discriminant validity (Hair et al., 1998; Cool et al., 1989). The standard formulations of these reliability and validity measures are sketched after these notes.
16. In general, the significance levels are higher if the original sample estimates are used.
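For reference, the reliability and validity statistics referred to in notes 9 and 15 correspond to the following standard formulations; this is a sketch based on the conventional textbook definitions for a construct with k standardized indicator loadings lambda_i, and does not reproduce the exact computations underlying Table II:

\[
\rho_c \;=\; \frac{\bigl(\sum_{i=1}^{k}\lambda_i\bigr)^{2}}{\bigl(\sum_{i=1}^{k}\lambda_i\bigr)^{2}+\sum_{i=1}^{k}\bigl(1-\lambda_i^{2}\bigr)}
\qquad \text{(composite reliability; Fornell and Larcker, 1981)}
\]
\[
\mathrm{AVE} \;=\; \frac{1}{k}\sum_{i=1}^{k}\lambda_i^{2}
\qquad \text{(average variance extracted)}
\]
\[
\alpha \;=\; \frac{k}{k-1}\left(1-\frac{\sum_{i=1}^{k}\sigma_i^{2}}{\sigma_X^{2}}\right)
\qquad \text{(Cronbach's alpha, with } \sigma_i^{2} \text{ the item variances and } \sigma_X^{2} \text{ the variance of the summed scale)}
\]

Discriminant validity in the sense of Chin (1998) then requires \(\sqrt{\mathrm{AVE}_j} > \lvert r_{jl}\rvert\) for every pair of constructs j and l, where r_{jl} is the correlation between the constructs. The cut-off values cited in notes 9 and 15 (0.60 for Cronbach's alpha; the reported composite reliability and AVE levels) refer to these quantities.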

References
Abernethy, M.A. and Bouwens, J. (2005), Determinants of accounting innovation
implementation, Abacus, Vol. 41 No. 3, pp. 217-40.
Abernethy, M.A., Bouwens, J. and Van Lent, L. (2004), Determinants of control system design in
divisionalized firms, The Accounting Review, Vol. 79 No. 3, pp. 545-70.
Anthony, R.N. and Young, D.W. (2003), Management Control in Non-profit Organizations,
McGraw-Hill/Irwin, New York, NY.
Atkinson, A.A., Waterhouse, J.H. and Wells, R.B. (1997), A stakeholder approach to strategic
performance measurement, Sloan Management Review, Vol. 38 No. 3, pp. 25-37.
Baiman, S. (1990), Agency research in managerial accounting: a second look, Accounting,
Organizations and Society, Vol. 15 No. 4, pp. 341-71.
Bevan, G. and Hood, C. (2006), What's measured is what matters: targets and gaming in the
English public health care system, Public Administration, Vol. 84 No. 3, pp. 517-38.
Bonner, S.E. and Sprinkle, G.B. (2002), The effects of monetary incentives on effort and task
performance: theories, evidence, and a framework for research, Accounting,
Organizations and Society, Vol. 27, pp. 303-45.
Bonner, S.E., Hastie, R., Sprinkle, G.B. and Young, S.M. (2000), A review of the effects of
financial incentives on performance in laboratory tasks: implications for management
accounting, Journal of Management Accounting Research, Vol. 12, pp. 19-64.
Bouwens, J. (2003), The use of value-based measures for assessing managerial performance,
unpublished working paper, Nyenrode University, Breukelen.

Brickley, J., Smith, C. and Zimmerman, J. (1995), The economics of organizational architecture,
Journal of Applied Corporate Finance, Vol. 8 No. 2, pp. 19-31.
Brignall, S. and Modell, S. (2000), An institutional perspective on performance measurement and
management in the new public sector, Management Accounting Research, Vol. 11,
pp. 281-306.
Burchell, S., Clubb, C., Hopwood, A., Hughes, J. and Nahapiet, J. (1980), The roles of accounting
in organizations and society, Accounting, Organizations and Society, Vol. 5 No. 1, pp. 5-25.
Burgess, S. and Ratto, M. (2003), The role of incentives in the public sector: issues and evidence,
Oxford Review of Economic Policy, Vol. 19 No. 2, pp. 285-99.
Bushman, R.M., Indjejikian, R.J. and Penno, M.C. (2000), Private pre-decision information,
performance measure congruity, and the value of delegation, Contemporary Accounting
Research, Vol. 17 No. 4, pp. 561-87.
Carter, N., Klein, R. and Day, P. (1992), How Organisations Measure Success: The Use of
Performance Indicators in Government, Routledge, London/New York.
Cavalluzzo, K.S. and Ittner, C.D. (2004), Implementing performance measurement innovations:
evidence from government, Accounting, Organizations and Society, Vol. 29, pp. 243-67.
Chenhall, R.H. (2003), Management control systems design within its organizational context:
findings from contingency-based research and directions for the future, Accounting,
Organizations and Society, Vol. 28 Nos 2-3, pp. 127-68.
Chenhall, R.H. (2005), Integrative strategic performance measurement systems, strategic
alignment of manufacturing, learning and strategic outcomes: an exploratory study,
Accounting, Organizations and Society, Vol. 30, pp. 395-422.
Chin, W.W. (1998), The partial least squares approach for structural equation modeling, in
Marcoulides, G.A. (Ed.), Modern Methods for Business Research, Lawrence Erlbaum
Associates, Hillsdale, NJ, pp. 295-336.
Chin, W.W., Marcolin, B.L. and Newsted, P.R. (2003), A partial least squares latent variable
modeling approach for measuring interaction effects: results from a Monte Carlo
simulation study and an electronic-mail emotion/adoption study, Information Systems
Research, Vol. 14 No. 2, pp. 189-217.
Cool, K., Dierickx, I. and Jemison, D. (1989), Business strategy, market structure and risk-return
relationships: a structural approach, Strategic Management Journal, Vol. 10, pp. 507-22.
Cooper, D.R. and Schindler, P.S. (2003), Business Research Methods, 8th International Edition,
McGraw-Hill, New York, NY.
De Bruijn, H. (2002), Performance measurement in the public sector: strategies to cope with the
risks of performance measurement, International Journal of Public Sector Management,
Vol. 15 Nos 6/7, pp. 578-94.
De Lancer Julnes, P. and Holzer, M. (2001), Promoting the utilization of performance measures in
public organizations: an empirical study of factors affecting adoption and
implementation, Public Administration Review, Vol. 61 No. 6, pp. 693-708.
Dewatripont, M., Jewitt, I. and Tirole, J. (1999), The economics of career concerns, part II:
application to missions and accountability of government agencies, Review of Economic
Studies, Vol. 66, pp. 199-217.
Dixit, A. (1997), Power of incentives in private versus public organizations, AEA Papers and
Proceedings, May, pp. 378-82.
Dixit, A. (2002), Incentives and organizations in the public sector: an interpretive review, The
Journal of Human Resources, Vol. 37 No. 4, pp. 696-718.

Dranove, D., Kessler, D., McClellan, M. and Satterthwaite, M. (2003), Is more information better?
The effects of report cards on health care providers, Journal of Political Economy, Vol. 111
No. 3, pp. 555-88.
Dunk, A.S. and Lysons, A.F. (1997), An analysis of departmental effectiveness, participative
budgetary control processes and environmental dimensionality within the competing
values framework: a public sector study, Financial Accountability and Management,
Vol. 13 No. 1, pp. 1-15.
Eisenhardt, K.M. (1985), Control: organizational and economic approaches, Management
Science, Vol. 31 No. 2, pp. 134-49.
Eisenhardt, K.M. (1989), Agency theory: an assessment and review, Academy of Management
Review, Vol. 14 No. 1, pp. 57-74.
Ezzamel, M., Hyndman, N., Johnsen, A., Lapsley, I. and Pallot, J. (2004), Has devolution
increased democratic accountability?, Public Money and Management, Vol. 24 No. 3,
pp. 145-52.
Fornell, C. and Larcker, D.F. (1981), Evaluating structural equation models with unobservable
variables and measurement error, Journal of Marketing Research, Vol. 18, pp. 39-50.
Geiger, D.R. and Ittner, C.D. (1996), The influence of funding source and legislative requirements
on government cost accounting practices, Accounting, Organizations and Society, Vol. 21,
pp. 549-67.
Gordon, L.A. and Narayanan, V.K. (1984), Management accounting systems, perceived
environmental uncertainty and organization structure: an empirical investigation,
Accounting, Organizations and Society, Vol. 9 No. 1, pp. 33-47.
Gray, A. and Jenkins, B. (1993), Codes of accountability in the new public sector, Accounting,
Auditing and Accountability Journal, Vol. 6 No. 3, pp. 52-67.
Gray, A. and Jenkins, B. (1995), From public administration to public management; reassessing
a revolution?, Public Administration, Vol. 73 No. 1, pp. 75-99.
Gupta, P.P., Dirsmith, M.W. and Fogarty, T.J. (1994), Coordination and control in a government
agency: contingency and institutional theory perspectives on GAO audits, Administrative
Science Quarterly, Vol. 39, pp. 264-84.
Guthrie, J., Olson, O. and Humphrey, C. (1999), Debating developments in new public financial
management: the limits of global theorising and some new ways forward, Financial
Accountability and Management, Vol. 15 No. 3, pp. 209-28.
Hair, J.F. Jr, Anderson, R.E., Tatham, R.L. and Black, W.C. (1998), Multivariate Data Analysis, 5th
ed., Prentice-Hall International, London.
Hart, O., Shleifer, A. and Vishny, R.W. (1997), The proper scope of government: theory and an
application to prisons, The Quarterly Journal of Economics, Vol. 112 No. 4, pp. 1127-61.
Hartmann, F. (2005), The effects of tolerance for ambiguity and uncertainty on the
appropriateness of accounting performance measures, Abacus, Vol. 41 No. 3, pp. 241-66.
Heinrich, C. (2002), Outcomes-based performance management in the public sector: implications
for government accountability and effectiveness, Public Administration Review, Vol. 62
No. 6, pp. 712-25.
Henley, D., Likierman, A., Perrin, J., Evans, M., Lapsley, I. and Whiteoak, J. (1992), Public Sector
Accounting and Financial Control, Chapman & Hall, London.
Hofstede, G. (1981), Management control of public and not-for-profit activities, Accounting,
Organizations and Society, Vol. 6, pp. 193-211.
Hood, C. (1991), A public management for all seasons?, Public Administration, Vol. 69 No. 1,
pp. 3-19.

Hood, C. (1995), The New Public Management in the 1980s: variations on a theme, Accounting,
Organizations and Society, Vol. 20, pp. 93-109.
Hood, C. (1998), The Art of the State. Culture, Rhetoric, and Public Management, Clarendon Press,
Oxford.
Hood, C. and Peters, G. (2004), The middle aging of new public management: into the age of
paradox?, Journal of Public Administration Research and Theory, Vol. 14 No. 3, pp. 267-82.
Hoque, Z. (2005), Securing institutional legitimacy or organizational effectiveness? A case
examining the impact of public sector reform initiatives in an Australian local authority,
International Journal of Public Sector Management, Vol. 18 Nos 4/5, pp. 367-82.
Hoque, Z., Arends, S. and Alexander, R. (2004), Policing the police service: a case study of the
rise of new public management within an Australian police service, Accounting,
Auditing & Accountability Journal, Vol. 17 No. 1, pp. 59-84.
Hulland, J. (1999), Use of partial least squares (PLS) in strategic management research: a review
of four recent studies, Strategic Management Journal, Vol. 20, pp. 195-204.
Hyndman, N. and Eden, R. (2000), A study of the coordination of mission, objectives and targets
in UK executive agencies, Management Accounting Research, Vol. 11, pp. 175-91.
Hyndman, N. and Eden, R. (2001), Rational management, performance targets and executive
agencies: views from agency chief executives in Northern Ireland, Public Administration,
Vol. 79 No. 3, pp. 579-98.
Ittner, C.D. and Larcker, D.F. (2001), Assessing empirical research in managerial accounting: a
value-based management perspective, Journal of Accounting and Economics, Vol. 32,
pp. 349-410.
Jacobs, K. (1997), The decentralization debate and accounting controls in the New Zealand
public sector, Financial Accountability and Management, Vol. 13 No. 4, pp. 331-43.
Jenkins, G.D. Jr, Mitra, A., Gupta, N. and Shaw, J.D. (1998), Are financial incentives related to
performance? A meta-analytic review of empirical research, Journal of Applied
Psychology, Vol. 83 No. 5, pp. 777-87.
Johnsen, A. (2005), What does 25 years of experience tell us about the state of performance
measurement in public policy and management?, Public Money and Management, Vol. 25
No. 1, pp. 9-17.
Kaplan, R.S. (2001), Strategic performance measurement and management in non-profit
organizations, Non-Profit Management and Leadership, Vol. 11 No. 3, pp. 353-70.
Kaplan, R.S. and Atkinson, A.A. (1998), Advanced Management Accounting, 3rd ed.,
Prentice-Hall, Upper Saddle River, NJ.
Keating, S.A. (1997), Determinants of divisional performance evaluation practices, Journal of
Accounting and Economics, Vol. 24, pp. 243-73.
Kloot, L. and Martin, J. (2000), Strategic performance management: a balanced approach to
performance management issues in local government, Management Accounting Research,
Vol. 11 No. 2, pp. 231-51.
Kravchuk, R.S. and Schack, R.W. (1996), Designing effective performance measurement
systems under the government performance and results act of 1993, Public
Administration Review, Vol. 56 No. 4, pp. 348-58.
Lambert, R. (2001), Contracting theory and accounting, Journal of Accounting and Economics,
Vol. 32, pp. 3-87.
Lapsley, I. (1999), Accounting and the new public management: instruments of substantive
efficiency or a rationalising modernity?, Financial Accountability and Management,
Vol. 15 Nos 3/4, pp. 201-7.

Latham, G.P. (2004), The motivational benefits of goal-setting, Academy of Management
Executive, Vol. 18 No. 4, pp. 126-9.
Likierman, A. (1993), Performance indicators: 20 early lessons from managerial use, Public
Money and Management, Vol. 13 No. 4, pp. 15-22.
Locke, E.A. (2004), Linking goals to monetary incentives, Academy of Management Executive,
Vol. 18 No. 4, pp. 130-3.
Locke, E.A. and Latham, G.P. (1990), A Theory of Goal Setting and Task Performance,
Prentice-Hall, Englewood-Cliffs, NJ.
Locke, E.A. and Latham, G.P. (2002), Building a practically useful theory of goal setting and
task motivation, American Psychologist, Vol. 57 No. 9, pp. 705-17.
Maiga, A.S. and Jacobs, F.A. (2005), Antecedents and consequences of quality performance,
Behavioral Research in Accounting, Vol. 17, pp. 111-31.
Merchant, K.A. (1998), Modern Management Control Systems, Prentice-Hall, Englewood Cliffs,
NJ.
Merchant, K.A. and Van der Stede, W.A. (2003), Management Control Systems, Pearson
Education, Harlow.
Merchant, K.A., Van der Stede, W.A. and Zheng, L. (2003), Disciplinary constraints on the
advancement of knowledge: the case of organizational incentive systems, Accounting,
Organizations and Society, Vol. 28 Nos 2-3, pp. 251-86.
Meyer, J.W. and Rowan, B. (1977), Institutionalized organizations: formal structure as myth and
ceremony, American Journal of Sociology, Vol. 83, pp. 340-63.
Miah, N.Z. and Mia, L. (1996), Decentralization, accounting controls and performance of
government organizations: a New Zealand empirical study, Financial Accountability and
Management, Vol. 12 No. 3, pp. 173-90.
Modell, S. (2000), Integrating management control and human resource management in public
health care: Swedish case study evidence, Financial Accountability and Management,
Vol. 16 No. 1, pp. 33-53.
Mol, N.P. (1996), Performance indicators in the Dutch department of defence, Financial
Accountability and Management, Vol. 12 No. 1, pp. 71-81.
Newberry, S. (2002), Intended or unintended consequences? Resource erosion in New Zealand's
government departments, Financial Accountability and Management, Vol. 18 No. 4,
pp. 309-30.
Newberry, S. and Pallot, J. (2004), Freedom or coercion? NPM incentives in New Zealand central
government departments, Management Accounting Research, Vol. 15, pp. 247-66.
Newberry, S. and Pallot, J. (2005), A wolf in sheep's clothing? Wider consequences of the
financial management system of the New Zealand central government, Financial
Accountability and Management, Vol. 21 No. 3, pp. 263-77.
Newberry, S. and Pallot, J. (2006), New Zealand's financial management system: implications for
democracy, Public Money and Management, Vol. 26 No. 4, pp. 221-8.
Otley, D. (1999), Performance management: a framework for management control systems
research, Management Accounting Research, Vol. 10, pp. 363-82.
Ouchi, W.G. (1979), A conceptual framework for the design of organizational control
mechanisms, Management Science, Vol. 25 No. 9, pp. 833-48.
Ouchi, W.G. (1980), Markets, bureaucracies and clans, Administrative Science Quarterly, Vol. 25
No. 1, pp. 129-41.

Pallot, J. (1999), Beyond NPM: developing strategic capacity, Financial Accountability and
Management, Vol. 15 Nos 3/4, pp. 419-27.
Pallot, J. (2001), A decade in review: New Zealand's experience with resource accounting and
budgeting, Financial Accountability and Management, Vol. 17 No. 4, pp. 383-401.
Pollanen, R.M. (2005), Performance measurement in municipalities: empirical evidence in the
Canadian context, The International Journal of Public Sector Management, Vol. 18 No. 1,
pp. 4-25.

Pollitt, C. (1986), Beyond the managerial model: the case for broadening performance
assessment in government and the public services, Financial Accountability and
Management, Vol. 2 No. 3, pp. 155-70.
Pollitt, C. (2006), Performance management in practice: a comparative study of executive
agencies, Journal of Public Administration Research and Theory, Vol. 16 No. 1, pp. 25-44.
Propper, C. and Wilson, D. (2003), The use and usefulness of performance measures in the public
sector, Oxford Review of Economic Policy, Vol. 19 No. 2, pp. 250-65.
Rangan, V.K. (2004), Lofty missions, down-to-earth plans, Harvard Business Review, Vol. 82
No. 3, pp. 112-9.
Rodgers, R. and Hunter, J.E. (1991), Impact of management by objectives on organizational
productivity, Journal of Applied Psychology, Vol. 76 No. 2, pp. 322-36.
Rogers, E.M. (1995), Diffusion of Innovations, The Free Press, New York, NY.
Shirley, M.M. and Xu, L.C. (2001), Empirical effects of performance contracts: evidence from
China, Journal of Law, Economics and Organization, Vol. 17 No. 1, pp. 168-200.
Simons, R. (2000), Performance Measurement and Control Systems for Implementing Strategy: Text and Cases, Prentice-Hall, Upper Saddle River, NJ.
Smith, D. and Langfield-Smith, K. (2004), Structural equation modeling in management
accounting research: critical analysis and opportunities, Journal of Accounting Literature,
Vol. 23, pp. 49-86.
Smith, P. (1993), Outcome-related performance indicators and organizational control in the
public sector, British Journal of Management, Vol. 4, pp. 135-51.
Smith, P. (1995), On the unintended consequences of publishing performance data in the public
sector, International Journal of Public Administration, Vol. 18, pp. 277-310.
Ter Bogt, H.J. (2003), Performance evaluation styles in governmental organizations: how do
professional managers facilitate politicians' work?, Management Accounting Research,
Vol. 14, pp. 311-32.
Tirole, J. (1994), The internal organization of Government, Oxford Economic Papers, Vol. 46
No. 1, pp. 1-29.
Vakkuri, J. and Meklin, P. (2006), Ambiguity in performance measurement: a theoretical
approach to organizational uses of performance measurement, Financial Accountability
and Management, Vol. 22 No. 3, pp. 235-50.
Van de Ven, A.H. and Ferry, D.L. (1980), Measuring and Assessing Organizations, Wiley, New
York, NY.
Van Helden, G.J. (2005), Researching public sector transformation: the role of management
accounting, Financial Accountability and Management, Vol. 21 No. 1, pp. 99-133.
Van Thiel, S. and Leeuw, F. (2002), The performance paradox in the public sector, Public
Performance and Management Review, Vol. 25 No. 3, pp. 267-81.

Williams, J.J., Macintosh, N.B. and Moore, J.C. (1990), Budget-related behavior in public sector
organizations: some empirical evidence, Accounting, Organizations and Society, Vol. 15,
pp. 221-46.
Williamson, O.E. (1999), Public and private bureaucracies: a transaction cost economics
perspective, Journal of Law, Economics and Organization, Vol. 15 No. 1, pp. 306-42.
Wilson, J.Q. (1989), Bureaucracy: What Government Agencies Do and Why They Do It, Basic
Books, New York, NY.

Appendix. Survey
See Table AI.

Centralization/decentralization
Please compare your influence with the influence of your superior on the following decisions:
. strategic decisions (e.g. development of new products or services; divestment of specific products and/or services; strategy of your unit);
. investment decisions (e.g. buying a new building or new property; renovating buildings, roads or other property; buying and implementing new information systems);
. marketing decisions (e.g. determining prices/tariff structures of products and/or services; promotional campaigns);
. decisions regarding internal processes (determining project budgets; setting priorities; contracts with external suppliers); and
. decisions regarding organizational structures (changing reporting structures; hiring/firing personnel; compensation, competence profiles and career paths of personnel; changing committee structures).
If you and/or others in your unit make decisions without having to report this to your superior, you and/or others in your unit are considered to have all the influence. See Tables AII-AV.
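As an illustration of how such 5-point influence ratings can be turned into a construct score, the minimal sketch below reverse codes the items marked (R) in Table AII and averages the results. This is a generic survey-scoring convention offered for clarity, not the study's documented procedure (which, per note 12, ultimately retained only the first two items); the item keys and the function name are hypothetical.

# Illustrative sketch only: aggregate the five influence ratings (1 = all influence
# in my organization, 5 = all influence with my superior) into one score.
# Items marked (R) in Table AII are reverse coded (6 - x), so a higher score
# consistently indicates more influence for the responding manager.

REVERSE_CODED = {"strategic", "investment"}  # the two (R) items in Table AII


def decentralization_score(responses):
    """Average the supplied items after reverse coding, on the original 1-5 scale."""
    scored = []
    for item, value in responses.items():
        if not 1 <= value <= 5:
            raise ValueError(f"item {item!r} must be rated 1-5, got {value}")
        scored.append(6 - value if item in REVERSE_CODED else value)
    return sum(scored) / len(scored)


example = {
    "strategic": 4,                    # reverse coded to 2
    "investment": 3,                   # reverse coded to 3
    "marketing": 2,
    "internal_processes": 2,
    "organizational_structures": 3,
}
print(round(decentralization_score(example), 2))  # prints 2.4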

Table AI. CLRMEASG (alpha = 0.83)
To what extent do you agree on the following statements (1 = completely disagree, 5 = completely agree)?
1. The mission of my organization is formulated unambiguously (mean 3.86, std dev. 1.04, factor loading 0.788)
2. The mission of my organization is written on paper and communicated internally and externally (mean 3.83, std dev. 1.12, factor loading 0.751)
3. The goals of my organization are unambiguously related to the mission of my organization (mean 3.66, std dev. 1.17, factor loading 0.801)
4. The goals of my organization have been documented very specifically and detailed (mean 2.91, std dev. 1.17, factor loading 0.639)
5. The sum of goals to be achieved provides a complete picture of the results that should be achieved by my organization (mean 2.87, std dev. 1.06, factor loading 0.765)
6. The performance measures for my organization are unambiguously related to the goals of my organization (mean 2.67, std dev. 0.98, factor loading 0.676)

Table AII. DECACR (alpha = 0.65)
Please indicate the amount of influence you have (1 = all influence in my organization, 5 = all influence with my superior):
1. Strategic decisions (R) (mean 3.71, std dev. 1.18, factor loading 0.860)
2. Investment decisions (R) (mean 3.51, std dev. 1.27, factor loading 0.860)
3. Marketing decisions (mean 2.96, std dev. 1.37, factor loading NR)
4. Decisions regarding internal processes (mean 3.23, std dev. 1.23, factor loading NR)
5. Decisions regarding organizational structures (mean 3.22, std dev. 1.33, factor loading NR)
Note: (R) = reverse coded

Table AIII. BROADPMS (alpha = 0.79)
To what extent do you agree on the following statements about the performance measures of your organization (1 = completely disagree, 5 = completely agree)?
1. My organization has performance measures that indicate the quantity of products or services provided (mean 3.40, std dev. 1.23, factor loading 0.677)
2. My organization has performance measures that indicate the operating efficiency (mean 2.57, std dev. 1.12, factor loading 0.817)
3. My organization has performance measures that indicate the customer satisfaction (mean 2.80, std dev. 1.28, factor loading 0.738)
4. My organization has performance measures that indicate the product or service quality (mean 2.82, std dev. 1.12, factor loading 0.783)
5. My organization has performance measures that indicate the outcome effects (mean 2.56, std dev. 1.15, factor loading 0.688)

Table AIV. INCENTVE (alpha = 0.88)
For the following performance metrics, what is the importance for your total reward (1 = completely irrelevant, 5 = very important)?
1. The importance of budget versus actuals to your total compensation (salary, bonus, career, etc.) (mean 2.20, std dev. 0.92, factor loading 0.775)
2. The importance of quantity measures to your total compensation (salary, bonus, career, etc.) (mean 2.19, std dev. 0.97, factor loading 0.811)
3. The importance of efficiency measures to your total compensation (salary, bonus, career, etc.) (mean 2.19, std dev. 0.94, factor loading 0.791)
4. The importance of customer satisfaction measures to your total compensation (salary, bonus, career, etc.) (mean 2.16, std dev. 1.09, factor loading 0.856)
5. The importance of quality measures to your total compensation (salary, bonus, career, etc.) (mean 2.52, std dev. 1.09, factor loading 0.786)
6. The importance of outcome measures to your total compensation (salary, bonus, career, etc.) (mean 2.19, std dev. 1.06, factor loading 0.714)

Table AV. QUANTPERF and QUALPERF
How would you compare the performance of your organization to other, comparable organizations on the following items (1 = far below average, 5 = far above average)?
1. The quantity or amount of work produced (mean 3.53, std dev. 0.69)
2. The quality or accuracy of work produced (a) (mean 3.19, std dev. 0.65)
3. The number of innovations or new ideas by the unit (a) (mean 2.96, std dev. 0.78)
4. Reputation of work excellence (a) (mean 3.53, std dev. 0.64)
5. Attainment of unit production or service goals (mean 3.32, std dev. 0.95)
6. Efficiency of unit operations (mean 3.26, std dev. 0.94)
7. Morale of unit personnel (a) (mean 3.71, std dev. 0.66)
Notes: (a) Quality performance measures; Extraction method: Principal component analysis; Rotation method: Varimax with Kaiser normalization; Rotation converged in three iterations

Corresponding author
Frank H.M. Verbeeten can be contacted at: fverbeeten@rsm.nl

