www.emeraldinsight.com/0951-3574.htm
Performance management
practices in public sector
organizations
Impact on performance
Frank H.M. Verbeeten
Rotterdam School of Management, Erasmus University,
Rotterdam, The Netherlands
Performance
management
practices
427
Received 11 January 2006
Revised 7 July 2006,
2 January 2007
Accepted 17 April 2007
Abstract
Purpose – The aim of this study is to investigate whether performance management practices affect performance in public sector organizations.
Design/methodology/approach – Theoretically, the research project is based on economic as well as behavioral theories. The study distinguishes between quantitative performance (efficiency, quantities produced) and qualitative performance (accuracy, quality, innovation and employee morale) and uses survey data from 93 public sector organizations in the Netherlands.
Findings – The research shows that the definition of clear and measurable goals is positively associated with quantity performance as well as quality performance. In addition, the use of incentives is positively associated with quantity performance yet not related to quality performance. Finally, the effects of performance management practices in public sector organizations are affected by institutional factors. The results suggest that the behavioral effects of performance management practices are as important as the economic effects in public sector organizations.
Research limitations/implications – All limitations of survey research apply. The survey is based on public sector organizations in The Netherlands; findings may not be transferable to other countries.
Practical implications – The joint introduction of performance management practices may provide an opportunity to increase quantity performance yet may have no impact on quality performance.
Originality/value – The paper responds to previous calls in the literature to use quantitative research methods to generalize findings from previous case studies. Also, the paper empirically tests the impact of performance management practices on performance, an area that has attracted scarce research attention.
Keywords Performance management, Public sector organizations, Control systems, Behaviour,
The Netherlands
Paper type Research paper
Introduction
Recent efforts to reinvent the government and improve performance in public sector
organizations (also known as New Public Management, NPM) have focused on
Previous versions of this paper have benefited from comments from Arnick Boons, Ken
Cavalluzzo, Martine Cools, Henri Dekker, Frank Hartmann, David Naranjo Gil, Joan Luft, Paolo
Perego and Jerold Zimmerman as well as two anonymous reviewers. In addition, comments from
workshop participants at the Free University Amsterdam and the EAA Annual Congress
Workshops are appreciated. Part of this research has been executed while the author was
working at Nyenrode University. Assistance for this project was provided by Deloitte.
AAAJ
21,3
428
2003, p. 4; Merchant and Van der Stede, 2003, p. 14; Merchant, 1998, p. 2).
Organizations can use three forms of control: output (results) controls, action
(behavioural) controls, and clan (personnel/cultural) controls (Merchant and Van der
Stede, 2003; Merchant, 1998; Ouchi, 1979). Output controls involve evaluating and
rewarding individuals (and sometimes groups of individuals) for generating good
results, or punishing them for poor results. Action controls try to ascertain that
employees perform (or do not perform) specific actions known to be beneficial (or
harmful) to the organization. Finally, clan controls help to ensure that employees will
control their own behaviours or that employees will control each other's behaviours. Among other things, clan controls clarify expectations; they help ensure that
each employee understands what the organization wants. It should be noted that these
forms of control are not necessarily discrete, and elements of all three forms may be
found in any one organization.
Previous literature (Pollitt, 2006; Johnsen, 2005; Merchant and Van der Stede, 2003;
Modell, 2000; Merchant, 1998; Mol, 1996; Gupta et al., 1994; Hofstede, 1981; Ouchi, 1979,
1980) suggests that output controls are most useful when objectives are unambiguous,
outputs are measurable, activities are repetitive and the effects of management
interventions are known. If these conditions are not met, reliance on other forms of
control is necessary in order to efficiently and effectively achieve the goals of the
organization. In that case, the performance measures may still be useful for
exploratory purposes (expert control, trial-and-error control, etc; see Hofstede, 1981;
Burchell et al., 1980); however, excessive reliance on performance measures for
incentive purposes may result in dysfunctional effects.
From a control point of view, the most difficult case in public sector organizations is
when objectives are (excessively) ambiguous. To some extent, objectives are
ambiguous in most public sector organizations; yet when ambiguity of objectives is
excessive, it is likely that the incidence of political control also increases (i.e. the
organization is more likely to depend on power structures, negotiation processes,
particular interests and conflicting values; cf. Vakkuri and Meklin, 2006; Hofstede,
1981, p. 198). There may be clear political benefits[2] (and therefore incentives) to
formulate ambiguous objectives: ambiguous, unclear objectives provide politicians the
opportunity to react to changes in the political environment (Hofstede, 1981, p. 194).
Ambiguous goals may also prevent budget cuts in pet projects: if the organization
does not invest in efficiency and transparency, it is not clear to other politicians
whether money can be saved (De Bruijn, 2002, p. 583). Finally, ambiguous goals
decrease the extent to which politicians can be held accountable for problems and
disasters: a multitude of goals provides the opportunity to compensate
underperformance in one area (for example, exceeding cost budgets) by referring to
overperformance in another area (for example, reduction in the number of patients
waiting; Bevan and Hood, 2006; Vakkuri and Meklin, 2006; Johnsen, 2005).
Performance management practices in the public sector
Historically, public sector organizations have relied on action controls (rules and
procedures) to control organizations; however, the past decade has witnessed various
changes in management control of public sector organizations, including a shift
towards output controls (Ter Bogt, 2003; Hyndman and Eden, 2001, 2000; Lapsley,
1999; Guthrie et al., 1999; Hood, 1998, 1995; Gray and Jenkins, 1995). Most Western
ordinary, routine things are being done properly and on time. Similarly, Kaplan (2001)
as well as Atkinson et al. (1997) argue that measures for strategic capacity are
required in order to maintain (or improve) the long-term effectiveness of the
organization. Employee/user/customer/stakeholder surveys, professional judgment or
peer-group reviews may be needed in order to measure these aspects of quality.
Although these measures introduce an element of subjectivity, performance
measurement systems that ignore it are likely to lack balance (Kaplan, 2001). In
other words, it is important that PM-practices are constructed to identify (long-term)
quality differences in output and that this information becomes an integral part of the
performance measurement system (Henley et al., 1992, p. 287).
A general complaint (Carter et al., 1992, p. 177) as well as an empirical finding
(Pollitt, 2006, 1986; Pollanen, 2005; De Lancer Julnes and Holzer, 2001; Kloot and
Martin, 2000) is that quantitative performance measures tend to ignore the quality
aspect of service delivery since qualitative performance is much more difficult to
measure. In other words, "the easy to measure drives out the more difficult" (Gray and Jenkins, 1995, p. 89). The result of such a focus on quantity performance (measures)
may be that the increase in quantitative performance (efficiency, number of units
produced) has been achieved at the expense of quality performance (operational
quality, Henley et al., 1992, p. 287; Carter et al., 1992, pp. 40-41; or innovation and
long-term effectiveness; see Newberry and Pallot, 2006, 2005, 2004, Newberry, 2002).
The results from a meta-review by Jenkins et al. (1998) indicate that, in general, this
may be the case: they find that there is a positive effect of PM-practices on performance
quantity (i.e. the number of units produced or assembled) yet not necessarily on
performance quality (i.e. supervisor rating, accuracy). The previous review of literature
suggests that it is important to distinguish between the effects of PM-initiatives on
quantity and quality performance.
Motivation theories
The focus of this paper is on the managerial purposes for using PM-practices in the
public sector. Two strands of literature (behavioural and economic theory) will be
explored in this paper since the use of solely one research discipline may result in
incomplete, and, in some cases, incorrect conclusions (Merchant et al., 2003). Goal
setting theory provides a behavioral explanation for the hypothesized relation between
clear and measurable goals and performance (see Locke and Latham, 2002, 1990, for a
review)[4]. The focus of agency theory is on determining the optimal incentive contract;
agency theory may provide an economic explanation for the impact of PM-practices on
performance (see Lambert, 2001; Baiman, 1990; and Eisenhardt, 1989, 1985 for a
review). Table I provides a summary of similarities, as well as differences between goal
setting theory and agency theory.
Goal setting theory
The underlying premise of goal setting theory is that conscious goals affect what is
achieved (Latham, 2004). Goal setting theory asserts that people with specific and challenging goals perform better than those with vague goals (such as "do your best"), specific easy goals, or no goals at all. Thus, goal setting theory assumes that there is a
direct relation between the definition of specific and measurable goals and
performance: if managers know what they are aiming for, they are motivated to
[Table I. Main characteristics of goal setting theory and agency theory. Similarities: goals; incentives. Differences: goals; decentralization; performance measurement system; incentives; complexity; important characteristics of public sector employees.]
exert more effort, which increases performance (Locke and Latham, 2002, 1990).
Challenging goals are usually implemented in terms of specific levels of output to be
attained (Locke and Latham, 1990). Review articles (Locke and Latham, 2002; Rodgers
and Hunter, 1991) suggest a positive relation between clear and measurable goals and
performance. However, Locke and Latham (1990) acknowledge that task difficulty
(which is associated with difficult to measure goals) reduces the impact of clear and
measurable goals on performance.
Empirical evidence from the public sector provides somewhat mixed results. For
example, Hyndman and Eden (2001) interviewed the chief executives of nine agencies
in Northern Ireland. All their respondents indicated that a focus on mission, objectives,
targets and performance measures had improved the performance of the agency for all
stakeholders. Respondents also indicated that the poor implementation of the system
(i.e. systems that value efficiency over quality and/or short-term over long-term results,
p. 593), as well as the tendency to overemphasize numbers at the expense of judgment,
could jeopardize performance.
Cavalluzzo and Ittner (2004) examine some of the factors that influence the
development, use, and perceived benefits of results-oriented performance measures in
US government agencies. Their findings indicate that metric difficulties (due, among other things, to ambiguous or difficult-to-capture goals) are negatively associated with perceived current and future benefits from the US government's performance measurement initiatives. This suggests that US agency managers believe that the use
of PM-practices may not improve performance in situations where ambiguity of
objectives is high.
Summarizing, although goal setting theory suggests that clear and measurable
goals should be positively associated with performance, empirical evidence on this
issue in the public sector is inconclusive. The main problem appears to be associated
with the impact of clear and measurable goals on long-term, qualitative performance;
this results in the following hypothesis:
H1. There is (a) a positive relation between clear and measurable goals and
quantity performance, and (b) no relation between clear and measurable goals
and quality performance.
Agency theory
An agency theory relationship exists when one or more individuals (called principals)
hire others (called agents) in order to delegate responsibilities to them (Baiman, 1990).
The rights and responsibilities of the principals and agents are specified in their
mutually agreed-upon employment relationship; agency theory attempts to describe
that relationship using the metaphor of a contract. Agency theory assumes that
individuals are fully rational and have well-defined preferences and beliefs that
conform to the axioms of expected utility theory (Bonner and Sprinkle, 2002).
Furthermore, each individual is presumed to be motivated solely by self-interest
(Baiman, 1990). This self-interest can be described in a utility function that contains
two arguments: wealth (monetary and non-monetary incentives) and leisure.
Incentives can be defined as extrinsic motivators where pay, bonuses or career
perspectives are linked to performance (Bonner et al., 2000). Individuals are presumed
to have preferences for increases in wealth and increases in leisure. Agency theory
therefore posits that individuals will shirk (i.e. exert no effort) on a task unless it
somehow contributes to their own economic wellbeing (Bonner and Sprinkle, 2002).
Incentives that are not contingent on performance generally do not satisfy this
criterion; thus, agency theory suggests that incentives play a fundamental role in
motivation and the control of performance because individuals have utility for
increases in wealth[5].
The public sector has some specific characteristics that make the design of incentive
schemes quite complex (Pollitt, 2006; Anthony and Young, 2003; Burgess and Ratto,
2003; Dixit, 2002, 1997; Dewatripont et al., 1999; Kravchuk and Schack, 1996; Gupta
et al., 1994; Tirole, 1994; Hofstede, 1981). First of all, public sector organizations
generally have multiple stakeholders (principals) with multiple goals. Delivering
incentives is complex in these circumstances; each principal will offer a positive
coefficient on the element(s) (s)he is interested in, and negative coefficients on the other
dimensions (Dixit, 1997). The aggregate marginal incentive coefficient for each
outcome is decreasing with the number of principals (Burgess and Ratto, 2003); as a
result, incentives are weak (Dixit, 1997).
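A stylized way to see this dilution (the notation here is illustrative, not taken directly from Dixit's model): suppose each of n principals attaches a marginal reward β > 0 to the one outcome dimension (s)he cares about and a marginal penalty γ > 0 to every other dimension. The agent's aggregate marginal incentive on any single dimension i is then

```latex
b_i^{\mathrm{agg}} = \underbrace{\beta}_{\text{own principal}} - \underbrace{(n-1)\,\gamma}_{\text{all other principals}},
```

which decreases as the number of principals n grows, matching the observation that the aggregate marginal incentive coefficient falls, and incentives become weak, as principals are added.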
Second, several dimensions of performance are hard to measure. As a result, only those dimensions of performance that are easy to measure may be included in the incentive scheme, which may have undesirable effects on overall
performance (Burgess and Ratto, 2003; Tirole, 1994). Third, agency theory assumes
that an agent gets utility solely from the incentives, and disutility from the effort he
exerts on behalf of the principal. In reality, agents in public sector organizations may
get utility from some aspects of the task itself. Agents in the public sector may be
motivated by the idealistic or ethical purpose served by the agency (intrinsic
motivation), which may result in a match between workers and public sector organizations. This matching may also mean that more risk-averse employees have opted for public sector organizations.
Finally, professionalism may motivate agents in public sector organizations. As a
result, organizations can use so-called low-powered incentives (i.e. incentives are not
based on performance) if the goals of the workers are aligned with those of the
organization (Dixit, 2002). On the other hand, organizations will have a higher
marginal cost of effort if these goals diverge. In addition, public sector professionals
may decouple PM-information and their daily jobs (i.e. not use performance
information for managerial and/or evaluation purposes; see Hoque, 2005; Johnsen,
2005; Hoque et al., 2004).
Anecdotal and empirical evidence on the effectiveness of incentives in public sector
organizations provides mixed results (Bevan and Hood, 2006; Newberry and Pallot,
2005, 2004; Newberry, 2002; Van Thiel and Leeuw, 2002; De Bruijn, 2002). Bevan and
Hood (2006) investigate the use of performance management of public services in
England. They indicate that English health care managers were exposed to greater risk
of being sacked when measured indices (including "star rating" indicators) were used and individual hospitals were "named and shamed". Although there have been
dramatic improvements in reported performance in the English health care sector,
Bevan and Hood (2006) argue that it is impossible to determine whether these
improvements are genuine or whether they are offset by gaming and/or a reduction in
performance dimensions that are not measured.
Similarly, Dranove et al. (2003) investigate the use of health care report cards (public disclosure of patient health outcomes at the level of the individual physician, hospital, or both) on US health care providers. Their results indicate that the use of performance measures and (implicit) incentives ("naming and shaming") results in both selection behavior by providers and improved matching of patients with hospitals. However, while overall this led to higher levels of resource use (more surgery, higher efficiency), it also resulted in worse health outcomes, especially for
sicker patients. Dranove et al. (2003) conclude that, at least in the short run, these report
cards decreased patient and social welfare (i.e. quality aspects of performance).
Newberry (2002) and Newberry and Pallot (2004, 2005, 2006) investigate the consequences of New Zealand's public sector financial management system for New Zealand central government departments. Their results indicate that while
accounting-based financial management incentives have resulted in efficiency gains,
they may not have improved effectiveness over the long run. Newberry and Pallot
(2004) indicate that government departments have experienced resource erosion,
leading to a loss of capability to deliver services over the longer-term, which may
ultimately contribute to a loss of morale and difficulties in attracting and retaining
staff. Similarly, Gray and Jenkins (1993, p. 65) note that the introduction of the
Financial Management Initiative in the British public sector has advanced
cost-awareness yet has also resulted in a shift of attention away from the long-term
interests of policy delivery to the meeting of short-term targets.
Finally, Shirley and Xu (2001) investigate the empirical effects of performance
contracts (contracts signed between the government and state enterprise managers) in
China. Performance contracts have been widely used to reform state-owned enterprises
in China. The results by Shirley and Xu (2001) indicate that, on average, performance
contracts in China did not improve performance and may have made it worse.
However, the effects of China's performance contracts were not uniformly bad: they improved productivity (i.e. quantity performance) in slightly more than half of the participants, while the negative average effect stemmed from the large losses associated with poorly designed contracts. Successful contracts featured sensible targets, stronger incentives, longer terms, managerial bonds, and operated in more competitive industries (Shirley and Xu, 2001).
Summarizing, although agency theory suggests that incentives should be positively
associated with performance, empirical evidence on this issue in the public sector is
inconclusive. Overall, the use of incentives appears associated with an increase in
quantity performance yet a decrease in quality performance. This results in the
following hypothesis:
H2. There is (a) a positive relation between incentives and quantity performance,
and (b) no relation between incentives and quality performance.
Control variables
A number of control variables that may affect the relation between clear and
measurable goals, incentives and performance are recognized[6].
Decentralization of decision rights. Goal setting theory suggests that goals are less
likely to be achieved if there are situational constraints blocking performance than if
there are no such blocks (Locke and Latham, 1990). One of these situational
constraints may be the lack of decision rights; decision rights refer to the authority
and responsibility for making particular decisions (Kaplan and Atkinson, 1998, p. 288).
Agency theory indicates that organizations should balance the benefits from
decentralizing decision rights to lower levels in the organization against control losses
from increased information asymmetries (i.e. potential for gaming; Bushman et al.,
2000). Based on these assertions, decentralization is included as a control variable.
Performance measurement system. Goal setting theory suggests that feedback (i.e.
information from the performance measurement system) may provide the opportunity
to set more demanding goals in the future, provides information regarding better task
strategies, and is a basis for recognition and reward (Locke and Latham, 2002). Agency
theory recognizes that the performance measurement system provides the input for
decision-making, as well as for incentives (Abernethy et al., 2004). Based on these
claims, the performance measurement system is included as a control variable.
Size. Size may be an important determinant of PM-practices as well as performance
(see Chenhall, 2003, for a review). Economic theory argues that PM-practices will be more
effective in small organizations (Dewatripont et al., 1999); however, an increase in size
may positively affect the adoption and use of PM-practices (De Lancer Julnes and Holzer,
2001; Rogers, 1995). Based on these assertions, size is included as a control variable.
Sector. Sector may affect the relation between PM-practices and performance in (at
least) two ways. First of all, there may be (legal) sector characteristics that may influence
the ability to specify clear and measurable goals as well as the design and implementation
of incentive schemes[7]. In addition, a sector proxy may capture task characteristics
(complexity, standardization) that may affect the applicability and effects of PM.
Methodology
Sample
The study is based on survey data collected from managers in public sector
organizations, located in the Netherlands. Surveys allow contact with otherwise
inaccessible respondents at relatively low costs (Cooper and Schindler, 2003). Students
of an executive education program have been used to contact survey participants; this
procedure results in high response rates, yet sample selection is not random (see also
next section). Since the student teams have met with the respondent, potential
problems with respondent identification (i.e. a junior employee rather than a manager
responds to the survey) and respondent understanding of the questions (students have
been instructed to clarify questions if asked, without giving guidance to obtain
desirable results) are mitigated.
The survey was pre-tested by four experts, either (previous) managers of non-profit
organizations or survey experts. A cover letter, explaining the goal, desired respondent
and other issues, accompanied the survey. In addition, to provide an incentive to cooperate, respondents could indicate whether they wanted to receive the results of this study.
A total of 93 useable surveys[8] were returned. The sample is biased towards
government organizations (central government, municipalities), while education and
health care organizations are underrepresented in the sample; therefore, the results
cannot be generalized to all public sector organizations. Similar to Cavalluzzo
and Ittner (2004), the manager of an individual unit, program, project or operation is
the basis of analysis. Respondents include general directors and general managers (48
percent), financial directors and controllers (26 percent), department heads (19 percent)
and others (7 percent, including head of communication, head of education and
learning, and coordinating staff advisor). All respondents had at least some managerial
responsibilities. On average, respondents had been working in their organization for nine years (median: six years) and had been employed in their current function for five years (median: three years). These data suggest that respondents are well informed
on PM practices of their organization.
Measurement of variables
The questionnaire elicited information on the clarity and measurability of the goals of
the organization, the incentive system and the performance of the organization as well
as a number of control variables. Each variable has been measured using multiple
items. Existing measures have been used where possible, sometimes with modification
to fit the present research context. The items used to measure each variable, including
the mean and standard deviations of the responses, their factor loadings and
Cronbach's alpha, are listed in the Appendix. All validity and reliability checks
provide satisfactory results[9].
Dependent variable
Performance was measured by a well-established instrument developed by Van de Ven
and Ferry (1980), designed specifically to assess performance in public sector
organizations. This instrument has also been used by Dunk and Lysons (1997) and
Williams et al. (1990). Each of the items in the instrument is measured on a five-point
Likert scale, ranging from 1 ("far below average") to 5 ("far above average"). The
performance dimensions include:
(1) quantity or amount of work produced;
(2) quality or accuracy of work produced;
(3) number of innovations or new ideas by the unit;
(4) reputation of work excellence;
(5) attainment of unit production or service goals;
(6) efficiency of unit operations; and
(7) morale of unit personnel[10].
Based upon the theoretical distinction between quantity and quality performance as well
as the results of the factor analysis and scale reliability test, the measure for performance is split into two sub-measures: one for quantitative performance (QUANTPERF, capturing
performance dimensions (1), (5) and (6)), and one for qualitative performance
(QUALPERF, capturing performance dimensions (2), (3), (4) and (7)).
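As an illustration of this split, the sketch below computes the two sub-measures and Cronbach's alpha for each scale on simulated Likert responses (the data, seed and hard-coded item assignment are illustrative; they are not the study's survey responses, where the assignment follows the factor analysis):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) matrix of item scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(0)
# Simulated 1-5 Likert responses for the seven Van de Ven and Ferry items
# (a common latent factor plus noise, rounded and clipped to the scale).
latent = rng.normal(size=(93, 1))
items = np.clip(np.rint(3 + latent + 0.8 * rng.normal(size=(93, 7))), 1, 5)

quant_items = items[:, [0, 4, 5]]    # dimensions (1), (5) and (6)
qual_items = items[:, [1, 2, 3, 6]]  # dimensions (2), (3), (4) and (7)

QUANTPERF = quant_items.mean(axis=1)  # quantitative performance score
QUALPERF = qual_items.mean(axis=1)    # qualitative performance score
alpha_quant = cronbach_alpha(quant_items)
alpha_qual = cronbach_alpha(qual_items)
```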
The results for performance indicate that, with the exception of efficiency,
respondents tend to overestimate the performance of their organization. That is, the
sample performance should be average (i.e. 3.0) if the sample is representative and respondents' perceptions reflect reality. The overestimation of performance is
consistent with other studies that have used similar versions of this instrument. For
example, Dunk and Lysons (1997) present an average overall performance of 21.577
with six items in their instrument (an average of 3.60 per item); Williams et al. (1990) also present average scores above 3.58. The average score of 3.36 for the sample in this study is consistent with the overestimation of performance in previous studies.
Independent variables
Clear and measurable goals
The instrument for clear and measurable goals (CLRMEASG) is purposely developed
for this research project. The variable CLRMEASG captures the extent to which
respondents agree with the following statements (where 1 = "completely disagree" and 5 = "completely agree"):
. the mission of my organization is formulated unambiguously;
. the mission of my organization is written on paper and communicated internally and externally;
. the goals of my organization are unambiguously related to the mission of my organization;
. the goals of my organization have been documented very specifically and in detail;
. the sum of the goals to be achieved provides a complete picture of the results that should be achieved by my organization; and
. the performance measures for my organization are unambiguously related to the goals of my organization[11].
Incentives
An adapted version of the instrument by Keating (1997) has been used to characterize the incentive system. Respondents have been asked to indicate the importance, for their total compensation (salary, bonus, career perspectives, etc.), of:
. budget versus actuals;
. quantity measures;
. efficiency measures;
. customer satisfaction measures;
. quality measures; and
. outcome measures.
The variable for incentives has been labelled INCENTVE.
Control variables
Decentralization
The proxy for decentralization is based on the instrument developed by Gordon and
Narayanan (1984) and also used by, among others, Miah and Mia (1996)[12] and
Abernethy et al. (2004). To obtain a valid measure for the subsequent analysis, a
variable based on only the first two items of the instrument (labelled DECACR,
Decentralization of Asset Commitment Rights; see Bouwens and Van Lent, 2003) had
to be created.
Performance measurement system
The performance measurement system instrument (labelled BROADPMS) is based
upon the instrument by Cavalluzzo and Ittner (2004) and captures the extent to which
different types of results-oriented performance measures have been developed for the
activities of the organization. These different types of measures should be regarded as
a supplement (not a substitute) to the financial (budget) measures that are used in most
public sector organizations (Williams et al., 1990).
Size
Respondents have been asked to provide the number of full-time equivalents of their
organization. To correct for skewness, the measure for size (labeled SIZE) is based on
the logarithm for the number of employees in the organization.
Sector
Finally, respondents have been asked to provide the sector in which their organization
operates. The organizations are regrouped to obtain dummy variables for local
government organizations (labeled LOCGOVT) and other government organizations
(labeled OTHGOVT); the omitted reference category consists of central government organizations.
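The size and sector codings can be sketched as follows (the respondent records are hypothetical, not the study's data):

```python
import math

# Hypothetical respondent records (illustrative values only).
respondents = [
    {"fte": 12, "sector": "local government"},
    {"fte": 250, "sector": "central government"},
    {"fte": 3400, "sector": "other government"},
]

coded = []
for r in respondents:
    coded.append({
        # SIZE: logarithm of the number of full-time equivalents, correcting skewness.
        "SIZE": math.log(r["fte"]),
        # Sector dummies; central government serves as the baseline category.
        "LOCGOVT": int(r["sector"] == "local government"),
        "OTHGOVT": int(r["sector"] == "other government"),
    })
```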
Results
Partial least squares regression
Partial least squares (PLS) regression[13] is used to investigate the relations between
clear and measurable goals, incentives and performance. PLS is a components-based
structural equations modelling technique similar to regression, but simultaneously
models the structural paths (i.e. theoretical relationships among latent variables) and
measurement paths (i.e. relationships between a latent variable and its indicators).
PLS places minimal demands on measurement scales, sample size and residual
distributions; in addition, it is considered better suited for explaining complex
relationships (Chin et al., 2003; Chin, 1998). PLS requires the subsequent evaluation of
the measurement model (where the reliability and validity of the measures is assessed)
and the structural model (where the fit between the theoretical model and the data is
assessed; see Hair et al., 1998, pp. 610-15). Hulland (1999) discusses the use of PLS in
strategic management research; similarly, Smith and Langfield-Smith (2004, p. 75)
review the applicability of structural equation modelling in management accounting
research and indicate that the use of PLS provides opportunities to build theory on
relatively small samples[14].
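PLS estimates latent variable scores as weighted composites of their indicators and then estimates the structural paths among those scores. The toy sketch below is a loose illustration of that components-based idea using synthetic data and equal indicator weights (PLS proper estimates the weights iteratively, so this is a simplification, not the algorithm used in the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 93  # number of responding organizations, as in the study

# Synthetic indicators for two latent variables (purely illustrative)
goals_items = rng.normal(size=(n, 4))                            # e.g. CLRMSG items
perf_items = 0.4 * goals_items[:, :3] + rng.normal(size=(n, 3))  # e.g. QUANTPERF items

def composite(items):
    """Standardized, equally weighted composite of standardized indicators."""
    z = (items - items.mean(axis=0)) / items.std(axis=0)
    score = z.mean(axis=1)
    return (score - score.mean()) / score.std()

clrmsg = composite(goals_items)
quantperf = composite(perf_items)

# Structural path: with standardized scores and a single predictor, the
# OLS path coefficient equals the correlation between the two composites
beta = (clrmsg * quantperf).mean()
print(f"illustrative path coefficient: {beta:.2f}")
```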
Measurement model. Table II provides the PLS parameter estimates for the
measurement models. The table suggests that the constructs are measured with
sufficient precision[15], even though they are correlated.
Structural model: test of models. The PLS estimates of the structural model are
reported in Table III. The table provides the standardized βs for each path coefficient,
which are interpreted in the same way as in OLS regression. In addition, the R² for
each variable is calculated; the PLS R² can be used to evaluate the model. As PLS
makes no distributional assumptions, bootstrapping (500 samples with replacement) is
used to evaluate the statistical significance of each path coefficient[16] (Chin, 1998).
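The bootstrap step can be sketched as follows (synthetic data standing in for latent variable scores; only the resampling logic mirrors the paper's procedure of 500 samples with replacement):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for latent variable scores (synthetic, illustrative only):
# does "clear and measurable goals" (x) predict quantity performance (y)?
n = 93                                  # sample size, as in the study
x = rng.normal(size=n)
y = 0.35 * x + rng.normal(size=n)

def path_coef(x, y):
    """Standardized OLS slope (equals the Pearson correlation here)."""
    xs = (x - x.mean()) / x.std()
    ys = (y - y.mean()) / y.std()
    return (xs * ys).mean()

# 500 bootstrap resamples with replacement
boot = np.array([path_coef(x[idx], y[idx])
                 for idx in (rng.integers(0, n, size=n) for _ in range(500))])

estimate = path_coef(x, y)
t_value = estimate / boot.std(ddof=1)   # coefficient over its bootstrap SE
print(f"path = {estimate:.2f}, bootstrap t = {t_value:.2f}")
```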
The results from the analysis indicate that clear and measurable goals are positively
associated with both quantity and quality performance. This result is in accordance
with H1(a), yet conflicts with H1(b). Consistent with goal setting theory, the impact of
clear and measurable goals on qualitative aspects of performance is lower than on
quantitative aspects of performance (β = 0.25 for quality performance versus
β = 0.35 for quantity performance). The finding that incentives are positively
associated with quantity performance (β = 0.25, p < 0.10), yet not related to quality
performance (β = 0.09, p > 0.10), fully confirms the second hypothesis.
The control variables also affect the model. First of all, sector and, to a lesser extent,
size appear to affect the ability to define clear and measurable goals. Local
government organizations have significantly less clear and measurable goals than
central government organizations (path coefficient from local government to clear and
measurable goals is -0.30, p < 0.05). In contrast, other government organizations
appear to have somewhat clearer and more measurable goals than central government
organizations (path coefficient from other government to clear and measurable goals is
0.14, p < 0.15). Local governments also rate their quality performance as
significantly lower than other public sector organizations (path coefficient from local
government to quality performance is -0.35, p < 0.01); their quantitative
performance appears similar to that of other public sector organizations.
Finally, large organizations appear to have somewhat more difficulties with
defining clear and measurable goals than smaller organizations (path coefficient from
Table II. Composite reliability, average variance extracted, Pearson correlations of latent variables and square root of average variance extracted statistics

Variable        Composite reliability  AVE     1         2       3        4        5        6         7      8        9
1. CLRMSG             0.88             0.55    0.74
2. DECACR             0.83             0.72    0.18*     0.85
3. BROADPMS           0.86             0.55    0.49***   0.21**  0.74
4. INCENTVE           0.90             0.61    0.13      0.05    0.22**   0.78
5. SIZE               1.00             1.00   -0.14     -0.12    0.02    -0.18*   1.00
6. LOCGOVT            1.00             1.00   -0.34***  -0.01   -0.18*    0.01   -0.05     1.00
7. OTHGOVT            1.00             1.00    0.28***   0.12    0.08    -0.14    0.09    -0.52***  1.00
8. QUANTPERF          0.81             0.59    0.48***   0.17    0.40***  0.30*** -0.07    -0.17     0.08   0.77
9. QUALPERF           0.80             0.50    0.40***   0.14    0.24**   0.18*   -0.25**  -0.39***  0.14   0.46***  0.71

Notes: *p < 0.10; **p < 0.05; ***p < 0.01 (two-tailed). Diagonal elements are the square roots of the average variance extracted statistics; off-diagonal elements are the correlations between the latent variables calculated in PLS. CLRMSG = clear and measurable goals; DECACR = decentralization of asset commitment rights; BROADPMS = broad performance measurement system; INCENTVE = incentive system; SIZE = size (log fte); LOCGOVT = local government (dummy: 1 = local government organization, 0 = rest); OTHGOVT = other government (dummy: 1 = other government organization, 0 = rest); QUANTPERF = quantity performance; QUALPERF = quality performance.
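The composite reliability and AVE statistics of the kind reported in Table II follow the standard Fornell and Larcker (1981) formulas. A small sketch with hypothetical standardized loadings (not the study's data):

```python
import numpy as np

def composite_reliability(loadings):
    """Composite reliability from standardized loadings:
    (sum lambda)^2 / ((sum lambda)^2 + sum(1 - lambda^2))."""
    lam = np.asarray(loadings, dtype=float)
    num = lam.sum() ** 2
    return num / (num + (1.0 - lam ** 2).sum())

def ave(loadings):
    """Average variance extracted: mean of squared standardized loadings."""
    lam = np.asarray(loadings, dtype=float)
    return (lam ** 2).mean()

# Hypothetical standardized loadings for a three-item construct
loadings = [0.7, 0.8, 0.75]
print(round(composite_reliability(loadings), 2),  # composite reliability
      round(ave(loadings), 2),                    # AVE
      round(np.sqrt(ave(loadings)), 2))           # square root of AVE (table diagonal)
```

For a single-indicator construct with a loading of 1.0 (such as SIZE or the sector dummies), both statistics equal 1.00, as in Table II.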
Table III. Path coefficients, t-statistics and R² for the PLS structural model

Paths                            Path coefficient   t-value
Main hypotheses:
1.a CLRMSG → QUANTPERF                 0.35          2.07**
1.b CLRMSG → QUALPERF                  0.25          3.39***
2.a INCENTVE → QUANTPERF               0.25          2.00**
2.b INCENTVE → QUALPERF                0.09          0.79
Control variables:
DECACR → QUANTPERF                     0.06          0.54
DECACR → QUALPERF                      0.06          0.61
BROADPMS → QUANTPERF                   0.13          1.08
BROADPMS → QUALPERF                    0.04          0.24
LOCGOVT → CLRMSG                      -0.30          2.28**
LOCGOVT → INCENTVE                    -0.11          0.76
LOCGOVT → DECACR                       0.03          0.36
LOCGOVT → BROADPMS                    -0.23          1.36
LOCGOVT → QUANTPERF                   -0.03          0.34
LOCGOVT → QUALPERF                    -0.35          3.42***
OTHGOVT → CLRMSG                       0.14          1.64
OTHGOVT → INCENTVE                    -0.14          1.31
OTHGOVT → DECACR                       0.12          1.00
OTHGOVT → BROADPMS                    -0.03          0.10
OTHGOVT → QUANTPERF                   -0.04          0.31
OTHGOVT → QUALPERF                    -0.09          0.85
SIZE → CLRMSG                         -0.18          1.62
SIZE → INCENTVE                       -0.21          1.60
SIZE → DECACR                         -0.11          1.03
SIZE → BROADPMS                        0.02          0.12
SIZE → QUANTPERF                       0.05          0.28
SIZE → QUALPERF                       -0.20          1.75*

Mult. R² for the dependent variables: CLRMSG = 0.16; INCENTVE = 0.05; DECACR = 0.04; BROADPMS = 0.03; QUANTPERF = 0.31; QUALPERF = 0.32.

Notes: *p < 0.10; **p < 0.05; ***p < 0.01 (two-tailed). Cells report the mean of the bootstrap sub-samples for the path coefficient and the corresponding t-value; N = 93; results are based on bootstrapping with 500 generated samples. CLRMSG = clear and measurable goals; DECACR = decentralization of asset commitment rights; BROADPMS = broad performance measurement system; INCENTVE = incentive system; SIZE = size (log fte); LOCGOVT = local government (dummy: 1 = local government organization, 0 = rest); OTHGOVT = other government (dummy: 1 = other government organization, 0 = rest); QUANTPERF = quantity performance; QUALPERF = quality performance.
size to clear and measurable goals is -0.18, p < 0.15). In addition, it appears that
large organizations rely less on incentive systems (path coefficient from size to
incentives is -0.21, p < 0.15) and have lower quality performance (path coefficient
from size to quality performance is -0.20, p < 0.10). Other paths from the control
variables to the other variables in the model are non-significant. The results for sector
and size suggest that the application of PM is affected by institutional characteristics.
Discussion and conclusions
This study draws upon economic and behavioral theories to empirically investigate
whether performance management practices affect performance in public sector
organizations in the Netherlands. The findings of this study suggest that the definition
of clear and measurable goals is positively associated with both quantity performance
(efficiency, production targets) as well as quality performance (accuracy, innovation,
employee morale). This finding is consistent with goal setting theory (Locke and
Latham, 2002, 1990); the specification of clear and measurable goals appears to provide
focus in operations and improves performance. The use of incentives is positively
associated with quantity performance, yet not related to quality performance. This
finding is consistent with the notion that incentives may not be helpful to stimulate
effort when goals are ambiguous or performance is difficult to measure.
Finally, institutional factors (sector, and, to a lesser extent, size) appear to affect the
use and effectiveness of PM-practices. Local government organizations appear to have
more ambiguous and difficult to measure goals as well as lower performance than
other public sector organizations. Large organizations appear to have more difficulty
defining clear and measurable goals, are less likely to use incentives and have lower
quality performance, which is mostly consistent with the notions of agency theory
(Dewatripont et al., 1999). Overall, it appears that the behavioural consequences of
PM-practices on public sector managers are as important as the economic
consequences.
The results from this study suggest that public sector organizations face a trade-off
between achieving quantitative goals (i.e. short-term performance goals such as
efficiency and quantity produced) and quality goals (i.e. long-term or strategic
performance goals such as quality/accuracy, innovation, and employee morale).
Quality goals are unlikely to be attained by introducing performance measurement and
evaluation systems, yet appear to be achieved by providing inspiring missions and/or
goals. This suggests that PM-practices are useful in order to increase and/or maintain
efficiency in so-called production agencies (Wilson, 1989). However, the
overextension of PM-practices to public service activities where underperformance
on quality dimensions has serious (possibly life-threatening) consequences (for
example, health care) should be avoided.
The literature suggests a number of strategies that organizations can pursue in order
to achieve a balance between quantitative and qualitative performance (De Bruijn, 2002;
Van Thiel and Leeuw, 2002; Simons, 2000; Likierman, 1993). First, organizations can use
a variety of (competing) performance measures for all tasks that have to be carried out.
Few indicators for a limited part of total performance may result in gaming; this effect is
reinforced when indicators do not change over time. In addition, adequate safeguards
against quantity indicators need to be developed; that is, soft aspects of performance
should also be included in the performance measurement system. Second, external
sources (such as important stakeholders, external experts and/or client panels) can be
used to provide information on adequate performance measures that should be
included in the performance measurement system, as well as provide information on
how they view the soft aspects of performance.
Third, performance measurement systems should be devised together with the people
who work with them, in order to create ownership. Such interaction reduces the chances
that performance measures are not understood, that they are inconsistent or unfair, or
that targets are set at unattainable levels. Fourth, performance measurement results
should be interpreted as guidance, not answers. The appropriate initial managerial
response to performance measures may firstly be a discussion of suitable actions.
This research project is among the first large-scale empirical analyses that
investigate whether the use of PM-practices is associated with the performance of
public sector organizations (see also Van Helden, 2005). The findings from this study
are not without limitations. First of all, the results presented here are based on
correlations, not necessarily causal relations. Even though the research model is based
on the (normative) PM-literature (Ittner and Larcker, 2001; Otley, 1999; Kravchuk and
Schack, 1996), it may be argued that the implementation of PM-practices provides the
opportunity to define clear and measurable goals, or that a high level of perceived
performance makes it appear as if the organization has clear and measurable goals.
In addition, it may be argued that performance is better when there are clearer goals
because, when goals are clear, it is easier to measure performance. In that case, the
results for the relation between PM-practices and performance would suffer from
endogeneity due to a misspecification of the model (Ittner and Larcker, 2001, p. 397).
However, the previous argument would suggest that, after recognizing the impact of
task uncertainty and all other objective factors that may explain the use of
PM-practices, all organizations in one sector would use similar PM-practices and have
similar performance. In other words, this would exclude the impact of political
decisions as well as the notion that organizations may learn dynamically (Ittner and
Larcker, 2001, p. 399).
Another limitation is that the results are based on perceptions rather than hard
measures. These perceptions may be inadequate due to inappropriate measures or
inadequate interpretation of the survey instruments (see also Ittner and Larcker, 2001,
and Jacobs, 1997, for comments on this issue). While the use of validated instruments
and the pre-tests on survey experts and public managers should prevent such errors,
additional research may be necessary to (further) validate the results from this study.
Third, the study is based on a cross-sectional survey of public sector organizations
rather than one specific type of public sector organization. Institutional differences in
these specific types of public sector organizations (such as the mandatory initiation of
PM, the collective labor agreements that hardly allow the use of bonus payments or the
legally required diversity of tasks) may explain some of the results in this study.
Another limitation is that factors such as mutual trust amongst stakeholders and
managers or financial stress that may affect PM-practices and/or performance have not
explicitly been considered in the survey.
Finally, the survey instrument that measures performance focuses on managerial
tasks. The survey instrument for performance asks respondents to compare the
performance of their organization to other comparable organizations with regard to
quantities produced, quality produced, the number of innovations and morale of unit
personnel. The survey instrument does not ask respondents whether these tasks are
politically sensitive, or whether political executives are happy with the performance
that is achieved. As a result, the survey does not capture political performance, while
in effect that may be the criterion that is used to judge the performance of the
organization.
In addition to the previous limitations, subsequent research may address the
reasons why some public sector organizations have vague and hard to measure goals
while others have clear and measurable goals. Part of this may be explained by the
processes of the organization (Chenhall, 2003; Ouchi, 1980; Burchell et al., 1980), yet
others may be related to political choices in society or in the organization (Hood and
Peters, 2004; Dranove et al., 2003; Hood, 1998; Hofstede, 1981). The consequences of
these choices (in terms of financial and non-financial performance, yet also in terms of
political consequences and societal effects) may be interesting to address.
The interaction of the performance measurement and reward system with other
aspects of the control system (behavioral controls, social controls) in public sector
organizations appears to be a fruitful avenue for research. It is unclear at this point
whether the introduction of (or focus on) result controls serves as a substitute for, or a
complement to, other forms of control. In other words, does the introduction of
PM-practices diminish behavioral and/or cultural controls, or can behavioral and/or
cultural controls be used to increase the effectiveness of PM-practices?
Notes
1. See Cavalluzzo and Ittner (2004) and Ter Bogt (2003) for other large scale empirical studies
examining the factors and effects of PM in public sector organizations.
2. There may also be economic benefits associated with ambiguous goals: for example, Hart
et al. (1997) indicate that renegotiation of goals may result in excessive costs. Williamson
(1999) suggests that the ongoing relations between politicians and government agencies, in
which information input, decision making and implementation are interrelated and
interdependent, may (economically) prevent the provision of clear and measurable goals.
3. Politically, PM-practices may be used as tools to appease politicians, media and other
stakeholder groups; in other words, the introduction of PM is driven by institutional factors
(legitimization motive; see Hoque, 2005; Hoque et al., 2004; Geiger and Ittner, 1996; Meyer
and Rowan, 1977). PM-practices may provide information that can be used by public sector
managers, politicians and other stakeholders alike to profit from or prevent crises, scandals
and catastrophes (Johnsen, 2005; Ezzamel et al., 2004; Hood, 1998). In other words,
performance measures may act as prices in the political market: they may be used
strategically by individuals for their unique individual, organizational and political purposes
(for example, to challenge the legitimacy of specific political decisions; see Vakkuri and
Meklin, 2006).
4. It may be argued that goal setting theory is associated with individual task performance
rather than organizational performance. However, the effects of goal setting have been
shown to apply to individuals as well as to organizational units (Maiga and Jacobs, 2005;
Rodgers and Hunter, 1991) and entire organizations (Locke and Latham, 2002).
5. Goal setting theory also suggests that incentives can be used to enhance performance.
Incentives can be defined as extrinsic motivators where pay, bonuses or career perspectives
are linked to performance (Bonner et al., 2000; Locke and Latham, 1990). However, Locke
(2004) notes that effective incentive systems are extraordinarily difficult to set up and
maintain.
6. Additional factors that are recognized by goal setting theory include ability, goal
commitment and task complexity (Locke and Latham, 1990); these factors are not explicitly
considered in this study.
7. For example, the central government in the Netherlands has created agencies that execute
(mostly) operational tasks. Municipalities, on the other hand, are legally obliged to perform a
wide variety of tasks; they may not have the opportunity to specialize in certain tasks. In
addition, the Dutch central government requires central government organizations to specify
clear and measurable goals and measure performance, while Dutch municipalities are
encouraged yet not required to introduce PM-practices. Finally, public sector organizations
in the Netherlands have collective labour agreements; only a few of them allow the use of
pay-for-performance incentive schemes.
8. A total of 106 surveys was returned; small organizations (i.e. organizations with fewer than
30 employees) and incomplete surveys are excluded from the analysis.
9. All Cronbach's alphas are above 0.60, the lower limit in exploratory research (see Hair et al.,
1998, p. 118).
10. The Dutch public sector has several initiatives through which public sector organizations get
a feel for their relative performance. For example, the Dutch Ministry of Internal Affairs
provides information on how to set up benchmarking practices; this project is shared with
Dutch municipalities and provinces, amongst others. Second, there are several initiatives for
sharing information amongst different functions in the public sector. For example, mayors
as well as CFOs in Dutch municipalities have an annual congress where they discuss best
management practices, amongst others. Therefore, it is assumed that the respondents can
adequately judge their performance relative to others in their sector.
11. The original survey included a number of additional statements, including: "the goals of my
organization are completely based on quantitative issues (budgets, productivity, quantities)";
"a number of external factors determines whether I realize my goals"; and "the results that my
organization achieves are only measurable in the long run". These items were included to
measure items such as objectivity and responsiveness. However, they did not load on one
factor and/or factor loadings were (far) below 0.45; as a result, they were excluded from the
analysis to obtain a more unidimensional scale. The convergent validity of the
CLRMSG-measure has been investigated by computing the Pearson correlation with another
survey variable. Based on Cavalluzzo and Ittner (2004), the survey also included a number of
questions on the difficulty of developing performance measures, including: the development
of long-term, strategic goals for my organization; integrating different perspectives in the
12.
13.
14.
15.
16.
References
Abernethy, M.A. and Bouwens, J. (2005), Determinants of accounting innovation
implementation, Abacus, Vol. 41 No. 3, pp. 217-40.
Abernethy, M.A., Bouwens, J. and Van Lent, L. (2004), Determinants of control system design in
divisionalized firms, The Accounting Review, Vol. 79 No. 3, pp. 545-70.
Anthony, R.N. and Young, D.W. (2003), Management Control in Non-profit Organizations,
McGraw-Hill/Irwin, New York, NY.
Atkinson, A.A., Waterhouse, J.H. and Wells, R.B. (1997), A stakeholder approach to strategic
performance measurement, Sloan Management Review, Vol. 38 No. 3, pp. 25-37.
Baiman, S. (1990), Agency research in managerial accounting: a second look, Accounting,
Organizations and Society, Vol. 15 No. 4, pp. 341-71.
Bevan, G. and Hood, C. (2006), What's measured is what matters: targets and gaming in the
English public health care system, Public Administration, Vol. 84 No. 3, pp. 517-38.
Bonner, S.E. and Sprinkle, G.B. (2002), The effects of monetary incentives on effort and task
performance: theories, evidence, and a framework for research, Accounting,
Organizations and Society, Vol. 27, pp. 303-45.
Bonner, S.E., Hastie, R., Sprinkle, G.B. and Young, S.M. (2000), A review of the effects of
financial incentives on performance in laboratory tasks: implications for management
accounting, Journal of Management Accounting Research, Vol. 12, pp. 19-64.
Bouwens, J. (2003), The use of value-based measures for assessing managerial performance,
unpublished working paper, Nyenrode University, Breukelen.
Brickley, J., Smith, C. and Zimmerman, J. (1995), The economics of organizational architecture,
Journal of Applied Corporate Finance, Vol. 8 No. 2, pp. 19-31.
Brignall, S. and Modell, S. (2000), An institutional perspective on performance measurement and
management in the new public sector, Management Accounting Research, Vol. 11,
pp. 281-306.
Burchell, S., Clubb, C., Hopwood, A., Hughes, J. and Nahapiet, J. (1980), The roles of accounting
in organizations and society, Accounting, Organizations and Society, Vol. 5 No. 1, pp. 5-25.
Burgess, S. and Ratto, M. (2003), The role of incentives in the public sector: issues and evidence,
Oxford Review of Economic Policy, Vol. 19 No. 2, pp. 285-99.
Bushman, R.M., Indjejikian, R.J. and Penno, M.C. (2000), Private pre-decision information,
performance measure congruity, and the value of delegation, Contemporary Accounting
Research, Vol. 17 No. 4, pp. 561-87.
Carter, N., Klein, R. and Day, P. (1992), How Organisations Measure Success: The Use of
Performance Indicators in Government, Routledge, London/New York.
Cavalluzzo, K.S. and Ittner, C.D. (2004), Implementing performance measurement innovations:
evidence from government, Accounting, Organizations and Society, Vol. 29, pp. 243-67.
Chenhall, R.H. (2003), Management control systems design within its organizational context:
findings from contingency-based research and directions for the future, Accounting,
Organizations and Society, Vol. 28 Nos 2-3, pp. 127-68.
Chenhall, R.H. (2005), Integrative strategic performance measurement systems, strategic
alignment of manufacturing, learning and strategic outcomes: an exploratory study,
Accounting, Organizations and Society, Vol. 30, pp. 395-422.
Chin, W.W. (1998), The partial least squares approach for structural equation modeling, in
Marcoulides, G.A. (Ed.), Modern Methods for Business Research, Laurence Erlbaum
Associates, Hillsdale, NJ, pp. 295-336.
Chin, W.W., Marcolin, B.L. and Newsted, P.R. (2003), A partial least squares latent variable
modeling approach for measuring interaction effects: results from a Monte Carlo
simulation study and an electronic-mail emotion/adoption study, Information Systems
Research, Vol. 14 No. 2, pp. 189-217.
Cool, K., Dierickx, I. and Jemison, D. (1989), Business strategy, market structure and risk-return
relationships: a structural approach, Strategic Management Journal, Vol. 10, pp. 507-22.
Cooper, D.R. and Schindler, P.S. (2003), Business Research Methods, 8th International Edition,
McGraw-Hill, New York, NY.
De Bruijn, H. (2002), Performance measurement in the public sector: strategies to cope with the
risks of performance measurement, International Journal of Public Sector Management,
Vol. 15 Nos 6/7, pp. 578-94.
De Lancer Julnes, P. and Holzer, M. (2001), Promoting the utilization of performance measures in
public organizations: an empirical study of factors affecting adoption and
implementation, Public Administration Review, Vol. 61 No. 6, pp. 693-708.
Dewatripont, M., Jewitt, I. and Tirole, J. (1999), The economics of career concerns, part II:
application to missions and accountability of government agencies, Review of Economic
Studies, Vol. 66, pp. 199-217.
Dixit, A. (1997), Power of incentives in private versus public organizations, AEA Papers and
Proceedings, May, pp. 378-82.
Dixit, A. (2002), Incentives and organizations in the public sector: an interpretive review, The
Journal of Human Resources, Vol. 37 No. 4, pp. 696-718.
Dranove, D., Kessler, D., McClellan, M. and Satterthwaite, M. (2003), Is more information better?
The effects of report cards on health care providers, Journal of Political Economy, Vol. 111
No. 3, pp. 555-88.
Dunk, A.S. and Lysons, A.F. (1997), An analysis of departmental effectiveness, participative
budgetary control processes and environmental dimensionality within the competing
values framework: a public sector study, Financial Accountability and Management,
Vol. 13 No. 1, pp. 1-15.
Eisenhardt, K.M. (1985), Control: organizational and economic approaches, Management
Science, Vol. 31 No. 2, pp. 134-49.
Eisenhardt, K.M. (1989), Agency theory: an assessment and review, Academy of Management
Review, Vol. 14 No. 1, pp. 57-74.
Ezzamel, M., Hyndman, N., Johnsen, A., Lapsley, I. and Pallot, J. (2004), Has devolution
increased democratic accountability?, Public Money and Management, Vol. 24 No. 3,
pp. 145-52.
Fornell, C. and Larcker, D.F. (1981), Evaluating structural equation models with unobservable
variables and measurement error, Journal of Marketing Research, Vol. 18, pp. 39-50.
Geiger, D.R. and Ittner, C.D. (1996), The influence of funding source and legislative requirements
on government cost accounting practices, Accounting, Organizations and Society, Vol. 21,
pp. 549-67.
Gordon, L.A. and Narayanan, V.K. (1984), Management accounting systems, perceived
environmental uncertainty and organization structure: an empirical investigation,
Accounting, Organizations and Society, Vol. 9 No. 1, pp. 33-47.
Gray, A. and Jenkins, B. (1993), Codes of accountability in the new public sector, Accounting,
Auditing and Accountability Journal, Vol. 6 No. 3, pp. 52-67.
Gray, A. and Jenkins, B. (1995), From public administration to public management: reassessing
a revolution?, Public Administration, Vol. 73 No. 1, pp. 75-99.
Gupta, P.P., Dirsmith, M.W. and Fogarty, T.J. (1994), Coordination and control in a government
agency: contingency and institutional theory perspectives on GAO audits, Administrative
Science Quarterly, Vol. 39, pp. 264-84.
Guthrie, J., Olson, O. and Humphrey, C. (1999), Debating developments in new public financial
management: the limits of global theorising and some new ways forward, Financial
Accountability and Management, Vol. 15 No. 3, pp. 209-28.
Hair, J.F. Jr, Anderson, R.E., Tatham, R.L. and Black, W.C. (1998), Multivariate Data Analysis, 5th
ed., Prentice-Hall International, London.
Hart, O., Shleifer, A. and Vishny, R.W. (1997), The proper scope of government: theory and an
application to prisons, The Quarterly Journal of Economics, Vol. 112 No. 4, pp. 1127-61.
Hartmann, F. (2005), The effects of tolerance for ambiguity and uncertainty on the
appropriateness of accounting performance measures, Abacus, Vol. 41 No. 3, pp. 241-66.
Heinrich, C. (2002), Outcomes-based performance management in the public sector: implications
for government accountability and effectiveness, Public Administration Review, Vol. 62
No. 6, pp. 712-25.
Henley, D., Likierman, A., Perrin, J., Evans, M., Lapsley, I. and Whiteoak, J. (1992), Public Sector
Accounting and Financial Control, Chapman & Hall, London.
Hofstede, G. (1981), Management control of public and not-for-profit activities, Accounting,
Organizations and Society, Vol. 6, pp. 193-211.
Hood, C. (1991), A public management for all seasons?, Public Administration, Vol. 69 No. 1,
pp. 3-19.
Hood, C. (1995), The New Public Management in the 1980s: variations on a theme, Accounting,
Organizations and Society, Vol. 20, pp. 93-109.
Hood, C. (1998), The Art of the State. Culture, Rhetoric, and Public Management, Clarendon Press,
London.
Hood, C. and Peters, G. (2004), The middle aging of new public management: into the age of
paradox?, Journal of Public Administration Research and Theory, Vol. 14 No. 3, pp. 267-82.
Hoque, Z. (2005), Securing institutional legitimacy or organizational effectiveness? A case
examining the impact of public sector reform initiatives in an Australian local authority,
International Journal of Public Sector Management, Vol. 18 Nos 4/5, pp. 367-82.
Hoque, Z., Arends, S. and Alexander, R. (2004), Policing the police service: a case study of the
rise of new public management within an Australian police service, Accounting,
Auditing & Accountability Journal, Vol. 17 No. 1, pp. 59-84.
Hulland, J. (1999), Use of partial least squares (PLS) in strategic management research: a review
of four recent studies, Strategic Management Journal, Vol. 20, pp. 195-204.
Hyndman, N. and Eden, R. (2000), A study of the coordination of mission, objectives and targets
in UK executive agencies, Management Accounting Research, Vol. 11, pp. 175-91.
Hyndman, N. and Eden, R. (2001), Rational management, performance targets and executive
agencies: views from agency chief executives in Northern Ireland, Public Administration,
Vol. 79 No. 3, pp. 579-98.
Ittner, C.D. and Larcker, D.F. (2001), Assessing empirical research in managerial accounting: a
value-based management perspective, Journal of Accounting and Economics, Vol. 32,
pp. 349-410.
Jacobs, K. (1997), The decentralization debate and accounting controls in the New Zealand
public sector, Financial Accountability and Management, Vol. 13 No. 4, pp. 331-43.
Jenkins, G.D. Jr, Mitra, A., Gupta, N. and Shaw, J.D. (1998), Are financial incentives related to
performance? A meta-analytic review of empirical research, Journal of Applied
Psychology, Vol. 83 No. 5, pp. 777-87.
Johnsen, A. (2005), What does 25 years of experience tell us about the state of performance
measurement in public policy and management?, Public Money and Management, Vol. 25
No. 1, pp. 9-17.
Kaplan, R.S. (2001), Strategic performance measurement and management in non-profit
organizations, Non-Profit Management and Leadership, Vol. 11 No. 3, pp. 353-70.
Kaplan, R.S. and Atkinson, A.A. (1998), Advanced Management Accounting, 3rd ed.,
Prentice-Hall, Upper Saddle River, NJ.
Keating, S.A. (1997), Determinants of divisional performance evaluation practices, Journal of
Accounting and Economics, Vol. 24, pp. 243-73.
Kloot, L. and Martin, J. (2000), Strategic performance management: a balanced approach to
performance management issues in local government, Management Accounting Research,
Vol. 11 No. 2, pp. 231-51.
Kravchuk, R.S. and Schack, R.W. (1996), Designing effective performance measurement
systems under the government performance and results act of 1993, Public
Administration Review, Vol. 56 No. 4, pp. 348-58.
Lambert, R. (2001), Contracting theory and accounting, Journal of Accounting and Economics,
Vol. 32, pp. 3-87.
Lapsley, I. (1999), Accounting and the new public management: instruments of substantive
efficiency or a rationalising modernity?, Financial Accountability and Management,
Vol. 15 Nos 3/4, pp. 201-7.
Pallot, J. (1999), Beyond NPM: developing strategic capacity, Financial Accountability and
Management, Vol. 15 Nos 3/4, pp. 419-27.
Pallot, J. (2001), A decade in review: New Zealand's experience with resource accounting and
budgeting, Financial Accountability and Management, Vol. 17 No. 4, pp. 383-401.
Pollitt, C. (1986), Beyond the managerial model: the case for broadening performance
assessment in government and the public services, Financial Accountability and
Management, Vol. 2 No. 3, pp. 155-70.
Pollitt, C. (2006), Performance management in practice: a comparative study of executive
agencies, Journal of Public Administration Research and Theory, Vol. 16 No. 1, pp. 25-44.
Propper, C. and Wilson, D. (2003), The use and usefulness of performance measures in the public
sector, Oxford Review of Economic Policy, Vol. 19 No. 2, pp. 250-65.
Rangan, V.K. (2004), Lofty missions, down-to-earth plans, Harvard Business Review, Vol. 82
No. 3, pp. 112-9.
Rodgers, R. and Hunter, J.E. (1991), Impact of management by objectives on organizational
productivity, Journal of Applied Psychology, Vol. 76 No. 2, pp. 322-36.
Rogers, E.M. (1995), Diffusion of Innovations, The Free Press, New York, NY.
Shirley, M.M. and Xu, L.C. (2001), Empirical effects of performance contracts: evidence from
China, Journal of Law Economics and Organization, Vol. 17 No. 1, pp. 168-200.
Simons, R. (2000), Performance Measurement and Control Systems for Implementing Strategy: Text and Cases, Prentice-Hall, Upper Saddle River, NJ.
Smith, D. and Langfield-Smith, K. (2004), Structural equation modeling in management
accounting research: critical analysis and opportunities, Journal of Accounting Literature,
Vol. 23, pp. 49-86.
Smith, P. (1993), Outcome-related performance indicators and organizational control in the
public sector, British Journal of Management, Vol. 4, pp. 135-51.
Smith, P. (1995), On the unintended consequences of publishing performance data in the public
sector, International Journal of Public Administration, Vol. 18, pp. 277-310.
Ter Bogt, H.J. (2003), Performance evaluation styles in governmental organizations: how do professional managers facilitate politicians' work?, Management Accounting Research, Vol. 14, pp. 311-32.
Tirole, J. (1994), The internal organization of government, Oxford Economic Papers, Vol. 46 No. 1, pp. 1-29.
Vakkuri, J. and Meklin, P. (2006), Ambiguity in performance measurement: a theoretical
approach to organizational uses of performance measurement, Financial Accountability
and Management, Vol. 22 No. 3, pp. 235-50.
Van de Ven, A.H. and Ferry, D.L. (1980), Measuring and Assessing Organizations, Wiley, New
York, NY.
Van Helden, G.J. (2005), Researching public sector transformation: the role of management
accounting, Financial Accountability and Management, Vol. 21 No. 1, pp. 99-133.
Van Thiel, S. and Leeuw, F. (2002), The performance paradox in the public sector, Public
Performance and Management Review, Vol. 25 No. 3, pp. 267-81.
Williams, J.J., Macintosh, N.B. and Moore, J.C. (1990), Budget-related behavior in public sector
organizations: some empirical evidence, Accounting, Organizations and Society, Vol. 15,
pp. 221-46.
Williamson, O.E. (1999), Public and private bureaucracies: a transaction cost economics
perspective, Journal of Law, Economics and Organization, Vol. 15 No. 1, pp. 306-42.
Wilson, J.Q. (1989), Bureaucracy: What Government Agencies Do and Why They Do It, Basic
Books, New York, NY.
Appendix. Survey
See Table AI.
Centralization/decentralization
Please compare your influence with the influence of your superior on the following decisions:
• strategic decisions (e.g. development of new products or services; divestment of specific products and/or services; strategy of your unit);
• investment decisions (e.g. buying a new building or new property; renovating buildings, roads or other property; buying and implementing new information systems);
• marketing decisions (e.g. determining prices/tariff structures of products and/or services; promotional campaigns);
• decisions regarding internal processes (determining project budgets; setting priorities; contracts with external suppliers); and
• decisions regarding organizational structures (changing reporting structures; hiring/firing personnel; compensation, competence profiles and career paths of personnel; changing committee structures).
If you and/or others in your unit make decisions without having to report this to your superior, you and/or others in your unit are considered to have all the influence. See Tables AII-AV.
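The scoring rule above maps each respondent's influence ratings into a single (de)centralization score. A minimal sketch of that aggregation, under the assumption that each decision area is rated on a five-point scale (1 = the superior has all the influence, 5 = the unit has all the influence) and that the ratings are averaged into the construct score (the paper's exact aggregation is not shown in this excerpt, and all names below are illustrative):

```python
from statistics import mean

# The five decision areas from the survey instrument.
DECISION_AREAS = [
    "strategic",
    "investment",
    "marketing",
    "internal_processes",
    "organizational_structure",
]

def decentralization_score(responses: dict) -> float:
    """Average the five influence ratings into one construct score.

    Assumes a 1-5 scale where higher values mean more influence for
    the unit relative to its superior (an illustrative assumption).
    """
    return mean(responses[area] for area in DECISION_AREAS)

# Hypothetical respondent: moderate influence overall.
respondent = {
    "strategic": 2,
    "investment": 3,
    "marketing": 4,
    "internal_processes": 5,
    "organizational_structure": 3,
}
print(decentralization_score(respondent))  # 3.4
```

A respondent who decides everything without reporting to a superior would rate 5 on every area and score 5.0, matching the "all the influence" anchor described above.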
[Tables AI-AV report means, standard deviations and factor loadings for the survey constructs (Table AII covers the performance management practices; Table AV covers QUANTPERF and QUALPERF); the table bodies did not survive text extraction, so only the captions and notes are reproduced here.]
Notes (Table AV): Quality performance measures; extraction method: principal component analysis; rotation method: Varimax with Kaiser normalization; rotation converged in three iterations
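The loadings in these tables come from a principal component analysis with Varimax rotation. A self-contained numpy sketch of that procedure, run on synthetic data since the survey responses are not available (the data, the two-component choice, and the omission of Kaiser normalization are all illustrative assumptions):

```python
import numpy as np

def varimax(loadings, max_iter=100, tol=1e-6):
    """Varimax rotation of a loading matrix (Kaiser normalization omitted)."""
    p, k = loadings.shape
    R = np.eye(k)          # accumulated orthogonal rotation
    d = 0.0
    for _ in range(max_iter):
        L = loadings @ R
        # SVD of the gradient of the varimax criterion.
        u, s, vt = np.linalg.svd(
            loadings.T @ (L ** 3 - L @ np.diag((L ** 2).sum(axis=0)) / p)
        )
        R = u @ vt
        d_new = s.sum()
        if d_new < d * (1 + tol):   # criterion stopped improving
            break
        d = d_new
    return loadings @ R, R

# Synthetic illustration: 200 "respondents", 6 "survey items",
# with two induced common factors.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))
X[:, :3] += rng.normal(size=(200, 1))
X[:, 3:] += rng.normal(size=(200, 1))
Z = (X - X.mean(axis=0)) / X.std(axis=0)   # standardize items

# Principal component loadings via SVD of the standardized data.
u, s, vt = np.linalg.svd(Z, full_matrices=False)
loadings = (vt[:2].T * s[:2]) / np.sqrt(Z.shape[0])  # first two components

rotated, R = varimax(loadings)
```

Because the rotation matrix is orthogonal, the rotated solution preserves each item's communality; Varimax only redistributes the loadings across components to simplify interpretation, which is why a table can report one dominant loading per item.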
Corresponding author
Frank H.M. Verbeeten can be contacted at: fverbeeten@rsm.nl