
International Journal of Public Sector Management, Vol. 10 No. 3, 1997, pp. 154-164. MCB University Press, 0951-3558

Performance information and programme evaluation in the Australian public sector


James Guthrie
Macquarie Graduate School of Management, Sydney, Australia, and

Linda English
Sydney University, Sydney, Australia
Introduction

Since the 1970s the Australian public sector (APS) has arguably undergone some of the most significant and far-reaching changes in its entire history. The fundamental changes in the organization and administration of the APS have been a direct response to philosophical drives for a more efficient, effective and accountable public sector (Shand, 1995). Central to these changes have been performance measurement and programme evaluation. Bold statements have been made that Australia leads the world in the quality of performance information at the Commonwealth level (Bartos, 1993, p. 1). Bartos asserts that Australia may be some five to ten years ahead of the US experience of performance measures, and that central to the entire process has been managing for results.

Recent Australian public sector reforms

The push for performance information and evaluation

The push for performance information can be traced back to the Royal Commission on Australian Government Administration (RCAGA, also known as the Coombs review) inquiry and report (RCAGA, 1976), which called for government administration to adapt significantly to deal responsibly, effectively and efficiently with the tasks that confront it. Issues of efficiency, effectiveness and performance of public administration were raised again in the 1983 Reid Review of Commonwealth Administration (Australia, 1983) and confirmed to be of critical importance to the Australian public by the 1983-84 study of the APS (Australian Public Service Board and Finance, 1984), which resulted in the 1984 introduction of the Financial Management Improvement Programme (FMIP). The FMIP was an umbrella mechanism to aid the implementation of managerial reforms throughout the APS (Department of Finance, 1994) by:

• streamlining the budget formulation process and simplifying and updating the rules regulating public financial management;

• improving the system by which departments and agencies make decisions, manage and evaluate achievements; and
• enhancing public accountability and scrutiny.

As summarized in the 1990 House of Representatives Standing Committee on Finance and Public Administration review of the progress and results of the FMIP (HRSCFPA, 1990):
the basic rationale of the FMIP might be aptly summed up as managing for results. In emphasising the results, the aim has been to focus management attention on the purposes of programmes and the cost-effective achievement of outcomes rather than simply on inputs and processes.


The reform initiatives introduced in the APS since the 1980s are underpinned by a public sector management philosophy which has come to be referred to as managerialism in the public sector literature (Guthrie et al., 1990; Guthrie and Humphrey, 1996; Parker and Guthrie, 1993). Managerialism includes the following five characteristics (Beringer et al., 1986; Davis et al., 1989):

(1) clear, consistent objectives detailed in corporate plans, performance agreements and individual programmes;

(2) greater managerial autonomy through delegation of ministerial authority, devolution of managerial authority to lower levels of the organization, and management training;

(3) performance evaluation through the development of performance indicators at the organizational and individual programme levels;

(4) rewards and sanctions for senior public service managers; and

(5) competitive neutrality for commercial authorities.

Underlying these changes is a requirement that performance is measurable and reported via indicators, and that programmes are evaluated.

Financial and non-financial performance information

The difference between most public sector and private sector activities is that the mechanism for the distribution of goods and services does not follow the market model, and profit is not a measure of performance. Financial information presented in an operating statement and balance sheet does not indicate the extent to which government entities have achieved their objectives. In the private sector, objectives are measured essentially in terms of profit, market share and return on equity and assets, and are mostly reported in financial terms; they constitute the benchmark against which a business is measured. In the public sector, financial reports are also prepared. However, given that the objectives of government programmes are frequently stated in non-financial terms, and given the nature and complex array of government activities, conventional financial reporting mechanisms may not easily capture performance. Since effectiveness information is crucial for managing these activities, other forms of measurement and reporting are needed. Without such information, programme managers and taxpayers cannot know whether programmes achieve what was intended and what, if anything, has changed as a result of implementing the programme. Programme evaluation is a means by which information and analysis can be provided to address these issues. Programme evaluation is distinguished from monitoring and the continuous collection of performance information in that it is a periodic examination of programmes[1].

Definition and specification of terms

Programme evaluation has been defined (Department of Finance, 1994, p. 40) as:
the systematic, periodic assessment of a programme, or part of it, to assist managers and other decision makers to:
• assess the continued relevance and priority of programme objectives in the light of current circumstances, including government policy changes (appropriateness);
• test whether the programme outcomes achieve stated objectives (effectiveness); and
• ascertain whether there are better ways of achieving these objectives (efficiency).
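These criteria are often operationalized as simple indicators. The following is a minimal, hypothetical sketch only, not a calculation prescribed by the Department of Finance: the programme, figures and function names are invented for illustration. It shows one way the effectiveness and efficiency questions might be quantified, while appropriateness remains largely a qualitative judgement.

```python
# Hypothetical illustration only: invented figures for an imaginary job-placement
# programme, showing one way the effectiveness and efficiency questions above
# could be expressed as simple indicators. Appropriateness (continued relevance
# of objectives) is a qualitative judgement and is not computed here.

def effectiveness(outcomes_achieved: float, outcomes_targeted: float) -> float:
    """Share of the stated objective actually achieved by programme outcomes."""
    return outcomes_achieved / outcomes_targeted

def cost_per_outcome(total_cost: float, outcomes_achieved: float) -> float:
    """Efficiency proxy: resources consumed per unit of outcome delivered."""
    return total_cost / outcomes_achieved

placed = 8_200          # participants placed in work (hypothetical)
target = 10_000         # placement target in the programme objective (hypothetical)
cost = 41_000_000       # total programme cost in $A (hypothetical)

print(f"Effectiveness: {effectiveness(placed, target):.0%} of target achieved")
print(f"Efficiency:    $A{cost_per_outcome(cost, placed):,.0f} per placement")
```

Comparing the cost-per-outcome figure across alternative delivery options corresponds to the "better ways of achieving these objectives" test in the definition above.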

In December 1988, the Commonwealth Government endorsed an evaluation strategy which had three main objectives (Amies, 1994):

(1) to provide a better information base to assist managers in improving programme performance;

(2) to assist government decision making and prioritization, particularly in the budget process; and

(3) to contribute to improved accountability to parliament and the public.

The main elements of the strategy were:
• better integration of major programme evaluation activities within the central budgetary process through the preparation of portfolio evaluation plans (PEPs);
• development, by departments and agencies, of agency evaluation plans (AEPs) for the systematic evaluation of all their programmes over a three- to five-year period;
• a requirement that the results of major evaluations should normally be publicly released;
• a requirement that new policy proposals include an evaluation plan; and
• measures to improve evaluation skills throughout the APS.

The reforms of the 1980s led to a fundamental shift away from a preoccupation with programme inputs and processes to an emphasis on outputs, outcomes and results. These reforms were directed at making the APS more efficient, effective and accountable for performance, and at improving the responsiveness of the public sector to the needs of its clients.

Links between performance information and evaluation in the federal budget

Performance information is identified as evidence which is collected and used systematically to judge the performance of a programme; performance measurement as the assessment of the extent to which, and the efficiency with which, objectives are being achieved (Department of Finance, 1994, p. 53). Performance measurement ranges from quantitative data about specific variables to verifiable descriptive, narrative or anecdotal (qualitative) information. Performance information is said to assist programme management through its input into:
• planning (choice between alternative strategies; determination of priorities and changes in policy direction);
• budgeting (justification of the use of resources, development of cost targets);
• implementation (actual results checked against budgets and goals, guidance for corrective action); and
• evaluation (assistance in determining programme effectiveness, examination of other ways to meet objectives, and assistance in determining better ways to implement programmes)[2].

Prime responsibility for implementing the evaluation strategy lies with individual portfolios, reflecting the decentralized philosophy underlying managerialism. The Department of Finance (DOF) monitors portfolios centrally to ensure that evaluation priorities are consistent with government priorities and policies. As part of its external audit role, the Australian National Audit Office (ANAO) has a brief to review the operation and effectiveness of evaluations and has reported to parliament on the use of evaluation in the preparation of the federal budget.

In 1994, outlays of the Commonwealth Government totalled $A116 billion. The APS employed 150,000 staff in 18 portfolio departments and a large number of statutory authorities, with 75 per cent of staff located outside Canberra (Amies, 1994). In its study of the use of evaluation in the 1993-94 budget, the DOF (Department of Finance, 1994) found:
• a dramatic increase in the proportion of proposals influenced by evaluation, up from 31 per cent in 1992-93 to 52 per cent in 1993-94;
• a substantial reliance on evaluation results in new policy proposals, with an estimated 43 per cent of the $A1,218 million of new policy proposals influenced by evaluation; and
• 59 per cent of savings options influenced by evaluations.
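For a sense of scale, the dollar magnitude implied by the second of these findings can be recovered directly from the figures quoted; the result below is an approximation derived here, not a figure reported by the DOF.

```python
# Rough magnitude implied by the DOF figures quoted above (1993-94 budget).
new_policy_proposals_m = 1_218      # total new policy proposals, $A million
share_influenced = 0.43             # estimated share influenced by evaluation

influenced_m = new_policy_proposals_m * share_influenced
print(f"New policy proposals influenced by evaluation: about $A{influenced_m:,.0f} million")
# -> about $A524 million of the $A1,218 million total
```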


In the 1993-94 budget, the government announced a series of 46 major programme and policy reviews as an extension of the normal programme evaluation activity. The reviews, designed to reveal a whole-of-government perspective, relate to aspects of programmes which collectively cover just over 50 per cent of total Commonwealth outlays ($A58 billion), mostly in the major spending portfolios which have been subject to rapid outlay growth in recent years. The reviews focused mainly on programme appropriateness and effectiveness (value for money), and on target benchmarking, commercialization and cost recovery.

Recent reviews of the APS evaluation process

Internal reviews

Since 1994, the DOF has undertaken a number of studies of the implementation of the evaluation process, which are reported by Amies (1994). Recent initiatives in strengthening the evaluation culture at portfolio level include the introduction of:
• high-level evaluation oversight committees;
• evaluation co-ordination units;
• meta-evaluation of portfolio evaluation outcomes; and
• evaluation of policy advising.

A recent in-depth study of the implementation of the evaluation process within the Department of Employment, Education and Training (DEET), conducted jointly by the DEET and the DOF (1995), found considerable variation in evaluation quality and scope across the portfolio, a need to support evaluation policy development, and a need to provide for the evaluation of programmes funded by the Commonwealth but delivered by state governments or tertiary institutions. Overall, the DOF concluded that the following improvements to the Commonwealth Government's evaluation strategy were needed: consolidation of the progress made in the planning and practice of evaluation; and encouragement of better use of evaluation findings.

External reviews

The ANAO has performed a series of independent efficiency audits of the management of programme evaluation across the APS. Four audit reports stemming from this activity investigate different aspects of the APS evaluation programme, including:
• implementation of evaluation policies;
• use of evaluation information in the budget process;
• the role of evaluation in accountability to parliament and the people; and
• the impact of evaluation.

Each of these four aspects is explored briefly below.

Implementation of evaluation policies. The first efficiency audit of the federal government's programme evaluation policies (ANAO, 1991) examined the implementation of the government's evaluation strategies and considered administrative practices supporting the achievement of objectives in respect of programme evaluation, with particular emphasis on the development of the evaluation strategy, its central administration by the DOF, and the response of six representative portfolios. The ANAO drew the following conclusions regarding:

(1) Implementation: there had been a major increase in evaluation activity since 1987, with increasing attention paid to systematic evaluation planning by management and to collaboration between the DOF and agencies on evaluation; progress, however, was unsatisfactory owing to a deep-seated lack of understanding and acceptance of the intentions underlying the government's strategy, and substantial variation in the progress achieved across agencies; portfolios with a policy or regulatory orientation had not begun to integrate evaluation with programme management, despite the overwhelming impacts these politically sensitive policy-setting and regulatory programmes can have; resources used for, and benefits received from, evaluations had not been uniform; and a potential constraint on agency evaluation was a lack of people with the requisite skills.

(2) The role of the DOF: there was no widespread acceptance by portfolios of the DOF's central agency role and its implications for the administration of programme evaluations; the DOF had ignored programmes which had significant impacts in non-financial ways, such as the administration of regulatory, co-ordination or policy-advising programmes; and the DOF's involvement with evaluation had resulted in an overall improvement in programme evaluation, to which it devoted significant resources.

Specific comments in support of these conclusions included evidence of:
• a lack of understanding of the relationship between accountability and evaluation at programme management level;
• a lack of support for, and involvement in, the evaluation process by top management;
• a lower priority assigned to the evaluation process than required by the government;
• a lack of evaluation strategy at portfolio level;
• a need for the government to define evaluation more precisely;
• difficulties for portfolios in interpreting DOF strategies and requirements;
• a need to involve the DOF as a responsible central agency in the management of the overall evaluation process;
• quality assurance mechanisms applied to the evaluation process;
• difficulties in the evaluation of non-financial programmes; and
• reporting requirements that were not well adhered to.

Use of evaluation information in the budget process. The second programme evaluation audit examined the use of evaluation results in the preparation of the Commonwealth Budget (ANAO, 1992a). It concluded that the adequacy of evaluation performance measures fell well short of the objectives set by the government under its evaluation strategy and did not match the important role of the Budget in government policy-setting (ANAO, 1992a, p. 3). In general, the ANAO concluded that, in setting up their evaluation programmes, agencies had not considered the requirements of the government in the budget review process in its consideration of major expenditure items or programmes, and that there were problems in reporting useful findings in good time for budget deliberations. The ANAO recommended a need for:
• improved strategic relevance in evaluation activity;
• the DOF to foster a sense of shared responsibility for the provision of performance information in the central budgetary process; and
• more effective support for the DOF from the Department of the Prime Minister and Cabinet in its efforts to upgrade the level of programme performance information supplied.

The role of evaluation in accountability to parliament and the people. The third efficiency audit of the evaluation process had its genesis in providing assurance to the parliament that portfolio evaluation procedures are effective, independent and unbiased, and that the subsequent reports are accurate and balanced. The ANAO examined programme evaluation in two departments (ANAO, 1992b) and identified the potential problems of internally conducted evaluations to be: the ability of management to steer evaluation away from areas where it is aware of deficiencies, resulting in a failure to reveal the misdirection of resources; and a lack of independence in perspective not being counterbalanced effectively by the involvement of external parties in the evaluation process. It concluded that there was scope for improvements in the evaluation activity of both departments (ANAO, 1992b, p. 2), particularly in ensuring that formal evaluation plans were a comprehensive control over all evaluation activity, including effective quality control and management. Despite DOF monitoring, each department's evaluations varied in quality, suggesting a need for guidelines to improve monitoring arrangements and ensure greater consistency in monitoring standards. Management of evaluation resources required attention, with the costs and benefits of evaluations still not being addressed, and too few staff with appropriate evaluation skills. Willingness to involve DOF personnel in evaluations varied.

Impact of evaluation. The fourth ANAO efficiency audit focused on whether tangible action had resulted from evaluations previously undertaken within one portfolio, the Industry, Technology and Regional Development portfolio (ANAO, 1993). The major finding was that the portfolio had reached an appropriate stage for implementing evaluation. However, the ANAO noted that in several programme areas subject to evaluation there were inadequate arrangements for monitoring and implementing the evaluation recommendations (ANAO, 1993, p. xi). Calculation of the resources used in evaluation continued to pose problems. In cases where evaluations were found to have had successful impacts resulting in significant visible change, the ANAO observed the following characteristics:
• top management closely supervised the evaluation;
• strong commercial objectives were emerging;
• effective consultative arrangements and/or independent guidance and direction were adopted;
• the programme area initiated the evaluation; and
• arrangements for monitoring the implementation of the evaluation recommendations were in place.


The ANAO efficiency audit reports on evaluation made several recurring recommendations (ANAO, 1992b, p. xii):
• formal development of the DOF's role in evaluation monitoring and co-ordination, beyond the role of encouragement it had adopted earlier;
• better identification, measurement and recording of the costs and benefits of evaluation;
• a need to ensure adequate programme coverage over a cyclical period;
• better reporting to the executive and parliament of the results and outcomes of the evaluation strategies; and
• greater use of specialized government agencies in evaluation, to add a perspective of independence and objectivity to programme evaluation.

The main thrust of many of the above recommendations appears to be the promotion of the central agency (priority setting and resource allocation) and parliamentary (accountability) roles for programme evaluation. In conclusion, the external reviews argued that locating programme evaluation units in departments and agencies gives more immediate focus to the needs and interests of departmental managers, rather than to broader government or public roles for programme evaluation.


Summary and key issues

In drawing together the observations concerning these major internal and external reviews of programme evaluation, the following summary observations are offered:
• there were high expectations and great potential for these reforms, which have been only partly fulfilled;
• the present system is internally focused, leading to a narrow role for evaluation and a lack of credibility because of the independence question;
• the emphasis is on micro-management issues, rather than on the fundamental question of programme effectiveness in meeting designated objectives and national objectives;
• present systems associated with the performance approach and its evaluation are not providing enough information to deal with the tough question of the effectiveness of government programmes; and
• there is no capacity within the current institutional arrangements to focus on programmes involving more than one department.

In terms of recommendations and suggested solutions, several are offered. First, there is a need to recognize the distinction between the various roles for evaluation (e.g. as a day-to-day management tool, as an accountability tool). Second, the system of evaluation by a department's evaluation unit requires some form of check of its output (to enhance objectivity and credibility). Third, an enhanced external and internal audit function can play a key role in reinforcing the reforms, including the use of programme evaluations. We are supportive of Caulley's (1993a, p. 150) call for a post-positivist evaluation methodology[3] which recognizes the need for external audit. This would require both the establishment of an audit trail and the carrying out of an audit by a competent, external, disinterested auditor. The notion of an audit of an evaluation is based metaphorically on that of a financial auditor, who checks both the keeping of the accounts (corresponding to the process of the evaluation) and that the information in the financial report (corresponding to the evaluation report) is correct and trustworthy. This external approach to evaluation has to be balanced with the internal (self-evaluation) approach. This is what we mean by the middle ground between internal and external programme evaluation strategies. It would allow the strengths of internal evaluation to be retained, while improving the credibility of the evaluation process by adding external, independent verification of the methodology and results.
Notes

1. McLean (1993, pp. 121-2) debates the differences between programme evaluation, audit and evaluation research. Rather than partaking in the lively debate on the differences between evaluation and audit, it is noted here that the demand for programme evaluation comes out of the current APS reforms, which require programme performance to be reported in terms of outcomes on a periodic basis.

2. Several questions could be asked about the performance information: first, whether the performance measures used are appropriate; second, whether the bases used to calculate the performance measures are consistent with underlying records and transactions; and third, whether the performance information used is consistent from year to year (Guthrie, 1994).

3. Caulley (1993b, p. 132) argues for a fifth-generation evaluation approach, in which he states that the responsibility for evaluation, quality control and programme improvement should rest with the staff and be shared with the managers. That is, responsibility for the quality of work rests with the people who do it.

References and further reading

Amies, M. (1994), "Program evaluation: a Commonwealth perspective. Where are we now?", in Guthrie, J. (Ed.) (1995), Making the Australian Public Sector Count in the 1990s, IIR Publications, Sydney.

ANAO (1991), Implementation of Program Evaluation Stage 1, Audit Report No. 23, 1990-91, AGPS, Canberra.

ANAO (1992a), Efficiency Audit: Evaluation in Preparation of the Budget, Audit Report No. 13, 1991-92, AGPS, Canberra.

ANAO (1992b), Efficiency Audit: Program Evaluation in the Department of Social Security and Primary Industry and Energy, Audit Report No. 26, 1991-92, AGPS, Canberra.

ANAO (1993), Efficiency Audit: Program Evaluation Strategies, Practices and Impacts, Audit Report No. 35, 1992-93, AGPS, Canberra.

Australia (1983), Reforming the Australian Public Sector, December, AGPS, Canberra.

Australian Public Service Board (APSB) and Finance (1984), Financial Management Improvement Program: A Diagnostic Study, AGPS, Canberra.

Bartos, S. (1993), "Appropriate use of performance information management and accountability", AIC Conference on Performance Management, 25 August.

Beringer, I., Chomiak, G. and Russell, H. (1986), Corporate Management: The Australian Public Sector, Hale & Iremonger, Sydney, pp. 11-21.

Caulley, D. (1993a), "Qualitative methodologies within an evaluation framework", in Guthrie, J. (Ed.), The Australian Public Sector: Pathways to Change in the 1990s, IIR Publications, Sydney, pp. 145-50.

Caulley, D. (1993b), "Overview of approaches to programme evaluation: the five generations", in Guthrie, J. (Ed.), The Australian Public Sector: Pathways to Change in the 1990s, IIR Publications, Sydney, pp. 124-33.

Davis, G., Weller, P. and Lewis, C. (Eds) (1989), Corporate Management in Australian Government, Macmillan, Sydney.

Department of Finance (1994), Commonwealth Financial Management Handbook, AGPS, Canberra.

Guthrie, J. (Ed.) (1993), The Australian Public Sector: Pathways to Change in the 1990s, IIR Publications, Sydney.

Guthrie, J. (1994), "Performance measurement indicators in the Australian public sector", in Buschor, E. (Ed.), Perspectives on Performance and Public Sector Accounting, UPT Publishers, Bern, Switzerland, pp. 259-77.

Guthrie, J. (Ed.) (1995), Making the Australian Public Sector Count in the 1990s, IIR Publications, Sydney.

Guthrie, J. and Humphrey, C. (1996), "Public sector financial management developments in Australia and Britain: trends and contradictions", Research in Governmental and Nonprofit Accounting, Vol. 9, pp. 283-302.


Guthrie, J., Parker, L.D. and Shand, D. (Eds) (1990), The Public Sector: Contemporary Readings in Accounting and Auditing, Harcourt Brace Jovanovich, Sydney.

House of Representatives Standing Committee on Finance and Public Administration (HRSCFPA) (1990), Not Dollars Alone: A Review of the Financial Management Improvement Program, AGPS, Canberra.

McLean, I. (1993), "Are we talking the same language? Defining program evaluation", in Guthrie, J. (Ed.), The Australian Public Sector: Pathways to Change in the 1990s, IIR Publications, Sydney, pp. 120-3.

Management Advisory Board and Management Improvement Advisory Committee, Commonwealth (MAB-MIAC) (1993), Building a Better Public Service, AGPS, Canberra.

Parker, L. and Guthrie, J. (1993), "The Australian public sector in the 1990s: new accountability regimes in motion", Journal of International Accounting, Auditing and Taxation, Vol. 2 No. 1, pp. 57-79.

Royal Commission on Australian Government Administration (RCAGA) (1976), Report, AGPS, Canberra.

Shand, D. (1995), "International trends in public sector accounting", in Guthrie, J. (Ed.), Making the Australian Public Sector Count in the 1990s, IIR Publications, Sydney, pp. 10-14.
