
Research in Accounting Regulation 23 (2011) 60–66

Contents lists available at ScienceDirect

Research in Accounting Regulation


journal homepage: www.elsevier.com/locate/racreg

Research Report

Measuring audit quality of local governments in England and Wales


Gary Giroux a,*, Rowan Jones b

a Department of Accounting, Mays Business School, Texas A&M University, College Station, TX 77845, USA
b Birmingham Business School, University of Birmingham, Edgbaston, Birmingham B15 2TT, United Kingdom

a r t i c l e   i n f o

Article history:
Available online 6 April 2011

Keywords:
Audit quality
Audit Commission
Local governments
Audit regulation
Client satisfaction

a b s t r a c t

The purpose of this paper is to model and test the audit quality provided to local governments in England and Wales. A key question is: are there major differences in audit quality provided? The Audit Commission, a national public body under Parliament, regulates the audits. It sets audit standards, appoints the auditors, and (although each auditor and client local government set the specific audit fee for that client) it establishes a formula to determine standard audit fees. The Audit Commission also conducts an annual review of the audit quality provided by the selected auditors, as well as a survey of client satisfaction. The majority of audits are conducted by District Auditors (public sector employees of the Audit Commission). About a quarter of local governments are audited by one of six private sector auditors (including three of the Big 4). The results indicate that audit quality differences are associated with the number of governmental audit clients and with local government type. Generally, there were modest quality differences by auditor category.

© 2011 Elsevier Ltd. All rights reserved.

Introduction

Research on the economic relationships for audits of United Kingdom (UK) local governments has been very limited, whereas in the United States (US) it is more extensive. The regulatory environments associated with UK and US local governments differ on several key points.1 Consequently, audit structure, risk, and quality relationships may be substantially different.

The purpose of this paper is to model and test the audit quality and client satisfaction of local governments in England and Wales. Earlier research (Giroux & Jones, 2007) provides evidence that private firms charge local governments lower audit fees; therefore, it is particularly important to assess audit quality across auditor groups. The Audit Commission regulates local government audits in England and used to do so in Wales.2 It sets audit standards for local governments, chooses the auditors of each local government, establishes a formula to determine standard audit fees (although the auditor and client determine the actual fees), and evaluates the audit quality provided by these auditors annually. Most audits are conducted by District Auditors (DA), who are in-house audit providers of the Audit Commission, operationally independent of the Commission and, in particular, of the Commission's regulatory functions; however, the Commission does use private sector auditors. For the 2000/01 financial year, the private audit firms included three of the Big Four (BIG 4) and three smaller firms that had a specialty in local governments (SMALL).

* Corresponding author. E-mail address: g-giroux@tamu.edu (G. Giroux).
1 Audits of US local governments are subject to limited federal regulations (e.g., the Single Audit Act of 1984). State regulations vary substantially by state and category of local government; thus, there are typically greater regulations of school districts and fewer of cities and counties. Audit standards are set in the private sector (by the American Institute of Certified Public Accountants). Almost always, the individual local governments pick their own auditors, and the specific procedures are determined locally. The exceptions are the requirements for state (government) auditors to audit specific local governments. Almost all audits are conducted by private auditors, with Big 4 firms auditing a substantial percentage of large local governments. In summary, there is considerable decentralization, with relatively little regulation of the process. Various state and federal agencies do have oversight of the audit process and some of these agencies conduct reviews of audit quality, although these are seldom publicly available.
2 More details concerning the Audit Commission are given in Giroux, Jones, and Pendlebury (2002, pp. 12–14) and Giroux and Jones (2007).

1052-0457/$ – see front matter © 2011 Elsevier Ltd. All rights reserved. doi:10.1016/j.racreg.2011.03.002


The purpose of the Commission's regulatory evaluation of audit quality is: (1) to confirm that the auditors meet all regulatory requirements and (2) to promote improvement by identifying and promoting good practices. This paper focuses on the Audit Commission's measures of audit quality for multi-function local governments.3 The Audit Commission has a formal procedure called the Quality Review Process (QRP) to "confirm that auditors are meeting the Code of Audit Practice, the Letter of Guidance and other requirements of the Audit Commission and promote continuous improvement through the encouragement, identification and spreading of good practices" (Audit Commission, 2002, para. 1). The QRP includes a quantitative performance measure on a scale of 0–100% for each auditor. The Audit Commission develops similar scores for local governments' satisfaction with their auditors.4 The audit quality scores are developed for each auditor, rather than for each local government.

This paper offers a number of contributions to audit and governmental research, and the results could influence regulatory practices. This is one of the few papers that actually measures audit quality from the regulator's perspective based on oversight reviews and other evidence. It is the only research that incorporates multiple measures of audit quality (specifically, on-site review and client satisfaction), all based on regulatory oversight. The results indicate that private sector auditors provide marginally significantly higher quality audits than in-house District Auditors. This paper builds on the findings of Giroux and Jones (2007), and the combined results suggest that private sector auditors provide not only higher quality but also lower fees. This emphasizes the importance of substantial regulatory oversight of the audit function and suggests that the Audit Commission should consider an expanded role for private sector auditors. This comprehensive analysis provides a potential comparison to different oversight frameworks, with potential policy implications for future regulatory decisions.

The remainder of the paper is divided into the following sections. The next section describes the audit fee and audit quality determinants associated with local governments in England and Wales. Section "Model development" reviews model development, specifically how to explain audit quality. Section "Sample and method" describes the sample of local governments used for empirical analysis and the statistical methods used. Section "Results" reviews statistical results. The last section concludes the paper.

Audit quality characteristics in England and Wales

The Audit Commission, as regulatory authority, chooses the auditor, using in-house auditors (known then as District Auditors) and private sector auditors. There are constraints on the Commission's choices (Giroux & Jones, 2007). Since the Commission was established, there has been a norm of allocating about 70% of the audits to the in-house practice, with the remainder to the private sector; this norm was not necessarily a matter of the Commission's own preference. Also, for practical reasons, while the in-house practice has a presence in all regions, the audits conducted by the private sector suppliers are grouped geographically, so that each private sector supplier only audits in certain regions. Moreover, the private sector suppliers can opt out of audits they do not want, whereas the in-house auditors cannot. Over the life of the Commission, the number of private sector firms used has fallen, in part because of mergers of audit firms, but also because firms have chosen to withdraw from local government audits. The Commission's stated aim is to maintain competition by having at least three auditors to choose from for all local governments. This implies that the Commission seeks high quality audits for reasonable audit fees.

Unlike fees, no direct measure of audit quality exists. To overcome this deficiency, the Audit Commission completes an annual Quality Review Process, a comprehensive review of audit quality based on an analysis of working papers and other inputs.5 In our view, this is a necessary condition for their choice procedures. The QRP included comprehensive control reviews of all the auditors for 2001. At least one audit was reviewed for each auditor, with more audits reviewed based on the total number of audits conducted by that auditor and other factors.6 The results of the QRPs are summarized as a percentage score for each auditor (from 0% to 100%), with scores ranging from 46% to 94% (Audit Commission, 2002). Similar percentage ratings are also calculated for local government satisfaction with their auditor (based on a survey) and relative timeliness (based primarily on Audit Commission deadlines for turning in various reports). These scores are developed for each auditor, not for each audit.

Service quality scores (SQ) were measured by the Commission using 728 questionnaires sent to the chief executive and an additional 364 to the director of finance for a sample of local governments and the National Health Service.
3 These are all the multi-function local governments of England and Wales administered by directly-elected politicians. They do not include local governments such as police and fire authorities.
4 The analysis of audit quality is based on the eleven auditors (six private-sector firms and five District Audit regions), not on the individual audits. The Audit Commission gathers comprehensive data on some aspects of the audit process from all audits, but the on-site reviews cover only a limited number of audits (see Note 5). This was the same type of process evaluated by Deis and Giroux (1992, 1996) using Texas school districts, the only other example of the direct analysis of audit quality by regulators that we are aware of. Lowensohn and Reck (2004) used desk reviews to evaluate audit quality.

5 The Audit Commission reviews the quality of the local government audits annually with its Quality Review Process. The purpose of the QRP is: (1) to confirm that the auditors meet all Audit Commission requirements, including the Code of Audit Practice, and (2) to promote improvement by identifying and promoting good practices. The QRP includes six elements: (1) performance monitoring of auditors, (2) quality assurance, (3) quality control based on reviews of the audit work, (4) quality evaluation with particular emphasis on good practices and new areas, (5) service quality based on questionnaires to the local governments, and (6) reviewing local government complaints (Audit Commission, 2002).
6 Comprehensive quality control reviews were conducted on 14 Big 4 audits, 16 private firm audits and 21 District Auditor audits. Of the 51 audit reviews, 7 (14%) did not meet Audit Commission standards and 18 (35%) were considered near misses (Audit Commission, 2002, paras. 14–22). The purpose of these specific detailed reviews was to evaluate the relative quality of each auditor, not to develop quality scores for each audit.


Three hundred and sixty-six responses were received, for a response rate of 50%. The average satisfaction rating for local governments was 73%, below the Audit Commission's target of 75% and below the previous year's 75% satisfaction rating. Because auditors are chosen by the Commission, the views of the local governments are an important component of the quality review process. The overt consequences of any problems identified by the Commission in its quality reviews are that these are discussed with the audit suppliers and remedies identified, with subsequent monitoring of the remedies. There have been no reported cases of sanctions being imposed by the Commission.

Elaborating on the two pools of external auditors from which the Audit Commission chooses, the first pool is of in-house auditors (known then as District Auditors), who are public officials. The second pool is of private sector auditors, including but not restricted to the Big 4 firms.7 For the financial year 2000/01, District Auditors conducted 303 audits (74.3%), while private sector auditors conducted 105 (25.7%).

Model development

Audit quality theory

We use Khurana and Raman's (2004) definition of audit quality, focusing on relative auditor competence: an auditor will "(1) detect and (2) correct/reveal any material omission or misstatements in the financial statements" (p. 475). Discovery of material omissions or misstatements is associated with the auditor's technical ability. This construct cannot be directly observed; however, the Audit Commission's scores are used as a surrogate of auditor competence. Deis and Giroux (1992, 1996) tested the audit quality of Texas school districts using working paper reviews conducted by the Texas Education Agency. Banker, Cooper, and Potter (1992) used the Deis and Giroux (1992) data in a simultaneous equation framework. Peer review was evaluated by Giroux, Deis, and Bryan (1995), based on whether the auditors of Texas school districts were peer reviewed under the then-voluntary AICPA peer review program.8 Lowensohn and Reck (2004) used the desk reviews of Florida local governments as a measure of audit quality. These studies indicated that violations of technical audit standards were fairly common and could be modeled (e.g., fewer violations were detected for audit firms with an industry specialization).

Client satisfaction is a distinct but related construct to audit quality (Behn, Carcello, Hermanson, & Hermanson, 1997; Carcello, Hermanson, & McGrath, 1992). Since the clients hire and pay the auditors, client satisfaction should be an important goal for most audit firms. Carcello et al. (1992) and Behn et al. (1997) administered questionnaires to samples of Fortune 1000 companies and others. Samelson, Lowensohn, and Johnson (2006) measured perceived audit quality and satisfaction based on a questionnaire for a sample of local government finance directors. Using ordinal regression, they found that auditor expertise in government accounting increased satisfaction and quality, but Big 5 auditors did not provide higher client quality or satisfaction.

Measuring audit quality

The purpose of this paper is to evaluate the audit quality and client satisfaction associated with local governments in England and Wales. The first test is based on the QRPs, the basic summary scores of the Audit Commission's Quality Review Process. Also tested are service quality scores (SQs), based on an Audit Commission questionnaire to the audited governments on auditor satisfaction. There are five theoretical constructs used to test audit quality: auditor type, audit experience (industry specialization), audit fee, demographics, and local government type. The primary focus is on auditor type and audit experience. Eleven empirical surrogates are used to measure these theoretical constructs.

A close relationship between audit quality and auditor type is expected. The assumption is that the auditor type (reputation) of an audit firm is associated with the quality of audit work and that higher service quality will lead to an improved reputation (Moizer, 1997). We have three auditor categories: District Auditor (DA), one of the Big Four accounting firms (BIG 4), or a non-Big 4 private firm with a governmental specialty (SMALL). Dummy variables are used to identify BIG 4 and SMALL. The literature focusing on US governments (e.g., Copley, Gaver, & Gaver, 1996; Rubin, 1988) suggests that BIG 4 audits should be associated with higher quality, a positive sign. However, as pointed out by Khurana and Raman (2004), the institutional framework and other factors suggest that the relationships could be different in other countries. The number of private auditors auditing local governments declined from 1983 to 2001, possibly because audit regulation was too strict. If so, this implies that competent auditors withdrew and competition was reduced. This suggests that audit quality would be lower for the private auditors compared to DAs. Also, DAs specialize exclusively in governmental audits. Since it is not clear that Big 4 firms necessarily provide higher quality audits in this specific context, no prediction is made for BIG 4 or SMALL.

The number of local government clients (CLIENTS) is used as a measure of industry expertise. In audit environments where Big 4 audits are not as important, industry expertise is considered a good measure of higher quality.

7 The Audit Commission (1984, p. 4) explained its use of the private sector auditors as follows: "The appointments reflected the Commission's overall evaluation of the capabilities of the various accountancy firms with significant local government audit experience. The Commission sought to match the competence of auditors with the particular circumstances facing individual local governments. It wished to avoid individual firms obtaining an unduly dominant position in particular regions but not totally to the exclusion of appointing the same auditor to adjoining rural local governments (in order to encourage inter-government comparison, and to reduce out-of-pocket costs)." For the financial year 1983–84, there were 13 private sector auditors, including all of the Big 8.
8 The analysis of Texas school district audits focused on relatively small audit firms.


Deis and Giroux (1992) found a positive relationship between the number of clients and their measures of audit quality. Mayhew and Wilkins (2003) assessed the industry specialization of auditors using the audit fees of firms making initial public offerings. The rationale for higher fees is the importance of quality differentiation based on industry expertise. A positive sign is expected.

An audit fee premium is often associated with higher audit quality, the assumption being a reputation effect. Clatworthy, Mellett, and Peel (2002) and Basioudis and Ellwood (2005) tested audit fees for UK National Health Service trusts (also regulated by the Audit Commission). Clatworthy et al. (2002) found no fee premium for the Big 6, while Basioudis and Ellwood (2005) found a fee premium only for PricewaterhouseCoopers. The audit fee variable of interest is the audit fee premium (FEE PREM), which is defined as the actual total audit fee divided by the standard fee established by the Audit Commission. The standard fee is defined by local government type, based on a flat fee by type plus a percentage of total annual gross expenditures (the percentage also is based on local government type). If the actual total audit fee is equal to the standard fee, then FEE PREM = 1; if actual > standard, then the local government pays a premium to standard. A local government paying a 10% premium over standard would have a FEE PREM score of 1.1 (a numerical sketch of this calculation appears at the end of this subsection).

A number of demographic variables are included in the quality model. All the control variables were used by Giroux and Jones (2007), who investigated audit fees and found most demographic variables and some categorical variables of government type significant. Although the relationships of these variables to audit quality may differ, they are likely to have an effect on quality. However, except for Giroux and Jones (2007), there is no theoretical or empirical evidence in this context. Client size should be associated with audit complexity. Population is used to capture relative size and complexity. Following Bamber, Bamber, and Schoderbek (1993) and McLelland and Giroux (2000), a positive sign is expected. Population density (POPDEN) is used to capture relative urban-ness, with more urban local governments expected to provide more public services. The relationship to audit quality is unclear and no sign is predicted. Average income is used to capture the relative wealth of the local government. In US studies, the level of public services increases with wealth. However, public monies from the UK central government tend to be dependence-based, with increased funding to poorer local governments. No prediction is made for average income. The logs of population (LPOP), population density (LPOPDEN) and average income (LAVGINC) are used for multivariate analysis to control for skewness.

There are five categories of local governments in this analysis: County Councils, District Councils, London Boroughs, Unitary Authorities, and Metropolitan Councils. Two main differences exist among these categories. First, although all the local governments are multi-function, the different categories have distinct functions. Second, one of the categories, the counties, typically covers wide geographical areas and thus tends to have low population densities. No prediction is made for type of local government.
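To make the FEE PREM calculation concrete, the following is a minimal sketch. The flat fee and percentage rate below are invented for illustration; the Audit Commission's actual schedule parameters are not reproduced in this paper.

```python
# Hypothetical illustration of the FEE PREM calculation. The schedule
# parameters (flat fee, percentage rate) are invented for this sketch;
# the Audit Commission's actual schedule varies by local government type.

def standard_fee(flat_fee, pct_rate, gross_expenditure):
    # Standard fee = flat fee for the government type plus a
    # type-specific percentage of total annual gross expenditure.
    return flat_fee + pct_rate * gross_expenditure

std = standard_fee(flat_fee=20_000, pct_rate=0.0004,
                   gross_expenditure=50_000_000)   # 20,000 + 20,000 = 40,000
actual_fee = 44_000
fee_prem = actual_fee / std                        # 44,000 / 40,000 = 1.10
print(f"FEE PREM = {fee_prem:.2f}")                # a 10% premium over standard
```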

Table 1. Descriptive statistics.

Panel A: Mean scores and distributions by category (n = 409)

Measure    Mean    Standard deviation    Minimum    Maximum
QRP        73.5    6.4                   46         94
SQ         74.6    5.1                   49         90

Panel B: Mean scores by auditor type

Measure    Big 4    Small private    District auditor
QRP        72.4     73.7             73.7
SQ         73.6     72.5             75.0
n          80       25               303

Panel C: Continuous variables used for analysis

Variable              Mean       Standard deviation    Minimum    Maximum
Fee premium           117.8%     18.3%                 71.9%      253.5%
Population            192,015    246,618               25,000     3,332,800
Population density    12.58      19.05                 0.20       137.10
Average income (£)    20,320     5,767                 12,900     86,500

Panel D: Categorical variables used for analysis

Variable               Number    Percentage
Big 4                  80        19.6
Small private firms    25        6.1
District auditors      303       74.3
County councils        34        8.3
District councils      238       58.3
(continued on next page)

Sample and method

The sample is based on the data files of the Audit Commission, which include 409 local governments.9 Information on audit characteristics, audit fees, and related data comes from these files. Audit quality information comes from the Audit Commission's analysis of the quality review process (Audit Commission, 2002).10 Financial information on the local governments comes from the Chartered Institute of Public Finance and Accountancy (CIPFA, 2000). Income data come from Inland Revenue Statistics (Reade, 2000).

Empirical analysis includes both univariate and multivariate tests. Descriptive analysis is included in Table 1. The testing of the audit quality models uses OLS regression. The dependent variables are QRP and SQ. The independent variables of interest are: (1) dummy variables for SMALL and BIG 4, to discern whether these differ significantly from the District Auditors; and (2) the number of local government clients, a measure of industry specialization. The remaining variables are used for control purposes.
9 The sample of 409 includes the Corporation of London, which is unique. It is a geographically small business district, officially listed with a population of a few thousand. Therefore, this observation was dropped from the empirical analysis.
10 This is considered proprietary data by the Audit Commission. We use it with their permission and maintain the confidentiality of the names of specific auditors.


Multivariate analysis is problematic since there are 11 levels of audit quality for QRP (three for Big 4, three for SMALL, and five for DA) and seven for SQ and TIME (all DAs are collapsed into a single measure). OLS is used, since it tends to be robust to distribution violations. Regression diagnostics include tests for multicollinearity, normality of residuals, extreme values, and heteroscedasticity.11

Table 2 summarises the OLS analysis. Initial regression models included a substantial number of extreme values, especially in the SQ model. A major factor in the SQ model (and in the QRP model to a lesser extent) was the low quality scores for one of the auditors. Rather than drop this auditor from the analysis, the scores were rescaled as ranks (for QRP, 1 was the lowest score and 11 the highest; for SQ and TIME, 1 was the lowest and 7 the highest, since the District Auditors were not separated by scores). We converted the ranks to normal scores to correct for possible OLS problems.12 Even with the normal score data, some extreme values existed and were eliminated. All three models showed evidence of heteroscedasticity; consequently, White's correction was used to re-estimate t-values. There was no evidence of normality violations (after extreme values were eliminated) or multicollinearity.
11 Multicollinearity is analysed using variance inflation factors, normality of residuals using normal probability plots, extreme values as residuals beyond three studentised standard deviations, and heteroscedasticity using the Glejser test.
12 The use of ranks causes potential problems associated with normality of residuals and heteroscedasticity. The ranked quality scores are nonlinear. Following Cooke (1998), ranks are transformed to normal scores to solve this problem using the van der Waerden approach: each rank is divided by the number of ranks plus one [= r/(n + 1)] and the result is mapped through the inverse of the standard normal distribution.
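As an illustration of the estimation approach described above and in Notes 11 and 12, the following is a minimal sketch using synthetic data; the variable names and generated values are hypothetical, since the Audit Commission's data are proprietary.

```python
# Sketch: van der Waerden normal scores followed by OLS with White's
# heteroscedasticity-corrected standard errors. All data are synthetic.
import numpy as np
from scipy.stats import norm, rankdata
import statsmodels.api as sm

rng = np.random.default_rng(0)

def van_der_waerden(scores):
    # Rank the scores 1..n (ties averaged), then map r/(n + 1)
    # through the inverse standard normal CDF.
    ranks = rankdata(scores)
    return norm.ppf(ranks / (len(scores) + 1))

n = 408                                   # roughly the paper's sample size
big4 = rng.integers(0, 2, n)              # Big 4 dummy
clients = rng.integers(5, 120, n)         # number of government clients
lpop = np.log(rng.uniform(25_000, 3_300_000, n))
qrp = 70 + 2 * big4 + 0.02 * clients + rng.normal(0, 5, n)  # synthetic scores

y = van_der_waerden(qrp)                  # normal-scored dependent variable
X = sm.add_constant(np.column_stack([big4, clients, lpop]))
res = sm.OLS(y, X).fit(cov_type="HC0")    # White's correction
print(res.summary())
```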

Table 2. Regression analysis with coefficients (t-values), using normalized rank scores.

Variable                     QRP                SQ
Big 4                        0.117 (3.38)*      0.090 (4.95)*
Small private firm           0.265 (4.72)*      0.021 (0.67)
Number of clients            0.003 (4.43)*      0.003 (8.56)*
Audit fee premium            -0.033 (-0.64)     0.046 (1.66)**
Log of population            -0.005 (-0.18)     -0.001 (-0.09)
Log of population density    -0.033 (-3.56)*    -0.019 (-3.91)*
Log of average income        -0.176 (-3.52)*    -0.151 (-5.67)*
County council               0.027 (0.50)       -0.022 (-0.76)
District council             -0.022 (-0.51)     -0.068 (-2.94)*
Unitary district             0.167 (4.13)*      0.018 (0.83)
London borough               0.006 (0.12)       0.030 (1.13)
Intercept                    2.136              1.8913
F-value                      7.02*              18.64*
Adjusted R2                  17.9%              32.6%
N                            407                402

* Significant at .01. ** Significant at .1.

Results

Univariate analysis

Panel A of Table 1 summarizes means and distribution characteristics (standard deviation and range) for the quality scores. The scores are compared by auditor type in Panel B. All scores are on a scale of 0–100%. Averages for both QRP and SQ were in the mid-70s, with substantial differences in range. One SMALL firm had the lowest QRP and SQ scores (46 and 49, respectively), while another SMALL firm had the highest (94 and 90, respectively).

Panel B presents mean quality scores by auditor type. There was relatively little difference in QRP and SQ scores by auditor type, although the private firms were somewhat lower than the District Auditors on average. As a group, the Big 4 had the lowest QRP scores, an unexpected finding.

Panel C summarizes means and distribution characteristics (standard deviation, minimum and maximum) of the variables of interest. FEE PREM averages 117.8%; that is, a premium over standard of 17.8%. The most common explanations for higher fees were (1) high audit risk and/or (2) that the statistical calculations on which the standard fees were based were incorrect or misstated. Population averaged 192,000, with a substantial range of 25,000 to 3.3 million. Mean population density was 12.6 persons per hectare, again with a substantial range (0.2–137.1). Average income was £20,320, with the poorest local government at £12,900 and the wealthiest at £86,500. Because of skewness, these three variables were logged for the multivariate analysis.

Panel D states the frequencies and percentages of the dummy variables used for analysis. Most audits were conducted by DAs (303, or 74.3%), while the Big 4 audited 80 (19.6%). The majority of the local governments were District Councils (238, or 58.3%), while there were only 32 London Boroughs and 36 Metropolitan Districts.

OLS analysis

Regression results are summarized in Table 2. The sample size varied by dependent variable because of the extreme values eliminated. Each model was significant at .0001. The normalized rank score QRP model had an adjusted R2 of 17.9%. Both BIG 4 and SMALL were positive and significant, suggesting higher quality relative to DAs. Number of clients was positive and significant, with more experienced auditors (i.e., those conducting more audits) providing higher quality. The audit fee premium was insignificant, suggesting that the relative quality provided was independent of the premium. Population was not significant, but both population density and average income were negative and significant. This suggests that high-density local governments, such as major urban areas, received lower audit quality, as did wealthier local governments. Local government type made a difference only for Unitary Districts, which received significantly higher quality.

The results are relatively similar for SQ using normalized rank scores. The overall explanatory power based on adjusted R2 is 32.6%. BIG 4 was positive and significant, suggesting that clients were more satisfied with their performance relative to District Auditors.


However, client satisfaction was insignificant for SMALL, indicating no difference between DA and SMALL firms. CLIENTS was positive and significant, as expected. The audit fee premium was positive and significant, indicating that higher fees are associated with higher client satisfaction. Both population density and average income were negative and significant. Council type results were somewhat different, with District Councils negative and significant, and Unitary Districts insignificant. A limitation of this model is that separate SQ scores were not available for each DA region.

The quality scores represent 11 categories (seven for SQ). OLS may be inefficient since the dependent variables are not continuous. With ordered logistic regression, the ordinal dependent variables are treated as categorical and ordered. Results using ordered regression (not tabulated) were similar to OLS. BIG 4 and SMALL audits generally were associated with higher quality under the different measures, relative to DAs. Industry specialty as measured by CLIENTS also was associated with higher quality. Size as measured by LPOP was not significant, although LPOPDEN and LAVGINC were negative and significant for both the OLS and logit models. Some differences were noted by local government type, but no systematic patterns emerged.
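An ordered-regression robustness check of the kind described above could be implemented along the following lines. This is a sketch with synthetic data; statsmodels' OrderedModel is one possible tool, and the paper does not state which software was actually used.

```python
# Sketch of an ordered logistic regression on ranked quality categories.
# Synthetic data; the paper's ordered-regression results are not tabulated.
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(1)
n = 408
big4 = rng.integers(0, 2, n)
clients = rng.integers(5, 120, n)

# Latent quality with a logistic error, cut into 11 ordered categories
# (mirroring the 11 auditor-level QRP ranks, 1 = lowest).
latent = 0.5 * big4 + 0.01 * clients + rng.logistic(size=n)
rank = pd.Series(pd.cut(latent, bins=11, labels=False) + 1, name="qrp_rank")

X = pd.DataFrame({"big4": big4, "clients": clients})
res = OrderedModel(rank, X, distr="logit").fit(method="bfgs", disp=False)
print(res.summary())
```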

Conclusions

The purpose of this paper was to analyze the audit quality of local governments in England and Wales. The structure of UK local governments and auditing regulations is substantially different from US counterparts. Consequently, attempting to understand the relationships and motivations in this environment is a useful addition to the audit economics literature. The different regulatory environment also suggests that alternative regulatory and other institutional factors should be evaluated for future policy implications. Since the Audit Commission reviews audit quality in detail, its measures are particularly useful in evaluating quality differences.

OLS regression results provide basic information on multivariate relationships. All models were significant and provided interesting results. The most consistent factor explaining quality differences was the number of local government clients (CLIENTS), consistent with Deis and Giroux (1992), suggesting the importance of industry specialization for audit quality. BIG 4 and SMALL were positive and significant in all models (except that SMALL was insignificant using SQ), consistent with a reputational effect. On the other hand, size as measured by LPOP was not significant, contrary to other studies (e.g., Deis & Giroux, 1992), although other demographic indicators (population density and average income) were significant.

There are several limitations to this analysis. Quality measures are based on the Audit Commission reviews. Ranks were used to evaluate quality scores for OLS regression and were treated as separate categories using logit; thus, the relative magnitude of quality differences is not analysed. This analysis is based on the Audit Commission's evaluations of quality, and the purpose of their review may differ from a typical academic perspective.

The analysis is for a single year, 2001, and this was the first year when audit fees were based on auditor-government contracts rather than on the number of audit hours. Results may differ in earlier (or later) years.

A number of areas for further research are suggested. At this point it is not clear whether the different regulatory requirements in the UK produce substantially different results in overall audit quality. In terms of public policy, this is a key issue. The analysis should be extended to earlier years, to the extent that data are available, and to future years as new information becomes available. Evaluating quality changes over time would be a useful approach. The Audit Commission also regulates the audits of the National Health Service, which should be evaluated.

In recent years, there has been a shift of emphasis in the in-house (DA) practice. On the one hand, the in-house practice is more clearly defined as being a part of the Commission (whereas in the past there was a tendency to emphasise that it was at arm's length); on the other hand, there is consequent recognition that quality review of the in-house practice should be carried out independently of the Commission. In 2008, the Commission produced the second annual report for its in-house audit practice: "The Commission's audit practice is an integral part of the Audit Commission and is thus governed by the Commission Board..." (Audit Commission, 2008, p. 7). The quality monitoring program for the in-house audit practice for 2006–07 included reviews by the audit practice's own staff but also by the Association of Chartered Certified Accountants, a private sector professional accounting body (Audit Commission, 2008, p. 15). Moreover, what the Commission (2008, p. 14) terms the final tier of quality review is provided by the Audit and Inspection Unit of the Financial Reporting Council, the latter being a private sector body (though established by law) that, among other things, regulates audit for the private sector.

References
Audit Commission. (1984). Annual report and accounts. London: Audit Commission.
Audit Commission. (2002). Quality review process 2000/01. London: Audit Commission.
Audit Commission. (2008). Audit Commission audit practice annual quality report.
Bamber, E., Bamber, L., & Schoderbek, M. (1993). Audit structure and other determinants of audit report lag: An empirical analysis. Auditing: A Journal of Practice and Theory, 12(2), 1–23.
Banker, R., Cooper, W., & Potter, G. (1992). A perspective of research in governmental accounting. Accounting Review, 496–510.
Basioudis, I., & Ellwood, S. (2005). External audit in the National Health Service in England and Wales: A study of an oversight body's control of audit remuneration. Journal of Accounting and Public Policy, 24, 207–241.
Behn, B., Carcello, J., Hermanson, D., & Hermanson, R. (1997). The determinants of audit client satisfaction among clients of Big 6 firms. Accounting Horizons, 7–24.
Carcello, J., Hermanson, R., & McGrath, N. (1992). Audit quality attributes: The perceptions of audit partners, preparers, and financial statement users. Auditing: A Journal of Practice and Theory, 1–15.
Chartered Institute of Public Finance and Accountancy (CIPFA). (2000). Finance and general statistics 2000/01. London: CIPFA.
Clatworthy, M., Mellett, H., & Peel, M. (2002). The market for external audit services in the public sector: An empirical analysis of NHS Trusts. Journal of Business Finance and Accounting, 1399–1439.
Cooke, T. (1998). Regression analysis in accounting disclosure studies. Accounting and Business Research, 209–224.

Copley, P., Gaver, J., & Gaver, K. (1996). Simultaneous estimation of the supply and demand of differential audits: Evidence from the municipal audit. Journal of Accounting Research, 137–155.
Deis, D., & Giroux, G. (1992). Determinants of audit quality in the public sector. Accounting Review, 462–479.
Deis, D., & Giroux, G. (1996). The effect of auditor changes on audit fees, audit hours, and audit quality. Journal of Accounting and Public Policy, 55–76.
Giroux, G., Deis, D., & Bryan, B. (1995). The effects of peer review on audit economics. Research in Government Regulation, 9, 63–82.
Giroux, G., & Jones, R. (2007). Investigating the audit fee structure of local authorities in England and Wales. Accounting and Business Research, 37(1), 21–37.
Giroux, G., Jones, R., & Pendlebury, M. (2002). Accounting and auditing for local governments in the U.S. and the U.K. Journal of Public Budgeting, Accounting and Financial Management, 1–26.
Khurana, I., & Raman, K. (2004). Litigation risk and the financial reporting credibility of Big 4 versus non-Big 4 audits: Evidence from Anglo-American countries. Accounting Review, 473–495.
Lowensohn, S., & Reck, J. (2004). A longitudinal analysis of local government audit quality. Research in Governmental and Nonprofit Accounting, 201–216.
Mayhew, B., & Wilkins, M. (2003). Audit firm industry specialization as a differentiation strategy: Evidence from fees charged to firms going public. Auditing: A Journal of Practice and Theory, 33–52.
McLelland, A., & Giroux, G. (2000). An empirical analysis of auditor report timing by large municipalities. Journal of Accounting and Public Policy, 263–281.
Moizer, P. (1997). Auditor reputation: The international empirical evidence. International Journal of Auditing, 1(1), 61–74.
Reade, S. (Ed.). (2000). Inland Revenue statistics 2000. London: The Stationery Office.
Rubin, M. (1988). Municipal audit fee determinants. Accounting Review, 219–236.
Samelson, D., Lowensohn, S., & Johnson, L. (2006). The determinants of perceived audit quality and auditee satisfaction in local government. Journal of Public Budgeting, Accounting & Financial Management, 139–166.
