The first book specifically about the subject was published in 1989[1]. A recent
literature review[2] indicated that up to 1990 there were fewer than 20 articles
which dealt with the benchmarking of business processes.
The majority of those 20 articles describe the work and experiences at the
Xerox Corporation which won the Malcolm Baldrige National Quality Award in
1989 and whose activities have been highly significant in bringing widespread
attention to benchmarking. Benchmarking in this instance has been used to
refer to systematic inter-firm comparisons:
Benchmarking is the continuous process of measuring products, services and practices
against the toughest competitors or those companies recognized as industry leaders
(D. Kearns quoted in [1]).
More recently the notion of change and improvement through the adoption of
superior practices has been incorporated into the definition:
Benchmarking is a continuous search for and application of significantly better practices that
lead to superior competitive performance (Westinghouse Productivity and Quality Care
quoted in [1]).
Our conclusion is simple: Lean production is a superior way to make things...It follows that the
whole world should adopt lean production, and as quickly as possible[5, p. 225].
The purpose of this article is not to enter into the debate about levels of analysis
in the question of performance comparisons, but to examine the process of
benchmarking, drawing on the experiences of one particular benchmarking
project. However in doing so, the broader assumptions which underpin the
benchmarking process will be explored alongside the basic principles of the
process itself.
(3) extend this approach to the supply chain, by developing measures of the
relationships between plants and their customers and suppliers.
A further aim, arising from these three, was to use these methods to identify the
correlates of high performance manufacturing.
The measures
Clearly in any benchmarking study, the choice of which attributes to measure is
of paramount importance. Given the objectives identified above, the choice of
measures for the study was relatively straightforward; there had to be a set of
performance measures, and a set of measures to gauge how closely plants
adhered to the lean production model.
For performance measurement, measures were developed in three areas:
productivity, quality and time. In the case of productivity, the study followed
IMVP and focused on physical productivity measures. Financial measures were
largely avoided because of the difficulty in interpreting this data, as factors
such as transfer pricing and currency exchange rates can give misleading
impressions. This proved an advantage in Japan where some companies were
unwilling to divulge financial data but were happy to provide very detailed
information on physical productivity and quality measures. Overall, only three
plants which received a first visit from the team failed to provide usable
responses to the questionnaire. Two of these were plants from within the same
group, where the research effort fell foul of internal political issues. This and
subsequent experiences of the research team suggest that there may be more
problems benchmarking units within the same organization than with
benchmarking independent units because of organizational-political problems.
Virtually all questionnaires were secured from plants with either a full set of
responses or only a small number of unanswered questions (less than 5 per
cent).
The measures of internal management practices were developed from the
assembly plant questionnaire used by the IMVP. The questionnaire contained
sections specifically designed to provide objective, quantitative indicators of the
management practices utilized at the plant (factory practice, work systems and
Plant performance
  Productivity: Annual units of output divided by annual labour hours, adjusted for vertical integration, product complexity, overtime and absenteeism
  Quality: Failure rate at first final inspection and test
  Time: Throughput time
Plant characteristics
  A series of features likely to impact on performance was measured, including annual volumes, value of sales and headcount
Product characteristics
  Measures were taken of product variety (number of live part numbers), product age and product complexity, the last of these being assessed via a part count
Downloaded by The University of Edinburgh At 19:01 11 August 2015 (PT)
made for vertical integration. Thus plants which outsourced a high proportion
of work had their headcount increased to take account of this, and vice versa in
the case of highly vertically integrated plants.
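The direction of this vertical-integration adjustment can be sketched as follows; the function, its parameters and the simple linear scaling are illustrative assumptions, not the formula actually used in the study.

```python
def adjusted_productivity(annual_units, annual_labour_hours, bought_in_share,
                          baseline_bought_in_share=0.5):
    """Illustrative sketch of a vertical-integration adjustment.

    bought_in_share: hypothetical fraction of the product's work outsourced.
    Plants that outsource more than the baseline have their labour hours
    scaled up (crediting work done elsewhere), and vice versa.
    """
    in_house = 1.0 - bought_in_share
    baseline_in_house = 1.0 - baseline_bought_in_share
    adjusted_hours = annual_labour_hours * (baseline_in_house / in_house)
    return annual_units / adjusted_hours
```

On this sketch, a plant outsourcing 75 per cent of its work against a 50 per cent baseline would have its hours doubled, halving its apparent productivity relative to an unadjusted figure.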
The issue of comparability is of central importance to benchmarking studies,
as they stand or fall by the legitimacy of the comparisons they make. Unless a
true apples-to-apples comparison is perceived to have taken place, the results
will be seen as at best irrelevant, at worst misleading. As described above, in the
present study it was necessary to make adjustments even when the definition of
products and processes was very sharp indeed. This must cast doubt on the
genuine comparability of the data generated by many benchmarking studies.
The plants
The project covered 18 plants which manufacture components for the motor
industry, nine in Japan and nine in the UK. Four different products (seats,
exhausts, disc brake calipers and wire harnesses) were included in the study.
The sample included five plants from each of the first three categories and three
wire harness makers.
The mix of products enabled a range of process technologies to be covered by
the project, giving a cross-section of manufacturing processes from the
relatively capital intensive precision machining of disc brake caliper production
to the more labour intensive activities in wire harness assembly.
The choice of which products to include was also partly determined by the
availability of potential participants which was restricted in the UK industry.
Having a mix of products involved considerably more work since it was
necessary to become familiar with four distinct and different manufacturing
processes, as each process had to be modelled accurately for measurement
purposes. As a consequence the generic questionnaire was amended for each
different product group in some areas, although about 95 per cent of items were
common.
Data collection
The study began in January 1992 and ran for about ten months, including the
development of the methodology employed and the initial presentation of the
results. The research period involved two field trips to Japan, one of five weeks'
duration in March and April and a three-week visit in July.
Benchmarking demands a systematic rigorous approach to data collection
with an emphasis on quantitative, hard information. To this end there was a
three-phase data collection programme.
In both the UK and Japan, the first phase in the field research process
involved two or three members of the team visiting each manufacturing facility.
The first priority was to sell the project to the plant management. Often the
initial access was negotiated through members of corporate management, but
the project also required the committed participation of managers at plant level.
In fact, these were the main relationships that were nurtured, and it was made
clear that the commitment of the project team was to plant management. (Following
publication of the results members of the project team actually had approaches
from corporate managers, asking if their plants had participated in the study;
they were politely told that they should address such queries to their plant
managers, as the pledge of confidentiality and anonymity that the research
team made to each plant included protection from members of their own
organization outside the plant.) There was clearly a potential danger in
participating in the project for plant managers, in that their plant might be
demonstrated to be a low performer. The use made of the benchmarking data
by companies is an issue which will be explored later in the article.
The completion of the questionnaire took three-to-four days of management
time. Thus an agreement to participate represented a considerable
commitment. In exchange for this companies were offered full feedback on their
position relative to others in the study. This offer, and the subsequent necessity
to deliver, was significant in gaining commitment to the research process by the
participants.
The first visit to the plant involved a tour of the factory. This enabled the
research team to map each plant's manufacturing processes and to check the
comparability of process and product technology across the sample. For
example, a number of companies manufactured more than one product at the
same plant and the plant tour enabled the research team to confirm what should
be included and excluded for the purposes of the project. Equally this gave an
invaluable opportunity to collect qualitative data and hence flesh out a better
understanding of the plant and how it was managed. The shopfloor visit and
subsequent discussions with managers provided the research team with an
opportunity to guard against the inflexibility of the research process imposing
erroneous patterns on the data.
Finally, in the first visit, the questionnaire was explained in order to identify areas of
uncertainty or ambiguity and to gain background information. For example,
the importance of consistency in which time period was used and the need to
exclude head count not associated with the product under consideration were
reaffirmed. The companies were then left to complete the questionnaire. This
process was undertaken with the knowledge that the research team would
return to collect the completed version in a second visit. Four-to-six weeks was
allowed for completion of the questionnaire.
The second phase of data collection involved a second plant visit, which was
also conducted by a pair of researchers, one taking the lead role and one
acting as back-up. During the second visits the research team systematically
ran through the questionnaire, double-checking responses for any ambiguities
or anomalies in what the companies reported. In practice one of the research
team led the discussion while the second member cross-checked responses and
flagged up problem areas.
As trends began to emerge during the research, the team was able to ask follow-
up questions in some areas. In particular the research team was able to gather
extra data on practices such as the workings of supplier clubs or associations
which are common in Japan but about which the West is still learning.
Naturally, the ad hoc nature of the data collection precluded their inclusion in
the main body of the findings but the information helped to set the quantitative
data in context and clarify some of the interrelations between the sample plants
themselves and their environment.
During the construction of the questionnaire, the research team had noted the
difficulty in phrasing concise yet precise questions. Consequently, postal
administration of the questionnaire was ruled out so that ambiguities and
problems could be identified and tackled face-to-face. This was indeed borne
out in practice where problems which were ironed out in a relatively
straightforward manner face-to-face would have proved very difficult if
solutions had been sought via fax or phone. For example, drawing up a graphic
representation of subcontracting interrelationships is often easier than
explaining these verbally. This was a resource-expensive mode of data
collection, but fundamental to the accuracy and subsequent credibility of the
data.
The third phase of the data collection process involved a verification exercise
by each plant. Prior to the comparison, companies were shown their individual
performance calculations and asked to verify that they represented a true and
accurate picture of their performance. This process was also utilized as a safety
net for the second phase of data collection, in instances in which problems or
anomalies were only discovered following further responses from other
members of the sample.
Once the completed questionnaires were actually secured from the companies
the information was entered on to a standard data sheet. This provided a useful
discipline because it helped to indicate areas in which further explanation was
required by the companies, or indeed in which mistakes had been made and
overlooked by the research team while actually in the field. Despite the fact that
the clear objective all through the research design process had been to attempt
to generate systematic and comparable data, the research team was still faced
with decisions and choices over what the appropriate response from a
company might be. The completion of the data sheet forced items to be
represented as numbers, a process which in itself frequently revealed unresolved
ambiguities. For example, the true amount of one day of stock is dependent on
how many shifts, and of what duration, are actually worked during that day.
Thus responses were of necessity broken down to the smallest common unit
across the sample. Such decisions were taken only following a central
discussion involving those actually responsible for having collected the data.
These steps were taken to ensure consistency and to remain true to the original
data.
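The "one day of stock" ambiguity noted above can be illustrated with a small sketch; the shift patterns and figures are hypothetical, chosen only to show why responses had to be broken down to a smaller common unit.

```python
def stock_days_to_hours(stock_days, shifts_per_day, hours_per_shift):
    """Convert a reported 'days of stock' figure into worked hours,
    a smaller common unit across plants with different shift patterns."""
    return stock_days * shifts_per_day * hours_per_shift

# The same nominal 'one day' of stock covers very different amounts of
# production time under different shift patterns (hypothetical plants):
single_shift = stock_days_to_hours(1, 1, 8)   # one 8-hour shift: 8 hours of cover
double_shift = stock_days_to_hours(1, 2, 8)   # two 8-hour shifts: 16 hours of cover
```

Comparing the two plants on raw "days" would thus understate the double-shift plant's stockholding by half, which is why conversion to a common unit mattered.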
When the completed and cleaned data set was obtained, the adjustment
factors were applied. The research team was keen to try and minimize these
because of their propensity to introduce noise into the data set. However,
because individual plants differed in significant respects it was necessary to
adjust for levels of subcontracting that were encountered. This is necessary to
ensure that the inputs and outputs are reflective of comparable levels and
breadth of activities across the sample.
In addition to the adjustments for subcontracting activity, the research team
also had to address a major criticism of the IMVP, namely that there is no
consideration of the manufacturability of the products in the productivity
calculations[9]. In order to standardize companies on this variable, product
complexity was measured via a part count and companies were standardized
around an averagely complex product. Clearly assumptions were made here. One
assumption was that product complexity may be measured by the number of
discrete parts or sub-assemblies entering the assembly line or by the total
number of parts contained in the finished products. The second assumption was
that product complexity was a reasonable (reverse) proxy for relative ease of
manufacture.
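Under these two assumptions, the standardization might look something like the following sketch; the linear scaling by part count is an illustrative assumption, not the study's published adjustment.

```python
def complexity_standardized_units(annual_units, part_count, average_part_count):
    """Express output in 'averagely complex product' equivalents.

    part_count serves as a (reverse) proxy for ease of manufacture: a plant
    making simpler products (fewer parts than the sample average) has its
    output scaled down, and one making more complex products scaled up.
    """
    return annual_units * (part_count / average_part_count)

# Hypothetical plants, each making 10,000 units a year against a
# sample-average part count of 60:
simple = complexity_standardized_units(10000, 30, 60)    # scaled down
complex_ = complexity_standardized_units(10000, 90, 60)  # scaled up
```

Note the direction of the adjustment: a plant with deliberately simple, design-for-manufacture products sees its standardized output fall, which is exactly the source of the complaint of unfairness described below.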
In making these assumptions the research team sought the advice of
manufacturing managers in each product area. A pilot study of a brake caliper
manufacturer confirmed the legitimacy of this practice, albeit as an
approximation. Ironically, this company fell foul of the subsequent adjustment
to productivity performance made in the light of the relatively simple products
they manufactured. In essence these adjustments were made to remove the
impact of differentials in manufacturability. This issue was the only one which
provoked a complaint of unfairness from any participating company, on the
grounds that the company was being penalized for its progress towards design
for manufacture. The research team's response was to stress that the project was
investigating how good people were at making things, not at designing them, a
response which was grudgingly accepted.
Other factors which may influence productivity, such as capacity utilization
and the level of automation, were also measured and compared across the
sample but adjustments were not made to the performance data to attempt to
standardize plants for these. Rather, these were held as potential explanations
of variations in that performance. This meant the research team was able to
minimize the adjustments made to the raw data but still able to infer the relative
impact of these variables.
Results
Although the purpose of this article is not to present the findings of the
benchmarking project, which are available elsewhere[12,13], a selection of
results is presented here to provide a flavour of the outputs from the
benchmarking process. Figure 1 shows a scattergram of the 18 plants, with
quality levels plotted against standardized comparative productivity scores. As
may be seen from the figure, a group of five high-productivity, high-quality
manufacturers is identified. These plants were all Japanese (but not all the
Japanese plants had world-class performance results).
Table II shows a comparison of the average scores of these plants compared
to the others on a series of performance and management practice measures.
Figure 1. Productivity versus quality: standardized units per hour plotted against quality (log scale). The plants fall into four groups: low Q low P, low Q high P, high Q low P, and world class.
Assumptions
In all product areas, the research team had tacitly assumed that the
manufacturing facilities were largely stand-alone entities. In the case of the
Japanese wire harness industry, it soon became clear that to consider a plant as
the level of analysis was not feasible. The operations typically found in a single
harness plant in the UK were spread across a complicated series of subsidiaries,
Public reaction
The research results attracted widespread attention from the media, and
featured in an article on the front page of the Financial Times on the day of their
release[14], as well as appearing in several other articles in the following days
and weeks. Interestingly, although the information focused on world class
versus other plants, the dominant message conveyed by the media concerned
the 2:1 productivity gap between Japan and the UK. The newsworthy element
of the results was clearly taken to be the UK's poor performance, and it was this
which was emphasized in most reports. This was a source of some
embarrassment to the research team, as the media presentation of the position
of the UK companies was not consistent with their own; indeed, a letter was
sent to all companies, distancing the research team from the media statements.
Company reactions
Following the release of the results, the research team made several
presentations of companies' individual results to the companies themselves in
private seminars. An interesting aspect of this was the (mis)representation of
the results by some senior managers. At the end of one feedback presentation
that the authors gave to the top 60 managers of a plant with unimpressive
performance, the plant director stood up, thanked the research team and
proceeded to lambaste his management team for the inadequate performance
figures. This was extremely embarrassing for the authors since the message of
the presentation had been that there were no simple solutions, and that the
problem with many low performing companies was that they had become
locked into self-reinforcing fire-fighting cycles. The hope of the research team
was that the benchmarking results could offer a starting point in a round of
constructive discussion and learning, but in this case the reverse seemed to
occur.
References
1. Camp, R., Benchmarking: The Search for Industry Best Practices that Lead to Superior
Performance, Quality Press, Milwaukee, WI, 1989.
2. Watson, G., Strategic Benchmarking: How to Rate Your Company's Performance against
the World's Best, Wiley and Sons, New York, NY, 1993.
3. Balm, G., Benchmarking: A Practitioner's Guide for Becoming and Staying Best of the Best,
Quality and Productivity Management Association, Schaumburg, IL, 1992.
4. Parnaby, J., Practical just-in-time inside and outside the factory, paper presented to the
Fifth Financial Times Manufacturing Forum, London, 6-7 May 1987.
5. Womack, J.P., Jones, D.T. and Roos, D., The Machine that Changed the World: The Triumph
of Lean Production, Rawson Macmillan, New York, NY, 1990.
6. Graham, I., Japanization as Mythology, Industrial Relations Journal, Vol. 19 No. 1, 1988,
pp. 69-75.
7. Schonberger, R., Japanese Manufacturing Techniques, Free Press, New York, NY, 1982.
8. Monden, Y., Toyota Production System, Industrial Engineering and Management Press,
Atlanta, GA, 1983.
9. Williams, K. and Haslam, C., Against lean production, Economy and Society, Vol. 21
No. 3, 1992, pp. 321-54.
10. Delbridge, R., Oliver, N., Turnbull, P. and Wilkinson, B., Supplier Relations in the UK
Automotive Components Industry in the 1990s: Developments in the Welsh Sector,
Japanese Management Research Unit Special Report, Cardiff, 1990.
11. Turnbull, P., Delbridge, R., Oliver, N. and Wilkinson, B., Winners and losers: the tiering
of component suppliers in the UK automotive industry, Journal of General Management,
Vol. 19 No. 1, 1993, pp. 48-63.
12. Andersen Consulting, The Lean Enterprise Benchmarking Project Report, Andersen
Consulting, London, 1993.
13. Oliver, N., Delbridge, R., Jones, D.T. and Lowe, J., World class manufacturing: further
evidence in the lean production debate, British Journal of Management, Vol. 5, 1994,
pp. 53-63.
14. Financial Times, 14 November 1992.
15. IBM Consulting Group, Made in Britain: The True State of Britains Manufacturing
Industry, IBM UK, London, 1993.